Sunday, May 3rd 2020

Core i3-10100 vs. Ryzen 3 3100 Featherweight 3DMark Showdown Surfaces

AMD's timely announcement of the Ryzen 3 "Matisse" processor series could stir things up in the entry-level segment, where Intel has kitted out its 10th-generation Core i3 processors as 4-core/8-thread parts. Last week, a head-to-head Cinebench comparison between the i3-10300 and the 3300X surfaced, and today we have 3DMark Fire Strike and Time Spy comparisons between their smaller siblings, the i3-10100 and the Ryzen 3 3100, courtesy of Thai PC enthusiast TUM_APISAK. The two were benchmarked on Time Spy and Fire Strike on otherwise constant hardware: an RTX 2060 graphics card, 16 GB of memory, and a 1 TB Samsung 970 EVO SSD.

With Fire Strike, the 3100-powered machine leads in the overall 3DMark score (by 0.31%) and in the CPU-dependent Physics score (by 13.7%). The i3-10100 is ahead by 1.4% in the Graphics score, thanks to a 1.6% lead in graphics test 1 and a 1.4% lead in graphics test 2. Moving over to the more advanced Time Spy test, which uses the DirectX 12 API that better leverages multi-core CPUs, the Ryzen 3 3100 posts a 0.63% higher overall score and a 1.5% higher CPU score, while the i3-10100-powered machine posts a graphics score that is less than 1% higher. These numbers suggest that the i3-10100 and the 3100 are within striking distance of each other, and that either is a good pick for gamers, until you look at pricing. Intel's official pricing for the i3-10100 is $122 (per chip in 1,000-unit trays), whereas AMD lists the SEP of the Ryzen 3 3100 at $99, making the Intel chip roughly 23% pricier. That gives AMD a vast price-performance advantage that's hard to ignore, more so when you take into account value additions such as an unlocked multiplier and PCI-Express 4.0 support.
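The price premium is simple arithmetic on the two listed prices; a quick Python sanity check (using the $122 tray price and $99 SEP quoted above):

```python
# Prices quoted in the article: Intel i3-10100 at $122 (1,000-unit tray),
# AMD Ryzen 3 3100 at a $99 SEP.
intel_price = 122
amd_price = 99

# How much pricier the Intel chip is, relative to the AMD chip.
premium_pct = (intel_price - amd_price) / amd_price * 100
print(f"Intel premium over AMD: {premium_pct:.1f}%")  # → 23.2%
```

With benchmark scores this close (all deltas under 1.5% outside the Physics test), that ~23% price gap dominates any performance difference.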
Source: TUM_APISAK (Twitter)

14 Comments on Core i3-10100 vs. Ryzen 3 3100 Featherweight 3DMark Showdown Surfaces

#1
Caring1
"between the i3-10300 and 3300X " ?

Interestingly, the 3100 boosted to 4.4 GHz is shown in both comparisons, so only the numbers from the 3.6 GHz run should be used for a fair comparison.
Posted on Reply
#2
moproblems99
I am surprised that Intel has the graphics lead in this. That is what stands out the most to me.
Caring1"between the i3-10300 and 3300X " ?

Interestingly, the 3100 boosted to 4.4 GHz is shown in both comparisons, so only the numbers from the 3.6 GHz run should be used for a fair comparison.
I'm confused. Why is it fair to limit one when we are looking at total performance? If we were looking at per clock performance then I could see the limit.
Posted on Reply
#3
zlobby
moproblems99I am surprised that Intel has the graphics lead in this. That is what stands out the most to me.
Wait, what? If by 'lead' you mean a fraction of a FPS in half of the tests, then you're spot on.
Posted on Reply
#4
moproblems99
zlobbyWait, what? If by 'lead' you mean a fraction of a FPS in half of the tests, then you're spot on.
Is 1 > 2? I don't care how small, large, or indifferent the lead is, the number next to Intel's is larger. I am surprised because this is the one area AMD has always had a lead in. Usually sizeable too.
Posted on Reply
#5
RandallFlagg
moproblems99Is 1 > 2? I don't care how small, large, or indifferent the lead is, the number next to Intel's is larger. I am surprised because this is the one area AMD has always had a lead in. Usually sizeable too.
That's for iGPU. These tests show they are using an RTX 2060.

Naturally, all these benchmarks need to be viewed with a healthy dose of skepticism. Not only could they have been run with immature drivers, it's possible they were run on engineering samples.

To illustrate, he has 4 Ryzen benchmarks that vary by 2.3% overall. This is more than the difference between the Ryzen and the Intel chip, and it's not explained here what if any tweaks were done for the different runs.

EDIT: Nevermind. Dude is overclocking the Ryzen. The 3100 is only rated for a 3.9 GHz turbo and he has it running at 4.4 GHz. What a bogus comparison.

I would really like to see these 4C/8T i3s thrown up against old-school i7s - like the 4790, 6700, 7700, and 8700 (non-K models).
Posted on Reply
#6
Melvis
It's all within margin of error, so to me it's basically a tie; it needs to be at least a 5% difference to be called "faster" or "better" or a clear winner in my eyes.

Fact is, you're getting the same performance for a better price and platform on the AMD. Cool!
Posted on Reply
#7
Fouquin
moproblems99I am surprised that Intel has the graphics lead in this. That is what stands out the most to me.
Within margin of error. Rerun the test 10 times and you get more variance in results than that from a single hardware config. 3DMark's run-to-run variance is over 2% on a single platform.
Posted on Reply
#8
Caring1
moproblems99I'm confused. Why is it fair to limit one when we are looking at total performance? If we were looking at per clock performance then I could see the limit.
Off-the-shelf performance should be compared, not one chip overclocked and the other not.
Posted on Reply
#9
watzupken
RandallFlaggEDIT: Nevermind. Dude is overclocking the Ryzen. The 3100 is only rated for a 3.9 GHz turbo and he has it running at 4.4 GHz. What a bogus comparison.

I would really like to see these 4C/8T i3s thrown up against old-school i7s - like the 4790, 6700, 7700, and 8700 (non-K models).
I don't deny that they should be comparing at stock, but consider that Intel deliberately locking these CPUs out of overclocking actually puts them at a disadvantage here. The cheaper Ryzen 3, while slower at stock, can get a good boost in performance through an overclock. Also, I am doubtful that the Intel processor can maintain its boost clock for long with the included stock cooler. If the review uses the respective stock coolers, the Intel chip will not perform as well, since it will be drawing well above its 65 W TDP at its >4 GHz clock speed.

Also, I see no point in comparing this with the older 6xxx, 7xxx and 8xxx series. They are basically the same architecture; Comet Lake is just a version on steroids (higher clocks thanks to higher power consumption).
Posted on Reply
#10
moproblems99
RandallFlaggThat's for iGPU. These tests show they are using an RTX 2060.
LOL, I somehow managed to not process this entire article. Ignore me.
Posted on Reply
#11
RandallFlagg
watzupkenI don't deny that they should be comparing at stock, but consider that Intel deliberately locking these CPUs out of overclocking actually puts them at a disadvantage here. The cheaper Ryzen 3, while slower at stock, can get a good boost in performance through an overclock. Also, I am doubtful that the Intel processor can maintain its boost clock for long with the included stock cooler. If the review uses the respective stock coolers, the Intel chip will not perform as well, since it will be drawing well above its 65 W TDP at its >4 GHz clock speed.
Most chips in general are never overclocked, and in particular chips in this range are not going to be overclocked. Go to any store and you'll find gobs of i5-9400 based pre-builts. This type of comparison is very misleading. It is, frankly, garbage.

And while it's true that overclocking the Intel CPU itself is disabled, it is possible to significantly overclock the memory. So again, these are all apples-to-oranges comparisons.
watzupkenAlso, I see no point in comparing this with the older 6xxx, 7xxx and 8xxx series. They are basically the same architecture; Comet Lake is just a version on steroids (higher clocks thanks to higher power consumption).
I always prefer actual results to assumptions; assumptions are usually only as accurate as the objectivity of the person making them - and humans are not very objective.

There is already a comparison of the i5-10400 to an i7-9700F (8-core) out there, which was fascinating. Specifically, the 6c/12t i5-10400 spanks the 8c/8t 9700 in WinRAR and some of the 3DMark physics tests.
Posted on Reply
#13
goodeedidid
Synthetic tests aren't good indicators and basically don't mean much. Especially for unreleased parts, those results mean nothing at all.
Posted on Reply
#14
zlobby
FouquinWithin margin of error. Rerun the test 10 times and you get more variance in results than that from a single hardware config. 3DMark's run-to-run variance is over 2% on a single platform.
Yes, look how people are drooling over some 'benchmarks', while real world numbers are all that matter.
Posted on Reply