
How is Intel Beating AMD Zen 3 Ryzen in Gaming?

Diversification will only get deeper as time goes on, I'm afraid, my friend.

  • There will be people who want Zen 3 based GPU tests, because for their scenario it makes the most sense
  • There will be people who want Intel based GPU tests, since globally it is still very common
  • There will be people who want tests with slower and more common memory speeds, and those who want high speeds and tight timings
  • There will be people who want CPU tests with an RTX 3080 because it is the most advanced NVIDIA GPU the average buyer can afford, but others who want an RX 6800 XT one because it represents a fair alternative
  • There will be those who want GPU tests on both Intel and AMD systems, since RDNA 2's SAM feature is AMD exclusive, but you can't exclude the many users who don't have the option to use SAM because they have already purchased older hardware
  • RTX on? RTX off? Smart Cache off? Smart Cache on? What about 8 vs. 10 vs. 12 cores? And memory types and configuration? Why not 4 DIMMs? Why not 2 DIMMs?
  • What about overclocking? And what about manual memory tuning? Will you be tuning each system to its own capabilities? And what about cooler choice? Air, water cooling, or maybe just stock?
With so many possible branches to base a testing platform on, you just can't satisfy everyone. Makes sense.
I'd advise the little I can here and say: go with your gut feeling of what serves the common viewer the most benefit. At times, it's OK to publish a piece of information knowing not everyone can benefit from it, as long as a sizeable share of readers can.
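The combinatorial explosion described in the list above can be made concrete with a quick sketch. The dimension names and option counts below are purely illustrative, not anyone's actual test plan:

```python
from itertools import product

# Hypothetical test-matrix dimensions, loosely based on the variables
# raised in the discussion above.
dimensions = {
    "cpu":    ["Zen 3", "Comet Lake"],
    "gpu":    ["RTX 3080", "RX 6800 XT"],
    "memory": ["DDR4-3200 CL14", "DDR4-3600 CL16", "DDR4-3800 CL16"],
    "dimms":  [2, 4],
    "rtx":    ["on", "off"],
    "sam":    ["on", "off"],
}

# Every combination of every option = one full benchmark pass.
configs = list(product(*dimensions.values()))
print(len(configs))  # 2*2*3*2*2*2 = 96 configurations, per game
```

Six modest dimensions already mean 96 full benchmark runs per game, which is why no reviewer can cover every reader's exact setup.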

Extrapolate, my friends... extrapolate.
Base testing is the most important. Just perform the raw test for all components and keep the complexity out of it. If you want to do specific ray tracing testing, so be it. Just keep it plain and simple.
Zen 3 with RDNA 2 vs. Zen 3 with an Ampere equivalent is good too. It might show how much better Zen 3 & RDNA 2 perform over an NVIDIA GPU, etc.

Keep in mind single-rank vs. dual-rank memory sticks. GN just tested 2 vs. 4 memory sticks and the results were skewed because of rank differences during testing.
Single rank vs. dual rank seems to make a bigger difference than first thought.
 
Memory speeds and PCIe 4.0 are the cause here.
Regarding PCIe 4.0, I heard somewhere on Reddit that Horizon Zero Dawn is actually very sensitive to that, with PCIe 4.0 enabling up to around a 20% boost in FPS compared to PCIe 3.0... the guy apparently did benchmarks for another site, but he only mentioned it in passing and never posted the actual numbers or graphs.
I just meant Lisa Su is so ninjutsu she doesn't need security patches. Get skilled, Intel executives.
:laugh:
Really. It's a measly single-digit % difference in the first place, likely completely imperceptible in the real world. Hence it should be discarded as meaningless, yet some folks blew it out of proportion.
And if Wizz removes it, people are gonna complain about it missing. Besides, it's just more data. Having more data has never hurt anyone.
Don't know why, but Zen 3 seems to run best with 4 sticks of fast RAM... I played with 2- and 4-stick configurations after watching the Gamers Nexus video and saw a 0-9% difference at 1080p low settings depending on the title. I compared 2x8GB and 4x8GB Patriot Viper Steel DDR4-3733 CL17 (both combos with manually fine-tuned timings) on a friend's 5600X and my Gigabyte B550 Aorus Pro motherboard. Worth considering the 4x8 combo if you're buying 32 GB of RAM.
There were some discussions on Reddit and Twitter about memory ranks, but admittedly that seemed a little beyond what I could understand at the time, and frankly I had a work-related headache, so I didn't even want to read any more.
 
Extremely interesting, great job as always @W1zzard .
 
Well, let's see... Intel was ahead with unsafe speculative execution and clock speed for the longest time. AMD bridged that gap with a smaller node, a larger L3 cache, and now improved IPC. And they're probably doing some optimization with RAM striping when the optimal number of channels is being used.

I think the really interesting comparison would be a scenario that is constantly loading new data, i.e. random data instead of reusable (cache-heavy) data.
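A rough way to see that distinction on any machine is to traverse the same array in order and in shuffled order; the gap between the two timings comes from cache behavior. A minimal Python sketch, illustrative only, not a rigorous benchmark (the interpreter's overhead blunts the effect, but the access-pattern idea carries over to any language):

```python
import random
import time

N = 2_000_000
data = list(range(N))

seq_idx = list(range(N))        # cache-friendly: sequential walk
rand_idx = seq_idx[:]
random.shuffle(rand_idx)        # cache-hostile: random walk

def walk(indices):
    """Sum data[] in the given index order, returning (seconds, sum)."""
    t0 = time.perf_counter()
    total = 0
    for i in indices:
        total += data[i]
    return time.perf_counter() - t0, total

t_seq, s_seq = walk(seq_idx)
t_rand, s_rand = walk(rand_idx)
assert s_seq == s_rand  # identical work, only the access pattern differs
print(f"sequential: {t_seq:.3f}s  random: {t_rand:.3f}s")
```

The same total is computed both times, so any timing difference is purely down to how the memory hierarchy handles the access pattern.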
 
Great ideas, keep them coming, I'll test them all over the weekend

Edit: note to self, from Jonny via email, force Zen 3 + Ampere to Gen 3 to more clearly see PCIe 3 vs 4

We know Zen 2 didn't fare so well when running memory NOT at 1:1:1, because the latency penalty was too big, but what about Zen 3? Does it still take such a latency hit that higher RAM speeds are a waste?

Pick a game more sensitive to RAM speeds and test with that: if it turns out there's no difference, then there's no point in testing further, but if there is... it would be VERY interesting to know...
 
Just an idea: could it be due to Ampere running in PCIe 4.0 mode, which would make the Zen 3 cores go idle a lot less? What if you force PCIe 3.0 on the Ampere cards?
 
And if Wizz removes it, people are gonna complain about it missing. Besides, it's just more data. Having more data has never hurt anyone.

Never said anything about removing this LMAO.

I did say it was blown out of proportion, primarily in the relevant discussion thread.
 
Nice find there @W1zzard. A big relief for you, having found the cause of the original review's results, and we hope you find more hints & tips on Zen 3 performance improvements. So, are you going to change your GPU testing setup to a Zen 3 one?
 
RAM doesn't even matter much as long as you hit the 3600 CL16-17 range. If you game at 1440p, RAM really doesn't matter much after that. The single rank vs. dual rank argument only applies to 1080p.
 

There were some discussions on Reddit and Twitter about memory ranks, but admittedly that seemed a little beyond what I could understand at the time, and frankly I had a work-related headache, so I didn't even want to read any more.


To keep it simple, single rank memory sticks have 1 set of memory chips and dual rank memory sticks have 2 sets of memory chips.

Single rank sticks will typically have a set of memory chips one side and dual rank both sides.

8GB sticks are typically single rank and 16GB sticks are typically dual rank. Although, with higher-density memory chips, we are seeing more single-rank 16GB memory sticks.

A total of 4 memory ranks seems to be the most optimal setup for Ryzen. An optimal setup would be 2x8GB (dual rank), 4x8GB (single rank), or 2x16GB (dual rank) memory sticks. However, the cost of the extra RAM may not be worth it.

The higher rank count and memory frequency only benefit software and games that can take advantage of them. There are a lot of games where memory doesn't matter much.
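The rank arithmetic in the explanation above can be sketched as a tiny helper. The typical ranks-per-stick values are the ones the post mentions, not a guarantee for any specific kit:

```python
def total_ranks(sticks: int, ranks_per_stick: int) -> int:
    """Total memory ranks presented to the memory controller."""
    return sticks * ranks_per_stick

# The three configurations the post calls optimal all land on 4 ranks:
print(total_ranks(2, 2))  # 2x8GB  dual-rank  -> 4
print(total_ranks(4, 1))  # 4x8GB  single-rank -> 4
print(total_ranks(2, 2))  # 2x16GB dual-rank  -> 4

# A common budget config falls short of that target:
print(total_ranks(2, 1))  # 2x8GB single-rank -> 2
```

This is why "2 vs. 4 sticks" is the wrong framing on its own: 2 dual-rank sticks and 4 single-rank sticks end up at the same total rank count.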
 
Thank you for the clarification!

"After a lot of testing even I can confirm what other reviewers have reported during the Zen 3 launch: AMD has beaten Intel in gaming performance, but only in a best-case scenario, using fast memory and with the latest graphics architecture. "
However, the difference is not only with DDR4-3800 memory. If you take games like CS:GO and other CPU-dependent games, you can see a huge win for AMD. That's what you can see in Linus's and other reviews.

And the assertion regarding the latest graphics architecture also held for earlier Ryzen vs. Intel graphs, where Intel was ahead by 4-5%: when you go one tier down in GPU, there is also zero difference between the two (e.g. Zen 2 vs. Comet Lake) at 1080p.

It's also interesting to see that in BFV, Zen 3 has a quite impressive advantage even at 1440p with this setup (nearly 9%).
 
How about a 2 vs. 4 sticks memory comparison?

It's less about 2 vs. 4 sticks and more about ranks. 2 sticks of dual rank or 4 sticks of single rank seem to show a very measurable performance difference in lots of scenarios for Zen 3, including games.
 
Hope you get rid of those games that nobody plays on your bench list
 
Thank you, this morning I had finally decided to go for a 10900K and a Z490 motherboard for my new build, returning the brand-new X570 motherboard I bought a couple of weeks ago for Zen 3.

There haven't been any drops of new 5900X stock in a week now, and your review of that CPU didn't show much of a difference compared to the Intel CPU in gaming benchmarks, but this new article changes that, so I have decided to wait a bit longer.

Now all we need is for more stock to become available, come on AMD, you're our only hope!
 
How is Intel Beating AMD Zen 3 Ryzen in Gaming?
Answer: Bottlenecks

Finally it has been acknowledged here that the Ryzen 5900X has a better overall FPS result in gaming than the i9-10900K.
At last some of the bottlenecks have been addressed.
 
For anyone looking for cheap dual-rank 16 GB (2x8GB) memory kits, Crucial Ballistix uses dual-rank memory sticks. Also, Patriot Viper Steel uses single-rank chips.
 
Answer: Bottlenecks

Finally it has been acknowledged here that the Ryzen 5900X has a better overall FPS result in gaming than the i9-10900K.
At last some of the bottlenecks have been addressed.

It just shows you how tiny that difference is, though. A CPU difference of 20% between the top 2 CPUs when extremely CPU bound is not humanly perceptible.

When Ryzen was behind Intel by 20% at 720p, AMD fanboys were all like "It games the same! Who plays at 720p?" Now that AMD is 20% ahead in the most academic scenarios, AMD fanboys are going "It CRUSHES INTEL IN GAMING" -- it does, but I think the moral of this story is you shouldn't spend $500+ on a CPU for gaming lol.
 
Rocket Lake is a few months off, so I guess we'll see a repeat of this again? At least it's good to know that AMD in 2020 is beating a 5-year-old architecture!

Just remember that there are 4000 MHz CL15 kits out there. I would compare 3200 MHz CL14 to 4000 MHz CL15 instead of 3600/3800, which is the middle ground at this point in time.
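One way to compare kits like these on a common yardstick is first-word latency in nanoseconds, using the standard back-of-the-envelope formula `CL * 2000 / (MT/s)` (DDR transfers twice per clock, hence the 2000). Applied to the speeds mentioned in this thread:

```python
def first_word_latency_ns(cas_latency: int, transfer_rate_mts: int) -> float:
    # DDR is double data rate: clock period in ns = 2000 / (MT/s),
    # and first-word latency = CAS latency * clock period.
    return cas_latency * 2000 / transfer_rate_mts

for cl, mts in [(14, 3200), (16, 3600), (15, 4000)]:
    print(f"DDR4-{mts} CL{cl}: {first_word_latency_ns(cl, mts):.2f} ns")
# DDR4-3200 CL14: 8.75 ns
# DDR4-3600 CL16: 8.89 ns
# DDR4-4000 CL15: 7.50 ns
```

By this measure the 4000 CL15 kit is genuinely lower latency than 3200 CL14, not just higher bandwidth, which is why it makes a more interesting comparison point than the 3600/3800 middle ground.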

Also, one of the dumbest tests here is comparing FPS in 4X games. You care about turn time rather than FPS in 4X. Nobody cares if you get over 9000 FPS in a 4X game if each turn varies by 30 seconds between Intel and AMD. At least I'm no longer surprised why every "game reviewer" uses the same games and benchmarks every time.
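Measuring turn time rather than FPS is easy to script. The sketch below times a stand-in `end_turn()` function and reports the median; in a real test you would replace it with an automation hook into the actual 4X title, which is an assumption on my part, not something any reviewer here described doing:

```python
import statistics
import time

def end_turn():
    # Hypothetical stand-in for a real game's turn processing;
    # swap in a scripted end-turn call for the actual benchmark.
    return sum(i * i for i in range(200_000))

def benchmark_turns(runs: int = 10) -> float:
    """Return the median wall-clock turn time in seconds over `runs` turns."""
    samples = []
    for _ in range(runs):
        t0 = time.perf_counter()
        end_turn()
        samples.append(time.perf_counter() - t0)
    return statistics.median(samples)

print(f"median turn time: {benchmark_turns():.4f}s")
```

Median is used rather than mean so one outlier turn (e.g. background autosave) doesn't skew the comparison between CPUs.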
 
So W1zzard, why not just test multi-GPU on them if they're GPU bottlenecked?
 
From the rumors, I think Rocket Lake matches current AMD -- but you probably won't see a difference between any of the top procs until the 4080 Ti.
 
It just shows you how tiny that difference is, though. A CPU difference of 20% between the top 2 CPUs when extremely CPU bound is not humanly perceptible.

When Ryzen was behind Intel by 20% at 720p, AMD fanboys were all like "It games the same! Who plays at 720p?" Now that AMD is 20% ahead in the most academic scenarios, AMD fanboys are going "It CRUSHES INTEL IN GAMING" -- it does, but I think the moral of this story is you shouldn't spend $500 on a CPU for gaming lol.

Fanboys will be fanboys on both sides.

Buy whatever you like and worry less about what random strangers on the internet think of your CPU of choice.
 
Hope you get rid of those games that nobody plays on your bench list
Which games?

Also on another note.

"When we compare AMD against Intel, AMD easily wins the CPU-limited lowest resolution tests from +2% to +52%, averaging around +21% higher FPS. In the 1080p Maximum however, AMD and Intel trade blows, swaying from -4% to +6% for AMD (except in our Civ6 test, which is a +43% win for AMD). " - Anandtech.

Tested with stock settings using the rated out-of-box spec, which meant the AMD system had 3200 RAM and the Intel system 2933 RAM. With a 2080 Ti.

With that said, this was an excellent re-review. Great work as always and much appreciated.
 
Interesting review, for sure.

I suppose you already know: Robert Hallock said on social media that Infinity Fabric running at 2 GHz will be easier to achieve with future BIOS versions, so maybe all your retesting effort could become inaccurate in the near future.

The forums are really active with a lot of people testing their new Zen 3 chips, but I also see some frustration due to instability at certain memory speeds. The new beta BIOS solves this and runs faster, but has problems in other areas like USB ports, so some people have decided to stop testing their OC and memory speeds until the next stable BIOS version arrives.

I'm still waiting for your usual and really interesting memory speed comparison on Zen 3, to know exactly where the best performance is and where the best perf/price is, but those tests will probably have to wait until the next BIOS is released and the platform matures.

Thanks for your time, and for your honesty in considering that you could have gotten something wrong.
Why would this be inaccurate just because AMD is improving overclocking of the IF bus and DRAM? What does that have to do with anything? Not everyone is going to reach 2000/4000 MHz.

The 1000 and 3000 series were a lot worse from what I can tell, so if people are frustrated, they should be glad they didn't try either of those. Boost speeds are working properly now, which was not the case with the 3000 series. Memory seems to be working much better as well, which it didn't with the 1000 or 3000 series.
People just like to complain a lot, but sadly the reality these days is that if you're an early adopter, you're also a beta tester, or sometimes worse.
 
So why not just test multi-GPU on them if they're GPU bottlenecked?
Because hardly any game uses that feature. AFAIK you have to code the game so that it explicitly supports multi-GPU, and a multi-GPU setup is rather niche. Not to mention, AMD already dropped CrossFire with RDNA, and NVIDIA allows SLI only on their most powerful consumer card, the RTX 3090. Why bother with a feature that at best 1 out of 100 people will use?
 