How is Intel Beating AMD Zen 3 Ryzen in Gaming?

Hardware OC is also a good topic for discussion. However, it's important to note that operating system optimization alone can cause deviations of several hundred points when the OS isn't set up properly. In Cinebench R20, for example, my system scores 4600-4700 points multi-threaded on a basic install; after optimization that number is 5000, with no hardware OC involved at all. What do you think about this, sir? (I see an awful lot of controversy about hardware OC, while many people can't even use or optimize their operating system properly.)
Hmm, interesting topic. If you mean AMD chipset drivers being installed, or Intel-specific OS updates being installed, I think those should definitely happen before testing, as the manufacturer intended for them to happen. It's the same story as with GPU drivers. If you're lucky, you might get them through Windows Update anyway.

For example, I got my ASUS Armoury Crate software through Windows update, and then it installed my AMD chipset driver which gave me an AMD power plan. It all happened automatically, I didn't touch a thing.

What do you think?
 
I don't even use this factory tuning nonsense.
I've even turned off a lot of default Windows services that are unnecessary for me, and a simple priority increase, for example to High, can mean 100 points. This is independent of whether the platform is Intel or AMD.
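For anyone curious what that priority bump looks like in practice, here is a minimal sketch using Python and the third-party psutil package on Windows; the process name is just a placeholder.

```python
# Minimal sketch: bump a benchmark process to High priority on Windows.
# Requires the third-party psutil package; "Cinebench.exe" is a placeholder name.
import psutil

def set_high_priority(process_name: str) -> None:
    for proc in psutil.process_iter(["name"]):
        if proc.info["name"] == process_name:
            # HIGH_PRIORITY_CLASS is a Windows-only psutil constant,
            # one notch below Realtime in the scheduler.
            proc.nice(psutil.HIGH_PRIORITY_CLASS)
            print(f"Set {process_name} (PID {proc.pid}) to High priority")

set_high_priority("Cinebench.exe")
```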
 
Interesting choice.

As for me, I think these are just built-in features of said hardware that you're not utilising fully without the OS specific optimisations. I remember the AMD FX era, when you had to install an FX specific Windows update just to make the scheduler aware of the unique core layout of your FX CPU. It was kind of a must, to avoid two workloads being sent to two adjacent INT32 cores that shared a common FP unit.
 
Maybe I wasn't clear. I just wanted to say that I constantly see pointless OC debates about platforms from people who can't even optimize or use their operating system properly. That's why I wrote this. What is your opinion about it, sir?
 
Let me rephrase my original question: do you think every single benchmark result out there comes from overclocking (and is therefore invalid) just because nobody runs RAM at the JEDEC-specified standard 2133 MHz? :wtf: :rolleyes:
Pull all the faces you want, but your question is irrelevant to what I originally stated.
I never suggested running RAM at any given speed.
Seems you misunderstood what I said, just as you misunderstand RAM speeds and testing.
 
Still wrong.
XMP is overclocking.
Example 1: If my CPU and MB officially support memory speeds up to 3200 MHz (OC), and I put 3200 MHz RAM sticks in and enable XMP to run them at 3200 MHz, it IS overclocking.
Example 2: Is wrong, because if your MB supports memory speeds up to 3200 MHz and you put in 3600 MHz RAM sticks, they will run at 3200 MHz unless overclocked.
Example 3: Is correct.
XMP/DOCP isn't overclocking unless it goes past the rated speed of the platform. With Ryzen that's currently 3200 MHz, with Intel 2933. If I buy 3200 MHz sticks and enable XMP on an AMD machine, that is not overclocking. Nothing in that scenario is out of spec: the sticks are 3200 and the platform is 3200. If I buy 3600 sticks and enable XMP on AMD, I'm overclocking the IMC, but the sticks are stock, running at their rated (XMP) speed.

If I buy 3600 sticks, put them in an AMD system, enable XMP and change the memory speed to 3733, I've then overclocked both the IMC and the sticks, as the sticks are past the rated speed on the box.
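The rule being argued here can be written down directly. Below is a small sketch of it in Python, using the platform limits quoted in this thread (3200 MT/s for Ryzen, 2933 for Intel at the time) as assumptions rather than official figures.

```python
# Sketch of the rule above: XMP only overclocks the IMC when the running speed
# exceeds the platform's rated maximum, and only overclocks the sticks when it
# exceeds their rated (XMP) speed. Limits are the ones quoted in this thread.
PLATFORM_MAX = {"ryzen_zen3": 3200, "intel_comet_lake": 2933}  # MT/s

def classify(platform: str, stick_rated: int, running: int) -> str:
    imc_oc = running > PLATFORM_MAX[platform]
    stick_oc = running > stick_rated
    if imc_oc and stick_oc:
        return "IMC and sticks overclocked"
    if imc_oc:
        return "IMC overclocked, sticks at their rated speed"
    if stick_oc:
        return "sticks overclocked, IMC within platform spec"
    return "no overclock"

print(classify("ryzen_zen3", 3200, 3200))  # no overclock
print(classify("ryzen_zen3", 3600, 3600))  # IMC overclocked, sticks at their rated speed
print(classify("ryzen_zen3", 3600, 3733))  # IMC and sticks overclocked
```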
 
Pull all the faces you want, but your question is irrelevant to what I originally stated.
I never suggested running RAM at any given speed.
Seems you misunderstood what I said, just as you misunderstand RAM speeds and testing.
Let's agree to disagree, then. Still, running memory at XMP up to the platform's maximum rated (non-OC) speed is not overclocking. That's why I think every new CPU should be benchmarked as such.

Maybe I wasn't clear. I just wanted to say that I constantly see pointless OC debates about platforms from people who can't even optimize or use their operating system properly. That's why I wrote this. What is your opinion about it, sir?
Fair enough. Though people who can't optimise an OS for their specific hardware are generally not going to care about (or even notice) the minor improvements such optimisations bring.
 
And that's why they start hardware overclocking... :)
which jeopardizes the stable operation of the system in every case.
 
These will be added with the 2021 CPU test platform, using the same format I added for my GPU reviews recently

Amazing to hear! Keep up the great work, man.
 
Let's agree to disagree, then. Still, running memory at XMP up to the platform's maximum rated (non-OC) speed is not overclocking.
Platform rated values are specific to the motherboard.
RAM speeds are relative to the programmed SPD, which is often factory overclocked.
3200 MHz RAM can be, and often is, lower-rated RAM that has been factory overclocked, much like higher speeds such as 4000 MHz.
 
Regarding RAM speed vs overclocked RAM speed, IMO anything over JEDEC is overclocked. JEDEC sets the standards, not the CPU company or the motherboard company.
However, all these CPUs run high RAM speeds now, so somewhere in the mid-3000s is what DDR4 should be tested at, IMO.
 
Are you running it within factory-advertised specs?
Then it's not overclocked.

DDR4-4000 RAM? Stock.
Can your mobo handle it without OCing? No. Your IMC? No.
Does that mean the RAM is still at stock? Sure does.
 
Regarding RAM speed vs overclocked RAM speed, IMO anything over JEDEC is overclocked. JEDEC sets the standards, not the CPU company or the motherboard company.
However, all these CPUs run high RAM speeds now, so somewhere in the mid-3000s is what DDR4 should be tested at, IMO.
That's your opinion, but respectfully, it is wrong. :(

AMD and Intel set the speed for their platforms. All JEDEC does is create a BASE standard for the memory. This standard is WELL below the platform's rated speed (which, again, is set by AMD and Intel), with timings way looser than what most sticks are rated for, typically for compatibility and stability reasons. But if AMD and Intel say 3200 is the max rated speed of the platform, you aren't overclocking the IMC until you get past that point.

This is really OFN, guys. I'm not sure why, even with links and pictures, the info is being shunned. :(

Platform rated values are specific to the platform.
Take a few minutes and look through mobo specs from AMD and Intel. You'll notice AMD B550/X570 boards, for example, will say... 2133/2666/2933/3200/3600(OC)/3800(OC). Notice how the "OC" doesn't come in until after 3200 (the platform's rated max?). X470 boards show 2933, and anything after that is overclocking.

Anyway, this is a bit OT... Donezo. :)
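To make that chipset listing concrete, here is a tiny sketch that pulls the non-OC maximum out of a spec string formatted like the B550/X570 example quoted above; the string itself is just that example, not an official source.

```python
# Tiny sketch: extract the platform's rated (non-OC) maximum from a motherboard
# spec listing like the B550/X570 example above.
def max_non_oc(spec: str) -> int:
    speeds = [s for s in spec.split("/") if "(OC)" not in s]
    return max(int(s) for s in speeds)

b550_x570 = "2133/2666/2933/3200/3600(OC)/3800(OC)"
print(max_non_oc(b550_x570))  # 3200 -- everything above it is flagged (OC) by the board vendor
```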
 
Hi,
I realize this is super late and nobody might see this, but I thought I would give it a shot.
I would love to see a gaming performance test between the 5900X and 10900K while other things are running.
I use my computer for work, which usually uses up 4 cores, and while things are running I like to play games. Somebody mentioned to me that the 5900X's gaming performance boost only comes from a specific core. So if that core is being used for work (since I can't control which cores get used), then gaming performance would take a huge hit, but that wouldn't happen with the 10900K. Is this true?
 
AMD is only "beating" Intel in having a higher price per performance.


Here where I live:
I got a 10100F for 70 Euro (my i3); a 3100 is at 110 Euro.

10400F 135 Euro, a 3600 179 Euro
10700KF 276 Euro, a 3700X 319 Euro

I don't need more than 8 threads with HT, and if Intel puts Rocket Lake's IPC 18% above my i3's, I'm happy with a Rocket Lake i3.
I wanted a system that works out of the box, no BIOS flash etc. Bullshit Bingo :laugh:

Edit:
Performance gain over my i5-4590S is about 43% multi-thread and about 25% single-thread.
i5-4590S = 96 W full load
i3-10100F = 72 W full load
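Just as a back-of-the-envelope check, plugging the figures above into a quick perf-per-watt comparison (only the quoted 43% multi-thread gain and the two full-load wattages, nothing else):

```python
# Rough perf-per-watt arithmetic using only the numbers quoted above.
old_perf, old_watts = 1.00, 96   # i5-4590S baseline, 96 W full load
new_perf, new_watts = 1.43, 72   # i3-10100F, +43% multi-thread, 72 W full load

gain = (new_perf / new_watts) / (old_perf / old_watts) - 1
print(f"Multi-thread performance per watt: +{gain:.0%}")  # roughly +91%
```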
 
Hi,
I realize this is super late and nobody might see this, but I thought I would give it a shot.
I would love to see a gaming performance test between the 5900X and 10900K while other things are running.
I use my computer for work, which usually uses up 4 cores, and while things are running I like to play games. Somebody mentioned to me that the 5900X's gaming performance boost only comes from a specific core. So if that core is being used for work (since I can't control which cores get used), then gaming performance would take a huge hit, but that wouldn't happen with the 10900K. Is this true?
I think the 10900K has a "preferred core" as well, but somebody correct me if I'm wrong.

IMO, if you use 4 cores for work (theoretically 4 of the better boosting ones), then you still have 8 left with the 5900X and 6 with the 10900K. Of course they won't boost as high as they would without using the other half of the CPU for work, but I doubt it affects gaming performance in a meaningful way. You're basically left with a slightly lower clocked 5800X or 10600K respectively, which is still fine. It would be interesting to see it tested, though (because again, I might be wrong).
 
It's not possible to test that scientifically, so no one can do it and have real results.
 
This is difficult to test at best. The problem is running a consistent repeatable load while testing games. This is likely something you'll need to test yourself for your specific use case.

As far as the performance goes, I don't imagine most titles will take a notable performance hit. The difference in clock speeds isn't that much, first of all. Second, it depends on the title and settings as well. If you're playing a game that can use a lot of cores and threads, it may take a bigger hit, since cores/threads are being used for other work. Games that use fewer cores/threads, less of an impact.
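If you do want to test it yourself, one way to get a repeatable "work" load is to pin a synthetic load to fixed cores before launching the game benchmark. A rough sketch using Python with the third-party psutil package; the core numbers and worker count are placeholders, not a recommendation.

```python
# Rough sketch: pin a synthetic 4-core "work" load to fixed logical CPUs so the
# background load is at least repeatable while a game benchmark runs alongside it.
# Requires the psutil package; core numbering and count are machine-specific.
import multiprocessing as mp
import psutil

def burn() -> None:
    x = 0
    while True:                      # busy loop standing in for the real work load
        x = (x + 1) % 1_000_003

if __name__ == "__main__":
    work_cores = [0, 1, 2, 3]        # placeholder: the cores sacrificed to "work"
    workers = [mp.Process(target=burn, daemon=True) for _ in work_cores]
    for w in workers:
        w.start()
    for w, core in zip(workers, work_cores):
        psutil.Process(w.pid).cpu_affinity([core])   # pin each worker to one core
    input("Work load running on cores 0-3; start the game benchmark, then press Enter to stop.")
```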
 
Great job laying everything out in this article. Your review style (and the way you format the data) has become my favourite on the entire interwebs.
 
This is probably stating the obvious, but I love the general format of the reviews on this site: how he compares both AMD and Intel in the same graph, color coded, and how there are plenty of systems covered (from budget to high end). To give a random example, there are four graphs for power consumption, whereas most reviews would have two, and some only one. I wish all motherboards, CPUs and GPUs could be reviewed this way. I don't sense a bias towards Intel either; I just sense a bias towards figuring out what's going on and how each one is better in certain situations. I've been in a big time crunch over the last week, trying to make decisions while local stock is running out and certain low prices won't last much longer, and these reviews have helped a lot.
 
W1zzard's got three skillsets:

1. coding skill
2. patience
3. really nice bar charts
 
What about Intel getting optimal RAM for the 10900K? 3800 MHz is good for the 5900X and its Infinity Fabric, but the 10900K could handle much faster RAM.

I'm using an X570 Dark Hero motherboard with a 5900X and 32 GB of G.Skill 3800 CL14, and it's going slightly slower than my 8700K @ 5 GHz on a Z370-E board with the same RAM (4K gaming, around 5 fps less with AMD). NVIDIA RTX 3090 at the same speed in both systems.

I'm very disappointed, because I waited for Ryzen for so long, and productivity doesn't matter to me. Just gaming.
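For context on why 3800 keeps coming up as the Ryzen sweet spot: DDR4-3800 corresponds to a 1900 MHz memory clock, which is roughly the highest Infinity Fabric clock most Zen 3 chips will run 1:1. A tiny sketch of that arithmetic, with the ~1900 MHz ceiling as a typical assumption rather than a spec:

```python
# Quick arithmetic behind "3800 is good for the 5900X": DDR4 transfers twice per
# clock, and Zen 3 performs best with FCLK = MCLK (1:1). The 1900 MHz FCLK
# ceiling used here is a commonly seen limit, not a guaranteed spec.
ASSUMED_MAX_FCLK = 1900  # MHz

def fabric_mode(ddr_rate: int) -> str:
    mclk = ddr_rate // 2  # e.g. DDR4-3800 -> 1900 MHz memory clock
    return "1:1 (coupled)" if mclk <= ASSUMED_MAX_FCLK else "2:1 (decoupled, latency penalty)"

for rate in (3200, 3800, 4000):
    print(f"DDR4-{rate}: MCLK {rate // 2} MHz -> {fabric_mode(rate)}")
```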
 
Once again, Zen 3 is newer, on the better node, and faster in everything, including PC gaming, than anything Intel has to offer.
 