
RTX 4090 & 53 Games: Ryzen 7 5800X vs Core i9-12900K

So now you’ll test a 7950X/5800X3D vs. 13400F, right?

Not that this article is in bad faith, but low- and high-end CPUs from either manufacturer should have been tested.
It's completely in bad faith... Wizzard is anything but an unbiased reviewer.
 
Should I test 5800X3D or faster RAM speed?
No 5800X3D? Why even bother with the 5800X? Seems like a huge time suck for no reason...
 
Thank you, amazing work. That's a lot of testing, and it helps a lot. I'm on a 5600; I think a 3080 will be good enough for me.
 
It's completely in bad faith... Wizzard is anything but an unbiased reviewer.

You can tell the biggest reason for this comparison is the criticism pointed at the RTX 4090 review, where the original test platform with the 5800X was said to be holding back the RTX 4090.

This is addressing that criticism and testing it, rather than simply asserting that the 5800X is a good enough platform for such a high-end card, particularly at 4K.

And this article validates that criticism. It is humble of Wizzard to publish this article.

Not all articles are ads for AMD where AMD needs to be shown in the best possible light.
 
Superb article; the author deserves all the praise in the world for the collected and provided data. That said, I still believe a 5800X3D instead of a regular 5800X would have been much more interesting to benchmark, in order to assess the extent to which the "3D stacked L3 cache" technology can improve performance with state-of-the-art GPUs.
 
Can you justify benching a $500 Intel CPU vs a $270 AMD CPU (current Amazon.com prices)?

Why not the 5800X3D which is still only $400?
 
No 5800X3D? Why even bother with the 5800X? Seems like a huge time suck for no reason...
Huge time was sucked just because bad RAM was used; he would have been better off with 3200 CL16 and would have gotten better results than with 4000 CL20. But I do agree that the 5800X3D should be used with at least E-die RAM (so at least 3600 CL16, and no more than 3800 CL16). The 5800X3D came out to answer Intel's 12th gen; the 5800X was, and should be, compared to the 10900K, since it came out about two years ago.
 
Am I the only one astonished that this 5800X runs 1:1 IF with DDR4-4000?

Any Intel 12th-gen CPU can easily run XMP at DDR5-6000 or higher, even 6800. But pushing a Zen 3 to a 2000 MHz Infinity Fabric 24/7 daily is not easy.
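
For reference, here is a quick sketch of how those ratios work out (my own arithmetic, assuming the usual convention that 1:1 means FCLK = UCLK = MCLK, with MCLK being half the DDR data rate):

# Zen 3 memory/fabric clock ratios, simplified.
# DDR transfers twice per clock, so the real memory clock (MCLK)
# is half the advertised data rate; running 1:1 means the Infinity
# Fabric (FCLK) and memory controller (UCLK) match MCLK.
def fclk_for_one_to_one(ddr_rating: int) -> int:
    return ddr_rating // 2  # e.g. DDR4-4000 -> MCLK 2000 MHz -> FCLK 2000

for rating in (3200, 3600, 3800, 4000):
    print(f"DDR4-{rating}: needs FCLK {fclk_for_one_to_one(rating)} MHz for 1:1")

# Many Zen 3 samples top out around FCLK 1866-1900, which is why
# the FCLK 2000 used in this review is an unusually good sample.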
 
Superb article; the author deserves all the praise in the world for the collected and provided data. That said, I still believe a 5800X3D instead of a regular 5800X would have been much more interesting to benchmark, in order to assess the extent to which the "3D stacked L3 cache" technology can improve performance with state-of-the-art GPUs.

Actually, the only way you'd know that is if they did a 3-way comparison. Your baseline would be the 5800X, not the 12900K.

This is the first review I've seen that shows conclusively that the CPU is becoming the prime limiting factor for new GPUs at anything below 4K, and the effect is not small.

The funny thing is, this review, such as it is, pretty much means every 4090 review in existence is probably invalid as far as representing that GPU's performance.
 
@W1zzard hi, are you also going to change the graphics card in the CPU review test bed (RTX 3080 -> RTX 4090, for example), or are you waiting for more mainstream next-gen models to become available (RTX 4080 or the upcoming RDNA3 ones) so that the CPU results are more representative for the vast majority of users (not those buying $1,600+ graphics cards)?
 
No 5800X3D? Why even bother with the 5800X? Seems like a huge time suck for no reason...
Nothing better to do? Intel's last-gen high end vs. AMD's mid-high end, made obsolete by the 5800X3D. Next month: Intel 13900K vs. AMD 3800X!
 
Why are you still testing Borderlands 3 using DX11 when DX12 has been the more performant renderer almost since the game came out?
 
It's an odd setup.

The plus is running IF at 2000 1:1

I think TPU's bench is normally run at 1800 1:1, so the loose timings were likely necessary to get 4000 with IF 2000 1:1 (this is rare on Zen 3).

I would bet that in some cases the 4000 C20 wins with the higher IF speed 2000 vs 1800 1:1. Probably more of the eye-candy games like Cyberpunk and Assassin's Creed will prefer 4000, while what I call 'twitch' e-sports titles will prefer the lower latency.

I personally would have rather seen the standard bench setup though, with more CPUs tested, even if the tested set of games was smaller.

The standard test bench at TPU IMO represents what most DIY enthusiasts will build (DDR5-6000 C30 / C36, DDR4-3600 C14 or C16). Every other site seems to have some implausible config that makes their data unrepresentative.

This setup, unfortunately, kinda fits in that 2nd category.



I'd prefer you used the standard bench setup, regardless of CPUs tested. I don't have a basis for comparison using DDR4-4000 C20.
If 2000 1:1 is winning over 1800 1:1 in gaming (no matter which game), then something is wrong, especially when it's 4000 CL20-23-23. The only AMD CPUs that profit from higher frequency are the APUs. Latency on AMD is also a bit weird. I've got E-die 3600 CL16; I can do 4000 (1:1) with much better timings than on this system, and my latency in AIDA64 was around 53 ns. With the much tighter 3733 CL14-8-17 I'm getting 55.5 ns, but the difference in games was night and day. Anyone aiming for maxed-out fps should consider 3600/3733 MHz with timings as tight as possible.
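
The first-word latency math backs this up; a small Python sketch (my own numbers from the standard formula, not figures from the article):

# First-word latency in ns = CAS cycles / real memory clock (MHz) * 1000.
# The real memory clock is half the DDR data rate (DDR4-4000 -> 2000 MHz).
def first_word_latency_ns(ddr_rating: int, cas: int) -> float:
    return cas / (ddr_rating / 2) * 1000

for rating, cas in ((4000, 20), (3600, 16), (3733, 14)):
    print(f"DDR4-{rating} CL{cas}: {first_word_latency_ns(rating, cas):.2f} ns")

# DDR4-4000 CL20 -> 10.00 ns, 3600 CL16 -> 8.89 ns, 3733 CL14 -> 7.50 ns:
# the loose 4000 CL20 kit gives up a lot of latency to the tighter
# configurations suggested above, even with its higher IF clock.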
 
Wizzard will never paint AMD CPUs in a good light!
Why do you come to my house and shit on me?

Edit: maybe post something nice next time, goodbye
 
Well done on taking the time to do this massive test; it is more than appreciated. It needs to be stated that this is not an Intel vs. AMD, 'mine is better than yours' test, but rather a bottleneck test showcasing what a GPU like the RTX 4090 is going to need if you are looking to extract the best out of it...
 
You can tell the biggest reason for this comparison is the criticism pointed at the RTX 4090 review, where the original test platform with the 5800X was said to be holding back the RTX 4090.

This is addressing that criticism and testing it, rather than simply asserting that the 5800X is a good enough platform for such a high-end card, particularly at 4K.

And this article validates that criticism. It is humble of Wizzard to publish this article.

Not all articles are ads for AMD where AMD needs to be shown in the best possible light.
Not everyone buys the fastest ST/gaming CPU and also the fastest single GPU out there! In fact, fewer than 0.000001% of people will do that. So while you could argue that the 5800X may be holding the 4090 back at lower resolutions, when you're spending 1.6 grand on a freaking brick, are you also going to spend another grand (or $400 on a 5800X3D) on the rest of the system? I guess everyone here drives a Koenigsegg, or they're looking forward to inheriting Warren Buffett's millions o_O
 
From Nvidia's point of view, wouldn't it have made more sense to present the RTX 4090 after both Zen 4 and Raptor Lake were out, and to say outright that any test on older gear will bottleneck the GPU badly? There was no hurry, since Raptor Lake will be out long before Navi 3, and I read that they have to clear Ampere stock anyway. I was just wondering if I'm missing something.

edit: Trying to be completely objective here. I don't prefer any brand; I always buy second-hand, best bang for the buck, so they get no profit from me whatsoever. But it got me thinking whether they should have waited another 2-3 weeks.
 
Is the 12900K here running with E-cores enabled in these games? I just want clarification on 24T vs. 16T.
 
If you want to test the 5800X3D, please use DDR4-3733 or 3200 CL14. Thanks.
From playing around with my RAM speeds, DDR4-3600 is the better choice over DDR4-3733, as most people took advantage of the price-vs-performance gap between the two.

CL14 of course does make a difference. 3200 CL14 is almost equal (or equal) to 3600 CL14, from what I found in my testing.

I actually downclocked/downvolted my RAM from 4000 CL15-15-15-36 to 3600 CL14-14-14-34. 7.8 ns vs. 7.5 ns is virtually a tie, and I'm running less voltage on my RAM; 1.5 V vs. 1.4 V is a no-brainer.

Everything counts in my case. Again, I've posted this before, but it's worthwhile information.

So I suggest 3600 and its flavors. BTW, I'm using the AM4 platform for my results.

That's a true statement, from what I've been seeing and reading.
Not really. A lot of people, including myself, have been supporting AMD for years. But when a company changes its culture, forgets who got it there, and (IMHO) puts out components at prices out of reach for the average person, you VOTE with your wallet.

Also, a few of us old-timers who have been in the industry, myself for 34 years, are sick and tired of the excuses given by influencers and apologists for why product "X" costs 60% to 100%+ more while at times delivering less than 30% performance increases.

I have stated this before and stand by this comment; it is directed not only at AMD but also at Ngreedia and the rest of the tech industry.

When performance is at the cost of excessive wattage and/or heat... THERE IS NO PERFORMANCE AT ALL.

Oh and thank you W1zzard for the results.
 
Even my tiny laptop SO-DIMMs (2x16 GB CJR) can do 4000 MHz with better timings than that. :(

It's going to affect those results significantly, by more than 10%.
 
53 games!!!!!!!

And not one DX9!!! ;)

"After hundreds of individual benchmarks, we present you with a fair idea of just how much the GeForce RTX 4090 "Ada" graphics card is bottlenecked by the CPU."

A question for the software/hardware experts out there.....

When a game is CPU-bound, how much of that 'boundness' would be due to:
-IPC
-Frequency
-Cache
-Architecture/IMC/ram speed/BIOS/OS (branch prediction, HPET, other OS oddities, etc.)
-Other (API/optimization, instruction sets, etc., misc., other?)

or basically impossible to say?
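
One crude way to frame it: whichever side takes longer per frame sets the frame rate, and every factor in that list feeds the CPU side. A toy model in Python (hypothetical numbers, not TPU's methodology):

# Toy bottleneck model: each frame needs CPU work (simulation, draw
# calls) and GPU work (rendering); the slower side limits the fps.
# Real engines pipeline the two, so this is only a rough upper bound.
def effective_fps(cpu_fps: float, gpu_fps: float) -> float:
    return min(cpu_fps, gpu_fps)

cpu_limit = 160.0  # hypothetical fps the CPU can feed, roughly resolution-independent
for res, gpu_fps in (("4K", 140.0), ("1440p", 240.0), ("1080p", 330.0)):
    bound = "GPU" if gpu_fps < cpu_limit else "CPU"
    print(f"{res}: {effective_fps(cpu_limit, gpu_fps):.0f} fps ({bound}-bound)")

# IPC, clocks, cache, memory, and API overhead all collapse into that
# single cpu_limit number, which is why separating their individual
# contributions from a GPU review alone is basically impossible.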
 
Actually DDR5-7400 results coming from me this week

Zen 4 and Raptor Lake?

A lot of these X670 and Z790 motherboards are touting the ability to run that kind of speed now.

Hi @W1zzard, can you confirm whether AMD works with memory up to 6000 MHz? Asking because the Infinity Fabric on Zen 4 stays at 3000 MHz, right?

There is other information showing that DDR5 memory up to 6400 MHz has trouble working well on many motherboards, apparently due to an issue with the number of PCB layers (various sources talk about running such high-frequency memory on motherboards with 8 or more layers), and saying the motherboard must have 2 oz copper, compared with the regular copper on most motherboards.

Any confirmation on the above topics would be appreciated.

thanks

:)
 