
Can someone explain to me why a 13600k gets more fps in games than my ryzen 5600, even though games are not maximizing usage on the 5600?

Most games are bound by a single thread and memory latency -- the 13600K has much stronger single-core performance and a stronger memory subsystem, so you will get higher fps vs Zen 3.
this, 100%


So should new CPUs like the Ryzen 7800X3D be tested with a midrange GPU like the 6800 XT instead of a 4090 (not exclusively, of course) to give those of us considering a CPU upgrade a realistic view of how our fps will increase?

Fuck, I worded that badly. My head hurts, sorry. I think you get what I'm asking, though.
Should they? No.
Would it be nice if they did? Yes.

Obviously you want to remove the GPU bottleneck as much as possible, which is why you test at low res with the fastest video card (most of us, including you, know this). Reviewers are trying to show the potential of the hardware, not necessarily real-world results all the time. Techspot/HUB do some midrange video card testing for CPUs on occasion, but have said in response that those tests don't always get the page views/video views that top-tier testing gets, and they also get complaints from people asking them to test the 6800 XT and the 6700 XT, plus complaints about why they are testing only an AMD video card when they should test the RTX 3070 and 3060 Ti. I think it's an open can of worms for them.
 
You should use a different style of CPU load monitor, not one which displays all cores combined as 100%.

The better monitors display full load on a single core as 100%. Then you can usually see that you are running out of single core speed.
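If it helps, here's a rough sketch of the idea in Python (assuming psutil is installed; the threshold numbers are just illustrative):

```python
# Minimal per-core load monitor sketch (assumes: pip install psutil).
# Shows each core's utilization so one pegged core stands out even when
# the combined figure looks low.
import psutil

for _ in range(30):  # take 30 one-second samples
    per_core = psutil.cpu_percent(interval=1, percpu=True)
    combined = sum(per_core) / len(per_core)
    busiest = max(per_core)
    print(" ".join(f"{p:5.1f}%" for p in per_core),
          f"| combined: {combined:5.1f}% | busiest core: {busiest:5.1f}%")
    if busiest > 95 and combined < 60:
        print("  -> likely single-thread bound: one core pegged, overall usage low")
```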
 
So should new CPUs like the Ryzen 7800X3D be tested with a midrange GPU like the 6800 XT instead of a 4090 (not exclusively, of course) to give those of us considering a CPU upgrade a realistic view of how our fps will increase?

Fuck, I worded that badly. My head hurts, sorry. I think you get what I'm asking, though.
I think you can run the game at 720p and check the fps; then your head will hurt less.
At 720p it's all done by the CPU.
 
And they're simply lower-clocked chips.


Intel's chips are anything but easy to cool. Also, it's not the wattage that matters here; Ryzen chips have higher power density (watts per unit of die area).

Intel is easier to cool than Zen watt-for-watt because of that power density. You can't ignore the watts, because they're part of that equation. Intel's K SKUs aren't "easy" to cool because they push the total wattage up to get performance parity (or an advantage), and because the package can handle it. Under a similar thermal load and cooling solution, an Intel chip would almost certainly run at least a few degrees C cooler.

Quick-and-dirty example: I've got a 5600G and an 11700K on hand. Both are at stock clocks in closed cases with dual front fans, a single rear fan and 4-pipe air coolers. Under the CPU-Z stress test, the Ryzen pulls 58 W @ 77°C against the i7's 135 W @ 76°C. If limited to the AMD chip's (reported) 58 W, the Intel sits at 55°C.
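To put some rough numbers on the density point (die areas here are approximate figures I'm assuming, around 180 mm² for Cezanne and 276 mm² for Rocket Lake, and this treats the die as evenly heated, which it isn't):

```python
# Rough package-power density comparison. Die areas are approximate assumptions
# (Cezanne ~180 mm^2, Rocket Lake ~276 mm^2); wattages are the CPU-Z figures above.
chips = {
    "5600G (Cezanne)":       {"watts": 58,  "die_mm2": 180},
    "11700K (Rocket Lake)":  {"watts": 135, "die_mm2": 276},
    "11700K capped to 58 W": {"watts": 58,  "die_mm2": 276},
}

for name, c in chips.items():
    density = c["watts"] / c["die_mm2"]  # watts per mm^2 of die area
    print(f"{name:23s} {c['watts']:3d} W / {c['die_mm2']} mm^2 = {density:.2f} W/mm^2")
# At the same 58 W, the larger Intel die works out to a noticeably lower W/mm^2,
# which is the watt-for-watt point above.
```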
 
So should new CPUs like the Ryzen 7800X3D be tested with a midrange GPU like the 6800 XT instead of a 4090 (not exclusively, of course) to give those of us considering a CPU upgrade a realistic view of how our fps will increase?

Fuck, I worded that badly. My head hurts, sorry. I think you get what I'm asking, though.
I don't think they should.

Benchmarks for GPU should not be limited by what the CPU can do. Everyone knows the primary factor to increase perf in gaming is the GPU. So you toss the fastest CPU at it.

This way you get an honest view of the capability of the GPU.

The same thing applies to CPUs in gaming. You want to see them at max performance on the fastest GPU there is. That means a low resolution, so there is minimal GPU impact.

If you combine the two data sets, you can get a good idea of what performance level your setup might hit.
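Rough sketch of what I mean by combining the two (the fps numbers are made-up placeholders, not from any review):

```python
# Combine a low-res CPU test with a high-res GPU test: your fps is roughly capped
# by whichever ceiling you hit first. All numbers are illustrative placeholders.
cpu_fps_ceiling = {"Ryzen 5600": 110, "13600K": 160}    # fps at 720p/low on a 4090
gpu_fps_ceiling = {"6800 XT @ 1440p ultra": 130}        # fps with the fastest CPU

def estimate_fps(cpu: str, gpu: str) -> int:
    """Estimated in-game fps: the lower of the CPU-bound and GPU-bound ceilings."""
    return min(cpu_fps_ceiling[cpu], gpu_fps_ceiling[gpu])

for cpu in cpu_fps_ceiling:
    print(f"{cpu} + 6800 XT @ 1440p: ~{estimate_fps(cpu, '6800 XT @ 1440p ultra')} fps")
# With these placeholder numbers the 5600 is CPU-limited (~110 fps) while the 13600K
# is GPU-limited (~130 fps), so the real-world gap is smaller than the 720p charts show.
```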

PC configs in the end are not an exact science; if you run background apps, your CPU-bound game performance will already be different. So you want to remove all of that variability from reviews.

For those that want performance numbers for game X or Y or setup Z, surely there is some article out there or a YouTube video to watch.
 
Because they are in completely different classes of CPU - you will actually get much lower FPS with a 5600... it also costs a third as much, so there's that. The other things do matter as well, for sure, but the biggest, most glaring difference would be the CPU.

[attached benchmark chart]

The dips are also much less pronounced on the newer chips, so it actually does feel quite different in games. I game at 4K and I could instantly tell the difference in Cyberpunk between a 12600K OC'd at 5.3 and a 13700KF @ 5.6 - the FPS counter was only about 10% higher, but it felt much smoother. Not all games - many of them are exactly the same - but stuff like Hogwarts Legacy, Spider-Man, etc...



They compared Gear 2 DDR5 vs Gear 2 DDR4 -- a low-latency quad-rank (4-stick) B-die kit at 4000+ with tuned subtimings will match or beat a DDR5 6000 CL36 kit.

Source: me -- I went from a 12600K in Gear 1 on a DDR4 full-ATX board to an ITX system using DDR5 (I needed 64 GB and DDR5 has the density), and I actually lost a bit of fps. FPS Chasers also did a comparison with a 13600K and came to the same conclusion. Basically, very fast DDR4 = midrange non-overclocked DDR5.
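Quick way to see why, if you want the math: first-word latency is just CAS divided by half the transfer rate. Kit specs below are illustrative, not the exact kits from those tests:

```python
# First-word latency in nanoseconds: CAS cycles / (transfer rate / 2) * 1000.
# Kit specs below are illustrative, not the exact kits from the comparisons above.
def cas_latency_ns(rate_mts: int, cas: int) -> float:
    return cas / (rate_mts / 2) * 1000

kits = {
    "DDR4-4000 CL15 (tuned B-die)": (4000, 15),
    "DDR5-6000 CL36 (typical XMP)": (6000, 36),
    "DDR5-7200 CL34":               (7200, 34),
}
for name, (rate, cl) in kits.items():
    print(f"{name}: {cas_latency_ns(rate, cl):.1f} ns")
# ~7.5 ns for tuned DDR4 vs ~12.0 ns / ~9.4 ns for the DDR5 kits: DDR5 has far more
# bandwidth, but the tuned DDR4 still wins on raw CAS latency, which games tend to feel.
```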

As you might know, I like benchmarks.

Hey look. 3DMark06 is CPU-heavy.

Everything in this screenshot is on DDR5 6000 MHz or less on 13th gen: 13600KF and 13700K at 5.3 GHz average CPU clocks...

[attached 3DMark06 results screenshot]


Isn't that the same 13600KF I have now? One more case mod and the guts go into it. Both my 12600K and 12700K rigs are running at 5.3GHz now but they're running DDR4. I have 64GB (32x2) of G.Skill DDR5 6400 Trident Z5 Hynix A-die for the 13600KF.
Yes, that's the same CPU. You're the 3rd owner.

If I hadn't updated the ME firmware on my board, I might have kept it. But to get 5.2 GHz and up without OC, I had to get a 13700K.

But I've purchased another board which will hopefully come with stock ME installed so I can have my BCLK back again.

Got a 12600K that needs proper benching....
 
As you might know, I like benchmarks.

Hey look. 3DMark06 is CPU-heavy.

Everything in this screenshot is on DDR5 6000 MHz or less on 13th gen: 13600KF and 13700K at 5.3 GHz average CPU clocks...

[attached 3DMark06 results screenshot]


Yes, that's the same CPU. You're the 3rd owner.

If I hadn't updated the ME firmware on my board, I might have kept it. But to get 5.2 GHz and up without OC, I had to get a 13700K.

But I've purchased another board which will hopefully come with stock ME installed so I can have my BCLK back again.

Got a 12600K that needs proper benching....
Alder Lake CPUs common discussion | Page 13 | TechPowerUp Forums

^ here's that 12600K on DDR4 -- I still sit at #11 with the 12600K in the Time Spy CPU rankings

[attached Time Spy score screenshot]



Top CPU score was 16700 with tuned 4133 DDR4 and 17014 with tREFI-tuned DDR5 @ 6800 MHz.
 
If not, then
...you use your "redundantly fast" memory with your future CPU without needing to upgrade the whole system. That premium is very low considering how long the sticks will stay fast enough, and how long it takes to make use of older equipment (you may want to sell it, use it in a secondary PC, or whatever; it needs some thinking and effort anyway).

I'm not aware of the differences between X3D and non-X3D memory controllers, though. Still, it's more typical for a Ryzen user to only upgrade their CPU+GPU, keeping the rest of the system the same until it's completely obsolete. So why not use future-proof memory modules?
 
Honestly, it's still a pretty decent setup -- if it's running smooth, I would just tune it and game. But there's also this:

[attached screenshot]

This will get you back to 13600K levels of performance (or close enough) in gaming -- you can easily sell your 5600 for $100 and then just hang out until Meteor Lake / Zen 5.

A lot of the 5800X3D crowd is going to be selling their stuff to go Zen 5, and it's still a really sweet chip that you might be able to pick up fairly inexpensively. That plus some DDR4 tuning and you're set for a while.
This!

If I had a 5600, the only upgrade path I would think about is the 5800X3D. Everything else is a waste of money.
 
This!

If I had a 5600, the only upgrade path I would think about is the 5800X3D. Everything else is a waste of money.

It just depends on your life situation whether it's a waste or not. I want to go to the UK this summer, but can't, because my fiancée has no room for me; she had to get a girlfriend of hers to move in with her because inflation in the UK is so rough.

Really bummed out, so I will most likely say fuck it all and buy a 7800X3D.
 
...you use your "redundantly fast" memory with your future CPU without needing to upgrade the whole system. That premium is very low considering how long the sticks will stay fast enough, and how long it takes to make use of older equipment (you may want to sell it, use it in a secondary PC, or whatever; it needs some thinking and effort anyway).

I'm not aware of the differences between X3D and non-X3D memory controllers, though. Still, it's more typical for a Ryzen user to only upgrade their CPU+GPU, keeping the rest of the system the same until it's completely obsolete. So why not use future-proof memory modules?
Memory faster than the IMC will support will require the OP to use manual timings, or stock (non-EXPO/XMP) timings. That's something he should be aware of before purchasing... as are the long POST times on AM5, especially when changing timings. Your thought process regarding longevity is good, but the issues I raised are things a person like the OP should be made aware of so they can make an informed decision.
 
use manual timings
I may be too old-school, but I thought AMD users typically don't consider running timings on auto an option at all, so I hadn't even thought about mentioning it.

Find the fastest stable OC once, then the fastest stable OC again 4 years later when you max your system out with some Ryzen 9900X4D or something, and that's all.

If you're just enabling XMP, you're as good as free to buy the cheapest low-CL 6000 MHz sticks and call it a day. I'd never do that anyway. Some memory sticks sport multiple XMP profiles, so it should prove issue-free (e.g. enabling XMP 2 at 6000 CL30 when the sticks have XMP 1 at 6600 CL32).
 
I may be too old-school, but I thought AMD users typically don't consider running timings on auto an option at all, so I hadn't even thought about mentioning it.
I run them at auto. I can't be bothered to fiddle with numbers for half a day for a couple percent extra performance that I won't even feel outside of benchmarks. :ohwell:
 
for a couple percent extra performance that I won't even feel outside of benchmarks
DDR5 is DDR5. There's close to no real software that takes advantage of it for now. By 2027 you'll see how your 6000 MHz RAM allows for a not-so-stable 60 FPS with drops to 40 in some AAA title, whilst optimised 6400 will allow for a steady 65–70 FPS.

This happened with DDR4, which was meaningless to OC in 2014. Don't you see it has become ridiculous to run DDR4 at speeds below 3200 MHz today?
This also happened with DDR3, which was meaningless to OC in 2007.
 
DDR5 is DDR5. There's close to no real software that takes advantage of it for now. By 2027 you'll see how your 6000 MHz RAM allows for a not-so-stable 60 FPS with drops to 40 in some AAA title, whilst optimised 6400 will allow for a steady 65–70 FPS.
By that time, 6400 MHz will be the standard, and probably also dirt cheap. I didn't build this PC for 2027 - I built it for today. :) What will happen will happen, and I'll see what makes sense to buy and tune then.
 
By 2027 you'll see how your 6000 MHz RAM allows for a not-so-stable 60 FPS with drops to 40 in some AAA title, whilst optimised 6400 will allow for a steady 65–70 FPS.
Most likely it will be like 3200 MHz RAM today, which was considered good-speed RAM four years ago: capable of getting the job done, but clearly getting long in the tooth. I doubt anyone would see an almost 40% drop in FPS lows as in your example.
 
You've got an AMD GPU, so asking yourself whether you should go Intel or AMD for the CPU is a pointless question. By going with the 7800X3D you get the best price and performance of this generation, and just like the 5800X3D before it, it will give the next generation a hard time as well. It provides the best gaming efficiency, and it's easy to use AMD-only apps and drivers for an all-AMD setup. On top of that you get the best advantage of them all - SAM.
 
Why would you even try to compare a 5600 to a 13600K? The 13600K is superior, period! You can compare the 5600 to a 12100 or a 12400. The 13600K matches the 12900K in most test results, which only shows how powerful it is. Also, on Windows 11 you don't need to turn the E-cores off, because on Win11 they do a good job. So make sure you run the 13600K on Windows 11 to see its full potential.
 
Why would you even try to compare a 5600 to a 13600K? The 13600K is superior, period! You can compare the 5600 to a 12100 or a 12400. The 13600K matches the 12900K in most test results, which only shows how powerful it is. Also, on Windows 11 you don't need to turn the E-cores off, because on Win11 they do a good job. So make sure you run the 13600K on Windows 11 to see its full potential.
That wasn't the question space lynx posed; he's well aware the 13600K is better.
 
It was just a mistake. Also, someone had told me SAM didn't work as well with an Intel + AMD combo. I was just dumb, though. Lesson learned.
Hi,
The way Intel is dropping prices, it might work out for the best.
Now you can get a higher-tier chip.
 
I may be too old-school, but I thought AMD users typically don't consider running timings on auto an option at all, so I hadn't even thought about mentioning it.

Find the fastest stable OC once, then the fastest stable OC again 4 years later when you max your system out with some Ryzen 9900X4D or something, and that's all.

If you're just enabling XMP, you're as good as free to buy the cheapest low-CL 6000 MHz sticks and call it a day. I'd never do that anyway. Some memory sticks sport multiple XMP profiles, so it should prove issue-free (e.g. enabling XMP 2 at 6000 CL30 when the sticks have XMP 1 at 6600 CL32).
Maybe AMD has issues getting their IMC working faster than 6000 MT/s and will decide to do quad channel instead of increasing frequency. Doubtful, but I don't know what AMD has planned for the future. I recommend people buy 6000 MT/s DDR5 now to enjoy pretty much the fastest RAM they can currently use, then get faster RAM down the road when it's cheaper and they know what the then-current processor supports.

It could very well be that a person gets a 16/32 GB kit right now, decides they really want 32/64 GB in 4 years, and would like to buy another 2 sticks. In that scenario, even if the 9xxx-series AMD chips support 7000 MT/s RAM with 2 sticks, they could be limited to 6000 MT/s when using 4 sticks, like how the current IMC is limited to around 5200 MT/s with 4 sticks.

The most RAM-overclock-savvy AMD users amongst us buy the cheaper kits with slower advertised speeds but the same chips as the faster, more expensive ones, and then OC them for the best bang for the buck.
 
Most games are bound by a single thread and memory latency -- the 13600K has much stronger single-core performance and a stronger memory subsystem, so you will get higher fps vs Zen 3.



^ yes they should -- you will still get better fps with the 7800X3D, especially in the 1% lows, and the infamous UE4 shader-load stutter will be much less noticeable.


Yes, very much true. Even if you disable the E-cores and make both CPUs 6-core/12-thread counterparts, the Raptor Cove cores in the 13600K have around 20% better IPC than the Zen 3 cores in the 5600X. And they clock higher too, so it will perform much better in games and most other apps, except for rare outliers optimized for AMD.
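Rough back-of-envelope version of that (the ~20% IPC figure is from the post above; the boost clocks are approximate values I'm assuming, not quoted from a spec sheet):

```python
# Back-of-envelope single-thread comparison: relative perf ~= IPC ratio * clock ratio.
# IPC advantage (~20%) is the figure above; boost clocks are approximate assumptions
# (13600K ~5.1 GHz P-core boost, 5600X ~4.6 GHz boost).
ipc_ratio     = 1.20   # Raptor Cove vs Zen 3, per the post above
clock_13600k  = 5.1    # GHz
clock_5600x   = 4.6    # GHz

relative_perf = ipc_ratio * (clock_13600k / clock_5600x)
print(f"Estimated single-thread advantage: ~{(relative_perf - 1) * 100:.0f}%")
# Roughly a third faster per thread in this crude model, which is why CPU-bound
# games feel different even when overall CPU usage never hits 100%.
```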

Why would you even try to compare a 5600 to a 13600K? The 13600K is superior, period! You can compare the 5600 to a 12100 or a 12400. The 13600K matches the 12900K in most test results, which only shows how powerful it is. Also, on Windows 11 you don't need to turn the E-cores off, because on Win11 they do a good job. So make sure you run the 13600K on Windows 11 to see its full potential.


Well, even the 12400 has better IPC than the Ryzen 5600. The 12400 is basically a 12600K with no E-cores and no unlocked multiplier. It's a closer comparison, though, as I believe the 5600 generally runs a higher clock to compensate for roughly 15% lower IPC than the 12400. And you cannot overclock the 12400 without playing with BCLK, which is hard.

You can OC DDR4 on Raptor Lake to get almost as good performance as DDR5. A lot of the initial DDR4 vs DDR5 reviews (HWUB) were done with Gear 2 instead of Gear 1.


I have been on and off the fence regarding DDR5. I wanted to use fast DDR5 to future-proof my system (yeah, I know, something better always comes out and you cannot really do that, but still) -- basically so it would still give better performance down the road and maybe keep up, even though it will lose to the latest stuff, to avoid having to build again for a while.

But I have had nothing but trouble with DDR5 XMP and Asus boards. Never fully stable. Even downclocking the RAM to 6000 makes it better, but it still fails OCCT Large Data Set Variable with a random WHEA error or BSOD after a run or two, and it's very random -- it can seem fine, then boom. This is on Intel 13th Gen, and I have heard Intel 12th and 13th Gen have a signal-balancing issue, which is why DDR5 overclocking/XMP is so much trouble, why raising timings does not bring stability, and why it is worse on Asus boards.

I went back to DDR4 and was hoping I could get it well tuned.

Well, I scored a $100 deal on a used Samsung B-die kit, XMP 4000 CL19 at 1.35 V.

I played around and got it almost fully stable at 4300 CL16. CL15 is not stable even at 1.6 V, even at 4100. At 4000 it's perfectly stable at 1.5 V, like my last B-die kit, which I sold when I tried Ryzen 7000 but gave up on because of motherboard coil whine, whereas there's none on Intel 13th Gen.

However, it's not totally stable at 4300 CL16 -- maybe the IMC cannot handle it -- with intermittent Y-cruncher errors and a TestMem5 error.

So I settled on 4200 CL16-16-16-36 with tRC 52 and tRFC 260 (tRC and tRFC values per Good Old Gamer's YouTube memory-tuning guide for Samsung B-die), and it appears to be fully stable. So how does this stack up against DDR5 6400 or even 7200? Is DDR5 6400 to 7200 still much better than 4200 CL16?

Because if so, I can try the EVGA Dark, which is recommended as a board that makes it straightforward, plays nice with auto values, and handles the Intel signal-balancing issue, to get fully stable DDR5 XMP up to 7200 without any work.


Read the post by affxct to see what I am talking about, and if you see my thread, stability has been a nightmare with DDR5 XMP on like 5 different Asus boards I tried, hoping each time the last one would be the charm with better BIOS updates, but no. Even manual tweaking did not do anything.

So is it worth it or will CL16 4200 Gear 1 match DDR5 with a 13900K?
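Rough sanity check on that last question (the DDR5 CAS values are typical XMP numbers I'm assuming, and this ignores Gear 1 vs Gear 2 controller latency and subtimings):

```python
# Ballpark comparison of the kits in question: first-word latency vs theoretical
# dual-channel bandwidth. Ignores Gear 1/Gear 2 controller latency and subtimings.
def first_word_latency_ns(rate_mts: int, cas: int) -> float:
    return cas / (rate_mts / 2) * 1000

def dual_channel_bw_gbs(rate_mts: int) -> float:
    return rate_mts * 8 * 2 / 1000  # 8 bytes per transfer per channel, 2 channels

configs = {
    "DDR4-4200 CL16 (Gear 1, tuned)": (4200, 16),
    "DDR5-6400 CL32 (assumed XMP)":   (6400, 32),
    "DDR5-7200 CL34 (assumed XMP)":   (7200, 34),
}
for name, (rate, cl) in configs.items():
    print(f"{name}: {first_word_latency_ns(rate, cl):.1f} ns, "
          f"{dual_channel_bw_gbs(rate):.0f} GB/s peak")
# The tuned DDR4 is ahead on raw latency (~7.6 ns vs ~9-10 ns) but well behind on
# bandwidth (~67 vs ~102-115 GB/s); most games lean on the latency side.
```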
 