
AMD Ryzen 7000X3D Announced, Claims Total Dominance over Intel "Raptor Lake," Upcoming i9-13900KS Deterred

4 upgrades? It has been a while for me! :laugh:

Yes, the 2700K is my main PC; I don't have anything higher spec than this - see specs for full info. Note that I've upgraded just about everything other than the CPU, supporting components and case since I built it in 2011.

Well, on the desktop, it feels as snappy as ever. Seriously, no slowdown at all since I first built it, so clearly Microsoft hasn't made Windows any slower. Fantastic. I don't run any intensive apps that would really show up the lack of performance compared to a modern system.

Now, while I do have hundreds of games, I haven't played that many of them (Steam is very efficient at separating me from my money with special offers lol) or that often.

I ran Cyberpunk 2077 and got something like 15-25 fps even when dropping screen res and details right down, so it's no good for that. In hindsight, I should have gotten my money back, nvm.

CoD: Modern Warfare (the newer one) runs quite well at 60-110 fps or so. It jumps around a lot, but with my newish G-SYNC monitor, that hardly matters and it plays fine. Even before I had G-SYNC, the experience was still good, if not great, especially with the refresh rate set to 144 Hz and vsync off - it felt very responsive like that. Note that my 2080 Super easily plays this game at 4K. I don't have all the details maxed out though, regardless of resolution. For example, I don't like motion blur, and ambient occlusion doesn't make that much visual difference, so I turn them both off - both really reduce performance.

CoD: Modern Warfare II Warzone 2.0 runs rather worse and can drop to a stuttery 40 fps, which is below what the G-SYNC range will handle, but it's otherwise not too bad. It also tends to hitch a bit, but my console friends reported that too, so it's a game engine problem, not my CPU.

I've got CoD games running back generations and they all work fine. Only the latest one struggles to any degree.

I've run various other games which worked alright too, can't remember the details now. It's always possible to pick that one game that has a really big system load, or is badly optimised and runs poorly, but that can happen even with a modern CPU.

I have a feeling that this old rig, with its current spec, can actually game better than a modern gaming rig with a low end CPU. Haven't done tests of course, but it wouldn't surprise me.

Agreed, I don't like the greater expense for AMD either, so the devil will be in the details. I want to see what the real-world performance uplift will be compared to the 13700K I have my eye on before I consider my upgrade. Thing is, every time I think I'm finally gonna pull the trigger, the goal posts move! The real deadline here, of course, is that Windows 10 patch support ends in 2025, so it's gonna happen for sure by then.

And finally, out of interest, here's the thread I started when I upgraded to my trusty 2700K all those years ago. It's proven to be a superb investment to last this long and still be going strong.


Oops, I meant 3 upgrades after the 2700K, with the 4th pending.

If it's getting the job done, I think it's fantastic you've got the 2700K smooth sailing for this long. Looks like you've had a blast at 4K, and it makes sense with most of the weight probably thrown over to the GPU end.

With my 2700K, I kept hitting a brick wall with Battlefield - although 100% playable, I fell short on visual smoothness. It's a difficult one to explain, with FPS and frametimes being decent, but I could still sense some lumpy roughness, some jiggery jaggery boo in fast-paced scenes or dense environments. The first assumption was the GPU, which was upgraded, and I could still feel some irregularity. Eventually I reinstalled Windows for one last attempt and then gave up... grabbed a 4790K. With each Battlefield release, the lack-of-smooth offender returned, and each time a jump up a couple of gens resolved the problem, which eventually landed me on a 9700K. Not gonna lie, it wasn't just observable performance punching the upgrade ignition button, I sadly suffer from the upgrade-itch too. Now the current BF is starved for threads and the 8-core/8-thread 9700K (which does a decent job) will sadly be put to rest. I'm a buff for screen-time silkiness, and something like a 7800X3D/5800X3D sounds like a sound plan for a 3-year excursion (or 2, you know, the upgrade-itch hehe)

Thing is, every time I think I'm finally gonna pull the trigger, the goal posts move!

The goal posts stayed put for me when considering CPU upgrades... but moved a couple of miles far and beyond when considering GPU upgrades. The 40-series (or RDNA3) was the last stop, the unyielding affirmative buy... and then NV dropped those ridiculous MSRPs, crushed the hope and glory and left me traumatised lol (ok, a bit dramatic - simple as, no thanks, ain't gonna withdraw from me wallet to fill the corps' already fattened-up pockets)
 
If he's getting worse 1% and 0.1% lows, is he going to get better ones at 4K?
I'm not seeing where either one of you posted 0.1% or 1% low gaming results for the 12900KS & 5800X3D, let alone @ 4K. And yes, you will get different margins @ 4K than at lower resolutions.

Below are some average FPS differences between the 13900K and 5800X3D showing that @ 4K the 13900K averages 1.3% faster, but 6.2% faster @ 1080P. These results don't resolve whose claim is right regarding lows, nor offer an actual comparison to the 12900KS. They do, however, show why posting 720P results is not useful in arguing which CPU is going to be faster @ 4K, as the lower-clocked, higher-cache CPU is at a major disadvantage as the resolution is reduced below 4K.

Maybe you or @Crylune would like to actually provide 1% and 0.1% low @ 4K results between the 5800X3D and 12900KS (and I suppose the 12900K since you claimed 12900K was also faster) so the thread is more informative?

2160P:
[attached chart: average FPS, 13900K vs 5800X3D @ 4K]

1080P:
[attached chart: average FPS, 13900K vs 5800X3D @ 1080P]
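As an aside, aggregate margins like the 1.3%/6.2% figures above are usually computed as a geometric mean of per-game FPS ratios rather than a plain average. A minimal sketch, with made-up per-game numbers (not TPU's actual data):

```python
from math import prod

def relative_margin(fps_a, fps_b):
    """Geometric mean of per-game FPS ratios (CPU A over CPU B),
    returned as a percentage margin; positive means A is faster."""
    ratios = [a / b for a, b in zip(fps_a, fps_b)]
    geomean = prod(ratios) ** (1.0 / len(ratios))
    return (geomean - 1.0) * 100.0

# hypothetical per-game averages, for illustration only
fps_cpu_a = [144, 210, 98, 175]   # e.g. a 13900K-class CPU
fps_cpu_b = [140, 205, 99, 168]   # e.g. a 5800X3D-class CPU
print(f"{relative_margin(fps_cpu_a, fps_cpu_b):+.1f}%")
```

The geometric mean keeps one outlier title from dominating the result, which is why review sites tend to use it for their "relative performance" charts.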
 
Why does everything have to be for gaming?
[attached image]

MEH.
Because this is nothing more than a dick measuring contest between AMD and Intel. Each wants to be able to market that they make the fastest, undisputed, most powerful gaming CPU powerhouse etc. etc., no matter the cost, efficiency or anything else, when the reality is any midrange chip from the previous gen is more than enough to drive high-refresh-rate gaming. In a way this is good for consumers, but I'd rather they focus on things that actually matter for gaming, like, you know... the damn GPU. That market is not exactly good rn. AMD should focus on better driver support, better pricing, fixing their current lineup, and more powerful GPUs so they claw back market share from Nvidia. Instead they focus on single-digit percentage improvements that can mostly be seen in 720p gaming... because, you know, that makes so much more sense lmao.
 
If the 7800X3D is as good as this article suggests, then Intel have a big problem.

I'm gonna seriously consider this for my 2700K upgrade once the reviews are out. Will be really nice to dodge the e-core bullet, if nothing else.

2700k to a 7800x3d would be a baller as fuck upgrade lol, do it!!! :rockout::rockout::rockout::rockout:
 
Very reliable result, the 12700k has better lows than the 13900k, lol.
Why not? This is the core of the issue: maximum or average FPS says absolutely nothing about frametimes, especially at the 0.1% and 1% lows.

It's very common to see a combination of lower maximums and better control over outliers. It relates to bursty frequency behaviour as well: if the CPU can boost high, there's a larger gap between boost and base clock, so your peak FPS might be higher, but your worst numbers are also worse. Why do you think Intel is progressively lowering base clocks gen over gen to attain higher boost? It's not to help minimums, but to shine in maximums. In GPUs, pre-rendered frames and frame smoothing create some of the same effects: maximum FPS is sacrificed to use the available time to start on the next frame earlier.

X3D isn't about peak frequency, it's about peak consistency, and it shows everywhere. These CPUs are most useful for gaming because they elevate performance in precisely those situations where you dip the hardest, because you're missing the required information at the correct time. That's where the cache shows its value best, and that's where it differs from every other CPU.

Intel can keep up in a large number of games because they're well managed in CPU load; this applies to most triple-A content and most console content, but it absolutely does NOT apply to simulations that expand as you go into the end-game (almost every generated frame is one where lots of info must be collected to present the correct next step in the simulation, and the amount increases the further your army/village/galactic empire expands).

Who cares if you can run a shooter at 250 or 300 FPS - that's basically the gist of this. What matters is whether you can keep your minimums in check. Only the X3Ds offer a technology that does that regardless of the frequency the CPU runs at.

And this, in a nutshell, is why most CPU reviews don't manage to properly emphasize or cover the impact of CPU performance on gaming. Measuring lows is the way, and in fact it should be the defining factor in your CPU choice, NOT max/avg FPS. The things that damage the experience most are the dips, not the peaks.
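For what it's worth, the "lows" being argued about here are just percentiles of the frametime distribution. A minimal sketch (tools differ in the exact definition; this one reports the FPS of the single frame sitting at the percentile boundary):

```python
def percentile_low(frametimes_ms, pct):
    """FPS at the worst `pct` percent of frames, e.g. pct=1.0 gives
    the '1% low': only 1% of frames rendered slower than this."""
    worst_first = sorted(frametimes_ms, reverse=True)
    idx = max(0, int(len(worst_first) * pct / 100.0) - 1)
    return 1000.0 / worst_first[idx]

# 990 frames at ~6.9 ms (~145 fps) with ten 25 ms stutter frames mixed in
frametimes = [6.9] * 990 + [25.0] * 10
avg_fps = 1000.0 * len(frametimes) / sum(frametimes)
print(round(avg_fps, 1))                          # ~141: looks fine
print(round(percentile_low(frametimes, 1.0), 1))  # 40.0: the stutter shows
print(round(percentile_low(frametimes, 0.1), 1))  # 40.0
```

An average of ~141 FPS completely hides ten 25 ms hitches; the 1%/0.1% lows are exactly the metric that exposes them.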

Because this is nothing more than a dick measuring contest between AMD and Intel. Each wants to be able to market that they make the fastest, undisputed, most powerful gaming CPU powerhouse etc. etc., no matter the cost, efficiency or anything else, when the reality is any midrange chip from the previous gen is more than enough to drive high-refresh-rate gaming. In a way this is good for consumers, but I'd rather they focus on things that actually matter for gaming, like, you know... the damn GPU. That market is not exactly good rn. AMD should focus on better driver support, better pricing, fixing their current lineup, and more powerful GPUs so they claw back market share from Nvidia. Instead they focus on single-digit percentage improvements that can mostly be seen in 720p gaming... because, you know, that makes so much more sense lmao.
Minor difference, the X3D is real innovation, Intel's next KS is not.

And as pointed out above, there are tons of in-game situations where you're playing not a canned benchmark run but a real game, where the real CPU load is many times higher than you see in reviews. Stuff like Stellaris or Cities: Skylines wants every % of CPU performance it can get.
 
Why not? This is the core of the issue: maximum or average FPS says absolutely nothing about frametimes, especially at the 0.1% and 1% lows.

It's very common to see a combination of lower maximums and better control over outliers. It relates to bursty frequency behaviour as well: if the CPU can boost high, there's a larger gap between boost and base clock, so your peak FPS might be higher, but your worst numbers are also worse. Why do you think Intel is progressively lowering base clocks gen over gen to attain higher boost? It's not to help minimums, but to shine in maximums. In GPUs, pre-rendered frames and frame smoothing create some of the same effects: maximum FPS is sacrificed to use the available time to start on the next frame earlier.

X3D isn't about peak frequency, it's about peak consistency, and it shows everywhere. These CPUs are most useful for gaming because they elevate performance in precisely those situations where you dip the hardest, because you're missing the required information at the correct time. That's where the cache shows its value best, and that's where it differs from every other CPU.

Intel can keep up in a large number of games because they're well managed in CPU load; this applies to most triple-A content and most console content, but it absolutely does NOT apply to simulations that expand as you go into the end-game (almost every generated frame is one where lots of info must be collected to present the correct next step in the simulation, and the amount increases the further your army/village/galactic empire expands).

Who cares if you can run a shooter at 250 or 300 FPS - that's basically the gist of this. What matters is whether you can keep your minimums in check. Only the X3Ds offer a technology that does that regardless of the frequency the CPU runs at.

And this, in a nutshell, is why most CPU reviews don't manage to properly emphasize or cover the impact of CPU performance on gaming. Measuring lows is the way, and in fact it should be the defining factor in your CPU choice, NOT max/avg FPS. The things that damage the experience most are the dips, not the peaks.


Minor difference, the X3D is real innovation, Intel's next KS is not.

And as pointed out above, there are tons of in-game situations where you're playing not a canned benchmark run but a real game, where the real CPU load is many times higher than you see in reviews. Stuff like Stellaris or Cities: Skylines wants every % of CPU performance it can get.
There is no way in hell the 12700K gets better minimums than the 13900K at anything. Ever. Gamers Nexus always has pretty weird numbers when it comes to lows and minimums; you can check his older reviews as well and you'll see a trend.

Also, Intel CPUs never drop from their boost clocks during gaming. The 12900K runs at 4.9 GHz 100% of the time, the 13900K runs at 5.5 GHz 100% of the time, etc.
 
There is no way in hell the 12700K gets better minimums than the 13900K at anything. Ever. Gamers Nexus always has pretty weird numbers when it comes to lows and minimums; you can check his older reviews as well and you'll see a trend.
Would you mind checking the base clock for both processors' P and E-cores?
 
Would you mind checking the base clock for both processors' P and E-cores?
The base clocks refer to the 125 W long-duration power limit, and you'll only drop to those clocks during something like y-cruncher or Prime95. Even with the 125 W power limit in place (which the Gamers Nexus review didn't have), they'll still run at max clocks during gaming because games don't exceed those limits. Only the 13900K does, in Cyberpunk, where it can sometimes hit 140 to 150 W, but that's about it; in most other games it sits below 100 W.
 
All I'm gonna say is, I find it hilarious that you're on the TPU forums and instead of quoting their benchmarks, you're using someone else's just because they happen to agree with your opinion. Hilarious stuff.

To the point now: I have a 12900K and a 13900K with 7600 C34 RAM. If anyone wants to test their 3D and see how much better it is compared to Intel's offerings, just come forward.
 
All I'm gonna say is, I find it hilarious that you're on the TPU forums and instead of quoting their benchmarks,
Are you claiming @Crylune posted a benchmark?

you're using someone else's just because they happen to agree with your opinion. Hilarious stuff.
I hope you find that benchmark @Crylune posted and aren't making another false claim!

I didn't have any opinion on this, I tried to fact check both claims that were made and your claim was the easiest to prove or disprove while his claim is harder, given the fact that 1% and 0.1% low data @ 4K is generally compiled for GPU reviews, not CPU reviews, and the KS is not reviewed as much.

Given the difficulty in finding 0.1/1% lows @ 4K for a 12900KS vs 5800X3D I won't be spending more time on this. It was interesting at first to see if one would be head and shoulders better than the other, but it appears the X3D only has a slight lead and I fully believe the X3D will be so similarly close in performance to the KS, based on the K, that it's not worth looking into further to find the answer.
 
Are you claiming @Crylune posted a benchmark?


I hope you find that benchmark @Crylune posted and aren't making another false claim!

I didn't have any opinion on this, I tried to fact check both claims that were made and your claim was the easiest to prove or disprove while his claim is harder, given the fact that 1% and 0.1% low data @ 4K is generally compiled for GPU reviews, not CPU reviews, and the KS is not reviewed as much.

Given the difficulty in finding 0.1/1% lows @ 4K for a 12900KS vs 5800X3D I won't be spending more time on this. It was interesting at first to see if one would be head and shoulders better than the other, but it appears the X3D only has a slight lead and I fully believe the X3D will be so similarly close in performance to the KS, based on the K, that it's not worth looking into further to find the answer.
No, I'm saying TPU has a benchmark. Sure, they don't record 0.1 and 1% lows, but looking at the 13900K review, the X3D is so far behind the 12900K and the KS in averages that I doubt the lows are better.
 
No, I'm saying TPU has a benchmark. Sure, they don't record 0.1 and 1% lows, but looking at the 13900K review, the X3D is so far behind the 12900K and the KS in averages that I doubt the lows are better.
It was only 1.3% behind the 13900K @ 4K in averages on a TPU benchmark; I even talked about it in post #104. Based on that, it should be hard to believe it was far behind a 12900K @ 4K on a TPU benchmark.

Here's the 12900K/S & 5800X3D @ 4K on a TPU benchmark; they are VERY close in averages. It doesn't surprise me at all that the X3D, with its extra cache, could beat the KS in 0.1/1% lows, as it only trails the KS by 0.9% in average FPS:
[attached chart: TPU relative gaming performance @ 4K, 12900K/KS vs 5800X3D]
 
Why are you guys comparing the 5800X3D to a 12900K? Wouldn't the correct comparison be the 12700K vs the 5800X3D? For gaming I wouldn't touch anything above this range.

You guys got me interested in looking into the 1% lows between the two discussed models... a 20-game average:

[attached chart: 20-game average 1% lows, 5800X3D vs 12900K]

[source: eTeknix]

In short, these 1% low averages put both the 5800X3D and 12900K on an equal war path... practically the same. It's a given that both trade blows depending on the titles played and resolutions applied, leaving each inquirer to come to their own conclusion based on their setup and targeted games.

Again, if I were going 12th Gen Intel (for gaming) I wouldn't touch the 12900K... just silly beans, unless non-gaming core-hungry workloads suggest otherwise. The 12700/12700K is what makes sense, or the 5800X/5800X3D. Oddly enough, I've seen the Zen 3 X3D even trading blows with 13th Gen in a small number of titles (probably compared to a 13600K at a given resolution - might need to revisit the stats), but overall 13th Gen easily came out ahead.

Anyone on either 12th Gen or AM4 5000-series should be over the moon with this sort of cutting-edge processing power, and yet the WWW is full of people beating the third leg against the non-conformist militant wall of futility.
 
There is no way in hell the 12700K gets better minimums than the 13900K at anything. Ever. Gamers Nexus always has pretty weird numbers when it comes to lows and minimums; you can check his older reviews as well and you'll see a trend.

Also, Intel CPUs never drop from their boost clocks during gaming. The 12900K runs at 4.9 GHz 100% of the time, the 13900K runs at 5.5 GHz 100% of the time, etc.
Very cool story, but it relates in NO WAY to what you're replying to.

I specifically talked about the relative impact of frequency. Frequency is what the CPU core runs at; it's not indicative of how fast the CPU can fetch data. You can believe whatever you want, but there are countless examples where the X3D shines and no Intel CPU can reach it, and they're specifically the highest CPU-load cases in gaming. In other words: where it matters most.

And even in your own weird take on how the Intel CPUs work, you can't deny there are already games (like Cyberpunk... as if that's not writing on the wall but an outlier) that pull these CPUs to base clock because they exceed the turbo power limits. In fact, your ideas don't match reality in any way, shape or form, except perhaps from your own N=1 perspective, but then all I can say is you ain't gaming a lot, or you're playing the games where the impact just isn't there. I've also already pointed out that specific types of games excel on X3Ds. Comparing the bog-standard bench suite, even if it's big, isn't really doing that justice. Not a single reviewer plays a Stellaris endgame or a TW: Warhammer 3 campaign at turn 200.
 
Very cool story, but it relates in NO WAY to what you're replying to.

I specifically talked about the relative impact of frequency. Frequency is what the CPU core runs at; it's not indicative of how fast the CPU can fetch data. You can believe whatever you want, but there are countless examples where the X3D shines and no Intel CPU can reach it, and they're specifically the highest CPU-load cases in gaming. In other words: where it matters most.

And even in your own weird take on how the Intel CPUs work, you can't deny there are already games (like Cyberpunk... as if that's not writing on the wall but an outlier) that pull these CPUs to base clock because they exceed the turbo power limits. In fact, your ideas don't match reality in any way, shape or form, except perhaps from your own N=1 perspective, but then all I can say is you ain't gaming a lot, or you're playing the games where the impact just isn't there. I've also already pointed out that specific types of games excel on X3Ds. Comparing the bog-standard bench suite, even if it's big, isn't really doing that justice. Not a single reviewer plays a Stellaris endgame or a TW: Warhammer 3 campaign at turn 200.
But Cyberpunk does not pull the 13900K to base clocks. Every review runs them power-unlimited, so Gamers Nexus' numbers were with the CPU running at 5.5 GHz all-core, all day long; that's not the explanation for his 0.1% numbers. But regardless, the same applies to every CPU: the 7950X draws 140 watts in that game, and if you limit it to the same 125 W it's going to throttle as well.

I agree with you that there are games where the 3D is king. But the same applies to the 12900K (I'm not even mentioning the 13900K, which is much faster). Spider-Man, Spider-Man: Miles Morales, Cyberpunk etc., the 12900K just poops on the 3D by a big juicy margin. Especially if you run in-game and not the built-in bench, the differences are staggering. I'm talking about close to 50% differences.

I'm absolutely ready to back up my statements with videos. I have the 12900K and the 13900K running with 7600 C34; if anyone has the 3D and wants to test the above games, let's do it.
 
But Cyberpunk does not pull the 13900K to base clocks. Every review runs them power-unlimited, so Gamers Nexus' numbers were with the CPU running at 5.5 GHz all-core, all day long; that's not the explanation for his 0.1% numbers. But regardless, the same applies to every CPU: the 7950X draws 140 watts in that game, and if you limit it to the same 125 W it's going to throttle as well.

I agree with you that there are games where the 3D is king. But the same applies to the 12900K (I'm not even mentioning the 13900K, which is much faster). Spider-Man, Spider-Man: Miles Morales, Cyberpunk etc., the 12900K just poops on the 3D by a big juicy margin. Especially if you run in-game and not the built-in bench, the differences are staggering. I'm talking about close to 50% differences.

I'm absolutely ready to back up my statements with videos. I have the 12900K and the 13900K running with 7600 C34; if anyone has the 3D and wants to test the above games, let's do it.
Spider-Man & Cyberpunk at retarded settings... Who cares about FPS in the shittiest-optimized first/third-person games of the moment? Some examples there... I'm sure TW3 also gets a fantastic number somewhere in its newest RT-on rendition, which is generally considered grossly inefficient on pretty much everything, with not much to show for it. This isn't new; every gen has a few of those poster children. I fondly remember how Nvidia pushed HairWorks in TW3 vanilla.

This is just parroting the cherry-picked marketing examples that were given the exact treatment to create a buy incentive for the high end. I consider them about as relevant as Minesweeper performance, honestly.
 
Spider-Man & Cyberpunk at retarded settings... Who cares about FPS in the shittiest-optimized first/third-person games of the moment? Some examples there... I'm sure TW3 also gets a fantastic number somewhere in its newest RT-on rendition, which is generally considered grossly inefficient on pretty much everything, with not much to show for it. This isn't new; every gen has a few of those poster children. I fondly remember how Nvidia pushed HairWorks in TW3 vanilla.

This is just parroting the cherry-picked marketing examples that were given the exact treatment to create a buy incentive for the high end. I consider them about as relevant as Minesweeper performance, honestly.
I disagree with "shittiest-optimized", but sure, let's say they are. So? Why does that make them run way faster on Intel? There must be something Intel CPUs do better to make those games run that much faster, right?
 
I disagree with "shittiest-optimized", but sure, let's say they are. So? Why does that make them run way faster on Intel? There must be something Intel CPUs do better to make those games run that much faster, right?
Sure, and I think that something isn't quite so relevant in the games where your FPS really tanks to the level of unplayability because of CPU load. That's where the X3D starts to shine - of course not everywhere, but then nothing does.

From what I've gathered, a big part of the additional CPU load in those games is in fact caused by graphics options, DLSS 3, frame generation, etc. I suppose that's where Intel can put its core count to work. It's an interesting development nonetheless, both the big.LITTLE approach and the cache-heavy CPU, in how they accelerate gaming. There is definitely untapped potential in CPUs to put to use.

I'm absolutely ready to back up my statements with videos. I have the 12900K and the 13900K running with 7600 C34; if anyone has the 3D and wants to test the above games, let's do it.
I would definitely be interested in this!
 
Sure, and I think that something isn't quite so relevant in the games where your FPS really tanks to the level of unplayability because of CPU load. That's where the X3D starts to shine - of course not everywhere, but then nothing does.

From what I've gathered, a big part of the additional CPU load in those games is in fact caused by graphics options, DLSS 3, frame generation, etc. I suppose that's where Intel can put its core count to work. It's an interesting development nonetheless, both the big.LITTLE approach and the cache-heavy CPU, in how they accelerate gaming. There is definitely untapped potential in CPUs to put to use.


I would definitely be interested in this!
I have some videos on my channel of the 12900K running those games, if you're interested.
 
Especially if you run in-game and not the built-in bench, the differences are staggering. I'm talking about close to 50% differences.
A 50% diff comparing the 5800X3D to a 12900K? I need to see this. How were the 1% and 0.1% lows? Which resolution?
 
A 50% diff comparing the 5800X3D to a 12900K? I need to see this. How were the 1% and 0.1% lows? Which resolution?
The resolution doesn't matter if the GPU isn't the bottleneck.

This is a 12900K with just DDR5-6000 RAM, at max settings + RT:
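The bottleneck point above is easiest to see with a toy pipeline model: the displayed FPS is capped by whichever stage is slower, and only the GPU stage depends on resolution. A sketch with made-up throughput numbers:

```python
def observed_fps(cpu_fps, gpu_fps):
    # whichever stage is slower sets the frame rate
    return min(cpu_fps, gpu_fps)

CPU_FPS = 160                                       # how fast the CPU can prepare frames (made up)
GPU_FPS = {"1080p": 300, "1440p": 190, "4K": 95}    # GPU limit per resolution (made up)

for res, gpu in GPU_FPS.items():
    bound = "CPU" if CPU_FPS < gpu else "GPU"
    print(f"{res}: {observed_fps(CPU_FPS, gpu)} fps ({bound}-bound)")
```

In this toy model a faster CPU changes nothing at 4K, which is the point being made: CPU margins only show up once the GPU limit rises above the CPU limit.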

 