
AMD Ryzen 9 9950X

Meanwhile, the 14900K, going from the 13900K, gained anywhere from 3.1% up to 5.6% in IPC with a slight power decrease.

No. Power stayed the same or went higher going from the 13900K to the 14900K. There is also no IPC increase; it's the same core pumping more juice and getting higher clocks via process refinement.

Meanwhile, going from the 7950X to the 9950X sees improvements at the same or lower frequencies while using less power.

Very different.
 
Meanwhile, the 14900K, going from the 13900K, gained anywhere from 3.1% up to 5.6% in IPC with a slight power decrease.
Uhh, where do we see a 3 to 5% IPC uplift going from 13900K to 14900K?
The only improvements come from increased core count or increased frequency with increased power draw. A 14900K performs almost exactly like a 13900KS.
 
PRO: AMD's fastest Zen 5 gaming processor / CON: Slower than 7800X3D and 7950X3D in gaming



Both statements are correct, still it's very disappointing.

Why not market them as Zen 4+ with a performance/efficiency bump and slot them in at the price brackets of their predecessors instead? :confused:
Wasn't the IOD already holding Zen 4 7000 series back in certain scenarios? Why knowingly crash into this PR disaster and most likely blunder the potential of Zen 6 already? It does not compute.
 
Outside of 3D stacking, I don't really know how you can solve these without massive latency issues.

zen5 said ouch o_O
[image: gnr_c2c.png]


Core-to-core latency is bad as well; think of how the internet could be choked by just a handful of tubes/cables :laugh:

It's not just about Intel vs AMD. Whether MS has botched Win11 with respect to Zen 5, or AMD didn't work with them long enough to optimize performance on Windows, the fact remains that it's severely underperforming on the most popular desktop platform. At the end of the day, plebs won't care who screwed them; AMD should be carrying the burden of its products!


They're definitely not 100% server focused, otherwise AMD wouldn't bother selling them at half or a quarter of their server margins!

It would be interesting to see if better memory speed/timings would have any (major) impact here. Or maybe a magical AGESA fix just for Windows o_O
The high latency between CCDs is probably a consequence of some power saving measure. There's no reason for it to have regressed so badly compared to Zen 4.
 
The issue with Intel, for me, back then wasn't anything related to stability or whatever else. It was the absurd clocks; the chips were already pushed way beyond sane limits at "13th" gen. I also said AMD was stupid to follow Intel down that rabbit hole, but at least as far as winning benchmarks was concerned it was "somewhat" understandable. Though personally I wouldn't buy any 7900X/7950X and let it run untethered at those clocks/power limits! Intel has created a massive headache for themselves as well, because for the first time in a long, long time we will probably see a major(?) clock regression going from 14th gen to whatever they release now.

The thing is, the two best CPUs from both camps, in my book, require user intervention to be at their best. While I still like my 7950X3D by a hair over a 13900K, anyone who has used both knows that, once tweaked, they are pretty hard to tell apart, even at much more sane power limits on the 13th-gen part. They kinda just trade blows depending on what the user finds more important. Both companies are honestly releasing their parts at stupid default settings, and with the 7950X3D you can't even trust Windows or the chipset to schedule properly; it's best to do it manually yourself with Process Lasso.
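For anyone curious what Process Lasso-style pinning actually does under the hood, here is a minimal sketch in Python of restricting a process to one CCD's cores via CPU affinity (Linux `sched_setaffinity`; the helper name and the assumption that CCD0 occupies the first half of the logical CPU list are mine, not anything AMD or Process Lasso ships):

```python
import os

def pin_to_first_ccd(pid=0):
    """Restrict a process to the first half of its allowed logical CPUs.

    On a dual-CCD Ryzen (e.g. 7950X3D), CCD0 commonly maps to the first
    half of the logical CPU list, so this roughly emulates pinning a
    game to the V-Cache die. pid=0 means the calling process.
    """
    cpus = sorted(os.sched_getaffinity(pid))
    first_half = set(cpus[: max(1, len(cpus) // 2)])
    os.sched_setaffinity(pid, first_half)
    return os.sched_getaffinity(pid)
```

The same one-off effect can be had with `taskset -c 0-7 ./game` on Linux; tools like Process Lasso just persist such rules per executable on Windows.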

The 9950X3D could still be a win for me if AMD fixes its scheduling issues, and the Ultra i9 just needs to be faster and consume less power, at least out of the box, than the 13900K.

I mean, that's really our only hope for these generations not being a yawnfest, and for some it likely still will be, depending on the uplift of Arrow Lake.

I don't really care about 14th gen; I don't look at it as a real generation. The last truly new CPUs from each maker came out two years ago, and that's what I compare Zen 5 against. Still, the gap between 12th and 14th gen isn't much larger than Zen 4 to Zen 5, so you could technically just look at the gains from 12th vs 14th and it would be in the same ballpark timeline-wise, and probably what Intel should have done, with Raptor Lake needing more time to cook.
 
I guess the SMT issue is probably prevalent here too, and I bet turning it off will make it much more competitive on the gaming scene. They really need to work on an update to address that issue, as I believe it is going to make a huge difference, since we saw that in the review on here.

Overall good processor, wish it did a bit more on the performance front.
 
The high latency between CCDs is probably a consequence of some power saving measure. There's no reason for it to have regressed so badly compared to Zen 4.

Much more likely to be an AGESA-level bug regarding NUMA configuration, if it isn't an architectural regression, IMHO. It's one of the points of contention to keep an eye on; from my understanding, these latencies are approaching the cross-socket access latencies seen in recent Epyc processors.

Anyway, early adoption of AMD products is always a bad idea; they never have working firmware, drivers, or software until at least 6 months to a year into a product's lifecycle, be it Ryzen or Radeon.
 
Much more likely to be an AGESA-level bug regarding NUMA configuration, if it isn't an architectural regression, IMHO. It's one of the points of contention to keep an eye on; from my understanding, these latencies are approaching the cross-socket access latencies seen in recent Epyc processors.

Anyway, early adoption of AMD products is always a bad idea; they never have working firmware, drivers, or software until at least 6 months to a year into a product's lifecycle, be it Ryzen or Radeon.

To this day you can still have scheduling issues on the 7950X3D. W1z seems to have run into them and corrected it for this review, but it still requires user intervention, so I wouldn't hold my breath on AMD fixing it completely.
 
To this day you can still have scheduling issues on the 7950X3D. W1z seems to have run into them and corrected it for this review, but it still requires user intervention, so I wouldn't hold my breath on AMD fixing it completely.

They'll never really go away; the 7900X3D and 7950X3D's hybrid architecture is inherently flawed due to resource imbalance, and this problem is particularly nasty on the 7900X3D. With these processors you either get a full X3D or a full standard Ryzen 5/7 experience in one package, but you don't get to make the best use of both. That's why a dual-X3D processor is so badly needed this generation. I really hope AMD delivers that.
 
They'll never really go away; the 7900X3D and 7950X3D's hybrid architecture is inherently flawed due to resource imbalance, and this problem is particularly nasty on the 7900X3D. With these processors you either get a full X3D or a full standard Ryzen 5/7 experience in one package, but you don't get to make the best use of both. That's why a dual-X3D processor is so badly needed this generation. I really hope AMD delivers that.

Yeah, I've fixed it in 99% of scenarios with a 7950X3D, at least until games like more than 8 cores, though that'll probably be two console generations away. I still see more games that use a single render thread than ones that spread usage over a full CCD.
 
They'll never really go away; the 7900X3D and 7950X3D's hybrid architecture is inherently flawed due to resource imbalance, and this problem is particularly nasty on the 7900X3D. With these processors you either get a full X3D or a full standard Ryzen 5/7 experience in one package, but you don't get to make the best use of both. That's why a dual-X3D processor is so badly needed this generation. I really hope AMD delivers that.

Unless there's no clock deficit, or less of one, for X3D parts on Zen 5, there's little to no point in releasing dual-CCD 3D V-Cache parts. Thread assignment aside, they trail in almost every productivity workload, while potentially providing the same gaming performance as a 9800X3D.
 
It's not subjective. It's called cherry picking. You chose to show only the benchmarks with the worst performance relative to the 7950X. How cunning of you, but I'm not sure why you took so much time to copy and paste the worst data into your comment. Here is the chart for the rest of us not looking for the worst performance but performance across all apps. Take what you like from it, but this is at least ALL the data versus the 7950X.

Your graph also shows how pathetic this new CPU is. Skip it, don't buy.
 
Our brothers at Tom's Hardware award only three stars to the Ryzen 9 9900 models, and quite rightly so. Regardless of how much architectural effort and development time has gone into them, these are unfinished products. Maybe on purpose, so that they have something to surprise us with in the next generation. But I'm tired of waiting for the moment when everything under the cap will finally be renewed and arranged in the best way, and that is not achieved with piecemeal work.
 
I guess the SMT issue is probably prevalent here too, and I bet turning it off will make it much more competitive on the gaming scene. They really need to work on an update to address that issue, as I believe it is going to make a huge difference, since we saw that in the review on here.
AMD recommends turning off the 2nd CCD since the 3000 series, if you actually want decent gaming performance on the 2CCD parts. They even call it Game Mode inside the Ryzen Master GUI.

Most people don't do that on workstation-type machines, even if they just want to casually play some games as well. However, since the arrival of X3D, buying a dual-CCD part just for the slightly higher clocks of the 1st CCD in gaming seems pretty wasteful.
 
AMD recommends turning off the 2nd CCD since the 3000 series, if you actually want decent gaming performance on the 2CCD parts. They even call it Game Mode inside the Ryzen Master GUI.

Most people don't do that on workstation-type machines, even if they just want to casually play some games as well. However, since the arrival of X3D, buying a dual-CCD part just for the slightly higher clocks of the 1st CCD in gaming seems pretty wasteful.
I mean, I know about that, but we saw the Ryzen 7 9700X perform a lot better with SMT off, so I was curious if the same would happen here. I'm just curious whether this slight change will make a difference, so hopefully a future update may fix that performance hiccup.
 
So, extremely pathetic. Maybe it needs faster than DDR5-6000?
This review is already giving it the benefit of the doubt by running overclocked memory. At stock it would be DDR5-5600 at JEDEC timings, making any potential bottlenecks a bit worse.

If I could bet 2 cents, I think the 3D version will do much better
Depends what you mean.
We know what kinds of workloads extra L3 cache will help, primarily gaming and a few select workloads, otherwise there will be little difference.

For AVX-512 workloads it's certainly memory-bandwidth starved, but it already can't use the entire DDR5-6000 bandwidth (which is about 96 GB/s).
This is almost certainly why it barely improved over the 7950X in y-cruncher; it just doesn't have the bandwidth to do more.
This is why I've been saying for years that abandoning the old "HEDT" segment was a big mistake. Many of those real workloads that benefit from >8 cores really need a lot of bandwidth too, which is why having a ~2500-pin CPU socket with 4 memory channels, ~250 W TDP, more PCIe lanes, etc. is so desperately needed. The much more expensive Threadripper and Xeon-W platforms have little availability, lacking clock speeds,* and absurd pricing. We would be much better served if the mainstream sockets (LGA1700 and AM5) cut off at ~100 W (which would make them cheaper), and all the high-performance models were made for a proper HEDT platform. With $600 motherboards we are already paying "HEDT prices" anyway, while getting "crippled" platforms.
(*But Threadripper and Xeon-W do retain one non-obvious advantage: more consistent performance.)
If AMD could at least quickly pull out a 9000-series Threadripper at decent clocks and get motherboard vendors to make $600 motherboards for it, then at the very least we could see those cores with a little more breathing room.

As for the alternative of running overclocked memory at ~8000 MT/s or similar, that's really just a benchmarking thing. It would not be stable over time and would result in so many application crashes and file corruptions that it would be useless for anyone who actually needs a high-performance computer.
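The ~96 GB/s figure quoted above is straightforward to derive: transfer rate × bus width × channel count. A quick sanity check in Python, comparing AM5's dual-channel DDR5-6000 against the hypothetical quad-channel HEDT setup the post argues for:

```python
def peak_bandwidth_gbs(mt_per_s, channels, bus_bytes=8):
    """Theoretical peak DRAM bandwidth in GB/s.

    mt_per_s:  transfer rate in MT/s (e.g. 6000 for DDR5-6000)
    channels:  number of memory channels (2 on AM5, 4+ on HEDT)
    bus_bytes: bytes per transfer per channel (64-bit bus = 8 bytes)
    """
    return mt_per_s * 1e6 * bus_bytes * channels / 1e9

dual = peak_bandwidth_gbs(6000, channels=2)  # AM5 dual channel
quad = peak_bandwidth_gbs(6000, channels=4)  # hypothetical 4-channel HEDT
print(dual, quad)  # 96.0 192.0
```

Real-world achievable bandwidth is lower than these theoretical peaks, which only strengthens the point about 16 AVX-512 cores being starved on two channels.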
 
Idk if anyone mentioned this, but Wendell from Level1Techs discovered that running a game as Administrator in Windows was boosting performance, and that there is something strange happening in Windows; he even said some games were running better on Linux than natively on Windows. Something is not right with Zen 5 on Windows.
If you Google Cyberpunk optimisations you'll find a lot of these. You disable core isolation, then Windows Defender, etc. Running as admin isn't easy because they've added the second play button on GOG, which is awful. Wendell looks like the average American Intel user with tons of expensive hardware but no idea how to use AMD. "Why must it be so hard," to paraphrase the video.
The best safe way to tweak Windows is a backup, then install privacy.sexy, and that's it. And no, FPS isn't going 2x.
Edit: he mentioned Process Lasso. Hardware Unboxed has also mentioned Process Lasso multiple times in videos, but no one shows how to use it and no one benchmarks it.
I think I'll pay $25 and give it a try on my 5700X3D.
 
So, extremely pathetic. Maybe it needs faster than DDR5-6000?
But much lower power consumption. Again though, same as with the 9700X and 9600X, they should have offered an app with a single-click power profile switcher at launch.
For the 9600X and 9700X, offer 45 W (Eco Mode), 65 W (Standard Mode), 95 W (Performance Mode) and 120 W (Ultra Performance Mode) profiles.
For the 9900X: 45 W (Eco Mode), 95 W (Efficient Mode), 120 W (Standard Mode), 170 W (Ultra Performance Mode).
For the 9950X: 45 W (Eco Mode), 95 W (Efficient Mode), 170 W (Standard Mode), 220 W (Ultra Performance Mode).

And they should also bump single thread boost to 6 GHz on all models.
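The one-click switcher idea above could be as simple as a preset table; a minimal Python sketch (these wattage tiers are the post's own proposal, not official AMD presets, and the function name is mine):

```python
# Proposed one-click power profiles from the post above, in watts.
# These tiers are the poster's suggestion, not anything AMD ships.
POWER_PROFILES = {
    "9600X": {"Eco": 45, "Standard": 65, "Performance": 95, "Ultra Performance": 120},
    "9700X": {"Eco": 45, "Standard": 65, "Performance": 95, "Ultra Performance": 120},
    "9900X": {"Eco": 45, "Efficient": 95, "Standard": 120, "Ultra Performance": 170},
    "9950X": {"Eco": 45, "Efficient": 95, "Standard": 170, "Ultra Performance": 220},
}

def profile_watts(model, profile):
    """Look up the proposed package power limit for a model/profile pair."""
    return POWER_PROFILES[model][profile]
```

In practice such a switcher would apply the chosen wattage as a PPT limit, much like the existing Eco Mode toggle in Ryzen Master.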
 
I mean, I know about that, but we saw the Ryzen 7 9700X perform a lot better with SMT off, so I was curious if the same would happen here. I'm just curious whether this slight change will make a difference, so hopefully a future update may fix that performance hiccup.
Ok, got ya.
At the moment, it looks like the dual CCDs have more trouble with those pesky PPM provisioning drivers and potential core parking issues. Hopefully, we see some fixed chipset drivers and a properly working AGESA soon. Right now, it seems that benchmarking is mostly a roll of the dice, in case Windows decides to push your game randomly onto the 2nd CCD.
 
Unless there's no clock deficit, or less of one, for X3D parts on Zen 5, there's little to no point in releasing dual-CCD 3D V-Cache parts. Thread assignment aside, they trail in almost every productivity workload, while potentially providing the same gaming performance as a 9800X3D.

The entire point is a consistent topology, which means a lot more than just winning benchmarks (the 9950X proves it can't win them anyway). Without having to worry about whether resources are being allocated correctly, you no longer need "drivers" or manual affinity, and the chip will no longer be a Hail Mary. There's a side effect involved, too: this would also dramatically improve the tasks that are already sped up by X3D, and you'd be taking home more performance than AMD is willing to let you have for the price a 9950X3D would sell for. Hence their initial excuse that games didn't benefit, and why they canned that idea way back in Zen 3 (this is the true reason, not their excuse from back in the day). Even though plenty of people, myself included, would literally part with $800-$1K for one.

The 7900X3D and 7950X3D's problems were never their resources, but rather their topology, which causes the processor's vast resources to go underutilized unless software is specifically written with them in mind (and in general, it never will be). The closest analogy I can think of is precious ore deep within a mine, out of easy reach for all but the most skilled of miners. Zen 5 still lacks a hardware thread scheduler like Intel's Thread Director, which further compounds the problem with a standard+3D approach. It doesn't work well in practice; proof of that is the 7800X3D just smokes the 7900X3D in practically everything that makes use of ~8 cores and the cache, such as games. Even in productivity applications, if you balance the resources available in either of these chips against the relative performance, the 7800X3D comes out far more resource-efficient (it will do more work per core, thread, and MB of cache) than the 7900X3D ever will. And that's why the 79X3D sold poorly.

Sincerely, I would take a dual X3D part even if it had a full GHz of a clock hit vs. the standard model. It's just better.

If you Google Cyberpunk optimisations you'll find a lot of these. You disable core isolation, then Windows Defender, etc. Running as admin isn't easy because they've added the second play button on GOG, which is awful. Wendell looks like the average American Intel user with tons of expensive hardware but no idea how to use AMD. "Why must it be so hard," to paraphrase the video.
The best safe way to tweak Windows is a backup, then install privacy.sexy, and that's it. And no, FPS isn't going 2x.
Edit: he mentioned Process Lasso. Hardware Unboxed has also mentioned Process Lasso multiple times in videos, but no one shows how to use it and no one benchmarks it.
I think I'll pay $25 and give it a try on my 5700X3D.

No need; just play around with the free version if you must. The 5700X3D's topology is contiguous, and you have only one CCD/CCX with full access to your processor's resources, so it will not improve your performance under any circumstances.
 
If there was a 9950X3D with 2x 3D dies and it performed marginally better than the 7800X3D in gaming while having 7950X or better multithread performance, then that's plenty for me. That would be a processor that I'd sit on for a good while. I'll take the multithread hit if it means that I don't have to screw around with core priority drama. But that probably won't happen, so I'll be looking for what AMD and Intel do in 2025 to see if they have anything worth moving from a 7800X3D.
 
Anyway, early adoption of AMD products is always a bad idea; they never have working firmware, drivers, or software until at least 6 months to a year into a product's lifecycle, be it Ryzen or Radeon.
Early adoption generally is risky, AMD or not.

The Pentium Pro was generally 'meh' before the oddities were fixed for the Pentium II, PCI 'Plug&Play' in the early days was affectionately known as 'Plug&Pray' mainly because BIOS resource management was abysmal even on Intel chipset boards, the original days of P4 hyper-threading were not perfectly smooth, and let's not forget Intel's ARC driver development path - something which people seem to be very forgiving about in a way I'd expect of a company that was brand new to working on DirectX/OpenGL/Vulkan API products, but I'm not sure why as Intel have been actively making GPU cores since they integrated the i752 into their chipsets - it's not like they had no skin in the game already.

That said, this meme never gets old, it seems.
[image: 1000047572.jpg]
 
If there was a 9950X3D with 2x 3D dies and it performed marginally better than the 7800X3D in gaming while having 7950X or better multithread performance, then that's plenty for me. That would be a processor that I'd sit on for a good while. I'll take the multithread hit if it means that I don't have to screw around with core priority drama. But that probably won't happen, so I'll be looking for what AMD and Intel do in 2025 to see if they have anything worth moving from a 7800X3D.
If they went with an off-chip L4 cache shared between the two dies, it would hide the latency; otherwise, even with 3D cache on both CCDs, it's still going to have that slowness between them, though at least RAM access would be better buffered.
 
Does anyone know about this?

The new 9950X and 9900X need to be treated as X3D parts prior to review...
Steve from GN said that AMD communicated this 5 days after sending the CPUs to reviewers.



This claims to fix the scheduler (at least for 3D V-Cache parts) without Lasso.
Windows should be running the service "amd3dvcacheSvc", or else scheduling will be all over the place.
And the thing is that a simple chipset driver installation is not sufficient if Windows has seen a different CPU before the 3D V-Cache part.
It needs a good driver/registry cleaning.
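A quick way to verify the "amd3dvcacheSvc" point above is to query the service. A minimal sketch (Windows-only; it returns None on other systems, and the helper name is mine, not part of any AMD tooling):

```python
import platform
import subprocess

def vcache_service_running():
    """Check whether the AMD 3D V-Cache optimizer service is running.

    Uses the built-in Windows `sc query` command. Returns True/False on
    Windows, or None on other OSes where the check does not apply.
    """
    if platform.system() != "Windows":
        return None
    result = subprocess.run(
        ["sc", "query", "amd3dvcacheSvc"],
        capture_output=True, text=True,
    )
    # `sc query` prints a STATE line containing RUNNING when the
    # service is active.
    return "RUNNING" in result.stdout
```

The same check can be done interactively with `sc query amd3dvcacheSvc` in an elevated command prompt, or via the Services snap-in (services.msc).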


 