
AMD Ryzen 9 9950X3D and 9900X3D to Feature 3D V-cache on Both CCD Chiplets

Huh... Won't at least the 9800X3D take the crown anyway?

Yes

Gaming on Ryzen 9 is not nearly as bad as some make it out to be lol..

Doesn't make sense from a cost perspective, even with the prices as of late. It's primarily the 7900X3D, and to a lesser extent the other 6+6 models of current and prior generations.
 
YOU TAKE THAT BACK! I want my 7800X3D to wear the crown one more generation. :D
Your 7800X3D will just wear a different crown. It'll be in the charts putting up an impressive fight as the best of the previous generation(s) against Zen5 and Zen6, doing well enough in 2025 and 2026 games that you feel unburdened by the itch to spend another $750 on a new platform. That's the crown of "great purchasing decision" or "money well spent".

I bought (a rarity, with my own money) a 5800X3D and a 6800XT to replace the 5950X and 3090 I'd obtained from work for free. For someone who can basically get any hardware they want, no questions asked, the fact that I technically downgraded to the 5800X3D and have voluntarily skipped free upgrades to both AM5 generations says a lot about just how completely OP the single-CCD X3D chips really are. Part of it is laziness; when I deal with hardware all day, the last thing I want to do in my free time is deal with more hardware. The 5800X3D has been with me for over two years now. I don't think my PC has ever stuck with the same hardware for this long before, not even as a cash-strapped student 25 years ago.

I'll likely replace it with the 9800X3D or a 9950X3D, depending on which one is the better fit for me, but if the 5800X3D is still trucking at 1440p120 then I'm honestly not sure I'm in a hurry to replace it.

Gaming on Ryzen 9 is not nearly as bad as some make it out to be lol.
Nabbed a 5900X from work for the living room PC and it replaced a 5800X (because I wanted to give someone that 5800X).

Yes, it's measurably slower than the 5800X was in games, but I can't be arsed to actually measure the difference - I'm still getting hundreds of frames a second. Either the Windows scheduler is doing its job, or the inter-CCD latency just isn't a big enough performance hit to notice in the real world.
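For anyone who does feel like measuring it, here's a minimal sketch of the idea (assuming psutil is installed, and assuming logical CPUs 0-11 map to CCD0 on a 12-core 5900X with SMT - verify the mapping on your own chip): restrict the game to one CCD, benchmark, then restore the default affinity and compare.

# Minimal sketch (my own illustration): pin a running game to one CCD with
# psutil, benchmark, then restore the default affinity and compare results.
# Assumes logical CPUs 0-11 are CCD0 on a 12-core 5900X with SMT enabled -
# verify that mapping on your own system before trusting any numbers.
import psutil

def pin_to_ccd0(pid: int) -> None:
    proc = psutil.Process(pid)
    print("Affinity before:", proc.cpu_affinity())
    proc.cpu_affinity(list(range(12)))  # CCD0: 6 cores + their SMT siblings
    print("Affinity after:", proc.cpu_affinity())

def restore_all(pid: int) -> None:
    psutil.Process(pid).cpu_affinity(list(range(psutil.cpu_count())))

# Usage: pin_to_ccd0(game_pid); run the benchmark; restore_all(game_pid).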
 
How on earth? The 9950X is a 170 W chip; if they don't reduce the clocks, that cache is going to fry itself.
 
Finally, now people will see that 3D cache on both dies is useless and will stop crying for it.
Shots fired! Pat, did you fix your rear-view mirror? :laugh:

How on earth? The 9950X is a 170 W chip; if they don't reduce the clocks, that cache is going to fry itself.
You do know they have 300 W (400 W?) EPYC chips with this extra cache? Of course they'll reduce clocks.
 
And if the clocks are also close to the non-V-cache models, the 9900X3D suddenly becomes appealing. I hope this means we won't need to run the core-parking BS. Still, I'm not a huge gamer, so I want more performance for productivity, and I'll be keen to see the 9900X3D vs. the i7 265KF across games and productivity. The only thing holding me back would be the poor X870 sidegrade. Z890 looks better.

There are plenty of tasks besides gaming that will take advantage of that victim cache. Also, now that both dies are getting stacked cache, it should get rid of the problems arising from asymmetric cores.
IIRC Phoronix did benchmarks on the EPYC V-Cache models and showed there are plenty of apps that benefited, but I doubt many desktop users would use them. The only one I can recall that might be of use to me was OpenFOAM.

I would get the X3D due to the lower TDP if it performs within a few % for productivity. I still think that for most of the apps I'd use, the 265 will beat out the 9900X/X3D, and I care less about gaming because I'm already happy enough with my 5800X and any of the Zen 5 or Arrow Lake CPUs will trash it in gaming.
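For anyone curious whether their own apps fall into that cache-bound bucket, here's a rough pointer-chase sketch (my own illustration, not one of the Phoronix tests; the bytes-per-slot figure is a loose CPython estimate): per-access cost climbs as the working set outgrows each cache level, and workloads living between ~32 MB (stock L3) and ~96 MB (X3D L3) are the ones extra V-Cache can help.

# Rough sketch (my own illustration): time a random pointer-chase over
# working sets of increasing size. Cost per step jumps each time the set
# outgrows a cache level.
import random
import time

BYTES_PER_SLOT = 36  # loose CPython estimate: 8-byte list pointer + int object

def chase(num_slots: int, steps: int = 2_000_000) -> float:
    order = list(range(num_slots))
    random.shuffle(order)
    nxt = [0] * num_slots
    for a, b in zip(order, order[1:] + order[:1]):
        nxt[a] = b  # single random cycle: every load depends on the previous
    i = 0
    t0 = time.perf_counter()
    for _ in range(steps):
        i = nxt[i]
    return (time.perf_counter() - t0) / steps * 1e9  # ns per step

for mb in (1, 4, 16, 64, 128):
    print(f"{mb:>4} MB working set: {chase(mb * 1024 * 1024 // BYTES_PER_SLOT):.1f} ns/step")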
 
With 3D V-Cache on both CCDs, disabling SMT on the 9950X3D will make a whole lot of sense - pretty much an all-round king-of-the-hill CPU.

The battle between Arrow Lake and the 9950X3D is gonna be interesting...
 
I guess you’ve never heard of thread scheduling and how hit or miss it is. This solves that problem. Even better if clock speeds can also be higher.
It's a software issue, and only on Windows. You are now spending more money on hardware to avoid a software issue, and you lose performance in basically every non-gaming workload.
 
Yes



Doesn't make sense from a cost perspective, even with the prices as of late. It's primarily the 7900X3D, and to a lesser extent the other 6+6 models of current and prior generations.
Yes, we know how you feel.
 
Yes, we know how you feel.

You can't help yourself, can you dude?

Neither did the 4090... but people still justified their purchase :)

The difference is that the 4090 is the uncontested king; its performance is in a league all its own. The 12-core Ryzen 9s have basically no claim to anything, failing to lead at gaming and productivity alike. You play games? Buy the Ryzen 7, preferably the X3D variant. You render? Then just buy the 16-core Ryzen 9 - it's gonna be better all around.

The 7900X3D is the worst of the worst - 6 standard cores plus 6 3D cores ensure the CPU never performs above what a Ryzen 5 7600 or a 7600X3D would, and you're going to pay more than a 7800X3D for it - a chip that will beat it at virtually everything by up to 30%. Good thing AMD learned and will be releasing dual X3D chips now... as they should have from the very start, so we didn't need to deal with the poor topology and software bugs that plague the 7900X3D and, to a lesser extent, the 7950X3D.
 
You can't help yourself, can you dude?



The difference is that the 4090 is the uncontested king; its performance is in a league all its own. The 12-core Ryzen 9s have basically no claim to anything, failing to lead at gaming and productivity alike. You play games? Buy the Ryzen 7, preferably the X3D variant. You render? Then just buy the 16-core Ryzen 9 - it's gonna be better all around.

The 7900X3D is the worst of the worst - 6 standard cores plus 6 3D cores ensure the CPU never performs above what a Ryzen 5 7600 or a 7600X3D would, and you're going to pay more than a 7800X3D for it - a chip that will beat it at virtually everything by up to 30%. Good thing AMD learned and will be releasing dual X3D chips now... as they should have from the very start, so we didn't need to deal with the poor topology and software bugs that plague the 7900X3D and, to a lesser extent, the 7950X3D.
Yeah, a 4090 is so much faster in raster than a 7900XTX that it justifies costing 3 times more. Too bad that in the real world the narrative makes people think ray tracing is more important than raster. Of course, when the 5090 launches, the same thing will happen that happened to the 3090 and the 2080 Ti before it: it becomes a nothing burger in the narrative, and Nvidia will use DLSS 4.0 to make sure it seems warranted.

Just look at what you are typing, though - none of what you say is from experience. Even AMD said that putting V-Cache on both CCDs made no difference. What you seem to forget is that the community demanded V-Cache on dual-CCD chips. Then they gave us a boon by giving us cores that run at 5.7 GHz. You seriously have it so wrong about the chip that it is actually humorous.

I am willing to bet that, unless they have seriously refined the process, the 9900X3D could indeed be slower than the 7900X3D in games that do not support it, like TWWH3. As I said before, we will see if the community is smarter than AMD's engineers. Going by your analysis, how does Space Marine 2 use all 12 cores? What about Cities: Skylines 2? Or whatever else I am playing that does not support V-Cache?

Then there are people using the argument of more cache being beneficial, when the 7900X3D has just as much cache as the 7950X3D and more than the 7800X3D. So yes, 8 cores with V-Cache will perform better than 6 cores with V-Cache, but adding another 6 cores at 5.7 GHz is not bad no matter how you try to colour it. Do you even realize how many transistors we are talking about? Unless you actually believe that Windows disables cores if it does not detect V-Cache?
 
Yeah, a 4090 is so much faster in raster than a 7900XTX that it justifies costing 3 times more. Too bad that in the real world the narrative makes people think ray tracing is more important than raster. Of course, when the 5090 launches, the same thing will happen that happened to the 3090 and the 2080 Ti before it: it becomes a nothing burger in the narrative, and Nvidia will use DLSS 4.0 to make sure it seems warranted.

Just look at what you are typing, though - none of what you say is from experience. Even AMD said that putting V-Cache on both CCDs made no difference. What you seem to forget is that the community demanded V-Cache on dual-CCD chips. Then they gave us a boon by giving us cores that run at 5.7 GHz. You seriously have it so wrong about the chip that it is actually humorous.

I am willing to bet that, unless they have seriously refined the process, the 9900X3D could indeed be slower than the 7900X3D in games that do not support it, like TWWH3. As I said before, we will see if the community is smarter than AMD's engineers. Going by your analysis, how does Space Marine 2 use all 12 cores? What about Cities: Skylines 2? Or whatever else I am playing that does not support V-Cache?

Then there are people using the argument of more cache being beneficial, when the 7900X3D has just as much cache as the 7950X3D and more than the 7800X3D. So yes, 8 cores with V-Cache will perform better than 6 cores with V-Cache, but adding another 6 cores at 5.7 GHz is not bad no matter how you try to colour it. Do you even realize how many transistors we are talking about? Unless you actually believe that Windows disables cores if it does not detect V-Cache?

Too bad that in the real world NVIDIA cards outsell AMD 10:1, the RTX 4090 alone sold more than the entire Radeon lineup, and the 7900 XTX is an utter failure, to the extent that AMD dared not make another high-end card for this generation - regardless of your text wall. We've discussed this ad nauseam; all you care about is that AMD or your purchasing choices come off in a positive light, so there's no arguing with you here.
 
Whether or not it will actually result in some gains, I like that AMD does new stuff and at least seems to listen to consumers.
 
Doesn't make sense from a cost perspective, even with the prices as of late. It's primarily the 7900X3D, and to a lesser extent the other 6+6 models of current and prior generations.

Sure, but neither does the 13900KS in your specs.

At the end of the day people are willing to pay big money for top of the line gear.

If doubling down on 3D cache provides a notable increase in performance or efficiency, I don't see why AMD shouldn't do it. It's a heck of a lot better than Intel's approach of pumping up the watts. Having multiple X3D caches was previously limited to the enterprise space, so I'm curious how much we'll be able to throw at this thing.
 
Too bad that in the real world NVIDIA cards outsell AMD 10:1, the RTX 4090 alone sold more than the entire Radeon lineup, and the 7900 XTX is an utter failure, to the extent that AMD dared not make another high-end card for this generation - regardless of your text wall. We've discussed this ad nauseam; all you care about is that AMD or your purchasing choices come off in a positive light, so there's no arguing with you here.
Yes, with the Chinese government openly buying 4090s. In fact, did that not lead to the 4090D? Are those numbers not included in GPU sales? The truth is not always what the narrative thinks. This reminds me of when people were saying 3000-series sales were so good without looking at the fact that Nvidia was selling them directly to mining customers during the height of GPU mining. It is like the argument that AMD cards don't sell. Do you really think the 4090 sells that well in retail channels?


But sometimes people say Canada does not matter


Please tell me what page you get to when you see the 4090. Did you want me to use Amazon instead?
 
I really wonder why there is now a graphics card discussion - most likely NVIDIA 4090 vs. Radeon cards - in a processor topic.

It makes it harder to follow the processor-related discussion.

It's a software issue, and only on Windows.

I disagree. When you look at Linux (just the kernel - not the userspace, not the desktop!), there is a lot of code deciding which cores threads are placed on. I'd point to the source code for reference.
 
I really wonder why there is now a graphics card discussion - most likely NVIDIA 4090 vs. Radeon cards - in a processor topic.

It makes it harder to follow the processor-related discussion.
Some people like to bash anything AMD.
 
With 3D V-Cache on both CCDs, disabling SMT on the 9950X3D will make a whole lot of sense - pretty much an all-round king-of-the-hill CPU.

The battle between Arrow Lake and the 9950X3D is gonna be interesting...
Zen 5 was REALLY designed to have SMT on; you can't make proper use of its dual 4-wide decode clusters without SMT.

I disagree. When you check linux (its only the kernel, not the userspace!, not the desktop!) there is a lot of code for the cores and where the threads are put. I want to point out to the source code for reference.
Linux's CFS has been aware of heterogeneous cores for a long time, so there isn't much of an issue there like we've seen on Windows. Intel and AMD being able to easily submit patches to improve things ahead of time makes it way better as well, which doesn't seem to be the case on Windows.
Heck, Intel having to come up with their hardware scheduler because Windows' scheduler is shit, and AMD having tons of issues with it, just serves to prove this point.
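For what it's worth, on Linux you can inspect (and exploit) the cache asymmetry yourself. A sketch, assuming the standard sysfs cache interface: group logical CPUs by the L3 domain they share - on a dual-CCD X3D part the two domains report different sizes - then pin a process to the bigger one.

# Sketch (assumes the standard Linux sysfs cache interface): group logical
# CPUs by shared L3, print each domain's size, then pin this process to the
# CCD with the largest L3 (the V-Cache die on an asymmetric X3D part).
import glob
import os

domains = {}  # shared_cpu_list string -> (size in KB, set of logical CPUs)
for idx3 in glob.glob("/sys/devices/system/cpu/cpu[0-9]*/cache/index3"):
    with open(os.path.join(idx3, "shared_cpu_list")) as f:
        shared = f.read().strip()
    with open(os.path.join(idx3, "size")) as f:
        size_kb = int(f.read().strip().rstrip("K"))
    cpu = int(idx3.split("/cpu")[-1].split("/")[0])
    domains.setdefault(shared, (size_kb, set()))[1].add(cpu)

for shared, (size_kb, cpus) in sorted(domains.items()):
    print(f"L3 of {size_kb // 1024} MB shared by CPUs {shared}")

biggest = max(domains.values(), key=lambda d: d[0])[1]
os.sched_setaffinity(0, biggest)  # Linux-only; pins us to the big-L3 CCD

Whether the kernel then makes good placement decisions on its own is a separate question, but at least the topology is fully exposed.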
 
Some people like to bash anything AMD.

You're wrong, but you do step in like a knight in shining armor to defend them (and your purchasing choices) at every turn regardless

Sure, but neither does the 13900KS in your specs.

At the end of the day people are willing to pay big money for top of the line gear.

If doubling down on 3D cache provides a notable increase in performance or efficiency, I don't see why AMD shouldn't do it. It's a heck of a lot better than Intel's approach of pumping up the watts. Having multiple X3D caches was previously limited to the enterprise space, so I'm curious how much we'll be able to throw at this thing.

Not comparable - the Intel KS chips are actual top-of-the-line halo products, while the 6+6 Ryzens aren't top of the line nor advertised as such. Agree otherwise, though.
 
You're wrong, but you do step in like a knight in shining armor to defend them (and your purchasing choices) at every turn regardless
Yep, white knight, considering the way you describe AMD. Maybe because I started with a TRS-80 all those years ago.
 
Zen 5 was REALLY designed to have SMT on; you can't make proper use of its dual 4-wide decode clusters without SMT.


Linux's CFS has been aware of heterogeneous cores for a long time, so there isn't much of an issue there like we've seen on Windows. Intel and AMD being able to easily submit patches to improve things ahead of time makes it way better as well, which doesn't seem to be the case on Windows.
Heck, Intel having to come up with their hardware scheduler because Windows' scheduler is shit, and AMD having tons of issues with it, just serves to prove this point.

TPU tested Zen 5 without SMT; the results proved otherwise.
 