
AMD Ryzen 9 9950X3D

Similar to your results, I've noticed when gaming on my 5950X that Windows tends to use the first CCD for gaming, with usually little to no activity on the second CCD.

The chiplet architecture Ryzen and EPYC use is sensitive to locality: if you can fit a workload's full demand onto a single CCD and avoid sending data over the Infinity Fabric or accessing cache lines that currently reside on an adjacent die (the worst-case scenario), that is what you should do. Of course, that means only the resources locally available to that node are exploited. This is essentially why the X3D chips have some degree of trouble on Windows; the OS just sees them as "one big block of available resources" without any regard for physical location. CCD1 should only be touched when an application requests more resources than CCD0 can provide.

That's why an 8-threaded workload will run better on a 5800X or 5950X than on a 5900X, even though the latter has 12 cores and should technically handle 12 threads just fine: the 5900X is 6+6, not 8+0 or 8+8. Zen 3 also did away with the issue Zen 2 had with CCXs by making the CCX the same size as the CCD itself; the 3900X was effectively a 3+3+3+3 layout and the 3950X 4+4+4+4.
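The "keep a small workload on one CCD" idea above can be forced manually on Linux. A minimal sketch, assuming logical CPUs 0-15 map to the first CCD (8 cores x 2 SMT threads; the real mapping for a given system is under /sys/devices/system/cpu/cpu*/topology):

```python
import os

# Assumption: logical CPUs 0-15 belong to CCD0 (8 cores x 2 SMT threads).
# Verify the actual layout via /sys/devices/system/cpu/cpu*/topology.
CCD0_CPUS = set(range(16))

def pin_to_ccd0(pid=0, cpus=None):
    """Restrict a process (pid 0 = the calling process) to the given CPUs,
    defaulting to CCD0, so its threads stay near a single L3 cache."""
    target = CCD0_CPUS if cpus is None else set(cpus)
    os.sched_setaffinity(pid, target)   # Linux-only affinity syscall wrapper
    return os.sched_getaffinity(pid)    # effective affinity after the call
```

On Windows the equivalent would be SetProcessAffinityMask, which is roughly what tools like Process Lasso automate for X3D chips.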


Scroll down a little in this article; there is a huge chart demonstrating inter-core, inter-CCD, and inter-socket access latencies on a Turin system. As you can see, there are both memory bandwidth and access latency implications, and the same concept applies to the Ryzen 9s, obviously at a much, much smaller scale.
 
At the prices asked, it's a total upsell. That's the CPU that'll go to the bargain bin for below 9800X3D prices down the road, just as it happened to the 7900X3D... because it won't push the frame rates gamers want, and it won't pull its productivity weight for the creator camp. It's a CPU that no one wants, and AMD is releasing likely for binning reasons.
Yup, I just don't get why they made a 9900X3D instead of a 9600X3D; that would be a nice budget gaming CPU they could dump the defective dies into.
 
Great CPU AMD. Well done!!

They are really enforcing their domination atm and congrats to them from where they used to be. It's been a hard slog. They deserve it.
 
The 9800X3D being 9°C higher than the 9950X3D is very weird though...
Strange,
I did not even see that the 9800X3D is hotter :D
I've only been interested in the higher core-count units for some time.
 
What is the max boost clock on the 3D V-Cache CCD ?
I didn't check that specifically, but I vaguely remember from some of my data that it's around 5.5 GHz, because my first reaction was "huh, didn't they say 5.7?"
 
No NPU? Well, I have NPUs on multiple Intel machines and I've yet to see one app that got a hold of this dedicated hardware. Either stuff runs on the dGPU or in the cloud, but hey, what do I know about this, right? Maybe in 2-3 years... but hey, in 2-3 years dedicated NPUs will be in AMD desktop CPUs too.
 
The testing is in a controlled environment.

A real person would have multiple browser tabs open.

A Twitch streamer would be streaming to other people while playing games, with other programs running in the background.

A normal person is doing a lot of stuff on their Windows PC.
Nowhere, and I mean nowhere, has it been reported that disabling threads causes an overload, whatever that means.

It doesn't happen when you turn off HT.
It doesn't happen when you turn off e-cores or p-cores.
It doesn't happen when you disable a CCD.
It doesn't happen when you disable 1 to N-1 cores in the BIOS.

IT JUST DOESN'T HAPPEN.
 


How is the 9950X3D almost 9°C cooler than the 9800X3D?
 

How is the 9950X3D almost 9°C cooler than the 9800X3D?
I can think of some reasons (just guesses, not certainty):
- A double CCD reduces thermal density. The 9950X3D is a 170W part with 2 CCDs + IOD, whereas the 9800X3D is a 120W part with 1 CCD + IOD. Assuming ~20W for the IOD, the 9800X3D puts 100W through its sole CCD, whereas the 9950X3D puts 75W through each CCD.
- Better binning: AFAIK the x950 CPUs usually get better bins that can clock higher and use less power than their smaller siblings.
- Better manufacturing process: in the same vein as the above, given that it's a product that came later, maybe AMD/TSMC have improved the fab process and made it a bit more efficient.

If you take all of the above into consideration, each CCD of the 9950X3D is able to reach the same clocks as a 9800X3D at lower power, and dissipating heat from 2 CCDs is easier than dissipating a lot of concentrated heat from a single CCD.

It could also be just sheer magic, as said above.
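The per-CCD split in the first bullet works out as simple arithmetic. A quick sketch, using the same assumptions as above (the ~20W IOD figure is an estimate, and TDP is treated as package power):

```python
def watts_per_ccd(package_w: float, iod_w: float, n_ccds: int) -> float:
    """Power left for each compute die after subtracting the IOD's share."""
    return (package_w - iod_w) / n_ccds

print(watts_per_ccd(120, 20, 1))  # 9800X3D: 100.0 W on its single CCD
print(watts_per_ccd(170, 20, 2))  # 9950X3D:  75.0 W per CCD
```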
 
Two CCDs, heat spread among twice the surface area
But it's 16 cores spread across two CCXs instead of 8 cores on one CCX. So shouldn't there be higher heat output at the same general frequency? Binning likely helps in this situation, but it's twice as many cores...
 
But it's 16 cores spread across two CCXs instead of 8 cores on one CCX. So shouldn't there be higher heat output at the same general frequency? Binning likely helps in this situation, but it's twice as many cores...
It's more heat, yes, but the area is bigger, so the thermal density drops. Instead of trying to cool just the head of a needle with a heatsink, you now have 2 heads splitting the heat and touching different parts of the heatsink, so less heat per needle head in the end, even if the total heat output is higher.
 
Clearly you can’t just buy things anymore without a long ass line.

We’ll see if I can scoop a 9950X3D today… doesn’t open for another 29 mins.

 
But it's 16 cores spread across two CCXs instead of 8 cores on one CCX. So shouldn't there be higher heat output at the same general frequency? Binning likely helps in this situation, but it's twice as many cores...
9800X3D = 150W -> x2 -> 300W (estimated dual CCD)
9950X3D = 203W (actual)

Under full load the 9950X3D runs cooler because there are fewer watts per core and per CCD, less concentrated heat per CCD, and more surface area for cooling.
...but 203W is still more heat than 150W so 9950X3D does produce more heat in total.
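Dividing those measured package numbers by core count shows the per-core gap. A small sketch using the figures quoted above (the 150W and 203W full-load numbers are from the post; splitting power evenly across cores is a simplification):

```python
def watts_per_core(package_w: float, cores: int) -> float:
    """Naive per-core power, assuming an even split across all cores."""
    return package_w / cores

print(watts_per_core(150, 8))    # 9800X3D: 18.75 W per core
print(watts_per_core(203, 16))   # 9950X3D: ~12.69 W per core
```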



The extra 26W with PBO produces minimal gains. At least that's how I see it.

I just noticed the PBO temperatures are missing from the chart.
 
Ok, good news: Denver's Microcenter has plenty of 9950X3Ds in stock. They had three 5090s ($2,600 Zotacs) and 7-10 5080s ($1,500 MSI and Zotac). They are trickling in...
 
It's more heat, yes, but the area is bigger, so the thermal density drops.
Well no, not really. Instead of one 8-core CCX, there are two 8-core CCXs spaced about 1 cm apart. Heat density increases in such a situation. So the temps being only about 10°C apart is rather remarkable.

9800X3D = 150W -> x2 -> 300W (estimated dual CCD)
9950X3D = 203W (actual)

Under full load the 9950X3D runs cooler because there are fewer watts per core and per CCD, less concentrated heat per CCD, and more surface area for cooling.
...but 203W is still more heat than 150W so 9950X3D does produce more heat in total.
Even accounting for that, it's still remarkable that the 9800X3D and the 9950X3D are only a few degrees apart.
 