
AMD Ryzen 7 5800X3D

Correct me if I am wrong, but I thought L3 across 2 chiplets had to be duplicated for cache coherency, meaning you cannot simply add up the L3?

No, they are independent and fully usable, though not without certain drawbacks. In Zen 2 and Zen 3, L3 cache slices are tied to a core complex (CCX), and while data can be accessed between CCXs, doing so incurs an access latency penalty.

Zen 2 had two CCXs per CCD (die), and Zen 3 streamlined this to one CCX per CCD by doubling the number of cores and the associated L3 per CCX. The magic of the 5800X3D is that it is a single-CCD design, so it turns out to be a very straightforward setup that won't incur the inter-CCD and inter-CCX penalties, because it only has one of each.

R9 3950X: (4 cores + 16 MB L3) × 2 CCXs × 2 CCDs (4C+16M / 4C+16M + 4C+16M / 4C+16M); you can see this was not the most efficient topology, i.e. imagine data on CCX4/CCD2 trying to access something on CCX1/CCD1
R9 5950X: (8 cores + 32 MB L3) × 2 CCDs (8C+32M + 8C+32M); far more efficient, as few tasks ever need more than 8 cores or 32 MB of cache, so it usually manages pretty well with far fewer issues
R7 5800X3D: 8 cores + 96 MB L3; should be self-explanatory, the processor can fully utilize its resources with maximum efficiency
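
To see how that split actually shows up on a running system, here's a minimal sketch (Linux only, assuming the standard sysfs cache interface; the file name and CPU-count cap are just illustration) that prints each distinct L3 slice and the logical CPUs sharing it. On a single-CCD part like the 5800X3D you'd expect one 96 MB entry covering all 16 threads, while a 5950X should show two separate 32 MB entries:

```c
/*
 * Minimal sketch (Linux only): list each distinct L3 slice and the logical
 * CPUs that share it, using the standard sysfs cache interface. On a
 * single-CCD chip like the 5800X3D you would expect one 96 MB entry shared
 * by all 16 threads; on a 5950X, two 32 MB entries, one per CCD.
 */
#include <stdio.h>
#include <string.h>

static int read_line(const char *path, char *buf, size_t len) {
    FILE *f = fopen(path, "r");
    if (!f) return -1;
    if (!fgets(buf, (int)len, f)) { fclose(f); return -1; }
    fclose(f);
    buf[strcspn(buf, "\n")] = '\0';
    return 0;
}

int main(void) {
    char path[256], level[16], size[32], shared[256];
    char seen[64][256];               /* distinct shared_cpu_list strings */
    int nseen = 0;

    for (int cpu = 0; cpu < 4096; cpu++) {
        /* index3 is normally the L3 entry, but verify the reported level */
        snprintf(path, sizeof path,
                 "/sys/devices/system/cpu/cpu%d/cache/index3/level", cpu);
        if (read_line(path, level, sizeof level) != 0) break; /* no more CPUs */
        if (strcmp(level, "3") != 0) continue;

        snprintf(path, sizeof path,
                 "/sys/devices/system/cpu/cpu%d/cache/index3/size", cpu);
        if (read_line(path, size, sizeof size) != 0) continue;

        snprintf(path, sizeof path,
                 "/sys/devices/system/cpu/cpu%d/cache/index3/shared_cpu_list", cpu);
        if (read_line(path, shared, sizeof shared) != 0) continue;

        int dup = 0;
        for (int i = 0; i < nseen; i++)
            if (strcmp(seen[i], shared) == 0) { dup = 1; break; }
        if (!dup && nseen < 64) {
            printf("L3 slice of %s shared by CPUs %s\n", size, shared);
            strcpy(seen[nseen++], shared);
        }
    }
    return 0;
}
```

Compile with gcc -O2 and run it; newer versions of lscpu (the --caches option) report similar information if you just want a quick look.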

An eventual 5950X3D would behave very much like the 5950X, except that each CCD/CCX would have the full benefit of 96 MB of L3 (just like the 5800X3D), enabling very large data sets.
 
AMD could cut the on-die L3 cache in half, 3D stack cache on top to compensate, and repurpose that freed-up die area, perhaps even in tandem with a 6 nm die shrink. If they wanted, they could probably get a 10- to 12-core single-CCD chip that way. I figure they could potentially have a 20-core 5975X3D with the same L3 size as the 5950X but four more cores, which also brings the extra L1/L2 cache that comes with those cores. This 3D stacked cache was worthwhile if for no other reason than to prototype and explore its effectiveness.
 
No, they are independent and fully usable, though not without certain drawbacks. In Zen 2 and Zen 3, L3 cache slices are tied to a core complex (CCX), and while data can be accessed between CCXs, doing so incurs an access latency penalty.
Oh, I see. I thought I had read the opposite somewhere.

Edit: wouldn't the higher L3 latency and the reduced clocks make it a bit of a step backwards, for applications that use more than the 8 cores of the 5800X3D?
 
[attached image]


It's not very likely that AMD will release a 5900X3D (192 MB cache); that would be kind of vandalism against their own next platform, but it would be great to close the AM4 era with something like that.
 
Oh, I see. I thought I had read the opposite somewhere.

Edit: wouldn't the higher L3 latency and the reduced clocks make it a bit of a step backwards, for applications that use more than the 8 cores of the 5800X3D?

Well, as the 5800X3D has shown, any losses from access latency are easily offset and overcome by the benefits of the larger cache, so I don't think so. The same "limitations" of the 5950X's design would certainly apply, though it would be more of a "maximum efficiency" issue, e.g. it could be even faster somehow, I guess? It would be a royal processor, mate...

[attached image]

It's not very likely that AMD will release a 5900X3D (192 MB cache); that would be kind of vandalism against their own next platform, but it would be great to close the AM4 era with something like that.

Yeah, I believe that ES was a one-off and they decided not to release a 12- or 16-core SKU, unfortunately.
 
Shame they didn't cut the L3 cache in half and then 3D stack it. If you look at the die area shot, it seems you could fit another 4 cores per CCD, giving the potential for a single-CCD 12-core part, or a 24-core part to replace the 5950X.

[attached die shot]
 
Oh boy. I read the posts, but I could not read them all. The drama and butt pain of some people here; it is like a soap opera with their arguments and their problems accepting the gaming results of the 5800X3D.

Nice halo product, a cherry on top. Awesome performance considering the only changes are a bigger cache and even slightly lower clocks. I actually didn't expect that much; I thought AMD had stretched the truth a bit, but it would seem they didn't. The CPU performs pretty well. It would seem the clocks are not that important; good to have, but the cache does the trick. It is a nice showcase of how much cache capacity matters. It is hard to imagine how much more clock frequency you would need to achieve this.
 
Well, hellooo

Wccftech Reader Tunes His AMD Ryzen 7 5800X3D Into an Efficiency Monster With Undervolting: Same Performance at 1V, 57W Peak Power at Sub-1V


"At 1V, Shaun states that he started seeing performance regression but one interesting aspect was that the performance itself didn't take a huge hit. The power and temperatures saw a huge fall. It was stated that at 1V (4.4 GHz all-core), the CPU peaked at 43C in Cinebench whereas it peaked at 80C in the same benchmark when running at stock The power consumption was rated at 73W."
 
Very interesting for SFF builds. In general though, it's an enthusiast-class gaming CPU, and power consumption is not much of a concern.
It only matters when it's ridiculous, like the KS, but high-end products are not meant to be efficient; they're meant to be powerful.
 
"Shaun is running a custom-loop cooling kit with a 420mm radiator and triple 140mm fans" i'm sure the temp for this even at 1v would not be the same as this with an air cooler. His cooling could be classed as high custom.

I'd like to see the temps with the same settings and an air cooler.

Nice CPU though, and if I did not have the ADL I would probably go for one. I do not regret the 12700K though, as it is still a very good CPU, even though ADL has got loads of derision from AMD fans.
 
Basically, anything that scales with cores and frequency would be worse with a 5800X3D compared to the 5800X. People will be posting BCLK overclocking and undervolting results. This will then be the new "better performance". Ignore it; it's single samples and you can't make any judgements from them.

The 12900K/KS overclocked will be faster: better RAM, more cores/higher clocks, and higher DDR5 RAM frequency with tightened timings. For example, the 5800X3D will likely max out at 1900 MHz IF at most, so this limits RAM frequency and performance. Example: highest possible on a 5800X.

Next will be that power is lower on the 5800X3D; both gamers and overclockers don't care. An overclocked 5800X at 5 GHz will do 100 watts in gaming, and an Intel 12900K will do a little more with an overclock. Proof here of a 5800X at 98 watts peak. Power draw stock AMD vs Intel; example of a 12900K using lower power in games. Note that most of the time it's below 100 watts and below the 5950X. The two are close at times.

What the 5800X3D is great for is people that don't overclock and want to upgrade from, say, a 1000 or 3000 series CPU. They already have DDR4-3600 RAM, and this gives them a path to great gaming performance on an older motherboard that can't take the power draw of, say, a 5950X. I have an old 3800X with DDR4-3600 RAM on an X570 motherboard. I could put the 5800X3D in that system and change nothing else. That system will outperform my 10900K system at 1080p, but I am still GPU limited at 4K: 3DMark Time Spy 10900K and Time Spy 3800X.
 
Those not running a higher end GPU like the RTX 3080 in the review can expect much lower gaming improvements from running the 5800X3D @1080p
That's something I'm planning to test myself with a 6600XT. However, I don't really care for maximum fps or 200+ averages in games. What I'm interested in is frame time consistency, 1% and 0.1% lows and overall energy efficiency.

It's a shame AMD decided not to refresh the Ryzen 9 lineup, I would sell my 5950X and buy a 5950X3D without thinking twice
I'm almost sure we won't get another Ryzen3D on AM4. One reason is obviously the imminent release of Zen 4. And the gains we've seen in games owing to V-cache do not really translate to massively parallel workloads, such as rendering or encoding - which are primary use scenarios for Ryzen 9.
 
The 12900K/KS overclocked will be faster: better RAM, more cores/higher clocks, and higher DDR5 RAM frequency with tightened timings. For example, the 5800X3D will likely max out at 1900 MHz IF at most, so this limits RAM frequency and performance. Example: highest possible on a 5800X.
What are you on? The 12900KS is an OC'd version of the 12900K, and the 5800X3D is beating both of them even with 6400 MHz RAM...
 
What are you on? The 12900KS is an OC'd version of the 12900K, and the 5800X3D is beating both of them even with 6400 MHz RAM...
It's a better binned 12900K, that is correct, but the 12900KS is its own product with higher clocks. Better binned CPUs will normally reach higher clocks at lower vcore. There is no way a 5800X3D can beat an overclocked 12900KS with overclocked and tuned DDR5 memory. Look again at the AIDA64 scores of the DDR5 RAM I posted. If you know how to tune RAM, you can go high with a 12900K system. Most of the performance is on the RAM side. The 5800X3D will cap out at its maximum IF frequency. The better you tune the RAM, the more copy bandwidth and the lower latency you get. This is what the extra cache of the 5800X3D provides over a normal 5800X.

Take my 10900K system: look at the AIDA64 link and see what my RAM gets. Game: Shadow of the Tomb Raider, 1080p, highest, TAA. My 10900K gets 225 fps with a 3080 Ti (380 W power limit). Here are the same settings with a 3090 Ti and a 5800X3D/DDR4-3800: 191 fps (scroll down to see the bar chart). 12900KS/DDR5-6400: 190 fps.

The higher the RAM frequency and the better the RAM tuning, the bigger the effect on performance in games.
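
To make the cache-versus-RAM-tuning point concrete, here's a rough pointer-chase sketch (not a rigorous benchmark; the working-set sizes and step count are arbitrary illustration values). Per-load latency stays low while the working set fits in L3 and jumps once it spills into DRAM, which is exactly where RAM frequency and timings start to matter; the 5800X3D's 96 MB L3 simply pushes that point out much further than the 32 MB on a normal 5800X:

```c
/*
 * Rough sketch, not a rigorous benchmark: chase pointers through a random
 * cyclic permutation so hardware prefetchers can't predict the next access.
 * Per-load latency stays low while the working set fits in L3 and jumps
 * once it spills into DRAM; the working-set sizes below are arbitrary
 * illustration values chosen around typical Zen 3 / 5800X3D L3 capacities.
 */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

static volatile size_t sink;       /* stops the chase loop being optimized out */

static unsigned long long rng_state = 88172645463325252ULL;
static unsigned long long xorshift64(void) {  /* tiny PRNG, avoids RAND_MAX limits */
    rng_state ^= rng_state << 13;
    rng_state ^= rng_state >> 7;
    rng_state ^= rng_state << 17;
    return rng_state;
}

static double chase_ns(size_t bytes, size_t steps) {
    size_t elems = bytes / sizeof(size_t);
    size_t *next = malloc(elems * sizeof *next);
    if (!next) return -1.0;
    for (size_t i = 0; i < elems; i++) next[i] = i;
    /* Sattolo's algorithm: a permutation that forms one single cycle */
    for (size_t i = elems - 1; i > 0; i--) {
        size_t j = (size_t)(xorshift64() % i);
        size_t t = next[i]; next[i] = next[j]; next[j] = t;
    }
    struct timespec t0, t1;
    size_t p = 0;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (size_t s = 0; s < steps; s++) p = next[p];   /* dependent loads */
    clock_gettime(CLOCK_MONOTONIC, &t1);
    sink = p;
    free(next);
    double ns = (double)(t1.tv_sec - t0.tv_sec) * 1e9 +
                (double)(t1.tv_nsec - t0.tv_nsec);
    return ns / (double)steps;
}

int main(void) {
    size_t sizes_mb[] = { 8, 32, 64, 96, 192, 512 };
    for (size_t i = 0; i < sizeof sizes_mb / sizeof sizes_mb[0]; i++) {
        double ns = chase_ns(sizes_mb[i] << 20, (size_t)20 * 1000 * 1000);
        if (ns < 0) { printf("working set %4zu MB: allocation failed\n", sizes_mb[i]); continue; }
        printf("working set %4zu MB: ~%.1f ns per dependent load\n", sizes_mb[i], ns);
    }
    return 0;
}
```

Compile with gcc -O2 and you should see the latency step up between the sizes that fit in your L3 and the ones that don't.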

Note: fixed the Tomb Raider benchmark, as the PC was in power saving mode.
 
Hi,
The 12900KS was an $800 US suckers' release, dropping early just before the 5800X3D dropped, so it's a typical Intel trolling release.
 
Some 12900KS CPUs can reach 5.7 GHz on the performance cores for two threads and 5.2 GHz for all P-cores. You can set the E-cores to +3. Settings as per this video. So yeah, Intel has the fastest overclocked CPU, but they make you pay for it. Call it trolling if you like, but some people will buy one, or even buy and bin thousands, then get a 3090 Ti and sell the rejected CPUs on eBay.

Also, my Shadow of the Tomb Raider result was run in power saving mode.
 
Why do I fear this CPU will go the way of the 3300X?
 
Why do I fear this CPU will go the way of the 3300X?
Yeah I'm starting to regret not grabbing it on launch. It's sold out everywhere now and it looks like there won't be any restock until 5/12, at least in EU (31st of May in UK).
 
Why do I fear this CPU will go the way of the 3300X?
It most likely will. It's a unicorn chip, just as the 3300X was. And just like the 3300X today, I believe it's going to stay relevant in games for a long time. The next shipment may be the last chance to grab it first hand, though probably at an inflated price.

Luckily, there are other good options, such as the 5700X/5800X/5900X or the 12700 from Intel. None of them is better suited to gaming than the 5800X3D, but they're priced lower, and so offer better value overall.

And of course we'll have Zen 4 and Raptor Lake in a few months.
 
It most likely will. It's a unicorn chip, just as the 3300X was. And just like the 3300X today, I believe it's going to stay relevant in games for a long time. The next shipment may be the last chance to grab it first hand, though probably at an inflated price.

Luckily, there are other good options, such as the 5700X/5800X/5900X or the 12700 from Intel. None of them are better suited to gaming than the 5800X3D, but they're priced lower, and so offer better value overall.

And of course we'll have Zen 4 and Raptor Lake in a few months.

All of them are pretty good for gaming, and better for everything else.
 
It most likely will. It's a unicorn chip, just as the 3300X was. And just like the 3300X today, I believe it's going to stay relevant in games for a long time. The next shipment may be the last chance to grab it first hand, though probably at an inflated price.

Luckily, there are other good options, such as the 5700X/5800X/5900X or the 12700 from Intel. None of them is better suited to gaming than the 5800X3D, but they're priced lower, and so offer better value overall.

And of course we'll have Zen 4 and Raptor Lake in a few months.
The thought process that I have for this chip is gaming. The PS5 and Xbox Series X run on the same Zen architecture as these AM4 chips. As most games will be produced for one console or the other, and as developers extract more and more console performance as the generation ages, this chip could indeed remain the best gaming CPU around. I was gobsmacked that even at $569.99 Canadian it sold out in one day.

All of them are pretty good for gaming, and better for everything else.
The 5000 series chips are all sweet. The 5900X is a beast of a CPU and stable as granite. It is actually cheaper than the 5800X3D but that will not matter. I love my 5950X because there is nothing I can do to make the CPU feel sluggish. The best thing about AMD though is the utter flexibility that all these chips have now that there is official support for X370/B350.
 
Hi,
The 12900KS was an $800 US suckers' release, dropping early just before the 5800X3D dropped, so it's a typical Intel trolling release.

I don't think Intel did this because they knew the 5800X3D would come, but simply due to a demand for a halo product in that market segment. The KS is a nicely pre-binned CPU, and just as I bought a 5950X, I would buy a 12900KS if I were building today.

I'm almost sure we won't get another Ryzen3D on AM4. One reason is obviously the imminent release of Zen 4. And the gains we've seen in games owing to V-cache do not really translate to massively parallel workloads, such as rendering or encoding - which are primary use scenarios for Ryzen 9.

Gaming-wise you're probably right, but otherwise I don't think so. It's not that the results don't translate; it's that the resulting chip would be like the 1080 Ti: a bit too good of a product, one that would cannibalize AMD's own sales of higher-end products in the future. The aforementioned EPYC 7373X, for example ;)
 
I don't think Intel did this because they knew the 5800X3D would come, but simply due to a demand for a halo product in that market segment. The KS is a nicely pre-binned CPU, and just as I bought a 5950X, I would buy a 12900KS if I were building today.



Gaming-wise you're probably right, but otherwise I don't think so. It's not that the results don't translate; it's that the resulting chip would be like the 1080 Ti: a bit too good of a product, one that would cannibalize AMD's own sales of higher-end products in the future. The aforementioned EPYC 7373X, for example ;)
Not sure if the 192 MB would compete with the 768 MB and 8 memory channels of the EPYC.
 
Not sure if the 192 MB would compete with the 768 MB and 8 memory channels of the EPYC.
The 5800X3D works because it's one chiplet. The cache helps reduce latency and improves game performance as a result. The downside is reduced clock speeds and worse temperatures. For a desktop CPU, more chiplets would just mean more of the same chiplet as the 5800X3D: more heat and increased latency, because data has to be accessed between different chiplets. You are looking at reduced multi-core performance in a CPU whose primary purpose is to provide more multi-core performance.

Only the 5800X3D makes sense; one chiplet means lower latency because you don't have to talk to another one. Before, AMD had two 4-core CCXs per die (CCD). With the Zen 3-based Ryzen 5000 and Milan processors, AMD discarded the concept of two CCXs in a CCD; instead there's an 8-core CCD (or CCX) with access to the entire 32 MB of cache on the die. The V-Cache is then added on top. This means nothing has to go over the IF to access other CCDs or CCXs.

This all helps latencies: all the cores and cache are on one CCD. As the V-Cache sits on top, it acts like a blanket, blocking heat from escaping from the cores. Also, the extra cache limits vcore to 1.35 volts, which further limits boost frequencies and overclocking.

The upside is better gaming performance in some (not all) games, but the downside is reduced overall CPU performance compared to the 5800X. As an example, for Space Engineers a 5800X is the better CPU. There are many games like this where pushing high fps is not the issue, but the CPU gets hammered simulating the game world.

The same happens with server chips like EPYC: if you don't need the large cache, then performance is reduced.
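
If you want to put a number on the inter-chiplet penalty yourself, one common approach is a two-thread ping-pong on a shared atomic flag, with each thread pinned to a chosen core. Below is a minimal Linux sketch of that idea (the default core numbers 0 and 1 are placeholders; check your actual layout with lscpu -e first). A pair of cores on the same CCD/CCX bounces the cache line much faster than a pair on different CCDs, which has to cross the Infinity Fabric:

```c
/*
 * Minimal Linux sketch of a core-to-core "ping-pong": two threads bounce a
 * shared atomic flag back and forth, each pinned to a chosen core. Run it
 * for a pair of cores on the same CCD and for a pair on different CCDs
 * (core numbers are placeholders; check the layout with lscpu -e first)
 * and compare the round-trip times.
 */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdatomic.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define ROUNDS 1000000

static atomic_int flag = 0;

static void pin_to(int cpu) {
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(cpu, &set);
    sched_setaffinity(0, sizeof set, &set);   /* 0 = calling thread */
}

static void *ponger(void *arg) {
    pin_to(*(int *)arg);
    for (int i = 0; i < ROUNDS; i++) {
        while (atomic_load_explicit(&flag, memory_order_acquire) != 1)
            ;                                                   /* wait for ping */
        atomic_store_explicit(&flag, 0, memory_order_release);  /* pong */
    }
    return NULL;
}

int main(int argc, char **argv) {
    int cpu_a = argc > 1 ? atoi(argv[1]) : 0;   /* pinger core (placeholder) */
    int cpu_b = argc > 2 ? atoi(argv[2]) : 1;   /* ponger core (placeholder) */

    pthread_t t;
    pthread_create(&t, NULL, ponger, &cpu_b);
    pin_to(cpu_a);

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < ROUNDS; i++) {
        atomic_store_explicit(&flag, 1, memory_order_release);  /* ping */
        while (atomic_load_explicit(&flag, memory_order_acquire) != 0)
            ;                                                   /* wait for pong */
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);
    pthread_join(t, NULL);

    double ns = (double)(t1.tv_sec - t0.tv_sec) * 1e9 +
                (double)(t1.tv_nsec - t0.tv_nsec);
    printf("cores %d <-> %d: ~%.0f ns per round trip\n", cpu_a, cpu_b, ns / ROUNDS);
    return 0;
}
```

Build with gcc -O2 -pthread and run it once for a same-CCD pair and once for a cross-CCD pair; the exact core numbering varies by system, but the cross-CCD round trip should come out noticeably slower.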

It comes packed with 256 MB of standard L3 cache and an additional 512 MB of 3D V-Cache, giving up to 768 MB of L3 cache and 804 MB of total cache per chip. Since two of these chips are featured on the 2P SP3 platform, you get 128 cores, 256 threads, and 1608 MB of cache, which is truly insane. Each chip also comes with a 280W TDP, though the ES chips may operate at a different TDP owing to their lower clocks.
In all of the benchmarks used in the test suite, the AMD EPYC 7773X Milan-X dual-CPU config lost to the older EPYC 7T83 Milan CPU and also to the Core i9-12900K, despite having a massive cache & core advantage. The reason is simply that this CPU isn't designed for the applications the content creator used in his test suite. The Milan-X lineup is designed specifically for workloads that are cache-dependent, and software suites such as the ones used for benchmarking aren't optimized for the high core and cache count that this chip has to offer. The second reason is that this is an ES CPU, so clocks may not be boosting as intended, hence the variance in performance versus the older part. Source

Here are some examples from AMD on how their new processors will improve specific time-to-results workloads:
  • EDA – The 16-core, AMD EPYC 7373X CPU can deliver up to 66 percent faster simulations on Synopsys VCS, when compared to the EPYC 73F3 CPU.
  • FEA – The 64-core, AMD EPYC 7773X processor can deliver, on average, 44 percent more performance on Altair Radioss simulation applications compared to the competition’s top of stack processor.
  • CFD – The 32-core AMD EPYC 7573X processor can solve an average of 88 percent more CFD problems per day than a comparable competitive 32-core count processor while running Ansys CFX.

AMD indicates the following workloads that might be a good fit for Milan-X:
  • Workloads that are sensitive to L3 cache size
  • Workloads that have high L3 cache capacity misses (for example, the data set is often too large for L3 cache)
  • Workloads that have high L3 cache conflict misses (for example, the data pulled into cache has low associativity)
Some areas that might have these kinds of workloads include fluid dynamics (CFD), finite element analysis (FEA), electronic design automation (EDA) and structural analysis. Source
 