
AMD Ryzen 7 5800X3D

The 12700K has better performance per dollar in productivity and in gaming (at both low and high resolutions).

Also, keep in mind that turning the E-cores off improves gaming performance; Alder Lake was not tested at its full gaming potential.
The 5800X3D is more expensive than the 12700K, so turning the E-cores off will not put the 12700K at any disadvantage anyway.

So you're paying $450 for a 5800X3D when you can get a 12600KF for $280 with only 4.4% slower gaming performance at 1080p?


To both of you: pricing varies per country, let alone per state in the USA.
And as always... stop looking at just the CPU prices.

You can pair this with an A320 board, 16GB of DDR4-3200 and a 120mm air cooler, vs needing a top-end DDR5 board and a 360mm AIO
 
Really? You need DDR5 and a 360mm AIO for gaming on a 12600/12700? Really? Come on...

Should I repost my results from a 12900K on a small single-tower cooler in CBR23?

Fact is, the 3D is almost 50% more expensive than a 12700F, gets crucified in every workload, and only wins in 240p gaming by single-digit percentages. Assuming you only care about gaming, you can even disable the E-cores to close some of that single-digit difference.

And no, you don't need DDR5. TPU also tested the difference between DDR5 and DDR4; they tested the exact same kit they used on the 5800X3D against the one used on Alder Lake, and the difference was 2%. So the total gaming difference between the 12700 and the 5800X3D, both with DDR4, is less than 5%.
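For what it's worth, the value argument in this thread can be put into numbers. A quick back-of-the-envelope sketch using the figures quoted above ($450 for the 5800X3D, $280 for the 12600KF, the 12600KF being 4.4% slower in 1080p gaming); these are the thread's numbers, not official pricing:

```python
# Perf-per-dollar sketch from the numbers quoted in this thread.
# Prices and the 4.4% gap are the thread's claims, not official data.
prices = {"5800X3D": 450, "12600KF": 280}
relative_perf = {"5800X3D": 100.0, "12600KF": 100.0 - 4.4}  # 1080p gaming index

for cpu in prices:
    value = relative_perf[cpu] / prices[cpu]
    print(f"{cpu}: {value:.3f} perf points per dollar")
# 5800X3D: 0.222 perf points per dollar
# 12600KF: 0.341 perf points per dollar
```

By this crude metric the 12600KF delivers roughly 50% more gaming performance per dollar, which is the point being argued, before platform costs even enter the picture.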
 
As if someone would get a 12700F or 5800X3D for "workloads". You either get the big boys (5950X, Threadripper) or the cheapest Celeron; there is no "workload" in between.
 
As if someone would get the 5950X for "workloads". You either get the 64-core Threadripper or a Pentium 3; there is no workload in between.
 
You get a 5950X when you need both physics simulation and then rendering. Threadripper is slow for most physics tools because they care more about core clocks than core counts. The best case would be an Alder Lake machine for the physics and a Threadripper for the rendering, but then the budget skyrockets.
 

(attached screenshot: 1649941244560.png)


A bench at the above clocks would be interesting...
 
So you're paying $450 for a 5800X3D when you can get a 12600KF for $280 with only 4.4% slower gaming performance at 1080p?

The price of Intel CPUs is built into the chipset itself; if you look at motherboard prices, they are pretty insane. A clever marketing trick. Also, a good number of games show a pretty large disparity. It really makes me wonder about a Zen 4 3D stacker in the works, because I would be all over that. Imagine a 20% IPC gain plus an additional 15% gaming performance gain with a cache-edition CPU, marketed as a Gamer Edition (kind of like Black Edition), where the people buying those CPUs don't care about Blender. I was really hoping someone would test this on StarCraft 2, since it's single-thread limited; I was curious if the cache would boost performance.
 

The 5800X is just a single CCD; the main benefit from this alone is no cross-CCD latency, as it never has to switch or read/write data from cores on another die.

The 64MB of additional cache stacked on top (96MB of L3 in total) would work wonders on any CPU really, and this cache experiment has been tested with EPYC before with good results.
No latency at all? There is quite significant latency.
If this design worked the way you expect, we should see the 5800X3D pull ahead of the 5800X in most heavy multithreaded workloads. Instead we see the opposite; the extra L3 proves useful mainly in gaming and a few select workloads. The reason will be obvious to those who know how CPUs and caches work: cache lines are overwritten every few microseconds (if not faster), so if a core were to benefit from another core having recently used a cache line, the window is extremely small and the chance of an L3 hit on another core's data is very low, at least for data cache lines. With instruction cache lines the chances are a bit higher, but it's still more likely that the same core just evicted that line from L2 than that another core executed the same code moments apart.

This is why only select workloads benefit from the extra L3. And as anyone who understands caches knows, just throwing more cache at the problem will not suddenly make the gains significant in "every" workload; in fact, adding more cache has diminishing returns.
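The diminishing-returns point can be illustrated with a toy simulation (not a model of any real CPU): run an LRU cache over a skewed access pattern where a few "hot" lines dominate, as in real code, and watch the hit-rate gain shrink with each doubling of capacity. All numbers here (line count, skew, capacities) are made up for illustration:

```python
# Toy LRU cache simulation: hit rate vs. capacity under a skewed
# access pattern. Illustrates diminishing returns from extra cache.
import random
from collections import OrderedDict

def lru_hit_rate(capacity, accesses):
    cache = OrderedDict()
    hits = 0
    for line in accesses:
        if line in cache:
            hits += 1
            cache.move_to_end(line)      # mark as most recently used
        else:
            if len(cache) >= capacity:
                cache.popitem(last=False)  # evict least recently used
            cache[line] = True
    return hits / len(accesses)

# Skewed (Zipf-like) pattern: a handful of hot cache lines get most hits.
random.seed(0)
lines = list(range(4096))
weights = [1 / (i + 1) ** 1.5 for i in lines]
accesses = random.choices(lines, weights=weights, k=100_000)

prev = 0.0
for cap in (64, 128, 256, 512, 1024):
    rate = lru_hit_rate(cap, accesses)
    print(f"capacity {cap:5d}: hit rate {rate:.3f} (gain +{rate - prev:.3f})")
    prev = rate
```

Each doubling of capacity still raises the hit rate, but by less than the previous doubling did, because the newly cached lines are progressively colder. The same logic is why tripling the L3 helps cache-hungry games a lot and most other workloads barely at all.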

And don't get me wrong, 5800X3D looks like a good gaming CPU. :)
 
Yep, surprising both ways. It's so far behind in Cyberpunk and Riftbreaker

It significantly improved over the 5950X in those titles, however, majorly closing the gap with Intel. And in other titles, it's just beating Intel, period. Then you have cost and power to factor in. I suspect a review that covers 30 games or more will be more telling.
 
New Benchmarks..

Bearing in mind that Far Cry and Borderlands are the games that benefit most aggressively from the 3D cache, this is an insanely cherry-picked result, as is the Factorio one.

Decisive losses throughout the entire productivity suite and in general use aren't something to be set aside too lightly, IMHO, but the X3D has achieved the goal of bringing AMD to parity with Alder Lake in gaming. It doesn't necessarily beat it, however, nor is the gap significant against it or the standard Zen 3 chips. I like this CPU, and it will do great for gamers (though anyone who bought a Ryzen 9 or Alder Lake absolutely does not need to lose sleep over it), but I really question its $450 asking price.

I'm seeing the TPU testing methodology being called into question, and many seem unwilling to take the site seriously, but I fail to see where TPU's findings conflict with other media. The use of DDR4-3200 memory isn't a big deal; it's reflected in Hardware Unboxed's testing and, imho, helps show how the processor would behave in a medium-budget gaming system. I've also seen the use of an RTX 3080 10GB being questioned. Sure, it's no RTX 3090 Ti or 6900 XT, but at worst this makes the 720p scores a little less representative of the processor's true capabilities (since GA102 scales poorly at low resolutions). Maybe do one pass using the RX 6900 XT as an addendum? People will never be happy anyway.
 
Then you have cost and power to factor in. I suspect a review that covers 30 games or more will be more telling.
Yeah, cost is the major issue. The 12700F is practically as fast at a fraction of the cost, while absolutely crucifying the 3D in everything non-gaming. Who would pay $450 for the 3D? $350 is already pushing it
 
Well, this CPU is a last gift to AM4, a showcase to celebrate the AM4 platform.

Farewell, AM4!
 
Most people already have, or are able to get, an inexpensive AM4 board and RAM, and with this CPU will have the best gaming experience to date even when the next-gen GPUs arrive. So, what's not to like about this CPU?
 
Yeah, cost is the major issue. The 12700F is practically as fast at a fraction of the cost, while absolutely crucifying the 3D in everything non-gaming. Who would pay $450 for the 3D? $350 is already pushing it

Don't mix productivity performance with gaming performance.
Obviously, for someone building a PC from scratch now, it's better to buy the Intel platform, although it costs ridiculously more.
(Or wait for AM5; the sensible person will invest in DDR5/PCIe 5.0, or even get a 12400 in order to have an upgrade path.)

But for the majority who already own an X370/X470/X570/B350/B450/B550 board (and most active gamers are on AMD now), this is the absolute CPU. No one will replace their whole PC to get the value of a 12700K; this CPU is not meant to lure that group of people.

The CPU is for those who seek absolute gaming performance without having to replace the whole platform.
And yes, it's meaningless to talk about value when we talk about absolute performance, even within a specific target group.
No reasonable person will choose the 3D over a 5900X/5950X. Even I, who waited for this CPU for so long (I'm on a 3700X now), may end up with a 5900X/5950X.

*It's embarrassing for Intel, having released the KS that needs a nuclear power plant to work, to brag about the crown of absolute gaming performance. But the KS is the king.
Also, when we talk about absolute performance, there is no place for value or power consumption in the discussion. The 3D makes the KS look like a dinosaur (and it is), but the latter holds the crown, no matter if it costs twice the price.
 
I've got a question... Since they had to shrink the thickness of the original Zen 3 silicon to make room for the 3D V-Cache silicon, why didn't they flip it upside down, i.e. put the V-Cache at the bottom, close to the substrate, and the Zen 3 die on top, close to the IHS? I think it would help greatly with thermals... Well, maybe it's too difficult for the cores to connect to the pins underneath that way... They know it far better than me, of course.
At the same time, I wish they would apply the same thinning method to Zen 4, just like Intel did on 10th-gen Core. Looking forward to Zen 4.
By the way, I have long said that E-cores can't avoid scheduling problems, which is why I never liked the idea of a hybrid architecture. It's hilarious that they call them E-cores yet the chip still draws 250 W or so.
 
Power consumption numbers are impressive!
The 12900K needs 80% more power, the 12900KS even 260% more...

The same power consumption numbers are attainable by running a regular Ryzen with the same 1.35-volt ceiling. XFR (and, by extension, PBO) throws efficiency out the window in bursts in order to hit the performance target; the biggest achievement here is that AMD managed to do this without incurring a significant power-consumption penalty. Also consider that Alder Lake has twice the cores, and the efficiency is not all that different when spread across all available execution units.

I've got a question... Since they had to shrink the thickness of the original Zen 3 silicon to make room for the 3D V-Cache silicon, why didn't they flip it upside down, i.e. put the V-Cache at the bottom, close to the substrate, and the Zen 3 die on top, close to the IHS? I think it would help greatly with thermals... Well, maybe it's too difficult for the cores to connect to the pins underneath that way... They know it far better than me, of course.
At the same time, I wish they would apply the same thinning method to Zen 4, just like Intel did on 10th-gen Core. Looking forward to Zen 4.
By the way, I have long said that E-cores can't avoid scheduling problems, which is why I never liked the idea of a hybrid architecture. It's hilarious that they call them E-cores yet the chip still draws 250 W or so.

Alder Lake gets around the scheduling problems you mention with a hardware thread scheduler called Thread Director. The processor knows exactly where to place work before it even reaches the cores; the downside is that it requires Windows 11 or a modern Linux kernel that understands how the ITD works.

As for the positioning, you'll find that the through-silicon vias for the 3D cache were already present in the original Zen 3 design. I would guess AMD just did not have the packaging technology ready at the time, at least not at an acceptable cost.
 
Yeah, cost is the major issue. The 12700f is practically as fast at a fraction of the cost, while absolutely crucifying the 3d in everything non gaming. Who would pay 450 for the 3d? 350 is already pushing it
Like I said in another post, the cost of the Intel CPU is partitioned off into the chipset itself, never mind the cost of DDR5. Using DDR4 with ADL absolutely brings its performance down.
 
It seems stupid to me that they have not yet realized how dependent games are on cache memory... every time there is a jump in cache quantity and speed, it has a bigger impact on processor performance than any IPC improvement... and for end users, multitasking and gaming currently matter more than compressing a 100GB file...

Honestly, synthetic tests do not measure reality.
 
Yep, surprising both ways. It's so far behind in Cyberpunk and Riftbreaker
It's a meager 3.2 frames behind in Cyberpunk at 720p, and less at higher resolutions, against a chip drawing over 3x the average/max wattage. It also costs significantly less.
 
Like I said in another post, the cost of the Intel CPU is partitioned off into the chipset itself, never mind the cost of DDR5. Using DDR4 with ADL absolutely brings its performance down.
That is factually wrong. The chipset price (the money Intel is charging) went up by a whopping... €1 over the last couple of years. LOL
 