
AMD Ryzen 9000X3D Processors with 3D V-Cache Arrive in January at CES 2025

So if 9700X is only slightly faster than 7700X, does that mean 9800X3D will be only slightly faster than 7800X3D?

4/2023: The 7800X3D is going to age like fine wine. PBO, SMT off in some titles, and now Windows updates to improve on vanilla performance. Oh wait, that was the Zen 5 marketing strategy.
 
My take on the subject

 
Since they haven't changed the cache size, I wouldn't expect too much. Hopefully it's a little more exciting than the earlier 9K-series launch was, but I'm not sure whether the X3D parts will see about the same uplift over the previous-generation parts they replace, more uplift, or possibly even less. Hopefully it's slightly better thanks to the frequency uplift, but I guess we'll see how it pans out in practice. I'd expect it to be mostly the same in the end; they need more serious architectural changes than a small frequency bump.
 

Well, there are diminishing returns when it comes to GAMING architecture as well. While there may be features that seem more "server" oriented, those same features are capable of running NEWER software approaches better than the Ryzen 7000 architecture.
When you can't increase the frequency much anymore, you already offer a large CACHE (i.e. the 9800X3D), and the low-hanging fruit in x86 design is mostly gone, there's not a lot you can do except minor tweaks.
There are also a lot of SECURITY issues that come into play if you try to add more aggressive branch prediction or other speculative tricks with the potential to increase performance.
SOFTWARE is where the big gains need to be made. I'm baffled why, in 2024, we're still having single-thread issues. Why isn't the RENDER thread, or whatever the bottleneck is, better multi-threaded? I honestly don't know, as I have only a small amount of programming skill, but we got a little bit of improvement with DX11, and then supposedly DX12 was going to come along and solve this. But here we are. I've got a 12-core Ryzen 9 3900X, but a modern 6-core is sometimes 2x as fast because of single-core/thread bottlenecking.
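The single-thread bottleneck point can be made concrete with Amdahl's law: if only part of a frame's work can be spread across threads, the serial remainder caps the speedup no matter how many cores you add. A quick sketch with made-up fractions (not measurements of any actual game):

```python
# Amdahl's law: why a single-threaded (e.g. render-thread) bottleneck caps
# multi-core gains. The 60% parallel fraction below is illustrative only.

def amdahl_speedup(parallel_fraction: float, n_cores: int) -> float:
    """Overall speedup when only `parallel_fraction` of the work scales
    across `n_cores` and the rest stays on one thread."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_cores)

# A hypothetical game where 60% of frame time parallelises:
for cores in (6, 12, 24):
    print(cores, "cores ->", round(amdahl_speedup(0.6, cores), 2), "x")
```

With those numbers, 12 cores barely beat 6 (about 2.22x vs 2.0x), which matches the "my 12-core loses to a modern 6-core" experience: once the serial part dominates, extra cores stop helping.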
 

:toast:
 
So if 9700X is only slightly faster than 7700X, does that mean 9800X3D will be only slightly faster than 7800X3D?

I would say yes, with the following logic: given that the 9700X is only 2% faster than the 7700X in gaming (with 1% lows slightly worse!), this gap could widen to 4-5% with upcoming AGESA, chipset, and Windows 24H2 updates. It has a much lower TDP limit and still manages a small overall improvement. X870 might also bring a tiny bump. Based on all that, plus the higher TDP of the X3D, it's fair to guess that the 9800X3D might be 6% faster than the 7800X3D. That's rather lame and not a generational leap, but it will likely be my upgrade from my 5800X. X3D can't come soon enough this time.
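For what it's worth, the stacking logic above is multiplicative: several small independent uplifts compound like this (the percentages are the post's guesses, not benchmark data):

```python
# Compounding several small, independent performance uplifts.
# All percentages are speculative guesses, not measured results.

def compound(*gains_pct: float) -> float:
    """Total percentage gain from stacking multiplicative uplifts."""
    total = 1.0
    for g in gains_pct:
        total *= 1.0 + g / 100.0
    return (total - 1.0) * 100.0

# e.g. 2% today, ~2.5% from AGESA/Windows updates, ~1% from TDP headroom:
print(round(compound(2, 2.5, 1), 1), "% total")
```

This lands in the 5-6% ballpark, showing why stacking a few 1-2% items rarely adds up to a generational jump.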

The bigger question is now if Intel Arrow Lake brings competition this fall.
 
I am hopeful Intel will deliver with Arrow Lake and the 9800X3D will bring back the enthusiasm of competition. Intel can't be on a losing streak forever. AMD didn't want Intel to have the negative limelight all to itself with Zen 5.
 
Nice summer-child way of looking at it. I hope AMD realises they need to make haste bringing Zen 6 to market, with the new IO dies, to feed these Zen 5 dies.
 
Flop, flop, flop - oh no, MS saved us all... Now Windows 11 looks much more attractive than 10... What?
But did the 7800X3D also get a boost? New video soon?
Now what, Copilot+ or no Ryzen boost, so sad :(
 
Nice summer-child way of looking at it. I hope AMD realises they need to make haste bringing Zen 6 to market, with the new IO dies, to feed these Zen 5 dies.
We all have a pediatric in us when gaming is involved, just saying. Even the geriatrics lol.
Based on AM4 and 5800X3D performance, I can see AMD making Zen 6 very competitive. AMD stagnates performance with Zen 5, then makes a big bang with Zen 6. It's plausible.
 
So if 9700X is only slightly faster than 7700X, does that mean 9800X3D will be only slightly faster than 7800X3D?

It's possible.

Maybe a touch better improvement with the X3D, perhaps 10% gains. If Intel's next gen fails to strike a blow, AMD will see no reason to offer anything substantially better. I just hope that, if there are limitations at the hardware level, AMD doesn't end up scratching and clawing for some added perf with opportunistic voltage profiles, for potentially another set of early-adopter burners.

The 7800X3D was around 20% faster than the 5800X3D. I'm not confident we'll see the same with the 9000 series. Anyway, it doesn't bother me; I'm on a 5800X3D and mostly GPU-limited.
 
Competition with Intel shouldn't be AMD's only motivation to deliver. I mean, there are R&D and manufacturing costs that have to be recouped. If the end product fails, that's not gonna happen. We don't want another FX situation where AMD (or Intel) edges towards bankruptcy due to underwhelming products that no one buys, regardless of whether there's decent competition or not.
 
That would defeat the whole purpose of chiplets.

AMD can use chiplets to make EPYC CPUs and desktop CPUs with more than 8 cores, as I have drawn in this image:
[attached diagram: oAPGSd7.png]
 
That is 3 separate chip designs in the same package instead of the 2 we have today, which costs R&D and manufacturing money. Think of it in terms of defects. A larger, more complex chip yields fewer chips per wafer, and thus, rejects are more costly. If a few cores are defective, you can still repurpose those chips as an R5 or an R9 x900. If the memory controller is defective, the chip is dead. Therefore, having the memory controller on the IO die, which is made on an older, more mature process with fewer defects, makes complete sense. Your suggestion, on the other hand, would increase costs with little to no benefit.

TL;DR: Don't assume that AMD's engineers didn't consider several different designs and choose the most cost-effective one.
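The defect argument can be sketched with the classic Poisson yield model, where the chance of a die escaping defects drops exponentially with its area. The defect density and die areas below are illustrative placeholders, not foundry figures:

```python
# Poisson die-yield model behind the "smaller dies yield better" argument.
# Defect density and die areas are made-up illustrative values.
import math

def die_yield(area_mm2: float, defects_per_mm2: float) -> float:
    """Fraction of dies with zero defects: Y = exp(-D * A)."""
    return math.exp(-defects_per_mm2 * area_mm2)

D = 0.001  # defects per mm^2 (illustrative)
small_ccd = die_yield(70, D)    # small chiplet, roughly CCD-sized
monolithic = die_yield(250, D)  # hypothetical larger monolithic die
print(round(small_ccd, 3), "vs", round(monolithic, 3))
```

Even at this mild defect density the small die yields noticeably better, and the gap widens fast as dies grow or the process gets dirtier, which is exactly why the cores go on small CCDs and the IO die stays on a cheap mature node.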
 
Exactly.
Keeping relatively small, separate dies (cores/IO) can improve wafer yields vastly, and you also get the scalability to put together many different numbers of dies in a package + you can cheap out on the IOD, as you don't need the latest state-of-the-art lithography for that one.

Though I believe that if the uncore part is on-die, right next to the cores, a lot of things improve, but then you lose all the above.
It's a trade-off that AMD has chosen. Simple as that.
 
Though I believe that if the uncore part is on-die, right next to the cores, a lot of things improve, but then you lose all the above.
I'll go one step further: I do not even believe that. Otherwise, the 8700G (which does have the MC on the same die as the cores) would show massive gains with RAM tuning/compatibility. The only thing it could improve is idle power consumption, but if only the MC is on the main CCD, and the rest of the IO is on a separate chip (like in the drawing above), then not even that. It's just extra cost with no benefit.

What we need, in my opinion, is an improved IF (something like Intel's tiles, or the Navi 31-32 interposer), and better CCD placement (closer to the centre of the package).
 
Well again, cache is king & if AMD ever decided to lace the x700G or x600G with X3D levels of cache, we'd have a monster CPU+GPU combo ~ oh wait, that's (almost) what Halo is for :slap:

Strix Halo is for me the most interesting chip from AMD in the last 3-5 years, especially seeing how Strix Point performed!
 
I'll go one step further: I do not even believe that. Otherwise, the 8700G (which does have the MC on the same die as the cores) would show massive gains with RAM tuning/compatibility. The only thing it could improve is idle power consumption, but if only the MC is on the main CCD, and the rest of the IO is on a separate chip (like in the drawing above), then not even that. It's just extra cost with no benefit.

What we need, in my opinion, is an improved IF (something like Intel's tiles, or the Navi 31-32 interposer), and better CCD placement (closer to the centre of the package).
Yeah, not so much on DRAM compatibility, but on speed and latency. It's just a logical thought, nothing more.
Agree on the rest...
A faster interconnect is needed, and AMD will improve it at some point... They could've done it now with Zen 5, but again, they chose not to.
I think they will eventually get away with this one, after the X3Ds and the many optimizations that will probably come.

But I don't see Zen 6 being able to operate well enough if the IOD and FCLK/UCLK stay the same.

At the least, FCLK needs to gain like +50%, or even 2x its current speed.

Can't imagine, though, how this would affect power if they keep using the same type of interconnect.
Probably they will introduce a more advanced one, where higher speed will not draw much more power.
I can't comment further because I'm lacking knowledge on the subject.
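For reference, per-CCD fabric read bandwidth scales linearly with FCLK, so the +50% and 2x scenarios are easy to put into numbers. The 32-bytes-per-FCLK-cycle read width assumed below is the commonly cited figure for the CCD-to-IOD link; treat it as an assumption, not an AMD spec quote:

```python
# Back-of-envelope Infinity Fabric read bandwidth per CCD.
# Assumes a 32 B/FCLK-cycle read link width (commonly cited, unverified here).

def if_read_bw_gbs(fclk_mhz: float, bytes_per_cycle: int = 32) -> float:
    """Read bandwidth in GB/s for a given FCLK in MHz."""
    return fclk_mhz * 1e6 * bytes_per_cycle / 1e9

for fclk in (2000, 3000, 4000):  # today's typical, +50%, 2x
    print(fclk, "MHz ->", if_read_bw_gbs(fclk), "GB/s")
```

So going from ~2000 MHz to 3000-4000 MHz FCLK would lift the per-CCD read path from 64 GB/s to 96-128 GB/s, which is the kind of headroom the post is asking for.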
 
Can't imagine, though, how this would affect power if they keep using the same type of interconnect.
Probably they will introduce a more advanced one, where higher speed will not draw much more power.
I can't comment further because I'm lacking knowledge on the subject.
They will have to. 25-30 W with 6000 MHz RAM is already a ridiculous amount, imo (no wonder I'm running mine at bog standard 4800).
 
They will have to. 25-30 W with 6000 MHz RAM is already a ridiculous amount, imo (no wonder I'm running mine at bog standard 4800).
AM4 is no better either... The 5900X can't go below 37 W PPT with (3600 MT/s) 1800 MHz 1:1:1, even if all Windows power settings are set to efficiency.

[screenshot: power readings]

Core+SoC combined can go down to <20 W, but there is still another 15~20 W "lost" somewhere...

Single-CCD CPUs will have lower Core+SoC because of lower SoC power (~10 W), but it's still not as if PPT will drop to <20 W.
 
You probably have to play with the Windows power plan for this.

Here are 5600X, 7700X, 7900X and 9700X PPT and Core+SoC with higher RAM frequency.
(Just random screenshots from my library)
 
and better CCD placement (closer to the centre of the package)
Yes, the arrangement seems weird. I think the IOD needs to be placed close to the centre because all signals from the LGA (the land grid array) are connected to it, and that's part of the issue. But I don't understand why the CCDs had to be moved that far to the edge of the package. Too many wires in between? How many wires does each IF link take?
 
I want these to absolutely annihilate my 7800X3D in gaming so that I have something to look forward to.
Gen-on-gen upgrades are a waste of time on CPUs; better to skip a gen.

Zen 6 is your next upgrade.

We all have a pediatric in us when gaming is involved, just saying. Even the geriatrics lol.
Based on AM4 and 5800X3D performance, I can see AMD making Zen 6 very competitive. AMD stagnates performance with Zen 5, then makes a big bang with Zen 6. It's plausible.

You make it sound like AMD is sandbagging on purpose, which I don't believe is the case here.

Zen 5, and specifically Turin, looks like it was designed for the datacenter more so than client desktop. So it will excel in that area and looks to be just average on client desktop. I also believe the IOD and lack of memory bandwidth hurt it more on desktop than they will on server, which doesn't have those bottlenecks.
 
On any hardware really, it's even worse on phones because they'll throttle way quicker than desktop/laptops or even tablets.
 