
AMD to Revise Specs of Ryzen 7 9700X to Increase TDP to 120W, to Beat 7800X3D

I totally disagree because they also use the extra cache in some of their server chips, so obviously something other than games benefits.
Servers don't need high clock speeds, but general use / productivity home PCs do.

I have also heard many owners of x3D chips saying that their system is more responsive than non x3D cache chips, but I admit that could easily be placebo.
I have a 7700X and a 7800X3D as well. It is placebo. Both chips are equally responsive in general use, the 7700X maybe a tad more due to the higher clock speed, although by an insignificant margin.

But more and more software will use this as time goes on, it's not 1980 anymore, and when you break it down, it's actually not much cache per core. You fall for the marketing trick of big numbers but forget it's shared between 8/16 cores. You also forget the fact that AMD cannot keep up with Intel without using the 3D cache band-aid.
Cache per core? All of the L3 can be used by any core, so technically, you still have 96 MB in a single-core workload.
It's not a marketing trick. You either have high voltage and high clock speeds, or more cache. There's no other way around it.
Who said AMD can't keep up? Who said a few percent difference matters? Are we even talking about the same topic? :wtf:

I get you on the thermals, but AMD should have taken Zen5 to 3nm and stopped using the bolt-on cache, and simply added it to the die. It's time for AMD to stop playing money grabbers and just get this done. Then they can use this bolt-on x3D cache for an even higher-end range of server chips, which they can charge even more crazy prices for. Zen 6 better go down this route.
It doesn't matter what node your chip is on. If you add +64 MB of cache, you basically double the die size, which results in far fewer chips per wafer and a higher chance that any given die has a defect, and thus increases the price of the end product significantly. Not to mention, in 2D you have longer interconnects, which adds latency; you probably also need a larger socket, and so on. You can't just bolt as much cache onto your CPU as you want.
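To put rough numbers on that, here's a quick back-of-the-envelope sketch using a simple Poisson yield model and the classic dies-per-wafer approximation. The defect density and the two die areas are made-up illustration values, not AMD or TSMC figures:

```python
import math

# Rough illustration of why die area hits cost so hard. All numbers below are
# assumptions picked for the example, not actual AMD/TSMC figures.
WAFER_DIAMETER_MM = 300
DEFECT_DENSITY_PER_CM2 = 0.1      # assumed defects per square centimetre

def dies_per_wafer(die_area_mm2):
    """Classic approximation: gross wafer area minus an edge-loss term."""
    radius = WAFER_DIAMETER_MM / 2
    return int(math.pi * radius**2 / die_area_mm2
               - math.pi * WAFER_DIAMETER_MM / math.sqrt(2 * die_area_mm2))

def poisson_yield(die_area_mm2):
    """Simple Poisson yield model: yield = exp(-defect_density * area)."""
    return math.exp(-DEFECT_DENSITY_PER_CM2 * die_area_mm2 / 100)

# e.g. a chiplet-sized die vs. a hypothetical one with the L3 doubled on-die
for area_mm2 in (70, 140):
    candidates = dies_per_wafer(area_mm2)
    good = candidates * poisson_yield(area_mm2)
    print(f"{area_mm2:>4} mm^2: {candidates} candidate dies, "
          f"yield {poisson_yield(area_mm2):.0%}, ~{good:.0f} good dies per wafer")
```

With these assumed numbers, doubling the die area roughly halves the candidate dies and knocks the yield down on top of that, so the good dies per wafer drop by well over half.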
 
I really don't understand why people care. These CPUs are unlocked, you can configure them however you want. That's like caring about the out of the box brightness of your TV. Whatever


No, they really are not. In order to achieve the same performance as a Zen or a 14th-gen chip, they need substantially more cooling and power draw.

It's not like you'd decrease your TV's brightness or refresh rate in order for it to not be a house heater.
 
Got my 5950X from the get-go, never had issues. At stock it idles at 18 watts; with RAM at 1.45 V it jumps to 33 watts, on an ASUS Dark Hero X570 with DOCP, 2.94 V, and the ASUS water preset in the BIOS.
 
AMD's naming is their choice but in my opinion:
9700X should be at most 105W
9800X may be 120W.
Yeah, when the 65 W TDP for the 9700X dropped, I said: where is the middle ground? I got chewed up for saying that mobile-level efficiency on a desktop CPU with an unlocked multiplier doesn't make sense to me. That's why they have the non-X locked CPUs for maximum efficiency in that regard. Again, now I am saying: where is the middle ground? Hopefully the consumer can choose between maximum efficiency and full-blown overclocking. Imagine if AMD was sandbagging the specs only for the overclocking community to discover that it has more unlocked performance in the tank. I really hope the last part is true.
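For reference, the "middle ground" is easier to see in actual socket power. A quick sketch, assuming the commonly quoted stock ratio where the package power limit (PPT) on AM4/AM5 is about 1.35x the rated TDP:

```python
# AMD's stock socket power limit (PPT) on AM4/AM5 is commonly ~1.35x the
# rated TDP (65 W -> 88 W, 105 W -> 142 W, and so on). Treat the multiplier
# as an approximation of the stock presets, not an official formula.
PPT_FACTOR = 1.35

for tdp_w in (65, 105, 120, 170):
    print(f"TDP {tdp_w:>3} W  ->  PPT ~{round(tdp_w * PPT_FACTOR)} W")
```

So the jump being discussed here is roughly 88 W of socket power at 65 W TDP versus ~162 W at 120 W TDP, with the 105 W / ~142 W preset sitting in between.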

Interesting. Speaking of which, why do people keep saying the 7800X3D uses only 40-50 W? Mine often goes to 70 and even 88, especially while loading shaders (in games) and during video editing. Even during regular gaming, although that is indeed around 45-55.
Still significantly lower than the 7700X, although I would argue the 7800X3D is primarily a gaming chip, and we are not compiling shaders for a significant portion of the time spent with it.
 
It's not like you'd decrease your TV's brightness or refresh rate in order for it to not be a house heater.
You'd decrease it for the same reason you'd turn down your TV or your AC: you just don't like the way it's configured at stock.
 
I totally disagree because they also use the extra cache in some of their server chips, so obviously something other than games benefits. I have also heard many owners of x3D chips saying that their system is more responsive than non x3D cache chips, but I admit that could easily be placebo. But more and more software will use this as time goes on, it's not 1980 anymore, and when you break it down, it's actually not much cache per core. You fall for the marketing trick of big numbers but forget it's shared between 8/16 cores. You also forget the fact that AMD cannot keep up with Intel without using the 3D cache band-aid.

I get you on the thermals, but AMD should have taken Zen5 to 3nm and stopped using the bolt-on cache, and simply added it to the die. It's time for AMD to stop playing money grabbers and just get this done. Then they can use this bolt-on x3D cache for an even higher-end range of server chips, which they can charge even more crazy prices for. Zen 6 better go down this route.
A better question might be why even the next generation of AMD's mobile processors doesn't have at least 32 MB of L3 cache.

There's probably a limit in terms of scaling for the SRAM cells and access lines used in these caches. Regular 2D cache sizes have not increased much for years. Penryn had 6 MB of L2 at 45 nm, Skylake had 6-8 MB of L3 at 14 nm, and even current higher-end Intel and AMD non-X3D offerings have barely more than 30 MB of L3 accessible per core. Arguably, Penryn was the X3D of its day, with above 50% of the chip area being that cache, but I think the point still stands.
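For a rough sense of the area involved, here's a back-of-the-envelope estimate of how much silicon the L3 alone eats. The bit-cell size and the array overhead factor are assumptions roughly in the ballpark of a 7 nm-class high-density SRAM cell, not measured values:

```python
# Back-of-the-envelope: how much silicon does a big L3 eat? The bit-cell area
# and the overhead factor below are assumptions, not measured values.
BITCELL_UM2 = 0.027       # assumed 6T high-density SRAM bit-cell area
ARRAY_OVERHEAD = 2.0      # assumed multiplier for tags, sense amps, routing

def l3_area_mm2(megabytes):
    bits = megabytes * 1024 * 1024 * 8
    return bits * BITCELL_UM2 * ARRAY_OVERHEAD / 1e6     # um^2 -> mm^2

for mb in (32, 64, 96):
    print(f"{mb:>3} MB of L3  ->  roughly {l3_area_mm2(mb):.0f} mm^2")
```

For comparison, a whole Zen 4 CCD is reportedly only around 70 mm², so a 2D cache of that size would dominate the die, which is the whole argument for stacking it instead.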
 
...but seeing how a lot of people reacted in the thread about regular Zen 5 not beating X3D Zen 4 chips in gaming like that was a warcrime worthy of a Hague trial… well, the public deserves the nonsense companies pull, I suppose. Hopefully, they would leave in the old PPT settings as a pre-set option a la Eco mode.
People are only asking for the X3D models to be launched at the same time as the normal ones, which would be the obvious strategy IF the marketing bullshit stayed out of the door.
 
So AMD is going a bit in the "more power, more performance" direction now.
 
I have also heard many owners of x3D chips saying that their system is more responsive than non x3D cache chips, but I admit that could easily be placebo.

I have not noticed this. Went from an OC'd 5600 to a 5800X3D, and at the desktop it's the same experience, but in games the 1% lows in CPU-limited situations are very nicely improved. Even going from an OC'd 2600 to a 5700X3D, the desktop experience was only subtly better, as any 4+ core CPU design from the last 10 years does more than a competent job managing Windows. While the desktop performance differences between my Haswell i7-4790 and Zen 4 Ryzen 7840 are noticeable, they're still in the same class of UI experience with 16 GB and a decent SSD.
 
A better question might be why even the next generation of AMD's mobile processors doesn't have at least 32 MB of L3 cache.
Dragon Range does have 32 MB (besides the optional 3D V-Cache). It's a mobile variant of Raphael, 6 to 16 cores, and is best suited for laptops with high-end GPUs.

You could argue that it's not mobile, but it really is.
 
Dragon Range does have 32 MB (besides the optional 3D V-Cache). It's a mobile variant of Raphael, 6 to 16 cores, and is best suited for laptops with high-end GPUs.

You could argue that it's not mobile, but it really is.
My original point still stands, though.

If only they'd get X3D on mobile APUs. Though I suppose they would have, if they could.
 
If only they'd get X3D on mobile APUs. Though I suppose they would have, if they could.
I don't see the point. Does the added cache always improve gaming performance significantly, regardless of GPU performance?

I actually don't know, but I wouldn't bet on it. I mean, at which point does V-cache become pointless? Kind of important if you can't upgrade your laptop GPU anyway, integrated or not.

Edit: Or do you mean added cache shared with the IGP?
 
Ah, the good old “crank the power up to win in benchmarks” move. I would have thought AMD would be smarter than this, but apparently not and they’ve resorted to cribbing from Intel's playbook. A mistake, IMO, but seeing how a lot of people reacted in the thread about regular Zen 5 not beating X3D Zen 4 chips in gaming like that was a warcrime worthy of a Hague trial… well, the public deserves the nonsense companies pull, I suppose. Hopefully, they would leave in the old PPT settings as a pre-set option a la Eco mode.

the 9950X 16-core, the 9900X 12-core, the 9700X 8-core, and the 9600X 6-core

This is quite bad news, both for consumers and for AMD, which will be forced to put very low price tags on these if it wants them to even barely move off the shelves.
If you ask me, I see no incentive or reason to buy anything from this generation - the stagnation is simply too pronounced, and the core count deficit is too strong.

AMD definitely needs a more innovative approach if they don't want to lose market share to Intel.

Ryzen 9 9950X 16-core
Ryzen 9 9900X 16-core
Ryzen 7 9700X 12-core
Ryzen 5 9600X 10-core


This or DOA.
 
It's not like you'd decrease your TV's brightness or refresh rate in order for it to not be a house heater.
It's not a question of being a house heater. You don't need a bigger cooler to use your TV at max brightness.

You can lower your power limit to suit your cooling, or you can buy a bigger cooler. Or you can leave it as it is and accept that it might run into Tjmax occasionally. It's a matter of choice.
 
I don't see the point. Does the added cache always improve gaming performance significantly, regardless of GPU performance?

I actually don't know, but I wouldn't bet on it. I mean, at which point does V-cache become pointless? Kind of important if you can't upgrade your laptop GPU anyway, integrated or not.

Edit: Or do you mean added cache shared with the IGP?
Acknowledged. It really only benefits applications working with datasets that fit within the larger cache but not the smaller one, and that would otherwise be bottlenecked by RAM bandwidth or latency - very often games. I was probably clouded by my experience with a 7800X3D, which was a pretty big leap from a 5800H in a lot more things than just gaming performance. Had I upgraded from a 7700X, the impression would likely be different.

A shared cache - maybe an L4 - on the IOD or whatever equivalent shared with the IGP could be a fun idea, though I wonder how much good that would actually do.
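If anyone wants to see the "does the working set fit?" effect for themselves, here's a crude pointer-chase sketch. It's pure Python, so interpreter overhead blunts the absolute numbers, but the cost per step still climbs once the array outgrows the last-level cache. The sizes are arbitrary examples, not tuned to any particular CPU:

```python
import array, random, time

def single_cycle_permutation(n):
    """Sattolo's algorithm: a random permutation that forms one big cycle,
    so the chase visits every element instead of a short sub-loop."""
    data = array.array('q', range(n))          # 8 bytes per element, contiguous
    for i in range(n - 1, 0, -1):
        j = random.randrange(i)                # j < i forces a single cycle
        data[i], data[j] = data[j], data[i]
    return data

def ns_per_step(n, steps=2_000_000):
    data = single_cycle_permutation(n)         # setup for the big size takes a while
    idx, start = 0, time.perf_counter()
    for _ in range(steps):
        idx = data[idx]                        # each step is a dependent memory load
    return (time.perf_counter() - start) / steps * 1e9

for n in (100_000, 4_000_000, 32_000_000):     # roughly 0.8 MiB, 31 MiB, 244 MiB
    print(f"{n:>10,} elements (~{n * 8 / 2**20:,.0f} MiB): "
          f"~{ns_per_step(n):.0f} ns per step")
```

The jump between the middle and the large size is basically the "working set fell out of L3" penalty that the extra V-cache pushes further out.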
 
I was probably clouded by my experience with a 7800X3D, which was a pretty big leap from a 5800H in a lot more things than just gaming performance. Had I upgraded from a 7700X, the impression would likely be different.
Yeah, but I'd like to see some benchmarks where the added cache makes sense. I guess maybe it does with a 4060, but probably not with a 1630 lol

The reason I'm asking is that the universal recommendation of throwing a 7800X3D at anything doesn't always seem worthwhile.
A shared cache - maybe an L4 - on the IOD or whatever equivalent shared with the IGP could be a fun idea, though I wonder how much good that would actually do.
The memory bus width is doubled and the RAM speed is much higher for Strix Point; I guess that'll have to do for now. Also, it will help in many benchmarks.
 
I don't see the point. Does the added cache always improve gaming performance significantly, regardless of GPU performance?

I actually don't know, but I wouldn't bet on it. I mean, at which point does V-cache become pointless? Kind of important if you can't upgrade your laptop GPU anyway, integrated or not.

Edit: Or do you mean added cache shared with the IGP?
The extra cache is pointless at a hard GPU limit, or when 1% and 0.1% low FPS doesn't matter. I suppose it'll be useful for GPU upgrades - your system might last a bit longer before you have to swap your CPU.
 
Acknowledged. It really only benefits applications working with datasets that fit within the larger cache but not the smaller one, and that would otherwise be bottlenecked by RAM bandwidth or latency - very often games. I was probably clouded by my experience with a 7800X3D, which was a pretty big leap from a 5800H in a lot more things than just gaming performance. Had I upgraded from a 7700X, the impression would likely be different.

A shared cache - maybe an L4 - on the IOD or whatever equivalent shared with the IGP could be a fun idea, though I wonder how much good that would actually do.

An L4 cache shared with the iGPU can do a lot of good.

I started PC gaming with a NUC5i7: 384 cores and no L4 cache (Iris 6100). Later I upgraded to a NUC7i7: 384 cores and 64 MB of L4 cache (Iris Plus 650). 49% faster in Time Spy GFX, 73% faster in Fire Strike GFX. Similar improvements noticed in all games. The GPU cores had not changed substantially (you can compare scores from other parts with the same number of cores and cache), and the system memory only went from 1866 to 2133 MHz between the two models, so not a huge difference there either.

Shared L4 for iGPU gaming could be a huge help.
 
Every CPU down to Core i3 12th gen is good enough if you are playing at 2160p.


 
I don't see the point. Does the added cache always improve gaming performance significantly, regardless of GPU performance?

I actually don't know, but I wouldn't bet on it. I mean, at which point does V-cache become pointless? Kind of important if you can't upgrade your laptop GPU anyway, integrated or not.

Edit: Or do you mean added cache shared with the IGP?
Mainly depends on your FPS target. If you target 200 FPS - which means you're going to lower settings to get there even with a mid-range card - then the X3D might make some difference. For most people, though, it's just an overly expensive CPU that offers no benefit, because they don't have a 4090 and they don't play at 1080p low. A 7600 for half the price is usually the better choice.
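That's basically the bottleneck model in a nutshell: delivered FPS is roughly limited by whichever of the CPU or GPU takes longer per frame, so a faster CPU (or more cache) only shows up when the CPU side is the limiter. A toy sketch with made-up frame times, not benchmark results:

```python
# Toy bottleneck model: delivered FPS is roughly set by the slower of the
# CPU and GPU frame times. All frame times below are made-up illustration
# numbers, not measurements.
def delivered_fps(cpu_ms, gpu_ms):
    return 1000 / max(cpu_ms, gpu_ms)

scenarios = [
    ("1440p high, mid-range GPU", 4.5, 12.0),   # GPU-bound: faster CPU buys nothing
    ("1080p low, fast GPU",       4.5,  3.0),   # CPU-bound: cache/IPC gains show up
]

for name, cpu_ms, gpu_ms in scenarios:
    faster_cpu = delivered_fps(cpu_ms * 0.85, gpu_ms)   # pretend 15% better CPU frame time
    print(f"{name}: {delivered_fps(cpu_ms, gpu_ms):.0f} fps -> "
          f"{faster_cpu:.0f} fps with a 15% faster CPU")
```

In the GPU-bound case the number doesn't move at all; in the CPU-bound case the same CPU improvement shows up almost one-to-one.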
 
Every CPU down to Core i3 12th gen is good enough if you are playing at 2160p.

Until the 50x0 series is launched...
 
Every CPU down to Core i3 12th gen is good enough if you are playing at 2160p.
A better CPU gives you more headroom to increase FPS by lowering quality settings, which becomes more important if you have something slower than a 4090.

I suppose it'll be useful for GPU upgrades - your system might last a bit longer before you have to swap your CPU.
I agree when it comes to desktop, but the post you quoted was mainly about mobile APUs, where you can't change the GPU anyway.
 
Alternative scenario: keep the TDP at 65 W and let PBO do some heavy lifting for a change / as an overclocker's toy.
le:the 1,two,three-4 by a hypothetical stretch core boosting sure is similar.
 
Alternative scenario: keep the TDP at 65 W and let PBO do some heavy lifting for a change / as an overclocker's toy.
le:the 1,two,three-4 by a hypothetical stretch core boosting sure is similar.
Reviewers only use OOTB settings. And this CPU looks bad because of that.

As I have been saying since Zen 4 - AMD needs to stop this 3D cache grab and incorporate the extra L3 directly into the die.
This situation has only happened because of this greed.

AMD has outdone itself at its own stupid game. Intel is going to give them a bloody nose, and they don't have a product that competes for another 6 months, and then the cost of those parts will become an issue.

Zen 6 needs to bring an end to this greedy farce, or this will just happen again.
 