
AMD Ryzen 7 9700X

Depends on who you read. HUB is super negative and IMO has something wrong going on, but basically had a go at others who got better results than they did.

Tom's Hardware got much better than 5% for productivity and gaming even at stock, let alone PBO'd. At stock they got a 12-game geomean of 167 FPS for the 9700X and 149 for the 7700X, so that's 12%, and PBO added a further 9%. HUB claims PBO did bugger all, maybe 1-2% if you're lucky.
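Quick back-of-the-envelope check on those figures (just percentage math on the geomeans quoted above; the PBO number simply stacks the claimed extra 9% on top):

```python
# Relative gains from the 12-game geomeans quoted above (Tom's Hardware numbers as cited).
geomean_9700x = 167  # FPS, 9700X at stock
geomean_7700x = 149  # FPS, 7700X at stock

stock_gain = geomean_9700x / geomean_7700x - 1
print(f"9700X vs 7700X, stock: {stock_gain:.1%}")        # ~12.1%

pbo_gain = 0.09  # further gain claimed with PBO enabled
total_gain = (1 + stock_gain) * (1 + pbo_gain) - 1
print(f"9700X with PBO vs 7700X stock: {total_gain:.1%}")  # ~22.2%
```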

TPU and HUB have bigger game testing suites compared to other tech outlets: TPU tested 14 games (10 + 4 RT) and HUB tested 13 games. Meanwhile Tom's tested 7 and KitGuru tested 4 games.

It's not very encouraging to see the 9700X slower than the 7700 in 1% low FPS:
[Chart: minimum FPS, 1920x1080]
 
This is what is confusing. HUB especially had a 9700X with pathetically poor all-core clocks of just over 4.4 GHz, much worse than others, so I'm not surprised their results are so bad.
Tbf AMD's got a bad rep for release-day hardware, especially GPUs. Meanwhile Intel just juices their processors to pass every test at launch, only for them to become slower afterwards, e.g. see Smeltdown o_O
 
Fair comment, but with PBO unlocked/unlimited, the 9700X is up to 35% faster than the 7700X -

Imagine what we could do with 105 W plus Curve Optimizer and the new Curve Shaper; I imagine even then the 9700X would still significantly best the 7700X. I would really like some of the tech channels to try this... My 5900X runs a -0.05 V offset and a CO of -20, and will boost to 4.95 GHz single-core and 4.35 GHz all-core.
This further convinces me that AMD made the right design choice: the 9700X is efficient out of the box compared to its competitors, and those who like to overclock can get nice benchmarking gains via PBO etc., with a large drop in efficiency as the penalty, so the choice is there. That gives it more of an old-school feel, as in recent times undervolting and controlling power has been more of a thing, with hardware released too far up its performance curve.

My ideal future for PC would be high-end CPU 65 W TDP, enthusiast 125 W TDP, high-end GPU 150 W TDP and enthusiast GPU 250 W TDP. We're in a world where energy consumption is becoming more efficient, but the PC has been going against that trend; it's interesting that AMD have mentioned they want to get closer to mobile efficiency.
 
The more data I consume about these Zen 5 desktop chips, the more underwhelmed I am, honestly. I hope the 12/16-core and X3D parts, as well as maybe BIOS updates, can help that perception along. Nice to see a (relative to a certain other subset of threads) civil discussion taking place too, despite the shortfalls of the product and launch.
 
I think we're suffering the "nvidia" effect.
They design cores and architect the CPUs with the server market in mind, and then what remains is repurposed for the desktop environment.
Those power numbers look great for making a 64/128/192-core EPYC CPU.

When you have to discuss whether the performance gain is 5-10% or 10-15%... just imagine the same discussion with any other kind of product: cars, fridges, houses. Absurd.
 
I think we're suffering the "nvidia" effect.
They design cores and architect the CPUs with the server market in mind, and then what remains is repurposed for the desktop environment.
Those power numbers look great for making a 64/128/192-core EPYC CPU.

For servers it is extremely pathetic, as well, because the servers require maximum computational density, which a crappy 8-core CCD won't offer.
They needed a 16-core CCD at this point, a 24-core CCD at 3nm, and a 32-core CCD at 2nm, and preferably now.
 
Why even put an iGPU on it? That makes no sense at all.
Like, for millions of office PCs? The iGPU is tiny, it sits in the I/O die which is fabricated on a cheaper node, and it costs next to nothing.
 
For servers it is extremely pathetic, as well, because the servers require maximum computational density, which a crappy 8-core CCD won't offer.
They needed a 16-core CCD at this point, a 24-core CCD at 3nm, and a 32-core CCD at 2nm, and preferably now.
Well, I presume (100% speculation) that smaller dies with a proportional power reduction allow fitting more chiplets in the same area. Isn't the high-core-count server market the main target?

About the iGPU,
I've seen enough panic cases after a discrete GPU failure to know an iGPU is a lifesaver. You can also buy and sell your GPU without any issue. And disable it, if you want, to save some watts...

And imagine if, by some wizardry trick, some magic voodoo, you were able to completely hibernate your discrete GPU on your desktop until it's needed, IMAGINE THERE WAS THE TECHNOLOGY for that. Some kind of port pass-through to save some power... No more "idle power" discussions. Pff.

There is a variant of this tech in some laptops, but it's a Green feature, AI-powered, cloud-connected, and, for our security, not portable to desktop.
 
Well, I presume (100% speculation) that smaller dies with a proportional power reduction allow fitting more chiplets in the same area. Isn't the high-core-count server market the main target?

A 192-core server CPU requires 24 eight-core chiplets, but only 6 thirty-two-core ones. A 24-chiplet CPU built from 32-core chiplets would have 768 cores.
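Just to put the chiplet arithmetic in plain numbers (nothing here beyond the figures already mentioned):

```python
# Chiplets needed for a 192-core part at different CCD sizes,
# and what a 24-chiplet package would yield with bigger CCDs.
target_cores = 192
for ccd_cores in (8, 16, 32):
    print(f"{ccd_cores}-core CCDs: {target_cores // ccd_cores} chiplets for {target_cores} cores")

print(f"24 chiplets x 32 cores = {24 * 32} cores")  # 768
```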
 
For servers it is extremely pathetic, as well, because the servers require maximum computational density, which a crappy 8-core CCD won't offer.
They needed a 16-core CCD at this point, a 24-core CCD at 3nm, and a 32-core CCD at 2nm, and preferably now.

It's literally built for servers with the design choices they made. They offer Zen 5c with 16 cores per CCD, for 192 cores with Turin, for maximum density.

They chose to dedicate die area to accelerating AVX-512 and FP workloads, with a die the exact same size as Zen 4's. Yet there's a good 15% increase in ST performance, and more in FP-heavy workloads. MT is where this falls short, at around a 10% gain, and gaming too. But they know they can slap that cache on, which should pay good dividends in a seemingly memory/IO-bottlenecked design.

I agree that a 16-core CCD would be nice, but for now that's only Zen 5c's domain.
 
While not taking the 7800X3D into account, gaming performance seems bad for the 9700X, but the 9600X is actually not that bad.
For less than $300 you get within 6% of the gaming performance of the i9-14900K, which costs twice that amount:
and you achieve that at a third of the i9-14900K's power draw:

The 9700X, on the other hand, is pathetic. It should have been released with a default 105 W TDP and a lower price.
It is indeed power starved. Especially the multi-core perf. is bad; the 7700X wins over the 9700X thanks to its higher TDP.
The full potential of the 9700X probably unlocks with an OC + undervolt approach.

Tom's Hardware shared interesting information: "SkatterBencher also revealed that the Ryzen 9000 is heavily voltage-limited, featuring a peak VID limit of 1.4V for single core and 1.35V for multi-core. To help alleviate this problem, the overclocker utilized a PBO2 scalar preset 10X to boost the multi-core maximum voltage to 1.375V, enabling the chip to use more voltage to turbo higher than it previously would."
Maybe AMD is learning from Intel ... being safe is better than being sorry later.
 
My ideal future for PC would be high-end CPU 65 W TDP, enthusiast 125 W TDP, high-end GPU 150 W TDP and enthusiast GPU 250 W TDP. We're in a world where energy consumption is becoming more efficient, but the PC has been going against that trend; it's interesting that AMD have mentioned they want to get closer to mobile efficiency.
Oh yes, electricity bills are right up there, so while it's nice that, for example, the 9700X can use up to 170 W, I would more than likely limit it to 105/125 W for daily use. As I've commented previously, I undervolt and curve-optimize my 5900X so it usually never gets above 120 W under most gaming loads, and I also curve-optimize my 4080 so it usually never gets above 220 W in the same situations.

I'm genuinely baffled that so many people are happy with this kind of progress after 2 years. Then again, it just means that we don't have to upgrade. Clearly they don't want our money anymore. :D
Well, it's not that black and white. As @W1zzard said, you have the best of both worlds: you can either run it as a very efficient 65 W part, or get the best out of the 9600/9700 by enabling PBO. And then, depending on the cost of electricity where you are, enable a TDP limit of your choosing.

They don't gain anything at 105W - even with PBO enabled the gains are minimal. Better to get the same performance with 35% less power than to get 8% better performance at the same power.
Hmm, I would still like to see some tests done at 105, 125 and 150 W to see where the sweet spot is between power usage and gained performance, to find out where the point of diminishing returns is.
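Something like this, purely illustrative; the scores below are made-up placeholders, not measurements, just to show how a power-limit sweep would expose the point of diminishing returns:

```python
# Hypothetical power-limit sweep: find where extra watts stop buying much performance.
# The scores are invented placeholders; real data would come from a 65/105/125/150 W test run.
sweep = {65: 100.0, 105: 112.0, 125: 115.0, 150: 116.5}  # {power limit in W: relative MT score}

limits = sorted(sweep)
for lo, hi in zip(limits, limits[1:]):
    extra_watts = hi - lo
    extra_perf = sweep[hi] / sweep[lo] - 1
    print(f"{lo} W -> {hi} W: +{extra_perf:.1%} perf for +{extra_watts} W")
```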

They also mentioned that they will be comparing Zen 5 performance using DDR5-6000 and DDR5-8000 which can show if the CPU is memory bandwidth constrained in certain situations.
Ah yes, I too would be interested to see if Zen 5 has finally broken free from the low-speed DDR5 doldrums. But we need production-quality BIOS updates for the tech channels to test with, as the press-release BIOSes aren't stable enough.
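For reference, the theoretical bandwidth gap between those two speeds is easy to work out for dual-channel DDR5 (2 x 64-bit channels):

```python
# Theoretical peak bandwidth: MT/s x channels x 8 bytes per 64-bit channel.
def ddr5_bandwidth_gbs(mts, channels=2, bytes_per_channel=8):
    return mts * 1e6 * channels * bytes_per_channel / 1e9

for speed in (6000, 8000):
    print(f"DDR5-{speed}: {ddr5_bandwidth_gbs(speed):.0f} GB/s theoretical peak")  # 96 vs 128 GB/s
```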
 
I don't know why people get so hung up on the price. They're not higher than when the equivalent 7000 series CPUs launched, but lower. Just because the last gen now has dropped sufficiently in price doesn't mean the new gen should be given away.

I also believe Zen5 is quite good. What's bad is (once again) AMD's marketing. TBH they should fire the whole team. A much better way to market the new series (or at least the first two CPUs) would've been to plainly state that they're tuned for efficiency and if you would rather sacrifice that for performance then just enable PBO. Or perhaps add that functionality to Ryzen Master or some other tool. What's needed is the opposite of the "Eco Mode".

The reason why the price matters is that usually the price IS HIGHER but the performance is also higher. Sure, you can buy a Ryzen 5700X for much less than the 7700X cost on launch day, but the 7700X was also much faster and justified a price premium.

11th gen Intel? Meteor Lake laptops? Ryzen 9000. All go down in history as sometimes being slower than their predecessors while also carrying new, higher prices. Nope.
 
...but the 9600X is actually not that bad.
For less than $300 you get within 6% of the gaming performance of the i9-14900K, which costs twice that amount:

And then you realize that the 7600(X) offers almost exactly the same thing at $185-200. Or that you can get a much better 7700(X) for $10 more than the 9600X.
And that the 14600K also offers almost identical gaming performance at $300 and 10 W more, while offering drastically better productivity performance if you need it.

No sane person would choose either of these new CPUs at launch prices. The value is abysmal.
 
My ideal future for PC would be high end CPU 65W TDP, enthusiast 125W TDP, high end GPU 150W TDP and enthusiast GPU 250W TDP.

GPU ~ max 175 W.
Enthusiast CPU ~ max 95 W.
High-end CPU ~ max 65 W.
Mid-range CPU ~ max 51 W.
Low-end CPU ~ max 25 W.
Mobile CPU ~ between 5 and 35 W.
 
The reason why the price matters is that usually the price IS HIGHER but the performance is also higher. Sure, you can buy a Ryzen 5700X for much less than the 7700X cost on launch day, but the 7700X was also much faster and justified a price premium.

11th gen Intel? Meteor Lake laptops? Ryzen 9000. All go down in history as sometimes being slower than their predecessors while also carrying new, higher prices. Nope.
Well, this time the emphasis was on efficiency, not performance. I agree wholeheartedly that AMD should've communicated this a lot better. Also, we're talking MSRP. I wouldn't be surprised if the actual prices are somewhat lower. FWIW people have the option of going 7000 series without missing out. More choice is always welcome.
 
And then you realize that the 7600(X) offers almost exactly the same thing at $185-200. Or that you can get a much better 7700(X) for $10 more than the 9600X.
And that the 14600K also offers almost identical gaming performance at $300 and 10 W more, while offering drastically better productivity performance if you need it.

No sane person would choose either of these new CPUs at launch prices. The value is abysmal.
Of course, we can keep going backwards and backwards... this will work for every generation. Why not go for the Ryzen 3600? According to TPU it's currently the best bang for buck.
I was comparing the present (latest on the market). The best overall decision would be buying a second-hand 7800X3D for maybe €220. The problem is, there aren't (m)any on the second-hand market.

The 7600X is not far behind the 9600X, but the 9600X is superior in efficiency (20% less power draw while having +7% perf.) and is much better in apps that support AVX-512 workloads.
In addition to that, the 9600X runs 15°C cooler than both the 7700X and 7600X. That means you're fine with a less robust cooler, and the most important part (for me) is that it'd be much less noisy.
Having the 9600X in an SFF, I don't expect power throttling even with higher temps. The 7700X and 7600X, on the other hand, might be a tough nut to crack as they hit >80°C even with an NH-D15.
(Of course, when the 9600X is compared to the 7700 and 7600 there is no temperature advantage for the 9600X.)

Talking about sanity, no sane person would (now) choose the 14600K... not with Intel's approach to admitting the mistake and taking steps to solve the issue.
There's no information on whether the latest microcode update really stops the degradation. Only time will tell.
 
I get that for Zen 5 AMD went for efficiency; this will also help them with their EPYC server CPUs that will have even more cores (up to 192 for Turin, which will compete with efficient ARM CPUs).

Still, at 65 W the 9700X is handicapped, the $360 price is too high, and performance is inconsistent and lower than expected in some areas.

AMD was generally spot on with their IPC estimates for previous Zen releases, but this time they promised a 16% IPC increase and the performance doesn't seem anywhere near that. Maybe things improve with new driver/AGESA updates, but we'll see.
 
Of course, we can keep going backwards and backwards... this will work for every generation. Why not go for the Ryzen 3600? According to TPU it's currently the best bang for buck.
I was comparing the present (latest on the market). The best overall decision would be buying a second-hand 7800X3D for maybe €220. The problem is, there aren't (m)any on the second-hand market.

The 7600X is not far behind the 9600X, but the 9600X is superior in efficiency (20% less power draw while having +7% perf.) and is much better in apps that support AVX-512 workloads.
In addition to that, the 9600X runs 15°C cooler than both the 7700X and 7600X. That means you're fine with a less robust cooler, and the most important part (for me) is that it'd be much less noisy.
Having the 9600X in an SFF, I don't expect power throttling even with higher temps. The 7700X and 7600X, on the other hand, might be a tough nut to crack as they hit >80°C even with an NH-D15.
(Of course, when the 9600X is compared to the 7700 and 7600 there is no temperature advantage for the 9600X.)

And all of this is worth $100 to you? A 50% higher price?
You want to spend an extra $100 in order not to buy a robust cooler (a $35 Peerless Assassin is very robust, indeed). You actually need to buy a cooler for the 9600X, while the 7600 has one included (it will be loud, but it's free).
You want to spend an extra $100 to have 20% better efficiency? It's not even true. The 7600 literally has better gaming efficiency, and in Cinebench the 9600X only wins by 10%. How many years will it take to save that $100 in energy costs?
But you get 5% more gaming performance and 11% productivity over the 7600? Wow, for $100 more, very impressive indeed.

I have no idea where your logic comes from. Why not the 3600? Because the 9600X is 53% faster than the 3600. And the 7600 is 46% faster than the 3600. And the 3600 is on an old platform with inferior features (PCI-E, NVMe, USB). You're paying more, but you're getting a lot more. With the 9600X you literally get nothing over the 7600. Irrelevant improvement in efficiency and performance at a 50% higher price.

An RTX 4090 has terrible performance per dollar, but it doesn't matter. Why? Because it's 33% faster than the next card in the line-up. If the 9600X offered 33% more performance at a 50% higher price, nobody would be complaining.
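On the "how many years to save that $100" question, here's a rough payback sketch; the watt saving, usage hours and electricity price are assumptions for illustration, so plug in your own numbers:

```python
# Rough payback time for a $100 price premium via lower power draw.
# All inputs below are assumptions for illustration; substitute your own.
price_premium_usd = 100
watts_saved = 20          # assumed average saving under load
hours_per_day = 4         # assumed hours of heavy use per day
usd_per_kwh = 0.30        # assumed electricity price

kwh_saved_per_year = watts_saved / 1000 * hours_per_day * 365
savings_per_year = kwh_saved_per_year * usd_per_kwh
years_to_break_even = price_premium_usd / savings_per_year
print(f"~${savings_per_year:.2f}/year saved -> ~{years_to_break_even:.0f} years to break even")
```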
 
I get that for Zen 5 AMD went for efficiency; this will also help them with their EPYC server CPUs that will have even more cores (up to 192 for Turin, which will compete with efficient ARM CPUs).

Still, at 65 W the 9700X is handicapped, the $360 price is too high, and performance is inconsistent and lower than expected in some areas.

AMD was generally spot on with their IPC estimates for previous Zen releases, but this time they promised a 16% IPC increase and the performance doesn't seem anywhere near that. Maybe things improve with new driver/AGESA updates, but we'll see.
I think the quoted IPC figure is an average; some workloads get less and some more. Also, in highly threaded workloads the chip is throttling, so that would cancel out IPC gains.
 
And then you realize that the 7600(X) offers almost exactly the same thing at $185-200. Or that you can get a much better 7700(X) for $10 more than the 9600X.
And that the 14600K also offers almost identical gaming performance at $300 and 10 W more, while offering drastically better productivity performance if you need it.

No sane person would choose either of these new CPUs at launch prices. The value is abysmal.
For gaming, yes, but for productivity, no.
 
The iGPU also offers AV1 encoding if it's RDNA3, but 1080p is broken, unfortunately.
 
And that the 14600K also offers almost identical gaming performance at $300
In gaming the 14600K is faster, but for productivity, no; the 9700X has quite a gain in productivity over the 7700X and the 14600K.
Although I wouldn't buy 13th or 14th gen; those are frying themselves and I expect they won't have very good resale value. If you have to go with Intel, the 12700K would be a better option IMO.
And all of this is worth $100 to you? A 50% higher price?
You want to spend an extra $100 in order not to buy a robust cooler (a $35 Peerless Assassin is very robust, indeed). You actually need to buy a cooler for the 9600X, while the 7600 has one included (it will be loud, but it's free).
You want to spend an extra $100 to have 20% better efficiency? It's not even true. The 7600 literally has better gaming efficiency, and in Cinebench the 9600X only wins by 10%. How many years will it take to save that $100 in energy costs?
But you get 5% more gaming performance and 11% productivity over the 7600? Wow, for $100 more, very impressive indeed.

I have no idea where your logic comes from. Why not the 3600? Because the 9600X is 53% faster than the 3600. And the 7600 is 46% faster than the 3600. And the 3600 is on an old platform with inferior features (PCI-E, NVMe, USB). You're paying more, but you're getting a lot more. With the 9600X you literally get nothing over the 7600. Irrelevant improvement in efficiency and performance at a 50% higher price.

An RTX 4090 has terrible performance per dollar, but it doesn't matter. Why? Because it's 33% faster than the next card in the line-up. If the 9600X offered 33% more performance at a 50% higher price, nobody would be complaining.
Value and power efficiency are subjective; someone might have no problem spending $300 on a CPU to upgrade later, or might value efficiency for an ITX system, for example.
And you can keep wanting to go backwards for cheaper prices, but the cost of upgrading can end up being higher if it requires buying a new motherboard.
 