
AMD Ryzen 9000 Zen 5 Single Thread Performance at 5.80 GHz Found 19% Over Zen 4

I'll wait for the last X3D chip on AM5.
 
When AMD first released the Zen 2 architecture, CPU-Z's author (or Intel) apparently didn't like Zen 2 out-performing the Intel chips of the time, so a new benchmark version was released that reduced AMD scores by some 15% while Intel scores stayed the same. I have never taken the CPU-Z benchmark seriously since, as it's apparently just an Intel-sponsored benchmark.
In that case we can ignore AMD vs. Intel results in CPU-Z, but AMD vs. AMD is still relevant, and it shows a pretty chunky improvement over Zen 4.
 

For those who don't know, clock-for-clock Zen 4 offered only a 1% improvement over Zen 3 in CPU-Z. So a 19% gain in such a shallow benchmark suggests major design changes.
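To make the clock-for-clock idea concrete, here's a minimal sketch of the normalization in Python, with made-up placeholder scores (the leaked figures are unconfirmed, so these are NOT real CPU-Z results):

```python
# Hypothetical scores purely for illustration; NOT real CPU-Z results.
zen4_score, zen4_ghz = 800.0, 5.7
zen5_score, zen5_ghz = 952.0, 5.8

# Divide each score by its clock to get a per-GHz (IPC-like) figure, then compare.
uplift = (zen5_score / zen5_ghz) / (zen4_score / zen4_ghz) - 1
print(f"Per-clock uplift: {uplift:.1%}")  # ~16.9% with these made-up inputs
```

The point: a raw 19% score gap at slightly different clocks shrinks a little once you normalize per GHz.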

But wasn't this leak declared a fake?
 
Which 8-core chip runs at a 230 W power limit? That is what the 7900X and 7950X have.
I am talking about AM4.
With AM5, AMD decided to give users what they were cheering for: some extra performance for much higher power consumption.
 
The CPU-Z benchmark has always been bad. It is essentially a look at a best-case scenario: it runs entirely from the L1 instruction cache and consists of synthetic operations that are easy for modern CPUs.
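For illustration only: CPU-Z's real kernel is native code, but a toy Python caricature of such a "best case" kernel (tiny working set, pure ALU work, no memory traffic) might look like this:

```python
import timeit

def kernel(n: int) -> int:
    # A handful of locals and simple ALU ops; a native-code equivalent of
    # this loop would fit entirely in registers and the L1 instruction cache,
    # so it never stresses the memory hierarchy or the wider core.
    a, b = 1, 1
    for _ in range(n):
        a = (a * 3 + b) & 0xFFFFFFFF  # keep the value word-sized
        b = (b ^ a) + 7
    return a ^ b

print(timeit.timeit(lambda: kernel(100_000), number=10))
```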
 
It could very well be fake, as all rumors and leaks come from unofficial sources. But who declared this one so?
"Don't bother. The baidu thread started like this

Op claimed that amd's launching zen5 in august, then october. Said that the source has proofs and can confirm it

Did a 180 turn 1 day later and claimed june launch, july availability (was known long before he made his post)

A user put up a screenshot of alleged zen5 cpuz bench, deleted it after a couple hours

He claimed that he did it just for fun and didn't expect people to repost and take it seriously

Chinese users laughing at wccf reposting his shit

A number of chinese tech forums have already started to warn users against sharing baidu bs and threatened bans or infrator points. Your choice on whether ya wanna believe in them"

"Also, the CPU-Z screenshot doesn't list AVX-VNNI in ISA extensions."

 
At 5.8 GHz it doesn't just equal a 14900K. It equals an overclocked and unstable 14900K.
Are you sure about that? The baseline profile didn't affect ST performance when it was benchmarked; even locking the 14900K to 65 W gives the same result in ST. MT results are where we will probably see Zen 5's gains, but in ST, RPL is still very strong.
 
Are you sure about that? The baseline profile didn't affect ST performance when it was benchmarked; even locking the 14900K to 65 W gives the same result in ST. MT results are where we will probably see Zen 5's gains, but in ST, RPL is still very strong.
Different benchmarks, different test beds, and as Denver investigated, this could all be fake.

"Don't bother. The baidu thread started like this

Op claimed that amd's launching zen5 in august, then october. Said that the source has proofs and can confirm it

Did a 180 turn 1 day later and claimed june launch, july availability (was known long before he made his post)

A user put up a screenshot of alleged zen5 cpuz bench, deleted it after a couple hours

He claimed that he did it just for fun and didn't expect people to repost and take it seriously

Chinese users laughing at wccf reposting his shit

A number of chinese tech forums have already started to warn users against sharing baidu bs and threatened bans or infrator points. Your choice on whether ya wanna believe in them"

"Also, the CPU-Z screenshot doesn't list AVX-VNNI in ISA extensions."

Thanks for checking into this rumor. Fortunately, we don't have to wait long for at least AMD's own numbers. Lisa Su's keynote is next Monday, and if we are lucky, review samples will follow shortly thereafter.
 
At 5.8 GHz it doesn't just equal a 14900K. It equals an overclocked and unstable 14900K.
Also, the CPU-Z benchmark has for years been considered one of the Intel-friendly ones.

While I doubt it, I hope AMD is considering bringing the X3D chips out the same day as the regular ones. They can put a ridiculously high price on them if they want, but it would be stupid not to announce them together with the regular ones. They have to finally start understanding the power of marketing. Zen 5 will see a totally different, much higher level of acceptance if an 8-core 9800X3D annihilates everything in gaming benchmarks with differences of 20-50%. If they fear internal competition, they can start that chip at $550. Zen 4 and AM5 would have had much higher success if the X3D chips were introduced together with the new platform.
They're not bringing the X3D until 2025 (internal leaks already confirmed this); it will be announced in January. Since Intel doesn't have Arrow Lake ready, these will just hang out at $550 until there's a reason to drop them.
 
I would call it a fake, wouldn't you?

Moreover, I still remember the story about how the developers of this utility revised their tests after, IIRC, Zen 1 showed better results (and its result was of course downgraded).
Didn't someone from Intel have contact with these developers back then?
 
I would call it a fake, wouldn't you?

Moreover, I still remember the story about how the developers of this utility revised their tests after, IIRC, Zen 1 showed better results (and its result was of course downgraded).
Didn't someone from Intel have contact with these developers back then?

I think you won't find any concrete proof, but it was pretty obvious back then that Intel simply paid the developer to change the benchmark to be more "representative of real workloads". It was also a time when Intel proclaimed what a real workload is and what isn't, and of course excluded anything Zen was particularly good at.
 
Questionable source, but about where I expect Zen 5 to land: in the 15-25% range.
 
I've been reading "next generation 10-20% IPC lift" from both sides every year for the last ~15-20 years.

I know better than to believe those claims until I see them. It's usually too good to be true.
 
I've been reading "next generation 10-20% IPC lift" from both sides every year for the last ~15-20 years.

I know better than to believe those claims until I see them. It's usually too good to be true.
Maybe on the Intel side they were comfortable with single-digit increases for years (Skylake).

I wouldn't say that's the same for AMD, though, who has shown us double-digit IPC increases easily over the last 10 years.

AMD Excavator to Zen 1: roughly a 52% IPC increase
  • Zen 1 → Zen+: 3% IPC increase
  • Zen+ → Zen 2: 15% IPC increase
  • Zen 2 → Zen 3: 19% IPC increase
  • Zen 3 → Zen 4: 13% IPC increase
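Compounding those per-generation numbers (as quoted here; AMD's own slides differ slightly) gives the cumulative picture:

```python
# Per-generation IPC gains as listed in the post above.
gains = [0.03, 0.15, 0.19, 0.13]  # Zen1->Zen+, Zen+->Zen2, Zen2->Zen3, Zen3->Zen4

total = 1.0
for g in gains:
    total *= 1 + g
print(f"Zen 1 -> Zen 4 cumulative IPC uplift: {total - 1:.0%}")  # ~59%
```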
 
It's hard to compare across different sources, so obviously bring salt. Anyways, on to more leaks...
You guys have to consider that a lot of those results are from BEFORE the whole Intel baseline instability situation... remember, if you run the 14900K/S/F today with the baseline preset, you're losing a lot of multi-threaded performance compared to the release-day reviews.
 
Maybe on the Intel side they were comfortable with single-digit increases for years (Skylake).

I wouldn't say that's the same for AMD, though, who has shown us double-digit IPC increases easily over the last 10 years.

AMD Excavator to Zen 1: roughly a 52% IPC increase
  • Zen 1 → Zen+: 3% IPC increase
  • Zen+ → Zen 2: 15% IPC increase
  • Zen 2 → Zen 3: 19% IPC increase
  • Zen 3 → Zen 4: 13% IPC increase
Don't forget about the clock speed increase: 1700X at 4.1 GHz vs. 5800X at 5.0 GHz or 5950X at 5.1 GHz. Multi-core CPU enhancements delivered through Windows updates also improved performance, and faster RAM made a discernible difference up to 3600 MHz.
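Folding the clock gain in with the IPC gains from the list above gives a rough single-thread speedup for the CPUs named here (a back-of-the-envelope sketch using the quoted boost clocks, ignoring RAM and scheduler effects):

```python
ipc_uplift = 1.03 * 1.15 * 1.19  # Zen 1 -> Zen 3, per the list above (~1.41x)
clock_ratio = 5.1 / 4.1          # 1700X at 4.1 GHz vs 5950X at 5.1 GHz (~1.24x)

print(f"Approx. single-thread speedup: {ipc_uplift * clock_ratio:.2f}x")  # ~1.75x
```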
 
Maybe on the Intel side they were comfortable with single-digit increases for years (Skylake).

I wouldn't say that's the same for AMD, though, who has shown us double-digit IPC increases easily over the last 10 years.

AMD Excavator to Zen 1: roughly a 52% IPC increase
  • Zen 1 → Zen+: 3% IPC increase
  • Zen+ → Zen 2: 15% IPC increase
  • Zen 2 → Zen 3: 19% IPC increase
  • Zen 3 → Zen 4: 13% IPC increase
Intel essentially stagnated from Skylake to Comet Lake, and between Alder Lake and the Raptor Lake refresh there was very little IPC improvement. The general performance improvement was mainly due to aggressive clock speeds starting with the 13x00 series. So while AMD was busy improving their CPU architecture to deliver higher performance, Intel was busy tweaking their chips to soak in as much power as possible to deliver the high clock speeds.
 
The CPU-Z single-core bench has pretty much zero relevance to real-world applications and uses a tiny part of the CPU. Even if the 19% increase in this bench is true, it's like saying Zen 5 has better L1 latency or something along those lines. And for some reason the news article sounds as if, after this 19% gain, Zen 5 has caught up to 14th gen's IPC, which is obviously not the case: Zen 4 is already close to 14th gen's IPC in actual workloads.

Now, if it's a 19% increase in a real-world application, that would be progress, sort of similar to Zen 2 → Zen 3. Based on the architectural changes, it should at least be more than Zen 3 → Zen 4.
 
AMD tried to promote its chips as super efficient. They did that by keeping the 12- and 16-core chips at 8-core-chip power consumption levels. Then users online were praising Intel's chips for being 1% faster in single-threaded benchmarks and games while using twice the power. What was AMD expected to do, other than offer users what they wanted? That +1% performance for a +50% power increase.
Intel didn't drag AMD into anything. Users and the tech press did. They are so desperate to keep handing wins to Intel that they made efficiency look like a secondary, unimportant feature.

Note 1: High power consumption is an industry-wide trend, in both desktop CPUs and desktop GPUs, enabled by advances in chip manufacturing and by larger and heavier coolers.

Note 2: Neither AMD nor Intel is FORCING users to run the CPU at 250 watts, and neither AMD nor Intel nor Nvidia is FORCING gamers to run GPUs at 400 watts. Running a CPU or GPU at high wattage is an OPTION offered to consumers. Another OPTION is to limit the CPU's max temperature to 75 °C in the BIOS (single-threaded performance stays the same, while multi-threaded performance is reduced). 144 Hz 4K HDR gaming is just an OPTION offered by high-end displays. Complaining about 250-watt CPU consumption, while multiple fairly obvious options for limiting and optimizing power consumption and temperatures exist, is a sign of incompetence and misunderstanding on the side of the desktop user.
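For what it's worth, capping power doesn't even require the BIOS. A minimal Linux-side sketch, assuming the machine exposes the powercap (RAPL) sysfs interface; the node name (intel-rapl:0) and constraint layout vary by platform, and writes need root:

```python
from pathlib import Path

# Assumed path; present on typical Intel systems, may differ or be absent on AMD.
RAPL = Path("/sys/class/powercap/intel-rapl:0")

def read_limit_watts() -> float:
    # constraint_0 is typically the long-term package power limit, in microwatts.
    return int((RAPL / "constraint_0_power_limit_uw").read_text()) / 1e6

def set_limit_watts(watts: float) -> None:
    (RAPL / "constraint_0_power_limit_uw").write_text(str(int(watts * 1e6)))

print(f"Current package limit: {read_limit_watts():.0f} W")
# set_limit_watts(125)  # uncomment (as root) to cap the package at 125 W
```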
 
Note 1: High power consumption is an industry-wide trend, in both desktop CPUs and desktop GPUs, enabled by advances in chip manufacturing and by larger and heavier coolers.
I would phrase this the other way around: high power consumption is an industry-wide trend caused by a relative lack of advances in chip manufacturing.

For many years there were regular, huge improvements in manufacturing processes that enabled huge increases in transistor budgets and huge efficiency gains. Those manufacturing improvements have slowed down a lot in recent years, but the industry and consumer expectation is for the performance of the end product to keep increasing.

Note 2: Neither AMD nor Intel is FORCING users to run the CPU at 250 watts, and neither AMD nor Intel nor Nvidia is FORCING gamers to run GPUs at 400 watts. Running a CPU or GPU at high wattage is an OPTION offered to consumers. Another OPTION is to limit the CPU's max temperature to 75 °C in the BIOS (single-threaded performance stays the same, while multi-threaded performance is reduced). 144 Hz 4K HDR gaming is just an OPTION offered by high-end displays. Complaining about 250-watt CPU consumption, while multiple fairly obvious options for limiting and optimizing power consumption and temperatures exist, is a sign of incompetence and misunderstanding on the side of the desktop user.
This is about cost to the consumer. If consumers prioritized buying, and paying for, low power consumption and efficiency, the products offered would reflect that. Basically, for a CPU or GPU it means going larger and wider (more cores, more shaders) at lower frequencies. This is exactly what enterprise and data-center customers are doing: they feel the power and cooling requirements more, and the initial investment of buying the thing is relatively smaller. Thus, the products offered there are more efficient.

Nothing stops me or you from buying an RTX 4090 and running it at half its power limit; at 225 W it becomes a very efficient GPU with a surprising amount of its performance intact. The problem: this brings its performance down to, let's say, RTX 4080 level, and an RTX 4080 would be much cheaper to buy.

Although, if I remember correctly, the 4090 is most efficient somewhere around 300 W, where it does not lose as much performance and would still be faster and more efficient than the RTX 4080. More costly, still.
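Setting such a cap is a one-liner with NVIDIA's own tool; a hedged sketch (requires admin rights and a driver/board that accepts the value):

```python
import subprocess

# Cap the board power at 225 W (the half-power example above), then read it back.
subprocess.run(["nvidia-smi", "-pl", "225"], check=True)
subprocess.run(["nvidia-smi", "--query-gpu=power.limit", "--format=csv"], check=True)
```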
 
I would phrase this the other way around: high power consumption is an industry-wide trend caused by a relative lack of advances in chip manufacturing.

For many years there were regular, huge improvements in manufacturing processes that enabled huge increases in transistor budgets and huge efficiency gains. Those manufacturing improvements have slowed down a lot in recent years, but the industry and consumer expectation is for the performance of the end product to keep increasing.

If you mean the 1980s and 1990s, then I mostly agree. After the year 2000 it gets more complicated: AMD's Bulldozer CPUs were a step back compared to the K10 architecture, which wasn't caused by manufacturing but by micro-architecture, and Intel only slightly increasing IPC for 10 years is related to micro-architecture and to the absence of a competitive micro-architecture from AMD and ARM.

While the size of a silicon atom is indeed a constant, the truth is that the number of transistors on a single chip sold to a consumer has kept increasing exponentially for the past 20 years (albeit with a slightly lower exponent than before 2000), which means the main obstacle to more performance is a lack of progress in micro-architecture, not a lack of transistors. GAA transistors will give CPU micro-architecture designers a lot of extra transistors to use throughout the next decade. But breakthroughs in micro-architecture have a PACING different from the PACING of advances in manufacturing: huge mistakes in micro-architecture actually do happen sometimes (while mistakes in manufacturing are tiny by comparison). Micro-architecture doesn't follow Moore's law.
 
This is an interesting point. I am not sure it comes down completely to micro-architecture.

It has been clear for a while that frequencies will no longer increase considerably, which has an effect on how micro-architectures need to evolve. Some, if not most, of the evolution has happened and will have to happen at other levels: multi-/many-core CPUs and their consequences at the system and software level have been significant and will continue to be.

Purely on micro-architecture, there seem to be two cardinally different directions being attempted: going small and simple, like RISC-V or some ARM designs, or going wide and complex, for which Apple's M series is probably the best mainstream example. I think the problem with simple is that it will eventually have to rely on either clock speed or parallelism to improve; clock speeds are not expected to improve considerably these days, and parallelism works with cores of any size or complexity. Plus, of course, ASICs for specific tasks for efficiency improvements.

Interesting times either way :D
 
This is an interesting point. I am not sure it comes down completely to micro-architecture.

Of course performance largely comes down to micro-architecture. For example, Python (or any programming language with arbitrary-precision integers as the default integer type) suffers a fairly large performance slowdown (even if you manage to JIT-compile the Python code into native code) JUST because CPUs don't have native support for accelerating arbitrary-precision integers. The same can be said about the performance hit caused by garbage collection, and about CPUs lacking hardware support for message passing (that is, acceleration for concurrent programming languages).

Just a note: JIT compilation arrives in CPython with version 3.13, although it might initially be disabled by default and might only noticeably improve performance from version 3.14+ (https://peps.python.org/pep-0744/).
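A quick, hedged illustration of the arbitrary-precision point (a minimal sketch; the exact slowdown depends on the CPython build and CPU):

```python
import timeit

# CPython ints are arbitrary precision. Once a value no longer fits in a
# machine word, arithmetic falls back to multi-"digit" software routines,
# because the CPU has no native support for wider integers.
small = 12345            # fits in a machine word
big = 2**256 + 12345     # needs several internal "digits"

t_small = timeit.timeit(lambda: small * small, number=1_000_000)
t_big = timeit.timeit(lambda: big * big, number=1_000_000)
print(f"word-sized multiply: {t_small:.3f} s")
print(f"256-bit multiply:    {t_big:.3f} s")  # noticeably slower per operation
```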

It has been clear for a while that frequencies will no longer increase considerably, which has an effect on how micro-architectures need to evolve. Some, if not most, of the evolution has happened and will have to happen at other levels: multi-/many-core CPUs and their consequences at the system and software level have been significant and will continue to be.

Purely on micro-architecture, there seem to be two cardinally different directions being attempted: going small and simple, like RISC-V or some ARM designs, or going wide and complex, for which Apple's M series is probably the best mainstream example. I think the problem with simple is that it will eventually have to rely on either clock speed or parallelism to improve; clock speeds are not expected to improve considerably these days, and parallelism works with cores of any size or complexity. Plus, of course, ASICs for specific tasks for efficiency improvements.

Interesting times either way :D

The RISC-V standard will get a "J" extension for accelerating dynamic programming languages (https://github.com/riscv/riscv-j-extension), though I have no idea when; it is taking a long time. With it in place, competition between ARM/x86 and RISC-V might become quite interesting.
 