
AMD Unveils 5 nm Ryzen 7000 "Zen 4" Desktop Processors & AM5 DDR5 Platform

now that's a new generation right there
 
What, you think that 3-4 CU iGPU is going to consume a noticeable amount of power? Yeah, no, sorry. Considering AMD's iGPUs run fine with 3-4x the CUs in 15 W U-series APUs, I really don't think that cut-down variant will make even a dent in the power consumption of their desktop chips.
You said power limit or budget; you think the IGP on a desktop chip has the same limits as on ULV notebook ones? :wtf:
It's the 7950X being 45% faster than the 12900K.
Maybe I missed it, but where do you see this chip being the flagship(?) 7950X? No model numbers were revealed, IIRC.
Interesting how the CCX dies seem to have gold plating, most likely for soldering to the IHS, while the chipset die is just the usual silicon color, perhaps for the usual PCM paste :)
Interesting observation, but I'm thinking they're hiding the (maximum) core count here! Almost certainly nothing to do with paste or gold plating.
 
So:

1) obtain the render scene program;
2) set up a similar 12900K with memory type, size and timings as close as possible;
3) run the program with core affinity set to E-cores only (see the rough affinity sketch below);
4) go to the BIOS and disable the E-cores, or set core affinity to P-cores only;

would solve it?
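If it helps, here's a minimal sketch of the affinity part (steps 3 and 4) in Python using psutil. It assumes a stock 12900K layout where logical CPUs 0-15 are the P-core threads and 16-23 are the E-cores, and a hypothetical "renderscene.exe" process name - check your actual topology in Task Manager first. The BIOS step obviously can't be scripted.

# Rough sketch: pin an already-running render process to E-cores (or P-cores) only.
# Assumes logical CPUs 0-15 = P-core threads, 16-23 = E-cores on a 12900K;
# verify the numbering on your own system before relying on it.
import psutil

E_CORES = list(range(16, 24))   # assumed E-core logical CPU IDs
P_CORES = list(range(0, 16))    # assumed P-core logical CPU IDs

def pin_by_name(name, cpus):
    # Set CPU affinity for every running process whose name contains `name`.
    for proc in psutil.process_iter(["name"]):
        if proc.info["name"] and name.lower() in proc.info["name"].lower():
            proc.cpu_affinity(cpus)
            print("pinned PID", proc.pid, "to CPUs", cpus)

# Step 3: restrict the (hypothetical) render program to E-cores only.
pin_by_name("renderscene.exe", E_CORES)
# Step 4 alternative to the BIOS route: pin it to P-cores only instead.
# pin_by_name("renderscene.exe", P_CORES)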
 
Until Zen 4 appears it is all piss in the wind as nobody really knows
But we are kinda looking forward to it.
I might finally upgrade my i7 2600k processor either with RocketLake or Zen 4, whichever turns out to be better and be done for the next 10 years. :)
 
No, I meant IPC. What that graph shows isn't IPC, it's clock-normalized real world (Cinebench) performance. As for the variability: that's why you make a decision on how to configure test systems - say, whether to stick to the fastest supported JEDEC RAM spec, to go for "reasonably attainable XMP", etc. Either way, you make a decision and stick to it. And while motherboard choice does affect performance in some ways, most of those are down to power delivery and boosting - i.e. again something you can control for. And then you run a representative suite of benchmarks, not just one.

Using a single benchmark to indicate IPC is just as invalid as using a single benchmark to indicate the overall performance of a product. Or, arguably, even more invalid, as using the term IPC purports to speak to more fundamental architectural characteristics, which is then undermined all the more by using a single benchmark with its specific characteristics, quirks and specific performance requirements. IPC as a high-level description of real world performance per clock must be calculated across a wide range of tests in order to have any validity whatsoever.
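To make the point concrete, here's a minimal sketch (Python, with made-up scores) of what a multi-benchmark, clock-normalized comparison would look like: per-benchmark ratios at a fixed clock, aggregated with a geometric mean. The benchmark names and numbers are purely illustrative, not real results.

from statistics import geometric_mean

# Clock-normalized scores (both CPUs locked to the same fixed frequency),
# higher is better. Benchmark names and numbers are invented for illustration.
cpu_a = {"cinebench": 1650, "7zip": 98000, "blender": 41.0, "code_build": 52.0}
cpu_b = {"cinebench": 1500, "7zip": 91000, "blender": 38.5, "code_build": 50.5}

# Per-benchmark per-clock ratios, then a geometric mean as the aggregate figure.
ratios = [cpu_a[k] / cpu_b[k] for k in cpu_a]
overall = geometric_mean(ratios)

print("per-benchmark ratios:", ", ".join(f"{r:.3f}" for r in ratios))
print(f"geomean per-clock advantage of A over B: {(overall - 1) * 100:.1f}%")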
What do you mean it is not an IPC metric? It clearly shows what the score is in a controlled environment when the CPUs are locked to a certain frequency to estimate their instructions per clock cycle. I think that is as valid as any other. Maybe it has not been done across many benchmarks, but it is a valid IPC metric. If you want to test IPC on a CPU you must have a controlled environment, thus a frequency cap. Obviously, testing with a broader range of apps would have given different IPC results, but it is still valid and uses the most common benchmark considered adequate for that type of measurement. Anyway, what you are talking about, measuring IPC with more benchmarks, is rather a general performance indicator than an IPC indicator.
 
I hope the new heat spreader eliminates the heat dissipation issues of the current gen - that is, that I'll be able to cool anything more powerful than a Ryzen 3 in an SFF build.
 
You said power limit or budget; you think the IGP on a desktop chip has the same limits as on ULV notebook ones? :wtf:
No, I'm just giving an illustrative example of how little power an iGPU needs, compared to your assertion that it will meaningfully affect overall power draw. And remember: this is a tiny, low power iGPU, not one tuned for performance. This is not an APU, which is the term AMD uses for all their hardware with performance-oriented iGPUs. It's meant to give you a display output without a dGPU, not to run complex 3D scenes at high performance. Could it clock very high and consume some power? Sure! Will it at stock? Not likely. The drastically reduced CU count compared to even the mobile iGPUs will reduce both base power consumption for desktop rendering and peak power draw. And it certainly won't eat up a meaningful amount of the CPU's power budget when running a CPU-heavy compute workload - the power required for a modern iGPU displaying a simple desktop is a few watts. What I'm saying is that the effect of this will be negligible in this context, which directly contradicts your argument that the iGPU would somehow draw so much power that these might be Zen4c cores - that the iGPU power draw is equivalent to the per-core power draw difference of going from 96 to 128 cores.
Interesting how the CCX dies seem to have gold plating, most likely for soldering to the IHS, while the chipset die is just the usual silicon color, perhaps for the usual PCM paste :)
It's very unlikely to be gold plating; rather, it's just the color of the diffusion barrier material used for TSMC's 5 nm process. Plenty of dies throughout the ages have had a golden sheen to their top surface - Intel's Sapphire Rapids and Ponte Vecchio are good examples, but there are plenty more. AMD CPUs (and APUs) have been soldered for several generations already, after all; you don't need to gold plate the die for that to work. (And there's no way they're combining different TIMs under the same IHS - the chance of contamination between the two would be far too high, and the high temperatures for soldering would likely harm the paste.)

What do you mean it is not an IPC metric? It clearly shows what the score is in a controlled environment when the CPUs are locked to a certain frequency to estimate their instructions per clock cycle. I think that is as valid as any other. Maybe it has not been done across many benchmarks, but it is a valid IPC metric. If you want to test IPC on a CPU you must have a controlled environment, thus a frequency cap. Obviously, testing with a broader range of apps would have given different IPC results, but it is still valid and uses the most common benchmark considered adequate for that type of measurement. Anyway, what you are talking about, measuring IPC with more benchmarks, is rather a general performance indicator than an IPC indicator.
It's an IPC metric for a single workload, which is fundamentally unrepresentative, and thus fails to meaningfully represent IPC in any general sense of the term. That is literally what the second paragraph you quoted says. The term IPC inherently makes a claim of broadly describing the per-clock performance of a given implementation of a given architecture - "instructions" is pretty general, after all. Attempting to extrapolate this from a single workload is essentially impossible, as that workload will have highly specific traits in how it loads the different parts of the core design, potentially/likely introducing significant bias, and thus failing to actually represent the architecture's ability to execute instructions generally. That's why you need some sort of representative sample of benchmarks to talk about IPC in any meaningful sense. It can still be a somewhat interesting comparison, but using it as the basis on which to say "X has A% higher IPC than Y" is very, very flawed.
 
It's an IPC metric for a single workload, which is fundamentally unrepresentative, and thus fails to meaningfully represent IPC in any general sense of the term. That is literally what the second paragraph you quoted says. The term IPC inherently makes a claim of broadly describing the per-clock performance of a given implementation of a given architecture - "instructions" is pretty general, after all. Attempting to extrapolate this from a single workload is essentially impossible, as that workload will have highly specific traits in how it loads the different parts of the core design, potentially/likely introducing significant bias, and thus failing to actually represent the architecture's ability to execute instructions generally. That's why you need some sort of representative sample of benchmarks to talk about IPC in any meaningful sense. It can still be a somewhat interesting comparison, but using it as the basis on which to say "X has A% higher IPC than Y" is very, very flawed.
Well, I disagree with you, and I can say that extrapolating this (like you said) from a variety of applications which behave differently - and there is such a vast number of them - is impossible as well.
For example, one CPU is better than another in one application, and the other CPU is better than the first one in a different application. If IPC is a metric describing instructions per second which are a constant, the outcome should be the same for every app, but it is not. So performance does not always equal IPC.
For instance.
The 5800X and 5800X3D in games: these are essentially the same processor, yet they behave differently in gaming and differently in office apps. So out of curiosity, am I talking here about IPC or performance? Somehow, you say that IPC has to be measured across a variety of benchmarks to be valid. I thought that was the general performance of a CPU across the most-used applications.
 
Maybe I missed it, but where do you see this chip being the flagship(?) 7950X? No model numbers were revealed, IIRC.
Just being brief I guess, it could be a lower TDP 16 core model.
In that case how do you know 7950X is the flagship and not the lower one?
It could be 7950X and 7950XT! (jk)
 
I know you folks keep cracking the "Raptor Lake will destroy this" whip like it's going out of style, but I think you're missing a large part:

Raptor Lake just doubles the E-cores, so as most real-world loads hit a scaling wall, Raptor Lake will hit that same scaling wall earlier than Zen 4 (8P + 16E versus 16P!)

It's going to take a perfectly-scaling application for Raptor Lake to beat the 7950!
Indeed, unless they rein in power use, their desktop designs will follow their laptops, i.e. underutilized because POWER.
An i5 can beat an i7 in laptop land.

Go see.

As for this 15% ST / 30% MT and PCIe 5.0 all round, sounds good, can't wait for the competition.

I wouldn't be buying gen 1 straight away though.

I do like the Intel fanboys' declarations of failure, without adequate facts available or tests to validate their concerns.

Plus Raptor Lake could be late - Intel likes late these days - so much still to be resolved.
 
And remember: this is a tiny, low power iGPU, not one tuned for performance.
It can easily consume 5-15 W of power, more if it's overclocked! Fact is, it's hogging the "TDP" of at least 1-2 cores in there; everything else is irrelevant.
It could be 7950X and 7950XT! (jk)
Yes, and it could be a 7970X (6) GHz Edition; it's been what, 11 years since that infamous slip-up against Kepler? :D
In that case how do you know 7950X is the flagship and not the lower one?

Hence the question mark.
 
Yes, and it could be a 7970X (6) GHz Edition; it's been what, 11 years since that infamous slip-up against Kepler? :D
Those were the days; it lost in performance/W, but it had a more forward-looking architecture design vs Kepler (plus 3 GB instead of 2 GB).
 
Those two numbers are literally the same thing. 204s is 31% faster than 297s; 297s is 45% slower than 204s. They chose the more conservative wording, which uses the existing product as the baseline for comparison. That's the only sensible, good-faith comparison to make - especially as a "slower than" wording in marketing is guaranteed to be flipped into a "faster than" wording by readers who don't consider how this changes the percentage. And that would be a shitshow for AMD.

No. That is wrong.

Completing a workload in 31% less time means the rate of work done is 45% higher.

Faster/slower refers to a comparison of value per time (like frames per second; for example, 145 fps is 45% faster than 100 fps). Now, AMD did not use faster/slower in the slide; they said it took 31% less time, which is the correct wording, because they are doing a seconds-per-workload comparison and the seconds for the Zen 4 rig were 31% less than for the 12900K rig (297 * 0.69 ≈ 204).

If you want to use faster / slower you need to calculate the rate which is easy enough, just do 1/204 to get the renders / s which is 0.0049. Do the same for the 12900K and you get 1/297 which is 0.0034

0.0049 is a 45% faster rate than 0.0034. 0.0034 is 31% slower than 0.0049.

On a TPU graph of rate with 12900K at 100% Zen 4 would be 145%. If Zen 4 was at 100% the 12900K would be at 69%. In both cases 12900K * 1.45 = zen 4 (100*1.45 = 145 and 69*1.45 = 100)

If you don't want to use rate, you need to avoid faster/slower wording and stick to less time / more time wording, where you can say that Zen 4 took 31% less time or the 12900K took 45% more time. These are simple calculations, though, so rearranging them is pretty trivial.
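For anyone who wants to check the arithmetic, here's a quick Python sanity check using the 204 s / 297 s Blender times from the slide:

# The two Blender render times from the slide.
zen4_time, adl_time = 204.0, 297.0

less_time = (adl_time - zen4_time) / adl_time        # time saved relative to the 12900K
faster_rate = adl_time / zen4_time - 1               # rate advantage: (1/204)/(1/297) - 1
more_time = (adl_time - zen4_time) / zen4_time       # extra time the 12900K needs

print(f"Zen 4 takes {less_time:.1%} less time")            # ~31.3%
print(f"Zen 4's render rate is {faster_rate:.1%} higher")  # ~45.6%
print(f"the 12900K takes {more_time:.1%} more time")       # ~45.6%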
 
Now, AMD did not use faster/slower in the slide; they said it took 31% less time, which is the correct wording, because they are doing a seconds-per-workload comparison and the seconds for the Zen 4 rig were 31% less than for the 12900K rig (297 * 0.69 ≈ 204).
The problem is that they said it:
[attached image: AMD slide]
 
If IPC is a metric describing instructions per second which are a constant
I think you meant instructions per clock here, is that right?

However, the number of instructions that a given CPU core executes in one clock cycle is most certainly NOT a constant. Rather, it varies in a very wide range.
 
The performance number of 15% in Cinebench R23 is underwhelming, as it does not allow AMD to catch up to Intel. But that is not the thing that disappoints me the most.

For me it's the fall announcement. That is quite late in my opinion; it should have been released in early summer. Fall makes it very close to Raptor Lake, and AMD will have to truly deliver.

We still don't know what AMD has made, and many assumed that AMD went the Intel way and made wider cores. They may have not. Cinebench R23 is not really cache/memory sensitive, so if they went for improved cache bandwidth and latency + increased cache size + a reworked memory subsystem, their gains wouldn't really show up in CB R23. But they would show up in many other applications.

We will see; a reworked and improved memory subsystem will improve multithreaded scores and gaming.

But it's way too early to tell. I am not sure that AMD sandbags that much. I think they went and designed a CPU that will rock where they have the highest margins: the EPYC lineup. People say AMD is dead, but if AMD has a weak gen or two on desktop while still destroying everything on server, the company will still thrive. They make so much more money on a chiplet in an EPYC CPU than in a Ryzen.

I will wait to see the review numbers, but right now I am neutral on the product. Not really hyped, but not really disappointed.
 
The problem is that they said it:
[attached image: AMD slide]
That analogy means that Zen4 will need to become 31% slower to get to the 12900K performance BUT 12900K needs to become 45% faster to match Zen4. So, will Raptor Lake do that jump? And is Zen4 as small a leap as many seem to believe? Btw, >15% single core performance improvement vs Zen3 could mean that in 20 apps the minimum increase is 15%, not the average. Sandbagging for sure there imho.
 
So much guessing and speculation in this thread. Wait until this is out and reviewed by TPU, people, calm down lol
 
I'm not sure either, but sandbagging + 6400 CL32 is an odd combination
Sandbagging by using applications that don't really benefit from the Zen 4 improvements. But yeah, I know it's still fishy. Maybe it's how AMD likes to build mindshare.

That is a huge departure from the era of finely crafted benchmarks showing the new product in its best light. AMD sandbagged Zen 3 a bit, but by that much? I don't know.
 
The problem is that they said it:
[attached image: AMD slide]
AMD needs better proofreaders then. I thought it said 31% less time, but yeah, can't argue with a picture (well, you can, but it's stupid).
That analogy means that Zen4 will need to become 31% slower to get to the 12900K performance BUT 12900K needs to become 45% faster to match Zen4. So, will Raptor Lake do that jump? And is Zen4 as small a leap as many seem to believe? Btw, >15% single core performance improvement vs Zen3 could mean that in 20 apps the minimum increase is 15%, not the average. Sandbagging for sure there imho.
>15% was just in CB R23 ST. Zen 3 was +13% over Zen 2 in that same test scenario.

AMD are keeping true performance close to their chest.
 
That analogy means that Zen4 will need to become 31% slower to get to the 12900K performance BUT 12900K needs to become 45% faster to match Zen4. So, will Raptor Lake do that jump? And is Zen4 as small a leap as many seem to believe? Btw, >15% single core performance improvement vs Zen3 could mean that in 20 apps the minimum increase is 15%, not the average. Sandbagging for sure there imho.
Your guess is as good as mine, I don't know. The 12900K Blender comparison can have many interpretations; the ST Cinebench score, on the other hand, not so many. What, the preproduction sample could hit 5.5 GHz during actual gameplay but in the ST Cinebench test had trouble reaching even Zen 3 clocks? Not so likely. Or it was more than 30% or 25% or 20% (perfectly fine round numbers) but AMD decided to just tease with a >15%? Also seems unlikely.
With nearly +10% frequency in 1T (and much more in nT with 170 W), the IPC gain would be just ~5%; deduct 2-5% or whatever for the fast memory they used, and we are talking a Zen->Zen+ level IPC difference, which I refuse to believe.
For the sake of competition they'd better deliver. I just want the pricing to be competitive (by that I mean: if in 1080p gaming the 7800X/7600X is similar to the 13700K/13600K in performance (+3% is similar imo) while losing by much bigger margins in multithreaded tests like Cinebench, V-Ray, transcoding etc., they'd better be cheaper than Raptor Lake...)
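For what it's worth, the back-of-the-envelope math there is easy to sanity-check (Python, using the post's rough numbers - the ~10% clock figure is the poster's assumption, not a confirmed spec):

# Poster's rough numbers: >15% 1T uplift, assumed ~+10% 1T clock.
perf_gain = 1.15    # claimed >15% single-thread uplift (CB R23 ST)
clock_gain = 1.10   # assumed frequency gain - an estimate, not a confirmed spec

# performance = frequency x IPC, so the implied per-clock gain is:
ipc_gain = perf_gain / clock_gain - 1
print(f"implied IPC gain: {ipc_gain:.1%}")   # roughly 4.5%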
 
I've read they will all include an iGPU; that seems like a waste of die area and money for the consumer. I would prefer two versions, with and without, like Intel is doing.
 
I've read they will all include an iGPU; that seems like a waste of die area and money for the consumer. I would prefer two versions, with and without, like Intel is doing.
Well, to please you they can always make SKUs where they disable the iGPU in the I/O die. That is what Intel does. Not sure what the real benefit of that is, to be honest. I prefer having it in case I need it (a GPU problem, for example) and just deactivating it in the BIOS.
 