
AMD Zen 5 Microarchitecture Referenced in Leaked Slides

In general terms:
There is a difference between someone who lacks knowledge, misunderstands, misrepresents or even misremembers something, and someone who intentionally lies (sometimes pathologically). Once you've identified a pattern of lies, you can safely assume that most of what that person claims is untrue, and you should stop listening to them. In some cases (I'm not referring to specific persons here), there might even be some kind of underlying condition, and it's useful for anyone to learn how to identify this. This goes not only for people online, but even more so for people in real life, like family, friends and colleagues. This is life advice for everyone: don't let anyone manipulate you. :)

In terms of tech rumors, there are many "leakers" on YouTube, and some on various forums too. Some have been known to delete posts/videos once they've been proven embarrassingly wrong. I've seen this myself in the past based on my own notes, but I'm not going to say which "leakers" they are. Someone who cares more about this than I do should download the videos and keep them as evidence for later.
Don't get me wrong, I'm not saying that leakers have any malicious intention to spread misinformation. I'm only saying that they're storming the internet with whatever true or false information they can get their hands on to get the views, without any filter. Then the average reader, having no filter of their own, goes online, reads/watches such stuff, and forms an opinion on something that may not even be real. An absolute waste of time at best, misinformation and false judgment in the worst cases.

But most of all, as I've been saying for years, people need to use some kind of "sniff test" on any rumor, and if it doesn't pass, then you can pretty safely assume it's BS.
Just consider the following:
CPUs and GPUs are many years in the making. Intel/AMD/Nvidia have multiple generations in various stages of development at any time; a major architecture can take ~5+ years and a minor one ~3 years to reach market. Major design features are decided first, followed by minor ones, followed by tweaks and adjustments. The closer you are to a product launch, the less likely changes become, as any change can add years of development time. By the time a product reaches tapeout, usually ~1-1.5 years ahead of the intended launch, the design is complete; anything beyond that point is bug fixes or adjustments for yields etc.
Armed with this knowledge, I can say that e.g. the rumor that Intel was still deciding between SMT-4 and no SMT for Arrow Lake as late as this summer is absolute nonsense. Such a change would impact every design decision throughout the microarchitecture, and would have been made in the very early stages of development. The same goes for AMD rumors, whenever you hear claims of AMD making "last minute" design decisions.

As someone mentioned earlier, there is certainly industrial espionage going on too, in addition to streams of employees flowing between the companies, cooperation between them, lots of research published by universities and even some public speeches etc. All of this lets those who know the field use logical deduction to sense where the industry is heading. Some even have developer contacts who may give them hints. This is why someone can make fairly accurate predictions, but these people are usually very clear that these are qualified guesses, not "I have 20 sources at AMD". And such predictions rarely extend to specific features, performance figures etc. for particular CPUs/GPUs.
Sure, but would you make a post or video every time an Intel/AMD/Nvidia employee made an attempt at something? You could be posting every 5 minutes. "The next AMD architecture will have double L2 cache as Joe is adding it right now." Then 5 minutes later: "Joe's attempt at adding extra L2 cache failed because it wasn't stable." Do you see how pointless the whole "leak" business is?
 
Why is no one talking about a 16-core or 32-core CCX? These are signs that the CCDs for servers and desktops will be separate. Maybe there will be no desktop version and it will be diverted to laptops.
 
Asses can carry a lot of weight, but I don't ask them for comments about other people's "leaks" (NOT news).

May I kindly suggest that if you have nothing to give, then give nothing.

MLID "leaks" information, has discussions with insiders and formulates his own conclusions and then releases them to the public as HIS leaks and opinions. You could very easily do the same, if you are not able or willing to do so, then simply do not read or watch what he has to offer, and do not add your empty comments in places like this.
Awww, did I hurt the MLID fanboy's feewings? Poor widdle baby, I'm sure you'll survive.
 
I remember this slide.

[Image: AMD Computex 2022 slide deck, slide 6]


Actual uplift is closer to 29%...

AMD have a history of sandbagging like crazy with Zen.

Also nT and 1T uplift can be different depending on SMT efficiency.
Didn't it come to that because it was IPC gain combined with clock speed gain? The Ryzen 7000 chips all had much higher boost clocks (4.4 to 4.9 GHz on the 5000 CPUs vs. 5.0 to 5.7 GHz on the 7000 CPUs). The Zen 5 chips would need to come with 6 GHz boost clocks to get a 20%+ gain the same way.
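A quick back-of-the-envelope check shows how the two gains multiply (just a sketch: the ~13% figure is AMD's claimed Zen 4 IPC uplift, and the clocks are the top boost clocks quoted above; real uplift varies by workload):

```c
#include <stdio.h>

int main(void)
{
    /* Rough model: performance ~ IPC x frequency, so uplift factors multiply. */
    double ipc_gain  = 1.13;      /* ~13% IPC uplift, AMD's claimed Zen 4 figure */
    double freq_gain = 5.7 / 4.9; /* top boost clock: 4.9 GHz -> 5.7 GHz         */

    double combined = ipc_gain * freq_gain - 1.0;
    printf("Combined 1T uplift: ~%.0f%%\n", combined * 100.0);
    /* Prints ~31%, in the same ballpark as the ~29% measured uplift. */
    return 0;
}
```

The same multiplication is behind the 6 GHz remark: without a similar clock bump, the combined gain collapses to roughly the IPC gain alone.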

Which would be quite incredible to any of us who still remember the old Pentium 4 days.
 
I'm not seeing real new microarchitectures made from scratch that bring big performance advances. From what I gather, they are just improving certain parts of the previous microarchitectures.
 
I would never reference MLID about anything in my life, period.
 
So a 32-core 8950X with 4 GB of HBM as L4?
Yes, and an integrated GPU with 40 CUs giving you more performance than a 6700 XT with a TDP of 125 W for the whole package. :D

Seriously, it's MLID - not the most credible source of information on the planet. Just look at the slides, 5 nm is stated for 2022, 4 nm for 2023. Do you see that happen? Honestly, I think these are very old slides.
 
Just look at the slides, 5 nm is stated for 2022, 4 nm for 2023. Do you see that happen? Honestly, I think these are very old slides.
Um, hello? Zen 4 based Ryzen and EPYC are built on 5 nm and were released in 2022 (excluding Genoa-X, Siena, Bergamo and the Ryzen X3D models). Zen 4 based Phoenix is 4 nm and was released in 2023.

It's ridiculous that people criticizing MLID can't even get their own facts straight. And unlike MLID, we all have the benefit of hindsight about how things turned out in the end. Everyone is a big critic behind anonymity, but try putting your face and your name out there with your predictions, and I doubt that 99% of critics would do a better job predicting things years in advance than those they are criticizing.
 
Um, hello? Zen 4 based Ryzen and EPYC are built on 5 nm and were released in 2022 (excluding Genoa-X, Siena, Bergamo and the Ryzen X3D models). Zen 4 based Phoenix is 4 nm and was released in 2023.

It's ridiculous that people criticizing MLID can't even get their own facts straight. And unlike MLID, we all have the benefit of hindsight about how things turned out in the end. Everyone is a big critic behind anonymity, but try putting your face and your name out there with your predictions, and I doubt that 99% of critics would do a better job predicting things years in advance than those they are criticizing.
Does Phoenix have the Nirvana/Eldora cores?
 
Wasn't it supposed to exceed (internal?) expectations, or has WTFtech on YT revised his own targets again? :wtf:

10-15% isn't exactly groundbreaking!
Better than 5%.
 
I prefer lower IPC gains and less frequent CPU releases as it slows down obsolescence and keeps more money in my pocket.

One thing I have learnt is that performance gains nowadays quickly get sapped up by less efficient software; devs will just keep stealing those extra cycles.

A friend of mine made two versions of an app, one in plain C and the other in .NET. The latter uses almost 5x the cycles to do the same thing, but the advantage for him as a developer was that it was way easier to build the program on the framework.

Gains from cache increases are also going to vary from one piece of software to the next; some software might react well to an increase, while other software might not be affected at all. Games seem to react particularly well, though.
 
It must be wrong or outdated, it says 2023...
It'll be an early 2020 slide. Before Covid-19 hit.
Zen 3 is the left-most "Cerberus". Its launch got delayed until the end of 2020 simply because of Covid. Then the roll-out stretched all through 2021 and into 2022.
Again, because of Covid, Zen 4 didn't launch until the end of 2022. A full 1.5 years late.
Looks like Zen 5 will be less than a year late. Covid recovery is finally under way.
 
I'm not seeing real new microarchitectures made from scratch that bring big performance advances. From what I gather, they are just improving certain parts of the previous microarchitectures.

Any "real new micro-architecture made from scratch with big performance advances" would have to be able to execute 2 basic blocks per cycle - the rumored Zen 5/6 architectures are trying to do that.
 
Fetch equals execute? How is this done in the existing Zen architectures?
I'm sorry about using loose terminology. Terminology used on TechPowerUp forums is less precise than terminology used in scientific articles. From an x86 Linux/Windows application perspective, the rumored Zen 5/6 CPUs can be said to be able to execute instructions belonging to 2 basic blocks in a single clock cycle.
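To illustrate what that means in practice (a hypothetical sketch, not a description of any actual AMD front-end): a basic block is a straight-line run of instructions ending at a branch, and a front-end that can fetch and issue past one predicted branch per cycle can keep instructions from two such blocks in flight in the same cycle:

```c
#include <stdio.h>

/* This loop compiles to two small basic blocks per iteration: the
 * compare-and-branch for the loop condition, and the load/add/increment
 * body ending in the back-edge branch. A front-end limited to one basic
 * block per cycle stalls at every branch; one that can fetch past a
 * predicted branch can issue instructions from both blocks at once. */
static long sum(const long *a, long n)
{
    long s = 0;
    for (long i = 0; i < n; i++)  /* block A: test i < n, branch out */
        s += a[i];                /* block B: load, add, inc, loop   */
    return s;
}

int main(void)
{
    long data[] = {1, 2, 3, 4};
    printf("%ld\n", sum(data, 4));  /* prints 10 */
    return 0;
}
```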
 
If it is a double-digit increase in IPC, it is a good increase. If the double-digit increase starts with a 2, it is great. Anything above that is awesome.
 
If it is a double-digit increase in IPC, it is a good increase. If the double-digit increase starts with a 2, it is great. Anything above that is awesome.
I think we have all been conditioned by Intel's 1-3% IPC increases gen over gen for 20 years.

I'm not happy with anything less than 20%. Not for the money they want, and the investment into the overall platform.
 
I think we have all been conditioned by Intel's 1-3% IPC increases gen over gen for 20 years.

I'm not happy with anything less than 20%. Not for the money they want, and the investment into the overall platform.
10-15% is enough for me to consider a CPU's IPC increase OK. If you have had decades of Intel's 1-3% increases gen over gen, 10-15% is substantial in my eyes and nothing to scoff at.
 
10-15% is enough for me to consider a CPU's IPC increase OK. If you have had decades of Intel's 1-3% increases gen over gen, 10-15% is substantial in my eyes and nothing to scoff at.
But no reason to upgrade from current gen, either.
 
But no reason to upgrade from current gen, either.
The need to upgrade is irrelevant here. Upgrading every generation does not dictate what counts as an OK, fair or good IPC increase gen over gen for CPUs. Wouldn't you agree? Do you want a 30% IPC increase every gen? 1-3%, as @stimpy88 mentioned, was very low, and now you want the opposite of that, 30% or higher every gen, so that a CPU upgrade is viable for you every generation? I don't think that is a valid argument, because advancement is not as easy nowadays as it was a decade or two ago. We will see different percentages of gen-to-gen IPC increases, and maybe at some point 30% or higher will happen as a breakthrough, but every gen? What I'm saying is, if you get a double-digit increase in CPU IPC, be it 10, 15 or 20%, it is a good increase, but it does not necessarily mean you need to upgrade. Should an upgrade be viable every gen? That is beside the point here. What you suggest is a breakthrough every time a new gen CPU is released, and we, as a community, must be sane about it and at least understand the capabilities of the industry and the obstacles companies encounter to achieve these goals.
 
10-15% is enough for me to consider a CPU's IPC increase OK. If you have had decades of Intel's 1-3% increases gen over gen, 10-15% is substantial in my eyes and nothing to scoff at.
Just a note: Intel increased IPC by approximately 100% in 1989 when going from the i386 (internal architecture: CISC) to the i486 (internal architecture: RISC-like in the case of simple x86 instructions such as ADD; MOV reg,reg; etc.): https://ieeexplore.ieee.org/document/46766
 
The need to upgrade is irrelevant here. Upgrading every generation does not dictate what counts as an OK, fair or good IPC increase gen over gen for CPUs. Wouldn't you agree? Do you want a 30% IPC increase every gen? 1-3%, as @stimpy88 mentioned, was very low, and now you want the opposite of that, 30% or higher every gen, so that a CPU upgrade is viable for you every generation? I don't think that is a valid argument, because advancement is not as easy nowadays as it was a decade or two ago. We will see different percentages of gen-to-gen IPC increases, and maybe at some point 30% or higher will happen as a breakthrough, but every gen? What I'm saying is, if you get a double-digit increase in CPU IPC, be it 10, 15 or 20%, it is a good increase, but it does not necessarily mean you need to upgrade. Should an upgrade be viable every gen? That is beside the point here. What you suggest is a breakthrough every time a new gen CPU is released, and we, as a community, must be sane about it and at least understand the capabilities of the industry and the obstacles companies encounter to achieve these goals.
I just said a 10-15% IPC increase doesn't give you a reason to upgrade. I never said I wanted one. ;) Back in the day, you could game on a 2600K for a decade. One could cry about the lack of innovation, but I think it was great value. :)
 