
AMD "Zen 7" Rumors: Three Core Classes, 2 MB L2, 7 MB V‑Cache, and TSMC A14 Node

There is slim to no chance of Zen 7 debuting on AM5. If the specs/rumors in this post are anything to go on, the significant departures in CCD layout, core types/arrangement, and hybrid design will most likely require a different socket/pinout.
A new socket is likely, but it's not clear at the moment. It also depends on when Intel and AMD want to introduce DDR6. Currently, it looks like DDR5 still has a lot of fuel in the tank: it can go beyond 12,000 MT/s, while official support today is 6400 MT/s. So DDR5 can easily continue for at least two more generations. This would be in line with the new IOD that will be introduced for Zen 6 and then reused on Zen 7, as AMD usually keeps the same IOD for two generations. I don't think they will redo the IOD after one generation just to introduce DDR6.

Also, AMD could easily introduce two forks of Zen 7 CPUs on desktop: a new premium line on a new socket, and a legacy mainstream line whereby the AM5 platform continues for another generation with new cores only.
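
For perspective on those transfer rates, peak bandwidth is simple arithmetic; a quick sketch in Python (assuming the standard 64-bit DDR5 channel and AM5's dual-channel layout, with the speeds quoted above):

```python
# Peak theoretical bandwidth for dual-channel DDR5 on AM5:
# MT/s x (64 bits / 8) bytes per channel x 2 channels.
def ddr5_peak_gb_s(mt_s: int, channels: int = 2, bus_bits: int = 64) -> float:
    return mt_s * 1e6 * channels * (bus_bits / 8) / 1e9

for speed in (6400, 8000, 12000):
    print(f"DDR5-{speed}: {ddr5_peak_gb_s(speed):.1f} GB/s")
# DDR5-6400:  102.4 GB/s (official support today)
# DDR5-12000: 192.0 GB/s (where the standard could still stretch)
```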
 
That would require Zen 7 to feature two different memory controllers, one for DDR5 and one for DDR6. AMD hasn't done that since the socket AM3 era with the Phenom II.

See also: no Zen 4 on socket AM4.
 
Yes, I am aware of this. Anything is possible, and they will make decisions based on future market conditions. Flexibility is not a strange idea to them. At the end of the day, we also had Zen 3+ paired with DDR5 in the Rembrandt APUs, one of which I have in my laptop.
 
Since they don't really have competition anymore, I think you are probably right about a Computex 2026 reveal/launch. No need to rush.

AMD needs to keep its foot on the pedal; leading Intel doesn't mean much nowadays when a much more mature ARM ecosystem is already eating at your high-margin server market.
 
For AMD's sake, I hope they focus on the server market hardcore. I am content with my 7800X3D for another 5+ years, really.
 
That would require Zen 7 to feature two different memory controllers, one for DDR5 and one for DDR6. AMD hasn't done that since the socket AM3 era with the Phenom II.

See also: no Zen 4 on socket AM4.
AMD sort of did this more recently than that: Carrizo featured both DDR3 and DDR4 memory controllers. However, it was a soldered (BGA) processor rather than a socketed one, and DDR4 was pretty much unused until Bristol Ridge, which as far as I can tell was the same chip with newer firmware, always paired with DDR4.
 
AMD needs to keep its foot on the pedal; leading Intel doesn't mean much nowadays when a much more mature ARM ecosystem is already eating at your high-margin server market.
There is no evidence of this. AMD has just had its best-ever data center Q1 revenue, and they are designing a 256-core Zen 6c CPU on N2. Turin Dense wiped the floor with the 192-core AmpereOne, bringing 60% higher performance while being only 16% less power efficient. Those c cores are getting significantly better every generation: c cores with SMT have already reached 84% of the power efficiency of ARM cores in average consumption, and they are way better in idle power management and overall performance. The performance lead of those EPYCs currently outweighs any power-efficiency benefit of ARM cores, and this translates directly into total cost of ownership.
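
To put those numbers together (this is purely arithmetic on the figures quoted above, not a new benchmark), a quick Python sanity check:

```python
# Back-of-the-envelope using only the figures quoted above:
# Turin Dense vs. the 192-core AmpereOne.
perf_ratio = 1.60            # "60% higher performance"
perf_per_watt_ratio = 0.84   # "16% less power efficient"

# perf/W = perf / power  =>  power = perf / (perf/W)
power_ratio = perf_ratio / perf_per_watt_ratio
energy_per_task = power_ratio / perf_ratio   # power x time, time = 1/perf

print(f"Implied average power draw: {power_ratio:.2f}x AmpereOne")  # ~1.90x
print(f"Energy per unit of work:    {energy_per_task:.2f}x")        # ~1.19x
# It draws ~90% more power but does 60% more work per unit time, so the
# energy cost per task is only ~19% higher -- exactly the quoted 16%
# efficiency deficit (1 / 0.84), not the 90% raw power gap.
```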

As the leak suggests, there will be more iterations of those c cores in Zen 7, focusing on low power and efficiency, enabled by the jump to A14. Besides, should the need arise, they could produce a line of future EPYCs on ARM too, just as they are doing now with the SoundWave APU for tablets and laptops next year. They are not shy about using ARM if competition requires it. Also, it is Intel that suffers a double whammy here, as server builders have increasingly been replacing their servers with EPYC and ARM designs.
[Attached image: AMD EPYC 9965 (Zen 5c) vs. AmpereOne 192c comparison]
 
Zen 7 is obsolete. I want Zen 8 by now. Come on, MLID, you know better than this. :slap:
 
There is no evidence of this.

Let me just stop you right there, because there certainly is:


Those c cores are getting significantly better every generation: c cores with SMT have already reached 84% of the power efficiency of ARM cores in average consumption, and they are way better in idle power management and overall performance.

I'd like to point out that idle power consumption is irrelevant in the data center. If your gear is idling, you are wasting money.

Mind you, saying x86 is "way better in idle power management" is crazy. ARM is known for its efficiency, so what you are saying goes against everything we know as fact. On top of that, AMD's multi-CCD design isn't great with idle consumption, not that it matters in the datacenter. Perhaps that specific ARM CPU you listed is bad at idle (I haven't researched that particular chip), but it doesn't matter, given that it's well proven that ARM is more efficient at idle and idle isn't even relevant in this use case.

AMD has just had its best-ever data center Q1 revenue, and they are designing a 256-core Zen 6c CPU on N2.

Such a statement isn't mutually exclusive with ARM also gaining market share, as the market-share analysis I linked above shows.

AMD was coming from a very low share, so logically both could be gaining.

This translates directly into total cost of ownership.

I've provided several sources that show TCO is lower for ARM chips.

AMD's server products are good, and they will likely stay around for workloads that need lots of high-performance cores, but an increasing number of datacenter customers are going ARM or ASIC, either because those are performance-wise good enough and cheap, or because they need chips tailored to specific tasks.

Besides, should the need arise, they could produce a line of future EPYCs on ARM too, just as they are doing now with the SoundWave APU for tablets and laptops next year. They are not shy about using ARM if competition requires it. Also, it is Intel that suffers a double whammy here, as server builders have increasingly been replacing their servers with EPYC and ARM designs.

They could and that would further support the argument that I'm making. My point was never that AMD was going to fail, only that the market is diversifying with ARM being a big player in that.
 
Isn't 16 cores enough? 32 threads is already difficult to fully utilize.
Yeah, tell that to all the OCD trolls who call AMD greedy/milking/stagnating/no better than Intel's 14++++ nm for not adding more cores every launch, all while typing it on their dual-core laptops.

Hypocrisy

I dunno how important more than 16 cores is, but more than 8 per CCD is a popular request (which gives us more than 16 in total).
 
That is mainly due to the growing need of software, and especially games, to utilise more than 8 cores. With AMD's current design there is a latency/performance penalty when multiple CCDs have to be accessed. For most server loads the penalties aren't massive, due to the way enterprise software tends to utilise cores, and there are ways to mitigate it in software by having the scheduler favor which CCDs are used by which threads; when you are talking tens if not hundreds of threads, that can really cut down on the overhead.
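
If you want to experiment with this yourself, pinning a process to one CCD is straightforward on Linux; a minimal sketch below (the 0-15 CPU range for CCD0 is an assumption for a 16-core dual-CCD part with SMT — check `lscpu -e` for your actual topology):

```python
# Minimal Linux sketch: keep a latency-sensitive process on one CCD so
# all of its threads share the same L3. The CPU range is an assumption
# for a 16-core dual-CCD Ryzen with SMT (logical CPUs 0-15 = CCD0).
import os

ccd0 = set(range(0, 16))          # CCD0's logical CPUs (assumed layout)
os.sched_setaffinity(0, ccd0)     # pid 0 = the calling process
print("Pinned to:", sorted(os.sched_getaffinity(0)))

# Equivalent one-liner from a shell:
#   taskset -c 0-15 ./latency_sensitive_app
```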

Games, however, are extremely latency sensitive, so by unintentionally introducing a decent chunk of latency when hopping from one CCD to another over the Infinity Fabric, you can see stutters/frame-time spikes when that happens. It's the primary reason why X3D has been limited to a single CCD in the consumer market. They have already rectified a lot of that performance deficit with Zen 5, but there is still a chunk of latency when data has to cross from one CCD to another.

This patent from AMD is the most interesting in relation to this, as it should effectively eliminate that sort of access latency: it would basically mean a direct attachment of the CCD to the I/O die, eliminating the current Infinity Fabric drawback of having to communicate through the substrate.

It could also mean the ability to share a massive X3D die across multiple CCDs, so your L3 becomes one massive monolithic pool of storage accessible by one core or all cores at once. Imagine a 256+ MB block of X3D being accessible by a single thread or all 48 threads, depending on the software.

I am aware of all of this; I was just trying to raise the question of the Nov 7 date attached to every release-date claim... but alright, you are right, I am the downfall of general intelligence. Congrats on spotting that, you have solved all of humanity's problems.

I'm sorry for being a bit "short" with you; I didn't mean for it to come across as a personal attack, but it's getting to the point where more and more people type a question into ChatGPT and rely on its answer as gospel. It's like the people who took the first answer on Google as the only correct answer a few years ago, all over again.
 
That is mainly due to the growing need of software, and especially games, to utilise more than 8 cores.
I don't know how you interpreted my reply to Rowsol as a question.

I was just pointing out that more than 16 cores may not be the primary goal; rather, it's having more cores on a single CCD.

On the surface it may look like Zen 6 on AM5 is about >16 cores. I'd say >16 cores is more a consequence of wanting more cores per CCD. Yeah, splitting hairs here.

Having a higher maximum number of cores is obviously a goal as well.
 
Wait... I want to see Zen 6 in real world tests before overdosing on Zen 7!
 
It could also mean the ability to share a massive X3D die across multiple CCDs, so your L3 becomes one massive monolithic pool of storage accessible by one core or all cores at once. Imagine a 256+ MB block of X3D being accessible by a single thread or all 48 threads, depending on the software.
Yes, I can imagine that; it must be a latency and scheduling nightmare.
It would also ruin the cost and production benefits that come with the chiplet design.
 
They could and that would further support the argument that I'm making. My point was never that AMD was going to fail, only that the market is diversifying with ARM being a big player in that.
The way you worded one statement in post #30 sounded to me as if ARM servers were already eating into their margins, which is still not the case. Of course, there are many players deploying seriously competitive custom ARM servers, as your links show. We are still far from a situation where EPYCs cannot grow their presence. I am sure AMD sees what's happening and is working on addressing as many of ARM servers' competitive advantages as possible. It's an interesting space, for sure.
 
Yes, I can imagine that; it must be a latency and scheduling nightmare.
It would also ruin the cost and production benefits that come with the chiplet design.
I used to dream up such a thing. :laugh:

This would hypothetically make way for a single 128 MB V-cache chip under both CCDs.

There are probably loads of things that make this hard or complicated, but I have no idea whether it's impossible.

On the other hand, what's taking so long with the 9900X3D/9950X3D? Isn't it pretty much the same thing as last time? Maybe there's more difference between the generations than just moving the cache after all?
 
I really hope they up the core count per CCD.
 
CAMM2 RAM for Zen6/7?
 
There's one good place for CAMM: the trash. I seriously doubt the form factor will take off on desktop, and rightfully so.
CAMM2, not CAMM. Per a previous TPU article: better latency, more efficiency, and higher clock speeds, because the motherboard traces are shorter and combined. It would also make RAM clearance for tower CPU coolers a thing of the past!
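
To put rough numbers on the trace-length argument (the ~6.6 ns/m FR4 propagation figure and the 5 cm vs. 2 cm trace lengths are ballpark assumptions for illustration, not measured values):

```python
# Bit time (unit interval) shrinks as transfer rates climb, so trace
# flight time and skew eat a growing share of the timing budget.
PROP_NS_PER_M = 6.6  # ballpark signal propagation delay in FR4 PCB

def unit_interval_ps(mt_s: int) -> float:
    return 1e12 / (mt_s * 1e6)  # duration of one transferred bit, in ps

for speed in (6400, 9600, 12800):
    print(f"DDR5-{speed}: UI = {unit_interval_ps(speed):.0f} ps")
# DDR5-6400: UI = 156 ps; DDR5-12800: UI = 78 ps.
# A 5 cm DIMM-slot trace is ~330 ps of flight time; trimming it to a
# 2 cm CAMM2-style trace saves ~200 ps -- on the order of one to three
# whole bit times at these speeds, which is where the extra
# signal-integrity headroom comes from.
```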
 
I hope they do take off: flat design, so no more clearance issues with any type of cooler; supposedly better signal integrity at higher clock speeds thanks to much shorter traces; better airflow; and a single module is already dual channel, so you no longer need to install the bloody things in pairs.

It's a much better technology.
 
The idea first showed up in 2022; three years later we're at nearly zero adoption (one MSI motherboard and a few unreleased boards). There's so little advantage that significantly changing the ATX form factor isn't worth it. Call me when this (never) takes off on desktop.
 