
AMD Namedrops EPYC "Venice" Zen 6 and EPYC "Verano" Zen 7 Server Processors

btarunr

Editor & Senior Moderator
Staff member
AMD at its 2025 Advancing AI event name-dropped the next two generations of EPYC server processors that will succeed the current EPYC "Turin" powered by the Zen 5 microarchitecture. 2026 will see AMD debut the Zen 6 microarchitecture, and its main workhorse for the server segment will be EPYC "Venice." This processor will likely see a generational increase in CPU core counts, increased IPC from the full-sized Zen 6 cores, support for newer ISA extensions, and an updated I/O package. AMD is looking to pack "Venice" with up to 256 CPU cores per package.

AMD is looking to increase the CPU core count per CCD (core complex die) with "Zen 6." The company plans to build these CCDs on the 2 nm TSMC N2 process node. The sIOD (server I/O die) of "Venice" implements PCI-Express Gen 6 for a generational doubling in bandwidth to GPUs, SSDs, and NICs. AMD is also claiming memory bandwidth as high as 1.6 TB/s. There are a couple of ways to achieve this: increasing memory clock speeds, or giving the processor a 16-channel DDR5 memory interface, up from the current 12 channels. The company could also add support for multiplexed DIMM standards, such as MR-DIMMs and MCR-DIMMs. All said and done, AMD is claiming a 70% increase in multithreaded performance over the current EPYC "Turin," which we assume compares the highest-performing part to its next-gen successor.
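For a back-of-the-envelope check on that 1.6 TB/s figure, here is a rough sketch; the 16-channel count and 12800 MT/s data rate below are just one plausible combination, not confirmed AMD specs:

```python
# Peak DRAM bandwidth = channels x data rate (MT/s) x bytes per transfer.
# All "Venice" figures here are speculative.
BYTES_PER_TRANSFER = 8  # 64-bit DDR5 data bus per channel, ECC excluded

def peak_bw_tbs(channels: int, mts: int) -> float:
    """Theoretical peak memory bandwidth in TB/s."""
    return channels * mts * 1_000_000 * BYTES_PER_TRANSFER / 1e12

print(f"Turin-like, 12 ch @ 6000 MT/s : {peak_bw_tbs(12, 6000):.2f} TB/s")   # ~0.58 TB/s
print(f"Venice(?), 16 ch @ 12800 MT/s : {peak_bw_tbs(16, 12800):.2f} TB/s")  # ~1.64 TB/s
```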



Next up, AMD unveiled the 2027 successor to "Venice," the 7th Gen EPYC "Verano." This processor introduces the future "Zen 7" microarchitecture for even higher IPC and support for even newer instruction sets. At this point, it's not clear if AMD will dial up CPU core counts beyond the up to 256 per package of "Venice," but we're hearing that "Verano" will retain the Socket SP7 infrastructure of "Venice," which means it will likely retain the memory and PCIe interfaces introduced by "Venice." The company understandably did not get into the nuts and bolts of "Verano," saving them for the 2026 Advancing AI event.



AMD isn't just selling these processors on their own, but also timing their launches with those of its latest AI GPUs. The current EPYC "Turin" CPU is paired with MI355X AI GPUs and Pensando "Pollara 400" NICs for an industry-standard server rack package. The 2026 package combines "Venice" CPUs with next-generation MI400 series AI GPUs and "Vulcano" NICs. AMD is referring to this package as "Helios." Then in 2027, the company will time the launches of its EPYC "Verano" CPUs with those of the MI500 series AI GPUs, while carrying over "Vulcano" NICs.

 
- MR-DIMM 12800 MT/s on 16 channels. This gives 1.6 TB/s per socket.
- Each of the two IODs most likely features 8 memory channels, x32 Gen6 lanes, and x64 Gen5 lanes (rough numbers sketched below).
- The new Infinity Fabric is largely based on PCIe 6.0 speed (edit)
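Taking the per-IOD split in the second bullet at face value (purely speculation on my part, using raw signalling rates with protocol overhead ignored), the throughput per IOD would work out roughly like this:

```python
# Rough per-IOD throughput under the speculative two-IOD split above.
mem_per_iod_tbs = 8 * 12_800e6 * 8 / 1e12   # 8 channels @ 12800 MT/s -> ~0.82 TB/s
gen6_gbs_dir    = 32 * 8                    # x32 PCIe Gen6 @ ~8 GB/s/lane -> GB/s per direction
gen5_gbs_dir    = 64 * 4                    # x64 PCIe Gen5 @ ~4 GB/s/lane -> GB/s per direction

print(f"Memory per IOD : {mem_per_iod_tbs:.2f} TB/s (x2 IODs = ~1.64 TB/s per socket)")
print(f"Gen6 x32 lanes : {gen6_gbs_dir} GB/s per direction")
print(f"Gen5 x64 lanes : {gen5_gbs_dir} GB/s per direction")
```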
 
- MR-DIMM 12800 MT/s on 16 channels. This gives 1.6 TB/s per socket.
- Each of the two IODs most likely features 8 memory channels, x32 Gen6 lanes, and x64 Gen5 lanes
- The new Infinity Fabric is largely based on PCIe 5.0 speed
Infinity Fabric in the current 5th Gen EPYC is already based on PCIe 5.0 at 32 GT/s. Dual-socket motherboards can either feature a full 4-link IF between sockets, or 3 links with more PCIe 5.0 lanes made available.

 
Venice is looking really nice.

Also cool to see an MI400 comparison vs. Rubin, whereas the other slide I had seen before only compared their Helios rack to the current Blackwell NVL72.
However, in this one they mention 72 GPUs, but in another post they announced it'd have more GPUs than that.

Maybe I should watch the coverage myself later this week.
 
Infinity Fabric in the current 5th Gen EPYC is already based on PCIe 5.0 at 32 GT/s. Dual-socket motherboards can either feature a full 4-link IF between sockets, or 3 links with more PCIe 5.0 lanes made available.
My apologies. I meant to say Gen6, but I wrote Gen5.
In fact, IFOP GMI3 is clocked a bit faster than Gen5, at 36 Gbps, and at 72 Gbps for GMI-Wide. IFIS xGMI operates at 32 Gbps, like any Gen5 link.
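Just to put those link rates side by side (ratios only, using the figures quoted above):

```python
# Per-lane signalling rates mentioned above, compared against PCIe Gen5.
rates_gbps = {
    "PCIe Gen5 / IFIS xGMI": 32,
    "IFOP GMI3":             36,
    "GMI-Wide":              72,
}
base = rates_gbps["PCIe Gen5 / IFIS xGMI"]
for name, gbps in rates_gbps.items():
    print(f"{name:21s}: {gbps} Gbps ({gbps / base:.3f}x Gen5)")
```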

However, in this one they mention 72 GPUs, but in another post they announced it'd have more GPUs than that.
Like MI350, MI400 will be a scalable rack, up to 128 GPUs.
 
Venice is looking really nice.

Also cool to see an MI400 comparison vs. Rubin, whereas the other slide I had seen before only compared their Helios rack to the current Blackwell NVL72.
However, in this one they mention 72 GPUs, but in another post they announced it'd have more GPUs than that.

Maybe I should watch the coverage myself later this week.
No, you are right, they dropped from 128 down to 72 for Helios.
It is probably a power density issue, given the DLC MI355X is already at 1400 W.
And they announced the MI400 is a doubling of lower-precision performance over the MI350X, to 20 PFLOPS FP8 and 40 PFLOPS FP4.

There were definitely leaks earlier this year showing 64/128 UALink MI400 racks though.
Oh, this is totally 2x 64-GPU racks.
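A quick back-of-the-envelope on the power-density point (MI400 per-GPU power is unknown, so the 1400 W DLC MI355X figure from above is used as a stand-in purely for illustration; the PFLOPS numbers are the claimed MI400 figures):

```python
# Rack-level GPU power and compute at 72 vs 128 GPUs, using figures from the posts above.
GPU_POWER_W = 1400   # assumed per-GPU power (DLC MI355X figure, stand-in for MI400)
FP8_PFLOPS  = 20     # claimed MI400 FP8 throughput per GPU
FP4_PFLOPS  = 40     # claimed MI400 FP4 throughput per GPU

for gpus in (72, 128):
    print(f"{gpus:3d} GPUs: ~{gpus * GPU_POWER_W / 1000:.0f} kW for GPUs alone, "
          f"{gpus * FP8_PFLOPS / 1000:.2f} EFLOPS FP8, {gpus * FP4_PFLOPS / 1000:.2f} EFLOPS FP4")
# 72 GPUs -> ~101 kW, 128 GPUs -> ~179 kW, before CPUs, NICs, switches, and cooling overhead
```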
 
My apologies. I meant to say Gen6, but I wrote Gen5.
In fact, IFOP GMI3 is clocked a bit faster than Gen5, at 36 Gbps, and at 72 Gbps for GMI-Wide. IFIS xGMI operates at 32 Gbps, like any Gen5 link.


Like MI350, MI400 will be a scalable rack, up to 128 GPUs.
I got some extra info in other places. The MI3XX racks do scale up to 128 GPUs, but Helios was officially announced as up to 72 GPUs at this moment.
There were rumors about 96/128 GPU configs on Helios, but it seems that this was not part of the presentation.
No, you are right, they dropped from 128 down to 72 for Helios.
As said above, apparently there are rumors for higher-density racks, but I can't find a reputable source for that.
 
I got some extra info in other places. The MI3XX racks do scale up to 128 GPUs, but Helios was officially announced as up to 72 GPUs at this moment.
There were rumors about 96/128 GPU configs on Helios, but it seems that this was not part of the presentation.

As said above, apparently there are rumors for higher-density racks, but I can't find a reputable source for that.
(shrugs) Data centers have trouble keeping up with NVL72 as is... with the current-gen 130 kW racks...
 
I got some extra info in other places. The MI3XX racks do scale up to 128 GPUs, but Helios was officially announced as up to 72 GPUs at this moment.
There were rumors about 96/128 GPU configs on Helios, but it seems that this was not part of the presentation.
I have looked a bit into this. Current AMD 'racks' are single-width, tall servers, like the one that Pegatron is building with 128 MI355X GPUs + 32 EPYC Turin CPUs. More than 200 of those 'racks' will be built and delivered to Oracle in a multi-billion-dollar deal.

Those systems are based on 8-GPU nodes connected by 400 GbE switches. As they will not be rack-scalable into a coherent GPU pool within a single build until the MI450 series, we cannot really call them a rack. The word 'rack' is a double-edged sword here: it refers to a mechanical, tall structure, but more importantly a rack means a coherent GPU fabric of nodes within one system. Nvidia uses NVSwitch and NVLink to create one, like the solution deployed in the Oberon NVL72 rack, which scales up to 72 GPUs and is capable of scaling out to a pod of 8 identical racks with 576 GPUs, giving such systems the great advantage of acting like 8 gigantic GPUs in one pod.

Now, AMD will also be able to do this, on a smaller scale, with the Helios (proper) rack from next year, thanks to the open-source standardisation of UALink and Ultra Ethernet 1.0. As the two standards are brand new, switch silicon designers need time to bake silicon and build both UALink intra-rack switches based on AMD's donated Infinity Fabric and inter-rack Ultra Ethernet switches, such as Vulcano. It will take time for above-800 GbE switches to materialize and match Nvidia's next best.

In the last two days I have seen several different numbers thrown around for the Helios system and other solutions. Frankly, there is no coherent narrative with those numbers. AMD compared a 72-GPU system with Vera NVL144, possibly as a GPU-count-normalized comparison for illustrative purposes rather than showing the actual final number of GPUs in a rack. Then SemiAnalysis showed slides with both 64- and 128-GPU systems, etc. A bit of a confusing cacophony. As Dr. Ian Cutress pointed out, Helios will be a double-width rack that is meant to standardise this form factor in order to deal with power density and cooling. It is not clear what the final spec of Helios will look like at this point. Perhaps there will be all kinds of custom configurations for different clients.
 
They are multi-node scalable, as everything was before Nvidia introduced the NVL rack. They are rack-scalable, but they do not possess the scale-up fabric that is expected. It means they will scale with inference workloads. And there are fabrics from AMD partners that tie multiple nodes together, but they are smaller scale-up domains, with great linear performance gains.
https://gigaio.com/supernode/ Till the MI400 gen, AMD competes less on training and more on inference, and that's fine... they need to focus on getting their libraries in order and grinding on their ecosystem.
 