Thursday, October 10th 2024

AMD Launches 5th Gen AMD EPYC CPUs, Maintaining Leadership Performance and Features for the Modern Data Center

AMD (NASDAQ: AMD) today announced the availability of the 5th Gen AMD EPYC processors, formerly codenamed "Turin," the world's best server CPU for enterprise, AI and cloud. Built on the "Zen 5" core architecture, compatible with the broadly deployed SP5 platform and offering core counts from 8 to 192, the AMD EPYC 9005 Series processors extend the record-breaking performance and energy efficiency of previous generations, with the top-of-stack 192-core CPU delivering up to 2.7X the performance of the competition.

New to the AMD EPYC 9005 Series CPUs is the 64-core AMD EPYC 9575F, tailor-made for GPU-powered AI solutions that need the ultimate in host CPU capabilities. Boosting up to 5 GHz, compared to the competition's 3.8 GHz processor, it provides up to 28% faster processing to keep GPUs fed with data for demanding AI workloads.
"From powering the world's fastest supercomputers, to leading enterprises, to the largest Hyperscalers, AMD has earned the trust of customers who value demonstrated performance, innovation and energy efficiency," said Dan McNamara, senior vice president and general manager, server business, AMD. "With five generations of on-time roadmap execution, AMD has proven it can meet the needs of the data center market and give customers the standard for data center performance, efficiency, solutions and capabilities for cloud, enterprise and AI workloads."

The World's Best CPU for Enterprise, AI and Cloud Workloads
Modern data centers run a variety of workloads, from supporting corporate AI-enablement initiatives, to powering large-scale cloud-based infrastructures, to hosting the most demanding business-critical applications. The new 5th Gen AMD EPYC processors provide leading performance and capabilities for the broad spectrum of server workloads driving business IT today.

The new "Zen 5" core architecture, provides up to 17% better instructions per clock (IPC) for enterprise and cloud workloads and up to 37% higher IPC in AI and high performance computing (HPC) compared to "Zen 4."6

With AMD EPYC 9965 processor-based servers, customers can expect significant impact in their real-world applications and workloads compared to Intel Xeon 8592+ CPU-based servers, with:

  • Up to 4X faster time to results on business applications such as video transcoding.7
  • Up to 3.9X faster time to insights for science and HPC applications that solve the world's most challenging problems.8
  • Up to 1.6X the performance per core in virtualized infrastructure.9
In addition to leadership performance and efficiency in general-purpose workloads, 5th Gen AMD EPYC processors enable customers to drive fast time to insight and rapid deployment for AI, whether they are running a CPU-only or a CPU + GPU solution.

Compared to the competition:
  • The 192-core EPYC 9965 CPU has up to 3.7X the performance on end-to-end AI workloads, like TPCx-AI (derivative), which are critical for driving an efficient approach to generative AI.
  • In small and medium-size enterprise-class generative AI models, like Meta's Llama 3.1-8B, the EPYC 9965 provides 1.9X the throughput performance compared to the competition.
  • Finally, the purpose-built AI host node CPU, the EPYC 9575F, can use its 5 GHz max frequency boost to help a 1,000-node AI cluster drive up to 700,000 more inference tokens per second. Accomplishing more, faster.
By modernizing to a data center powered by these new processors to deliver 391,000 units of SPECrate 2017_int_base general-purpose computing performance, customers get impressive performance across a variety of workloads while using an estimated 71% less power and ~87% fewer servers13. This gives CIOs the flexibility to either benefit from the space and power savings or add performance for day-to-day IT tasks while delivering impressive AI performance.
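As a rough illustration of the consolidation arithmetic behind a claim like this, the sketch below works out server count and power reduction from assumed per-server SPECrate 2017_int_base scores and wall power. The per-server numbers are hypothetical placeholders chosen to land near the ~87% and ~71% figures above; they are not AMD's footnoted test configurations.

    # Illustrative consolidation arithmetic (hypothetical per-server scores and power, not AMD's footnoted data)
    import math

    TARGET = 391_000  # SPECrate 2017_int_base capacity named in the press release

    # Assumed per-server figures: an older dual-socket server vs. a dual EPYC 9965 "Turin" server
    legacy_score, legacy_watts = 391, 550
    turin_score, turin_watts = 3_000, 1_200

    legacy_servers = math.ceil(TARGET / legacy_score)  # 1,000 servers at the assumed score
    turin_servers = math.ceil(TARGET / turin_score)    # 131 servers at the assumed score

    fewer_servers = 1 - turin_servers / legacy_servers                                 # ~87% fewer servers
    less_power = 1 - (turin_servers * turin_watts) / (legacy_servers * legacy_watts)   # ~71% less power

    print(f"{legacy_servers} legacy servers -> {turin_servers} Turin servers "
          f"({fewer_servers:.0%} fewer, {less_power:.0%} less power)")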

AMD EPYC CPUs - Driving Next Wave of Innovation
The proven performance and deep ecosystem support across partners and customers have driven widespread adoption of EPYC CPUs to power the most demanding computing tasks. With leading performance, features and density, AMD EPYC CPUs help customers drive value in their data centers and IT environments quickly and efficiently.

5th Gen AMD EPYC Features
The entire lineup of 5th Gen AMD EPYC processors is available today, with support from Cisco, Dell, Hewlett Packard Enterprise, Lenovo and Supermicro as well as all major ODMs and cloud service providers, offering a simple upgrade path for organizations seeking compute and AI leadership.

High level features of the AMD EPYC 9005 series CPUs include:
  • Leadership core count options from 8 to 192 per CPU
  • "Zen 5" and "Zen 5c" core architectures
  • 12 channels of DDR5 memory per CPU
  • Support for up to DDR5-6400 MT/s14 (peak-bandwidth arithmetic sketched after this list)
  • Leadership boost frequencies up to 5GHz
  • AVX-512 with the full 512b data path
  • Trusted I/O for Confidential Computing, and FIPS certification in process for every part in the series
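To put the memory bullets above in concrete terms, here is a minimal sketch of the theoretical peak bandwidth the channel count and data rate imply per socket. It assumes the standard 64-bit (8-byte) DDR5 data path per channel and ignores real-world efficiency and ECC overhead.

    # Theoretical peak DDR5 bandwidth per socket from the channel count and data rate listed above
    channels = 12                      # DDR5 memory channels per CPU
    transfers_per_s = 6_400_000_000    # DDR5-6400 -> 6400 MT/s
    bytes_per_transfer = 8             # 64-bit data path per channel (ECC bits excluded)

    peak_bytes_per_s = channels * transfers_per_s * bytes_per_transfer
    print(f"Peak theoretical bandwidth: {peak_bytes_per_s / 1e9:.1f} GB/s per socket")  # ~614.4 GB/s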

24 Comments on AMD Launches 5th Gen AMD EPYC CPUs, Maintaining Leadership Performance and Features for the Modern Data Center

#1
ThomasK
Waiting for reviews comparing it to the new Xeon 6900 with P-cores.
#3
Daven
So let's do a little price comparison...

Xeon 6980P: 128 cores (256 threads), 2.0 GHz base, 504 MB L3 cache, 500 W, $17,800
EPYC 9755: 128 cores (256 threads), 2.7 GHz base, 512 MB L3 cache, 500 W, $12,984

Hmmmmm....are accelerators worth $4,816 or are we paying the Pat tax?

Edit: Pat tax it is then (see below)
#4
ThomasK
Daven: So let's do a little price comparison...

Xeon 6980P: 128 cores (256 threads), 2.0 GHz base, 504 MB L3 cache, 500 W, $17,800
EPYC 9755: 128 cores (256 threads), 2.7 GHz base, 512 MB L3 cache, 500 W, $12,984

Hmmmmm....are accelerators worth $4,816 or are we paying the Pat tax?
At least customers will be paying the PAT TAX to beat AMD's 2022 Genoa.

Great deal.
#5
kondamin
Daven: So let's do a little price comparison...

Xeon 6980P: 128 cores (256 threads), 2.0 GHz base, 504 MB L3 cache, 500 W, $17,800
EPYC 9755: 128 cores (256 threads), 2.7 GHz base, 512 MB L3 cache, 500 W, $12,984

Hmmmmm....are accelerators worth $4,816 or are we paying the Pat tax?

Edit: Pat tax it is then (see below)
Depends on whether your workload benefits from the accelerators. If you just rent out a data centre to host some VPSs, nope.
#6
RGAFL
Intel's latest and greatest was on top for two weeks. Superior node advantage and using MRDIMMs, and it still gets left behind in the vast majority of benchmarks. When the 3D cache version of EPYC comes out it will be a bloodbath (again).
#7
AnotherReader
z1n0x: www.phoronix.com/review/amd-epyc-9965-9755-benchmarks
www.servethehome.com/amd-epyc-9005-turin-turns-transcendent-performance-solidigm-broadcom/
STH's workloads are better for benchmarking servers. Contrast their code compilation methodology with Phoronix's. Interestingly, Zen 5c has a 16 core CCX now. Zen 5c is fast enough to be bottlenecked by PCIe4 SSDs and 200 Gbps Ethernet for some tests. ARM alternatives are looking less rosy as well.
The bigger question is on the hyper-scale side. Hyper-scalers are the ones driving Arm adoption in the cloud. 192 cores/ 384 threads of a solid Zen 5c CPU is going to put folks on notice. At the same time, if a hyper-scale customer is religious about delivering custom Arm CPUs, then the big question is whether this is enough to change religion.


#8
RGAFL
AMD also said in their AI presentation that their market share is up to 34% in the server space. Expect that to ramp up quite a bit after this.
#9
Bet0n
RGAFL: AMD also said in their AI presentation that their market share is up to 34% in the server space. Expect that to ramp up quite a bit after this.
Unit share or revenue share?
#11
igormp
RIP Intel, Turin is a killer offering with way lower cost across the whole stack compared to Intel.
Heck, even AmpereOne will have a hard time against it.
#12
Carillon
AMD made specific SKUs to match core counts and TDPs of Intel offerings, so that they can finally be compared without complaints.

Gg
#13
Wirko
AnotherReader: Zen 5c has a 16 core CCX now.
AMD has never been keen to reveal much about their ring bus but ... this can't be a ring bus any longer, 16 cores are very probably too much for that.
#14
unwind-protect
I wouldn't mind a dual system with that 5 GHz EPYC.

ETA:
"With the large Node.js codebase, the EPYC Turin processors were delivering the fastest build times with ease. Here even the single EPYC 9575F / 9965 / 9755 processors were faster than the dual Xeon 6980P server with MRDIMM memory."
#15
AnotherReader
Wirko: AMD has never been keen to reveal much about their ring bus but ... this can't be a ring bus any longer, 16 cores are very probably too much for that.
Using switches, ring buses have scaled up to 24 cores for Broadwell-EP.
The largest die (+/- 454 mm²), highest core count (HCC) SKUs still work with a two-ring configuration connected by two bridges. The rings move data in opposite directions (clockwise/counter-clockwise) in order to reduce latency by allowing data to take the shortest path to the destination. The blue points indicate where data can jump onto the ring buses.
#16
Wirko
unwind-protect: I wouldn't mind a dual system with that 5 GHz EPYC.

ETA:
"With the large Node.js codebase, the EPYC Turin processors were delivering the fastest build times with ease. Here even the single EPYC 9575F / 9965 / 9755 processors were faster than the dual Xeon 6980P server with MRDIMM memory."
The other one (9175F) with the same clocks looks impressive too, just in other ways. 16 chiplets with all the cache for a total of 16 cores. I'm wondering what's the principal market for that, it may be HFT or some Oracle etc. database servers with huge costs per core. Oracle also claims (or used to claim) that x86 platform doesn't support real virtualisation (while IBM z mainframe does, for example). You run their software in a VM with 8 cores on a physical CPU with 16 cores? You pay the licence for 16 cores. MSSQL has (or had) some similar restrictions as well.
AnotherReader: Using switches, ring buses have scaled up to 24 cores for Broadwell-EP.
After that, they switched to using even more switches. Skylake has a mesh interconnect.
#17
AnotherReader
Wirko: The other one (9175F) with the same clocks looks impressive too, just in other ways. 16 chiplets with all the cache for a total of 16 cores. I'm wondering what's the principal market for that, it may be HFT or some Oracle etc. database servers with huge costs per core. Oracle also claims (or used to claim) that x86 platform doesn't support real virtualisation (while IBM z mainframe does, for example). You run their software in a VM with 8 cores on a physical CPU with 16 cores? You pay the licence for 16 cores. MSSQL has (or had) some similar restrictions as well.


After that, they switched to using even more switches. Skylake has a mesh interconnect.
Yes, the mesh is inferior to the rings in latency and bandwidth.
#18
Sarajiel
AnotherReader: Zen 5c has a 16 core CCX now
Are you sure about that?
I was under the impression that the Zen 5c CCD consisted of 2x CCXs with 8 cores each. Similar to how Zen2 CCDs were laid out, just with double the amount of cores in the case of Zen 5c.
#19
Bet0n
AnotherReader: Revenue share.
Afaik this 34% mainly comes from hyperscalers. In enterprise, Intel is still dominant by a large margin (AMD has <10%), and that's the majority of the market.
The enterprise market is the most similar to the desktop space, where marketing dominates over the best product (so basically it's the least rational); hyperscalers really only care about TCO, but in enterprise there are fixations, biases and laziness.
AMD has so far got away without much marketing (just like with its desktop products), but their growth potential gets smaller by the day if they keep doing this (or should we say, without doing what's necessary).
#20
z1n0x
Sarajiel: Are you sure about that?
I was under the impression that the Zen 5c CCD consisted of 2x CCXs with 8 cores each. Similar to how Zen2 CCDs were laid out, just with double the amount of cores in the case of Zen 5c.


Source
#21
Draconis
RGAFL: Intel's latest and greatest was on top for two weeks. Superior node advantage and using MRDIMMs, and it still gets left behind in the vast majority of benchmarks. When the 3D cache version of EPYC comes out it will be a bloodbath (again).
According to Tom's Hardware, there won't be an X-series EPYC this gen.

"Notably, AMD isn’t introducing its X-series models with stacked L3 cache for this generation, instead relying upon its Milan-X lineup for now. AMD says its X-series might get an upgrade every other generation, though that currently remains under consideration."
#22
AnotherReader
Sarajiel: Are you sure about that?
I was under the impression that the Zen 5c CCD consisted of 2x CCXs with 8 cores each. Similar to how Zen2 CCDs were laid out, just with double the amount of cores in the case of Zen 5c.
STH covered it in their review.

#23
Nhonho
On the EPYC CPU specifications pages, AMD did the "favor" of not showing which type of core the processor has (whether ZEN5, ZEN5c, etc.), nor does it show which instruction sets the processor supports or how much cache memory each chiplet has:

www.amd.com/en/products/processors/server/epyc/9005-series/amd-epyc-9755.html
www.amd.com/en/products/processors/server/epyc/9005-series/amd-epyc-9575f.html



Is this performance difference between the EPYC 9xx5 and 9xx4 CPUs due to improvements in the AVX-512 unit of the ZEN5 architecture?

Why doesn't the EPYC 9755 (128 cores) perform nearly twice as well as the 9575F (64 cores) in this AV1 video encode test? Is this a limitation of the CPUs or of the software?



Source:
www.tomshardware.com/pc-components/cpus/amd-launches-epyc-turin-9005-series-our-benchmarks-of-fifth-gen-zen-5-chips-with-up-to-192-cores-500w-tdp#section-encoding-benchmarks
#24
AnotherReader
Nhonho: On the EPYC CPU specifications pages, AMD did the "favor" of not showing which type of core the processor has (whether ZEN5, ZEN5c, etc.), nor does it show which instruction sets the processor supports or how much cache memory each chiplet has:

www.amd.com/en/products/processors/server/epyc/9005-series/amd-epyc-9755.html
www.amd.com/en/products/processors/server/epyc/9005-series/amd-epyc-9575f.html



Is this performance difference between the EPYC 9xx5 and 9xx4 CPUs due to improvements in the AVX-512 unit of the ZEN5 architecture?

Why doesn't the EPYC 9755 (128 cores) perform nearly twice as well as the 9575F (64 cores) in this AV1 video encode test? Is this a limitation of the CPUs or of the software?



Source:
www.tomshardware.com/pc-components/cpus/amd-launches-epyc-turin-9005-series-our-benchmarks-of-fifth-gen-zen-5-chips-with-up-to-192-cores-500w-tdp#section-encoding-benchmarks
It's likely a limitation of the software. ServeTheHome's tests are more representative of the scaling for server applications.
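One way to see why doubling the core count might not come close to doubling encode throughput is a simple Amdahl's-law sketch: if part of the encoder's pipeline is serial or dependency-bound, scaling flattens out well before 128 cores. The parallel fraction below is an illustrative assumption, not a measurement of this encoder or these CPUs (clock-speed differences between the 9575F and 9755 matter too).

    # Amdahl's-law illustration: why 128 cores may not beat 64 cores by anywhere near 2x
    def speedup(cores: int, parallel_fraction: float) -> float:
        """Ideal speedup over one core when only part of the work scales with core count."""
        return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

    p = 0.97  # assume 97% of the encode work parallelizes; the rest is serial/dependency-bound
    s64, s128 = speedup(64, p), speedup(128, p)
    print(f"64 cores: {s64:.1f}x  128 cores: {s128:.1f}x  gain from doubling cores: {s128 / s64:.2f}x")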