Thursday, August 19th 2021

Intel Xeon "Sapphire Rapids" Memory Detailed, Resembles AMD 1st Gen EPYC: Decentralized 8-Channel DDR5

Intel's upcoming Xeon "Sapphire Rapids" processor features a memory interface topology that closely resembles that of first-generation AMD EPYC "Naples," thanks to the multi-chip module design of the processor. Back in 2017, Intel's competing "Skylake-SP" Xeon processors were based on monolithic dies; even though their 6-channel DDR4 memory interface was physically split across multiple on-die memory controllers, Intel depicted the monolithic design as an advantage over EPYC "Naples." AMD's first "Zen"-based enterprise processor was a multi-chip module of four 14 nm, 8-core "Zeppelin" dies, each with a 2-channel DDR4 memory interface, which added up to the processor's 8-channel I/O. Much like "Sapphire Rapids," a CPU core on any of the four dies had access to memory and I/O controlled by any other die, as the four were networked over the Infinity Fabric interconnect in a configuration that essentially resembled "4P on a stick."

With "Sapphire Rapids," Intel is taking a largely similar approach—it has four compute tiles (dies) instead of a monolithic die, which Intel says helps with scalability in both directions; and each of the four compute tiles has a 2-channel DDR5 or 1024-bit HBM memory interface, which add up to the processor's 8-channel DDR5 total I/O. Intel says that CPU cores from each tile has equal access to memory, last-level cache, and I/O controlled by another die. Inter-tile communication is handled by EMIB physical media (55 micron bump-pitch wiring). UPI 2.0 makes up the inter-socket interconnect. Each of the four compute tiles has 24 UPI 2.0 links that operate at 16 GT/s. Intel didn't detail how memory is presented to the operating system, or the NUMA hierarchy, however much of Intel's engineering effort appears to be focused on making this disjointed memory I/O work as if "Sapphire Rapids" were a monolithic die. The company claims "consistent low-latency, high cross-sectional bandwidth across the SoC."
Another interesting aspect of "Sapphire Rapids" Xeon processors is support for HBM, which could prove a game-changer for the processor in the HPC and high-density compute markets. Specific models of Xeon "Sapphire Rapids" processors could come with on-package HBM. This memory can serve as a victim cache for the on-die caches of the compute tiles, vastly improving the memory sub-system; work exclusively as standalone main memory; or work as non-tiered main memory alongside the DDR5 DRAM, with flat memory regions. Intel refers to the flat and victim-cache arrangements as software-visible HBM+DDR5 and software-transparent HBM+DDR5 modes, respectively.
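If the software-visible flat mode ends up presenting the on-package HBM as its own memory node, as flat memory regions typically are on Linux, applications could steer hot data into it explicitly. The sketch below is a hypothetical illustration using libnuma; the HBM node id is an assumption made for the example, not something Intel has documented.

/* hbm_alloc.c - place a buffer on an assumed HBM node (build: gcc hbm_alloc.c -lnuma) */
#include <numa.h>
#include <stdio.h>
#include <string.h>

#define HBM_NODE 1            /* assumed node id for on-package HBM; check numactl --hardware first */
#define BUF_SIZE (64UL << 20) /* 64 MiB working buffer */

int main(void)
{
    if (numa_available() < 0 || HBM_NODE > numa_max_node()) {
        fprintf(stderr, "No suitable NUMA node available\n");
        return 1;
    }

    /* Ask the kernel to back this allocation with pages from the chosen node. */
    char *buf = numa_alloc_onnode(BUF_SIZE, HBM_NODE);
    if (!buf) {
        fprintf(stderr, "Allocation on node %d failed\n", HBM_NODE);
        return 1;
    }

    memset(buf, 0, BUF_SIZE);  /* touch the pages so they are actually placed on the node */
    printf("Allocated %lu MiB on node %d\n", BUF_SIZE >> 20, HBM_NODE);

    numa_free(buf, BUF_SIZE);
    return 0;
}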

16 Comments on Intel Xeon "Sapphire Rapids" Memory Detailed, Resembles AMD 1st Gen EPYC: Decentralized 8-Channel DDR5

#1
Crackong
So it is just ONE intel presentation and we got NINE threads so it can cover the Front page and the second page as well.
#2
Nephilim666
Wow slow news day.
I love the "Up to >100MB shared LLC" bit... so up to (upper bound) greater than (lower bound) 100MB. It's clear it's marketeers and not engineers that wrote this.
#3
btarunr
Editor & Senior Moderator
Crackong: So it is just ONE intel presentation and we got NINE threads so it can cover the Front page and the second page as well.
I have like 15 more stories in the pipeline just related to Architecture Day. The presentation itself has like 200 slides and a 2-hour video. It's all I'll be doing today.
#4
Crackong
btarunr: I have like 15 more stories in the pipeline just related to Architecture Day. The presentation itself has like 200 slides and a 2-hour video. It's all I'll be doing today.
OMG it is 24 stories in 24 hours
#5
jeremyshaw
Nephilim666: Wow slow news day.
I love the "Up to >100MB shared LLC" bit... so up to (upper bound) greater than (lower bound) 100MB. It's clear it's marketeers and not engineers that wrote this.
Must be the marketers, since the 3rd, 4th, and 5th slides imply there are two different compute tiles, based on internal layout.

I'd expect that there's actually only one tile layout, and that Intel didn't create two mirrored tile layouts.
#6
dragontamer5788
Hmmm. Zen1 had some pretty bad corner cases before AMD came out with that I/O die. In particular, the "switching network" of the I/O die allows for better utilization of those memory channels.

That being said: EMIB looks more advanced than what AMD did with Zen1 (and indeed, looks more advanced than even today's Zen3. IIRC, the packaging around the I/O die on Zen3 is a simpler passive connection, and nothing like the microbumps on EMIB). So maybe Intel has a "secret weapon" in there.
#7
DeathtoGnomes
btarunr: I have like 15 more stories in the pipeline just related to Architecture Day. The presentation itself has like 200 slides and a 2-hour video. It's all I'll be doing today.
Ctrl + v works well, just ask Intel. :p
#8
TumbleGeorge
Intel is obviously very worried about AMD, since they spend so many resources to zombie us.
#9
ZoneDymo
I know Pat never said this, but I would like it if he got confronted with the whole "glue" comment the company made about AMD Ryzen/Threadripper/Epyc and apologized for it.
#11
AnarchoPrimitiv
Genoa/Zen4 is supposedly moving to a 12-channel memory controller... and while the initial release will top out at 96 cores, they're rumored to have another release with 128 cores. Also, there's supposedly a huge amount of V-Cache in there as well, if I remember correctly. While it'll be interesting to see what the integrated HBM does for Intel, I just don't see their 10 nm chips edging out 5 nm Genoa in performance based on everything I've heard, but we'll see. I just don't see Lisa Su being beaten so easily, but anything can happen.
#12
TheGuruStud
AnarchoPrimitiv: Genoa/Zen4 is supposedly moving to a 12-channel memory controller... and while the initial release will top out at 96 cores, they're rumored to have another release with 128 cores. Also, there's supposedly a huge amount of V-Cache in there as well, if I remember correctly. While it'll be interesting to see what the integrated HBM does for Intel, I just don't see their 10 nm chips edging out 5 nm Genoa in performance based on everything I've heard, but we'll see. I just don't see Lisa Su being beaten so easily, but anything can happen.
Don't worry, intel is wasting their money on that HBM, b/c AMD will have it, too, and you know who will benefit more from it lol
#13
DeathtoGnomes
TheGuruStud: and you know who will benefit more from it lol
wait! I know this one!! :kookoo:
#14
TheGuruStud
DeathtoGnomes: wait! I know this one!! :kookoo:
If it's not obvious, I'm insulting their lack of engineering prowess. Intel continues to be reactionary instead of forward thinking. (I mean AL big little...lol).
#15
spnidel
1st gen ryzen eh? can't believe intel used glue
#16
Oliverda
Let's not forget about this:

[URL='https://www.techpowerup.com/235092/intel-says-amd-epyc-processors-glued-together-in-official-slide-deck']Intel Says AMD EPYC Processors "Glued-together" in Official Slide Deck[/URL]
