Tuesday, August 4th 2020

Penguin Computing Packs 7616 Intel Xeon Platinum Cores in one Server Rack

In the data centers of hyperscalers like Amazon, Google, and Facebook, there is a massive need for more computing power. Because data centers are space-limited facilities, a system that packs as much computing power as possible into a smaller form factor is valuable. Penguin Computing has tackled exactly this problem with its TundraAP platform, designed specifically as a high-density CPU system. The platform is built around the Intel Xeon Platinum 9200, Intel's highest-core-count processor: 56 cores spread across two dies, brought together in a single BGA package.

The Penguin Computing TundraAP system relies on Intel's S9200WK server system. Each 1U compute module carries two of these processors, with a twist: the company implements power disaggregation, designed to handle the heat coming off those 400 W TDP monster processors. The PSU is moved out of the server itself and onto a dedicated power shelf, so the heat from the CPUs doesn't affect the PSUs. The company follows Open Compute Project standards and says the approach improves efficiency by 15%. To cool the chips, Penguin Computing uses direct-to-chip liquid cooling. And if you are wondering how many cores the company can fit in a rack, look no further: it is possible to have as many as 7,616 Xeon Platinum cores in just one rack. That density is enabled by the custom cooling and power-delivery system, which leaves only compute elements inside the servers themselves.
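As a back-of-the-envelope sanity check on the 7,616-core figure, the numbers can be worked through from Intel's published S9200WK configuration (the 2-CPUs-per-half-width-1U-module and 4-modules-per-2U-chassis layout is an assumption based on that reference design, not a detail Penguin Computing has confirmed here):

```python
# Back-of-the-envelope check of the rack density claimed above.
# Assumed layout (Intel S9200WK reference design): 2 CPUs per
# half-width 1U module, 4 modules per 2U chassis, 56 cores and
# 400 W TDP per Xeon Platinum 9282.
CORES_PER_CPU = 56
CPUS_PER_MODULE = 2
MODULES_PER_CHASSIS = 4      # 2x2 in a 2U enclosure
CHASSIS_HEIGHT_U = 2
CPU_TDP_W = 400

total_cores = 7616
cpus = total_cores // CORES_PER_CPU                       # 136 CPUs
chassis = cpus // (CPUS_PER_MODULE * MODULES_PER_CHASSIS) # 17 chassis
rack_units = chassis * CHASSIS_HEIGHT_U                   # 34U used
cpu_power_w = cpus * CPU_TDP_W                            # CPUs alone

print(cpus, chassis, rack_units, cpu_power_w)
# → 136 17 34 54400
```

Under these assumptions, the CPUs alone occupy 34U of a standard 42U rack and draw roughly 54.4 kW, before counting RAM, network fabric, or storage.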
Source: AnandTech

17 Comments on Penguin Computing Packs 7616 Intel Xeon Platinum Cores in one Server Rack

#1
Crackong
And highest heat density?
Posted on Reply
#3
kayjay010101
ncrs: Cray was able to fit 8 EPYC CPUs into 1U in 2018. I'll leave the total number of cores as an exercise for the reader ;)
If we assume EPYC Rome (2019), the math would be 64 (cores per CPU) x 8 (CPUs) x 42 (U in a rack) = 21,504 cores. Goddamn. If we assume the previous gen, we still land at 10,752 cores, which still beats Intel by a country mile. Not to mention AMD does it at 280 W per CPU (7H12) vs. Intel's 400 W (9282).
Posted on Reply
#4
Patriot
ncrs: Cray was able to fit 8 EPYC CPUs into 1U in 2018. I'll leave the total number of cores as an exercise for the reader ;)
Yeah, I was like, oh, I thought only Cray said they were going to make a watercooled Xeon 9200 box for someone that really wanted Intel (the Aurora supercomputer)

Then I see this is a reference design from Intel, because literally no one else wasted their money designing anything for the 9200...
Posted on Reply
#5
PanicLake
...in one rack. OK, but how much space is taken by the external power supplies?
Posted on Reply
#6
Tom Yum
PanicLake: ...in one rack. OK, but how much space is taken by the external power supplies?
And how many racks worth of A/C cooling equipment to dump all that heat?
Posted on Reply
#7
Patriot
PanicLake: ...in one rack. OK, but how much space is taken by the external power supplies?
None? The picture shows a sled that clearly slides into an enclosure, which, according to the article's links, looks like this:

www.intel.com/content/www/us/en/products/servers/server-chassis-systems/server-board-s9200wk-systems.html
Tom Yum: And how many racks worth of A/C cooling equipment to dump all that heat?
Now that is a good question; each generation of watercooled supercomputer gets better... with SGI it used to be 2:1 or 3:1 rack to heat exchanger.
Cray currently appears to do 4 cabinets to 1, which isn't exactly a rack... Cray also makes a 9200 version of this, it just isn't their flagship ;)


It took a while to find this... HPE kinda sucks when it comes to product documentation lookup.
www.hpe.com/us/en/pdfViewer.html?docId=a50002389&parentPage=/us/en/products/compute/hpc/supercomputing/cray-exascale-supercomputer&resourceTitle=HPE+Cray+EX+Liquid-Cooled+Cabinet+for+Large-Scale+Systems+brochure
Posted on Reply
#8
Parn
Presumably the server can't operate without this dedicated PSU. So if it needs its own rack space, how could the server be called 1U?
Posted on Reply
#9
kayjay010101
Parn: Presumably the server can't operate without this dedicated PSU. So if it needs its own rack space, how could the server be called 1U?
There's the chassis, the FC2000, which houses 4 1U modules in a 2x2 configuration, so a 2U chassis holds 4 modules, each 1U in height and half-width. The PSUs take up no additional rack space; they're part of the chassis.
Posted on Reply
#10
zlobby
- How does a 7616-core reactor explode?
- It doesn't!

*famous last words*
Posted on Reply
#11
TheGuruStud
zlobby: - How does a 7616-core reactor explode?
- It doesn't!

*famous last words*
It's not 95W. It's 15,000.
Posted on Reply
#12
Patriot
TheGuruStud: It's not 95W. It's 15,000.
That's honestly not much for a watercooled rack... About the same as a rack of air-cooled GPU servers.
Posted on Reply
#13
TheGuruStud
Patriot: That's honestly not much for a watercooled rack... About the same as a rack of air-cooled GPU servers.
Whoosh lol
Posted on Reply
#14
Patriot
TheGuruStud: Whoosh lol
Oh, you didn't do the math, you just pulled numbers out of your ass, got it...
It's 54,400 watts++
Posted on Reply
#15
TheGuruStud
Patriot: Oh, you didn't do the math, you just pulled numbers out of your ass, got it...
It's 54,400 watts++
Yikes...

No, it's 15,000 roentgen.
Posted on Reply
#16
Patriot
TheGuruStud: Yikes...

No, it's 15,000 roentgen.
Ah, the reference would be "it's not 3, it's 15,000", but ok.

These fuckers def require a specialized space... it's pretty hard to power multiple racks at >30 kW each. Unless these are detuned/turbo-disabled, a rack can easily exceed 60 kW, as 54.4 kW is just the CPUs, not RAM, chipset, or network fabric...
Posted on Reply
#17
zlobby
TheGuruStud: It's not 95W. It's 15,000.
They gave them the propaganda numbers!
Patriot: Ah, the reference would be "it's not 3, it's 15,000", but ok.

These fuckers def require a specialized space... it's pretty hard to power multiple racks at >30 kW each. Unless these are detuned/turbo-disabled, a rack can easily exceed 60 kW, as 54.4 kW is just the CPUs, not RAM, chipset, or network fabric...
Our high-range Watt-meter just arrived. We could cover one of our trucks with lead shielding and mount the Watt-meter on the front.

Intel: We did everything right!
Apple: Do you taste Metal?
Posted on Reply