Wednesday, March 4th 2020

AMD Scores Another EPYC Win in Exascale Computing With DOE's "El Capitan" Two-Exaflop Supercomputer

AMD has been on a roll in consumer, professional, and exascale computing environments, and it has just snagged itself another hugely important contract. The US Department of Energy (DOE) has announced the winner for its next-gen exascale supercomputer, which aims to be the world's fastest. Dubbed "El Capitan", the new supercomputer will be powered by AMD's next-gen EPYC "Genoa" processors (Zen 4 architecture) and Radeon GPUs. This is the first such exascale contract where AMD is the sole purveyor of both CPUs and GPUs; AMD's other EPYC design win, in the Cray Shasta, pairs its processors with NVIDIA graphics cards.

El Capitan represents a $600 million investment, to be deployed in late 2022 and operational in 2023. Next-gen proposals from AMD, Intel, and NVIDIA were undoubtedly presented, with AMD winning the shootout in a big way. While the DOE initially projected El Capitan to provide some 1.5 exaflops of computing power, it has now revised its performance goals toward a full 2-exaflop machine. El Capitan will thus be ten times faster than the current leader of the supercomputing world, Summit.

AMD's ability to provide an ecosystem with both CPUs and GPUs very likely played a key part in the DOE's choice for the project, and all but guarantees that the contractor was left very satisfied with AMD's performance projections for both Zen 4 and its future GPU architectures. AMD's EPYC Genoa will feature support for next-gen memory, implying DDR5 or later, as well as unspecified next-gen I/O connections. AMD's graphics cards aren't detailed at all - they're just referred to as being part of the Radeon Instinct lineup featuring a "new compute architecture".

Another crucially important part of this design win is that AMD has redesigned its 3rd Gen Infinity Fabric (which supports a 4:1 ratio of GPUs to CPUs) to provide data coherence between CPU and GPU - effectively reducing the need for data to move back and forth between them as it is being processed. With relevant data mirrored across both pieces of hardware through coherent, Infinity Fabric-backed memory, computing efficiency can be significantly improved (since moving data usually costs more power than the actual calculations themselves), and that too must have played a key part in the selection.
El Capitan will also feature a future version of Cray's proprietary Slingshot network fabric for increased speed and reduced latency. All of this will be tied together with AMD's ROCm open software platform for heterogeneous programming, to maximize performance of the CPUs and GPUs in OpenMP environments. ROCm recently got a healthy $100 million shot in the arm, also courtesy of the DOE, which has deployed a Center of Excellence at Lawrence Livermore National Laboratory (part of the DOE) to help develop ROCm. This means AMD's software arm, too, is flexing its muscles - for this kind of deployment, at least - which has always been a point of contention against rival NVIDIA, a company that has typically invested much more in its software stack than AMD - and hence the reason NVIDIA has been such a big player in the enterprise and compute segments until now.
As for why NVIDIA was shunned, it likely has nothing to do with its next-gen designs offering less performance than what AMD brought to the table. If anything, my educated guess is that the 3rd Gen Infinity Fabric and its memory coherence was the deciding factor in choosing AMD GPUs over NVIDIA's: the green company has nothing like it to offer - it doesn't play in the x86 CPU space, and can't offer that level of platform interconnectedness. Whatever the reason, this is yet another big win for AMD, which keeps muscling Intel out of very, very lucrative positions.
Source: Tom's Hardware
Add your own comment

35 Comments on AMD Scores Another EPYC Win in Exascale Computing With DOE's "El Capitan" Two-Exaflop Supercomputer

#1
thesmokingman
That is sick, 10x faster than Summit... crazy.
Posted on Reply
#2
xkm1948
What "future GPU" are we talking about?

I guess whatever DOE uses probably does not rely on CUDA or Tensorflow.
Posted on Reply
#3
ARF
xkm1948
What "future GPU" are we talking about?
Arcturus on N5.
Posted on Reply
#4
thesmokingman
xkm1948
What "future GPU" are we talking about?

I guess whatever DOE uses probably does not rely on CUDA or Tensorflow.
If you read closely, the ability to reduce memory swapping between GPU and CPU memory was a primary benefit. The APIs don't matter since they will be tailored for them.
Posted on Reply
#5
Mark Little
xkm1948
What "future GPU" are we talking about?

I guess whatever DOE uses probably does not rely on CUDA or Tensorflow.
CUDA is more for companies like mine, where we have 10 people and make biomedical imaging devices. CUDA helps us speed up the image reconstruction on the GPU versus the CPU. We are too small to make our own APIs. Giant supercomputer projects have custom, tailor-made software.
Posted on Reply
#6
Bruno Vieira
xkm1948
What "future GPU" are we talking about?

I guess whatever DOE uses probably does not rely on CUDA or Tensorflow.
It may have support for Tensorflow. Not CUDA, for sure
Posted on Reply
#7
silentbogo
xkm1948
What "future GPU" are we talking about?
If rumors are true, tomorrow AMD will have a Financial Analyst Day, and (taken with two shovels of salt) they should reveal some more info on RDNA2 and HPC.
Posted on Reply
#8
FreedomEclipse
~Technological Technocrat~
But can it multi-virtual machine run crysis?
Posted on Reply
#9
xkm1948
Mark Little
CUDA is more for companies like mine where we have 10 people and make biomedical imaging devices. CUDA helps us speed up the image reconstruction on the GPU versus the CPU. We are too small to make our own APIs. Giant supercomputer projects have custom tailor made software.
Not true. All major research universities as well as national labs have a high rate of CUDA-based GPU compute deployment. Training ML/DL datasets for use in genomics absolutely relies on CUDA acceleration.

Unless one has massive amounts of resources to invest in building something from the ground up in OpenCL, CUDA is THE best option for accelerated computing.

TBH I prefer Vulkan compute over OpenCL. Better in every single way
Posted on Reply
#10
Beertintedgoggles
xkm1948
Unless one have massive amount of resource to invest in building something from ground up in OpenCL, CUDA is THE best option for accelerated computing.
$600 million is fairly massive
Posted on Reply
#11
Cheeseball
Not a Potato
xkm1948
Not true. All major research universities as well as national labs have high rate of CUDA based GPU compute deployment. Training for ML/DL dataset for use in genomics absolutely relies on CUDA acceleration.
Can confirm. The major universities in Pittsburgh all have dedicated ML labs and mathematical science centers with mostly CUDA servers from Exxact and Dell's PowerEdge.
Posted on Reply
#12
Logoffon
Is it running on macOS 10.11?
Posted on Reply
#13
Mark Little
Cheeseball
Can confirm. The major universities in Pittsburgh all have dedicated ML labs and mathematical science centers with mostly CUDA servers from Exxact and Dell's PowerEdge.
Universities don't work like corporations. They are made up of small research groups run by PIs (professors). Usually the administration of an entire university doesn't decide on a university-wide API. Since each of these small research groups is responsible for securing its own funding after an initial lab startup fund from the university (which isn't much), they do not have the money to write their own APIs (unless that IS their research).
Posted on Reply
#14
gamefoo21
I apologize for posting saucy.

Guess CUDA and a locked-in ecosystem isn't all that after all.

Also likely means AMD is bringing AI processing to the next gen stuff.

Just because universities still use Apple's in their media programs doesn't mean they are the best. It's just what they've used and still do. Just saying...

Also, shouldn't Vulkan's GPGPU stuff be better than OpenCL, since it basically replaced OpenGL? OpenCL is ancient by this point.
Posted on Reply
#15
Cheeseball
Not a Potato
Mark Little
Universities don't work like corporations. They are made up of small research groups run by PIs (professors). Usually the administration of an entire university doesn't decide on a university-wide API. Since each of these small research groups is responsible for securing its own funding after an initial lab startup fund from the university (which isn't much), they do not have the money to write their own APIs (unless that IS their research).
You're right about that. Corporations create these supercomputers with a major goal in mind, so they would need custom APIs to get to that goal efficiently. But what @xkm1948 is getting at is that CUDA can scale from the basic enthusiast all the way to the [big] corporations that don't have the time (or need) to have a custom API developed for them.

If anything, those same corporations would employ researchers from these universities. :laugh:
gamefoo21
I apologize for posting saucy.

Guess CUDA and a locked in eco-system isn't all that afterall.

Also likely means AMD is bringing AI processing to the next gen stuff.
Why do you keep saying CUDA is in a locked-in ecosystem? You can run CUDA code on other hardware (even on x86 and ARM, if you're desperate) using HIP through ROCm, but you need to translate it (automated, not manual, conversion) to avoid any NVIDIA extensions. This is currently a lot more efficient than what can be done in OpenCL 2.1.

The investment in ROCm is an advantage for everyone since all compute APIs will use this. Thank AMD for pulling this off.
gamefoo21
Just because universities still use Apple's in their media programs doesn't mean they are the best. It's just what they've used and still do. Just saying...

Also shouldn't Vulkan's GP-GPU stuff be better than OpenCL since it replaced OpenGL basically. OpenCL is ancient by this point.
They still use Apple because of deals (think 60%+ hardware and support discounts) offered by Apple. Also hardware deployment of Mac minis and Pros depends on department use cases.

Vulkan is aimed at rendering (and why any GPGPU code using Vulkan is on the graphics pipeline), which is why it succeeds OpenGL. OpenCL is meant for GPGPU use.
Posted on Reply
#16
Patriot
Cray is a touch insane... to hit the 30 MW usage and 1.5 exaflops and fit it in 100 cabinets, they have 4 CPUs/16 GPUs per 1U of space in a cabinet...
averaging >5 kW of power per U of compute space... 16,000 CPUs and 64,000 GPUs... :D
Posted on Reply
#17
timta2
That's a played out name lacking in creativity.
Posted on Reply
#18
Flanker
When spending all that money on customized hardware, they had better develop customized APIs to make full use of it lol
Posted on Reply
#19
mtcn77
xkm1948
What "future GPU" are we talking about?

I guess whatever DOE uses probably does not rely on CUDA or Tensorflow.
Fast Forward 1 used NVIDIA; this is the Fast Forward 2 project that they are conjuring.

The Fast Forward 2 project is heterogeneous. One mistake: we thought it would be comprised of APUs, whereas EPYC + Radeon Instinct with heterogeneous (coherent) memory is the underlying hardware.
Posted on Reply
#20
xkm1948
Mark Little
Universities don't work like corporations. They are made up of small research groups run by PIs (professors). Usually the administration of an entire university doesn't decide on a university-wide API. Since each of these small research groups is responsible for securing its own funding after an initial lab startup fund from the university (which isn't much), they do not have the money to write their own APIs (unless that IS their research).
In the ARM server news article I mentioned that my institution did get some ARM-based nodes a few years back, at the request of the Computer Science department and funded by the state budget. The CS department was looking to bring x86-based software over to the ARM server side. They made some progress, but ultimately the cost in time, as well as the piss-poor performance of the ARM cluster, caused them to can the project.

Superior technology speaks louder than anything, which is also the reason all of our new nodes added are EPYC2 based.
Posted on Reply
#21
steen
xkm1948
Not true. All major research universities as well as national labs have high rate of CUDA based GPU compute deployment. Training for ML/DL dataset for use in genomics absolutely relies on CUDA acceleration.

Unless one have massive amount of resource to invest in building something from ground up in OpenCL, CUDA is THE best option for accelerated computing.
CUDA is a toolchain that makes it easier for the semi-skilled... ;) Nv just practices the Apple model of separable markets. Perennially cash strapped departments get reduced price HW with decent devrel support for academic/student research that can be generated quickly with reproducible results. Nv gets exposure via publications, students require CUDA once back ITRW, Nv gets to sell Quadros/Teslas to business/govt. Path of least resistance, virtuous circle, only game in town? Take your pick.
Cheeseball
The investment in ROCm is an advantage for everyone since all compute APIs will use this. Thank AMD for pulling this off.
Let's see what resources are put into ROCm now that AMD has some income to fund dev. Nv has many years (decade) lead with their better fleshed out ecosystem. With nn/AI, Dnn/Dlops will feature heavily on upcoming IHV releases.
Posted on Reply
#23
dicktracy
They picked the cheapest but good enuff and not the absolute highest performing options.
Posted on Reply
#24
Patriot
dicktracy
They picked the cheapest but good enuff and not the absolute highest performing options.
Hardly. Cray doesn't do cheap, and doesn't use reference designs; 100 cabinets is insane density, and an Intel solution wouldn't have fit the power budget for the performance requirement.
And NVIDIA doesn't like custom... they want you to use DGX reference boards. This is far denser than anything that could be achieved any other way.
While ROCm is still not at CUDA parity, it's getting there... And their HIP C++ compiler is pretty baller: you can take CUDA code, translate it with AMD's tool, and it runs faster than native CUDA on an NVIDIA card...

Also... while ROCm's support lagging a version behind on things like TensorFlow may be annoying and unacceptable elsewhere... it's faster than any government code is pushed out...
Posted on Reply
#25
silentbogo
FreedomEclipse
But can it multi-virtual machine run crysis?
With raytracing!
steen
Let's see what resources are put into ROCm now that AMD has some income to fund dev. Nv has many years (decade) lead with their better fleshed out ecosystem. With nn/AI, Dnn/Dlops will feature heavily on upcoming IHV releases.
That's not the case today. There are GPGPU APIs that can do the same and have an expansive feature set and ecosystem. Heck, before yesterday I didn't even know that OpenMP had already implemented GPU offloading (the last time I tinkered with it was 5-6 years ago).
The main reason CUDA has ruled HPC, and GPGPU compute in general, is being fast. The other aspects are just a consequence of the first one.
Posted on Reply