Thursday, October 28th 2021

Intel Aurora Supercomputer Will Touch 2 ExaFLOPs of Computational Power

Intel's Aurora supercomputer is the product of a $500 million contract between Intel and the US Department of Energy to deliver an exascale supercomputer for Argonne National Laboratory. The project aims to build a machine capable of sustaining over one ExaFLOP of compute, and of reaching two ExaFLOPs of peak performance once the system is fully installed and powered on. The contract bound Intel to create accelerators powerful enough to hit that magic number, but left Intel room to do a little bit extra. With the Ponte Vecchio GPU behind the project, it seems the GPU is performing better than expected.

According to Intel CEO Pat Gelsinger, the system will exceed 2 ExaFLOPs at peak and land a bit below that in sustained workloads. Preliminary calculations by The Next Platform estimate 2.43 ExaFLOPs peak and around 1.7 ExaFLOPs sustained in double-precision floating-point (FP64) math. The system will pair Intel Xeon Sapphire Rapids processors with HBM memory alongside the powerful Ponte Vecchio GPU, a 47-tile design packing over 100 billion transistors.
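For a rough sense of what those estimates imply, here is a back-of-envelope sketch in Python. Only the figures quoted from The Next Platform above are used; nothing else is assumed:

```python
# The Next Platform's estimates for Aurora, in FP64 ExaFLOPs
peak_eflops = 2.43
sustained_eflops = 1.7

# Sustained-to-peak ratio, a common efficiency figure for HPC systems
efficiency = sustained_eflops / peak_eflops
print(f"Sustained/peak efficiency: {efficiency:.0%}")  # roughly 70%
```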
Source: The Next Platform

21 Comments on Intel Aurora Supercomputer Will Touch 2 ExaFLOPs of Computational Power

#1
P4-630
So is it record breaking?
#2
Minus Infinity
P4-630: So is it record breaking?
I think it is. Tesla's Dojo supercomputer is around 1 exaflop. Exaflops are a huge achievement; most of the current fastest supercomputers are about 0.5 exaflops.
#3
john_
Probably they threw a few thousand more free Xeons into the agreement as a "we are sorry for the huge delay".
#4
demu
I guess after 2 more years we'll get a press release telling us about an upgrade to 3.5 ExaFLOPs.
And still nothing delivered.
#8
IceShroom
Minus Infinity: I think it is. Tesla's Dojo supercomputer is around 1 exaflop. Exaflops are a huge achievement, most current fastest supercomputers are about 0.5 exaflops.
Wrong. The current fastest supercomputers do ~0.5 ExaFLOPs of FP64, NOT INT4/INT8/FP16, which is what Tesla Dojo quotes.
1 ExaFLOP of INT4/INT8/FP16 is not equal to 1 ExaFLOP of FP64. FP64 is the measure of a supercomputer, not AI INT4/INT8/FP16.
#9
Dragokar
Wasn't Intel the one that claimed they don't need to be that good at supercomputers anymore? :D
#10
DeathtoGnomes
Intel's Aurora supercomputer is a $500 million contract with the US Department of Energy
With AMD the same number of processors would cost half that, and have a third more cores... :p
#11
ScaLibBDP
To AleksandarK

>>...workloads at dual-precision FP64 math...

There is no "dual-precision FP64" math; there is double-precision (53-bit significand) floating-point arithmetic, using the 64-bit double data type.

As a software engineer, I constantly see that too many people who write articles, blog posts, etc. on the Web do not fully understand the fundamental concepts of floating-point arithmetic, especially those related to precision.

Please watch this YouTube video:

Accuracy of Floating Point arithmetic defined by IEEE 754 Standard ( VTR-095 )
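A quick Python sketch (standard library only) shows the 53-bit limit in practice:

```python
import sys

# IEEE 754 double precision: 53-bit significand (52 stored bits + 1 implicit)
print(sys.float_info.mant_dig)             # 53

# Above 2**53, consecutive integers are no longer all exactly representable
print(float(2**53) == float(2**53 + 1))    # True: 2**53 + 1 rounds back to 2**53

# Machine epsilon: the gap between 1.0 and the next representable double
print(sys.float_info.epsilon == 2.0**-52)  # True
```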
#12
AleksandarK
ScaLibBDP: To AleksandarK

>>...workloads at dual-precision FP64 math...

There is no "dual-precision FP64" math; there is double-precision (53-bit significand) floating-point arithmetic, using the 64-bit double data type.

As a software engineer, I constantly see that too many people who write articles, blog posts, etc. on the Web do not fully understand the fundamental concepts of floating-point arithmetic, especially those related to precision.

Please watch this YouTube video:

Accuracy of Floating Point arithmetic defined by IEEE 754 Standard ( VTR-095 )
Excuse me for the wrong choice of words. I understand the standard very well, as I was actually implementing it in hardware, so please excuse my poor choice of words for this :)
#13
_larry
Exa is right above Peta, right? Mega < Giga < Tera < Peta < Exa?

How much data would need to be constantly moving to create an alternate reality like the Matrix? It seems we are heading that direction..
#14
docnorth
For us needing moaaar cores...
#15
Minus Infinity
IceShroom: Wrong. The current fastest supercomputers do ~0.5 ExaFLOPs of FP64, NOT INT4/INT8/FP16, which is what Tesla Dojo quotes.
1 ExaFLOP of INT4/INT8/FP16 is not equal to 1 ExaFLOP of FP64. FP64 is the measure of a supercomputer, not AI INT4/INT8/FP16.
LOL, which part is wrong? Did I say the Dojo was faster?
#17
ScaLibBDP
AleksandarK: Excuse me for the wrong choice of words. I very much understand the standard as I was actually doing it in hardware, so please excuse my poor choice of words for this :)
I really appreciate your response. Thank you!
_larry: Exa is right above Peta right? Mega < Giga < Tera < Peta < Exa ?

How much data would need to be constantly moving to create an alternate reality like the Matrix? It seems we are heading that direction..
>>...Exa is right above Peta right? Mega < Giga < Tera < Peta < Exa ?

That's correct...

Orders of Magnitude for FLOPs are as follows:

MegaFLOPs MFLOPs 10^06 FLOPs
GigaFLOPs GFLOPs 10^09 FLOPs
TeraFLOPs TFLOPs 10^12 FLOPs
PetaFLOPs PFLOPs 10^15 FLOPs
ExaFLOPs EFLOPs 10^18 FLOPs
ZettaFLOPs ZFLOPs 10^21 FLOPs
YottaFLOPs YFLOPs 10^24 FLOPs

The Department of Energy ( DOE ) of the US is already thinking about the next generation of supercomputing systems after ExaScale ( 10^18 FLOPs ), that is, ZettaScale ( 10^21 FLOPs )...
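The table above maps directly to code. A small, hypothetical helper that formats a raw FLOPs count with these prefixes might look like this in Python (the name `format_flops` is mine, not from any real library):

```python
# SI-style prefixes from the table above, largest first
PREFIXES = [
    ("YFLOPs", 1e24), ("ZFLOPs", 1e21), ("EFLOPs", 1e18),
    ("PFLOPs", 1e15), ("TFLOPs", 1e12), ("GFLOPs", 1e9), ("MFLOPs", 1e6),
]

def format_flops(flops: float) -> str:
    """Format a raw FLOPs figure with the largest fitting prefix."""
    for name, scale in PREFIXES:
        if flops >= scale:
            return f"{flops / scale:.2f} {name}"
    return f"{flops:.0f} FLOPs"

print(format_flops(2.43e18))  # "2.43 EFLOPs" -- Aurora's estimated FP64 peak
print(format_flops(1e21))     # "1.00 ZFLOPs" -- the DOE's next target scale
```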
#18
IceShroom
Minus Infinity: LOL, which part is wrong? Did I say the Dojo was faster?
You are comparing the FP64 performance of a supercomputer to the INT4/INT8/FP16 performance of Tesla Dojo. Not the same.
#19
phill
Freebird: and the "new" Aurora is going to use 60 MegaWatts of power compared to the 29 MegaWatts used by the 1.5 ExaFLOP (AMD) Frontier system that should be delivered by the end of this year... the cost to support such an immense load is more than just the extra electricity cost.
You'd also think that the Department of Energy might consider its consumption to do the job it was meant for... I mean, if 1.5 ExaFLOPs for AMD is 29 Megawatts, it makes me wonder why they chose Intel if it's using double the energy to do something 0.5 ExaFLOPs slower.....
#20
dragontamer5788
P4-630: So is it record breaking?
Pretty much every strategic supercomputer built will break records.

Frontier is 1.5 ExaFLOPs IIRC and will be launching "soon". I forgot where El Capitan is at. 2 Exa should be competitive and, if not the new record, at least close to it.
phill: You'd also think that the Department of Energy might consider its consumption to do the job it was meant for... I mean, if 1.5 ExaFLOPs for AMD is 29 Megawatts, it makes me wonder why they chose Intel if it's using double the energy to do something 0.5 ExaFLOPs slower.....
Strategic supercomputer. Part of the strategy is ensuring multiple vendors have a chance at making something this big. It's not about picking the best. It's about fostering competition and having a big ecosystem of competing vendors. Intel may be losing right now, but in 10 years, Intel may come out with the best tech again. Giving Intel the chance to build a supercomputer of this size/scale will foster more competition in the long run.

Similarly, AMD, NVidia, and IBM have been given chances to make supercomputers even back in the days when Intel was the undisputed king.

The Department of Energy has a lot of power (nuclear power, hydro power, etc. etc.). They can afford these absurd costs in energy.
#21
phill
dragontamer5788: Pretty much every strategic supercomputer built will break records.

Frontier is 1.5 ExaFLOPs IIRC and will be launching "soon". I forgot where El Capitan is at. 2 Exa should be competitive and, if not the new record, at least close to it.

Strategic supercomputer. Part of the strategy is ensuring multiple vendors have a chance at making something this big. It's not about picking the best. It's about fostering competition and having a big ecosystem of competing vendors. Intel may be losing right now, but in 10 years, Intel may come out with the best tech again. Giving Intel the chance to build a supercomputer of this size/scale will foster more competition in the long run.

Similarly, AMD, NVidia, and IBM have been given chances to make supercomputers even back in the days when Intel was the undisputed king.

The Department of Energy has a lot of power (nuclear power, hydro power, etc. etc.). They can afford these absurd costs in energy.
I'm just saying it's a shame that they don't think about the energy use; we're all meant to be doing our bit, after all.....