Monday, May 30th 2022

ORNL Frontier Supercomputer Officially Becomes the First Exascale Machine

The supercomputing world has chased one performance barrier after another over the years: MegaFLOP, GigaFLOP, TeraFLOP, PetaFLOP, and now ExaFLOP computing. Today, for the first time, an officially Exascale-class machine has arrived, housed at Oak Ridge National Laboratory. Called Frontier, the system itself is not exactly new; we have known about its planned specifications for months. What is new is that it is now complete and successfully running at ORNL's facilities. Based on the HPE Cray EX235a architecture, the system pairs 3rd Gen AMD EPYC 64-core processors clocked at 2 GHz with AMD Instinct MI250X GPUs, for a combined total of 8,730,112 cores.

On today's TOP500 list, the system overtakes Fugaku to become the fastest supercomputer on the planet. It delivers a sustained HPL (High-Performance Linpack) score of 1.102 ExaFlop/s with a power efficiency rating of 52.23 GigaFLOPS/watt. In the HPL-AI benchmark, which measures a system's mixed-precision AI capabilities, Frontier outputs 6.86 ExaFLOPs. That number alone does not make a machine Exascale, however: AI workloads run in reduced-precision INT8/FP16/FP32 formats, while the official results are measured in FP64 double precision. Fugaku, the previous number one, scores about 2 ExaFLOPs in HPL-AI while delivering "only" 442 PetaFlop/s in the FP64 HPL benchmark.
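As a quick sanity check, those two headline figures imply the machine's power draw under load. A back-of-the-envelope sketch in Python, using only the numbers quoted above:

hpl_score = 1.102e18          # sustained HPL result, FLOP/s (FP64)
efficiency = 52.23e9          # rated efficiency, FLOPS per watt
power_mw = hpl_score / efficiency / 1e6
print(f"Implied power draw: {power_mw:.1f} MW")   # roughly 21 MW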
Source: TOP500

21 Comments on ORNL Frontier Supercomputer Officially Becomes the First Exascale Machine

#2
Kohl Baas
AnarchoPrimitiv: Skynet is now operational.
Skynet is not a machine. It's a program.
Posted on Reply
#3
Rhein7
Reading the source was even more interesting.
No. 1 on the GREEN500 is actually a Frontier test system, with Frontier itself at No. 2. No. 3 on that list is a newcomer, and all three are actually powered by 3rd Gen EPYC, which is impressive for AMD.
Posted on Reply
#4
Space Lynx
Astronaut
Rhein7: Reading the source was even more interesting.
No. 1 on the GREEN500 is actually a Frontier test system, with Frontier itself at No. 2. No. 3 on that list is a newcomer, and all three are actually powered by 3rd Gen EPYC, which is impressive for AMD.
The article also states China has two or three Exascale supercomputers up and running, but for some reason they are not on this list, and those in China are supposedly more powerful than this one in the USA: 1.3 ExaFlops rather than the 1.1 of the new US machine.

Probably doesn't matter, just thought it odd how the article casually mentions that...
Posted on Reply
#5
SDR82
Ok, so this explains the chip shortage then /s ;):rolleyes:
Posted on Reply
#6
DeathtoGnomes
8,730,112 / 64, assuming physical cores only without threads, is 136,408 CPUs.

8,730,112 / 128, assuming cores and threads counted together, is 68,204 CPUs.

TBH, not really sure how to count the number of CPUs.
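Those two divisions as a runnable sketch (worth noting that the TOP500 "cores" figure lumps CPU cores and GPU compute units together, so neither result is a true CPU-socket count):

total_cores = 8_730_112          # TOP500 core count for Frontier
print(total_cores // 64)         # 136408: one 64-core EPYC per 64 cores
print(total_cores // 128)        # 68204: if SMT threads were counted as cores
# Both figures overstate the CPU count, since most of the 8.7 million
# "cores" are compute units on the MI250X GPUs, not CPU cores.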
Posted on Reply
#7
Daven
Intel has been reduced to just one system in the top ten. But not just by AMD.

#1 AMD
#2 Fujitsu
#3 AMD
#4 IBM/Nvidia
#5 IBM/Nvidia
#6 NRCPC
#7 AMD/Nvidia
#8 AMD/Nvidia
#9 Intel/Matrix 2000
#10 AMD

I love our new heterogeneous computing world where no one dominates!
Posted on Reply
#8
PCL
CallandorWoT: The article also states China has two or three Exascale supercomputers up and running, but for some reason they are not on this list, and those in China are supposedly more powerful than this one in the USA: 1.3 ExaFlops rather than the 1.1 of the new US machine.

Probably doesn't matter, just thought it odd how the article casually mentions that...
Those systems haven't submitted official benchmark results, and proxy submissions suggest they don't hit their claimed performance numbers.
Posted on Reply
#12
Patriot
mechtech: Was waiting for the "can it run Crysis" comment lol
I am just wondering how much trouble HPE is in till it hits that 1.5 ExaFLOP mark...
Posted on Reply
#13
Prima.Vera
That's great. Can we now know what it is used for, and other useful details? ;)
Posted on Reply
#14
Vayra86
Prima.Vera: That's great. Can we now know what it is used for, and other useful details? ;)
Calculating how long this computer needs to run to burn enough fossil fuels to kill the planet

Result: please build a bigger one of me.
Posted on Reply
#15
qubit
Overclocked quantum bit
Yeah, all that power, but it still can't play Crysis. ;)
Posted on Reply
#16
r9
CallandorWoT: The article also states China has two or three Exascale supercomputers up and running, but for some reason they are not on this list, and those in China are supposedly more powerful than this one in the USA: 1.3 ExaFlops rather than the 1.1 of the new US machine.

Probably doesn't matter, just thought it odd how the article casually mentions that...
I'm sure they also have a warp engine and a time machine.
Posted on Reply
#17
R0H1T
Vayra86: Calculating how long this computer needs to run to burn enough fossil fuels to kill the planet

Result: please build a bigger one of me.
You mean like mining the next >hype<Coin :slap:
Posted on Reply
#18
Space Lynx
Astronaut
Vayra86: Calculating how long this computer needs to run to burn enough fossil fuels to kill the planet

Result: please build a bigger one of me.
Humans know not what they do. The yield increases in agriculture that allowed for the population boom are unsustainable; I expect these supercomputers will tell us that soon enough.
Posted on Reply
#19
Blaeza
I want to see it run Cinebench. Think it may beat my R5 3600.
Posted on Reply
#20
Leiesoldat
lazy gamer & woodworker
I work in the project office for the Exascale Computing Project (ECP), which is designing the applications and software to run on Frontier. In the past, supercomputers were built with little to no input from the scientists and software developers who would need to use the hardware. This time around, Frontier and the ECP worked closely together on the software and its accompanying bugs (software and applications will be proven out in the coming months). I get a lot of questions about what actually runs on a supercomputer at a national lab, with most people thinking it is just nuclear weapons modeling; there is modeling of that nature, but most of the computing runs are for the Office of Science (the DOE is split between the Office of Science and the NNSA [National Nuclear Security Administration]).
  • Chemistry and Materials (This area focuses on simulation capabilities that attempt to precisely describe the underlying properties of matter needed to optimize and control the design of new materials and energy technologies.)
    • LatticeQCD - Validate Fundamental Laws of Nature
    • NWChemEx - Tackling Chemical, Materials, and Biomolecular Challenges in Exascale: Catalytic Conversion of Biomass-derived Alcohols
    • GAMESS - General Atomic and Molecular Electronic Structure System: Biofuel Catalyst Design
    • EXAALT - Molecular Dynamics at Exascale: Simultaneously address time, length, and accuracy requirements for predictive microstructural evolution of materials
    • ExaAM - Transforming Additive Manufacturing through Exascale Simulation: Additive Manufacturing of Qualifiable Metal Parts
    • QMCPACK - Quantum Mechanics at Exascale: Find, predict, and control materials and properties at quantum level
  • Co-Design (These projects target crosscutting algorithmic methods that capture the most common patterns of computation and communication [known as motifs] in the ECP applications.)
    • Adaptive Mesh Refinement - Adaptive mesh refinement (AMR) is like a computational microscope; it allows scientists to “zoom in” on particular regions of space that are more interesting than others.
    • Efficient Exascale Discretizations - Efficient exploitation of exascale architectures requires a rethink of the numerical algorithms used in large-scale applications of strategic interest to the DOE. Many large-scale applications employ unstructured finite element discretization methods—the process of dividing a large simulation into smaller components in preparation for computer analysis—where practical efficiency is measured by the accuracy achieved per unit computational time.
    • Online Data Analysis and Reduction at the Exascale
    • Particle-Based Applications - Particle-based simulation approaches are ubiquitous in computational science and engineering. The “particles” may represent, for example, the atomic nuclei of quantum and classical molecular dynamics methods or gravitationally interacting bodies or tracer particles in N-body simulations.
    • Efficient Implementation of Key Graph Algorithms
    • Exascale Machine Learning Technologies
    • Proxy Applications - Proxy applications (proxy apps) are small, simplified codes that allow application developers to share important features of larger production applications without forcing collaborators to assimilate large and complex code bases.
  • Data Analytics and Optimization
    • ExaSGD - Optimizing Stochastic Grid Dynamics at Exascale: Reliable and Efficient Planning of the Power Grid
    • CANDLE - Exascale Deep Learning–Enabled Precision Medicine for Cancer: Accelerate and Translate Cancer Research, Develop pre-clinical drug response models, predict mechanisms of RAS/RAF driven cancers, and develop treatment strategies
    • ExaBiome - Exascale Solutions for Microbiome Analysis: Metagenomics for Analysis of Biogeochemical Cycles
    • ExaFEL - Data Analytics at Exascale for Free Electron Lasers: Light Source–Enabled Analysis of Protein and Molecular Structures and Design
  • Earth and Space Science
    • ExaStar - Exascale Models of Stellar Explosions: Demystify Origin of Chemical Elements
    • ExaSky - Computing at the Extreme Scales: Cosmological Probe of the Standard Model of Particle Physics
    • EQSIM - High-Performance, Multidisciplinary Simulations for Regional-Scale Earthquake Hazard/Risk Assessments: Earthquake Hazard Risk Assessment
    • Subsurface - Exascale Subsurface Simulator of Coupled Flow, Transport, Reactions, and Mechanics: Carbon Capture, Fossil Fuel Extraction, Waste Disposal
    • E3SM-MMF - Cloud-Resolving Climate Modeling of the Earth’s Water Cycle: Accurate Regional Impact Assessment in Earth Systems (modeling cloud formations for all of North America for instance)
  • Energy
    • ExaWind - Exascale Predictive Wind Plant Flow Physics Modeling: Turbine Wind Plant Efficiency
    • Combustion-PELE - High-efficiency, Low-emission Combustion Engine Design: Advance Understanding of Fundamental Turbulence-Chemistry Interactions in Device-relevant Conditions
    • MFIX-Exa - Performance Prediction of Multiphase Energy Conversion Device: Scale-up of Clean Fossil Fuel Combustion
    • WDMApp - High-fidelity Whole Device Modeling of Magnetically Confined Fusion Plasmas: Prepare for the International Thermonuclear Experimental Reactor (ITER) [a fusion reactor in the south of France] experiments and increase return on investment (ROI) of validation data and understanding; prepare for beyond-ITER devices
    • ExaSMR - Coupled Monte Carlo Neutronics and Fluid Flow Simulation of Small Modular Reactors: Design and Commercialization of Small Modular Reactors
    • WarpX - Exascale Modeling of Advanced Particle Accelerators: Plasma Wakefield Accelerator Design
  • National Security
    • Ristra - Multi-physics simulation tools for weapons-relevant applications: The Ristra project is developing new multi-physics simulation tools that address emerging HPC challenges of massive, heterogeneous parallelism using novel programming models and data management.
    • MAPP - Multi-physics simulation tools for High Energy Density Physics (HEDP) and weapons-relevant applications for DOE and DoD
    • EMPIRE & SPARC - EMPIRE addresses electromagnetic plasma physics, and SPARC addresses reentry aerodynamics
  • Data and Visualization
    • ADIOS - Support efficient I/O and code coupling services
    • DataLib - Support efficient I/O, I/O monitoring and data services
    • VTK-m - Provide VTK-based scientific visualization software that supports shared memory parallelism
    • VeloC/SZ - Develop two software products: VeloC checkpoint restart and SZ lossy compression with strict error bounds
    • ExaIO - Develop efficient system-topology- and storage-hierarchy-aware HDF5 and UnifyFS parallel I/O libraries
    • Alpine/ZFP - Deliver in situ visualization and analysis algorithms, infrastructure and data reduction of floating-point arrays
  • Development Tools
    • EXA-PAPI++ - Develop a standardized interface to hardware performance counters
    • HPCToolkit - Develop an HPC Tool Kit for performance analysis
    • PROTEAS-TUNE - Develop a software tool chain for emerging architectures
    • SOLLVE - Develop/enhance OpenMP programming model
    • FLANG - Develop a Fortran front-end for LLVM
  • Mathematical Libraries
    • xSDK4ECP - Create a value-added aggregation of DOE math libraries to combine usability, standardization, and interoperability
    • PETSc/TAO - Deliver efficient libraries for sparse linear and nonlinear systems of equations and numerical optimization
    • STRUMPACK/SuperLU - Provide direct methods for linear systems of equations and Fourier transformations
    • SUNDIALS-hypre - Deliver adaptive time-stepping methods for dynamical systems and solvers
    • CLOVER - Develop scalable, portable numerical algorithms to facilitate efficient simulations
    • ALExa - Provide technologies for passing data among grids, computing surrogates, and accessing mathematical libraries from Fortran
  • Programming Models & Runtimes
    • Exascale MPI/MPICH - Enhance the MPI standard and the MPICH implementation of MPI for exascale (a minimal sketch of the MPI model follows this list)
    • Legion - Provides a data-centric programming system that allows scientists to describe the properties of their program data and dependencies, along with a runtime that extracts tasks and executes them using knowledge of the exascale systems to improve performance
    • PaRSEC - Supports the development of domain-specific languages and tools to simplify and improve the productivity of scientists when using a task-based system and provides a low-level runtime
    • Pagoda: UPC++/GASNet - Develop/enhance a Partitioned Global Address Space (PGAS) programming model
    • SICM - Addresses the emerging complexity of exascale memory hierarchies by providing a portable, simplified interface to complex memory
    • OMPI-X - Enhance the MPI standard and the Open MPI implementation of MPI for exascale
    • Kokkos/RAJA - Develop abstractions for node-level performance portability
    • Argo - Optimize existing low-level system software components to improve performance and scalability and improve functionality of exascale applications and runtime systems
  • Software Ecosystem and Delivery
    • E4S & SDK Efforts - The large number of software technologies being delivered to the application developers poses challenges, especially if the application needs to use more than one technology at the same time. The Software Development Kit (SDK) efforts identify meaningful aggregations of products within the programming models and runtimes, development tools, and data and visualization technical areas, with the goal of increasing the interoperability, availability and quality.
  • NNSA Software
    • LANL (Los Alamos National Laboratory) NNSA - LANL’s NNSA/ATDM software technology efforts include Legion (PM/R), LLVM (Tools), Cinema (Data/Vis), and BEE (Ecosystem)
    • LLNL (Lawrence Livermore National Laboratory) NNSA - LLNL’s NNSA/ATDM software technology efforts include Spack, Flux (Ecosystem), RAJA, Umpire (PMR), Debugging @ Scale, Flux/Power (Tools), and MFEM (Math Libs)
    • SNL (Sandia National Laboratories) NNSA - SNL’s NNSA/ATDM software technology efforts include Kokkos (PM/R) and Kokkos Kernels (Math Libs)
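Since several of the items above (Exascale MPI/MPICH, OMPI-X) revolve around MPI, here is a minimal sketch of that message-passing model using the mpi4py bindings; this is an illustration of the programming model only, not ECP code, and the file name demo.py is arbitrary. Run with: mpiexec -n 4 python demo.py

from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()    # this process's ID within the communicator
size = comm.Get_size()    # total number of cooperating processes

# Each rank computes a partial result on its own slice of the problem...
local_sum = sum(range(rank * 1000, (rank + 1) * 1000))

# ...and the partial results are combined with a collective reduction.
total = comm.reduce(local_sum, op=MPI.SUM, root=0)

if rank == 0:
    print(f"{size} ranks computed a combined total of {total}")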
Posted on Reply
#21
DeathtoGnomes
Vayra86: Calculating how long this computer needs to run to burn enough fossil fuels to kill the planet

Result: please build a bigger one of me.
Slartibartfast asks, "How Big?"

According to the Church of Last Thursdayism, we have enough fossil fuels to last until midnight.
Posted on Reply