
NVIDIA Announces the A100 80GB GPU for AI Supercomputing

btarunr

Editor & Senior Moderator
NVIDIA today unveiled the NVIDIA A100 80 GB GPU—the latest innovation powering the NVIDIA HGX AI supercomputing platform—with twice the memory of its predecessor, providing researchers and engineers unprecedented speed and performance to unlock the next wave of AI and scientific breakthroughs. The new A100 with HBM2e technology doubles the A100 40 GB GPU's high-bandwidth memory to 80 GB and delivers over 2 terabytes per second of memory bandwidth. This allows data to be fed quickly to the A100, the world's fastest data center GPU, enabling researchers to further accelerate their applications and take on even larger models and datasets.
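As a back-of-envelope check of the bandwidth figure, the following sketch assumes five active HBM2e stacks with 1024-bit interfaces running at 3.2 Gbit/s per pin; this configuration is an assumption (widely reported for this part) and is not stated in the article:

```python
# Back-of-envelope check of the A100 80 GB memory-bandwidth claim.
# Assumed configuration (not stated in the article): 5 active HBM2e
# stacks, each with a 1024-bit interface running at 3.2 Gbit/s per pin.
stacks = 5
bus_width_bits = 1024   # interface width per stack
pin_rate_gbps = 3.2     # Gbit/s per pin

bandwidth_gbs = stacks * bus_width_bits * pin_rate_gbps / 8  # bits -> bytes
print(f"{bandwidth_gbs:.0f} GB/s")  # 2048 GB/s, i.e. just over 2 TB/s
```

Which lands right at the "over 2 terabytes per second" figure quoted above.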

"Achieving state-of-the-art results in HPC and AI research requires building the biggest models, but these demand more memory capacity and bandwidth than ever before," said Bryan Catanzaro, vice president of applied deep learning research at NVIDIA. "The A100 80 GB GPU provides double the memory of its predecessor, which was introduced just six months ago, and breaks the 2 TB per second barrier, enabling researchers to tackle the world's most important scientific and big data challenges."



The NVIDIA A100 80 GB GPU is available in NVIDIA DGX A100 and NVIDIA DGX Station A100 systems, also announced today and expected to ship this quarter.

Leading systems providers Atos, Dell Technologies, Fujitsu, GIGABYTE, Hewlett Packard Enterprise, Inspur, Lenovo, Quanta and Supermicro are expected to begin offering systems built using HGX A100 integrated baseboards in four- or eight-GPU configurations featuring A100 80 GB in the first half of 2021.

Fueling Data-Hungry Workloads
Building on the diverse capabilities of the A100 40 GB, the 80 GB version is ideal for a wide range of applications with enormous data memory requirements.

For AI training, recommender system models like DLRM have massive tables representing billions of users and billions of products. A100 80 GB delivers up to a 3x speedup, so businesses can quickly retrain these models to deliver highly accurate recommendations.
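To see why capacity matters for DLRM-style models, here is an illustrative sizing calculation for a single embedding table; the row count and embedding width are hypothetical, not figures from the article:

```python
# Illustrative DLRM-style embedding-table sizing. The row count and
# embedding width below are hypothetical, not figures from the article.
rows = 100_000_000   # entries for one categorical feature
dim = 128            # embedding width
bytes_per_value = 4  # FP32

table_gb = rows * dim * bytes_per_value / 1e9
print(f"{table_gb:.1f} GB for one table")  # 51.2 GB
# A handful of such tables already exceeds 40 GB, which is where the
# doubled 80 GB capacity pays off.
```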

The A100 80 GB also enables training of the largest models, such as GPT-2, a natural language processing model with superhuman generative text capability, with more parameters fitting within a single HGX-powered server. This eliminates the need for data- or model-parallel architectures that can be time-consuming to implement and slow to run across multiple nodes.
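A rough sketch of why such a model fits in a single GPU's memory; the 16-bytes-per-parameter rule of thumb (FP32 weights, gradients, and two Adam moment buffers, activations excluded) is an assumption, not a figure from the article:

```python
# Rough training-memory estimate under a common rule of thumb:
# FP32 weights + gradients + two Adam moment buffers ~= 16 bytes per
# parameter (activations excluded). This is an assumption for
# illustration, not a figure from the article.
def training_mem_gb(params, bytes_per_param=16):
    return params * bytes_per_param / 1e9

gpt2_xl_params = 1.5e9  # GPT-2 XL parameter count
print(f"{training_mem_gb(gpt2_xl_params):.0f} GB")  # 24 GB
```

Under these assumptions the model's training state fits comfortably in 80 GB on a single device, with room left for activations.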

With its multi-instance GPU (MIG) technology, A100 can be partitioned into up to seven GPU instances, each with 10 GB of memory. This provides secure hardware isolation and maximizes GPU utilization for a variety of smaller workloads. For AI inferencing of automatic speech recognition models like RNN-T, a single A100 80 GB MIG instance can service much larger batch sizes, delivering 1.25x higher inference throughput in production.
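A minimal sketch of how such a seven-way partition might be created with NVIDIA's `nvidia-smi` MIG commands; exact profile names depend on the driver version, so treat this as an illustration rather than a verified recipe:

```shell
# Enable MIG mode on GPU 0 (requires root and an idle GPU, then a reset).
sudo nvidia-smi -i 0 -mig 1

# Create seven 1g.10gb GPU instances and their compute instances (-C).
sudo nvidia-smi mig -i 0 \
    -cgi 1g.10gb,1g.10gb,1g.10gb,1g.10gb,1g.10gb,1g.10gb,1g.10gb -C

# List the resulting MIG devices.
nvidia-smi -L
```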

On a big data analytics benchmark for retail in the terabyte-size range, the A100 80 GB boosts performance up to 2x, making it an ideal platform for delivering rapid insights on the largest of datasets. Businesses can make key decisions in real time as data is updated dynamically.

For scientific applications, such as weather forecasting and quantum chemistry, the A100 80 GB can deliver massive acceleration. Quantum Espresso, a materials simulation, achieved throughput gains of nearly 2x with a single node of A100 80 GB.

"Speedy and ample memory bandwidth and capacity are vital to realizing high performance in supercomputing applications," said Satoshi Matsuoka, director at RIKEN Center for Computational Science. "The NVIDIA A100 with 80 GB of HBM2E GPU memory, providing the world's fastest 2 TB per second of bandwidth, will help deliver a big boost in application performance."

Key Features of A100 80 GB
The A100 80 GB includes the many groundbreaking features of the NVIDIA Ampere architecture:
  • Third-Generation Tensor Cores: Provide up to 20x the AI throughput of the previous Volta generation with the new TF32 format, as well as 2.5x FP64 for HPC, 20x INT8 for AI inference, and support for the BF16 data format.
  • Larger, Faster HBM2e GPU Memory: Doubles the memory capacity and is the first in the industry to offer more than 2 TB per second of memory bandwidth.
  • MIG Technology: Doubles the memory per isolated instance, providing up to seven MIGs with 10 GB each.
  • Structural Sparsity: Delivers up to a 2x speedup when inferencing sparse models.
  • Third-Generation NVLink and NVSwitch: Provide twice the GPU-to-GPU bandwidth of the previous-generation interconnect technology, accelerating data transfers to the GPU to 600 gigabytes per second for data-intensive workloads.
NVIDIA HGX AI Supercomputing Platform
The A100 80 GB GPU is a key element in the NVIDIA HGX AI supercomputing platform, which brings together the full power of NVIDIA GPUs, NVIDIA NVLink, NVIDIA InfiniBand networking and a fully optimized NVIDIA AI and HPC software stack to provide the highest application performance. It enables researchers and scientists to combine HPC, data analytics and deep learning computing methods to advance scientific progress.

Still 5 of the 6 memory controllers and 108 of 128 SMs, iirc. Still neat that someone got HBM at 3.2Gbps per pin!

Theoretically, a fully enabled SKU would be 96GB of HBM2e at ~2.4TB/s and assuming the same clocks, ~23 TFLOPs FP32 and ~11.4 TFLOPs FP64... seemingly familiar numbers after today's conference announcements, lol.
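The arithmetic behind the estimate above can be checked directly by scaling the shipping part's published peaks (19.5 TFLOPs FP32, 9.7 TFLOPs non-tensor FP64, ~2 TB/s) by the SM and memory-stack ratios:

```python
# Sanity-check of the "fully enabled" estimate, scaling the shipping
# A100 80 GB figures (108 of 128 SMs, 5 of 6 HBM2e stacks) to a full die
# at the same clocks.
sm_scale = 128 / 108
mem_scale = 6 / 5

fp32_tflops = 19.5 * sm_scale  # A100 peak FP32 (non-tensor)
fp64_tflops = 9.7 * sm_scale   # A100 peak FP64 (non-tensor)
bandwidth_tbs = 2.0 * mem_scale
capacity_gb = 80 * mem_scale

print(f"{fp32_tflops:.1f} TFLOPs FP32")  # 23.1
print(f"{fp64_tflops:.1f} TFLOPs FP64")  # 11.5
print(f"{bandwidth_tbs:.1f} TB/s, {capacity_gb:.0f} GB")  # 2.4 TB/s, 96 GB
```

These match the ~23/~11.4 TFLOPs and 96 GB / 2.4 TB/s figures quoted above.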
 
Yes. That will be the CEO edition that will be on Jensen's desk lol.
 