
AMD Delivers Leadership Portfolio of Data Center AI Solutions with AMD Instinct MI300 Series

GFreeman

News Editor
Staff member
Joined
Mar 6, 2023
Messages
1,483 (2.45/day)
Today, AMD announced the availability of the AMD Instinct MI300X accelerators - with industry leading memory bandwidth for generative AI and leadership performance for large language model (LLM) training and inferencing - as well as the AMD Instinct MI300A accelerated processing unit (APU) - combining the latest AMD CDNA 3 architecture and "Zen 4" CPUs to deliver breakthrough performance for HPC and AI workloads.

"AMD Instinct MI300 Series accelerators are designed with our most advanced technologies, delivering leadership performance, and will be in large scale cloud and enterprise deployments," said Victor Peng, president, AMD. "By leveraging our leadership hardware, software and open ecosystem approach, cloud providers, OEMs and ODMs are bringing to market technologies that empower enterprises to adopt and deploy AI-powered solutions."



Customers leveraging the latest AMD Instinct accelerator portfolio include Microsoft, which recently announced the new Azure ND MI300x v5 Virtual Machine (VM) series, optimized for AI workloads and powered by AMD Instinct MI300X accelerators. Additionally, El Capitan - a supercomputer powered by AMD Instinct MI300A APUs and housed at Lawrence Livermore National Laboratory - is expected to be the second AMD-powered exascale-class supercomputer and to deliver more than two exaflops of double-precision performance when fully deployed. Oracle Cloud Infrastructure plans to add AMD Instinct MI300X-based bare metal instances to the company's high-performance accelerated computing instances for AI. MI300X-based instances are planned to support OCI Supercluster with ultrafast RDMA networking.

Several major OEMs also showcased accelerated computing systems, in tandem with the AMD Advancing AI event. Dell showcased the Dell PowerEdge XE9680 server featuring eight AMD Instinct MI300 Series accelerators and the new Dell Validated Design for Generative AI with AMD ROCm-powered AI frameworks. HPE recently announced the HPE Cray Supercomputing EX255a, the first supercomputing accelerator blade powered by AMD Instinct MI300A APUs, which will become available in early 2024. Lenovo announced its design support for the new AMD Instinct MI300 Series accelerators with planned availability in the first half of 2024. Supermicro announced new additions to its H13 generation of accelerated servers powered by 4th Gen AMD EPYC CPUs and AMD Instinct MI300 Series accelerators.

AMD Instinct MI300X
AMD Instinct MI300X accelerators are powered by the new AMD CDNA 3 architecture. Compared to the previous-generation AMD Instinct MI250X accelerators, MI300X delivers nearly 40% more compute units, 1.5x the memory capacity and 1.7x the peak theoretical memory bandwidth, as well as support for new math formats such as FP8 and sparsity, all geared toward AI and HPC workloads.
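As a rough sanity check (a sketch using only the ratios quoted above, which are AMD's rounded marketing figures), the generational multipliers can be inverted to estimate the MI250X baseline:

```python
# MI300X headline specs from the announcement
mi300x_memory_gb = 192       # HBM3 capacity
mi300x_bandwidth_tbs = 5.3   # peak theoretical memory bandwidth

# Invert the quoted generational gains to estimate the MI250X baseline
mi250x_memory_gb = mi300x_memory_gb / 1.5          # 1.5x memory capacity gain
mi250x_bandwidth_tbs = mi300x_bandwidth_tbs / 1.7  # 1.7x bandwidth gain

print(f"Implied MI250X memory:    ~{mi250x_memory_gb:.0f} GB")
print(f"Implied MI250X bandwidth: ~{mi250x_bandwidth_tbs:.1f} TB/s")
```

The implied ~128 GB lines up with the MI250X's published HBM2e capacity, a useful consistency check on the quoted ratios.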

Today's LLMs continue to increase in size and complexity, requiring massive amounts of memory and compute. AMD Instinct MI300X accelerators feature a best-in-class 192 GB of HBM3 memory capacity as well as 5.3 TB/s peak memory bandwidth to deliver the performance needed for increasingly demanding AI workloads. The AMD Instinct Platform is a leadership generative AI platform built on an industry-standard OCP design with eight MI300X accelerators, offering an industry-leading 1.5 TB of HBM3 memory capacity. The platform's industry-standard design allows OEM partners to design MI300X accelerators into existing AI offerings, simplifying deployment and accelerating adoption of AMD Instinct accelerator-based servers.
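The platform-level figure follows directly from the per-accelerator capacity; a trivial check:

```python
accelerators = 8               # MI300X accelerators per AMD Instinct Platform
hbm3_per_accelerator_gb = 192  # HBM3 per accelerator

platform_gb = accelerators * hbm3_per_accelerator_gb  # total pooled HBM3
platform_tb = platform_gb / 1024

print(f"{platform_gb} GB total, i.e. {platform_tb} TB")  # 1536 GB total, i.e. 1.5 TB
```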

Compared to the Nvidia H100 HGX, the AMD Instinct Platform can offer a throughput increase of up to 1.6x when running inference on LLMs like BLOOM 176B, and is the only option on the market capable of running inference for a 70B-parameter model, like Llama 2, on a single MI300X accelerator, simplifying enterprise-class LLM deployments and enabling outstanding TCO.
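A back-of-envelope memory estimate shows why a 70B-parameter model can fit on one accelerator: at 16-bit precision each weight occupies two bytes (a simplified sketch that ignores KV cache and activation overhead):

```python
params = 70e9        # 70B parameters
bytes_per_param = 2  # FP16/BF16 weights

weights_gb = params * bytes_per_param / 1e9
hbm3_gb = 192        # MI300X HBM3 capacity

print(f"Weights need ~{weights_gb:.0f} GB; fits in {hbm3_gb} GB: {weights_gb < hbm3_gb}")
```

The ~140 GB of weights leaves headroom within 192 GB, whereas an 80 GB-class accelerator would have to split the model across devices.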

AMD Instinct MI300A
The AMD Instinct MI300A APUs, the world's first data center APU for HPC and AI, leverage 3D packaging and the 4th Gen AMD Infinity Architecture to deliver leadership performance on critical workloads sitting at the convergence of HPC and AI. MI300A APUs combine high-performance AMD CDNA 3 GPU cores, the latest AMD "Zen 4" x86-based CPU cores and 128 GB of next-generation HBM3 memory, to deliver ~1.9x the performance-per-watt on FP32 HPC and AI workloads, compared to previous gen AMD Instinct MI250X.

Energy efficiency is of utmost importance for the HPC and AI communities; however, these workloads are extremely data- and resource-intensive. AMD Instinct MI300A APUs benefit from integrating CPU and GPU cores on a single package, delivering a highly efficient platform while also providing the compute performance to accelerate training the latest AI models. AMD is setting the pace of innovation in energy efficiency with the company's 30x25 goal, aiming to deliver a 30x energy efficiency improvement in server processors and accelerators for AI training and HPC from 2020 to 2025.
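For context, a 30x improvement over the five years from 2020 to 2025 implies close to a doubling of efficiency every year:

```python
total_gain = 30  # the 30x25 target
years = 5        # 2020 through 2025

annual_gain = total_gain ** (1 / years)  # compound annual improvement
print(f"Implied annual efficiency gain: ~{annual_gain:.2f}x")  # ~1.97x per year
```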

The APU advantage means that AMD Instinct MI300A APUs feature unified memory and cache resources, giving customers an easily programmable GPU platform, high-performance compute, fast AI training and impressive energy efficiency to power the most demanding HPC and AI workloads.

ROCm Software and Ecosystem Partners
AMD announced the latest AMD ROCm 6 open software platform as well as the company's commitment to contribute state-of-the-art libraries to the open-source community, furthering the company's vision of open-source AI software development. ROCm 6 software represents a significant leap forward for AMD software tools, increasing AI acceleration performance by ~8x when running on MI300 Series accelerators in Llama 2 text generation compared to previous generation hardware and software. Additionally, ROCm 6 adds support for several new key features for generative AI including FlashAttention, HIPGraph and vLLM, among others. As such, AMD is uniquely positioned to leverage the most broadly used open-source AI software models, algorithms and frameworks - such as Hugging Face, PyTorch, TensorFlow and others - driving innovation, simplifying the deployment of AMD AI solutions and unlocking the true potential of generative AI.
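One reason the framework story matters: ROCm builds of PyTorch expose AMD GPUs through the same torch.cuda namespace that CUDA code uses, so much existing code runs without modification. A minimal device probe (a sketch, assuming a ROCm build of PyTorch is installed; it degrades gracefully otherwise) might look like:

```python
try:
    import torch
    # ROCm builds of PyTorch report AMD GPUs through the torch.cuda API;
    # torch.version.hip is a version string on ROCm builds, None on CUDA builds
    has_gpu = torch.cuda.is_available()
    backend = "ROCm/HIP" if torch.version.hip else ("CUDA" if has_gpu else "none")
except ImportError:
    has_gpu, backend = False, "torch not installed"

print(f"GPU available: {has_gpu} (backend: {backend})")
```

On an MI300-class node with ROCm 6 this would report the HIP backend; the same script runs unchanged on CUDA systems.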

AMD also continues to invest in software capabilities through the acquisitions of Nod.AI and Mipsology as well as through strategic ecosystem partnerships such as Lamini - running LLMs for enterprise customers - and MosaicML - leveraging AMD ROCm to enable LLM training on AMD Instinct accelerators with zero code changes.



View at TechPowerUp Main Site | Source
 
Joined
Jan 5, 2006
Messages
18,585 (2.70/day)
Nvidia already has most of it covered.....
 
Joined
Oct 6, 2021
Messages
1,605 (1.43/day)
Everything is better with images and numbers; that's what's missing.


[attached images]
 
Joined
Dec 12, 2016
Messages
1,750 (0.61/day)
An AI accelerator market of $400B by 2027

This market is wide open for the taking with no one having a significant share yet. Looking forward to seeing AMD and Nvidia fight it out. I’m cheering for AMD since they are going the open source route.
 
Joined
Aug 13, 2022
Messages
35 (0.04/day)
Don't know if they will even make a splash, considering customers have already ordered the H200 for shipment in 2024. Judging from the numbers both companies use for advertising, the H200 still has the upper hand in most tasks. Seems like AMD is a bit late to the market. Or maybe they will try to undercut NVIDIA's price; I have no idea.
 