
AMD Eyes Mid-November CDNA Debut with Instinct MI100, "World's Fastest FP64 Accelerator"

btarunr

Editor & Senior Moderator
AMD is eyeing a mid-November debut for its CDNA compute architecture with the Instinct MI100 compute accelerator card. CDNA is a fork of RDNA designed for headless GPU compute accelerators with large SIMD resources. An Aroged report, citing leaked AMD documents it dug up, pins the launch of the MI100 at November 16, 2020. The Instinct MI100 will eye a slice of the same machine-intelligence pie NVIDIA is seeking to dominate with its A100 Tensor Core compute accelerator.

It appears that the first MI100 cards will be built in the add-in-board form factor with PCI-Express 4.0 x16 interfaces, although older reports predict AMD will create a socketed variant with its Infinity Fabric interconnect for machines carrying larger numbers of these compute processors. In the leaked document, AMD claims that the Instinct MI100 is the "world's highest double-precision accelerator for machine learning, HPC, cloud compute, and rendering systems." This is an especially big claim given that the A100 Tensor Core features FP64 CUDA cores based on the "Ampere" architecture. Then again, given AMD's claim that the RDNA2 graphics architecture is clawing back performance against NVIDIA at the high end, the competitiveness of the Instinct MI100 against the A100 Tensor Core cannot be discounted.

Here's hoping they bring better software support for their open computing initiative this round. Hardware is only 50% of the solution, maybe even less. A robust, consistent, bug-free software development environment is key to success for compute-oriented chips like these.
 
AMD has been winning HPC contracts recently: Frontier, El Capitan, LUMI, etc. All of these are likely using the MI100 (or the GPU planned after it).

These supercomputer projects must have had a simulation system, or some other solid assurance, of CDNA's performance; otherwise, they wouldn't all have chosen AMD here. As such, I'm willing to bet this MI100 has to be pretty good.
 
Nvidia dominates here and dictates industry standards. I can't see AMD upsetting that.
 
As far as I know, Vega-based cards already had the upper hand in theoretical throughput. Unfortunately, even in compute, that does not always translate into faster results.
 
AMD has been winning HPC contracts recently: Frontier, El Capitan, LUMI, etc. All of these are likely using the MI100 (or the GPU planned after it).

These supercomputer projects must have had a simulation system, or some other solid assurance, of CDNA's performance; otherwise, they wouldn't all have chosen AMD here. As such, I'm willing to bet this MI100 has to be pretty good.

We still need to see how good they are in real-world applications. The Titan supercomputer ended up choosing AMD Bulldozer because its builders believed the CPU design would suit their purpose at the time, yet after Titan, pretty much every supercomputer using x86 processors ended up choosing Intel Xeon. Another example is Tahiti: after SANAM showed lower than 50% utilization, the majority of HPC clients ended up picking NVIDIA's GK210 over AMD's superior Hawaii in the 2013-2015 period.

Personally, I think some HPC clients are probably not too comfortable with NVIDIA's dominance of the accelerator market, hence they are giving AMD more of a chance this time. Plus, AMD has its own CPUs, so it can offer better integration between CPU and GPU. That is one of the reasons NVIDIA went after Mellanox, which is very popular among HPC clients for its interconnect tech.
 
Here's hoping they bring better software support for their open computing initiative this round. Hardware is only 50% of the solution, maybe even less. A robust, consistent, bug-free software development environment is key to success for compute-oriented chips like these.
First you need the hardware; only then can you focus on support and software for it.
 
OpenACC is Nvidia technology though.

OpenACC is a programming standard for parallel computing developed by Cray, CAPS, Nvidia and PGI. The standard is designed to simplify parallel programming of heterogeneous CPU/GPU systems.


Nah, it's an open standard agreed to by a consortium, same as OpenCAPI; Nvidia just happens to be part of it... no ownership over it.
 
Nah, it's an open standard agreed to by a consortium, same as OpenCAPI; Nvidia just happens to be part of it... no ownership over it.
Open standard or not, Nvidia being part of it makes it less suitable for other vendors.
 
Open standard or not, Nvidia being part of it makes it less suitable for other vendors.

OpenACC is basically being merged into OpenMP. All the features people want are in OpenMP 4.5, and are already supported by AMD ROCm: https://github.com/ROCm-Developer-Tools/aomp

There's nothing "unsuitable" about OpenACC, aside from the fact that OpenMP's version of it (4.5+) is more up to date. There's not much technical reason to support OpenACC anymore, since OpenMP 4.5 or 5.0 support gets you the same features.
 