
NVIDIA A100 Ampere GPU Benchmarked on MLPerf

AleksandarK
News Editor
When NVIDIA announced its Ampere lineup of graphics cards, the A100 GPU was there to represent the high-performance end of the lineup. The GPU is optimized for heavy compute workloads as well as machine learning and AI tasks. Today, NVIDIA submitted results for the A100 GPU to the MLPerf database. What is MLPerf and why does it matter, you might ask? MLPerf is a system benchmark designed to test a system's capability for machine learning tasks and to enable comparability between systems. The A100 GPU was benchmarked on the latest version of the suite, 0.7.
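For context, MLPerf's training benchmarks don't measure raw throughput; they measure wall-clock time to train a reference model to a fixed quality target. A minimal sketch of that idea in Python (the model and validation below are toy stand-ins, not actual MLPerf harness code):

```python
import random
import time

TARGET_QUALITY = 0.759  # MLPerf's ResNet-50 target: 75.9% top-1 accuracy

def train_step() -> None:
    """Stand-in for one optimizer step on the real workload."""
    time.sleep(0.001)

def validate(step: int) -> float:
    """Stand-in for a full validation pass: returns a quality metric
    that improves (noisily) as training progresses."""
    return min(0.999, step / 1000.0) + random.uniform(-0.01, 0.01)

start = time.perf_counter()
step = 0
while True:
    train_step()
    step += 1
    # MLPerf-style runs validate periodically and stop at the quality target.
    if step % 100 == 0 and validate(step) >= TARGET_QUALITY:
        break

elapsed = time.perf_counter() - start
print(f"Reached {TARGET_QUALITY:.1%} quality in {step} steps, {elapsed:.2f} s wall clock")
```

The score that gets compared across systems is that elapsed time, which is why both hardware and software versions show up in the result tables.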

The baseline for the results was the previous-generation king, the V100 Volta GPU. The new A100 GPU is on average 1.5 to 2.5 times faster than the V100. So far, the A100-based system beats every offering available. It is worth pointing out, however, that not all competing systems have been submitted; for now, the A100 GPU is the fastest.
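An "average" speedup over a suite like this is conventionally summarized as the geometric mean of the per-benchmark ratios, so no single workload dominates the headline number. A quick sketch with made-up placeholder timings (not the actual MLPerf v0.7 submissions):

```python
from math import prod

# Hypothetical per-benchmark training times in minutes (placeholder values,
# NOT the real MLPerf v0.7 numbers) for a V100 system and an A100 system.
v100_minutes = {"resnet": 120.0, "bert": 200.0, "dlrm": 30.0, "ssd": 60.0}
a100_minutes = {"resnet": 60.0, "bert": 80.0, "dlrm": 20.0, "ssd": 35.0}

# Per-benchmark speedup = old time / new time.
speedups = {k: v100_minutes[k] / a100_minutes[k] for k in v100_minutes}

# Geometric mean: the standard way to average ratios across workloads.
geomean = prod(speedups.values()) ** (1.0 / len(speedups))

for name, s in sorted(speedups.items()):
    print(f"{name:8s} {s:.2f}x")
print(f"geomean  {geomean:.2f}x")
```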


The performance results follow:

[MLPerf 0.7 results chart]
View at TechPowerUp Main Site
 
Getting a strong 10 gigarays vibe from these numbers. Can anyone tell what we're really looking at? I also see different versions of software compared along with hardware. What does it compare to anyway?
 
Getting a strong 10 gigarays vibe from these numbers. Can anyone tell what we're really looking at? I also see different versions of software compared along with hardware. What does it compare to anyway?
Ampere is faster than Volta, hello.
 
Ampere is faster than Volta, hello.

Hi. We got that, and my question is how much faster it really is, given all the variables.
 
Hi. We got that, and my question is how much faster it really is, given all the variables.
Depending on the type of application, it seems to be anywhere between 20% faster and more than double.
It's a bit hard to narrow down, since software is still mostly immature on Ampere. My guess is that as the product becomes more widely available, its results and utilization will improve, much like Turing's did.
 
Getting a strong 10 gigarays vibe from these numbers. Can anyone tell what we're really looking at? I also see different versions of software compared along with hardware. What does it compare to anyway?

MLPerf is largely open, so you can see for yourself what workloads are being run, at least the reference implementations (published in the mlperf/training repository on GitHub).

 
Hi. We got that, and my question is how much faster it really is, given all the variables.

Generic shader performance hasn't increased that much, if that's what you are wondering.
 