Friday, January 3rd 2020

NVIDIA's Next-Generation Ampere GPUs to be 50% Faster than Turing at Half the Power

As we approach the release of NVIDIA's Ampere GPUs, rumored to launch in the second half of this year, more information about the upcoming graphics cards is surfacing. Today, according to a report by the Taipei Times, NVIDIA's next generation of graphics cards based on the "Ampere" architecture is rumored to deliver as much as a 50% performance uplift compared to the previous generation of Turing GPUs, while consuming half the power.

Built on Samsung's 7 nm manufacturing node, Ampere is poised to be the new king among GPUs. The rumored 50% performance increase is not impossible, given the improvements the new 7 nm manufacturing node brings. From the density gains alone, NVIDIA could extract at least 50% extra performance out of the smaller node. Performance should increase even further, because Ampere also brings a new architecture. By combining a new manufacturing node with a new microarchitecture, Ampere could cut power consumption in half, making for a very efficient GPU solution. We still don't know whether the performance increase will come mostly in ray-tracing applications, or whether NVIDIA will focus on general graphics performance.
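As a back-of-envelope illustration of the density argument, consider the transistor budget at a fixed die area. All figures below are assumed, ballpark logic-density values for illustration only, not vendor-confirmed numbers:

```python
# Back-of-envelope sketch: how a node shrink alone grows the transistor
# budget at a fixed die area. Density figures are rough, assumed ballpark
# values for illustration, not vendor-confirmed numbers.

density_12nm = 25.0      # MTr/mm^2, approx. TSMC 12 nm logic density (assumed)
density_7nm_euv = 95.0   # MTr/mm^2, approx. Samsung 7 nm EUV density (assumed)

die_area = 545.0         # mm^2, a large Turing-class die size (assumed)

transistors_old = die_area * density_12nm     # MTr budget on 12 nm
transistors_new = die_area * density_7nm_euv  # MTr budget on 7 nm EUV

scaling = transistors_new / transistors_old
print(f"Transistor budget at the same die area grows ~{scaling:.1f}x")
```

Even a fraction of that extra budget spent on more shader units would cover a +50% throughput target; the open question is power, not area.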
Source: Taipei Times

186 Comments on NVIDIA's Next-Generation Ampere GPUs to be 50% Faster than Turing at Half the Power

#27
notb
HTC
Something doesn't sound right.

When moving to a smaller node, you either get X% higher performance @ the same power or same performance using Y% less power, but not both.
And this is a result of which fundamental law of physics?

There's no reason why Nvidia couldn't offer the traditional +30% performance with a new generation based on architecture improvements and optimizations alone.
Sometimes there's also some power efficiency bonus (despite the same node).
7nm means more cores.

Going straight to 7nm EUV means even higher efficiency gain than AMD got with Polaris -> Navi.
Posted on Reply
#28
HTC
64K
I believe it's possible to gain 50% in performance. Going from the 12nm process to the 7nm process should increase the efficiency and allow for more cores/faster clocks for the same wattage as Turing uses.

I doubt that Nvidia will lower prices though. They won't have a reason to unless Intel comes out with something really good this year for competition. Also, if there are shortages for any reason, then we can expect retailer gouging, which will make prices higher than they should be for a while after release.
But they are claiming 50% performance increase WHILE using 50% less power. It's perfectly believable to be either one ... but both @ the same time? Seriously doubt it.

EDIT

notb
And this is a result of which fundamental law of physics?

There's no reason why Nvidia couldn't offer the traditional +30% performance with a new generation based on architecture improvements and optimizations alone.
Sometimes there's also some power efficiency bonus (despite the same node).
7nm means more cores.


Going straight to 7nm EUV means even higher efficiency gain than AMD got with Polaris -> Navi.
But they're not claiming 30%, are they? They could claim ... say ... 20% more performance @ 30% less power and it would be much more believable but 50% more performance @ 50% less power?

Seriously SERIOUSLY doubt this.
Posted on Reply
#29
64K
True. That's why I only focused on the possibility of a 50% increase in performance for the same wattage as Turing. Having a 50% increase in performance and 50% less power used at the same time isn't possible imo.
Posted on Reply
#30
fynxer
Why do you think nVidia wants to "surprise" all their customers at the absolute last second...

They don't want to cause panic too early before the next-gen release, because they know the 2080 Ti will drop to 1/3 of its value instantly when they make their announcement.

Don't give them the satisfaction... Sell your 2080Ti NOW!!!
Posted on Reply
#31
renz496
low
AMD: release big navi
Nvidia: what shall we do? Lets spread news about the next gen to prevent ppl buying big navi.
I think we hear this every time a new GPU is about to come out: the competitor starts spreading rumors to stop people from buying the competing product. Sometimes I think companies like AMD and Nvidia don't really need to do this, because their fanboys will do the job for them for free. I still remember when some people said you should hold back from getting a GTX 1080 because Vega was coming out in October 2016. They said there was a shortage of the 1080 because of demand anyway, so you might as well wait until October 2016.
Posted on Reply
#32
Xex360
Finally we'll get the promised 2080ti, not the garbage we have now.
Posted on Reply
#33
ZeroFM
nVidia is the new Intel, YOU WILL SEE. New GPUs will be no cheaper and only a tiny bit faster, because there is no competition.
Posted on Reply
#34
Fluffmeister
Talk of high-end Navi is rumor, not from AMD, so don't blame them!

Nvidia.... It's all BS Fu Nvidia!

Man this forum.
Posted on Reply
#35
Xaled
Why is there no "UP TO" in the title here while there was in the Intel news?
I don't like Intel, but I hate double standards.
Posted on Reply
#36
ppn
If Nvidia skips 10nm and 7nm and goes directly to 7+/6nm, density increases from ~30 MTr/mm² on 12/14/16nm to ~77 MTr/mm² on 7+. A 2080 Ti shrinks from 770 mm² to ~300 mm²; drop the 384-bit bus to 256-bit with a 16 Gb buffer and it's 286 mm² at 4096 CUDA cores, compensating with GPU clock speed. A bigger chip with 6144 CUDA cores comes in around 429 mm², so +50% perf is perfectly doable.
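A quick sanity check of this arithmetic, using only the densities and areas quoted above (real dies will of course differ):

```python
# Sanity check of the die-size arithmetic, using only the numbers quoted
# in the comment above (actual dies will differ).

d_old = 30.0  # MTr/mm^2 on 12/14/16 nm, as quoted
d_new = 77.0  # MTr/mm^2 on 7+, as quoted

# A 770 mm^2 die shrunk purely by the density ratio:
shrunk = 770 * d_old / d_new
print(f"shrunk die: {shrunk:.0f} mm^2")   # ~300 mm^2, matching the comment

# Scaling the 4096-CUDA, 286 mm^2 variant up to 6144 CUDA cores:
bigger = 286 * 6144 / 4096
print(f"bigger die: {bigger:.0f} mm^2")   # 429 mm^2, matching the comment
```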
Posted on Reply
#37
Valantar
Xaled
Why is there no "UP TO" in the title here while there was in the Intel news?
I don't like Intel, but I hate double standards.
Because the source (apparently) doesn't say "up to". They're just reporting on someone else's report, so conveying this accurately is a must, no matter if the information itself is accurate. If this was data TPU had sourced themselves, your complaint would be entirely valid, but in this case they're just the messenger and do not deserve to be shot.
Posted on Reply
#38
Vya Domus
64K
I believe it's possible to gain 50% in performance. Going from the 12nm process to the 7nm process should increase the efficiency and allow for more cores/faster clocks for the same wattage as Turing uses.
You're not going to get 50% higher clock speeds on 7nm; it's impossible. The only thing you'll get is more cores, but the power won't change much.
Posted on Reply
#39
64K
Vya Domus
You're not going to get 50% higher clock speeds on 7nm; it's impossible. The only thing you'll get is more cores, but the power won't change much.
I didn't say 50% faster clocks. I said a combination of faster clocks/more cores should allow for a 50% increase in performance for the same watts used as Turing.
Posted on Reply
#40
notb
HTC
But they're not claiming 30%, are they? They could claim ... say ... 20% more performance @ 30% less power and it would be much more believable but 50% more performance @ 50% less power?
But based on what we've seen with Zen and Navi, would you say halving power draw is believable? Based on new node alone.
Remember this is 14nm -> 7nm EUV.
So it's analogous to what AMD has already shown with Zen2/Navi (DUV) + what they promise for this year.

I think it's quite feasible.

And now we get to the architectural improvements.
We know Nvidia can do +30% with each gen (every ~2 years). There's no reason why this wouldn't be true with Ampere.

So we now have something like +30% performance and -50% power draw. Not quite the +50%/-50%, but still not bad.

Except, as someone said above, the +50% may be taking into account a large boost in RTRT.
Of course RTRT is still a source of arguments today, but a few years from now we'll just call it "gaming performance" - just like we stopped splitting 3D and 2D in the late 90s.
Posted on Reply
#41
Xaled
Valantar
Because the source (apparently) doesn't say "up to". They're just reporting on someone else's report, so conveying this accurately is a must, no matter if the information itself is accurate. If this was data TPU had sourced themselves, your complaint would be entirely valid, but in this case they're just the messenger and do not deserve to be shot.


Now, double standards or not?
Posted on Reply
#42
Naito
Regardless of where the 50% uplift applies, I'm just glad I didn't buy into the RTX 2000 series
Posted on Reply
#43
P4-630
Ah, Cyberpunk 2077: release date April 16, 2020, recommended system requirements RTX 30xx.....
Posted on Reply
#44
Valantar
notb
And now we get to the architectural improvements.
We know Nvidia can do +30% with each gen (every ~2 years). There's no reason why this wouldn't be true with Ampere.
Turing was nowhere near 30% perf/SM over Pascal. More like 10%. Any further gains came from more SMs and higher clocks.

notb
But based on what we've seen with Zen and Navi, would you say halving power draw is believable? Based on new node alone.
Remember this is 14nm -> 7nm EUV.
So it's analogous to what AMD has already shown with Zen2/Navi (DUV) + what they promise for this year.
That sounds rather unlikely to me, though I'm no expert by any stretch of the imagination. The jump from 28nm to 16nm did not halve power for Nvidia, so going from 12nm to 7nm EUV doesn't sound likely to do so either. Beyond that they're moving between foundries (at least for some GPUs), so comparisons could be difficult.

It's not analogous to AMD's move from Vega on GloFo 12nm to Navi on TSMC 7nm either, as that is a completely new architecture with very significant efficiency improvements. You'd be better off looking at the Radeon Vega 64 vs. the Radeon VII, as those are very similar in design but on a new node with slightly bumped clocks, and that improved perf/W by < 30%.

Xaled


Now, double standards or not?
Not really, no. The first is reporting on IPC, which is (at least supposed to be) an average number based on a number of tests. An "up to" number in this case could thus be 18%, 30% or 45% - it's impossible to know, as we only know the average. Look at SPEC testing, for example - gen-to-gen IPC (clock equalized) testing in that normally results in a wide array of performance differences. The other reports on one specific single data point with little context. Is this a high number, an average, or a low? We have no idea, but given that it's an officially released number from the company itself, it's logical to infer it to be above average to present the product in the best light.
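The difference can be sketched with a toy example; the per-benchmark speedups below are invented, hypothetical numbers:

```python
# Toy illustration of "average" vs "up to" claims from the same data.
# The per-benchmark speedups are invented, hypothetical numbers.

speedups = [1.08, 1.18, 1.22, 1.30, 1.45]

average_gain = sum(speedups) / len(speedups)  # what an IPC-style average reports
up_to_gain = max(speedups)                    # what an "up to" headline reports

print(f"average: +{(average_gain - 1) * 100:.0f}%")  # the modest figure
print(f"up to:   +{(up_to_gain - 1) * 100:.0f}%")    # the marketing figure
```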
Posted on Reply
#45
Xaled
Valantar
Turing was nowhere near 30% perf/SM over Pascal. More like 10%. Any further gains came from more SMs and higher clocks.


That sounds rather unlikely to me, though I'm no expert by any stretch of the imagination. The jump from 28nm to 16nm did not halve power for Nvidia, so going from 12nm to 7nm EUV doesn't sound likely to do so either. Beyond that they're moving between foundries (at least for some GPUs), so comparisons could be difficult.

It's not analogous to AMD's move from Vega on GloFo 12nm to Navi on TSMC 7nm either, as that is a completely new architecture with very significant efficiency improvements. You'd be better off looking at the Radeon Vega 64 vs. the Radeon VII, as those are very similar in design but on a new node with slightly bumped clocks, and that improved perf/W by < 30%.


Not really, no. The first is reporting on IPC, which is (at least supposed to be) an average number based on a number of tests. An "up to" number in this case could thus be 18%, 30% or 45% - it's impossible to know, as we only know the average. Look at SPEC testing, for example - gen-to-gen IPC (clock equalized) testing in that normally results in a wide array of performance differences. The other reports on one specific single data point with little context. Is this a high number, an average, or a low? We have no idea, but given that it's an officially released number from the company itself, it's logical to infer it to be above average to present the product in the best light.
Then the title should've included words like "claims" or "says" (Nvidia says, or Nvidia claims) at worst, just like in the Intel news.
Saying that IT WILL BE 50% FASTER this way is just wrong and biased.
Posted on Reply
#46
cucker tarlson
It will be very good, but too expensive for the first couple of quarters. 2020 may bring some competition and price cuts.
Posted on Reply
#47
kings
Personally, I don't think Nvidia will be very concerned with power consumption, because we've seen that AMD, even with a new architecture and 7nm, is not brilliant in that regard.

I think they are more likely to keep the TDPs the same or slightly lower and pull maximum performance out of that envelope. About 250 W will probably remain normal for Nvidia's top-end card.
Posted on Reply
#48
Tsukiyomi91
If Ampere really has a 50% gain over Turing in all benchmarks/real-world use while using less power, that's a good thing. The problem is that many bought the "refreshed" RTX 20 Series Super cards & GTX 16 Series cards... so those folks might be at a loss-ish? That said, I wonder what the naming convention will be: RTX 22xx, or RTX 3xxx since it's entirely new silicon? 2020 & 2021 will be interesting years.
Posted on Reply
#49
wolf
Performance Enthusiast
As always, take the super-early info with an enormous pinch of salt. 50% is a silly large number, possibly pulled from thin air to keep money in wallets instead of being spent on a competing product today, or it might represent a long-shot best-case scenario, like RTX or a new-gen-specific feature, as VRS was for Turing over Pascal.

What is almost certain is that the cards, as a product stack, should reasonably outperform their Turing counterparts; they should also present a reasonable performance-per-watt improvement and a reasonable hardware ray-tracing improvement. Prices... well, who knows; if AMD can't keep up with their upper-tier products, expect much of the same.

I won't count on anything solid whatsoever until W1zzard (and other trusted sites) publish a review of an actual product. Hopefully, in any case, 2020 brings a compelling upgrade for GTX 1080 (which I own) / Vega 56/64 owners that isn't the halo product and has a competitive price-to-performance ratio.
Posted on Reply