
NVIDIA's Next-Generation Ampere GPUs to be 50% Faster than Turing at Half the Power

I bet that the "50% uplift" is in RTX ON performance while non RTX performance will remain the same in each segment.
That is the only plausible explanation for something like this, or more precisely: a 50% uplift in perf/W for RT operations. A 3× (i.e. +200%) general perf/W jump in a single generation, even with a full node jump, is completely unheard of. Not going to happen for ordinary rasterized graphics. Period.
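A quick sanity check of that arithmetic (the 1.5× and 0.5× figures are just the rumored claim, nothing more):

```python
# Rumored claim: 1.5x the performance at 0.5x the power of Turing.
perf_ratio = 1.5    # "50% faster"
power_ratio = 0.5   # "half the power"

# Perf/W scales as performance divided by power.
perf_per_watt = perf_ratio / power_ratio
print(perf_per_watt)              # 3.0  -> a 3x perf/W jump
print((perf_per_watt - 1) * 100)  # 200.0 -> i.e. a +200% increase
```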
 
Something doesn't sound right.

When moving to a smaller node, you either get X% higher performance @ the same power or same performance using Y% less power, but not both.
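For what it's worth, that's not a hard law: dynamic power scales roughly as P ≈ C·V²·f, and a shrink lowers both capacitance and voltage, leaving a budget you can split between clocks and power savings. A minimal sketch with made-up shrink numbers (the 30%/10% figures are purely illustrative):

```python
# Dynamic power roughly scales as P ~ C * V^2 * f (C: capacitance,
# V: voltage, f: frequency). All shrink figures below are illustrative.
def dynamic_power(c, v, f):
    return c * v**2 * f

baseline = dynamic_power(c=1.00, v=1.00, f=1.00)  # old node, normalized

# Hypothetical shrink: capacitance drops 30%, voltage drops 10%.
same_clocks = dynamic_power(c=0.70, v=0.90, f=1.00)
print(f"same clocks: {same_clocks / baseline:.0%} of old power")  # 57%

# Or hold power constant and spend the headroom on frequency instead:
f_new = baseline / (0.70 * 0.90**2)
print(f"same power: {f_new:.2f}x clocks")  # 1.76x
```

Real designs sit somewhere in between those two extremes, which is exactly why vendors can advertise both higher clocks and lower power at once, within limits.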
And this is a result of which fundamental law of physics?

There's no reason why Nvidia wouldn't offer the traditional +30% performance with a new generation just based on architecture improvements and optimizations.
Sometimes there's also some power efficiency bonus (despite the same node).
7nm means more cores.

Going straight to 7nm EUV means even higher efficiency gain than AMD got with Polaris -> Navi.
 
I believe it's possible to gain 50% in performance. Going from the 12nm process to the 7nm process should increase the efficiency and allow for more cores/faster clocks for the same wattage as Turing uses.

I doubt that Nvidia will lower prices though. They won't have a reason to unless Intel comes out with something really good this year for competition. Also, if there are shortages for any reason, then we can expect retailer gouging, which will make prices higher than they should be for a while after release.

But they are claiming 50% performance increase WHILE using 50% less power. It's perfectly believable to be either one ... but both @ the same time? Seriously doubt it.

EDIT

And this is a result of which fundamental law of physics?

There's no reason why Nvidia wouldn't offer the traditional +30% performance with a new generation just based on architecture improvements and optimizations.
Sometimes there's also some power efficiency bonus (despite the same node).
7nm means more cores.


Going straight to 7nm EUV means even higher efficiency gain than AMD got with Polaris -> Navi.

But they're not claiming 30%, are they? They could claim ... say ... 20% more performance @ 30% less power and it would be much more believable but 50% more performance @ 50% less power?

Seriously SERIOUSLY doubt this.
 
True. That's why I only focused on the possibility of a 50% increase in performance for the same wattage as Turing. Having a 50% increase in performance and 50% less power used at the same time isn't possible imo.
 
Why do you think nVidia wants to "surprise" all their customers at the absolute last second?

They don't want to cause panic too early before the next-gen release, because they know the 2080 Ti will drop to 1/3 of its value instantly when they make their announcement.

Don't give them the satisfaction... Sell your 2080 Ti NOW!!!
 
AMD: releases Big Navi.
Nvidia: what shall we do? Let's spread news about the next gen to stop people from buying Big Navi.

I think we hear this every time a new GPU is about to come out: the competitor starts spreading rumors to stop people from buying the competing product. Sometimes I think companies like AMD and Nvidia don't really need to do this, because their fanboys will do the job for them for free. I still remember when some people said you should hold back from getting a GTX 1080 because Vega was coming out in October 2016. They said there was a shortage of 1080s due to demand anyway, so you might as well wait until October 2016.
 
Finally we'll get the promised 2080ti, not the garbage we have now.
 
nVidia is the new Intel, YOU WILL SEE. New GPUs will be no cheaper and only a tiny bit faster, because there is no competition.
 
Talk of high-end Navi is rumour, not from AMD, so don't blame them!

Nvidia.... It's all BS Fu Nvidia!

Man this forum.
 
Why is there no "UP TO" in the title here while there was in the Intel news?
I don't like Intel, but I hate double standards.
 
If Nvidia skips 10nm and goes from 12nm directly to 7nm+/6nm, density increases from ~30 MTr/mm² on 12/14/16nm to ~77 MTr/mm² on 7nm+. The 2080 Ti's die shrinks from 754 mm² to ~300 mm²; drop the 384-bit bus to 256-bit with a 16 GB buffer and you get ~286 mm² with 4,096 CUDA cores, compensating with GPU clock speed. A bigger chip with 6,144 CUDA cores at around 429 mm² means +50% perf is perfectly doable.
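The die-shrink arithmetic above roughly checks out; here is the same back-of-envelope calculation, using the post's density figures plus TU102's published 754 mm² die:

```python
# Back-of-envelope die shrink using the density figures from the post.
density_16nm = 30.0   # MTr/mm^2, TSMC 12/14/16nm class (rough)
density_7nmp = 77.0   # MTr/mm^2, quoted for 7nm+ EUV (rough)

scale = density_7nmp / density_16nm     # ~2.57x denser
tu102 = 754.0                           # mm^2, TU102 (RTX 2080 Ti die)
print(f"{tu102 / scale:.0f} mm^2")      # ~294 mm^2 after a pure shrink
```

In practice I/O and analog blocks don't shrink linearly, so a real die would land somewhat larger than this pure-logic estimate.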
 
Why is there no "UP TO" in the title here while there was in the Intel news?
I don't like intel but I hate double standards
Because the source (apparently) doesn't say "up to". They're just reporting on someone else's report, so conveying this accurately is a must, no matter if the information itself is accurate. If this was data TPU had sourced themselves, your complaint would be entirely valid, but in this case they're just the messenger and do not deserve to be shot.
 
I believe it's possible to gain 50% in performance. Going from the 12nm process to the 7nm process should increase the efficiency and allow for more cores/faster clocks for the same wattage as Turing uses.

You're not going to get 50% more clock speed on 7nm; it's impossible. The only thing you'll get is more cores, but the power won't change much.
 
You're not going to get 50% more clock speed on 7nm; it's impossible. The only thing you'll get is more cores, but the power won't change much.

I didn't say 50% faster clocks. I said a combination of faster clocks/more cores should allow for a 50% increase in performance for the same watts used as Turing.
 
But they're not claiming 30%, are they? They could claim ... say ... 20% more performance @ 30% less power and it would be much more believable but 50% more performance @ 50% less power?
But based on what we've seen with Zen and Navi, would you say halving power draw is believable? Based on new node alone.
Remember this is 14nm -> 7nm EUV.
So it's analogous to what AMD has already shown with Zen2/Navi (DUV) + what they promise for this year.

I think it's quite feasible.

And now we get to the architectural improvements.
We know Nvidia can do +30% with each gen (every ~2 years). There's no reason why this wouldn't be true with Ampere.

So we now have something like +30% performance and -50% power draw. Not quite the +50%/-50%, but still not bad.

Except, as someone said above, the +50% may be taking into account a large boost in RTRT.
Of course RTRT is still a source of arguments today, but a few years from now we'll just call it "gaming performance" - just like we stopped splitting 3D and 2D in the late 90s.
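Putting the two estimates from this post together (both figures are the post's own guesses, not measurements):

```python
arch_perf = 1.30    # +30% performance from architecture alone (per post)
node_power = 0.50   # half the power draw from the node jump (per post)

perf_per_watt = arch_perf / node_power
print(f"{perf_per_watt:.1f}x perf/W")   # 2.6x, vs. the rumored 3.0x
```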
 
Because the source (apparently) doesn't say "up to". They're just reporting on someone else's report, so conveying this accurately is a must, no matter if the information itself is accurate. If this was data TPU had sourced themselves, your complaint would be entirely valid, but in this case they're just the messenger and do not deserve to be shot.
[attached screenshots comparing the two news headlines]


Now, double standards or not?
 
Regardless of where the 50% uplift applies, I'm just glad I didn't buy into the RTX 2000 series
 
Ah, Cyberpunk 2077: release date April 16, 2020; recommended system requirements: RTX 30xx...
 
And now we get to the architectural improvements.
We know Nvidia can do +30% with each gen (every ~2 years). There's no reason why this wouldn't be true with Ampere.
Turing was nowhere near 30% perf/SM over Pascal. More like 10%. Any further gains came from more SMs and higher clocks.

But based on what we've seen with Zen and Navi, would you say halving power draw is believable? Based on new node alone.
Remember this is 14nm -> 7nm EUV.
So it's analogous to what AMD has already shown with Zen2/Navi (DUV) + what they promise for this year.
That sounds rather unlikely to me, though I'm no expert by any stretch of the imagination. The jump from 28nm to 16nm did not halve power for Nvidia, so going from 12nm to 7nm EUV doesn't sound likely to do so either. Beyond that they're moving between foundries (at least for some GPUs), so comparisons could be difficult.

It's not analogous to AMD's move from Vega on GloFo 12nm to Navi on TSMC 7nm either, as that is a completely new architecture with very significant efficiency improvements. You'd be better off looking at the Radeon Vega 64 vs. the Radeon VII, as those are very similar in design but on a new node with slightly bumped clocks, and that improved perf/W by < 30%.

Not really, no. The first is reporting on IPC, which is (at least supposed to be) an average number based on a number of tests. An "up to" number in this case could thus be 18%, 30% or 45% - it's impossible to know, as we only know the average. Look at SPEC testing, for example - gen-to-gen IPC (clock equalized) testing in that normally results in a wide array of performance differences. The other reports on one specific single data point with little context. Is this a high number, an average, or a low? We have no idea, but given that it's an officially released number from the company itself, it's logical to infer it to be above average to present the product in the best light.
 
Turing was nowhere near 30% perf/SM over Pascal. More like 10%. Any further gains came from more SMs and higher clocks.


That sounds rather unlikely to me, though I'm no expert by any stretch of the imagination. The jump from 28nm to 16nm did not halve power for Nvidia, so going from 12nm to 7nm EUV doesn't sound likely to do so either. Beyond that they're moving between foundries (at least for some GPUs), so comparisons could be difficult.

It's not analogous to AMD's move from Vega on GloFo 12nm to Navi on TSMC 7nm either, as that is a completely new architecture with very significant efficiency improvements. You'd be better off looking at the Radeon Vega 64 vs. the Radeon VII, as those are very similar in design but on a new node with slightly bumped clocks, and that improved perf/W by < 30%.


Not really, no. The first is reporting on IPC, which is (at least supposed to be) an average number based on a number of tests. An "up to" number in this case could thus be 18%, 30% or 45% - it's impossible to know, as we only know the average. Look at SPEC testing, for example - gen-to-gen IPC (clock equalized) testing in that normally results in a wide array of performance differences. The other reports on one specific single data point with little context. Is this a high number, an average, or a low? We have no idea, but given that it's an officially released number from the company itself, it's logical to infer it to be above average to present the product in the best light.
Then the title should've included words such as "claim" or "say" (Nvidia says, or claims) at the very least, just like in the Intel news.
Saying that IT WILL BE 50% FASTER this way is just wrong and biased.
 
It will be very good but too expensive for the first couple of quarters; 2020 may bring some competition and price cuts.
 
Personally, I don't think that Nvidia will be very concerned with power consumption, because it has been seen that AMD, even with a new architecture and 7nm, is not brilliant in that regard.

I think they are more likely to try to keep TDPs at or slightly below current levels and push performance to the max within that envelope. About 250 W will probably remain normal for Nvidia's top-end card.
 
If Ampere really has a 50% gain over Turing in all benchmarks/real-world use while using less power, that's a good thing. The problem here is that many bought the "refreshed" RTX 20-series Super cards & GTX 16-series cards... so those folks might be at a bit of a loss? That said, I wonder what the naming convention will be: RTX 22xx? Or RTX 3xxx, since it's entirely new silicon? 2020 & 2021 will be interesting years.
 
As always take the super early info with an enormous pinch of salt. 50% is a silly large number, possibly pulled from thin air to keep money in wallets for it instead of buying a competing product today, or it might represent a super outside shot at a best case scenario like RTX or using a new gen specific feature like VRS did for Turing over Pascal.

What is almost certain is that, the cards as a product stack should reasonably outperform their Turing counterparts, they should also present a reasonable performance per watt improvement, and a reasonable hardware ray tracing improvement. Prices... well who knows, if AMD can't keep up with their upper tier products expect much of the same.

I won't count on anything solid whatsoever until W1zzard (and other trusted sites) publish a review of an actual product. Hopefully in any case 2020 brings a compelling upgrade for GTX 1080 (which I own) / Vega 56/64 owners that isn't the halo product and has a competitive price-to-perf ratio.
 