
NVIDIA's Next-Generation Ampere GPUs to be 50% Faster than Turing at Half the Power

Joined
Dec 31, 2009
Messages
19,366 (3.72/day)
Benchmark Scores Faster than yours... I'd bet on it. :)
Nice try.
Now let's check reality, shall we?

1080 (non TI) FE: $699
1080 (non TI) "later on edition": $599

Did I mention it was "non-TI"? Thanks.
Now, the TI version came later, after the 1080 milked the market for about a year.

#HowToMilk101


The S, right? The one released after AMD spanked the 2070?
That's cool.

The 2070 non-S, however, released at $599 for the FE edition.
Are you telling me it was slower than the 1080Ti? Oh, what a strange "improvement" of perf/$, chuckle.
The 1080ti's MSRP on release day was $699... look it up... so was the 1080's a few months prior.

Last I checked, they are a for-profit business. Perhaps if AMD had anything worthwhile at the time, prices wouldn't have inflated so much...
 
Joined
Sep 17, 2014
Messages
20,780 (5.97/day)
Location
The Washing Machine
Processor i7 8700k 4.6Ghz @ 1.24V
Motherboard AsRock Fatal1ty K6 Z370
Cooling beQuiet! Dark Rock Pro 3
Memory 16GB Corsair Vengeance LPX 3200/C16
Video Card(s) ASRock RX7900XT Phantom Gaming
Storage Samsung 850 EVO 1TB + Samsung 830 256GB + Crucial BX100 250GB + Toshiba 1TB HDD
Display(s) Gigabyte G34QWC (3440x1440)
Case Fractal Design Define R5
Audio Device(s) Harman Kardon AVR137 + 2.1
Power Supply EVGA Supernova G2 750W
Mouse XTRFY M42
Keyboard Lenovo Thinkpad Trackpoint II
Software W10 x64
Nice try.
Now let's check reality, shall we?

1080 (non TI) FE: $699
1080 (non TI) "later on edition": $599

Did I mention it was "non-TI"? Thanks.
Now, the TI version came later, after the 1080 milked the market for about a year.

#HowToMilk101


The S, right? The one released after AMD spanked the 2070?
That's cool.

The 2070 non-S, however, released at $599 for the FE edition.
Are you telling me it was slower than the 1080Ti? Oh, what a strange "improvement" of perf/$, chuckle.

As always, your strange pair of glasses made you handily gloss over a key word: competition. You are exactly right about the 2070S. That is how movement happens: when the competitor can challenge similar performance. So, if the contrary happens and performance stalls while being unchallenged, MSRP is unlikely to drop. That was the initial point. No performance movement = no or very slow price movement.

And even without competition the performance cost was reduced by 100 bucks MSRP, so, your point?
 
Joined
Jul 5, 2013
Messages
25,559 (6.52/day)
After all, cards still gotta get sold but the market is saturated with a certain performance level already. There is not much to compete over, so there is less competition.
I have to disagree with this. Market saturation, as far as the performance bar goes, is as it always is: economically tiered. Those with the money and desire always have what they want in the performance range, regardless of price and frequency of release. Those on a lesser but still generous budget plan for upgrades but generally get the performance parts. Those on an even lesser budget will bargain shop to get the best bang for buck. And finally, those with the desire for a good system but little to spend are the thrifty shoppers looking for used parts, clearance sales, and closeout deals.

This has been true for more than 30 years and has changed very little in that time.

Complaining about the Pascal price hike was never realistic, and here is your proof. We got a LOT more for our money there than we ever did during Kepler or Maxwell, especially in terms of VRAM. Everything's got a lush 8 GB (or more) to work with. The ONLY reason the 980ti is still relevant is that it's the only Maxwell card with 6GB. Try that with the budget 970...
However, on this we agree.
 
Joined
Sep 17, 2014
Messages
20,780 (5.97/day)
I have to disagree with this. Market saturation, as far as the performance bar goes, is as it always is: economically tiered. Those with the money and desire always have what they want in the performance range, regardless of price and frequency of release. Those on a lesser but still generous budget plan for upgrades but generally get the performance parts. Those on an even lesser budget will bargain shop to get the best bang for buck. And finally, those with the desire for a good system but little to spend are the thrifty shoppers looking for used parts, clearance sales, and closeout deals.

This has been true for more than 30 years and has changed very little in that time.

This is all true, but it doesn't discard the reality that Nvidia wants to keep selling cards, and a competitor will challenge those lower tiers more easily with every passing gen. If the performance of each respective tier doesn't go up noticeably, the higher-tier customers won't have anything left to buy. They will resort to buying into baby steps - look at Turing. The whole reason Nvidia released the S line is because for Pascal owners, Turing had little if anything to offer and RTX didn't trigger many into doing so regardless. Sales were shit until the Supers came about, and even now it's not anything shocking. That is why Navi appears to sell, too, by competing aggressively on the 1080 ~ 1080ti performance tier.

Put differently: if the 2080ti wasn't priced out of this world, Navi would have sold for even less and the 2070S would also be a lot cheaper atm. Those economic tiers have comfort levels for pricing, too.

I mean yes, the budget bin hunters and the mainstream bulk of sales exist. But they don't change the market radically; they just follow the path laid out by top-end performance. Trickle-down. That is where you find those cut-down monstrosities, varying VRAM capacities, small shader cutdowns (1060 3GB), the GDDR3/GDDR5 nonsense, and rebadged old-gen GPUs for mobile. Scraps and leftovers, because that is inevitably how silicon wafers work: big stuff gets scaled down to size. No big stuff, no progress. Nvidia even does this very visibly for us - remember the GP104 1060s... the 1070ti with cheap VRAM, etc. etc. None of that happens without the bar being moved further up.
 
Joined
Jul 5, 2013
Messages
25,559 (6.52/day)
They will resort to buying into baby steps - look at Turing.
As an owner of an RTX card upgraded from its GTX counterpart, a 30% to 50% (depending on the game) increase in performance is a serious jump and hardly a "baby-step".
The whole reason Nvidia released the S line is because for Pascal owners, Turing had little if anything to offer and RTX didn't trigger many into doing so regardless.
That is an inaccurate perspective. While it took time for the RTRT features to make it into games, the raw performance in non-RTRT games was an impressive jump and worth the upgrade by itself. Put another way, my 2080 kicks the crap out of my old 1080 in the exact same system. Anyone who thinks the RTX cards are not a worthy upgrade from the GTX 1xxx cards needs to take the blinders off...

I'm not going to debate the rest of your points, as they are mostly subjective and depend greatly on personal bias and opinion.
 
Joined
Sep 17, 2014
Messages
20,780 (5.97/day)
As an owner of an RTX card upgraded from its GTX counterpart, a 30% to 50% (depending on the game) increase in performance is a serious jump and hardly a "baby-step".

That is an inaccurate perspective. While it took time for the RTRT features to make it into games, the raw performance in non-RTRT games was an impressive jump and worth the upgrade by itself. Put another way, my 2080 kicks the crap out of my old 1080 in the exact same system. Anyone who thinks the RTX cards are not a worthy upgrade from the GTX 1xxx cards needs to take the blinders off...

I'm not going to debate the rest of your points, as they are mostly subjective and depend greatly on personal bias and opinion.

Perf/dollar shifts between generations... that was the point of discussion. Not your personal idea of how good a 2080 is. It was one of the worst perf/dollar choices in Turing and it still is. You've also upgraded not from top-end last-gen performance (again: that was the topic: the 1080ti, not the 1080) but from sub-top. You'd have been far better off waiting for the 2070S, or even simply buying the 1080ti from the get-go.

To each their own... just call it what it is. The numbers don't lie. Furthermore, Nvidia's own sales numbers pre-Turing S underline my 'inaccurate' perspective... There is very little subjective about it. There are indeed Turing cards today that offer meaningful upgrade paths. But the selection is small and appeared late in the generation, and not with the introduction of RTX on its own. It needed a price cut and got one. Heck, even today, it appears AMD has been gaining market share since the Turing launch. Odd... :)

We've been here before. I'm talking about the market and you're talking about your personal upgrade considerations. The latter is irrelevant here... This is the big picture.
 
Joined
Jul 5, 2013
Messages
25,559 (6.52/day)
Not your personal idea of how good a 2080 is.
It's not my idea, it's real world performance.
It was one of the worst perf/dollar choices in Turing and it still is.
Subjective opinion. Not everyone agrees. I didn't pay $1000 for my 2080. I got one for $700ish. Your perf/price value ratio is heavily dependent on the price being paid and the comparative upgrade.
You've also upgraded not from top-end last-gen performance (again: that was the topic: the 1080ti, not the 1080) but from sub-top.
I upgraded from one model tier to its counterpart in the RTX line. The TI models have a similar performance difference from the non-TI models, so when we talk about the price difference between the 1080ti and the 2080ti, then yes, you might have a point, but only for that model tier. And if I had jumped from a 1080 to a 1080ti, I would not have the RTRT features that were a big part of the motivation for the upgrade.
The numbers don't lie.
No, but they are greatly subjective and vary quite a bit from maker to maker and from region to region, something that always seems to get overlooked in discussions like this.

Once again, here's an idea: how about we let people make up their own minds where value is concerned.
 
Joined
Sep 17, 2014
Messages
20,780 (5.97/day)
It's not my idea, it's real world performance.

Subjective opinion. Not everyone agrees. I didn't pay $1000 for my 2080. I got one for $700ish. Your perf/price value ratio is heavily dependent on the price being paid and the comparative upgrade.

I upgraded from one model tier to its counterpart in the RTX line. The TI models have a similar performance difference from the non-TI models, so when we talk about the price difference between the 1080ti and the 2080ti, then yes, you might have a point, but only for that model tier. And if I had jumped from a 1080 to a 1080ti, I would not have the RTRT features that were a big part of the motivation for the upgrade.

No, but they are greatly subjective and vary quite a bit from maker to maker and from region to region, something that always seems to get overlooked in discussions like this.

Once again, here's an idea: how about we let people make up their own minds where value is concerned.

Wooooosh... that is all
 
Joined
Jul 9, 2015
Messages
3,413 (1.07/day)
System Name M3401 notebook
Processor 5600H
Motherboard NA
Memory 16GB
Video Card(s) 3050
Storage 500GB SSD
Display(s) 14" OLED screen of the laptop
Software Windows 10
Benchmark Scores 3050 scores a good 15-20% lower than average, despite ASUS's claims that it has uber cooling.
That is how movement happens: when the competitor can challenge similar performance.
I thought I was reading about a galactic-level breakthrough on the perf/$ front, but it seems I've misread it.
Oh, good to know.

And even without competition the performance cost was reduced by 100 bucks MSRP, so, your point?
My point... is that you were getting barely enough perf (not necessarily perf/$) bumps to somehow justify selling you (not personally you, of course) stuff.
With a hilarious $100 "on top" at the very beginning, which certain folks paid even though, oh well, how far was the 1080 from a well-OCed AIB 980Ti?

So, to summarize:

1) The major perf/$ improvements touted in this post are misleading BS, as demonstrated here.
2) If AMD really is missing in action, for whatever reason, expect milking Pascal/Turing style - and I mean literally Turing style, with greed pushing it to the point that sales targets are missed by 25%.
3) Re-read #1, it's worth it.
 
Joined
Sep 17, 2014
Messages
20,780 (5.97/day)
I thought I was reading about a galactic-level breakthrough on the perf/$ front, but it seems I've misread it.
Oh, good to know.


My point... is that you were getting barely enough perf (not necessarily perf/$) bumps to somehow justify selling you (not personally you, of course) stuff.
With a hilarious $100 "on top" at the very beginning, which certain folks paid even though, oh well, how far was the 1080 from a well-OCed AIB 980Ti?

So, to summarize:

1) The major perf/$ improvements touted in this post are misleading BS, as demonstrated here.
2) If AMD really is missing in action, for whatever reason, expect milking Pascal/Turing style - and I mean literally Turing style, with greed pushing it to the point that sales targets are missed by 25%.
3) Re-read #1, it's worth it.

Wooooosh... There is no AMD beef here. Stop searching for it.

Man, it must be Corona stress or something...

With a hilarious $100 "on top" at the very beginning, which certain folks paid even though, oh well, how far was the 1080 from a well-OCed AIB 980Ti?

You really gotta learn not to twist facts to fit your narrative. The 1080 was still 25% faster than a well-OC'd 980ti. So a normal tier jump, for all intents and purposes. And if you OC the 1080, 30% is easy to get. And it only got better as time and demands progressed, because the delta compression and memory are notably better and faster, and newer games love that.

You take an FE price point that nobody in their right mind ever paid (it was common knowledge that the Pascal FE blowers were utter shit and overpriced, straight from launch day, once the FE/non-FE MSRPs were known and reviews showed throttling) and take it for granted, while you discard the real MSRP of the 1080ti because that doesn't really suit you too well either.

I can't even... You can try as hard as you like, but it's clear you don't understand a thing about the marketplace, and rather seem to think it's a schoolyard with bickering kids. Here's news: that is how bickering kids get played by AMD and Nvidia. A little leak here, a rumor there, a Tuber with a scoop there... and boom. Free press. Meanwhile, in the real world, the only things that matter are price and the USPs that customers care about. The numbers. Simply. Don't. Lie.
 
Joined
Jul 9, 2015
Messages
3,413 (1.07/day)
Man, it must be Corona stress or something...
I write it off to the green reality distortion field.
Someone can state figures, be shown they are all way off, and still stick to the narrative.

The 1080 was still 25% faster than a well-OC'd 980ti.
BULLSHIT.
Even stock vs stock, it was about 30% ahead.

You take an FE price point that nobody in their right mind ever paid
That's why they were sold out.
But it's cool, we need to introduce a "but nobody bought this" aspect to figure out how graceful the pricing model was, chuckle.

And, for the record: I don't complain about it; in fact, I'd love NV to be even greedier, chuckle, as I'm rather enjoying it.

(is it OK if I ramble around, like I'm high or something? Just to make my post look even more "impressive"?)
No problem, dude.
 
Joined
Sep 17, 2014
Messages
20,780 (5.97/day)
I write it off to the green reality distortion field.
Someone can state figures, be shown they are all way off, and still stick to the narrative.


BULLSHIT.
Even stock vs stock, it was about 30% ahead.


That's why they were sold out.
But it's cool, we need to introduce a "but nobody bought this" aspect to figure out how graceful the pricing model was, chuckle.

And, for the record: I don't complain about it; in fact, I'd love NV to be even greedier, chuckle, as I'm rather enjoying it.


No problem, dude.

[attached chart: relative performance comparison]


24% is 30% in medi01-land.

lmao. But anyway, thanks for confirming that for me, because the point was, initially, that bigger perf jumps cause more price movement on the market - and they did, your whole non-discussion notwithstanding. The 1080ti only confirms that once more, by offering yet another 30% at the same price point only a year later.

By the way.

[attached screenshot]


Them being sold out doesn't change the fact they were shit. Once again, thanks for confirming your BS and me being 100% correct. You're even giving me the right sources now. Brilliant :D

I'd love NV to be even greedier, chuckle, as I'm rather enjoying it.

Signs of a madman. I'm not going all emotional over a pricing strategy.

Oh, and eh... you still haven't learned how to quote properly, it seems. Shame you gotta go so low, once again. I thought you had grown up a little...
 
Joined
Mar 21, 2016
Messages
2,194 (0.75/day)
I can hardly wait to see Nvidia's new lineup at 7nm priced even further out of sight than RTX. It'll be a joyous time for gaming for the 3 people that can afford them. I hope at the very least NV masks a money bag onto the PCB so people know they blew their wad on it. Nvidia: the way it's meant to be paid.
 
Joined
Jul 5, 2013
Messages
25,559 (6.52/day)
Wooooosh... that is all
This comment made me think I might have missed some parts of the conversation, and after review... YUP. If I'm not much mistaken, I was making all your points for you while at the same time arguing against you. How's that for irony? You'll excuse me while I extract my foot from my mouth...
Them being sold out doesn't change the fact they were shit.
True.
 
Joined
Jul 9, 2015
Messages
3,413 (1.07/day)

the54thvoid

Intoxicated Moderator
Staff member
Joined
Dec 14, 2009
Messages
12,378 (2.37/day)
Location
Glasgow - home of formal profanity
Processor Ryzen 7800X3D
Motherboard MSI MAG Mortar B650 (wifi)
Cooling be quiet! Dark Rock Pro 4
Memory 32GB Kingston Fury
Video Card(s) Gainward RTX4070ti
Storage Seagate FireCuda 530 M.2 1TB / Samsung 960 Pro M.2 512GB
Display(s) LG 32" 165Hz 1440p GSYNC
Case Asus Prime AP201
Audio Device(s) On Board
Power Supply be quiet! Pure Power M12 850W Gold (ATX3.0)
Software W10
To clear things up, percentages are relative to a baseline.

For example, consider 50 and 100.

As an increase in performance, 100 is 100% higher than 50. However, 50 is also 50% lower than 100. It all depends on where you set the baseline. This comes up all the time on TPU, and percentages really should be used in context; otherwise we get these arithmetic confusions.

So, 2 is double 1, and 1 is half of 2. They're both right.
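
To put the same arithmetic in code - a minimal sketch in Python, where the 50/100 pair is the example from this post and the 100/81 pair is purely illustrative of the 24%-vs-30% squabble above:

```python
def percent_faster(new, old):
    """How much faster `new` is than `old`, using `old` as the baseline."""
    return (new / old - 1) * 100

def percent_slower(slow, fast):
    """How much slower `slow` is than `fast`, using `fast` as the baseline."""
    return (1 - slow / fast) * 100

print(f"{percent_faster(100, 50):.1f}%")  # 100.0% -> 100 is 100% higher than 50
print(f"{percent_slower(50, 100):.1f}%")  # 50.0%  -> 50 is 50% lower than 100

# Same trap with a smaller gap: pick a different baseline and the
# "same" difference yields a different percentage.
print(f"{percent_faster(100, 81):.1f}%")  # 23.5% faster, slower card as baseline
print(f"{percent_slower(81, 100):.1f}%")  # 19.0% slower, faster card as baseline
```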
 
Joined
Mar 9, 2020
Messages
80 (0.05/day)
There are also rumours that they are having fab difficulties - low yields, power spikes, etc. - with 5nm on such a large die.
We shall soon see if there's any truth to them.
 
Joined
Jul 10, 2015
Messages
748 (0.23/day)
Location
Sokovia
System Name Alienation from family
Processor i7 7700k
Motherboard Hero VIII
Cooling Macho revB
Memory 16gb Hyperx
Video Card(s) Asus 1080ti Strix OC
Storage 960evo 500gb
Display(s) AOC 4k
Case Define R2 XL
Power Supply Be f*ing Quiet 600W M Gold
Mouse NoName
Keyboard NoNameless HP
Software You have nothing on me
Benchmark Scores Personal record 100m sprint: 60m
Ampere is on 7nm; 5nm should be for smaller Tegra chips first.
 
Joined
Mar 10, 2014
Messages
1,793 (0.49/day)
Ampere is on 7nm; 5nm should be for smaller Tegra chips first.

Well yeah, Drive AGX Orin sounds about right for that timeline (ca. 2022). Albeit being a relatively big chip by itself, its size will be more like a midrange GPU than a big heavy compute part.

And you are right, an unproven 5nm node for a large compute chip sounds very unlikely.
 

ARF

Joined
Jan 28, 2020
Messages
3,892 (2.56/day)
Location
Ex-usa
There are also rumours that they are having fab difficulties - low yields, power spikes, etc. - with 5nm on such a large die.
We shall soon see if there's any truth to them.


You mean with 7nm?
It sounds quite plausible, though. It's been a year and 2 months since AMD released the Radeon VII, and 10 months since the RX 5700 XT.

AMD is about to launch second-generation N7P products soon.
 
Joined
May 2, 2017
Messages
7,762 (3.08/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
You mean with 7nm?
It sounds quite plausible, though. It's been a year and 2 months since AMD released the Radeon VII, and 10 months since the RX 5700 XT.

AMD is about to launch second-generation N7P products soon.
I'm not saying what you quoted is true, but the size difference between the Radeon VII and any follow-up to the RTX 2080 Ti would be very significant - Vega 20 on 7nm was 331mm2, while Vega 10 on 14nm was 495mm2 - a ~35% shrink (with a couple of added memory controllers and PHYs etc., so the actual density increase is likely a bit higher). While TSMC 12nm isn't identical in density to GloFo 14nm, they should nonetheless be roughly comparable - which would leave a direct shrink of the 754mm2 TU102 at ~490mm2, barring any added CUDA cores or other hardware. That's quite a lot larger. I would be surprised if they couldn't get decent yields even at that size, but it would be by far the biggest chip in volume production on that node (though we'll see what RDNA 2 brings).
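
For what it's worth, that back-of-the-envelope projection is easy to reproduce - a minimal sketch in Python using the die areas quoted above, assuming straight linear scaling (which, as noted, ignores the memory controllers and PHYs that shrink poorly):

```python
# Die areas in mm^2, as quoted in the post above.
VEGA10_14NM = 495  # Vega 10, GloFo 14nm
VEGA20_7NM = 331   # Vega 20, TSMC 7nm
TU102_12NM = 754   # TU102, TSMC 12nm

# Observed linear scale factor from the Vega 14nm -> 7nm shrink.
scale = VEGA20_7NM / VEGA10_14NM  # ~0.67, i.e. roughly a third smaller

# Naive projection: shrink TU102 by the same factor, with no added hardware.
tu102_at_7nm = TU102_12NM * scale
print(f"scale factor: {scale:.2f}")                         # 0.67
print(f"projected TU102 at 7nm: ~{tu102_at_7nm:.0f} mm^2")  # ~504 mm^2
# Rounding the shrink to ~35%, as the post does, lands at ~490 mm^2.
```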
 
Joined
Mar 18, 2015
Messages
2,960 (0.90/day)
Location
Long Island
I can't imagine that I live in a world where fanbois argue about pre-release specs that come out of the advertising department... and, on top of that, argue that their brand's fake specs are all real and the other guys' are all fake. Save ya arguing for when the cards are tested. My bet is we're just going to see more of the same...

Mantle was gonna change everything ... it didn't
HBM2 was gonna change everything ... it didn't
7nm was gonna change everything ... it didn't

What we do know is that the GPU market stopped being competitive with the 7xx versus 2xx series, where nVidia walked away with the top two tiers (all cards overclocked). AMD lost another tier against the 970 and another tier against the 1060. The next generation didn't go well for both sides in some respects... AMD had to make huge price cuts; nVidia didn't, because they didn't have to. The bright shining light was the 5600 XT; pretty much nothing else out of AMD got me excited... if they can scale that up into the upper tiers, things may finally get interesting.
 
Joined
Mar 21, 2016
Messages
2,194 (0.75/day)
To be fair, all three of those things changed things; as for them changing "everything", idk who made such absurd claims and remarks lol. But if you think HBM2 isn't great, I'd hate to disappoint you: what do you think both companies are using for their professional-tier graphics cards, exactly? Mantle isn't any worse than other GPU tech that requires developer support, be it DLSS/RTX/CF/SLI/PhysX or any other proprietary 3D voodoo FX GPU hardware + developer magic. As for 7nm, it's changed plenty; just look at what it changed for Ryzen chips. Go on, tell me it hasn't changed anything, or are you still clinging to a quad-core Intel chip!? I mean, let's not pretend 7nm hasn't made any difference; obviously it has and will continue to do so, and TSMC has plenty of time for 7nm++++++++++++++, I mean Intel has paved the way for it.

I'd say the 5600M and the new Radeon Pro VII are both intriguing parts, and Renoir as well. AMD just needs to shuffle together some of the things it's got or has worked on. That would include Radeon Pro/Vega HBCC, particularly the card where they utilized an M.2 slot. I think AMD is in a position to do a lot of intriguing things on the GPU side, similar to how Ryzen was able to shake things up on the CPU side of things. I'm not saying it'll happen immediately, but I have a feeling they are going to hit back hard again one of these days on the GPU side. Chances are rather likely that it'll be during a period when the CPU side of the business begins to wane or is waning again. It would stand to reason that would be a transition period where they'd make a concerted effort to double or triple down on the R&D of their GPU division portfolio, to leverage it while they come up with a new CPU architecture design win again.

On the gaming side, I'd like to think that, inverse of the new Radeon Pro VII's double-precision FP64 focus, they'll go more in the opposite direction with half-precision FP16, which actually seems like it would tie in more appropriately with variable rate shading. Double-precision FP64 seems like it would be more beneficial for less stringent, non-"real time" rendering requirements and flexibility, while half precision I'd think is more the opposite, enabling finer granularity; though a mixture of FP32 and double-precision FP64 is likely in order at some stage or another for gaming, to leverage them all with variable rate shading to the best extent.

Probably something like 50% going to FP32, 37.5% to half-precision FP16, and 12.5% to double-precision FP64 is what I'd expect for gaming cards in the future, while compute workloads would reverse half precision in favor of double precision; that ratio might be closer to 6.25%/43.75% for half/double precision, with FP32 remaining rather neutral. Or we could see quarter precision and quad precision take more of the split of resource allocations, while keeping FP32 the majority: in that scenario it would be more like 6.25% FP8, 6.25% FP16, 50% FP32, 18.75% FP64, 18.75% FP128, or you could inverse the FP64/FP128 and FP8/FP16 shares between gaming- and compute-oriented consumer cards. I'm mostly speculating on that, but I think more granularity is certainly beneficial, especially with variable rate shading. In terms of the floating-point precision aspect, I'd say that applies to AMD and Nvidia, as well as Intel "if" they do ultimately become competitive in discrete graphics.

On the APU side, I could see AMD teaming an APU with an x16 discrete APU that matches its specs for both the CPU cores and GPU CUs, increasing the overall combined system resources for both tasks. Maybe it's too late for that with its latest APU, perhaps not, but I do see it as a very real possibility in the future, and I really do think it would have big appeal to a great many people who just want a nice affordable balance and a handy upgrade path. Sure, maybe the GPUs wouldn't scale perfectly teamed together in a CF format in all instances, but the additional CPU cores would likely still be beneficial in the instances where that doesn't apply, so it could still be an overall net gain either way. Basically, even if it only ticks 1 of the 2 checkboxes, it's still a net gain in either scenario, which is a cool thing to think about. And AMD is best positioned right now to offer it to consumers, because Intel hasn't exactly proven itself in that area nearly as well at this point; then again, perhaps they deserve more credit than we give them, given how integrated GPUs have slowly been eroding discrete graphics over the years - and that's true of any company making integrated graphics in one form or another, from Nvidia back on LGA775 to Intel and AMD today.
 
Joined
May 2, 2017
Messages
7,762 (3.08/day)
To be fair, all three of those things changed things; as for them changing "everything", idk who made such absurd claims and remarks lol. But if you think HBM2 isn't great, I'd hate to disappoint you: what do you think both companies are using for their professional-tier graphics cards, exactly? Mantle isn't any worse than other GPU tech that requires developer support, be it DLSS/RTX/CF/SLI/PhysX or any other proprietary 3D voodoo FX GPU hardware + developer magic. As for 7nm, it's changed plenty; just look at what it changed for Ryzen chips. Go on, tell me it hasn't changed anything, or are you still clinging to a quad-core Intel chip!? I mean, let's not pretend 7nm hasn't made any difference; obviously it has and will continue to do so, and TSMC has plenty of time for 7nm++++++++++++++, I mean Intel has paved the way for it.

I'd say the 5600M and the new Radeon Pro VII are both intriguing parts, and Renoir as well. AMD just needs to shuffle together some of the things it's got or has worked on. That would include Radeon Pro/Vega HBCC, particularly the card where they utilized an M.2 slot. I think AMD is in a position to do a lot of intriguing things on the GPU side, similar to how Ryzen was able to shake things up on the CPU side of things. I'm not saying it'll happen immediately, but I have a feeling they are going to hit back hard again one of these days on the GPU side. Chances are rather likely that it'll be during a period when the CPU side of the business begins to wane or is waning again. It would stand to reason that would be a transition period where they'd make a concerted effort to double or triple down on the R&D of their GPU division portfolio, to leverage it while they come up with a new CPU architecture design win again.

On the gaming side, I'd like to think that, inverse of the new Radeon Pro VII's double-precision FP64 focus, they'll go more in the opposite direction with half-precision FP16, which actually seems like it would tie in more appropriately with variable rate shading. Double-precision FP64 seems like it would be more beneficial for less stringent, non-"real time" rendering requirements and flexibility, while half precision I'd think is more the opposite, enabling finer granularity; though a mixture of FP32 and double-precision FP64 is likely in order at some stage or another for gaming, to leverage them all with variable rate shading to the best extent.

Probably something like 50% going to FP32, 37.5% to half-precision FP16, and 12.5% to double-precision FP64 is what I'd expect for gaming cards in the future, while compute workloads would reverse half precision in favor of double precision; that ratio might be closer to 6.25%/43.75% for half/double precision, with FP32 remaining rather neutral. Or we could see quarter precision and quad precision take more of the split of resource allocations, while keeping FP32 the majority: in that scenario it would be more like 6.25% FP8, 6.25% FP16, 50% FP32, 18.75% FP64, 18.75% FP128, or you could inverse the FP64/FP128 and FP8/FP16 shares between gaming- and compute-oriented consumer cards. I'm mostly speculating on that, but I think more granularity is certainly beneficial, especially with variable rate shading. In terms of the floating-point precision aspect, I'd say that applies to AMD and Nvidia, as well as Intel "if" they do ultimately become competitive in discrete graphics.

On the APU side, I could see AMD teaming an APU with an x16 discrete APU that matches its specs for both the CPU cores and GPU CUs, increasing the overall combined system resources for both tasks. Maybe it's too late for that with its latest APU, perhaps not, but I do see it as a very real possibility in the future, and I really do think it would have big appeal to a great many people who just want a nice affordable balance and a handy upgrade path. Sure, maybe the GPUs wouldn't scale perfectly teamed together in a CF format in all instances, but the additional CPU cores would likely still be beneficial in the instances where that doesn't apply, so it could still be an overall net gain either way. Basically, even if it only ticks 1 of the 2 checkboxes, it's still a net gain in either scenario, which is a cool thing to think about. And AMD is best positioned right now to offer it to consumers, because Intel hasn't exactly proven itself in that area nearly as well at this point; then again, perhaps they deserve more credit than we give them, given how integrated GPUs have slowly been eroding discrete graphics over the years - and that's true of any company making integrated graphics in one form or another, from Nvidia back on LGA775 to Intel and AMD today.
An add-on APU AIC over PCIe would be a terrible idea unless it included modifications to the Windows scheduler that strictly segregated the two chips with no related processes ever crossing between the two. Without that you would have absolutely horrible memory latency issues and other NUMA-related performance issues, just exacerbated by being connected over (for this use) slow PCIe. Remember how 1st and 2nd generation Threadripper struggled to scale due to NUMA issues? It would be that, just multiplied by several orders of magnitude due to the PCIe link latency. It could work as a compute coprocessor or something similar (running its own discrete workloads), but it would be useless for combining with the existing CPU/APU. Scaling would be horrendous.

As for FP32/16/8, most if not all modern GPU architectures (Vega and onwards from AMD) support Rapid Packed Math or similar techniques for "packing" multiple smaller instructions (INT8 or FP16) into FP32 execution units for 100% performance scaling (i.e. 2:1 FP16 to FP32 or 4:1 INT8 to FP32). No additional hardware is needed for this beyond the changes to shader cores that have already been in existence for several years. So any modern GPU with X TFLOPS FP32 should be able to compute 2X TFLOPS FP16 or 4X INT8. FP64 needs additional hardware, as it is (at least for now, in consumer GPUs) not possible to combine multiple FP32 units into one FP64 unit or anything like that (might be possible if they built it that way), but FP64, as you say, has little utility in consumer applications, so that isn't happening. CDNA is likely to aim for everything between INT8 and FP64, as the full range is useful for HPC, ML and other datacenter uses.
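
To make the packing arithmetic concrete, here is a minimal sketch of the idealized 2:1/4:1 scaling described above - pure arithmetic, with the 10 TFLOPS input being an arbitrary example rather than any specific card:

```python
def packed_rates(fp32_tflops):
    """Idealized packed-math throughput: 2x FP16, 4x INT8 per FP32 unit."""
    return {
        "FP32 (TFLOPS)": fp32_tflops,
        "FP16 (TFLOPS)": fp32_tflops * 2,  # two FP16 values per FP32 lane
        "INT8 (TOPS)": fp32_tflops * 4,    # four INT8 values per FP32 lane
    }

print(packed_rates(10.0))
# {'FP32 (TFLOPS)': 10.0, 'FP16 (TFLOPS)': 20.0, 'INT8 (TOPS)': 40.0}
```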

It will be very interesting to see if game engine developers start to utilize FP16 more in the coming years, now that GPUs generally support it well and frameworks for its utilization have been in place for a while. It could be very useful to speed up rendering of less important parts of the screen, perhaps especially if combined with foveated rendering for HMDs with eye tracking.

I can't imagine that I live in a world where fanbois argue about pre-release specs that come out of the advertising department... and, on top of that, argue that their brand's fake specs are all real and the other guys' are all fake. Save ya arguing for when the cards are tested. My bet is we're just going to see more of the same...

Mantle was gonna change everything ... it didn't
HBM2 was gonna change everything ... it didn't
7nm was gonna change everything ... it didn't

What we do know is that the GPU market stopped being competitive with the 7xx versus 2xx series, where nVidia walked away with the top two tiers (all cards overclocked). AMD lost another tier against the 970 and another tier against the 1060. The next generation didn't go well for both sides in some respects... AMD had to make huge price cuts; nVidia didn't, because they didn't have to. The bright shining light was the 5600 XT; pretty much nothing else out of AMD got me excited... if they can scale that up into the upper tiers, things may finally get interesting.
Well ...

Mantle paved the way for Vulkan and DX12, the current dominant graphics API and the clear runner-up. Without AMD's push for closer-to-the-hardware APIs we might not have seen these arrive as quickly. Has it revolutionized performance? No. But it leaves us a lot of room for growth that DX11 and OpenGL were running out of due to overhead issues. While there are typically negligible performance differences between the different APIs in games that support several (and the older ones often perform better), this is mainly down to a few factors: more familiarity with programming for the older API, needing to program for the lowest common denominator (i.e. no opportunity to specifically utilize the advantages of newer APIs), etc.

HBM(2) represents a true generational leap in power efficiency per bandwidth, and is still far superior to any GDDR or DDR technology. The issue is that adoption has been slow and the only major markets have been high-margin enterprise products, leading to prices stagnating at very high levels. Though to be fair, given the high price of GDDR6, this is less of an issue than two years ago. Still, the cost of entry is higher due to the need for an interposer (or something EMIB-like) and more exotic packaging technology, and this means that GPUs using HBM have typically been expensive. Of course it's also gotten a worse reputation than deserved due to the otherwise unimpressive performance of the GPUs it's been paired with. Nonetheless, GPUs like the recently announced Radeon Pro 5600M show just how large an impact it can have on power efficiency while delivering excellent performance. I'm still hoping for HBM2(e?) on "big Navi".

7nm (and Zen 2, of course) took AMD from "good performance, great value for money, particularly with multithreaded applications" to "clear overall performance winner, clear efficiency winner, minor ST disadvantage" in the CPU space. It in combination with RDNA (which is not to be discounted in terms of efficiency when compared to 7nm GCN in the Radeon VII) brought AMD to overall perf/W parity with Nvidia even in frequency-pushed SKUs like the 5700 XT, which we hadn't seen since the Kepler/early GCN era before that. We've also seen that lower clocked versions of 7nm RDNA (original BIOS 5600 XT and Radeon Pro 5600M) are able to notably surpass anything Nvidia has to offer in terms of efficiency. Now, of course there is a significant node advantage in play here, but 7nm has nonetheless helped AMD reach a point in the competitive landscape that it hasn't seen on either the CPU or GPU side for many, many years. With AMD promising 50% improved perf/W for RDNA2 (even if that is peak and the average number is, say, 30%) we're looking at some very interesting AMD GPUs coming up.

It's absolutely true that AMD has a history of over-promising and under-delivering, particularly in the years leading up to the RDNA launch, but things are looking like that has changed. The upcoming year is going to be exciting from both GPU makers, consoles are looking exciting, and even the CPU space is showing some signs of actually being interesting again (though mostly in mobile).
 
Joined
Jan 8, 2017
Messages
8,862 (3.36/day)
System Name Good enough
Processor AMD Ryzen R9 7900 - Alphacool Eisblock XPX Aurora Edge
Motherboard ASRock B650 Pro RS
Cooling 2x 360mm NexXxoS ST30 X-Flow, 1x 360mm NexXxoS ST30, 1x 240mm NexXxoS ST30
Memory 32GB - FURY Beast RGB 5600 Mhz
Video Card(s) Sapphire RX 7900 XT - Alphacool Eisblock Aurora
Storage 1x Kingston KC3000 1TB 1x Kingston A2000 1TB, 1x Samsung 850 EVO 250GB , 1x Samsung 860 EVO 500GB
Display(s) LG UltraGear 32GN650-B + 4K Samsung TV
Case Phanteks NV7
Power Supply GPS-750C
Mantle was gonna change everything ... it didn't
HBM2 was gonna change everything ... it didn't
7nm was gonna change everything ... it didn't

Mantle paved the way for Vulkan.
HBM is now used by both AMD and Nvidia for their highest end GPUs.
Everyone is moving to 7nm, which dramatically increases transistor counts; Nvidia has a 54-billion-transistor GPU. If that isn't a game changer, I don't know what is.

Bottom line is, you don't know what you're talking about.
 