Wednesday, June 12th 2019

NVIDIA's SUPER Tease Rumored to Translate Into an Entire Lineup Shift Upwards for Turing

NVIDIA's SUPER teaser hasn't crystallized into anything physical as of now, but we know it's coming - NVIDIA themselves saw to it that our collective minds would be buzzing about what the teaser meant, looking to steal some thunder from AMD's E3 showing. Now, that teaser seems to be coalescing into something concrete within the industry: an entire lineup upgrade for Turing products, with NVIDIA pulling its chips up one rung of the performance ladder across the whole stack.

Apparently, NVIDIA will be looking to increase performance across the board by shuffling its chips downward whilst keeping the current pricing structure. This means that NVIDIA's TU106 chip, which powered the RTX 2070 graphics card, will now power the RTX 2060 SUPER (with a reported core count of 2,176 CUDA cores). The TU104 chip, which powers the current RTX 2080, will in the meantime power the SUPER version of the RTX 2070 (a reported 2,560 CUDA cores are expected onboard), and the TU102 chip that powered the top-of-the-line RTX 2080 Ti will be brought down to the RTX 2080 SUPER (specs place this at 8 GB of GDDR6 VRAM and 3,072 CUDA cores). This carves the way for an even more powerful SKU in the RTX 2080 Ti SUPER, which should launch at a later date. Saltier rumors say the RTX 2080 Ti SUPER will feature an unlocked chip which could be allowed to convert up to 300 W into graphics horsepower, so that's something to keep an eye - and a power meter - on, for sure. Less defined talk suggests that NVIDIA will introduce an RTX 2070 Ti SUPER equivalent with a new chip as well.
This means that NVIDIA will be increasing performance by an entire tier across its Turing lineup, thus bringing improved RTX performance to lower pricing brackets than could be achieved with the original 20-series lineup. Industry sources (independently verified) have put forward that NVIDIA plans to announce - and perhaps introduce - some of its SUPER GPUs as soon as next week.

Should these new SKUs dethrone NVIDIA's current Turing series from their current pricing positions and increase performance across the board, AMD's Navi may find itself thrown into a chaotic market it was never meant to compete in - the RX 5700 XT at $449 features performance that's on par with or slightly higher than NVIDIA's current RTX 2070, but the SUPER version seems to pack just enough extra cores to offset that performance difference and then some, whilst also offering raytracing.
Granted, NVIDIA's TU104 chip powering the RTX 2080 does feature a grand 545 mm² die area, whilst AMD's RX 5700 XT makes do with less than half that at 251 mm². Barring different wafer pricing for the newer 7 nm technology employed by AMD's Navi, this means AMD's dies are cheaper to produce than NVIDIA's, and a price correction for AMD's lineup should be pretty straightforward whilst still allowing AMD to keep healthy margins.
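As a back-of-the-envelope illustration of that die-cost argument, the classic dies-per-wafer estimate can be sketched in a few lines of Python. The wafer prices below are purely hypothetical placeholders, not reported foundry figures:

```python
import math

def dies_per_wafer(die_area_mm2: float, wafer_diameter_mm: float = 300.0) -> int:
    """Classic dies-per-wafer approximation for square dies on a
    circular wafer, with a correction term for edge loss."""
    radius = wafer_diameter_mm / 2
    return int(math.pi * radius ** 2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

# Die areas from the article; the wafer prices are made-up placeholders.
candidates = [
    ("TU104 (12 nm)", 545, 6000),    # hypothetical 12 nm wafer price
    ("Navi 10 (7 nm)", 251, 10000),  # hypothetical 7 nm wafer premium
]
for name, area, wafer_price in candidates:
    n = dies_per_wafer(area)
    print(f"{name}: ~{n} dies/wafer, ~${wafer_price / n:.0f} per die")
```

Even with a steep hypothetical 7 nm wafer premium, the smaller die comes out cheaper per unit in this sketch; real costs also hinge on yield, which this estimate ignores.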
Sources: WCCFTech, Videocardz

126 Comments on NVIDIA's SUPER Tease Rumored to Translate Into an Entire Lineup Shift Upwards for Turing

#2
cucker tarlson
2080ti cut down to 3072 cuda? that's like a vasectomy.
If they throw in 3500 cuda and 90% of the 2080 Ti's RT capability, I might get one at $699
#3
Fluffmeister
It's wrong to assume a larger chip on an old node is more expensive to produce than a smaller chip on a new one; Nvidia's margins have remained pretty healthy regardless.
cucker tarlson: 2080ti cut down to 3072 cuda? that's like a vasectomy.
Yeah sounds like they are just using the full TU104 chip to me.
#4
64K
hmmm looking at the 2080 that's a 4.3% increase in cores. Assuming the clocks get a nice bump, would all of that even equate to a 10% increase in performance over what you could OC an FE to yourself? How much can you still OC a 2080 Super, or will it be pretty close to its peak stable clocks out of the box? Waiting for reviews.

If there is a price adjustment downwards with the Super lineup then that would be nice but that's just a rumor right now.
#5
IceShroom
TesterAnon: Welp there goes NAVI, again.
It means current RTX is outdated. RTX launched how many months ago? Maybe 8 or 10!! Well, iPhones have at least a 1-year lifecycle.
#6
trparky
So correct me if I'm wrong, does this mean that a 2080 will become a 2070 with the same price as the 2080 and a new 2080 will come out with an even higher price?
#7
FeelinFroggy
IceShroom: It means current RTX is outdated. RTX launched how many months ago? Maybe 8 or 10!! Well, iPhones have at least a 1-year lifecycle.
Releasing a new GPU does not impact the life cycle of the RTX cards at all. It also won't change their performance. They will get the same fps in games next week that they got last week.

It just means that there are other products on the market that can get more fps.
#8
TheoneandonlyMrK
IceShroom: It means current RTX is outdated. RTX launched how many months ago? Maybe 8 or 10!! Well, iPhones have at least a 1-year lifecycle.
The way Consumers are meant to be played.


Plus this is just Spoiler PR BS, the cards are not hitting shelves for months, and then it apparently won't all be at the same time.

I'm not linking the wccftech article.

So Trparky, they are undoing the upsell, apparently. I would be livid personally.
#10
bug
Fluffmeister: It's wrong to assume a larger chip on an old node is more expensive to produce than a smaller chip on a new one; Nvidia's margins have remained pretty healthy regardless.
The price of a chip has always been a function of its die area. Manufacturing doesn't care about designs, it's all about imprinting a pattern while keeping the flaws to a minimum ;)
You are right to point out newer nodes tend to be more expensive than older ones. At least while they coexist.
#11
kings
Fluffmeister: It's wrong to assume a larger chip on an old node is more expensive to produce than a smaller chip on a new one; Nvidia's margins have remained pretty healthy regardless.
Yeah, Nvidia already said that the 7nm node is very expensive at the moment and so they are not in a hurry to move, since even with the more affordable 12nm they have the most efficient cards!

syncedreview.com/2019/03/21/nvidia-ceo-says-no-rush-on-7nm-gpu-company-clearing-its-crypto-chip-inventory/
#12
trparky
kings: Nvidia already said that the 7nm node is very expensive at the moment
So why has AMD moved to 7nm, seemingly without any issues?
#13
bug
trparky: So why has AMD moved to 7nm, seemingly without any issues?
What do you mean no issues? They have a die half the size of Nvidia's, but their finished product costs just as much. That was the only implied issue.
#14
trparky
@kings mentioned something about the expense of moving to 7nm, that's what I'm referring to. AMD seems to have been able to move to 7nm and do it quite well without having to raise prices. Why can't nVidia do the same?
#15
bug
trparky: @kings mentioned something about the expense of moving to 7nm, that's what I'm referring to. AMD seems to have been able to move to 7nm and do it quite well without having to raise prices. Why can't nVidia do the same?
Oh, that. There are more parties competing for 7nm production, most of them in the mobile business. If you take a 12nm die and move it to 7nm you get a smaller die, but you pay more per square mm. Like I said above, a die half the size of what Nvidia builds on 12nm seems to cost the same when built on 7nm.
Plus, Nvidia really doesn't need 7nm now.
#16
trparky
It was a question that I wanted to be answered. I always thought that shrinking the node meant lower prices since (theoretically speaking) you'd get more usable chips out of each silicon wafer. The larger the die the fewer chips you get from the silicon wafer, the smaller the die... well duh.
#17
efikkan
Fluffmeister: It's wrong to assume a larger chip on an old node is more expensive to produce than a smaller chip on a new one; Nvidia's margins have remained pretty healthy regardless.
TSMC 7nm is at least twice as expensive per density, probably more, since the old "16/12nm" node has reached its full potential, while the 7nm node is still maturing.
trparky: @kings mentioned something about the expense of moving to 7nm, that's what I'm referring to. AMD seems to have been able to move to 7nm and do it quite well without having to raise prices. Why can't nVidia do the same?
AMD's move to a more expensive node is driven by its need for higher energy efficiency, even though the gains are relatively small.
Nvidia, on the other hand, has a superior architecture that achieves better efficiency on an "inferior" node. They will not move to a new node until they need to, and considering the volumes of large chips Nvidia ships vs. AMD, Nvidia needs a more mature node before it moves production.
"Should these new SKUs dethrone NVIDIA's current Turing series from their current pricing positions, and increase performance across the board, AMD's Navi may find themselves thrown into a chaotic market that they were never meant to be in…"
Really? So AMD assumed that Nvidia wouldn't update their lineup for the next several years?
A few years ago Nvidia used to do mid-life upgrades of their generations every year or so.
#18
bug
trparky: It was a question that I wanted to be answered. I always thought that shrinking the node meant lower prices since (theoretically speaking) you'd get more usable chips out of each silicon wafer. The larger the die the fewer chips you get from the silicon wafer, the smaller the die... well duh.
And that is correct, but only once the new node is refined/established enough. 7nm is not, just yet, so prod capacity is at a premium atm.
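The maturity argument can be made concrete with a simple Poisson yield model - the defect densities below are illustrative assumptions, not foundry data - showing that on a young node, even a die half the size can yield worse than the big die does on a mature node:

```python
import math

def poisson_yield(die_area_mm2: float, defects_per_cm2: float) -> float:
    """Expected fraction of defect-free dies under a Poisson defect model."""
    return math.exp(-(die_area_mm2 / 100.0) * defects_per_cm2)

# Illustrative (made-up) defect densities: mature node vs. young node.
mature = poisson_yield(545, 0.1)  # big die, mature 12 nm-class node
young = poisson_yield(272, 0.5)   # die half the size, young 7 nm-class node

print(f"mature node, 545 mm² die: {mature:.1%} yield")  # 58.0%
print(f"young node,  272 mm² die: {young:.1%} yield")   # 25.7%
```

Tiny mobile dies suffer far less from a high defect density, which is one reason mobile customers can afford to be first on a new node.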
#19
kings
trparky: @kings mentioned something about the expense of moving to 7nm, that's what I'm referring to. AMD seems to have been able to move to 7nm and do it quite well without having to raise prices. Why can't nVidia do the same?
But AMD has pushed up the prices, with GPUs half the size of Turing, their cards cost as much as Nvidia's equivalents!

AMD had to go to 7nm out of necessity to compete. Nvidia for the time being has no need for that, so they stick to what is more affordable.
#20
trparky
bug: And that is correct, but only once the new node is refined/established enough. 7nm is not, just yet, so prod capacity is at a premium atm.
OK, that makes sense.
kings: AMD had to go to 7nm out of necessity to compete.
That's true, AMD has been lagging behind pretty badly. About the only side of the AMD house that's been doing well is the Ryzen side of the house.
#21
Totally
bug: Oh, that. There are more parties competing for 7nm production, most of them in the mobile business. If you take a 12nm die and move it to 7nm you get a smaller die, but you pay more per square mm. Like I said above, a die half the size of what Nvidia builds on 12nm seems to cost the same when built on 7nm. Plus, Nvidia really doesn't need 7nm now.
If it is as you say, the added cost is not intrinsic, as you are implying.
#22
bug
Totally: If it is as you say, the added cost is not intrinsic, as you are implying.
Go on...
#23
Totally
Meaning it's the demand that is costing more, not the process itself.
#24
bug
Totally: Meaning it's the demand that is costing more, not the process itself.
The output is not where it should be; that's why not everyone can be catered to. You still pay per square mm, it's just that for the time being you pay a lot more than you do for 12nm. For mobile that's worth it because every watt saved is golden. For others, not so much.
#25
Camm
This has got to be one of the most disgusting rebrands I've seen in a while.