
NVIDIA to Build "Volta" Consumer GPUs on TSMC 12 nm Process

btarunr

Editor & Senior Moderator
NVIDIA's next-generation "Volta" GPU architecture got its commercial debut in the most unlikely class of products, with the Xavier autonomous car processor. The actual money-spinners based on the architecture, consumer GPUs, will arrive sometime in 2018. The company will be banking on its old faithful fab, TSMC, to build those chips on a new 12-nanometer FinFET node that's currently under development. TSMC's current frontline process is 16 nm FFC, which debuted in mid-2015, with mass production following in 2016. NVIDIA's "GP104" chip is built on this process.

This could also mean that NVIDIA will slug it out against AMD with its current GeForce GTX 10-series "Pascal" GPUs throughout 2017-18, even as AMD threatens to disrupt NVIDIA's sub-$500 lineup with its Radeon Vega series, scheduled for Q2 2017. The "Volta" architecture could see stacked DRAM technologies such as HBM2 gain more mainstream exposure, although competing memory standards such as GDDR6 aren't too far behind.
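For a rough sense of the numbers at stake, peak memory bandwidth falls straight out of bus width times per-pin data rate. A quick sketch; the GDDR6 per-pin rate is an assumption (the standard isn't final), while the HBM2 and GDDR5X figures are published specs:

```python
# Peak theoretical bandwidth (GB/s) = bus width (bits) / 8 * per-pin rate (Gb/s).
def bandwidth_gbs(bus_bits: int, gbps_per_pin: float) -> float:
    return bus_bits / 8 * gbps_per_pin

print(bandwidth_gbs(4 * 1024, 2.0))  # HBM2, four 1024-bit stacks: 1024.0 GB/s
print(bandwidth_gbs(256, 10.0))      # GDDR5X on the GTX 1080 (GP104): 320.0 GB/s
print(bandwidth_gbs(384, 14.0))      # hypothetical GDDR6 card at 14 Gb/s: 672.0 GB/s
```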



 
I believe TSMC's frontline process is still FF+. FFC is only a variant for more cost-conscious clients.
 
Watch this: Volta will end up running GDDR6, because come 2018 we'll still be waiting for AMD to release a competitor for the top segment, and Nvidia will have no urge to up the ante.
 
Watch this: Volta will end up running GDDR6, because come 2018 we'll still be waiting for AMD to release a competitor for the top segment, and Nvidia will have no urge to up the ante.

If GDDR6 can cover what they need it to, then why not use it?
 
If GDDR6 can cover what they need it to, then why not use it?

That's the point: it'll only cover what they want to do if AMD doesn't compete in the top-end segment.
 
HBM isn't magic sauce. If GDDR5X and its successors provide more than enough bandwidth, they're more cost-effective and better for shareholders as well (cheaper to build, can sell high).
It's actually counterproductive to risk limiting stock and incurring other cost penalties just to use cutting-edge technologies first.
 
That's the point: it'll only cover what they want to do if AMD doesn't compete in the top-end segment.
You're making a big deal out of nothing. HBM never translated into any significant performance advantage to begin with. Implying that GDDR6 (which we know nothing about at this point) will be a handicap in 2018 is... "creative"?
 
So they're just Pascal with an HBM2 controller and a tiny die shrink (16 nm -> 12 nm)? Not entirely sure why TSMC is bothering with 12 nm. You'd think the reward wouldn't justify the cost.

And how is NVIDIA going to manage six DRAM stacks on the interposer when AMD could barely fit four? It kind of suggests the GPU is relatively small, which in turn suggests it is memory-centric (as in compute) rather than graphics-centric.
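For what it's worth, a toy area check captures that line of reasoning. All figures here are hypothetical placeholders, not real die or package dimensions:

```python
# Crude interposer floor-plan check; every number below is an assumption.
INTERPOSER_MM2 = 1000.0   # assumed usable interposer area (roughly Fiji-class)
HBM2_STACK_MM2 = 92.0     # assumed HBM2 stack footprint (~7.75 x 11.87 mm)

def fits(gpu_die_mm2: float, num_stacks: int) -> bool:
    """Does the GPU die plus its DRAM stacks fit on the interposer?"""
    return gpu_die_mm2 + num_stacks * HBM2_STACK_MM2 <= INTERPOSER_MM2

print(fits(600, 4))  # True  -- a big die with four stacks just squeezes in
print(fits(600, 6))  # False -- six stacks around a big die won't fit
print(fits(400, 6))  # True  -- six stacks imply a relatively small GPU
```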

Have to wait and see.
 
So they're just Pascal with an HBM2 controller and a tiny die shrink (16 nm -> 12 nm)? Not entirely sure why TSMC is bothering with 12 nm. You'd think the reward wouldn't justify the cost.

A square with side 16 covers 256 square units; at side 12, it's 144 (a little more than half).
At the same time, this is a rumour about Volta being built on 12 nm. I'm not sure how you infer from that that Volta is "just Pascal with an HBM2 controller and a tiny die shrink".
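For the record, the naive geometry behind that comparison, with the caveat that real foundry nodes rarely shrink linearly with the marketing name:

```python
# Ideal area scaling if every feature shrank linearly with the node name.
old_nm, new_nm = 16, 12
print(old_nm**2, new_nm**2)      # 256 144
print((new_nm / old_nm) ** 2)    # 0.5625 -- a little more than half
```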
 
I'm not sure how you infer from that that Volta is "just Pascal with an HBM2 controller and a tiny die shrink".
Pretty much what Polaris amounted to. But you're right, I'm not sure what Volta by itself implies.
 
Watch this: Volta will end up running GDDR6, because come 2018 we'll still be waiting for AMD to release a competitor for the top segment, and Nvidia will have no urge to up the ante.
HBM is not going to be mainstream from Nvidia anytime soon, and why should they waste money on it when it's not needed? I wish AMD also prioritized real-world value over PR value.

A square with side 16 covers 256 square units; at side 12, it's 144 (a little more than half).
At the same time, this is a rumour about Volta being built on 12 nm.
TSMC "12nm node" is not a real node shrink, but another refinement of 20nm + FinFET. Nvidia might choose to increase the density of the transistors on the refined process though.
 
You're making a big deal out of nothing. HBM never translated into any significant performance advantage to begin with. Implying that GDDR6 (which we know nothing about at this point) will be a handicap in 2018 is... "creative"?

So why is 'stacked DRAM' still on the Volta slides then, seeing as they won't need it anyway?
 
So why is 'stacked DRAM' still on the Volta slides then, seeing as they won't need it anyway?
Same reason we have HBM in Pascal maybe?
Nobody said HBM is not needed. Rather, for consumers it makes little difference other than cost in its current form.
 
So why is 'stacked DRAM' still on the Volta slides then, seeing as they won't need it anyway?
It will be used for GV100; we don't know how many consumer products will get it, though.
 
It will be used for GV100; we don't know how many consumer products will get it, though.

Back to my original statement on Volta: it will launch for that market WITHOUT the stacked DRAM, just like Pascal, because the performance ceiling won't rise enough to push HBM for gaming. That was my prediction, and for some reason I needed four posts to get that point across :)

It fits a pattern that started with Maxwell and the roadmap we had at that time. Nvidia keeps pushing architectural changes further out because the competition doesn't compete. I'm saying Volta will continue along that line.
 
V is for vaporware.
 
This sounds good; I'm looking forward to this upgrade. Hopefully I won't give in to the temptation to upgrade to a 1080 Ti first :(
 
I wish AMD also prioritized real-world value over PR value.

AMD must have a good reason to push HBM, even more so with the low R&D budget they have. Vega with HBC will already be using it much better than Fury did, and in Navi (maybe multiple small chips on an interposer?) it could play an even more important role.
 
AMD must have a good reason to push HBM, even more so with the low R&D budget they have. Vega with HBC will already be using it much better than Fury did, and in Navi (maybe multiple small chips on an interposer?) it could play an even more important role.
Using a fancy new technology certainly gains attention, but in the end real-world value matters. Sticking with a more pragmatic solution would have yielded better profit margins for AMD. HBM offers no significant benefits for a consumer GPU at this point.
 
Using a fancy new technology certainly gains attention, but in the end real-world value matters. Sticking with a more pragmatic solution would have yielded better profit margins for AMD. HBM offers no significant benefits for a consumer GPU at this point.

Because it's for enterprise. Why they didn't use GDDR on the consumer parts, I don't know. Maybe they really couldn't afford to make two chips (lack of funds and all that).
 
Using a fancy new technology certainly gains attention, but in the end real-world value matters. Sticking with a more pragmatic solution would have yielded better profit margins for AMD. HBM offers no significant benefits for a consumer GPU at this point.


Couldn't agree more. Real-life solutions that work are all that matter.
 
HBM is not going to be mainstream from Nvidia anytime soon, and why should they waste money on it when it's not needed? I wish AMD also prioritized real-world value over PR value.

Praise AMD for developing new standards, if you ask me. :) GDDR5, GDDR5X, or GDDR6 chips take up space on a compact card, and to get a wide memory bus you need lots of chips to create a 300-plus-bit or even wider bus.

HBM is more practical: it requires less space and less power, and offers more bandwidth compared to GDDR. The downside of HBM1 was that it could only address up to 4 GB of video RAM, while HBM2 no longer has that limitation (up to 16 GB).

Since AMD had a role in developing that interposer, AMD will have an advantage with HBM and HBM2 chips in the future, while Nvidia and others have to wait in line first.

This was an important deal, and it's why you don't see Nvidia HBM cards yet. The Fury X with HBM is still an excellent all-round graphics card if you ask me. The HBM1 overclocking is also sick: from the base 500 MHz up to 1 GHz, offering a stunning 1,024 GB/s of memory bandwidth. You don't see numbers like that in the GDDR camp.
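For reference, a quick sketch of where that 1,024 GB/s figure comes from, assuming Fury X's four 1024-bit HBM1 stacks and HBM's double-data-rate signaling:

```python
# Bandwidth (GB/s) = total bus width / 8 * effective per-pin rate (2 x clock, DDR).
stacks, bits_per_stack = 4, 1024          # Fury X: 4096-bit total bus
for clock_ghz in (0.5, 1.0):              # stock 500 MHz vs. the 1 GHz overclock
    gbs = stacks * bits_per_stack / 8 * (2 * clock_ghz)
    print(f"{clock_ghz} GHz -> {gbs:.0f} GB/s")  # 512 GB/s stock, 1024 GB/s OC
```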
 
Praise AMD for developing new standards, if you ask me. :) GDDR5, GDDR5X, or GDDR6 chips take up space on a compact card, and to get a wide memory bus you need lots of chips to create a 300-plus-bit or even wider bus.
I don't give them credit for something they didn't invent.

HBM is more practical: it requires less space and less power, and offers more bandwidth compared to GDDR. The downside of HBM1 was that it could only address up to 4 GB of video RAM, while HBM2 no longer has that limitation (up to 16 GB).
HBM is very expensive, has limited supply, and comes in limited size configurations. The higher bandwidth offers no benefits to consumers at this point.

Since AMD had a role in developing that interposer, AMD will have an advantage with HBM and HBM2 chips in the future, while Nvidia and others have to wait in line first.
Completely untrue. Nvidia was, by the way, the first to ship an HBM2-based product.

This was an important deal, and it's why you don't see Nvidia HBM cards yet. The Fury X with HBM is still an excellent all-round graphics card if you ask me. The HBM1 overclocking is also sick: from the base 500 MHz up to 1 GHz, offering a stunning 1,024 GB/s of memory bandwidth. You don't see numbers like that in the GDDR camp.
In your dreams. Fury X was beaten by GTX 980 Ti, and overclocking the memory wouldn't help here. Even though Fury X is the most powerful graphics card made by AMD, it's not even available any more.
 