Thursday, September 17th 2015

NVIDIA "Pascal" GPUs to be Built on 16 nm TSMC FinFET Node

NVIDIA's next-generation GPUs, based on the company's "Pascal" architecture, will reportedly be built on the 16 nanometer FinFET node at TSMC, and not the previously reported 14 nm FinFET node at Samsung. Talks of a foundry partnership between NVIDIA and Samsung fell through, and the GPU maker decided to return to TSMC. The "Pascal" family of GPUs will see NVIDIA adopt HBM2 (high-bandwidth memory 2), with stacked DRAM chips sitting alongside the GPU die on a multi-chip module, similar to AMD's pioneering "Fiji" GPU. Rival AMD, on the other hand, could build its next-generation GCNxt GPUs on the 14 nm FinFET process being refined by GlobalFoundries.
Source: BusinessKorea
Add your own comment

52 Comments on NVIDIA "Pascal" GPUs to be Built on 16 nm TSMC FinFET Node

#1
RCoon
Gaming Moderator
16 + 2 VRM layout?
Posted on Reply
#2
FordGT90Concept
"I go fast!1!11!1!"
That's a pretty old picture I think. It was just to show the size of it. They put that out about the same time AMD announced HBM on Fiji to steal some of AMD's thunder ("ha, ha, we can do it too").

I just hope GloFo and TSMC deliver what was promised on schedule. AMD especially has their continued existence riding on GloFo delivering.
Posted on Reply
#3
RCoon
Gaming Moderator
"FordGT90Concept said:
That's a pretty old picture I think. It was just to show the size of it. They put that out about the same time AMD announced HBM on Fiji to steal some of AMD's thunder ("ha, ha, we can do it too").

I just hope GloFo and TSMC deliver what was promised on schedule. AMD especially has their continued existence riding on GloFo delivering.
Thanks, thought I'd seen it somewhere before. NVidia are usually pretty light on VRMs on reference models.

If AMD got 14nm over NVidia's 16nm, that would be relatively hilarious.
Posted on Reply
#4
FordGT90Concept
"I go fast!1!11!1!"
I think it's a gamble for both. Both would prefer to leave TSMC because of their absolute failure to deliver a 22nm node. AMD went to GloFo (for obvious reasons) and NVIDIA tried to hammer out a deal with Samsung but failed. NVIDIA is the clear loser here, simply because they have to continue to do business with a company that has over-hyped and under-delivered previously.

Assuming both do deliver, yeah, it's pretty funny that AMD will have a process advantage over NVIDIA. Personally, I would be planning and placing orders with both. Both failing seems pretty remote, so in either case they wouldn't be completely hung out to dry. I wonder if NVIDIA even considered GloFo, or are they so prideful that they consider doing so beneath them?
Posted on Reply
#5
HumanSmoke
"RCoon said:
Thanks, thought I'd seen it somewhere before
Yup. It's the mock-up shown at last year's GTC.
"FordGT90Concept said:
I just hope GloFo and TSMC deliver what was promised on schedule. AMD especially has their continued existence riding on GloFo delivering.
Looking a little doubtful at this stage. Apple usually have their finger on the pulse of process tech as far as production yields and delivery schedules are concerned. If the report that TSMC's 16nmFF+ got the nod over Samsung's 14nm for the A10 is on point, then that would indicate why Nvidia went that way. It was probably a certainty that the large GPU contracts went to TSMC (Samsung's large die experience being what it is), but if the whole stack is heading there then either the time frame is slipping, 16nmFF+ is demonstrating a superior performance envelope, or both.
Posted on Reply
#6
Ebo
For me it's like AMD has the upper hand tech-wise, since they have made HBM work together with Samsung, which makes the transition to HBM2 much easier.

Nvidia hasn't gone down that road yet, or at least not as far as we know, so they are still in the learning process. Now they have teamed up with TSMC again, the manufacturer that hasn't done anything regarding HBM yet. To me that's a big gamble.

One thing is to make the tech work, another is to make it work "perfectly", and for me AMD has the upper hand here, since both AMD and Samsung already have the working process going and can fine-tune it.
Posted on Reply
#7
r.h.p
"btarunr said:
NVIDIA's next-generation GPUs, based on the company's "Pascal" architecture, will reportedly be built on the 16 nanometer FinFET node at TSMC, and not the previously reported 14 nm FinFET node at Samsung. Talks of a foundry partnership between NVIDIA and Samsung fell through, and the GPU maker decided to return to TSMC. The "Pascal" family of GPUs will see NVIDIA adopt HBM2 (high-bandwidth memory 2), with stacked DRAM chips sitting alongside the GPU die on a multi-chip module, similar to AMD's pioneering "Fiji" GPU. Rival AMD, on the other hand, could build its next-generation GCNxt GPUs on the 14 nm FinFET process being refined by GlobalFoundries.
Source: BusinessKorea
So does this mean, in theory, that AMD could at last have a jump on Nvidia for their next flagship card, e.g. a Radeon R10 x Fury?
Posted on Reply
#8
NC37
This is a gamble, especially with quality. It seems like just about every chip that had trouble, where there were mass recalls or vendors shelling out loads on warranty repairs, came out of TSMC. Although it may have been GloFo too.

More than likely nVidia pulled an nVidia and demanded more than it was worth from Samsung. They did the same thing to Microsoft on the Xbox and then again to Sony on the PS3. It was no surprise no one would deal with their chips in consoles after that.
Posted on Reply
#9
HumanSmoke
"Ebo said:
For me it's like AMD has the upper hand tech-wise, since they have made HBM work together with Samsung, which makes the transition to HBM2 much easier.
Wrong company. AMD's cards use HBM from SK Hynix.
"Ebo said:
Nvidia hasn't gone down that road yet, or at least not as far as we know, so they are still in the learning process.
Probably not much of a learning process - after all, they've had the benefit of AMD blazing that particular trail.
"Ebo said:
Now they have teamed up with TSMC again, the manufacturer that hasn't done anything regarding HBM yet. To me that's a big gamble.
1. Duh! Who do you think makes AMD's Fiji?
2. TSMC don't assemble the package. TSMC supply the GPU silicon. Hynix (or Samsung) supply the HBM ICs and interposer silicon, and in the case of AMD's Fury, another third party, Amkor, assembles the package.
"Ebo said:
One thing is to make the tech work, another is to make it work "perfectly", and for me AMD has the upper hand here, since both AMD and Samsung already have the working process going and can fine-tune it.
If history tells us anything, it is that both camps tend to be very close on timetable and performance. R&D commitment is the key going forward.
On top of that, it takes 100 engineer-years to bring out a 28nm chip design. “Therefore, a team of 50 engineers will need two years to complete the chip design to tape-out. Then, add 9 to 12 months more for prototype manufacturing, testing and qualification before production starts. That is if the first silicon works,” he [J.K. Wang, TSMC's VP for 12" wafer operations] said. “For a 14nm mid-range SoC, it takes 200 man-years. A team of 50 engineers will need four years of chip design time, plus add nine to 12 months for production.” - SemiEngineering
"NC37 said:
This is a gamble. Especially with quality. Seems like just about every chip that had trouble, where there was mass recalls or vendors shelling out loads on warranty repairs came out of TSMC. Although, may have been GloFo too.
GloFo is perpetually behind schedule, partly from their way-too-optimistic timetables. If GloFo had kept to their word, they would have been pumping out 14nm-XM silicon since mid-2014. FWIW, both Samsung and TSMC are at least a quarter behind schedule. The only difference is that, by all accounts, TSMC hasn't had business contracts cut because of bad yields.
"NC37 said:
More than likely nVidia pulled a nVidia and demanded more than it was worth for Samsung.
Yes, this makes perfect sense. :rolleyes: More likely, risk silicon is showing which process is better suited to getting the parts out on time and at the required performance parameters (I also wouldn't be surprised to see AMD follow suit on the large chips).
Posted on Reply
#10
nemesis.ie
"Ebo said:
for me it's like AMD has the upper hand tech-wise, since they have made HBM work together with Samsung which makes the transition to HBM2 much easier.
I think SK Hynix is AMD's HBM partner, not Samsung?
Posted on Reply
#11
natr0n
I thought AMD/Hynix had exclusivity on HBM...?
Posted on Reply
#12
FordGT90Concept
"I go fast!1!11!1!"
Supply. Only AMD is getting HBM chips to meet AMD's demand for them. AMD won't have exclusive access forever. It was the bone SK Hynix threw to AMD to get AMD to sign up for their experiment.
Posted on Reply
#13
bug
Well, this news is pretty irrelevant, because we don't know where TSMC or GloFo are with their 16/14nm processes. Unless one of them makes it work and the other one doesn't, it won't matter who builds what.

The real unknown for this generation is how much more computing power will be unleashed by moving from 28 to 16 or 14nm. The 22nm blunder hurt everyone and kept both Nvidia and AMD rehashing old designs. (I know, Nvidia worked some magic on power consumption, but that isn't very relevant on a desktop and came at the cost of gimped compute power - a worthwhile cost for many, but still a cost.)
Posted on Reply
#14
buggalugs
AMD gets exclusive rights to HBM1 and first dibs on HBM2, but that doesn't mean AMD will necessarily be first to market with the new HBM2 cards.

I doubt we will see a huge performance increase with 14/16nm: cards at best 50% faster than the current gen, and even 30% wouldn't surprise me, at least for first-gen cards. AMD and Nvidia will leave a little in the tank for a refresh.

The biggest jump on past process nodes is like 75%, and most are usually much less. The aim for them is to keep us buying upgrades.
Posted on Reply
#15
64K
I expect there to be shortages of GPUs at first as TSMC tries to improve yields. Most likely price gouging from retailers too, but those who can be patient and wait should be very impressed with the performance, especially if they are upgrading from Kepler. One of the rumors about Pascal is that it will be better at compute than Maxwell. There will probably be a Titan, like the original, that will be considered a good deal for people who need one for work but can't afford the full professional card.

Where my interest lies is strictly in a gaming card, and I expect a single flagship Pascal to be able to handle 4K at 60 FPS in just about every game. The midrange Pascal should be faster than a 980 Ti, and an entry-level Pascal should be able to handle almost every game at 1080p at 60 FPS. I think people expecting an increase in performance like we got with Maxwell over Kepler will be in for a pleasant surprise: I'm expecting a much larger increase from Pascal over Maxwell.
Posted on Reply
#16
bug
"buggalugs said:
AMD gets exclusive rights to HBM1 and first dibs on HBM2, but that doesn't mean AMD will necessarily be first to market with the new HBM2 cards.

I doubt we will see a huge performance increase with 14/16nm: cards at best 50% faster than the current gen, and even 30% wouldn't surprise me, at least for first-gen cards. AMD and Nvidia will leave a little in the tank for a refresh.

The biggest jump on past process nodes is like 75%, and most are usually much less. The aim for them is to keep us buying upgrades.
At 28nm, a 28 × 28 nm square is 784 sq nm; at 16nm, it's 256 sq nm. So you can fit roughly 3x more transistors in the same area (simplifying, because it's a bit more complicated in real life). So yes, a considerable (if not huge) performance increase is to be expected, imho.
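The arithmetic above can be sanity-checked with a tiny script. This is only the idealized, purely geometric version of the claim; as pointed out later in the thread, real nodes don't scale this way, so treat the numbers as an upper bound:

```python
# Idealized density gain from a node shrink, assuming feature area
# scales with the square of the node name (real gains are lower).
def ideal_density_gain(old_nm: float, new_nm: float) -> float:
    """How many times more transistors fit in the same die area."""
    return (old_nm / new_nm) ** 2

print(round(ideal_density_gain(28, 16), 2))  # 3.06
print(round(ideal_density_gain(28, 14), 2))  # 4.0
```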
Posted on Reply
#17
PowerPC
"bug said:
At 28nm, a 28 × 28 nm square is 784 sq nm; at 16nm, it's 256 sq nm. So you can fit roughly 3x more transistors in the same area (simplifying, because it's a bit more complicated in real life). So yes, a considerable (if not huge) performance increase is to be expected, imho.
What tells you they won't just make the dies smaller like Intel?
Posted on Reply
#18
HumanSmoke
"bug said:
At 28nm, a 28 × 28 nm square is 784 sq nm; at 16nm, it's 256 sq nm. So you can fit roughly 3x more transistors in the same area (simplifying, because it's a bit more complicated in real life). So yes, a considerable (if not huge) performance increase is to be expected, imho.
Nope. Scaling isn't linear, and 28nm is the name of the process node, not the size of the silicon.
TSMC have already stated that 16nmFF+ has twice the transistor density of a comparable 28nm IC, so if big Pascal is (for argument's sake) ~16bn transistors, then it is comparable in size to GM200 at 8bn transistors - ballpark*

*Uncore (I/O, command processor, memory controllers, cache, transcode engine, etc.) has a lower transistor density than the shader core, so any calculation needs to take into account the reduced uncore (due to the reduction in size of HBM's memory controllers relative to GDDR5 IMCs).
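As a rough illustration of that ballpark figure, here is a sketch only: it takes TSMC's quoted 2x density at face value, applies it uniformly across the die (which the uncore caveat above says is optimistic), and uses GM200's published ~601 mm² / ~8bn transistors as the baseline:

```python
# Sketch: die-size estimate for a hypothetical 16bn-transistor big Pascal,
# scaling GM200's published figures by TSMC's quoted 2x density gain.
GM200_AREA_MM2 = 601.0      # GM200 die size (published)
GM200_TRANSISTORS_BN = 8.0  # GM200 transistor count (published)
DENSITY_GAIN = 2.0          # TSMC's quoted 28nm -> 16nmFF+ figure

def est_area_mm2(transistors_bn: float) -> float:
    """Estimated die area, assuming uniform density across the die."""
    density_28 = GM200_TRANSISTORS_BN / GM200_AREA_MM2  # bn per mm^2
    return transistors_bn / (density_28 * DENSITY_GAIN)

print(round(est_area_mm2(16.0)))  # 601 -> same ballpark as GM200
```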
"buggalugs said:
AMD gets exclusive rights to HBM1 and first dibs on HBM2, but that doesn't mean AMD will necessarily be first to market with the new HBM2 cards.
AMD may have first option on Hynix's HBM2, but I doubt that they have a lock on Samsung's as well.
Posted on Reply
#19
john_
If I am not mistaken, only Intel can come out and say "I have a 14nm process". Samsung's and TSMC's 14nm and 16nm are more or less something between 20nm and 14/16nm, with a good dose of marketing.

Anyway, Pascal is probably 12 months away. Until then many things can happen. TSMC does have the advantage of experience over Samsung, and I am not sure how good Samsung's process will be for big GPUs that eat 200W of TDP, rather than just small SoCs that are happy with 4-5W.
Posted on Reply
#20
naxeem
The problem is that, allegedly, 14nm and 16nm FinFET from TSMC and GF achieve the same density as would be expected from a shrink of current tech to 20nm. That means the pure transistor count would increase by only 40% at best.
Posted on Reply
#21
dj-electric
*raises hand*

Uhm... there shouldn't be any reason for Nvidia not to use GDDR5 for their entry-level cards. Hell, even for some of the mid-range.

I don't want a GTX 750 Ti-esque card to cost $200.
Posted on Reply
#22
ironcerealbox
28^2 is exactly 4 times 14^2: 28:14 is 2:1 and thus 4:1 when squared. However, 28:16 gives a 49:16 ratio after squaring, or 3.0625:1. Simple math aside, 14nm chips at the same die size as a 16nm chip will give you approximately 30.6% more transistors. With the combination of a less efficient architecture but more transistors for the same die size, and assuming AMD uses a design, layout, and instruction sets slightly inferior to Nvidia's for GAMING purposes, I can see AMD competing equally with Nvidia overall. Sure, they might need more transistors to do it, but at least they can compete.

Don't forget that GloFo has a partnership and technology agreement with Samsung and that is how GloFo got 14nm FinFET. It will be interesting what the next couple years will bring...

I just noticed the remark about the non-traditional scaling from 28nm to 14nm/16nm, and that they aren't really true 14nm or 16nm but something between that and 20nm. >_<

So, that was pointless on my part.
Posted on Reply
#23
TheinsanegamerN
"Dj-ElectriC said:
*raises hand*

Uhm... there shouldn't be any reason for Nvidia not to use GDDR5 for their entry-level cards. Hell, even for some of the mid-range.

I don't want a GTX 750 Ti-esque card to cost $200.
Lower power consumption is a pretty good reason. The 960M (750 Ti) and the 965M (960) are great mobile cards, but if HBM2 could cut power consumption by 10-15 watts, it could make a big difference heat-wise, and battery-wise, for gaming laptops that use it. The smaller size of the GPU package could also be a boon for small laptops, like the Alienware 13 or Razer Blade 14.

It could also allow cards like the 960 to return to a single-slot design, as opposed to the current dual-slot design, along with GPU power consumption taking another dip. I'd love to have my 770's performance in a sub-100-watt GPU. And if HBM2 is only used on high-end chips, the price of HBM will fall much more slowly than if it is used everywhere.
Posted on Reply
#24
GhostRyder
Well, all that matters in the end is how it is used. I will be interested to see some new GPUs on a die shrink, as that is when I'll make my next purchase; however, I see it being a long while before anything comes to market.
Posted on Reply
#25
erixx
Did AMD not patent this type of chip+RAM? Why not?
Posted on Reply
Add your own comment