Friday, February 26th 2016

NVIDIA to Unveil "Pascal" at the 2016 Computex

NVIDIA is reportedly planning to unveil its next-generation GeForce GTX "Pascal" GPUs at the 2016 Computex show in Taipei, scheduled for early June. The unveiling doesn't necessarily mean market availability. SweClockers reports that problems with NVIDIA supplier TSMC getting its 16 nm FinFET node up to speed, especially following the recent Taiwan earthquake, could delay market availability to late summer or even beyond. It remains to be seen whether the "Pascal" architecture debuts as an all-mighty "GP100" chip, or as a smaller, performance-segment "GP104" that will be peddled as enthusiast-segment by virtue of being faster than the current big chip, the GM200. NVIDIA's next-generation GeForce nomenclature will also be particularly interesting to watch, given that the current lineup is already at the GTX 900 series.
Source: SweClockers

97 Comments on NVIDIA to Unveil "Pascal" at the 2016 Computex

#26
HumanSmoke
TheGuruStudSee the yields of AMD and nvidia every time a GPU launches on a new node that TSMC claimed was ready.
Name a foundry that hasn't had ramp issues on a new process. You hold up Samsung as some process leader yet what have they commercially produced on 14nm that wasn't "a tiny arm chip" as you put it?
TheGuruStudAnd your proof is more arm?
You are going to tell me that a BGA-1506 package is "a tiny arm chip" again :shadedshu:
Posted on Reply
#27
rruff
rtwjunkieThat role has been laid out for the 950SE. Since they announced it so late in Maxwell's life as the 750 replacement, my guess is it will fill that slot long into the Pascal cycle.
You may be right, but I think we won't wait long even if it is not the first to be introduced. Nvidia wants to keep competing in the laptop dGPU market (which they currently dominate by a huge margin), and AMD has said they will introduce Polaris for this market this year. This is where power consumption is super important, so it makes sense to use the latest architecture. That's why the 750s were the first to get Maxwell; they're the desktop version of the GTX 840-860M and 940-960M, which are all GM107 chips. It would be easy to do the same for Pascal, and it would make marketing sense if AMD uses Polaris in the low-end gaming market.
Posted on Reply
#28
ArdWar
HumanSmokeName a foundry that hasn't had ramp issues on a new process. You hold up Samsung as some process leader yet what have they commercially produced on 14nm that wasn't "a tiny arm chip" as you put it?

You are going to tell me that a BGA-1506 package is "a tiny arm chip" again :shadedshu:
Package size, and in this case pin count, doesn't necessarily correlate with die size and complexity. An ASIC with many integrated peripherals could have a very low pin count relative to its complexity, while a general-purpose chip might be pad-limited (it runs out of pin area before it runs out of die area).

Nevertheless, Zynq is a hybrid FPGA+SoC, and if I'm not mistaken, its much bigger brother, the Virtex FPGA, has also started shipping. FPGAs are probably second only to GPUs in sheer number of transistors.
Posted on Reply
#29
HumanSmoke
ArdWarPackage size, and in this case pin count, doesn't necessarily correlate with die size and complexity.
That should be a given.
I would also have thought people could think laterally and use the package dimensions to get an approximate size of the die, as shown at the beginning of some of Xilinx's promotional videos and product literature - which shows that the die is still comfortably larger than the ~100mm² ARM chips currently in production at Samsung. Bear in mind the Zynq SKU shown below is one of the smaller-die UltraScale+ chips.
ArdWarNevertheless, Zynq is a hybrid FPGA+SoC, and if I'm not mistaken, its much bigger brother, the Virtex FPGA, has also started shipping. FPGAs are probably second only to GPUs in sheer number of transistors.
True enough. Virtex-7/-7XT is a pretty big FPGA on TSMC's 28nm (and TSMC's 65nm for the FPGA's interposer). The die is ~375-385mm² with 6.8 billion transistors - basically the size of a performance GPU or enthusiast CPU, but with a greater transistor density than either.
Posted on Reply
#30
ManofGod
Wood screws? Just a paper-launch reveal? Actual release date and possible performance numbers? $1000 initial cost? 24GB of RAM on a Titan version would be cool, but not really doable I suppose.
Posted on Reply
#31
FordGT90Concept
"I go fast!1!11!1!"
AMD is likely to have up to 32 GB of HBM2, which translates to 8 GB per stack. NVIDIA will likely offer the same.
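For what it's worth, the capacity and bandwidth math is simple. Here's a minimal back-of-the-envelope sketch assuming the headline HBM2 figures (8 GB per stack, a 1024-bit bus per stack, 2 Gbps per pin); the per-pin speed is an assumption about what shipping stacks will reach, not a confirmed spec for any card:

```python
# Back-of-the-envelope HBM2 arithmetic. Assumed figures: 8 GB per stack,
# a 1024-bit interface per stack and 2.0 Gbps per pin (headline HBM2 numbers,
# not confirmed specs for any particular card).
STACKS = 4
GB_PER_STACK = 8            # 8-Hi stack of 8 Gb dies
BUS_WIDTH_BITS = 1024       # per stack
PIN_SPEED_GBPS = 2.0        # per pin

capacity_gb = STACKS * GB_PER_STACK
bandwidth_gbs = STACKS * BUS_WIDTH_BITS * PIN_SPEED_GBPS / 8  # bits -> bytes

print(f"{STACKS} stacks -> {capacity_gb} GB, ~{bandwidth_gbs:.0f} GB/s aggregate")
# 4 stacks -> 32 GB, ~1024 GB/s aggregate
```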
Posted on Reply
#32
newtekie1
Semi-Retired Folder
btarunrIt remains to be seen whether the "Pascal" architecture debuts as an all-mighty "GP100" chip, or as a smaller, performance-segment "GP104" that will be peddled as enthusiast-segment by virtue of being faster than the current big chip, the GM200.
This will come down entirely to how AMD Polaris performs. If we get a repeat of AMD's last few launches, then nVidia's mid-range GP104 will match or beat AMD's top end, and we won't see GP100 until AMD's next generation. But I'm hoping AMD manages to pull a rabbit out of their hat with Polaris and we finally see the 2nd from the top GP100 at $300 like it should be.

The sad truth is, nVidia has likely banked a crap-ton of money by selling mid-range GPUs for $500 that would normally have sold for no more than $200 in the past. And when they finally have to release their high-end chip, they sell it for $650+. AMD hasn't had this luxury, and we all know cash isn't something they have a lot of; this gives nVidia a very nice advantage.
Posted on Reply
#33
Steevo
The mid-range and low-power segments are where the real money is. Smaller chips mean more dies per wafer and a larger product stack if a few in the batch have flaws. I am thinking the days of high-performance, large-die initial offerings are over. If they are smart, perhaps they will roll out just enough of the big chips to get reviews, and then roll out the rest as their primary movers in the $100-300 performance segment. Considering the recent Steam hardware survey shows that mid-range cards lead the pack, it's their best option.
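To put rough numbers on the "more per wafer" point, here's a quick sketch using the usual gross dies-per-wafer approximation plus a simple Poisson yield model. The die areas and the defect density are assumed placeholders for illustration, not foundry figures:

```python
import math

def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    """Common gross dies-per-wafer approximation (ignores scribe lines)."""
    r = wafer_diameter_mm / 2
    return (math.pi * r**2 / die_area_mm2
            - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

def good_dies(die_area_mm2, defects_per_cm2=0.2):
    """Gross dies scaled by a simple Poisson yield model, Y = exp(-D*A).
    The defect density is an assumed placeholder, not a real foundry number."""
    yield_fraction = math.exp(-defects_per_cm2 * die_area_mm2 / 100)  # mm^2 -> cm^2
    return dies_per_wafer(die_area_mm2) * yield_fraction

# Assumed die sizes: big-chip, performance, and mainstream GPU classes.
for area in (600, 300, 150):
    print(f"{area} mm^2: ~{dies_per_wafer(area):.0f} gross, ~{good_dies(area):.0f} good")
```

On these assumptions, halving the die area roughly doubles the gross dies per wafer and improves yield on top of that, which is the economics Steevo is pointing at.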
Posted on Reply
#34
FordGT90Concept
"I go fast!1!11!1!"
Polaris, like Pascal, is likely to be twice as fast as cards available today using less power. 28nm to 14/16nm is a huge jump, as the slide in the OP shows.

Polaris is expected to have "up to 18 billion transistors" where Pascal has about 17 billion.


I still think the only reason Maxwell can best Fiji is that Maxwell's async compute is half software, half hardware, whereas AMD's is all hardware. The transistors GCN spends on making async compute work in hardware were, in Maxwell, spent on increasing compute performance instead. It's not clear whether Pascal has a complete hardware implementation of async compute.

As with all multitasking, there is an overhead penalty. So long as you aren't using async compute (which not much software does, regrettably), Maxwell will come out ahead because everything is synchronous.
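As a toy illustration of that trade-off (a made-up timing model with illustrative numbers, not how either vendor's scheduler actually behaves), overlapping only pays off when there is compute work to hide behind the graphics work:

```python
# Toy timing model only -- the numbers are illustrative, not measured, and this
# is not a description of any real GPU scheduler.
GRAPHICS_MS = 12.0         # graphics work per frame
COMPUTE_MS = 3.0           # compute work per frame
SWITCH_OVERHEAD_MS = 0.5   # assumed cost of juggling the extra queue

def serial(graphics, compute):
    # Everything on one queue: total time is simply the sum.
    return graphics + compute

def overlapped(graphics, compute, overhead):
    # Ideal overlap: compute hides behind graphics, but the scheduler pays
    # a fixed switching cost whether or not the overlap actually helps.
    return max(graphics, compute) + overhead

print("with compute work: ", serial(GRAPHICS_MS, COMPUTE_MS),
      "ms vs", overlapped(GRAPHICS_MS, COMPUTE_MS, SWITCH_OVERHEAD_MS), "ms")
print("compute queue idle:", serial(GRAPHICS_MS, 0.0),
      "ms vs", overlapped(GRAPHICS_MS, 0.0, SWITCH_OVERHEAD_MS), "ms")
```

With compute work submitted, the overlapped path wins; with nothing to overlap, the purely synchronous path is faster because it never pays the switching cost.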


I think the billion transistor difference comes from two areas: 1) AMD is already familiar with HBM and interposers. They knew the exact limitations they were facing walking into the Polaris design so they could push the limit with little risk. 2) 14nm versus 16nm so more transistors can be packed into the same space.

Knowing the experience Apple had with both processes, it seems rather likely that AMD's 14nm chips may run hotter than NVIDIA's 16nm chips. This likely translates to lower clocks but, with more transistors, more work can be accomplished per clock.

I think it ends up being very competitive between the two. If Samsung improved their process since Apple's contract (which they should have, right?), AMD could end up with a 5-15% advantage over NVIDIA.
Posted on Reply
#35
rruff
FordGT90ConceptPolaris, like Pascal, is likely to be twice as fast as cards available today using less power. 28nm to 14/16nm is a huge jump, as the slide in the OP shows.
Isn't this like going from Sandy Bridge to Broadwell? Only if you'd had a few years to tweak Sandy to get the most out of it.

I think people expecting a 2x jump will be disappointed. A small increase in performance (~20%) with a bigger reduction in power consumption would be more like it. And don't expect any of it to be cheap.
Posted on Reply
#36
qubit
Overclocked quantum bit
Hail Hydra!
Posted on Reply
#37
FordGT90Concept
"I go fast!1!11!1!"
rruffIsn't this like going from Sandy Bridge to Broadwell? Only if you'd had a few years to tweak Sandy to get the most out of it.

I think people expecting a 2x jump will be disappointed.
No, because Intel has been making its processors smaller and smaller, relatively speaking.

Also bear in mind that Intel has been making the cores smaller and increasing the size of the GPU with each iteration. Each generation is a tiny bit faster...and cheaper for Intel to produce.

In GPUs, the physical dimensions stay more or less the same (right now, limited by the interposer).

Posted on Reply
#38
rruff
Isn't that only relevant to the highest end chip? That is a niche market, and it won't be cheap. Granted Intel has had no competition lately, while Nvidia has at least a little. At the end of the day what 99% of us care about is FPS/$ and FPS/W, not absolute FPS for the biggest chip. Big gains in FPS/$ will only occur if there is fierce competition. We have only 2 players and one is hanging on by a thread. I don't see it happening, but it would be cool if it did.
Posted on Reply
#39
FordGT90Concept
"I go fast!1!11!1!"
The non-cutdown Pascal and Polaris chips will no doubt run for at least $600 USD but that's normal. AMD has a card that competes with NVIDIA at every price point except Titan-Z but that's coming with the Fury X2.


AMD is not "hanging by a thread" in the graphics department. They have 20% market share in the discrete card market and 100% of the console market.
Posted on Reply
#40
HumanSmoke
FordGT90ConceptPolaris, like Pascal, is likely to be twice as fast as cards available today using less power. 28nm to 14/16nm is a huge jump, as the slide in the OP shows.
FordGT90ConceptPolaris is expected to have "up to 18 billion transistors" where Pascal has about 17 billion. ....I think the billion transistor difference comes from two areas: 1) AMD is already familiar with HBM and interposers. They knew the exact limitations they were facing walking into the Polaris design so they could push the limit with little risk. 2) 14nm versus 16nm so more transistors can be packed into the same space.
Both figures come from an extrapolation (guesstimate) done by 3DCenter. The transistor count extrapolation is based almost entirely upon TSMC's 16nmFF product blurb:
TSMC's 16FF+ (FinFET Plus) technology can provide above 65 percent higher speed, around 2 times the density, or 70 percent less power than its 28HPM technology.
All 3DC did was basically double Fiji's and GM200's counts, deduct the uncore that was represented twice, and, in Pascal's case, add a ballpark figure for the extra SFUs (FP64) that they knew would be included.
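A rough re-creation of that kind of guesstimate, using the known 28 nm transistor counts (GM200 ~8.0 billion, Fiji ~8.9 billion) and TSMC's "around 2 times the density" claim. The uncore and FP64 allowances below are placeholder assumptions of mine, not 3DCenter's figures:

```python
# Rough re-creation of a 3DCenter-style guesstimate as described above.
# Known 28 nm transistor counts (billions): GM200 ~8.0, Fiji ~8.9.
# UNCORE_B and FP64_EXTRA_B are assumed placeholders, not 3DC's numbers.
GM200_B, FIJI_B = 8.0, 8.9
UNCORE_B = 1.0        # assumed transistors that don't double (PHY, display, etc.)
FP64_EXTRA_B = 1.5    # assumed budget for big Pascal's added FP64 units

polaris_guess = 2 * FIJI_B - UNCORE_B                 # doubled Fiji, uncore counted once
pascal_guess = 2 * GM200_B - UNCORE_B + FP64_EXTRA_B  # doubled GM200 plus FP64 budget

print(f"doubled Fiji:  {2 * FIJI_B:.1f}B, minus uncore ~{polaris_guess:.1f}B")
print(f"doubled GM200: {2 * GM200_B:.1f}B, minus uncore plus FP64 ~{pascal_guess:.1f}B")
```

Doubling Fiji lands near the "up to 18 billion" headline before the uncore deduction, and doubling GM200 plus an FP64 allowance lands in the region of the ~17 billion figure quoted for Pascal, which is the whole extent of the method.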
FordGT90ConceptKnowing the experience Apple had with both processes, it seems rather likely that AMD's 14nm chips may run hotter than NVIDIA's 16nm chips. This likely translates to lower clocks but, with more transistors, more work can be accomplished per clock.
Yes, the comparisons between the Samsung- and TSMC-manufactured Apple A9s would tend to bear that out. The "always on" nature of GCN's hardware will further add to the imbalance - although if Nvidia adapt their thread scheduling technology (HyperQ) to work with the graphics pipeline, the power requirement differences could still come down to the foundry of manufacture. I am still not convinced that all of AMD's GPUs will be built on 14nm. It really wouldn't surprise me if the flagship was a TSMC product. The added bonus for AMD would be that, since TSMC could also provide the interposer, the supply chain could be somewhat more vertically integrated.
FordGT90ConceptIn GPUs, the physical dimesions stay more or less the same (right now, limited by interposer)
The limiting factor for GPU size is the reticle limit of the lithography tools, at ~625mm², which is why GPUs sit at around 600mm² to allow for a reasonable keep-out space between dies for cutting. It is also the reason why large chips like the 20+ billion transistor Virtex UltraScale XCVU440 and its smaller FPGA brethren are made in "slices" and then mounted on a common interposer. The interposer itself can be both smaller than the package (as per Fiji) and not that limited in size to begin with. There are companies putting out larger interposers than the 1101mm² that UMC uses for Fiji. Bear in mind that the TSMC-manufactured XCVU440 package is 55mm x 55mm and sits squarely atop an interposer made by the same company.
Posted on Reply
#41
newtekie1
Semi-Retired Folder
FordGT90ConceptAMD is not "hanging by a thread" in the graphics department. They have 20% market share in the discrete card market and 100% of the console market.
100% of the console market doesn't help when they had to bid so low just to get the contracts that they aren't making any money on the deal.
Posted on Reply
#42
FordGT90Concept
"I go fast!1!11!1!"
They wouldn't be making them if they couldn't turn a profit. Yeah, it likely isn't much per unit, but all of the console developers are intimately familiar with AMD GPUs. AMD doesn't need to pad developer pockets to make them use its stuff the way NVIDIA does.

The console market could turn into a huge boon for AMD as more developers use async compute. Xbox One has 16-32 compute queues while the PlayStation 4 has 64. Rise of the Tomb Raider may be the only game to date that uses them for volumetric lighting. This is going to increase as more developers learn to use the ACEs. As these titles are ported to Windows, NVIDIA cards may come up lacking (depending on whether they moved async compute into hardware for Pascal).


Then again, the reason NVIDIA has 80% of the market while AMD doesn't is shady backroom deals. AMD getting the performance lead won't change that.
Posted on Reply
#43
rruff
FordGT90ConceptThen again, the reason NVIDIA has 80% of the market while AMD doesn't is shady backroom deals.
Really? Nothing to do with hardware?
Posted on Reply
#44
FordGT90Concept
"I go fast!1!11!1!"
AMD cards have been competitive with NVIDIA cards, dollar for dollar, for decades.

AMD's processors bested Intel processors from K6 to K8. Their market share grew during that period but they didn't even come close to overtaking Intel. It was later discovered Intel did shady dealings of their own (offering rebates to OEMs that refused to sell AMD processors) and AMD won a lawsuit that had Intel paying AMD.

It's all about brand recognition. People recognize Intel and, to a lesser extent, NVIDIA. Only tech junkies are aware of AMD. NVIDIA, like Intel, is in a better position to broker big deals with OEMs.
Posted on Reply
#45
newtekie1
Semi-Retired Folder
FordGT90ConceptAMD cards have been competitive with NVIDIA cards, dollar for dollar, for decades.
No they haven't. When AMD took the lead they overpriced their cards and failed to provide good value. The 7950 was a good card, but they overpriced it at launch. The 290 was too late to the game, and nVidia just cut prices to best it. And the 970 was the card to beat for almost a year before AMD answered it with the 390, and even then the 390 didn't best the 970's price-to-performance at launch. Then everyone was biting their lips waiting for the Fury Nano - that had to be the card to best the 970, right? Nope. Sure it was faster than the 970, but they priced the damn thing at more than double the 970's price, giving it one of the worst price/performance values, second only to the Titan X. If you go back through the reviews on TPU, there aren't many times when AMD is leading in price/performance, but there are a lot of times when nVidia is.

They have missed some pretty good opportunities. The Nano could have been great if they hadn't overpriced it (the Fury X too). The Nano at $450 at launch would have flown out the door. The 390 is a decent contender now, but it's too late now. The 390 needed to be on the market four months sooner than it was, and cost $20 less than the 970, not $20 more. You don't gain market share by simply matching what your competitor has had on the market for a few months; you have to offer the consumer something worth switching to.
Posted on Reply
#46
FordGT90Concept
"I go fast!1!11!1!"
And the Titan isn't overpriced now? Cards selling for >$500 aren't exactly volume movers.

HD 7950 -> R9 280(X)

R9 290X has been out since 2013. GTX 970 didn't come for another year. 2014 and 2015 were crappy years for cards not because of what AMD and NVIDIA did but because both were stuck on TSMC 28nm. The only difference is that NVIDIA debuted a new architecture while AMD didn't do much of anything.

Fiji is an expensive chip. They couldn't sell Nano on the cheap because the chip itself is not cheap.

390 is effectively a 290 with clocks bumped and 8 GiB of VRAM (which only a handful of applications at ridiculous resolutions can even reach). 390, all things considered, is about on par with 290X which is only about a 13% difference. Not something to write home about.
Posted on Reply
#47
newtekie1
Semi-Retired Folder
FordGT90ConceptAnd the Titan isn't overpriced now? Cards selling for >$500 aren't exactly volume movers.
Titan is a niche product - in fact, it is a stupid product - so I'm ignoring it. But just because nVidia has one outrageously overpriced product doesn't mean the rest of their portfolio is overpriced too. And you are exactly right, $500+ products aren't volume movers. That is why I talked about the 970, the 390, and the 290. They are in that sweet spot of price, beyond which you start to spend a heck of a lot more money for a little more performance. That is why I said pricing the Nano at $450 instead of $650 is what AMD needed to do. AMD basically made Fiji, their first new GPU in almost two years, completely irrelevant to the market. Even the regular Fury was overpriced. There is no way $550 was a good price point for it. It would have turned heads at $400, but at $550 there was no reason to buy it over the cheaper 980 or the much, much cheaper 970.
Posted on Reply
#48
HumanSmoke
FordGT90ConceptAMD cards have been competitive with NVIDIA cards, dollar for dollar, for decades.
AMD's processors bested Intel processors from K6 to K8. Their market share grew during that period but they didn't even come close to overtaking Intel. It was later discovered Intel did shady dealings of their own (offering rebates to OEMs that refused to sell AMD processors) and AMD won a lawsuit that had Intel paying AMD.
It's all about brand recognition. People recognize Intel and, to a lesser extent, NVIDIA. Only tech junkies are aware of AMD. NVIDIA, like Intel, is in a better position to broker big deals with OEMs.
That is a very blinkered view of the industry IMO.
AMD did have a superior product in K7 and K8 and were competitive during that era - and for certain weren't helped by Intel's predatory practices (nor were Cyrix, C&T, Intergraph, Seeq and a whole bunch of other companies). It is also a fact that AMD were incredibly slow to realize the potential of their own product. As early as 1998 there were doubts about the company's ability to fulfill contracts and supply the channel, and while the cross-licence agreement with Intel allowed AMD to outsource 20% of its x86 production, Jerry Sanders refused point blank to do so. By the time shortages were acute, the company poured funds it could ill afford into developing Dresden's Fab 36 at breakneck speed and cost rather than just outsourcing production to Chartered Semi (which they eventually did, way too late in the game), UMC, or TSMC. AMD never took advantage of the third-party provision of the x86 agreement past 7% of production when sales were there for the taking. The hubris of Jerry Sanders and his influence on lapdog Ruiz was as true in the early years of the decade as it was when AMD's own ex-president and COO, Atiq Raza, reiterated the same point in 2013.

As for the whole Nvidia/AMD debate, that is less about hardware than the entire package. Nearly twenty years ago ATI was content to just sell good hardware, knowing that a good product sells itself - which was a truism back in the day when the people buying hardware were engineers for OEMs rather than consumers. Nvidia saw what SGI was achieving with a whole ecosystem (basically the same model that served IBM so well until Intel started dominating the big-iron markets), allied with SGI - and was then gifted the pro graphics area in the lawsuit settlement between the two companies - and reasoned that there was no reason they couldn't strengthen their own position in a similar manner. Cue 2002-2003, and the company began design of the G80 and a defined strategy of pro software (CUDA) and gaming (TWIMTBP). The company are still reaping the rewards of a strategy defined 15 years ago. Why do people still buy Nvidia products? Because they laid the groundwork years ago and many people were brought up with the hardware and software - especially via boring OEM boxes and TWIMTBP splash screens at the start of games. AMD could have made massive inroads into that market, but shortsightedness in cancelling ATI's own GIGT program basically put the company back to square one in customer awareness, all because AMD couldn't see the benefit of a gaming development program or of actively sponsoring OpenCL. Fast forward to the last couple of years, and the penny has finally dropped, but it is always tough to topple a market leader if that leader basically delivers - and I'm talking about delivering to the vast majority of customers, OEMs and the average user that just uses the hardware and software, not a minority of enthusiasts whose presence barely registers outside of specialized sites like this.

Feel free to blame everything concerning AMD's failings on outside influences and big bads in the tech industry. The company has fostered exactly that image. I'm sure the previous cadre of deadwood in the boardroom, collecting compensation packages for 10-14 years during AMD's slow decline, appreciate having a built-in excuse for not having to perform. It's just a real pity that it's the enthusiast who pays for the laissez-faire attitude of a BoD that were content not to have to justify their positions.
Posted on Reply
#49
rruff
FordGT90ConceptAMD cards have been competitive with NVIDIA cards, dollar for dollar, for decades.
I'll choose AMD over Intel and Nvidia unless there is a sound reason to do otherwise. No AMD for me the last few years. They aren't competitive on the stuff I'm interested in, and I'm not buying high end. My computers are on all the time, and I use them to play movies and video. AMD's high power consumption more than erases their cost advantage in GPUs. In CPUs they are both slow and power hungry.

If that changes I'll be more than happy to go with AMD.
Posted on Reply
#50
FordGT90Concept
"I go fast!1!11!1!"
AMD has always been horrible at marketing and branding. AMD has also repeatedly made very bad decisions (like the one to acquire ATI in the first place, when their position in the CPU market had a very grim outlook). A lot of it does fall on AMD itself and their desire not to change the status quo. At least Zen brings some hope that the culture at AMD is changing...but that remains to be seen.
Posted on Reply