
NVIDIA GTX 1060 and GTX 1050 Successors in 2019; Turing Originally Intended for 10nm

Yes, in the past we had stuff like 28nm LP and 28nm HP, but I thought that was no longer the case since TSMC failed to deliver 20nm HP? And my point about capacity is still valid. The majority of TSMC's revenue still comes from the so-called "mobile tech process", so the majority of 7nm capacity will be shifted towards mobile. The only ones that really need a high-performance process are most likely Nvidia and AMD, and one reason TSMC failed on 20nm HP before is that they put most of their development focus on small, low-power chips like SoCs. Nvidia most likely did not want to repeat what happened with 28nm.

To push performance further back then, they had to ditch the compute-oriented design starting with Maxwell. Nvidia ended up relying on Kepler for their compute solution for four years! Good thing for Nvidia they have a very solid ecosystem around Tesla; if not, GK110/210 would have been crushed by AMD's Hawaii on raw performance alone. There is no guarantee TSMC's successor to 16nmFF will end up the way Nvidia wants it to. Instead of betting their future on those uncertainties, they pave the road themselves by investing in a custom process for their architecture.

Rubbish, you simply cannot compare dies of that scale with minuscule mobile die sizes. They are different processes, goals and usages, different lines and plants. They do not compete with each other.
 
Well damn, if they are looking at 8nm Samsung then it really does sound like AMD has TSMC's 7nm to itself in 2019 (besides Apple and ASIC's of course).
 
How do you know how the RTX reception will be when the NDA isn't lifted yet?

The initial reception of RTX is already history; it 'happened' during and after the Nvidia keynote where it was announced, which ended with a 30 fps, super blurry dancing bloke, where its price points were set, and which prompted performance analysis of RTX cards using the new features. Furthermore, you can look around this forum in any RTX topic to recognize the lukewarm-ness.

Hell, RTX ON/OFF is even a meme now. It's being ridiculed... and it's not rocket science to consider why. It falls squarely in the same corner as VR, stereoscopic 3D and all those other gimmicks that won't last. High cost, uncanny, low benefit and virtually zero adoption rate, which creates the eternal chicken-and-egg situation many new technologies die from.

Also, people seem to forget that much of the performance is known and can be calculated - you don't need Nvidia slides to tell you this. And the reality is that only in a select few use cases does Turing improve on perf/dollar AT ALL. In most cases it's complete stagnation or worse. That already cuts most Pascal owners out of a decent deal. And do you really think those who skipped Pascal are going to spend big on features they'll never need? En masse? Naaah - Pascal on discount is a far better deal for them, and has a much friendlier price tag too.

Nvidia is in a very strange position right now, and they've kinda dug their own hole.
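The perf/dollar point above can be roughed out with back-of-envelope numbers. A minimal sketch, assuming approximate FP32 throughput figures and launch/Founders Edition prices (illustrative ballpark values, not benchmark results):

```python
# Back-of-envelope perf-per-dollar comparison.
# TFLOPS and USD figures are rough, assumed illustrative numbers,
# not measured benchmarks.
cards = {
    # name: (approx FP32 TFLOPS, approx launch price in USD)
    "GTX 1080":    (8.9,  499),
    "GTX 1080 Ti": (11.3, 699),
    "RTX 2080":    (10.1, 799),   # Founders Edition pricing
    "RTX 2080 Ti": (13.4, 1199),  # Founders Edition pricing
}

for name, (tflops, price) in cards.items():
    # Convert TFLOPS/dollar to GFLOPS/dollar for readable numbers.
    print(f"{name:12s} {tflops / price * 1000:5.1f} GFLOPS per dollar")
```

Under these assumed figures, raw GFLOPS per dollar actually drops from Pascal to Turing at each tier, which is the "stagnation or worse" argument in a nutshell (RTX/tensor features aside).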
 
Only the 2070 is fully enabled (so no 2070 Ti). The 2080 and 2080 Ti are cut down to leave room for a 2080 Plus and the Titan X.
I bet they intended the 2070's die to be the 2080 Ti and 2080 die, but process failures made this the way it is.

I hear the spin cycle starting and it's shaking hard.
 
I bet they intended the 2070's die to be the 2080 Ti and 2080 die, but process failures made this the way it is.

I hear the spin cycle starting and it's shaking hard.

Yup - icing on the cake is the bumped-up TDP for the "Founders" cards. In other words, they need extra watts for the extra performance; binning these cards is unreliable yield-wise. So now you really just get a boosted BIOS at an increased price tag... instead of a throttling blower cooler. Sounds like an awesome deal at +100~150 bucks :D
 
It's hilarious we even consider discussing what the "reception" will be like when all this RTX stuff is going to be virtually nonexistent at launch. I think BFV is the only game that's supposed to come out with day one RTX support and Shadow of the Tomb Raider will receive a patch "later on".

That will be the RTX reception for ya when the NDA lifts, basically nothing to even talk about.

I have to disagree. If the cards didn't cost so much, I would be buying one, and the one title I'd want ray tracing in is Shadow of the Tomb Raider.

That world is already beautiful, and adding more immersion to it would make it even better. It also helps that it isn't a competitive title, so FPS can be a tad on the lower/tolerable side compared to BFV.
 
Rubbish, you simply cannot compare dies of that scale with minuscule mobile die sizes. They are different processes, goals and usages, different lines and plants. They do not compete with each other.
Mwahahahahaha! Hahaha! Ha!
Thanks for the best laugh of the day.
 
Mwahahahahaha! Hahaha! Ha!
Thanks for the best laugh of the day.

How so? It was clear as day that we were stuck on 28nm for so long because the smaller nodes simply didn't offer the same characteristics for high performance, high power budget components. There is also a clear reason Nvidia's 16nm TSMC Pascal clocks noticeably higher than competitors on a different node...
 
How so? It was clear as day that we were stuck on 28nm for so long because the smaller nodes simply didn't offer the same characteristics for high performance, high power budget components. There is also a clear reason Nvidia's 16nm TSMC Pascal clocks noticeably higher than competitors on a different node...
I only laughed @ mobile silicon not competing for fab capacity with GPU silicon.
 
I only laughed @ mobile silicon not competing for fab capacity with GPU silicon.

Gotcha - we read his comment differently. To me he is speaking solely about die sizes/process nodes and not the market, in which case he's 100% correct.
 
Also, people seem to forget that much of the performance is known and can be calculated - you don't need Nvidia slides to tell you this. And the reality is that only in a select few use cases does Turing improve on perf/dollar AT ALL. In most cases it's complete stagnation or worse. That already cuts most Pascal owners out of a decent deal. And do you really think those who skipped Pascal are going to spend big on features they'll never need? En masse? Naaah - Pascal on discount is a far better deal for them, and has a much friendlier price tag too.

Nvidia is in a very strange position right now, and they've kinda dug their own hole.
How do you "know" the performance? Which benchmarks do you base this on?
What specifically proves that Turing is stagnating or worse than Pascal?
Which hole is Nvidia in right now? The only valid complaint so far for Turing is pricing, everything else is grumpy AMD fans.
 
How do you "know" the performance? Which benchmarks do you base this on?
What specifically proves that Turing is stagnating or worse than Pascal?
Which hole is Nvidia in right now? The only valid complaint so far for Turing is pricing, everything else is grumpy AMD fans.
The compute part is similar to Volta, I'm guessing that's what he's talking about. Anand has an article on Turing (as they always do). It is mostly Volta, save for the RT cores, but even then there are twists that make them different.
 
Anybody think we'll see 10nm nVidia GPUs by early 2020? I thought Intel was behind, but I guess nVidia may be last to the shrinkage race.
 