Thursday, September 6th 2018

NVIDIA TU106 Chip Support Added to HWiNFO, Could Power GeForce RTX 2060

We are all still waiting to see how NVIDIA's RTX 2000 series of GPUs will fare in independent reviews, but that has not stopped the rumor mill from extrapolating. There have been alleged leaks of the RTX 2080 Ti's performance, and now we see HWiNFO add support for an unannounced NVIDIA Turing microarchitecture chip, the TU106. As a reminder, the currently announced members of the RTX series are based on TU102 (RTX 2080 Ti) and TU104 (RTX 2080, RTX 2070). It is logical to expect a smaller die for upcoming RTX cards based on NVIDIA's history, and we may well see an RTX 2060 using the TU106 chip.

This addition to HWiNFO is to be taken with a grain of salt, however, as they have been wrong before. Even recently, they added support for what, at the time, was speculated to be the NVIDIA Volta microarchitecture, which we now know as Turing. This has not stopped others from speculating further, though, as we see 3DCenter.org give their best estimates of how TU106 may fare in terms of die size, shader and TMU count, and more. Given that TSMC's 7 nm node will likely be occupied with Apple iPhone production through the end of this year, NVIDIA may well be using the same 12 nm FinFET process that TU102 and TU104 are manufactured on. This mainstream GPU segment is NVIDIA's bread and butter for gross revenue, so it is possible we may see an announcement, and even retail availability, towards the end of Q4 2018 to target holiday shoppers.
Source: HWiNFO Changelog

56 Comments on NVIDIA TU106 Chip Support Added to HWiNFO, Could Power GeForce RTX 2060

#26
TheGuruStud
NkdLOL honestly when I saw the Battlefield demo I was like o cool, and the Metro demo. But then I played Strange Brigade and I was like, is this shit doing ray tracing on my GTX 1080? Hahahha. I guess I was just paying too much attention to shadows just because it was all about light and shadows. I was like, but Strange Brigade looks just like that, it's so shiny and got great light.. Hahahha.
It should only be used for shadows, then maybe it wouldn't suck. It should be offloading rasterization and increasing perf, not this crap.
#27
john_
NkdLOL honestly when I saw the Battlefield demo I was like o cool, and the Metro demo. But then I played Strange Brigade and I was like, is this shit doing ray tracing on my GTX 1080? Hahahha. I guess I was just paying too much attention to shadows just because it was all about light and shadows. I was like, but Strange Brigade looks just like that, it's so shiny and got great light.. Hahahha.
Well, if the past is any indication, don't worry. When PhysX came out programmers forgot how to program physics on the CPU and they absolutely could not do physics without PhysX. Programmers will soon forget how to do rays and shadows without ray tracing.
#28
RejZoR
VSGWould you not want to see RTX support on the most popular GPUs? If the majority of end users will not use it, why will developers support it?
There is a difference between having RTX and actually being able to use it. If the RTX 2080 Ti barely has the power to run things at a fluid framerate, you can be sure the RTX 2060 won't be able to run at any playable framerate with RTX enabled. So, literally, what's the point? It's about the same as having a pixel shader graphics card back in the day that was barely capable of running games even without pixel shaders, let alone with them, because you ended up watching slideshows instead of a real-time game...
TheGuruStudIt's to the point now that other methods are probably far superior. And the games are going to need 12 threads to run properly, b/c RT has to be processed on CPU. Surprise, vid card is worthless for what it was supposedly made for.

I told you, they're scam cores, and it's all for marketing. Real time reflections aren't new, but robbing this much perf is lol

It's time for AMD to have the devs add vega support and release a RT driver lol. I'm practically convinced that AMD perf can't be worse.
I'd be interested to see how GCN fares with ray tracing. Would be hilarious if in the end turns out R9 Fury is as fast as RTX something cards with RTX enabled XD
#29
Robcostyle
sweetNah. FE price is not the base price anymore.
Tell that to the AIB partners. I beg you!
#30
ppn
So, is TU106 1/2 of TU102, or 1/3?

TU104 would not have existed if this generation was built on 7 nm, see.

That is, in case the 2070 is TU106. On 7 nm, TU106 would be TU206 with a 192-bit bus, and TU102 would become TU204 with a 256-bit bus and 2x the core count.

TU104 only exists to counter a possible Vega 64 on 7 nm with 1.2 GHz HBM2.
#31
nemesis.ie
IMO, they can't/shouldn't brand a card as RTX if it doesn't have the extra h/w they are claiming for the 2080/ti. If it doesn't have those it should stay as GTX or it's very misleading.

Given this is NV though, it would not surprise me if they do this even if it doesn't have the h/w and does the RT via the regular cores and as a result has terrible RT performance. This could be "3.5GB" all over again if they do that.

I'm really hoping AMD will put DXR etc. support in a driver release soon (maybe the big EoY one) for Vega and it will be interesting to see how that performs and if it supports more than one card, e.g. rasterising done on one card and RT overlay effects on the 2nd or similar.

It would be quite funny if the next driver had it and is released over the next week or so just before the 20th launch.
#32
efikkan
TheGuruStudIt's to the point now that other methods are probably far superior. And the games are going to need 12 threads to run properly, b/c RT has to be processed on CPU. Surprise, vid card is worthless for what it was supposedly made for.

I told you, they're scam cores, and it's all for marketing. Real time reflections aren't new, but robbing this much perf is lol

It's time for AMD to have the devs add vega support and release a RT driver lol. I'm practically convinced that AMD perf can't be worse.
All the CPU cores you can get still can't achieve the raytracing efficiency of Turing. CPU raytracing is strictly for non-realtime.
RejZoRThere is a difference between having RTX and actually being able to use it. If the RTX 2080 Ti barely has the power to run things at a fluid framerate, you can be sure the RTX 2060 won't be able to run at any playable framerate with RTX enabled. So, literally, what's the point? It's about the same as having a pixel shader graphics card back in the day that was barely capable of running games even without pixel shaders, let alone with them, because you ended up watching slideshows instead of a real-time game...
What's the point of AMD and Nvidia releasing low-end GPUs with 4 GB memory and full API support, including tessellation, which they have no way of using? What was the point of AMD boasting about their Direct3D 12 support in early GCN models?
It's the chicken-and-egg problem: you need wide hardware support to get games to use it, even if it means most cards will have limited to no real use for it.

Useful or not, the support in the Turing GPUs is not going to hurt you. Even if you never use the raytracing, Turing is still by far the most efficient and highest-performing GPU design. The only real disadvantages are the larger dies and very marginal wasted power consumption.

Full-scene real-time raytracing at a high detail level is probably several GPU generations away, but that doesn't mean it wouldn't have some usefulness before that. We've seen people do the "Cornell box" with realtime voxelised raytracing with CUDA, not as advanced as Nvidia did it with RTX of course, but still enough to get realistic soft shadows and reflections. With RTX, some games should be able to do good soft shadows and reflections in full scenes, while retaining good performance.

I don't get why the tech demos from Nvidia focus so much on sharp reflections. Very few things in the real world have sharp reflections, and not even my car with newly applied "HD Wax" from AutoGlym achieves the glossiness of that car in the Battlefield V video. I do understand that shiny things sell, but I wish they focused more on realistic use cases.
RejZoRI'd be interested to see how GCN fares with ray tracing. Would be hilarious if in the end turns out R9 Fury is as fast as RTX something cards with RTX enabled XD
How could that happen? R9 Fury doesn't have hardware accelerated raytracing, it only has general compute.
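For a sense of what "general compute" raytracing actually involves, here is a minimal, purely illustrative ray-sphere intersection test in Python. It is not how RTX, GCN, or any real renderer is implemented; it just shows the kind of per-ray intersection work that either runs in software on shader cores or a CPU, or gets offloaded to dedicated traversal/intersection units like Turing's RT cores.

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Return the nearest positive hit distance t, or None if there is no hit.

    origin, direction and center are 3-tuples; direction is assumed normalized.
    """
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    b = 2.0 * (ox * direction[0] + oy * direction[1] + oz * direction[2])
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4.0 * c            # quadratic discriminant (a == 1 here)
    if disc < 0.0:
        return None                   # ray misses the sphere entirely
    root = math.sqrt(disc)
    for t in ((-b - root) / 2.0, (-b + root) / 2.0):
        if t > 0.0:                   # closest intersection in front of the ray
            return t
    return None

# Brute force: every ray is tested against every primitive. Real raytracers
# (and RT cores) walk an acceleration structure such as a BVH to skip most tests.
spheres = [((0.0, 0.0, -5.0), 1.0), ((2.0, 0.0, -8.0), 1.5)]
ray_origin, ray_dir = (0.0, 0.0, 0.0), (0.0, 0.0, -1.0)
hits = [ray_sphere_hit(ray_origin, ray_dir, c, r) for c, r in spheres]
print(min((t for t in hits if t is not None), default=None))  # 4.0
```

Multiply that by millions of rays per frame, each potentially tested against millions of primitives, and it becomes clear why fixed-function intersection hardware matters for real-time use.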
#33
londiste
TheGuruStudIt's time for AMD to have the devs add vega support and release a RT driver lol. I'm practically convinced that AMD perf can't be worse.
RejZoRI'd be interested to see how GCN fares with ray tracing. Would be hilarious if in the end turns out R9 Fury is as fast as RTX something cards with RTX enabled XD
nemesis.ieI'm really hoping AMD will put DXR etc. support in a driver release soon (maybe the big EoY one) for Vega and it will be interesting to see how that performs and if it supports more than one card, e.g. rasterising done on one card and RT overlay effects on the 2nd or similar.
While the comparisons in Nvidia's presentation were all over the place and 6 times faster than Pascal (whichever GPU they mean) sounded suspicious, on the same slide they also noted Turing (most likely Quadro RTX 8000) is 18% faster than DGX (4x Volta V100). And V100s are beasts. No, Fury and Vega do not have a chance.
#34
TheGuruStud
londisteWhile the comparisons in Nvidia's presentation were all over the place and 6 times faster than Pascal (whichever GPU they mean) sounded suspicious, on the same slide they also noted Turing (most likely Quadro RTX 8000) is 18% faster than DGX (4x Volta V100). And V100s are beasts. No, Fury and Vega do not have a chance.
Made up nvidia numbers are truth, now? Faster in doing one particular calculation that is useless. It's the intel way of marketing. Kaby lake is 15% faster than skylake!

Plus, you should know that's physically impossible or did the die become 4x larger? What? The dies are the same just cut down from Volta (with some scam cores sprinkled in)? Well, you don't say.

Four V100s would be able to run the game at 4k...not 1080p 40 fps LOLOLOLOL
#35
londiste
TheGuruStudPlus, you should know that's physically impossible or did the die become 4x larger? What?
Dedicated hardware. Much more efficient than generalized hardware.
What did you think RT cores are?
#36
ppn
340 mm² with a 192-bit bus is nonsense, unless they are preparing to shrink all the chips to 1/2 their size on 7 nm while keeping the bus intact.

It is 256-bit at least, so the 2060 is a cut-down version of the 2070, similar to the GTX 760/770.
#37
efikkan
ppn340 mm² with a 192-bit bus is nonsense, unless they are preparing to shrink all the chips to 1/2 their size on 7 nm while keeping the bus intact.

It is 256-bit at least, so the 2060 is a cut-down version of the 2070, similar to the GTX 760/770.
Precisely how is a 192-bit bus nonsense?
GP106 (GTX 1060) uses a 192-bit bus, and features 3/6 GB of memory.

A 192-bit memory bus actually means 3×64-bit memory controllers, each of which is in turn a pair of 32-bit controllers. Chips can in fact be designed with any multiple of 32-bit memory controllers.
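As a rough back-of-the-envelope illustration (assuming typical per-pin speeds of 8 Gbps for the GTX 1060's GDDR5 and 14 Gbps for GDDR6, the latter not confirmed for any TU106 product), peak bandwidth is simply the bus width times the per-pin data rate:

```python
def peak_bandwidth_gbs(bus_width_bits, data_rate_gbps):
    """Theoretical peak memory bandwidth in GB/s.

    bus_width_bits: total width across all 32-bit controllers (e.g. 192 or 256).
    data_rate_gbps: effective per-pin data rate (8 for common GDDR5, 14 for GDDR6).
    """
    return bus_width_bits / 8 * data_rate_gbps

print(peak_bandwidth_gbs(192, 8))   # 192.0 GB/s - GTX 1060-style 192-bit GDDR5
print(peak_bandwidth_gbs(192, 14))  # 336.0 GB/s - same bus width with 14 Gbps GDDR6
print(peak_bandwidth_gbs(256, 14))  # 448.0 GB/s - a 256-bit GDDR6 configuration
```

So a 192-bit bus is only "small" relative to the memory attached to it; with 14 Gbps GDDR6 it already delivers 75% more bandwidth than the GTX 1060's 192-bit GDDR5 setup.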
#38
Tsukiyomi91
IDGAF what ppl say, that RTX 2060 looks promising at that price range. If it performs as well as a GTX 1070 in non-ray-tracing benchmarks & gaming, then I'd say I'll get one.
#39
moproblems99
efikkanvery marginal wasted power consumption
Oooooh, I see. Power consumption isn't important anymore. Let's waste some.
#40
ppn
GDDR6 provides 75% more bandwidth, and 1536 CUDA cores would represent a 20% improvement over 1280. In theory that is a ~45% improvement overall, the same as 2070 over 1070. But why not go for the 2070 instead? The 2060 will be too slow at this point in time; the xx60 card usually releases just as it is about to become obsolete. At least mine did. I have had only crushing experiences with these.
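For what it's worth, one crude way to combine those two ratios into a single number is a geometric mean, which does land near that ~45% figure. This is just a heuristic sketch using the rumored 1536-CUDA-core count and typical GDDR5/GDDR6 speeds, not a model of actual game performance:

```python
import math

# Crude scaling heuristic: geometric mean of a compute ratio and a bandwidth ratio.
cuda_ratio = 1536 / 1280       # rumored shader count vs GTX 1060's 1280 (~1.20x)
bandwidth_ratio = 14 / 8       # 14 Gbps GDDR6 vs 8 Gbps GDDR5 (1.75x)
combined = math.sqrt(cuda_ratio * bandwidth_ratio)
print(f"{combined:.2f}x uplift")   # ~1.45x, i.e. roughly a 45% improvement
```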
#41
efikkan
moproblems99Oooooh, I see. Power consumption isn't important anymore. Let's waste some.
It's still more efficient than anything else on the market; how can you complain about that?
ppnGDDR6 provides 75% more bandwidth, and 1536 CUDA cores would represent a 20% improvement over 1280. In theory that is a ~45% improvement overall, the same as 2070 over 1070. But why not go for the 2070 instead? The 2060 will be too slow at this point in time; the xx60 card usually releases just as it is about to become obsolete. At least mine did. I have had only crushing experiences with these.
Turing is a new architecture; you should not assume performance based on CUDA core counts across architectures.

Still, the xx60 cards are only the entry product for gaming; if you want more, you have got to pay more.
#42
Tsukiyomi91
It's kinda saddening that there are people who don't even own a sample card complaining about Nvidia's new Turing chip. Have they benched all 3 cards? Nope. Have they pre-ordered one & waited patiently? Nope. Have they even signed the NDA? Nope. Are they judging the new GPU's performance just by looking at the paper specs & going "oh, this is not good. Boo!"? Yep. A bunch of triggered, immature kids who own Pascal cards are moaning over what Nvidia has been doing. Don't like it? Keep it to yourselves.
#43
Prince Valiant
Tsukiyomi91It's kinda saddening that there are people who don't even own a sample card complaining about Nvidia's new Turing chip. Have they benched all 3 cards? Nope. Have they pre-ordered one & waited patiently? Nope. Have they even signed the NDA? Nope. Are they judging the new GPU's performance just by looking at the paper specs & going "oh, this is not good. Boo!"? Yep. A bunch of triggered, immature kids who own Pascal cards are moaning over what Nvidia has been doing. Don't like it? Keep it to yourselves.
No one is looking at paper specs when referring to RT performance; they're basing it on how it actually performed. I'd say it's a lot worse to be extolling Turing's virtues than it is to maintain healthy skepticism when performance is unknown.
#44
efikkan
Tsukiyomi91It's kinda saddening that there are people who don't even own a sample card complaining about Nvidia's new Turing chip. Have they benched all 3 cards? Nope. Have they pre-ordered one & waited patiently? Nope. Have they even signed the NDA? Nope. Are they judging the new GPU's performance just by looking at the paper specs & going "oh, this is not good. Boo!"? Yep. A bunch of triggered, immature kids who own Pascal cards are moaning over what Nvidia has been doing. Don't like it? Keep it to yourselves.
Yes, it's sad how the forums and the usual suspects on YouTube channels keep flooding the Internet with claims about how terrible Turing is… But the same people would have been dancing in the streets if AMD had released it instead. Just look at the insane hype for Polaris, Vega and now Navi (just the other day). Does anyone remember that Polaris was supposed to yield 2.5× performance per watt? And Turing may only yield ~35% performance gains; I wonder which company made a turd?
#45
Tsukiyomi91
Polaris was a disappointment, Vega is a flop. Let's HOPE Navi can bring something good to the table, despite being late.
#46
cucker tarlson
Tsukiyomi91Polaris was a disappointment
Well, no. I'd take 580 over 1060 any day.
Tsukiyomi91Vega is a flop.
Performance-wise it's alright, a GTX 1080 match. It's the very late release date, low availability, price and power consumption that made it an unattractive purchase for most. Even now the V64 is pretty pricey here; it sells at 3000+ PLN while a 1080 is 2300-2400 and the cheapest 1080 Ti starts at 3200. The RTX 2080 Duke is 3500 for preorder.
#47
efikkan
Tsukiyomi91Polaris was a disappointment, Vega is a flop. Let's HOPE Navi can bring something good to the table, despite being late.
Navi is still going to be GCN, and we still don't know what changes are coming. AMD doesn't have a new architecture scheduled before ~2021. The changes from Pascal to Turing are larger than what we've seen so far within GCN.
cucker tarlsonWell, no. I'd take 580 over 1060 any day.
You are free to make your own (misguided) choices. GTX 1060 is a better choice than RX 580, unless we are talking of edge cases.

Vega was a flop; primarily because it fell further behind Nvidia in performance and efficiency, but also because AMD failed to make good profit on them. Most people have long forgotten that AMD actually tried to sell them at $100 over MSRP, but changed after a lot of bad press.
#48
Nkd
efikkanNavi is still going to be GCN, and we still don't know what changes are coming. AMD doesn't have a new architecture scheduled before ~2021. The changes from Pascal to Turing are larger than what we've seen so far within GCN.


You are free to make your own (misguided) choices. GTX 1060 is a better choice than RX 580, unless we are talking of edge cases.

Vega was a flop; primarily because it fell further behind Nvidia in performance and efficiency, but also because AMD failed to make good profit on them. Most people have long forgotten that AMD actually tried to sell them at $100 over MSRP, but changed after a lot of bad press.
Well, word is the next-gen architecture should be coming out in 2020, not 2021. I am not sure where 2021 started; I think it was wccftech throwing dates out there. Their roadmap for the GPU after Navi, labeled next-gen, was around 2020. I always thought it was 2020, but started hearing 2021 out of nowhere. I would be very surprised if we don't see a high-end GPU in 2020. There is also a guy on hardforum who is usually fairly spot-on with AMD stuff. Not sure if he works there or what, but he doesn't reveal too much, just little bits of info. He is pretty damn accurate. He responded to my post just a couple of weeks back saying AMD is busting their ass on next-gen, and it's ahead of schedule and should be out in 2020 at the latest. We will see. But I would bet on 2020, since Intel will have their card in 2020 as well.
#49
nemesis.ie
It's also good to see that AMD's R&D funding is back up to 2012 levels and climbing rapidly.

Now that there is more money going into R&D (it looks like over 50% more than in 2014) that should help move things along.
#50
Vayra86
Tsukiyomi91It's kinda saddening that there are people who don't even own a sample card complaining about Nvidia's new Turing chip. Have they benched all 3 cards? Nope. Have they pre-ordered one & waited patiently? Nope. Have they even signed the NDA? Nope. Are they judging the new GPU's performance just by looking at the paper specs & going "oh, this is not good. Boo!"? Yep. A bunch of triggered, immature kids who own Pascal cards are moaning over what Nvidia has been doing. Don't like it? Keep it to yourselves.
It's even more saddening to see others yell 'pre-order nao!' before performance is known. That just needs culling, and this is what we're doing. It's all about peer pressure - that is why Nvidia tries to get 'influencers' on the net to join their dark side, using free hardware and bags of money. That should raise eyebrows more than people here predicting (conservative) performance numbers: an architecture such as Pascal literally just sold itself because it was a leap forward. Turing really is not, and this is clear as day.

Realistically, looking at the numbers, historically we have always been very accurate at making assumptions about performance, simply because the low-hanging fruit in CUDA really is gone now. We won't see tremendous IPC jumps, and if we do, they will cost something else that also provides performance (such as clock speeds). It's really not rocket science, and the fact is, if you can't predict Turing performance with some accuracy, you just don't know enough about GPUs.

So let's leave it at that, okay? This has nothing to do with triggered immature kids - in fact, that is the group spouting massive performance gains based on keynote shouts and powerpoint slides. The idiots who believe everything Nvidia feeds them. The Pascal card owners have nothing to be unhappy about - with Turing's release, the resale value of their cards will remain stagnant even though they become older. That is a unique event and one that will cause a lot of profitable used Pascal card sales. I'm not complaining!

If you don't like it, don't visit these threads or tech forums in general...