Tuesday, December 25th 2018
NVIDIA GeForce RTX 2060 to Ship in Six Variants Based on Memory Size and Type
NVIDIA drew consumer ire for differentiating its GeForce GTX 1060 into two variants based on memory, the GTX 1060 3 GB and GTX 1060 6 GB, with the two also featuring different GPU core-configurations. The company plans to double down, or should we say triple down, on its sub-branding shenanigans with the upcoming GeForce RTX 2060. According to VideoCardz, citing a GIGABYTE leak about regulatory filings, NVIDIA could be carving out not two, but six variants of the RTX 2060!
There are at least two parameters that differentiate the six (that we know of, anyway): memory size and memory type. There are three memory sizes: 3 GB, 4 GB, and 6 GB. Each of the three memory sizes comes in two memory types, the latest GDDR6 and the older GDDR5. Based on the six RTX 2060 variants, GIGABYTE could launch up to thirty-nine SKUs. When you add up similar SKU counts from NVIDIA's other AIC partners, there could be upward of 300 RTX 2060 graphics card models to choose from. It won't surprise us if, in addition to memory size and type, GPU core-configurations also vary between the six RTX 2060 variants, compounding consumer confusion. The 12 nm "TU106" silicon already has "A" and "non-A" ASIC classes, so there could be as many as twelve new device IDs in all! The GeForce RTX 2060 is expected to debut in January 2019.
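The variant arithmetic above can be sketched in a few lines, assuming the leaked parameters (three memory sizes, two memory types, two ASIC classes) and nothing else:

```python
from itertools import product

# Parameters from the leak: three memory sizes, two memory types.
sizes = ["3 GB", "4 GB", "6 GB"]
mem_types = ["GDDR6", "GDDR5"]
# "A" and "non-A" binning classes of the TU106 silicon.
asic_classes = ["A", "non-A"]

# Retail variants: every size paired with every memory type.
variants = list(product(sizes, mem_types))
# Possible device IDs: each variant in each ASIC class.
device_ids = list(product(sizes, mem_types, asic_classes))

print(len(variants))    # 3 sizes x 2 types = 6 variants
print(len(device_ids))  # 6 variants x 2 ASIC classes = 12 device IDs
```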
Source:
VideoCardz
230 Comments on NVIDIA GeForce RTX 2060 to Ship in Six Variants Based on Memory Size and Type
And don't forget the 6 GB version of the GTX 1060 is also some 5-6% faster. Stop with this nonsense: 32-bit CPUs/OSes have NOTHING to do with memory capacity. Ah, the eternal "future proofing" argument.
I remember all those who bought GCN over Kepler because it was more "future proof" in Direct3D 12. Then the R9 390(X) with 8 GB for "future proofing". And then Fiji with HBM for "future proofing", but then suddenly memory capacity didn't matter any more, because HBM was so glorious. Then Polaris with 8 GB for "future proofing", because memory capacity suddenly mattered again.
In the real world it's a balancing act. You have to guess your requirements for the immediate future, but taking "future proofing" too far is wasted money in the end. History has shown that paying a big premium for "future proofing" has never paid off, so prepare yourself for disappointment.
- Disable one memory controller and use 128-bit, possibly compensate with faster memory.
- Use an imbalanced memory configuration, like GTX 660/660 Ti.
*Looks through closet for old 3GB 7970... yup... launched in 2012...*
I don't get it, but ok... guess the 2060 is the "Sucker's Edition".
Edit: Oops, I forgot about the 285.
Explain this: how does a 5-6% performance gap translate to half or 25% less VRAM? Where is the balance in that? And why would you *not* suffer a performance hit from such a cutdown when you push data over the same, rather narrow bus?
Common sense, use it, instead of gazing endlessly at performance summaries that reduce all detail to a single percentage and are rarely based on a fully comprehensive benchmark suite. Reviews are an indicator, not an absolute, all-encompassing truth. People apparently still didn't get that memo. It's the exception that proves the rule when it comes to VRAM, and you only need one edge case to kill the experience.
The truth is that if most people game at 1080p (which they do), 3 GB of VRAM should be enough for the vast majority of games. I game at 3440x1440 and most games don't break 4 GB of VRAM.
What about the 11 GB flagship... it's the same as the last generation... :P And it's an odd number.
12 GB would be better, or 16 GB :P
Since there is no competition, NVIDIA could stick their logo on a turd and market that.
Where is the blast processing?
And don't get me started on overheating. My 1080 FE would quickly go to 83°C and throttle down to 1300 MHz, causing it to perform WORSE than my old 1070. I took the FE card back and got an MSI card, which did pretty much the same thing. I had to buy and install a $100 cooler to get the card to stop throttling. Same for power usage - the 180 W TDP on the 1080 is pure fiction. Under load my 1080 draws 200-240 W on its own (tested with an ammeter on the second 12 V rail of the PSU, which only the video card uses: 20 amps x 12 V = 240 W). I thought the card draws 180 W if not allowed to boost, but at stock 1530 MHz it draws almost 16 amps - 12 V x 16 A = 192 W. If you're referring to the 1060, then yeah - those are cool cards. 70-75°C even with cheap, crap coolers - but they are fast cards. They're OK for 1080p, but that's it. The 580 can do a lot better, especially overclocked versions like the XFX 580 GTS OC Black Edition (that card does get pretty freakin' hot though).
As for instability - you've never used an AMD card, right? I've had a 7950, then two, then a 280X, then bought a second one, and then a 290 (non-X) - they were all rock-solid. I also played around with a Vega 64 - and while the max FPS is not as high as on a 1080 (in some games), the minimum FPS and frame times are miles better on the Vega, so much so that most games are noticeably smoother, even though the framerate is a tiny bit lower. I've been trying to trade my 1080 for a Vega 64, but guess what - nobody wants to take the trade! The only reason I switched to NVIDIA is the mining boom, which made AMD cards climb in price to a silly degree. A Vega 64 was twice the price I paid for my 1080, so I said screw that and bought what made sense at the time.
You are DEFINITELY RIGHT on the innovation part though... AMD needs to get off their asses and release something competitive - not that 590 (i.e., overclocked 580) BULLSHIT. And this goes for both AMD and NVIDIA fans. Left to their own devices, NVIDIA will end up charging $5,000 for a high-end GPU. The 470 and 570 are great little cards. And so is the 580. I picked up a 4 GB 580 in November for $150 for my living room PC (i5 2500K @ 4 GHz, mATX form factor) and it runs 1080p @ ultra flawlessly. I even play some games in 4K (less demanding ones like Civ 6 and some oldies). For that price NVIDIA was offering a 1050 Ti, which is significantly slower.
There is NO way for memory above 4 GB to be addressable from a 32-bit CPU, not even with virtual memory paging. At least not from an x86 CPU. There is NO way to cover the deficit of cutting one third off the memory bus with higher clocks, because GDDR5 memory has limits on the speeds it can achieve. That's why I think they are also using GDDR6 for the same model. Because IT MATTERS for this generation, even for the middle range. Probably the GDDR6 model will be much faster, maybe with a slightly different core config too. I guess NVIDIA knows GDDR5 is not enough for what the core needs its memory to be, but they don't give a damn. Milking the cow is the way for them. You would need roughly a 50% increase in memory speed to cover the deficit.
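The bus-width arithmetic above can be checked with rough numbers. The per-pin data rates here are illustrative assumptions, not confirmed RTX 2060 specs; peak bandwidth is simply bus width (in bytes) times the per-pin data rate:

```python
def bandwidth_gbs(bus_width_bits, data_rate_gbps):
    """Peak memory bandwidth in GB/s: (bus width / 8) bytes per transfer * per-pin rate."""
    return bus_width_bits / 8 * data_rate_gbps

# Assumed figures: GDDR5 at 8 Gbps per pin on a full 192-bit bus vs. a cut 128-bit bus.
full_bus = bandwidth_gbs(192, 8)   # 192.0 GB/s
cut_bus = bandwidth_gbs(128, 8)    # 128.0 GB/s

# Cutting 192-bit down to 128-bit loses a third of the bandwidth...
print(1 - cut_bus / full_bus)
# ...so matching it on 128-bit needs 1.5x the data rate (8 -> 12 Gbps).
print(bandwidth_gbs(128, 12))
```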
It's so funny seeing you guys trying to defend something that sucks so hard. Really, some people here should consider a new career in comedy (that was a joke).
*without factoring in compression Ah, this misconception has been with us since Athlon 64 days. I suggest you look up PAE; the physical address space hasn't been confined to 32 bits by the architecture for quite some time. It's awkward to use, so the practice isn't all that widespread (AFAIK), but it exists.
There are no 64-bit memory registers on a 32-bit CPU.
Edit: If my style of writing feels aggressive, sorry. I am not attacking anyone, I just disagree with passion.
No fear of consumer retaliation for anti-consumer practices.
No fear of government oversight reining anti-consumer practices in (really the same thing, since governments are supposed to be people elected to do the people's work).
This is what happens when there is monopoly, duopoly, and quasi-monopoly.
The tech world has far too little competition in a lot of areas and this is what consumers get. If you don't like it you're not going to get anywhere by engaging with forum astroturfers. Organize and get political action.
You're acting as if that never happened.
You forget that PAE may indeed support a larger physical memory range, but only in THEORY. In reality, the virtual address space of those CPUs (Pentium Pro and later) remained 32-bit. This changed with AMD's x86-64.
Edit: In any case, I won't say more about this, because I think it is off topic.
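The physical-versus-virtual distinction in the exchange above is easy to put in numbers: PAE widens *physical* addressing to 36 bits on P6-class CPUs, while each process's *virtual* address space stays 32-bit.

```python
# What one 32-bit process can map: a 32-bit virtual address space.
virtual_limit = 2**32          # 4 GiB
# What PAE lets the machine install: 36-bit physical addressing.
pae_physical_limit = 2**36     # 64 GiB

print(virtual_limit // 2**30)       # 4 (GiB per process)
print(pae_physical_limit // 2**30)  # 64 (GiB of physical RAM)
```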
The point of mentioning it is that it marks a watershed moment. When games were developed for 32-bit, their memory usage was very restricted. The moment games switched to 64-bit, there was suddenly more memory available, and developers sought to use it. Fury X marks the transition: 4 GiB was okay then, but it definitely isn't okay now - especially in premium cards.
Just look at the response to this thread. All but two people, by my count, are scoffing at the notion of a 3 GiB 2060. It's sad that yields are so low they feel they need to debut four extra models of sub-par cards under the same brand.