Tuesday, December 25th 2018

NVIDIA GeForce RTX 2060 to Ship in Six Variants Based on Memory Size and Type
NVIDIA drew consumer ire for differentiating its GeForce GTX 1060 into two variants based on memory, the GTX 1060 3 GB and GTX 1060 6 GB, with the two also featuring different GPU core-configurations. The company plans to double down - or should we say, triple down - on its sub-branding shenanigans with the upcoming GeForce RTX 2060. According to VideoCardz, citing a GIGABYTE leak about regulatory filings, NVIDIA could be carving out not two, but six variants of the RTX 2060!
There are at least two parameters that differentiate the six (that we know of, anyway): memory size and memory type. There are three memory sizes, 3 GB, 4 GB, and 6 GB. Each of the three memory sizes comes in two memory types, the latest GDDR6 and the older GDDR5. Based on the six RTX 2060 variants, GIGABYTE could launch up to thirty-nine SKUs. When you add up similar SKU counts from NVIDIA's other AIC partners, there could be upward of 300 RTX 2060 graphics card models to choose from. It won't surprise us if, in addition to memory size and type, GPU core-configurations also vary between the six RTX 2060 variants, compounding consumer confusion. The 12 nm "TU106" silicon already has "A" and "non-A" ASIC classes, so there could be as many as twelve new device IDs in all! The GeForce RTX 2060 is expected to debut in January 2019.
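As a quick illustration (our own toy enumeration, not part of the leak), the variant math works out like this: three memory sizes times two memory types gives the six variants, and the two TU106 ASIC classes double that to twelve possible device IDs.

```cpp
// Toy enumeration of the combinations described in the article:
// 3 memory sizes x 2 memory types = 6 RTX 2060 variants,
// 6 variants x 2 ASIC classes ("A" / "non-A" TU106) = 12 device IDs.
#include <cstdio>

int main() {
    const char* sizes[] = {"3 GB", "4 GB", "6 GB"};
    const char* types[] = {"GDDR6", "GDDR5"};
    const char* asics[] = {"A", "non-A"};

    int count = 0;
    for (const char* size : sizes)
        for (const char* type : types)
            for (const char* asic : asics)
                std::printf("%2d: RTX 2060 %s %s (%s ASIC)\n", ++count, size, type, asic);
    return 0;  // prints 12 combinations in all
}
```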
Source:
VideoCardz
230 Comments on NVIDIA GeForce RTX 2060 to Ship in Six Variants Based on Memory Size and Type
I didn't buy a PC to have it look worse than a console. Some need to... some choose to, others like ultra. It is what it is.
Newer games with newer cards, the defaults are usually good enough. It's older games that don't know what the graphics card is that need tweaking.
Plus it wouldn't have been that bad if there had been a minimal price increase for the RTX series, let's say $50 apiece for the 2070 and 2080. But whatever you say, just check TechPowerUp's poll before the release of RTX, check the stock market, check the general reception among potential customers, and you will know you are just simply lying to yourself, too. As some have already reacted to this, I have to do that too: Wow, cleanly beats out a 2.5-year-old card by nearly 30%. What a result! The 2080 is 1% faster than the 1080 Ti, which is essentially equal performance, so it doesn't "beat it out". Just to remind you: the 1080 beat the 980 Ti by 38% while costing $50 less. The 2080 merely equals the 1080 Ti while costing the same. And as mentioned before, the 1080 Ti can be OC'd better than the 2080. So you are advising people who want to buy a ~$350+ 3 GB 2060 (which is near the price of the 1070 with 8 GB) to lower settings at FHD. LOL. No other words needed. I hope you advise your customers Intel-NV rigs only. :D
The fact is that, objectively, the only really good point of the RTX series is the Founders Edition's solid cooling solution (in terms of noise and cooling performance) and neat look (subjective).
@Vayra86
Earlier you said I was making a fool of myself... How are things going on that?
Suffice to say, I'm out. Enjoy yourselves.
Gigabyte will have at least 40 variants of the GeForce RTX 2060 graphics card according to an EEC (Eurasian Economic Commission) product listing. link
:rolleyes::rolleyes::rolleyes::rolleyes::rolleyes::rolleyes::rolleyes::rolleyes::rolleyes::rolleyes::rolleyes::rolleyes::rolleyes::rolleyes::rolleyes::rolleyes::rolleyes::rolleyes::rolleyes::rolleyes::rolleyes::rolleyes::rolleyes::rolleyes::rolleyes::rolleyes::rolleyes::rolleyes::rolleyes::rolleyes::rolleyes::rolleyes::rolleyes::rolleyes::rolleyes::rolleyes::rolleyes::rolleyes::rolleyes::rolleyes:
(eyeroll count not gratuitous)
My understanding is that there's a variety of ways to make it crash because that RAM limitation is absolute.
1) If you try to load too many resources, GART will overflow, crashing the executable.
2) If you try to load a huge asset (depending on conditions, but it could be smaller than 3 GiB in size), it will crash because the RAM can't hold the asset before handing it to the VRAM.
3) If you try to hold too many assets in RAM in transit to VRAM and you fail to release that RAM fast enough, it goes over the virtual memory limit and crashes.
In other words, even under 32-bit D3D10, you're dancing on a razor's edge when dealing with VRAM. VRAM (unless DMA is used, and good luck with that) is practically limited by addressable RAM space.
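A minimal sketch (our own illustration, not from the post) of why point 2 bites under a 32-bit address space: a 32-bit process gets roughly 2-4 GiB of virtual address space for everything, so a single multi-GiB staging allocation fails before the asset could ever be handed to the driver for upload to VRAM.

```cpp
// Illustration only: staging a ~3 GiB asset through RAM.
// Built as 64-bit this usually succeeds (given enough RAM/swap);
// built as 32-bit the allocation fails because the address space
// can't hold the staging buffer alongside everything else.
#include <cstddef>
#include <cstdio>
#include <exception>
#include <vector>

int main() {
    const std::size_t asset_bytes = 3ULL * 1024 * 1024 * 1024; // ~3 GiB asset

    try {
        std::vector<unsigned char> staging(asset_bytes);
        std::printf("staging buffer of %zu bytes allocated\n", staging.size());
    } catch (const std::exception& e) {
        // On a 32-bit build this path is taken (std::bad_alloc or
        // std::length_error): address space exhausted before any VRAM upload.
        std::printf("allocation failed: %s\n", e.what());
    }
    return 0;
}
```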
Coming full circle, this is fundamentally why Fury X 4 GiB and GTX 970 3.5 GiB were okay a few years ago but not so much now. Any game that might need 64-bit address space is usually 64-bit. The days of claustrophobic memory usage are gone.
As for loading huge textures, two things. First, there's this thing called streaming - you don't have to keep the entire thing in RAM at the same time to load it. Second, I'm pretty sure no game uses a 3 GB texture.
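For what it's worth, a minimal sketch of that streaming idea (purely illustrative; the file name and upload function are hypothetical placeholders, not any engine's real API): read the asset in fixed-size chunks and hand each chunk off, so only a small staging window is ever resident in RAM.

```cpp
// Chunked streaming sketch: the asset is read in fixed-size pieces,
// so only a small staging window is ever held in RAM at once.
#include <cstddef>
#include <cstdio>
#include <fstream>
#include <vector>

// Stand-in for whatever actually consumes a chunk (e.g. a buffer upload).
void upload_chunk_to_vram(const char* data, std::size_t size) {
    (void)data;
    (void)size;
}

int main() {
    const std::size_t chunk_bytes = 16 * 1024 * 1024; // 16 MiB staging window
    std::vector<char> chunk(chunk_bytes);

    std::ifstream file("huge_texture.bin", std::ios::binary); // hypothetical asset
    while (file) {
        file.read(chunk.data(), static_cast<std::streamsize>(chunk.size()));
        const std::size_t got = static_cast<std::size_t>(file.gcount());
        if (got == 0) break;
        upload_chunk_to_vram(chunk.data(), got); // never more than 16 MiB resident
    }
    return 0;
}
```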
Yes, streaming, but most engines that are big on streaming (like UE4) and are of the era where >4 GiB VRAM can be used are also already 64-bit, so it's a non-issue. This is where you run into problems with graphics cards that have <4 GiB VRAM, because the API has to shuffle assets between RAM and VRAM, which translates to stutter.
The argument I made before about 32-bit and VRAM being intrinsically linked is effectively true. Now that the 32-bit barrier is gone, VRAM usage has soared in games that can use it.