Tuesday, December 25th 2018

NVIDIA GeForce RTX 2060 to Ship in Six Variants Based on Memory Size and Type

NVIDIA drew consumer ire for differentiating its GeForce GTX 1060 into two variants based on memory, the GTX 1060 3 GB and GTX 1060 6 GB, with the two also featuring different GPU core configurations. The company plans to double down, or should we say triple down, on its sub-branding shenanigans with the upcoming GeForce RTX 2060. According to VideoCardz, citing a GIGABYTE leak about regulatory filings, NVIDIA could be carving out not two, but six variants of the RTX 2060!

There are at least two parameters that differentiate the six (that we know of, anyway): memory size and memory type. There are three memory sizes: 3 GB, 4 GB, and 6 GB. Each of the three memory sizes comes in two memory types, the latest GDDR6 and the older GDDR5. Based on the six RTX 2060 variants, GIGABYTE could launch up to thirty-nine SKUs. When you add up similar SKU counts from NVIDIA's other AIC partners, there could be upward of 300 RTX 2060 graphics card models to choose from. It won't surprise us if, in addition to memory size and type, GPU core configurations also vary between the six RTX 2060 variants, compounding consumer confusion. The 12 nm "TU106" silicon already has "A" and "non-A" ASIC classes, so there could be as many as twelve new device IDs in all! The GeForce RTX 2060 is expected to debut in January 2019.
Source: VideoCardz

230 Comments on NVIDIA GeForce RTX 2060 to Ship in Six Variants Based on Memory Size and Type

#176
bug
Tatty_One said:
I struggle to comprehend what games you are playing, then. I moved from a 4 GB 290X to a 1070 a few months back. I only play one game, World of Tanks, and when they completely updated their game engine, overnight the 290X on ultra settings moved from an average 2.9 GB usage to 4.4, and that game is hardly demanding even on ultra at my 2560 x 1080.
You don't have to update a game engine for that effect. Just upscale your textures 2x and you get 4x* the VRAM usage without actually improving quality.

*without factoring in compression
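Back-of-envelope, assuming plain uncompressed RGBA8 textures (4 bytes per texel; the texture sizes below are just illustrative):

```python
# Doubling texture resolution per axis quadruples raw VRAM usage.
# Assumes uncompressed RGBA8 (4 bytes/texel); block compression shrinks
# the absolute numbers but not the 4x ratio.

def texture_bytes(width, height, bytes_per_texel=4):
    """Raw size of a single mip-0 texture in bytes."""
    return width * height * bytes_per_texel

base = texture_bytes(2048, 2048)      # a 2K texture
upscaled = texture_bytes(4096, 4096)  # the same texture upscaled 2x

print(base / 2**20, "MiB")       # 16.0 MiB
print(upscaled / 2**20, "MiB")   # 64.0 MiB
print(upscaled / base)           # 4.0 -> 2x per axis = 4x the memory
```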

CandymanGR said:
A 32-bit CPU cannot address a 64-bit memory space, and you need 64-bit addressing in order to have access to more than 4 gigabytes of memory.
There is NO way for memory above 4 GB to be addressable by a 32-bit CPU, not even with virtual memory paging. At least not from an x86-64 CPU.
Ah, this misconception has been with us since the Athlon64 days. I suggest you look up PAE; the address space hasn't been confined by the general architecture for quite some time. It's awkward to do, so the practice isn't all that widespread (afaik), but it exists.
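The headline numbers, for reference (standard figures for classic 32-bit addressing vs. PAE's 36-bit physical addressing):

```python
# PAE widens x86 *physical* addressing from 32 to 36 bits, while each
# process still sees only a 32-bit (4 GiB) virtual address space, so no
# single mapping can "see" all of RAM at once.

GiB = 2**30

plain_32bit_limit = 2**32   # classic 32-bit addressing
pae_limit = 2**36           # with PAE page tables

print(plain_32bit_limit // GiB, "GiB")   # 4 GiB
print(pae_limit // GiB, "GiB")           # 64 GiB
```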
Posted on Reply
#177
CandymanGR
bug said:

Ah, this misconception has been with us since the Athlon64 days. I suggest you look up PAE; the address space hasn't been confined by the general architecture for quite some time. It's awkward to do, so the practice isn't all that widespread (afaik), but it exists.
Show me an example of a 64-bit application working on a 32-bit CPU, then. There is no way to have a 32-bit application with a 64-bit address space on a 32-bit CPU.
There are no 64-bit memory registers on a 32-bit CPU.

Edit: If my style of writing feels aggressive, sorry. I am not attacking anyone, I just disagree with passion.
Posted on Reply
#178
RichF
This is what happens when corporations feel no fear.

No fear of consumer retaliation for anti-consumer practices.

No fear of government oversight reining in anti-consumer practices (really the same thing, since governments are supposed to be people elected to do the people's work).

This is what happens when there is monopoly, duopoly, and quasi-monopoly.

The tech world has far too little competition in a lot of areas and this is what consumers get. If you don't like it you're not going to get anywhere by engaging with forum astroturfers. Organize and get political action.
Posted on Reply
#179
bug
CandymanGR said:
Show me an example of a 64-bit application working on a 32-bit CPU, then. There is no way to have a 32-bit application with a 64-bit address space on a 32-bit CPU.
There are no 64-bit memory registers on a 32-bit CPU.

Edit: If my style of writing feels aggressive, sorry. I am not attacking anyone, I just disagree with passion.
You refuse to educate yourself with the same passion, it would seem.
Posted on Reply
#180
CandymanGR
bug said:
You refuse to educate yourself with the same passion, it would seem.
Thanks for your valuable input.
Posted on Reply
#181
bug
CandymanGR said:
Thanks for your valuable input.
I gave you my input above: look up PAE and read about it. (Not because I'm too lazy to explain, but because it's a lot to cover.)
You're acting as if that never happened.
Posted on Reply
#182
CandymanGR
bug said:
I gave you my input above: look up PAE and read about it. (Not because I'm too lazy to explain, but because it's a lot to cover.)
You're acting as if that never happened.
And I also told you "it doesn't work well, not even with virtual memory paging", but you also act as if nothing happened. Those CPUs cannot "see" the whole memory at once. Memory paging sucks; it's ancient technology. That's why we went to x86-64.
You forget that PAE may indeed support a wider memory range, but only in THEORY, because in reality the virtual address space of those CPUs (Pentium Pro) remained 32-bit. This changed with AMD's x86-64.

Edit: In any case, I won't say more about this, because I think it is off topic.
Posted on Reply
#183
FordGT90Concept
"I go fast!1!11!1!"
bug said:
Ah, this misconception is with us since Athlon64 days. I suggest you look up PAE, the address space hasn't been confined by the general architecture for quite some time. It's awkward to do, so this practice isn't all that widespread (afaik), but it exists.
PAE is not practical in gaming (as stated repeatedly). The latency is too high and framerates plummet. It's like going into the 3.5-4.0 GiB territory of a GTX 970.

The point of mentioning it is that it represents a watershed moment. When games were developed for 32-bit, their memory usage was very restricted. The moment games switched to 64-bit, suddenly there was memory available, so developers sought to use it. The Fury X marks the transition. 4 GiB was okay then, but it definitely isn't okay now, especially in premium cards.

Just look at the response to this thread. All but two people, by my count, are scoffing at the notion of a 3 GiB 2060. It's sad that yields are so low that they feel they need to debut four extra models of sub-par cards under the same brand.
Posted on Reply
#184
CandymanGR
That's because PAE on 32-bit CPUs cannot access all physical memory at once. So it virtualizes the memory space in pages, then uses paging and segmentation to access all available RAM (in parts). But this introduces lots of wait states every time the processor needs to access data on a different "memory page". At least that is my understanding.
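Roughly, classic (non-PAE) x86 paging splits an address like this (a sketch assuming the standard 4 KiB pages; the example address is arbitrary):

```python
# Classic (non-PAE) x86 paging splits a 32-bit virtual address into
# 10 bits of page-directory index, 10 bits of page-table index, and a
# 12-bit offset within the 4 KiB page.

def split_virtual_address(va):
    """Return (directory, table, offset) indices for a 32-bit address."""
    assert 0 <= va < 2**32
    directory = (va >> 22) & 0x3FF   # top 10 bits
    table = (va >> 12) & 0x3FF       # next 10 bits
    offset = va & 0xFFF              # low 12 bits
    return directory, table, offset

d, t, o = split_virtual_address(0xC0384ABC)
print(d, t, o)   # 768 900 2748
```

Each page-directory or page-table step is an extra memory lookup, which is where the translation overhead (absent a TLB hit) comes from.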
Posted on Reply
#185
RichF
FordGT90Concept said:
It's sad that yields are so low they feel they need to debut four extra models of sub-par cards under the same brand.
There are other incentives for doing that. Yields, for example, don't explain things like GPUs that came with much more VRAM than they could put to use; GPUs with large amounts of slow VRAM and others, with the same chip, that have much less but much faster VRAM (e.g. 2 GB DDR3 on one and 512 MB GDDR5 on the other); packaging that makes the GPU seem powerful and useful for serious gaming; numbers that make the GPU sound more powerful than the previous generation... you know, the whole sad bag of tricks.

And tricks are what they are. They're not merely a matter of efficiently dealing with yield problems. Far from it. There are plenty of ways to deal with yields that don't involve intentionally confusing the consumer. But, that's how consumers are parted with more money than they otherwise would be. That, of course, is the entire point of the business of advertising.

The primary reason to sell two, or three, or fifteen different specs under the same model number (e.g. 1060) is to confuse the consumer. This is why, for instance, Sapphire sold Vega cards with vapor chambers (and got them reviewed), then sold basically identical cards to consumers without the vapor chambers. Bait-and-switch deception, in many forms.
Posted on Reply
#186
efikkan
CandymanGR said:
A 32-bit CPU cannot address a 64-bit memory space, and you need 64-bit addressing in order to have access to more than 4 gigabytes of memory.

There is NO way for memory above 4 GB to be addressable by a 32-bit CPU, not even with virtual memory paging. At least not from an x86 CPU.
This is a common confusion, even among many engineers, unfortunately. You are mixing address width with register width. While these two can be the same width, CPUs and software can certainly work around a mismatch between the two.
The old 8086 (16-bit) had a 20-bit address bus. It had to use two registers to specify a memory address. This is extra overhead, but completely achievable.
Another example is the old 8-bit 6502 (and derivatives), famous for the Commodore 64, Atari 2600, Apple II, and NES. This 8-bit chip had a 16-bit address width, allowing direct access to 64 kB. Machines like the Commodore 64 employed a technique called bank switching to extend this even further.
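The 8086 scheme is easy to sketch (the helper function here is hypothetical, just to show the arithmetic; the segment and offset values are illustrative):

```python
# The 8086's real-mode trick: a 20-bit physical address is formed from
# two 16-bit registers as segment * 16 + offset.

def real_mode_address(segment, offset):
    """Combine 16-bit segment and offset into a 20-bit physical address."""
    assert 0 <= segment <= 0xFFFF and 0 <= offset <= 0xFFFF
    return (segment << 4) + offset

# The classic text-mode framebuffer lived at segment 0xB800:
print(hex(real_mode_address(0xB800, 0x0000)))  # 0xb8000

# Different segment:offset pairs can alias the same physical byte:
print(real_mode_address(0x1000, 0x0010) == real_mode_address(0x1001, 0x0000))  # True
```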

CandymanGR said:
And I also told you "it doesn't work well, not even with virtual memory paging", but you also act as if nothing happened. Those CPUs cannot "see" the whole memory at once. Memory paging sucks; it's ancient technology. That's why we went to x86-64.
The fact police need to correct you again ;)
Memory paging is not ancient, nor is it outdated in any way. Paging must not be confused with swapping/pagefiles, which is when memory pages are moved to another storage medium. Paging is just the division of memory into sections organized into continuous virtual address spaces for each application; it's essential for multitasking operating systems.

CandymanGR said:

There is NO way to cover the deficit of cutting the memory bus by one third with higher clocks, because GDDR5 memory has limits on the speeds it can achieve.
Well, it might not cover all of it, but it might not have to. Turing cards in general already have much more memory bandwidth than Pascal, so they have a lot of headroom.
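For reference, the back-of-envelope math (the per-pin rates below are the published reference-board figures, as far as I know):

```python
# Memory bandwidth is just bus width times effective per-pin data rate:
#   GB/s = (bus_width_bits / 8) * gbps_per_pin

def bandwidth_gbs(bus_width_bits, gbps_per_pin):
    """Peak memory bandwidth in GB/s."""
    return bus_width_bits / 8 * gbps_per_pin

gtx_1080 = bandwidth_gbs(256, 10)  # GDDR5X at 10 Gbps -> 320 GB/s
rtx_2080 = bandwidth_gbs(256, 14)  # GDDR6 at 14 Gbps  -> 448 GB/s
gtx_1060 = bandwidth_gbs(192, 8)   # GDDR5 at 8 Gbps   -> 192 GB/s
print(gtx_1080, rtx_2080, gtx_1060)
```

Same 256-bit bus on the 1080 and 2080; the jump comes entirely from GDDR6's higher per-pin rate.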
Posted on Reply
#187
bug
FordGT90Concept said:
PAE is not practical in gaming (as stated repeatedly). The latency is too high and framerates plummet. It's like going into the 3.5-4.0 GiB territory of a GTX 970.

The point of mentioning it is that it represents a watershed moment. When games were developed for 32-bit, their memory usage was very restricted. The moment games switched to 64-bit, suddenly there was memory available, so developers sought to use it. The Fury X marks the transition. 4 GiB was okay then, but it definitely isn't okay now, especially in premium cards.

Just look at the response to this thread. All but two people, by my count, are scoffing at the notion of a 3 GiB 2060. It's sad that yields are so low that they feel they need to debut four extra models of sub-par cards under the same brand.
CandymanGR said:
That's because PAE on 32-bit CPUs cannot access all physical memory at once. So it virtualizes the memory space in pages, then uses paging and segmentation to access all available RAM (in parts). But this introduces lots of wait states every time the processor needs to access data on a different "memory page". At least that is my understanding.
My only assertion here was that 32-bit CPUs can use more memory than the 32-bit address space allows.
Posted on Reply
#188
lexluthermiester
bug said:
My only assertion here was that 32-bit CPUs can use more memory than the 32-bit address space allows.
Which is correct. There are many ways to map memory beyond the native addressing limits of a CPU.
Posted on Reply
#189
CandymanGR
Yes, but you cannot access all the memory at once.
Memory paging sucks and IT IS ancient.

Of course I don't confuse MEMORY SEGMENTATION AND PAGING with virtual memory. I think YOU do.

A 32-bit CPU cannot USE more memory; it just segments the RAM into 32-bit memory pages! This is NOT the same as x64 memory addressing. JESUS! JESUS!!!!!!!!!!!!!

efikkan said:
This is a common confusion, even among many engineers, unfortunately. You are mixing address width with register width. While these two can be the same width, CPUs and software can certainly work around a mismatch between the two.
The old 8086 (16-bit) had a 20-bit address bus. It had to use two registers to specify a memory address. This is extra overhead, but completely achievable.
Another example is the old 8-bit 6502 (and derivatives), famous for the Commodore 64, Atari 2600, Apple II, and NES. This 8-bit chip had a 16-bit address width, allowing direct access to 64 kB. Machines like the Commodore 64 employed a technique called bank switching to extend this even further.
Funny. You say it is not ancient tech, yet you give examples of ancient processors, like the 8086 or the MOS 6502. And the Motorola 68000 had a 16-bit bus but 32-bit memory and data registers. So? You are comparing ancient architectures that had no performance hit whatsoever from using memory segmentation and bank switching, because the CPU itself was so slow!

efikkan said:
The fact police needs to correct you again ;)
Wooooooow... really? So you are the... tech police here, who are always right while the others are wrong, and you correct them? No shit? Really? Do you have more jokes like that?

efikkan said:
Memory paging is not ancient, nor is it outdated in any way. Paging must not be confused with swapping/pagefiles, which is when memory pages are moved to another storage medium. Paging is just the division of memory into sections organized into continuous virtual address spaces for each application; it's essential for multitasking operating systems.
You don't even KNOW what memory paging and segmentation are, do you? Read again. I am talking about 32-bit memory addressing vs 64-bit memory addressing, and you and bug are saying whatever comes to your minds. And if PAE were such a panacea as you and bug imply, we would still be using 32-bit CPUs. We went to x86-64 FOR A REASON. Do you know what it was?
Posted on Reply
#190
bug
@CandymanGR At this point I would suggest you either drop it or try to be more concise about what you're trying to say.
Posted on Reply
#191
CandymanGR
efikkan said:

Well, it might not cover all of it, but it might not have to. Turing cards do in general have much more memory bandwidth than Pascal already, so they have a lot of headroom.
Turing cards have more bandwidth? Are you sure? Does an RTX 2080 have more bandwidth than a GTX 1080?
Posted on Reply
#192
95Viper
Get back on topic.
Stop the side topic bickering.

Thank you and try to have a nice day.
Posted on Reply
#193
CandymanGR
bug said:
@CandymanGR At this point I would suggest you either drop it or try to be more concise about what you're trying to say.
Seriously?

Really? REALLY? Ok then.

32-bit CPUs cannot "see" and address the whole physical memory at once (if it is more than 4 GB). Only 64-bit CPUs can.

PAE is NOT the same as x64 memory addressing, and it CANNOT use all the memory at once (if more than 4 GB). Anyone who says otherwise is ignorant and should stop pretending he is... tech police. If someone still keeps insisting on that, he has no fucking clue.

Memory paging and segmentation, bank switching and other ancient shit are for history lessons. Not for 2019 tech.
And multitasking hasn't needed memory paging since 1992, because of memory protection!!!!!!

Turing does not have more bandwidth. Bandwidth is the combination of bus width and memory speed. It is not... MAGIC, as some "experts" here believe.

Some wannabe experts should really reconsider their opinion about themselves.

Covered?
Posted on Reply
#194
FordGT90Concept
"I go fast!1!11!1!"
Oh look! A six-year-old game (Max Payne 3) is using 3.2 GiB of VRAM at 1920x1200!
Posted on Reply
#195
efikkan
I don't know if anyone has mentioned it, but if the rumor of 6 variants of the RTX 2060 is true, perhaps some might be third-world editions™ (similar to the GTX 1060 5 GB)?
As mentioned, I'm not a fan of this, due to the naming confusion.
Posted on Reply
#196
FordGT90Concept
"I go fast!1!11!1!"
Perhaps but 4 GiB and 3 GiB? GDDR5 and GDDR6?

A 2060 3 GiB GDDR6 and 6 GiB GDDR6 makes sense (albeit stupid on the 3 GiB SKU) for a western release, and a 2060 4 GiB GDDR5 for internet cafes.

Maybe two of these variants are actually 2050s?

Edit: No... the list is all of them exclusively for Gigabyte. Only two logical conclusions:
a) the rumor is wrong or
b) Gigabyte has lost its marbles.

Having that many SKUs to support doesn't make business sense.
Posted on Reply
#197
Tsukiyomi91
@FordGT90Concept it doesn't really make sense unless Gigabyte thinks it's doing everyone a favor that no one asked for...
Posted on Reply
#198
lexluthermiester
CandymanGR said:
An RTX 2080 has more bandwidth than a GTX 1080?
Yes, it does. A lot. So it would be safe to conclude that a 2060 is going to have a similar increase compared to a 1060.

FordGT90Concept said:
Oh look! A 6 year old game (Max Payne 3) is using 3.2 GiB VRAM at 1920x1200!
Settings?
Posted on Reply
#199
FordGT90Concept
"I go fast!1!11!1!"
lexluthermiester said:
Settings?
Max (har har) except for MSAA (4x) and tessellation (off).
Posted on Reply
#200
gamerman
For mid-range gaming, meaning FHD resolution, 3 GB of memory is more than enough, if someone doesn't know it.

And that 3 GB 2060 is aimed at exactly that target; it's still the fastest GPU for that, with 3 GB.

So if you're using an FHD monitor, the 2060 with 3 GB of memory is the best choice, because you get near 50 fps in almost all games at a low price.
Posted on Reply