
NVIDIA GeForce RTX 2060 to Ship in Six Variants Based on Memory Size and Type

Stop with this nonsense. 32-bit CPUs/OSes have NOTHING to do with memory capacity.

A 32-bit CPU cannot address a 64-bit memory space. And you need 64-bit addressing in order to access more than 4 gigabytes of memory.
There is NO way for memory above 4 GB to be addressable by a 32-bit CPU, not even with virtual memory paging. At least not on an x86 CPU.


Two options that I know of:
- Disable one memory controller and use 128-bit, possibly compensate with faster memory.
- Use an imbalanced memory configuration, like GTX 660/660 Ti.

There is NO way to cover the deficit of cutting one third off the memory bus with higher clocks, because GDDR5 has limits on the speeds it can achieve. That's why I think they are also using GDDR6 for the same model. Because IT MATTERS for this generation, even for mid-range. Probably the GDDR6 model will be much faster, maybe with a slightly different core config. I guess NVIDIA knows GDDR5 is not enough for what the core needs, but they don't give a damn. Milking the cow is the way for them. You would need a 50% increase in memory speed to cover the deficit.
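To put rough numbers on the bus-cut argument (the 8 Gb/s GDDR5 data rate below is an illustrative assumption, not a confirmed spec for any of these SKUs):

```python
# Peak memory bandwidth = bus width (bits) / 8 * per-pin data rate (Gb/s).
# The 8 Gb/s GDDR5 rate is an illustrative assumption, not a spec.

def bandwidth_gbs(bus_width_bits, data_rate_gbps):
    """Peak bandwidth in GB/s."""
    return bus_width_bits / 8 * data_rate_gbps

full = bandwidth_gbs(192, 8)    # full 192-bit bus at 8 Gb/s -> 192.0 GB/s
cut = bandwidth_gbs(128, 8)     # same chips on a 128-bit bus -> 128.0 GB/s

# Restoring the 192-bit figure on the narrower bus requires the data rate
# to rise by 50% (to 12 Gb/s), beyond typical GDDR5; GDDR6 can reach it.
required_rate = 8 * full / cut  # 12.0 Gb/s
```

Losing a third of the bus leaves two thirds of the bandwidth, so the clock has to go up by 3/2, i.e. 50%, to break even.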

It's so funny seeing you guys trying to defend something that sucks so hard. Really, some people here should consider a new career in comedy (that was a joke).
 
I struggle to comprehend what games you are playing then. I moved from a 4GB 290X to a 1070 a few months back. I only play one game, World of Tanks, and when they completely updated their game engine, overnight the 290X on ultra settings moved from an average 2.9 GB usage to 4.4 GB, and that game is hardly demanding even on ultra at my 2560 x 1080.
You don't have to update a game engine for that effect. Just upscale your textures 2x and you get 4x* the VRAM usage without actually improving quality.

*without factoring in compression
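The arithmetic behind that, assuming plain uncompressed RGBA8 textures (4 bytes per texel, no mipmaps):

```python
# Doubling texture resolution quadruples uncompressed size.
# Assumes RGBA8 textures: 4 bytes per texel, no mipmaps, no compression.

def texture_bytes(width, height, bytes_per_texel=4):
    return width * height * bytes_per_texel

base = texture_bytes(2048, 2048)     # 16 MiB
doubled = texture_bytes(4096, 4096)  # 64 MiB: 2x each dimension -> 4x memory
```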

A 32-bit CPU cannot address a 64-bit memory space. And you need 64-bit addressing in order to access more than 4 gigabytes of memory.
There is NO way for memory above 4 GB to be addressable by a 32-bit CPU, not even with virtual memory paging. At least not on an x86 CPU.

Ah, this misconception has been with us since the Athlon64 days. I suggest you look up PAE; the address space hasn't been confined by the general architecture for quite some time. It's awkward to do, so the practice isn't all that widespread (afaik), but it exists.
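For reference, the numbers involved: PAE on P6-era x86 widens physical addresses from 32 to 36 bits, while each process keeps a 32-bit virtual address space.

```python
# PAE in numbers: on P6-era x86 it widens physical addresses from 32 to
# 36 bits, but each process still gets a 32-bit (4 GiB) virtual space.

GiB = 1 << 30

phys_without_pae = (1 << 32) // GiB  # 4 GiB physical limit
phys_with_pae = (1 << 36) // GiB     # 64 GiB physical limit
virt_per_process = (1 << 32) // GiB  # still 4 GiB of virtual space
```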
 
Ah, this misconception has been with us since the Athlon64 days. I suggest you look up PAE; the address space hasn't been confined by the general architecture for quite some time. It's awkward to do, so the practice isn't all that widespread (afaik), but it exists.

Show me an example of a 64-bit application working on a 32-bit CPU then. There is no way to have a 32-bit application with a 64-bit address space on a 32-bit CPU.
There are no 64-bit memory registers on a 32-bit CPU.

Edit: If my style of writing feels aggressive, sorry. I am not attacking anyone, I just disagree with passion.
 
This is what happens when corporations feel no fear.

No fear of consumer retaliation for anti-consumer practices.

No fear of government oversight reining anti-consumer practices in (really the same thing, since governments are supposed to be people elected to do the people's work).

This is what happens when there is monopoly, duopoly, and quasi-monopoly.

The tech world has far too little competition in a lot of areas and this is what consumers get. If you don't like it you're not going to get anywhere by engaging with forum astroturfers. Organize and get political action.
 
Show me an example of a 64-bit application working on a 32-bit CPU then. There is no way to have a 32-bit application with a 64-bit address space on a 32-bit CPU.
There are no 64-bit memory registers on a 32-bit CPU.

Edit: If my style of writing feels aggressive, sorry. I am not attacking anyone, I just disagree with passion.
You refuse to educate yourself with the same passion, it would seem.
 
Thanks for your valuable input.
I gave you my input above: look up PAE and read about it. (And not because I'm too lazy to detail, but because it's a lot to read.)
You're acting as if that never happened.
 
I gave you my input above: look up PAE and read about it. (And not because I'm too lazy to detail, but because it's a lot to read.)
You're acting as if that never happened.

And I also told you "it doesn't work well, not even with virtual memory paging", but you also act as if nothing happened. Those CPUs cannot "see" the whole memory at once. Memory paging sucks, it's ancient technology. That's why we went to x86-64.
You forget that PAE may indeed support an extended memory range, but only in THEORY. Because in reality, the virtual address space of those CPUs (Pentium Pro) remained 32-bit. This changed with AMD's x86-64.

Edit: In any case, I won't say more about this, because I think it is off topic.
 
Ah, this misconception has been with us since the Athlon64 days. I suggest you look up PAE; the address space hasn't been confined by the general architecture for quite some time. It's awkward to do, so the practice isn't all that widespread (afaik), but it exists.
PAE is not practical in gaming (as stated repeatedly). The latency is too high, framerates plummet. It's like going into the 3.5-4.0 GiB territory of a GTX 970.

The point of mentioning it is that it represents a watershed moment. When games were developed for 32-bit, their memory usage was very restricted. The moment games switched to 64-bit, there was suddenly memory available, so developers sought to use it. Fury X marks the transition. 4 GiB was okay then, but it definitely isn't okay now, especially in premium cards.

Just look at the response to this thread. All but two people, by my count, are scoffing at the notion of a 3 GiB 2060. It's sad that yields are so low they feel they need to debut four extra models of sub-par cards under the same brand.
 
That's because PAE on 32-bit CPUs cannot access all physical memory at once. So it virtualizes the memory space in pages, then uses paging & segmentation to access all available RAM (in parts). But this introduces lots of wait states for the processor every time it needs to access data on a different "memory page". At least that is my understanding.
 
It's sad that yields are so low they feel they need to debut four extra models of sub-par cards under the same brand.
There are other incentives for doing that. Yields, for example, don't explain things like GPUs that came with much more VRAM than they could put to use, GPUs with large amounts of slow VRAM and others — with the same chip — that have much less but much faster VRAM (e.g. 2 GB DDR3 on one and 512 MB GDDR5 on the other), packaging that makes the GPU seem powerful and useful for serious gaming, numbers that make the GPU sound more powerful than the previous generation... you know... the whole sad bag of tricks.

And tricks are what they are. They're not merely a matter of efficiently dealing with yield problems. Far from it. There are plenty of ways to deal with yields that don't involve intentionally confusing the consumer. But, that's how consumers are parted with more money than they otherwise would be. That, of course, is the entire point of the business of advertising.

The primary reason to sell two, or three, or fifteen different specs with the same number name (e.g. 1060) is to confuse the consumer. This is why, for instance, Sapphire sold Vega cards with vapor chambers (and got them reviewed), then sold basically identical cards to consumers without the vapor chambers. Bait and switch deception, in many forms.
 
A 32-bit CPU cannot address a 64-bit memory space. And you need 64-bit addressing in order to access more than 4 gigabytes of memory.

There is NO way for memory above 4 GB to be addressable by a 32-bit CPU, not even with virtual memory paging. At least not on an x86 CPU.
This is a common confusion, even among many engineers, unfortunately. You are mixing address width with register width. While the two can be the same width, CPUs and software can certainly work around a mismatch between them.
The old 8086 (16-bit) had a 20-bit address bus. It had to use two registers to specify a memory address. This is extra overhead, but completely achievable.
Another example is the old 8-bit 6502 (and derivatives), famous for the Commodore 64, Atari 2600, Apple II and NES. This 8-bit chip had a 16-bit address width, allowing direct access to 64 kB. Machines like the Commodore 64 employed a technique called bank switching to extend this even further.
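A quick sketch of the 8086 scheme: two 16-bit registers combine into a 20-bit physical address as segment * 16 + offset.

```python
# 8086 real-mode addressing: two 16-bit registers form a 20-bit physical
# address as segment * 16 + offset.

def phys_8086(segment, offset):
    return (segment << 4) + offset

# The reset vector, segment 0xF000 : offset 0xFFF0:
assert phys_8086(0xF000, 0xFFF0) == 0xFFFF0  # a 20-bit address

# The highest reachable address, just over 1 MiB:
assert phys_8086(0xFFFF, 0xFFFF) == 0x10FFEF
```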

And I also told you "it doesn't work well, not even with virtual memory paging", but you also act as if nothing happened. Those CPUs cannot "see" the whole memory at once. Memory paging sucks, it's ancient technology. That's why we went to x86-64.
The fact police need to correct you again ;)
Memory paging is not ancient, nor is it outdated in any way. Paging must not be confused with swapping/the pagefile, which is when memory pages are moved to another storage medium. Paging is just the division of memory into sections organized into contiguous virtual address spaces for each application; it's essential for multitasking operating systems.
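A toy illustration of what paging actually is, with a made-up page table (a real MMU walks hardware page tables and raises a page fault on a miss):

```python
# Toy model of paged address translation (4 KiB pages). The page table
# below is made up purely for illustration.

PAGE_SIZE = 4096

page_table = {0: 7, 1: 3, 2: 42}  # virtual page number -> physical frame

def translate(virtual_addr):
    vpage, offset = divmod(virtual_addr, PAGE_SIZE)
    return page_table[vpage] * PAGE_SIZE + offset

# Virtual address 0x1234 sits in virtual page 1, which maps to frame 3:
physical = translate(0x1234)  # 3 * 4096 + 0x234
```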

There is NO way to cover the deficit of cutting one third off the memory bus with higher clocks, because GDDR5 has limits on the speeds it can achieve.
Well, it might not cover all of it, but it might not have to. Turing cards do in general have much more memory bandwidth than Pascal already, so they have a lot of headroom.
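Concretely, using the published bus widths and per-pin data rates of both flagships (RTX 2080: 256-bit at 14 Gb/s GDDR6; GTX 1080: 256-bit at 10 Gb/s GDDR5X):

```python
# Peak bandwidth = bus width (bits) / 8 * per-pin data rate (Gb/s).

def bandwidth_gbs(bus_width_bits, data_rate_gbps):
    return bus_width_bits // 8 * data_rate_gbps

rtx_2080 = bandwidth_gbs(256, 14)  # 256-bit, 14 Gb/s GDDR6  -> 448 GB/s
gtx_1080 = bandwidth_gbs(256, 10)  # 256-bit, 10 Gb/s GDDR5X -> 320 GB/s
```

Same bus width, but the faster GDDR6 gives Turing the extra headroom.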
 
PAE is not practical in gaming (as stated repeatedly). The latency is too high, framerates plummet. It's like going into the 3.5-4.0 GiB territory of a GTX 970.

The point of mentioning it is that it represents a watershed moment. When games were developed for 32-bit, their memory usage was very restricted. The moment games switched to 64-bit, there was suddenly memory available, so developers sought to use it. Fury X marks the transition. 4 GiB was okay then, but it definitely isn't okay now, especially in premium cards.

Just look at the response to this thread. All but two people, by my count, are scoffing at the notion of a 3 GiB 2060. It's sad that yields are so low they feel they need to debut four extra models of sub-par cards under the same brand.
That's because PAE on 32-bit CPUs cannot access all physical memory at once. So it virtualizes the memory space in pages, then uses paging & segmentation to access all available RAM (in parts). But this introduces lots of wait states for the processor every time it needs to access data on a different "memory page". At least that is my understanding.
My only assertion here was that 32-bit CPUs can use more than the 32-bit address space allows.
 
Yes, but you cannot access all the memory at once.
Memory paging sucks and IT IS ancient.

Of course I don't confuse MEMORY SEGMENTATION AND PAGING with virtual memory. I think YOU do.

A 32-bit CPU cannot USE more memory, it just segments the RAM into 32-bit memory pages! This is NOT the same as x64 memory addressing. JESUS! JESUS!!!

This is a common confusion, even among many engineers, unfortunately. You are mixing address width with register width. While the two can be the same width, CPUs and software can certainly work around a mismatch between them.
The old 8086 (16-bit) had a 20-bit address bus. It had to use two registers to specify a memory address. This is extra overhead, but completely achievable.
Another example is the old 8-bit 6502 (and derivatives), famous for the Commodore 64, Atari 2600, Apple II and NES. This 8-bit chip had a 16-bit address width, allowing direct access to 64 kB. Machines like the Commodore 64 employed a technique called bank switching to extend this even further.

Funny. You say it is not ancient tech, and then you give examples of ancient processors, like the 8086 or the MOS 6502. And the Motorola 68000 had a 16-bit bus but 32-bit memory and data registers. So? You compare ancient architectures that had no performance hit whatsoever from memory segmentation and bank-switched addressing, because the CPU itself was so slow!

The fact police needs to correct you again ;)
Wooooooow... really? So you are the... tech police here, who are always right while the others are wrong, and you correct them? No shit? Really? Do you have more jokes like that?

Memory paging is not ancient, nor is it outdated in any way. Paging must not be confused with swapping/the pagefile, which is when memory pages are moved to another storage medium. Paging is just the division of memory into sections organized into contiguous virtual address spaces for each application; it's essential for multitasking operating systems.

You don't even KNOW what memory paging and segmentation are, do you? Read again. I am talking about 32-bit memory addressing vs 64-bit memory addressing, and you and bug are saying whatever comes to your minds. And if PAE were such a panacea as you and bug imply, we would still be using 32-bit CPUs. We went to x86-64 FOR A REASON. Do you know what it was?
 
@CandymanGR At this point I would suggest you either drop it or try to be more concise about what you're trying to say.
 
Well, it might not cover all of it, but it might not have to. Turing cards do in general have much more memory bandwidth than Pascal already, so they have a lot of headroom.

Turing cards have more bandwidth? Are you sure? An RTX 2080 has more bandwidth than a GTX 1080?
 
Get back on topic.
Stop the side topic bickering.

Thank you and try to have a nice day.
 
@CandymanGR At this point I would suggest you either drop it or try to be more concise about what you're trying to say.
Seriously?

Really? REALLY? Ok then.

32-bit CPUs cannot "see" and address the whole physical memory at once (if it is more than 4 GB). Only 64-bit CPUs can.

PAE is NOT the same as x64 memory addressing and it CANNOT use all memory at once (if more than 4 GB). Anyone who says otherwise is ignorant and should stop pretending he is... tech police. If someone still keeps insisting on that, he has no fucking clue.

Memory paging and segmentation, bank switching and other ancient stuff is for history lessons, not for 2019 tech.
And multitasking hasn't needed memory paging since 1992, because of memory protection!

Turing does not have more bandwidth. Bandwidth is the combination of bus width and memory speed. It is not... MAGIC, as some "experts" here believe.

Some wannabe experts should really reconsider their opinion about themselves.

Covered?
 
Oh look! A six-year-old game (Max Payne 3) is using 3.2 GiB of VRAM at 1920x1200!
[screenshot: MaxPayne3.png]
 
I don't know if anyone mentioned it, but if the rumor of 6 variants of RTX 2060 is true, perhaps some might be third-world editions™? (similar to the GTX 1060 5GB)
As mentioned, I'm not a fan of this due to confusion with naming.
 
Perhaps but 4 GiB and 3 GiB? GDDR5 and GDDR6?

2060 3 GiB GDDR6 and 6 GiB GDDR6 makes sense (albeit stupid on the 3 GiB SKU) for western release and 2060 4 GiB GDDR5 release for internet cafes.

Maybe two of these variants are actually 2050s?

Edit: No... the list is all of them exclusively for Gigabyte. Only two logical conclusions:
a) the rumor is wrong or
b) Gigabyte has lost its marbles.

Having that many SKUs to support doesn't make business sense.
 
@FordGT90Concept it doesn't really make sense unless Gigabyte thinks it's doing a favor that no one is asking for...
 