Friday, December 18th 2020

NVIDIA GeForce RTX 3060 to Come in 12GB and 6GB Variants

NVIDIA could take a similar approach to sub-segmenting the upcoming GeForce RTX 3060 as it did for the "Pascal"-based GTX 1060, according to a report by Igor's Lab. Mr. Wallossek predicts a mid-January launch for the RTX 3060 series, possibly on the sidelines of the virtual CES. NVIDIA could develop two variants of the RTX 3060, one with 6 GB of memory and the other with 12 GB. Both the RTX 3060 6 GB and RTX 3060 12 GB probably feature a 192-bit wide memory interface. This would make the RTX 3060 series the spiritual successor to the GTX 1060 3 GB and GTX 1060 6 GB, although it remains to be seen whether the segmentation is limited to memory size or also extends to the chip's core configuration. The RTX 3060 series will likely go up against AMD's Radeon RX 6700 series, with the RX 6700 XT rumored to feature 12 GB of memory across a 192-bit wide memory interface.
Source: Igor's Lab
Add your own comment

126 Comments on NVIDIA GeForce RTX 3060 to Come in 12GB and 6GB Variants

#76
Vya Domus
It's pretty insane that after roughly 80 years of computing you still have people arguing over whether or not more memory is needed.

Your average computer now has about 10^8 times more memory than one from the '40s; that works out to an order of magnitude more memory every decade. Imagine having someone from that era read this forum.

Never buy the option with more memory guys, it's clearly a bad idea.
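For what it's worth, the growth arithmetic above checks out; a quick sketch (the 10^8 figure is the poster's rough estimate, not a measured value):

```python
import math

# Sanity check of the claim above: roughly 10^8x more memory over
# the ~80 years (8 decades) since the 1940s.
growth_factor = 1e8
decades = 8

# Orders of magnitude gained per decade
per_decade = math.log10(growth_factor) / decades
print(per_decade)  # -> 1.0, i.e. one order of magnitude per decade
```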
Posted on Reply
#77
efikkan
lexluthermiester
History always proves people with opinions like yours wrong. So keep thinking what you want and the rest of us will buy cards with the higher amount of ram and get more use out of them. Yes, yes.
Yeah right. In reality, those cards usually become obsolete for any decent frame rates at those detail levels.

Even you can't deny that using more VRAM within a single frame at the same frame rate would require more memory bandwidth. We know current cards are often constrained by bandwidth already, so throwing more VRAM at them for "future proofing" without also adding more bandwidth is illogical. :kookoo:
Vya Domus
It's pretty insane that after roughly 80 years of computing you still have people arguing over whether or not more memory is needed.

Your average computer now has about 10^8 times more memory than one from the '40s; that works out to an order of magnitude more memory every decade. Imagine having someone from that era read this forum.

Never buy the option with more memory guys, it's clearly a bad idea.
Cut it with your straw man arguments. This is just trolling.
The argument is whether more memory can be utilized for the intended purposes.
Posted on Reply
#79
windwhirl
efikkan
Because it's a midrange card which doesn't need more.

I've seen No Man's Sky use well over 7 GB of VRAM. It starts at 4 or 5 GB, but it ramps up rather quickly once you start exploring. The screenshot shows the Radeon Overlay Metrics, with over 6 GB of VRAM either in use or allocated after a few minutes in-game, at 1080p.

To be fair, I do have NMS fully maxed out. But it shows that games are starting to push it, and that's why I feel like 6 GB is cutting it too close.
Vya Domus
Never buy the option with more memory guys, it's clearly a bad idea.
There might be exceptions, though. IIRC, there were a few Radeon cards a long time ago that came out with double the usual memory, but with the same bandwidth, that suffered performance loss because of that.
Posted on Reply
#80
efikkan
windwhirl
I've seen No Man's Sky use well over 7 GB of VRAM.
As I said, allocated memory isn't the same as needed memory.
The proper way to test whether a card has enough memory is a proper review. If it suffers from too little, the stutter will be severe. If it does well, it's very likely to do well for 3+ years going forward.
windwhirl
To be fair, I do have NMS fully maxed out, though. But it shows that games are starting to push it, and that's why I feel like 6 GB is cutting it too close.
Feel is the key word here. Such discussions should be based on rational arguments, not feelings. ;)
Cards manage memory differently, and newer cards do it even better than the RX 580.
While there might be outliers that manage memory badly, games in general have a similar usage pattern across memory capacity (used within a frame), bandwidth, and computational workload. Capacity and bandwidth especially are tied together, so if you need to use much more memory in the future, you also need more bandwidth. There is no escaping that, no matter how you feel about it.
Posted on Reply
#81
Vya Domus
windwhirl
there were a few Radeon cards a long time ago that came out with double the usual memory, but with the same bandwidth, that suffered performance loss because of that.
That's not possible; it couldn't have been, because they had double-density memory chips.
Posted on Reply
#82
Solid State Soul ( SSS )
Freaking Xbox Series S has 10 GB of GDDR6 and is priced at $300; going below 8 GB in 2021 is below average.
Posted on Reply
#83
efikkan
Vya Domus
That's not possible, it couldn't have been because they had double density memory chips.
That's certainly possible if the memory is managed badly. If you double the density, you need to spread out the load to maintain the same bandwidth and latency.
I certainly hope cards do this properly.
Solid State Soul ( SSS )
Freaking Xbox Series S has 10 GB of GDDR6 and is priced at $300; going below 8 GB in 2021 is below average.
That's for the system and graphics combined. A typical PC has 16GB system memory or more.
Posted on Reply
#84
lexluthermiester
efikkan
In reality, those cards usually become obsolete for any decent frame rates at those detail levels.
Another narrow-focused and incorrect opinion not supported by historical information. Why do you bother with that argument? Don't you want more memory on your GPU? Or is it pride?
efikkan
Even you can't deny the fact that using more VRAM within a single frame at the same frame rate would require more memory bandwidth?
Sure, that's one way of looking at it. A little oversimplistic, but ok let's run with that notion for a moment. Here's a key point: most VRAM bandwidth is under-utilized. So pushing it further isn't really going to slow things down much, if at all.
efikkan
Well we know current cards often are constrained by bandwidth already, so throwing more VRAM at them for "future proofing" without also adding more bandwidth is illogical.
Not really. Current gen cards have shown conclusively that they have power to spare, as they can run max settings at acceptable performance. This means they have room for adjustment in both VRAM capacity and bandwidth.
efikkan
If it suffers from too little, the stutter will be severe.
Not in my experience, which is extensive.
efikkan
A typical PC has 16GB system memory or more.
True!
Posted on Reply
#85
bug
lexluthermiester
While that's true, the extra VRAM does enable extra functionality. Always has.
What do you mean? TPU reviews always look at the highest settings. What extra functionality is there?
Posted on Reply
#86
Rei
Solid State Soul ( SSS )
Freaking Xbox Series S has 10 GB of GDDR6 and is priced at $300; going below 8 GB in 2021 is below average.
efikkan
That's for the system and graphics combined. A typical PC has 16GB system memory or more.
And let's not forget that out of the 10 GB of GDDR6 RAM, the Xbox Series S only has 7.5 GB usable, as the other 2.5 GB is reserved for the OS and other non-game applications. And that 7.5 GB is, as efikkan said, "system and graphics combined," so the actual VRAM offering is much less.
Posted on Reply
#87
lexluthermiester
bug
What do you mean? TPU reviews always look at the highest settings. What extra functionality is there?
You mean beyond future games and non-gaming functionality? Let's remember, GPUs are not just designed and built for the current moment, but to pave the way for future computing tasks and expanded capabilities.
Rei
And let's not forget that out of the 10 GB of GDDR6 RAM, the Xbox Series S only has 7.5 GB usable, as the other 2.5 GB is reserved for the OS and other non-game applications. And that 7.5 GB is, as efikkan said, "system and graphics combined," so the actual VRAM offering is much less.
Yeah, and it & the PS5 are going to suffer and lag behind the PC market. The phrase "PC Master Race" wasn't coined by some smartass taking a jab at PC users; it was coined to illustrate that PCs have always blazed the trails of advancement and performance. No console that ever had an edge over PC tech maintained that edge for very long. The current gen consoles are already well & truly outpaced and being "left in the dust".
Posted on Reply
#88
bug
lexluthermiester
You mean beyond future games and non-gaming functionality?
That sounds a bit like circular reasoning ;)

There are so few instances where a card got obsoleted because of VRAM, I can probably count them on my fingers. GPU HP is what determines if a video card will age nicely, I pay little attention to VRAM.
lexluthermiester
Let's remember, GPU's are not just designed and built for the current moment but to pave the way for future computing tasks and allowing for expanded capabilities.
Actually, you're wrong on this one. If it were up to GPU makers, you'd be upgrading with each and every generation they make, much like people do with their smartphones.
It's just that those of us that know more about GPUs are able to pick those having more chances to last us 2 or 3 generations.

That is probably why I tend to look down on people that put future proofing at the top. Video cards (and products in general, sad as that is) are just not built for that. I'll take a "future proof" video card, given a chance, but I will not fault a video card that works well today, simply because it might not work as well tomorrow.
Posted on Reply
#89
Rei
lexluthermiester
No console that ever had an edge over PC tech maintained that edge for very long.
Understandable. The only thing a recent console can do to become faster is upgrade the storage drive, and even that has its limits. You can't upgrade the GPU, CPU, or RAM.
lexluthermiester
The current gen consoles are already well & truly outpaced and being "left in the dust".
While already outpaced on release by mid-range to high-end PCs, I wouldn't call it "left in the dust". Games are still playable, but that's all there is to it. There are no graphical upgrades on the console side, while on the PC side, AAA games released next year will be graphically better than this year's AAA games due to the expandability of PC GPUs.
Posted on Reply
#90
bug
Rei
Understandable. The only thing a recent console can do to become faster is upgrade the storage drive, and even that has its limits. You can't upgrade the GPU, CPU, or RAM.

While already outpaced on release by mid-range to high-end PCs, I wouldn't call it "left in the dust". Games are still playable, but that's all there is to it. There are no graphical upgrades on the console side, while on the PC side, AAA games released next year will be graphically better than this year's AAA games due to the expandability of PC GPUs.
Games may be playable, but they always sacrifice (lots of) detail to be playable. PC GPUs are just more powerful and the lower overhead of programming for a console can only help so much.
Also, people fret about DLSS not being "true 4k", but you barely hear a word about how consoles are upscaling all the time, yet they are advertised now as "4k capable".

There is one thing and one thing only where consoles are a better pick than PCs: convenience/ease of use. Not capability. Of any kind.
Posted on Reply
#91
efikkan
lexluthermiester
Another narrow-focused and incorrect opinion not supported by historical information. Why do you bother with that argument? Don't you want more memory on your GPU? Or is it pride?
Actually not. It's very unrealistic to expect such a card to perform in games 3+ years down the road at the same frame rates as today. This card is not likely to run 1440p or higher at 120 Hz at high details years from now, which means you will be limited by other factors before VRAM, unless you think games will use more VRAM without needing bandwidth for it.

I wouldn't mind having extra VRAM if it didn't affect the price. The problem is that it does, and we fool people into paying extra for "future proofing" when in reality the card will become obsolete just as fast. You can go ahead and waste all the money you want, but don't mislead others. If there were no downsides, I'd take 4x the VRAM. There is no pride here; I'm just being pragmatic.

Assuming this card will have a 192-bit memory bus and 14 Gbps memory, this will yield 336 GB/s. Now assume a game will read 6 GB during a single frame (not that a game would read the entire VRAM per frame), that would yield a maximum of 56 FPS, so the argument that future games will need more than 6 GB for this card is highly unrealistic, unless they mismanage memory terribly.
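As a sketch, that ceiling can be reproduced with a quick back-of-the-envelope calculation (the 192-bit bus and 14 Gbps memory are rumored specs, and reading the full 6 GB once per frame is a deliberate worst-case assumption):

```python
# Back-of-the-envelope FPS ceiling implied by memory bandwidth,
# assuming the rumored 192-bit bus with 14 Gbps GDDR6, and that a
# game reads the entire 6 GB VRAM pool once per frame (worst case).
bus_width_bits = 192
data_rate_gbps = 14  # Gbps per pin

# GB/s = (bits per transfer * transfers per second) / 8 bits per byte
bandwidth_gb_s = bus_width_bits * data_rate_gbps / 8  # 336.0 GB/s

vram_read_per_frame_gb = 6
fps_ceiling = bandwidth_gb_s / vram_read_per_frame_gb
print(f"{bandwidth_gb_s:.0f} GB/s -> {fps_ceiling:.0f} FPS ceiling")  # 336 GB/s -> 56 FPS ceiling
```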
lexluthermiester
Sure, that's one way of looking at it. A little oversimplistic, but ok let's run with that notion for a moment. Here's a key point: most VRAM bandwidth is under-utilized. So pushing it further isn't really going to slow things down much, if at all.
If that were true, overclocking memory would yield no significant performance improvement.
lexluthermiester
Not in my experience, which is extensive.
So I guess others' extensive experience in graphics doesn't count, then.
When a GPU truly runs out of VRAM, it starts swapping, and swapping will cause serious stutter, not once but fairly constantly. If the shortage is large enough, some games will even display popping textures, etc. Anyone doing a serious review of the card will notice if this happens.
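As an illustration of what a reviewer would look for, here is a minimal sketch of flagging swap-induced stutter from a frame-time log; the function name, the toy data, and the metric itself are hypothetical, not what review sites actually run:

```python
# Hypothetical sketch: flag VRAM-swap-style stutter from a frame-time
# log (in milliseconds). Swapping shows up as recurring long frames,
# so compare the worst frames against the typical frame rather than
# looking at average FPS.
def stutter_ratio(frame_times_ms):
    """Ratio of the 99th-percentile frame time to the median."""
    times = sorted(frame_times_ms)
    median = times[len(times) // 2]
    p99 = times[min(len(times) - 1, int(len(times) * 0.99))]
    return p99 / median

smooth = [16.7] * 99 + [17.0]          # steady ~60 FPS
swapping = [16.7] * 95 + [120.0] * 5   # periodic very long frames

print(stutter_ratio(smooth))    # ~1.02: no stutter
print(stutter_ratio(swapping))  # ~7.2: severe recurring stutter
```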
Posted on Reply
#92
Rei
bug
Also, people fret about DLSS not being "true 4k", but you barely hear a word
I barely heard a word about people fretting that DLSS is not "true 4K". Lucky me, i guess? :laugh:
While it's true, does it really matter? DLSS is the best thing since checkerboard upscaling.
Sadly, no use for me as you'll find out why by checking my bio's system specs.
Posted on Reply
#93
bug
efikkan
Actually not. It's very unrealistic to expect such a card to perform in games 3+ years down the road at the same frame rates as today. This card is not likely to run 1440p or higher at 120 Hz at high details years from now, which means you will be limited by other factors before VRAM, unless you think games will use more VRAM without needing bandwidth for it.

I wouldn't mind having extra VRAM if it didn't affect the price. The problem is that it does, and we fool people into paying extra for "future proofing" when in reality the card will become obsolete just as fast. You can go ahead and waste all the money you want, but don't mislead others. If there were no downsides, I'd take 4x the VRAM. There is no pride here; I'm just being pragmatic.

Assuming this card will have a 192-bit memory bus and 14 Gbps memory, this will yield 336 GB/s. Now assume a game will read 6 GB during a single frame (not that a game would read the entire VRAM per frame), that would yield a maximum of 56 FPS, so the argument that future games will need more than 6 GB for this card is highly unrealistic, unless they mismanage memory terribly.
I think that's what I was trying to say, only worded better.

Also @lexluthermiester 's point that you may be doing some compute, ML stuff is also valid. But that's rather rare and not really what was being discussed here. (Hell, with the "right" learning set, you can run out of 128GB VRAM.)
Posted on Reply
#94
lexluthermiester
bug
That sounds a bit like circular reasoning
I call it forward thinking, but whatever! :toast:
efikkan
If that were true, overclocking memory would yield no significant performance improvement.
Good point. It often doesn't.
efikkan
When a GPU truly runs out of VRAM, it starts swapping, swapping will cause serious stutter, not once, but fairly constantly.
Which is a perfect argument for more VRAM... Funny that...
bug
Also, people fret about DLSS not being "true 4k", but you barely hear a word about how consoles are upscaling all the time, yet they are advertised now as "4k capable".
Those who can and do use that feature don't really care, because it still looks good. However, some of us who can use it don't, because we don't like the way it looks, nor do we want the performance hit.
bug
But that's rather rare and not really what was being discussed here.
It's not as rare as you think it is, but it is a smaller market consideration. But what is applicable to everyone is advances in games that will use more than 8GB or 10GB of VRAM.
Posted on Reply
#95
RandallFlagg
Rei
And let's not forget that out of 10 GB GDDR6 RAM, Xbox Series S will only have 7.5 GB RAM usable as the other 2.5 GB RAM is used for OS & other non-game application. And that 7.5 GB RAM is as efikkan said: "system and graphics combined." so actual VRAM offering is much less.
Yeah, most of the benchmarks of the PS5 / Xbox Series X have shown they get about the same performance as a 1660. With the graphics detail up, they can't maintain 30 FPS and often run in the low-20 FPS range in Cyberpunk. They have to go into 'performance' mode, aka reduced detail, to get 60 FPS, and even then they can't maintain it.

A 1660 Ti by comparison gets an average of 43fps on this same game with ultra details, and doesn't have the luxury of dynamic resolution scaling that the xbox is using to maintain its fairly pathetic sub 30 fps in its visual mode.

This is the same story as I've seen with every console release. When new, they can compete with upper-midrange PC GPUs, but there is a lot of hype that they're faster than that. They aren't. All else being equal a 1660 Ti or Super looks to be equal to the pinnacle of next gen console performance.


www.techpowerup.com/review/cyberpunk-2077-benchmark-test-performance/5.html
Posted on Reply
#96
evernessince
efikkan
Allocated and utilized memory are not the same thing. GPUs are very good at compressing memory that is not currently in use, or is only partially used. During the lifecycle of a single frame, multiple buffers are compressed and expanded significantly, as they are used and then blanked during different render passes. This is why games can allocate 8-9 GB on a 6 GB card without swapping.
The example and graph I provided earlier show used VRAM, not allocated. It states as much at the top of the graphic.

Yes, architecture, drivers, and the game engine can compensate to an extent for a lack of VRAM but do you really buy a graphics card because it might have enough VRAM for current games and likely won't for future titles?

Look at HWUB's re-review of the 1060 3GB after only 18 months:

The big difference here is that you are not buying a $200 card intended for those on a budget; the 3060 is going to be at least $100 more, if not close to double the price.

This argument was acceptable for the 1060 3GB due to its price; it's not for the 3060.
Posted on Reply
#97
Caring1
Solid State Soul ( SSS )
This is GTX 1060 3gb vs 6gb situation all over again ...
The 1060 3GB is irrelevant in this discussion as it uses a different chip than the 6GB variant.
Posted on Reply
#98
newtekie1
Semi-Retired Folder
windwhirl
I've seen No Man's Sky use well over 7 GB of VRAM. It starts at 4 or 5 GB, but it ramps up rather quickly once you start exploring. The screenshot shows the Radeon Overlay Metrics, with over 6 GB of VRAM either in use or allocated after a few minutes in-game, at 1080p.

To be fair, I do have NMS fully maxed out. But it shows that games are starting to push it, and that's why I feel like 6 GB is cutting it too close.
It has been shown, even here at TPU, that allocated VRAM =/= used VRAM. There is no way to tell how much of the allocated VRAM is actually being used. Game developers have gotten lazy; they just cram textures into VRAM regardless of whether they are actually being used or not. It seems to have become the norm for games. This leads to large allocated VRAM numbers but no real performance decrease on cards that have less VRAM.
Posted on Reply
#99
Condelio
Hey, nice discussion over here. Although GPU conversations tend to escalate too much for my taste haha... sometimes I have the same doubts about RAM being enough. When the GTX 1070 launched I bought it with confidence because 8 GB looked almost overkill back then. More RAM obviously doesn't hurt, but I followed the 1060 3GB vs 1060 6GB drama and, as time passed, 5 FPS seemed the most common difference, and now I think they are both kind of outdated. In the end, for me, the deciding factor is money, and in my country, Argentina, the 1060 6GB cost way more than the 1060 3GB, almost 80% more at times if I'm remembering correctly.

Now I'm at a crossroads. An RTX 3060 Ti costs more or less 890 USD here, an RTX 3070 costs 1000 USD, and an RTX 3080 costs 1622 USD. Prices are inflated all over the world, but prices in Argentina are even more inflated. I don't play much, I work too much, but when I do, I have a 4K TV that I would obviously like to try with CP2077 with RT enabled. I'd like to buy the 3070, but now the RAM does seem to be limiting a little bit? The 3080 seems to be the minimum premium experience to my eyes. I don't like to overspend, but I'm seeing too much difference between the x70 and x80. Maybe wait for a 3070 Ti with 16 GB of RAM? I'm seeing all cards struggling with CP2077, but the RTX 3080 at least is losing the battle with a little more dignity.

Any thoughts? I have the money already saved, but I'm making a big effort to wait a little bit for NVIDIA to counter AMD's RAM decisions.
Posted on Reply
#100
bug
newtekie1
It has been shown, even here at TPU, that allocated VRAM =/= used VRAM. There is no way to tell how much of the allocated RAM is actually being used. Game developers have gotten lazy, they just cram textures into VRAM regardless of if they are actually being used or not. It's become the norm for games it seems. This leads to large allocated VRAM numbers but no real performance decrease on cards that have less VRAM.
That's not necessarily lazy. If the VRAM is there, using it to preload is actually smart. It won't make a difference in your FPS; it may save you a bit of stutter here and there, or when changing zones, but that's it. As pointed out above, if you're really out of VRAM, you'll see either a dramatic FPS drop or stutter galore. Or both.
Condelio
Hey, nice discussion over here. Although GPU conversations tend to escalate too much for my taste haha... sometimes I have the same doubts about RAM being enough. When the GTX 1070 launched I bought it with confidence because 8 GB looked almost overkill back then. More RAM obviously doesn't hurt, but I followed the 1060 3GB vs 1060 6GB drama and, as time passed, 5 FPS seemed the most common difference, and now I think they are both kind of outdated. In the end, for me, the deciding factor is money, and in my country, Argentina, the 1060 6GB cost way more than the 1060 3GB, almost 80% more at times if I'm remembering correctly.

Now I'm at a crossroads. An RTX 3060 Ti costs more or less 890 USD here, an RTX 3070 costs 1000 USD, and an RTX 3080 costs 1622 USD. Prices are inflated all over the world, but prices in Argentina are even more inflated. I don't play much, I work too much, but when I do, I have a 4K TV that I would obviously like to try with CP2077 with RT enabled. I'd like to buy the 3070, but now the RAM does seem to be limiting a little bit? The 3080 seems to be the minimum premium experience to my eyes. I don't like to overspend, but I'm seeing too much difference between the x70 and x80. Maybe wait for a 3070 Ti with 16 GB of RAM? I'm seeing all cards struggling with CP2077, but the RTX 3080 at least is losing the battle with a little more dignity.

Any thoughts? I have the money already saved, but I'm making a big effort to wait a little bit for NVIDIA to counter AMD's RAM decisions.
I'd go for the 3070. But not right now. Give it till February or March so the prices will settle down first.
Posted on Reply
Add your own comment