
So is the 10 GB VRAM on the RTX 3080 still enough?

Yes, 12-16 GB is better, but we're discussing the 3080 10 GB. If this card had been made with 12 or 16 GB of VRAM from the beginning, there would be zero interest in discussing its VRAM size. And yes, 20 GB is too much: it will be reflected in the price and will make the RTX 3090 pointless.
...broken record mode On

Because if they gave users 12-16 GB from the beginning they couldn't ask $1,000+ for it. They wanted to create the hype of a $700 flagship with no availability and then sell the card they really want to sell, making their desired profit margin.

broken record mode Off...

They know most users will go crazy for the 20 GB over the 10 GB model, even though here on TPU most of us, sane and informed users, know about real VRAM usage.
 
Sorry lads and lasses, there won't be any 20 or 16GB cards...
At least not for now.
 
I'm gonna take a wild guess and say that the alleged TSMC refresh will have the Super naming and introduce double the vram of the non-super cards.
 
Not much time for that:

DigiTimes report suggests the Nvidia Ampere GPUs launching in late 2020 may be replaced by 5 nm Hopper GPUs one year later.
Apparently, Nvidia underestimated the impact of AMD's 7 nm GPUs and is now looking to phase out the upcoming Ampere GPUs faster,
as the green team has already pre-booked an important part of TSMC's 5 nm production capacity for 2021, when the Hopper GPUs are expected to hit the market.

https://www.notebookcheck.net/DigiT...5-nm-Hopper-GPUs-one-year-later.464133.0.html
 
I'm gonna take a wild guess and say that the alleged TSMC refresh will have the Super naming and introduce double the vram of the non-super cards.
Might also have something to do with Micron and GDDR6X supply.
 
No.

Games are always developed for consoles first, then scaled up for PC. With the next-gen 4K consoles having 16 GB of memory, 10 GB of VRAM is enough for up to 1440p gaming; at 4K you're likely to be limited years down the road, especially if you like to max out the textures.

The golden rule for a future-proof GPU, imo, is to buy one with the same amount of memory as the consoles or more. With 10 GB, maxing out the graphics at high resolutions will be limiting years down the road; anything lower than 13 GB of video memory will hold you back.

We already see some PC ports of current-gen games consume up to 8 GB of memory when maxed out, and that will only increase with true next-gen console games.
Considering the specs of Cyberpunk, I would say yes. Considering most people only keep GPUs for 2 or 3 years, I can't see 10 GB not being enough for that time span. Any longer than that is like asking how long a keg will last.
By the time it's not enough, GPU manufacturers will release a new generation of GPUs to take our money. The 3080 20 GB is just to show the competitor that if you have 16, Nvidia has 20. They tell you it's future-proof, but nothing is future-proof in PC. And if we're still comparing PC and console, it's not apples to apples: consoles are built to do gaming only. They can say 4K 120 fps, but it's always stated as "up to"; maybe the loading screen runs at 120 fps, but in real gaming, at true 4K with half the teraflops of an RTX 3080, do you think it will exceed 60 fps? That's a marketing gimmick, just like Z490 boards supporting PCIe Gen 4 while Intel releases a 500-series chipset not even a year later. What future-proofing?
 
I was one of the people saying it would probably be enough for this gen; it's looking like it's cutting it close at the moment with the recent game requirements being announced. The way to tell will be how big the difference in performance is between the 3080 and 6800 XT in games where they are evenly matched rasterization-wise. I still think it will be fine for 99% of cases, but only time will tell.
 
Well
Have you seen the new Watch Dogs specs for 4K ultra settings? 11 GB... Not even a year in and they force people who are crazy about specs to buy new cards again. Nvidia is playing us, while the RX 6800 XT knows how to kick Nvidia, and Nvidia fans...
 
That's because they tested it on a 2080 Ti 11 GB, the fastest card they had available to test with. Even if they got 30-series cards on launch day, there's no time to optimise the game for them.
 
Sorry if I'm pissed with Intel and Nvidia. A week ago I bought the Z490I Unify board, and yesterday I found out 500-series chipset motherboards are releasing in 2021, and I haven't even assembled my rig yet... The best spec is only today's talk; tomorrow is different.
 
That is how hardware releases work, and it's why you never want to pay too much for it.

It's why I'm always calmly waiting out the storm to buy at a very competitive price, usually long post-release. It's pretty comfy being half a gen behind the curve, or even a full gen, or buying into sub-top. You avoid lots of problems and buyer's remorse.

I'm gonna take a wild guess and say that the alleged TSMC refresh will have the Super naming and introduce double the vram of the non-super cards.

That was my initial take on the 10 GB card as well. I mean, it was clear as day that this was not going places, and not a day goes by without another confirmation that Ampere will turn obsolete faster than you can blink. I think Nvidia has realized it got lazy and complacent with its pioneering Turing release, which was really a shitload of silicon gone to waste with nothing to show for it, and then followed up with a product range on a grossly inferior node, forcing itself AGAIN into a larger die than the competition while not really being better at anything. They're going to have to do something. Refreshing Ampere might not be enough, and I think the Hopper rumor is credible in that sense.

Realistically, that's what they've got now. We can be all cheery about their added-value bullshit, but all of it is proprietary, so it's nothing you SHOULD care about. Remember PhysX, remember G-Sync: they've both gone the way of the dodo, or pretty much. It's fun if they have it alongside a normal GPU comparison, but that's all it really is. And that includes the additional RT performance, too: you can rest assured the focus on the consoles will push more dev budget to the approach that works everywhere and not just with RTX special sauce. They're all just bonus points that come on top of whatever a GPU should be doing: produce frames. And the simple fact is, AMD is much better at that right now, doing more for less in every possible way: power, die size, and even VRAM capacity for the mid-to-long term.
 
Do you think they planned this all along: giving us a shortage of supply so they can sell it for more, waiting for AMD to release their RX 6000 cards, and feeding us news that they're going to release a 20 GB version so buyers will wait for it? Because they knew it would be a 16 GB version, since the Xbox and PS5, using RDNA 2, have 16 GB.
 
No, I don't believe in conspiracies; I believe in the way markets work and respond. Competitors responding to each other is what we've seen with the Ampere launch and AMD's follow-up.

Nvidia wanted to pre-empt AMD to catch its buyers on mindshare, because they would otherwise start having doubts, as AMD has the better offer now. It's clear as day. They also try to use their RT and the comparison to Turing to show us they're somehow better, but in reality that only works because Turing was pretty shit to begin with; they also needed SUPER before it was a meaningful product line there.

Now, AMD has delivered and Nvidia is firing on all cylinders to keep the losses to a minimum. There is also a rumor mill with lots of absolute nonsense in it. Whether or not Nvidia planned a 16/20/whatever GB card is irrelevant until they themselves announce it, and they never did. What's happening now is that Nvidia is turning from leader into follower: they HAVE to implement stuff like RTX IO because AMD is leading us into a new console gen with fast access to storage, for example. And with regard to ray tracing, most titles will be coming out console-first even despite the presence of Turing.

The trend is clear: Nvidia wants to pre-empt developments happening at large and, in doing so, define the marketplace. They seem to be failing at it this time. 10 GB might well be enough, but there is this nagging thought that it probably isn't, a few years down the line, for the high resolutions it's made for, and yes, the console capacity is the writing on the wall. Ignoring that is living in denial; it's suboptimal at the very least.

The supply shortages... are shortages. It happens. We have a global pandemic, a bottleneck on fab capacity, Christmas holiday shopping, and a fight over the best nodes available. Also, there are too many of us on this planet, so you can readily expect more of this to happen in the future. No conspiracy involved; it's just humans being human.
 
10 GB will be plenty for 4K gaming; by the time it isn't anymore, the 3080 will be too slow for 4K anyway, and so will the 6800 XT.

A lot of VRAM is not going to help when the GPU is not capable. We have seen this many times.

The 3080 beats the 6800 XT easily at 4K, and this is even without DLSS enabled.

Remember that Ampere has Tensor Memory Compression, which can lower VRAM usage by up to 40%.


If you insist on maxing all new games out at 4K, you will simply be upgrading every 1-2 years anyway.



People really need to understand how VRAM allocation works.
Tons of game engines simply allocate most or all of the VRAM, yet use only a small percentage of that amount.

Even at 4K, I don't think there's any game that actually uses more than 8 GB (bandwidth matters too, though).
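
If you want to see what your own card reports while a game is running, here's a minimal sketch; it assumes an NVIDIA card and the nvidia-ml-py (pynvml) Python bindings, which is just my pick of tool, nothing from this thread. Keep in mind the number it prints is what the driver has allocated/made resident, not what the game actually touches each frame:

```python
# Minimal sketch: poll the VRAM figures the driver reports, via pynvml
# (assumes "pip install nvidia-ml-py", which provides the pynvml module).
# Caveat: "used" is memory that has been allocated / made resident on the card,
# NOT the working set a game actually touches per frame.
import time
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)         # first GPU in the system
try:
    for _ in range(10):                               # one sample per second, ~10 s
        mem = pynvml.nvmlDeviceGetMemoryInfo(handle)  # .total / .used / .free, in bytes
        print(f"allocated {mem.used / 2**30:.1f} GiB of {mem.total / 2**30:.1f} GiB")
        time.sleep(1)
finally:
    pynvml.nvmlShutdown()
```

The same caveat applies to overlays like Afterburner: an engine that grabs most of the card will show high "usage" even if it only streams a fraction of that per frame.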

Godfall will have a "texture pack" that requires 12 GB; you know, that AMD-sponsored game with horrible review scores.
VERY WEIRD that it requires 12 GB when the 3080 only has 10 GB, right? ;)

Do you remember Shadow of Mordor "ULTRA HD" texture pack? https://www.pcgamer.com/spot-the-di...rdor-ultra-hd-textures-barely-change-a-thing/

Back when AMD had more VRAM overall too? It used way more VRAM without improving the textures anyway.
Uncompressed vs compressed, but it ended up with the same visuals, haha.

GAME DEVS SHOULD KNOW how to do lossless compression, and the next-gen consoles won't have anywhere NEAR 10 GB of VRAM for "native 4K" (reality: dynamic res).

The XSX and PS5 get 16 GB of shared RAM, meaning graphics will get 8 GB TOPS, if not 5-6 GB...
 
GAME DEVS SHOULD KNOW how to do lossless compression, and the next-gen consoles won't have anywhere NEAR 10 GB of VRAM for "native 4K" (reality: dynamic res).

Key points in your post, and neither is a guarantee. What'll happen is that dynamic resolution will force an Nvidia GPU into lower detail levels sooner and faster than games on a similarly performing AMD GPU. Consoles DO have access to 10+ GB of VRAM, though. They could have 11-13 GB for graphics, potentially.
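
For what it's worth, here's a rough back-of-the-envelope sketch of where numbers in that range could come from. The 16 GB total, the roughly 2.5 GB OS reservation and the 10 GB "GPU-optimal" pool are the publicly quoted Series X figures; the CPU-side share per title is pure guesswork on my part:

```python
# Rough console memory budget sketch (Series X figures as publicly quoted;
# the CPU-side share is a guess and varies per title).
total_ram_gb   = 16.0   # unified GDDR6
os_reserved_gb = 2.5    # commonly quoted system reservation
game_budget_gb = total_ram_gb - os_reserved_gb   # ~13.5 GB left for the game
fast_pool_gb   = 10.0   # Series X: 10 GB of the 16 sits on the wider/faster bus

# How much of the game budget ends up as "VRAM" depends entirely on the title:
for cpu_side_gb in (1.0, 3.0, 5.0):   # hypothetical CPU-side data (sim, audio, code...)
    graphics_gb = game_budget_gb - cpu_side_gb
    print(f"CPU-side ~{cpu_side_gb:.0f} GB -> ~{graphics_gb:.1f} GB left for graphics")
# Prints ~12.5, ~10.5 and ~8.5 GB, which is roughly the spread between the
# "8 GB tops" and "11-13 GB" estimates in this thread: it all hinges on assumptions.
```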

I agree with you and most others when it comes to how we used to approach VRAM. But what has been is not what tomorrow looks like. We're looking at higher bandwidth usage and a preference for fast access to storage, ESPECIALLY because there is much more going over that bus now that allocation is even more dynamic than it used to be. Already we notice how interconnects matter with regard to latency, for example. Frame times are already better in numerous games on an RDNA2 card; look at TPU reviews. It's not just the FPS: frame delivery also benefits from not swapping and re-allocating. And that's right away, post-release. It won't be getting worse on the AMD side of things, only better. Ampere can similarly still improve... but there is much less wiggle room.

Something's gonna give. 10 GB means a heavier load on swaps, which results in one of two things: lower detail levels or lower performance. The only real basis anyone has for saying "10 GB is enough" is that you believe Nvidia on its green eyes that it will be. But Nvidia is not leading the game industry at the start of a new console gen. We all know this; if you look at the past, every new console gen was a major influence for a new performance level in games. Nvidia fed the last console release with a very solid GTX 970 (and lo and behold... it also had a so-so memory system!), but even so, you don't want to run a 4 GB GPU for the last crop of PS4 games, do you? You want 6 or 8 at least.

So if you're going to look back, don't look with rose-tinted glasses; be honest and apply the principle everywhere. Games DO exceed the last console gen's launch GPU right now, they have been doing so for several years, and it's not even exclusive to the highest resolution. The logical conclusion here is that the 3080's 10 GB will therefore be obsolete long before this console gen is over. And then, when you're being honest with yourself... consider whether that is acceptable or not. The trend, however, is clear: Nvidia has been cutting back on VRAM per performance tier while steadily increasing the price points, the competitor is not, and the competitor is defining the general direction of game/port development. Just put two and two together.

It's all crystal-ball guesswork... but this is my educated guess.
 

Who keeps a GPU for 8 years though? An entire console generation. Most who do that keep playing the same games, like WoW etc.

Most PC gamers who play new AAA games will be upgrading AT LEAST twice in a console generation; for me, it's more like 4 times. Every 2 years is what I do :D

A high amount of VRAM will not save you, because the GPU is not getting any faster, so you will still be looking at low fps in the end, forcing you to lower image quality, and the VRAM requirement drops as a result, making the extra VRAM pointless again.

That's why you don't go all-out on VRAM if the GPU is not absolute high-end to begin with; it's better to upgrade more often than to try to future-proof.

Never be more than 2 generations behind if you want proper driver support from Nvidia/AMD, or even from game devs (who mostly test with the newest and last gen).

Nvidia and AMD will focus on the newest arch first, then "last gen"; older archs *might* get support, might not. You will see wonky performance and issues; this is what people with older GPUs often experience in new games.

Take the 390X for example: it had 8 GB, but today it can barely do 1080p maxed. The GPU is way too dated; the VRAM is fine, yet it still does not save performance. Then look at the 3070, also with 8 GB, which performs like a 2080 Ti even at 4K: https://www.techpowerup.com/review/msi-geforce-rtx-3090-suprim-x/33.html

A friend of mine bought a 380X solely because of the VRAM. He thought he could use the card for 5+ years, but he has experienced flicker and weird glitches in tons of new games over the last few years, plus bad performance in most games released after 2018, very bad in some of them (unplayable), with shadow bugs and even crashing (yet the card can loop 3DMark for hours, meaning it's the games/drivers).

Meanwhile the Fury X released with 4 GB and AMD claimed that was more than enough, yet it aged like milk because of the 4 GB. It can still do 1080p "fine", though; the problem is that it was a 1440p-4K card. The 980 Ti still does 1440p DECENTLY in most games using medium settings, while the Fury X can barely do low.

6 GB in 2020 is still decent for 1080p, 8 GB for 1440p and 10 GB for 4K, and you should be fine for a few years; if not, lower some settings and enjoy anyway. Who cares if you play a game at 95% IQ instead of 100%?
 
Sorry if I'm pissed with Intel and Nvidia. A week ago I bought the Z490I Unify board, and yesterday I found out 500-series chipset motherboards are releasing in 2021, and I haven't even assembled my rig yet... The best spec is only today's talk; tomorrow is different.
And your Z490 board will suddenly work less well when the Z590 comes out? Stop falling for marketing. Your board will perform great for years!
 
LOL, YES.

Intel, AMD and Nvidia are all doing YEARLY updates, some better than others, but NEW STUFF = MORE SALES, which is the point.

ALL TECH BUSINESSES do this now.
 
And your Z490 board will suddenly work less well when the Z590 comes out? Stop falling for marketing. Your board will perform great for years!
Intel has a pretty established schedule of two chipset/motherboard generations per socket. With the 400 series being the first chipset series for LGA1200, I'm pretty sure the 500-series chipsets and whatever CPUs come next are all going to be compatible. Rumors and news bits so far say exactly that. The only question mark might be PCIe 4.0 support (and based on early information and rumors, most Z490 boards should be compatible, given that the new CPUs come with support).
 
You are both missing the point. Yearly updates and releases don't make a product perform less well. It will still do whatever it did the day before a new product came out. And in the case of CPUs/motherboards, they stay relevant for several years.
 
Demands are rising, and when you buy a high-end product and see that 1 year later you now have a mid-range product, you will always feel bad.

My 8700K at 5.2 GHz still smashes any Ryzen chip in gaming and emulation, which is what this home rig is used for.

My 3080 needs to last till 2022, and then I will be on the 4000 series and Hopper.

Never buy the refreshed series; always jump on the new arch first. You are then assured at least 2 years of PRIME FOCUS.
 
Demands are rising, and when you buy a high-end product and see that 1 year later you now have a mid-range product, you will always feel bad.

My 8700K at 5.2 GHz still smashes any Ryzen chip in gaming and emulation, which is what this home rig is used for.
You seem to contradict yourself with your two paragraphs.

Besides, what one “feels” has nothing to do with performance numbers being the same today as they were the day before.
 
Simple answer.

For people who change hardware every two to three years: YES.
But for people like me who stick with hardware for years: NOPE.
 