
Is 8GB VRAM the minimum target for 2023 gaming?

Is 8GB VRAM the minimum entry for gaming in 2023 and onwards?

  • Yes: 69 votes (56.6%)
  • No: 53 votes (43.4%)
  • Total voters: 122
  • Poll closed.
They target console specs, and make it work on pc afterwards.
Indeed, and consoles don't have much more than that available for textures targeting 1440p to 4k anyway. I'd like to see more effort made with ports.

If they're going to make a pc release, they should put effort into said release to make the experience smooth. Bare minimum with the $$ they charge.
 
Indeed, and consoles don't have much more than that available for textures targeting 1440p to 4k anyway. I'd like to see more effort made with ports.

Except the "new" consoles do. And they have several tricks up their sleeve allowing them to stream assets at a rate that isn't feasable on pc (atm) meaning everything has to be ready in vram and ram.
 
Except the "new" consoles do. And they have several tricks up their sleeve allowing them to stream assets at a rate that isn't feasable on pc (atm) meaning everything has to be ready in vram and ram.
Pc has those tricks too, devs don't seem to want to code for it. And what's available for textures isn't drastically more than 8gb. Again, at the prices charged for these pc releases, for a game where all the content is already developed, they should be putting more effort into optimisation, inclusion of direct storage etc.
 
Pc has those tricks too, devs don't seem to want to code for it. And what's available for textures isn't drastically more than 8gb. Again, at the prices charged for these pc releases, for a game where all the content is already developed, they should be putting more effort into optimisation, inclusion of direct storage etc.

PC doesn't have a dedicated decompression chip like the PS5 does.

And the issue is: how many on PC actually have DirectStorage? If you go by Steam numbers, not many (as it requires a fast NVMe drive). So obviously it makes no sense for the devs to spend a lot of time coding for a small handful of people, while on console it's a universal solution.
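As a rough illustration of that point (round throughput figures assumed by me, not benchmarks), here's how long it would take to stream an 8 GB working set of assets over different I/O paths; the gap is why so much has to sit resident in VRAM and RAM on a slower setup:

```python
# Rough, illustrative throughput figures (assumed, not benchmarked): how long
# would it take to (re)stream an 8 GB working set of assets over each path?
working_set_gb = 8.0

paths_gbps = {
    "SATA SSD": 0.5,
    "PCIe 3.0 NVMe": 3.0,
    "PCIe 4.0 NVMe + DirectStorage": 6.0,
    "PS5 I/O with hardware decompression": 8.0,
}

for name, gbps in paths_gbps.items():
    print(f"{name:36s} ~{working_set_gb / gbps:4.1f} s to move 8 GB")
```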
 
With 80% of pc gamers having 8gb or less, they'd better target those builds to have a playable, stutter free, reasonably good looking experience.

The rate things are evolving, anyone with an 8GB or less card should be willing to sacrifice settings to get a great experience, which is what everyone but the 1% with top end hardware does all the time anyway, one way or another. I also reject the notion that texture quality is the be all and end all of IQ. The number of games I've played lately where textures are next to the last thing I notice makes it, at least for me, not much of an issue.
Funny coming from a guy with a 10GB GPU and a signature saying

"my goal is speed, full ultra, and extreme gaming"

You're in denial. But since that's not an issue to you, great! "This is fine" memes come to mind though, sorry.

Pc has those tricks too, devs don't seem to want to code for it. And what's available for textures isn't drastically more than 8gb. Again, at the prices charged for these pc releases, for a game where all the content is already developed, they should be putting more effort into optimisation, inclusion of direct storage etc.
Of course they don't and they never will, nobody's got time for that nonsense, you're left in the caring hands of Nvidia's driver TLC. Lovely, innit. The only saving grace here is that Nvidia HAD a good track record on that TLC. Too bad they've been showing themselves to be greedy fucks since Turing, and they've also stopped doing that properly, going by the DLSS3 support as a prime example.

Another big fat 'too bad' is that new APIs also push the bonus work to devs rather than to Nvidia or AMD, so that's going to be good fun going forward.
 
So obviously it makes no sense for the devs to spend a lot of time coding for a small handful of people
If the devs or company or whoever decide to release the game to PC, you bet I expect work put into a good PC release, or they won't get my money, or a lot of other people's. They may be a small handful compared to consoles, but if you do a PC release, PC gamers have expectations.

These recent garbage releases are more a reflection on craptacular ports, fanboyism and tech media than they are on Nvidia, while I also agree Nvidia has been light on VRAM. It's pretty easy to see both sides.
Funny coming from a guy with a 10GB GPU and a signature saying

"my goal is speed, full ultra, and extreme gaming"

You're in denial. But since that's not an issue to you, great! "This is fine" memes come to mind though, sorry.
It's a quote from a user on this forum from... before your time, but please, read a random quote in my sig and the card I own and draw whatever conclusion you like, you always do anyway, irrespective of my comments and shared experience with how that's treating me. A lot of memes come to mind with you too. Got a point? Or rather, drop it and stay on topic; I would.
----------------------------------------------------
I absolutely expect that, for the next 2-3 years easy, PC game releases will cater to 8GB VRAM (and less) gamers; sue me, quote me, you do you.
 
If the devs or company or whoever decide to release the game to PC, you bet I expect work put into a good PC release, or they won't get my money, or a lot of other people's.

These recent garbage releases are more a reflection on craptacular ports, fanboyism and tech media than they are on Nvidia, while I also agree Nvidia has been light on VRAM. It's pretty easy to see both sides.

It's like everyone completely forgets the entire history of PC gaming every time a new game comes out that doesn't run well on their PC, and cries about bad optimization.

Fact of the matter is - as time progresses, games become more demanding in every regard. 8gb vram was great 7 years ago - today it isn't. That's all there is to it.
 
That's all there is to it.
Hard disagree when doing a PC release of the game.

I suppose then that's where we disagree, and I don't know what you want from me, I don't agree with you, and you don't agree with me, doesn't make me right, doesn't make you wrong, shall we stop quoting each other?
 
It's like everyone completely forgets the entire history of PC gaming every time a new game comes out that doesn't run well on their PC, and cries about bad optimization.

Fact of the matter is - as time progresses, games become more demanding in every regard. 8gb vram was great 7 years ago - today it isn't. That's all there is to it.

So...I disagree entirely. The premise that you start with, and end with, are in opposite veins.

Yes, companies add new bells and whistles so everything always takes more power to run...or everyone running a radeon 5770 at 1080p would still be happily doing so in 2023. My problem is that most issues with games running poorly are entirely about bad coding, and that's proven. Don't care to believe what another forum user states? Cool. Let me provide some examples.
DX10 - Crysis. Good lord it was beautiful, but even today the DX10 path makes it run like a two-legged dog.
Hogwarts Legacy - Already patched to the moon to deal with launch issues
Ubisoft....enough said. I'm going with whatever their latest release is...but it's been years since the open worlds they make aren't buggier than a South American jungle.

Why do I say any and all of this? Well, a 470 can happily chew into Doom:Eternal. After a huge amount of community patches to fix bugs, it can chew into Fallout 4 with a decent amount of mods. I don't think I need to offer more, but I'll highlight that in 2023 it still has some issues running Crysis...despite being a DX11 card. Moreover, there are people that play games which run DX11 and DX12 today, that are doing so on hardware older than Bulldozer and Nehalem...because companies like Digital Extremes have to announce that their game will require instruction sets in 2023 which processors older than that simply do not have.
My goal here is to highlight that coding is hard. Coding badly is easy. Optimization takes money. Most companies that want a port want to spend as little as possible on a thing they already developed, get that bump of money, and then don't care about it. While I can agree that requirements keep going up, I cannot agree that they're uniformly because of improvements. Shoddy code can often be excused by such a sentiment...and that's just counterproductive. I don't want to pay an extra $400 for a low-mid tier card (considering current pricing that may even be a low estimate) every 4-5 years because a few $70 AAA titles don't run well until patch 10 because a company pushed it out the door with minimal QA to hit a financial deadline.
 
So...I disagree entirely. The premise that you start with, and end with, are in opposite veins.

Yes, companies add new bells and whistles so everything always takes more power to run...or everyone running a radeon 5770 at 1080p would still be happily doing so in 2023. My problem is that most issues with games running poorly are entirely about bad coding, and that's proven. Don't care to believe what another forum user states? Cool. Let me provide some examples.
DX10 - Crysis. Good lord it was beautiful, but even today the DX10 path makes it run like a two-legged dog.
Hogwarts Legacy - Already patched to the moon to deal with launch issues
Ubisoft....enough said. I'm going with whatever their latest release is...but it's been years since the open worlds they make aren't buggier than a South American jungle.

Why do I say any and all of this? Well, a 470 can happily chew into Doom:Eternal. After a huge amount of community patches to fix bugs, it can chew into Fallout 4 with a decent amount of mods. I don't think I need to offer more, but I'll highlight that in 2023 it still has some issues running Crysis...despite being a DX11 card. Moreover, there are people that play games which run DX11 and DX12 today, that are doing so on hardware older than Bulldozer and Nehalem...because companies like Digital Extremes have to announce that their game will require instruction sets in 2023 which processors older than that simply do not have.
My goal here is to highlight that coding is hard. Coding badly is easy. Optimization takes money. Most companies that want a port want to spend as little as possible on a thing they already developed, get that bump of money, and then don't care about it. While I can agree that requirements keep going up, I cannot agree that they're uniformly because of improvements. Shoddy code can often be excused by such a sentiment...and that's just counterproductive. I don't want to pay an extra $400 for a low-mid tier card (considering current pricing that may even be a low estimate) every 4-5 years because a few $70 AAA titles don't run well until patch 10 because a company pushed it out the door with minimal QA to hit a financial deadline.

No, it is not "in different veins" - hardware demands in games increase as time passes, thus what was great hardware wise 7 years ago is barely cutting it today. I really don't see what's so hard to understand about it.

And while there are always rotten apples when it comes to coding (always have been, always will be), a lot of people are making it out as if the increase in VRAM demands in a lot of recent AAA titles is purely down to poor optimization, rather than coming to the realization that the games are simply being developed for the new consoles first and foremost, and that the hardware demands reflect that.

But sure, be my guest and continue to just blame it on coding, while giving nvidia a free pass on skimping on vram with their planned obsolescence strategy.
 
oh, allocation ... well, I am not one of the less technical people ... but I do tend to take things at face value ... so, what calling it usage did for me was misdirect me, instead of just calling it what it is ...

Corrected the post, although the argument of Skyrim maxing out VRAM still holds even if it's "anecdotal": being allocated 11.2GB ... instead of the vanilla 1.8GB :p The allocation still changes with resolution, textures and effects (RT), but it doesn't use everything available, unless there isn't enough VRAM.
That's just your Skyrim with your mods. The only thing it proves is that you can do anything with mods.

Funny thing is, the 8GB I have now has enabled ultra textures regardless and still does, even if I cut down on some super performance-hogging post effects. Even in a game like TW Wh3, I get sub-50 FPS, but I can still run maximum quality textures, or near it, while I drop down some other settings like SSR to keep the thing nicely playable. It still looks great for the performance it has then. On a 7 y/o card, at a native 3440x1440. Everything I play today still works stutter free. Like butter smooth. Even at 40 FPS, perfectly playable. So yes, the GPU is pegged to 100%, but the experience is rock solid and detail close to top end.

VRAM enables you to actually maintain high fidelity / detail on the basic assets and high LOD settings on geometry, while not costing a lot out of your GPU core. In that sense it's a LOT more than just textures. It's the base detail of the whole picture, the thing every other post effect builds up on. It's also draw distance, which is huge for immersion in open world games.

You got this completely backwards, as do all those others who defend 'I'll dial down some textures' because they lack VRAM and say it's fine. What KILLS performance on any GPU is realtime processing and post processing: it adds latency to every frame regardless of the texture detail you've got. Whereas high texture detail adds no latency, or barely any, because it's already loaded into VRAM way before it's used.

People should stop these idiotic coping mechanisms and see things for what they are. History is full of these examples and this has never been different. If you lack either core/thread count or RAM on your CPU/board, or if you lack VRAM on your GPU, you're cutting its life expectancy short. It's that simple. Bad balance is bad balance and it means you're not getting the best out of your hardware.

For my next GPU it's going to be very simple. 3x the perf of the 1080? I want about 3x the VRAM. That 24GB 7900XTX is looking mighty fine, covering that principle perfectly. The 7900XT and 6800XT are also along that scale with 20~16GB. But 12GB on a 4070 Ti is ridiculous and I'm staying far, far away. Given the cost of today's GPUs, I think 5-6 years of life expectancy at top dog gaming is what we should be aiming for, most definitely. That keeps gaming affordable. If I add up the cost of the 1080 minus its resale at 150~200 EUR, and then buying a 7900XTX, I'd end up at about 1.1K EUR for a total of 12 years of gaming. That's less than 100 EUR a year. Having the perfect VRAM balance makes that happen.


You will know you're missing hardware resources when the game stutters, that applies to a lack of any kind of memory - or any kind of hiccups in feeding that memory, which means lack of CPU cycles for that specific frame, or not having the data available in time. All of this points at system RAM and VRAM and its bandwidth. All of my rigs for gaming had stuttery experiences until I stopped buying midrange shite GPUs with crappy Nvidia memory subsystems.

Back when GPUs were dabbling between much lower memory capacities (256~512MB > 1GB/2GB), the urge to upgrade was apparent much faster. Those were the days where you'd turn 180 degrees in a shooter and you could easily be served massive stutter as textures had to be swapped - and if you didn't stutter at it, you'd have pop-in of assets and textures all the time... Remember Far Cry 3... VRAM in GPUs actually started to become great around the early Maxwell days, when we realized 3GB was a massive boon over 2; and 4GB midrange followed up soon after - heck, even the 970 with its crippled 4GB was in a miles better place than anything Kepler based. 6GB lasted super long, you could say it still does on low end 1060s today, and 8GB similarly was built to last when it was new. To compare to today and still be looking at 8GB on much more powerful GPUs saying it's all fine... is a strange thing to see.
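For a sense of scale on the texture side (standard block-compression math, my own arithmetic rather than figures from any particular game), here's roughly what a single 4K texture costs in VRAM, uncompressed versus compressed:

```python
# Back-of-the-envelope VRAM cost of a single 4096x4096 texture, with a full
# mip chain (which adds roughly one third on top of the base level).
width = height = 4096
texels = width * height
mip_factor = 4 / 3  # base level + full mip chain

formats = {
    "RGBA8 (uncompressed, 4 B/texel)": 4,
    "BC7 (block compressed, 1 B/texel)": 1,
}

for name, bytes_per_texel in formats.items():
    mib = texels * bytes_per_texel * mip_factor / (1024 ** 2)
    print(f"{name:34s} ~{mib:5.0f} MiB per texture")
```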
Not to mention the 4 GB 970 and 2 GB 960 (I know 4 GB versions existed, but they were rare) were followed by the 6 GB 1060 and 4 GB 1050 Ti. That's the same performance with more VRAM on a lower tier card!

The point he's making and stressing is that even with sufficient VRAM, you're already using system RAM to swap assets. At some point, something's gonna give, and the thing that gives is your frametimes, which means stutter.

The same relation applies between bandwidth and VRAM. Nvidia's using cache to alleviate bandwidth constraints, but that combo along with low VRAM means they're taxing data swapping heavily, and depend heavily on developer TLC to keep the whole affair smooth. And failing at it, as we can see in the results already. Developers haven't got time for that, just the same way as mGPU on DX12 died a swift death when it got pushed to 'the developer'. That push basically just ended SLI fingers and Crossfire within the space of two GPU generations. Economic realities don't lie. Time is money and devs will always prefer to focus their time on their game, not on the bullshit surrounding it.

It's not rocket science, honestly, and it never has been, and this silly story of 'muh usage is not muh allocation' needs to stahp. Allocation IS usage, because it means you get to swap less and therefore you're mitigating frame time variation, keeping that variability focused on the GPU core's capabilities, not its memory subsystems. I could dig up some TPU reviews to show actual examples, including confirmation from W1zz himself. We had at least one in the Ampere days already on 10GB. Those exceptions make the rule here and time will prove it.
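A toy model of that swapping argument (frame and fetch times are entirely hypothetical, chosen only to show the shape of the problem): once the working set stops fitting and the odd frame has to pull an asset over the bus, the average barely moves but the worst-case frametime, i.e. the stutter, explodes:

```python
import random

random.seed(1)

# Hypothetical numbers, just to model the argument: a frame normally takes
# ~12 ms of GPU work; if an asset it needs is not resident in VRAM, pulling
# it in mid-frame adds a large one-off penalty to that frame.
BASE_FRAME_MS = 12.0
FETCH_PENALTY_MS = 25.0  # assumed cost of fetching from system RAM / storage

def frame_times(frames: int, miss_rate: float) -> list[float]:
    """miss_rate is the chance a given frame needs a non-resident asset."""
    return [
        BASE_FRAME_MS + (FETCH_PENALTY_MS if random.random() < miss_rate else 0.0)
        for _ in range(frames)
    ]

for label, miss_rate in [
    ("fits in VRAM (no swapping)", 0.0),
    ("VRAM-starved (5% of frames swap)", 0.05),
]:
    times = frame_times(600, miss_rate)
    avg = sum(times) / len(times)
    print(f"{label:33s} avg {avg:5.1f} ms, worst {max(times):5.1f} ms")
```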
I don't disagree, but my point is that there's a difference between the game swapping data while still delivering a smooth frame rate, and the game swapping data leading to stutters. Your gameplay experience matters, not your VRAM usage, or allocation, or whatever you want to call it. You're playing a game, after all, not benchmarking.

The only thing I disagree with is that usage is allocation. It is not. Storing assets on screen is one thing, storing assets that may be needed soon is another.

You could always lower the settings...

All these youtubers test on "ultra" or "maximum" or whatever instead of "high". Usually the maximum settings are completely unoptimized because they're the "base" for the other settings; that's especially the case with textures, but it applies to plenty of other settings too. They are just left as an option for future PCs, or for people with too much money who buy the literal top end.

Tests on "high" would be way more sensible and show actual requirements and allow proper comparison of cards.
I don't think most buyers of mid-high-end Nvidia cards ever think about lowering their quality settings. People on this forum, sure, but in the outside world, where people buy pre-built PCs, not really.

I just got a question from a colleague yesterday on how easy or difficult it would be to replace his 3080 with a 4080 because he's starting to get stutters even after reinstalling Windows. He didn't ask what else he could do - he straight away thought about buying another GPU. I think this is closer to the "typical gamer" mindset.

Indeed, and consoles don't have much more than that available for textures targeting 1440p to 4k anyway. I'd like to see more effort made with ports.

If they're going to make a pc release, they should put effort into said release to make the experience smooth. Bare minimum with the $$ they charge.
You know they won't, because the less effort you put into something and the more you charge for the results, the bigger profit you make. Titles like TLoU can pull huge profits with their name alone. You don't need to spend more than the absolute minimum amount of time on their development. If it runs, good enough, let's push it out and count some sweet cash!
 
8GB = 1080p
12GB = 1440p
16GB = 4K
Broadly speaking, obviously. It depends on the games.
I honestly wouldn't be so hung up on this VRAM starvation issue if it weren't for the pricing. AMD sells a 12GB 6700 XT for 400€? Jolly good. Nvidia sells a 12GB 4070 Ti for 1000€? Eat a dick.
It's a 4K capable card with the VRAM for 1440p. And what we've been seeing over the past few weeks is only the first games of the PS5 era. I wouldn't be surprised at all if, even at 1440p, by 2024 we start having people needing to lower textures or raytracing because the VRAM just can't bear it.

1000 dollars for a 1440p RT capable card "if you lower the textures". FU too, Nvidia.
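To be fair to that rule of thumb, resolution by itself only accounts for a modest slice of VRAM. Here's a deliberately simplified sketch (assuming a deferred-style setup with five 4-byte-per-pixel buffers) of what the render targets alone cost at each resolution; the rest of the 8/12/16 GB gap comes from the bigger texture and streaming pools games ship at each tier:

```python
# Deliberately simplified: VRAM taken by render targets alone, assuming a
# deferred-style setup of five 4-byte-per-pixel buffers (colour, normals,
# material, depth, one post-processing buffer).
RESOLUTIONS = {"1080p": (1920, 1080), "1440p": (2560, 1440), "4K": (3840, 2160)}
NUM_BUFFERS = 5
BYTES_PER_PIXEL = 4

for name, (w, h) in RESOLUTIONS.items():
    mib = w * h * BYTES_PER_PIXEL * NUM_BUFFERS / (1024 ** 2)
    print(f"{name:6s} ~{mib:6.0f} MiB in render targets alone")
```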
 
I don't think most buyers of mid-high-end Nvidia cards ever think about lowering their quality settings. People on this forum, sure, but in the outside world, where people buy pre-built PCs, not really.

I just got a question from a colleague yesterday on how easy or difficult it would be to replace his 3080 with a 4080 because he's starting to get stutters even after reinstalling Windows. He didn't ask what else he could do - he straight away thought about buying another GPU. I think this is closer to the "typical gamer" mindset.
"Typical gamer" doesn't have thousands of [some currency] to waste every year or two on high end GPU. Maybe your friends are just rich and therefore do not care about optimizing anything at all, but most "gamers" buy mid-end GPUs and use them for a while. Of my friends most people play games, but I know only two who purchase high end (and they still keep them for a few years), I know nobody else who owns any high end card. People get *60s or *50Tis or something around that and play on them until they can't run stuff on low smoothly.

If you care about money, that's a reasonable thing to do. Sure, if you are in your 40s, own an apartment and have a stable job, then you're probably going to get some good stuff, but most gamers, especially young ones, do not have an abundance of money to throw around for such builds and replace them every year or two.

No need to believe me, just look at Steam hardware statistics.
 
I sort of agree. Hogwarts and The Last of Us are prime examples of not bad, but very unoptimized game engines. UE4 is partly to blame, but after a few patches performance has increased a lot, so there, bad optimization was the issue.

Talking about UE4, Borderlands 3 stuttered (and still does, but a lot less) and it had absolutely nothing to do with the amount of VRAM you had, or even anything on your PC. If you threw a lot of horsepower at it, that kind of made it less bad, so it seemed like anyone playing with hardware on the edge had a problem.
 
8GB = 1080p
12GB = 1440p
16GB = 4K
Broadly speaking, obviously. It depends on the games.
I honestly wouldn't be so hung up on this VRAM starvation issue if it weren't for the pricing. AMD sells a 12GB 6700 XT for 400€? Jolly good. Nvidia sells a 12GB 4070 Ti for 1000€? Eat a dick.
It's a 4K capable card with the VRAM for 1440p. And what we've been seeing over the past few weeks is only the first games of the PS5 era. I wouldn't be surprised at all if, even at 1440p, by 2024 we start having people needing to lower textures or raytracing because the VRAM just can't bear it.

1000 dollars for a 1440p RT capable card "if you lower the textures". FU too, Nvidia.
I don't want to play the devil Jensen's advocate, but within every generation the budget segment is 1080p focused, the mid-range on 1440p, and the high-end is targeting 4K.
The fact that overall performance goes up from gen to gen does not change that.
The 70-series are 1440p cards. They just needed the 12 gigs already at the previous generation, because of RT, but as HUB pointed out - Nvidia planned obsolescence in effect ^^
 
"Typical gamer" doesn't have thousands of [some currency] to waste every year or two on high end GPU. Maybe your friends are just rich and therefore do not care about optimizing anything at all, but most "gamers" buy mid-end GPUs and use them for a while. Of my friends most people play games, but I know only two who purchase high end (and they still keep them for a few years), I know nobody else who owns any high end card. People get *60s or *50Tis or something around that and play on them until they can't run stuff on low smoothly.

If you care about money, that's a reasonable thing to do. Sure, if you are in your 40s, own an apartment and have a stable job, then you're probably going to get some good stuff, but most gamers, especially young ones, do not have an abundance of money to throw around for such builds and replace them every year or two.

No need to believe me, just look at Steam hardware statistics.
The only thing the Steam survey shows is that people will always buy Nvidia no matter what. No AMD in the top 10 cards with offerings like the 6600, 6650 XT or the 6700 XT... sad.

As for my colleague, I think a lot depends on where you're from. What you said is very true of developing countries where you need to be careful with your money even in your daily chores (I know, I came from one of these countries myself). Here in the UK, though, if you're single and have a full-time job, nothing stops you from upgrading your PC whenever you want to. You don't even need to be rich.
 
Talking about UE4, Borderlands 3 stuttered (and still does, but a lot less) and it had absolutely nothing to do with the amount of VRAM you had, or even anything on your PC. If you threw a lot of horsepower at it, that kind of made it less bad, so it seemed like anyone playing with hardware on the edge had a problem.
Yeah, UE4 on PC is often bad in general; shader loading during gameplay is a terrible deal. As for Hogwarts and TLoU, they use insane amounts of VRAM for no good reason, but patches have luckily improved it.
 
@AusWolf
I never said the opposite... Just that allocated memory is dependent on resolution, textures and effects, be it from mods or the base game.
;)
 
@AusWolf
I never said the opposite... Just that allocated memory is dependent on resolution, textures and effects, be it from mods or the base game.
;)
That's very true... and it brings me back to my first point in this thread: there's no such thing as "minimum amount of VRAM in 2023". Your minimum is whatever you want it to be with regards to what you're playing at what settings. :)
 
The 70-series are 1440p cards. They just needed the 12 gigs already at the previous generation, because of RT, but as HUB pointed out - Nvidia planned obsolescence in effect ^^
At the price at which Jensen is selling them, bullshit.
If I'm really sweet on the 6700 XT, it's because I see it as a perfect entry point to decent 1440p.
I'm just as hard on the 4070 Ti because it's unfathomably overpriced for 1440p and has the power to do mediocre 4K. Either price it at 2/3rds of its current price tops, or give it 16GB and make it a 4K card.

The only thing the Steam survey shows is that people will always buy Nvidia no matter what. No AMD in the top 10 cards with offerings like the 6600, 6650 XT or the 6700 XT... sad.
I'm starting to have doubts about that. Even at the worst of the worst, AMD was 10% of the market (Nvidia 84%). I don't think that Steam should be relied on that much.
TechEpiphany has been posting a steady climb of AMD sales in the past weeks, to the point that AMD is now over 60% of sales and 50% of revenue.
I am not confident in just one source being used (one German reseller), but the trend is unmistakable. People are buying AMD more and more, and Steam isn't showing that.
 
At the price at which Jensen is selling them, bullshit.
If I'm really sweet on the 6700 XT, it's because I see it as a perfect entry point to decent 1440p.
I'm just as hard on the 4070 Ti because it's unfathomably overpriced for 1440p and has the power to do mediocre 4K. Either price it at 2/3rds of its current price tops, or give it 16GB and make it a 4K card.
Ofc the price is bullshit bro. That doesn't change the purpose of the designs ^^
 
Of course it does. The design targets a certain performance class and a certain production cost. That translates into a selling price.
What in the hell are you talking about.
 
Of course it does. The design targets a certain performance class and a certain production cost. That translates into a selling price.
What in the hell are you talking about.
ALL segments have had their prices raised for no other reason than greed. What is it that you don't get about that? :D
 
What I don't get is your argument, because at this point I don't think you have any.
 
Are you the same age as the 4070 VRAM buffer in Gs?
If you move all segments up a notch, so that the entry models cost as much as previous mid-range cards did, and current mid-range cards cost as much as previous high-end cards did, etc., that doesn't change the hardware itself.
You still have the same cards, you just pay way more for each.
 