
Is 8 GB VRAM the minimum target for 2023 gaming?

Is 8 GB VRAM the minimum entry for gaming in 2023 and onwards?

  • Yes: 69 votes (56.6%)
  • No: 53 votes (43.4%)
  • Total voters: 122
  • Poll closed.
Yes, exactly!

The RTX 3070 is kind of in limbo when it comes to ray tracing performance. It isn't terrible but it isn't good enough to comfortably handle all games on the market at a playable framerate. It can handle some, but not all, which makes it a bad (or at the very least sub-optimal) choice for people who are serious about ray tracing.

The RTX 3050 also supports ray tracing. Does anyone expect that card to provide solid ray tracing performance? Should that card be judged based on its ray tracing performance? I mean, it supports it, so if it can't handle a modern AAA game with ray tracing enabled, then it has to be trash, right?
Just because a card is marketed on a certain feature doesn't mean it will perform that feature well. That's on NVIDIA's marketing, not on the card itself.

Let's look at motherboards for another example. Take your average H510/H610 or A520/A620 board. Does anyone realistically expect those boards to do perfectly well with an i9 or a Ryzen 9 processor, respectively? Marketing departments can try to convince people as much as they want, but no PC builder worth their salt would pair such a motherboard with such a processor. Just because a high-tier processor is listed as supported does not make it a good idea to use it in a cheap-ass board. Of course, PC builders/enthusiasts generally don't complain about this because they understand reality.

It's called market segmentation: differently priced products for different needs. The RTX 3070 happens to simply not be particularly well suited for ray tracing (regardless of its VRAM capacity). Steve knows this but chose to make the card look bad to "prove" his point.
I would accept that argument if everybody on the planet was a well-versed PC builder. But the truth is far from it. Most people who buy 3070s actually buy them as part of pre-built systems because they know nothing about PCs; they just want to play games. If they're told to expect superior RT performance and they're not getting it, I call it lying to the customer. There's no two ways about that.
 
Minimum entry? Sure, of course.

8GB is fine for most games available at present. Going forward there will be more and more VRAM hungry games, but there should still be the ability to turn down settings to achieve a playable experience. Compromise is inherent for entry level hardware.
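
As a toy illustration of that "turn down settings" idea, here's a minimal sketch of a hypothetical preset picker keyed off available VRAM; the thresholds and preset names are made-up assumptions, not taken from any real engine:

```python
# Hypothetical illustration only: map a VRAM budget to a texture preset.
# Thresholds and preset names are assumptions for the sake of the example.

def pick_texture_preset(vram_gb: float) -> str:
    """Return a texture-quality preset for a given VRAM budget in GB."""
    if vram_gb >= 12:
        return "ultra"
    if vram_gb >= 10:
        return "high"
    if vram_gb >= 8:
        return "medium"  # 8 GB: still playable, but compromises are expected
    return "low"

if __name__ == "__main__":
    for vram in (6, 8, 10, 12, 16):
        print(f"{vram} GB -> {pick_texture_preset(vram)} textures")
```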
 
Not true. My 3080 10 GB GPU will rock and roll at 2K above 90 FPS (DLSS off) in FS 2020, MW2, Control, Cyberpunk and Dead Space. No issues except bad code or ports.
Tell me, how does it feel when a $400 console has more VRAM than your $800 GPU?
 
I would accept that argument if everybody on the planet was a well-versed PC builder. But the truth is far from it. Most people who buy 3070s actually buy them as part of pre-built systems because they know nothing about PCs; they just want to play games. If they're told to expect superior RT performance and they're not getting it, I call it lying to the customer. There's no two ways about that.

I think most people on here are talking about people well versed in hardware. Most of us know we are the minority, but it doesn't change the fact that we should expect a certain amount of VRAM, given a certain amount of performance, for what they want to charge for current-gen graphics cards...


Tell me, how does it feel when a $400 console has more VRAM than your $800 GPU? Also, how does Jensen Huang's cum taste, hmm?

That went dark pretty quick, from liking a 3080 to sucking #*##. Not sure what's more disturbing: you thinking someone liking their GPU equals sucking off a CEO, or you comparing two shit consoles with weaker-than-6700XT GPUs and mobile Zen 2 cores to a 3080 that is miles better and will run conslow settings for ages at 1440p with likely double the framerate... My PS5 is great, but I would pick a 3080 over it any day of the week... Maybe Lisa Su is sitting on your face giving you pink eye, and that's why you can't see how much better a 3080 is than a console...
 
Tell me, how does it feel when a $400 console has more VRAM than your $800 GPU? Also, how does Jensen Huang's cum taste, hmm?
Which console has more VRAM? The PS5 has 16 GB GDDR6 system memory shared between the CPU and GPU. That's as if you had 16 GB incredibly fast RAM with a powerful iGPU in your PC. It's not VRAM.

Not to butt into the conversation, but let's stay civil, shall we? :ohwell:
 
Honestly, this thread sorta turned into a bash-Nvidia/defend-my-Nvidia-card thread. Don't get me wrong, Nvidia deserves to be criticized, but people defending 8GB of VRAM on a $400-500 GPU is comical at best in 2023.

You see this every time the bar moves up in PC gaming:

When 8-12 thread CPUs became much better for gaming than 4-thread ones (there are other factors that matter, like cache, ST performance and clock speed, but they all tend to be higher with more cores, especially on Intel CPUs).
When 16GB of RAM started to outpace 8GB of RAM.
When SSDs became almost a requirement over HDDs.
And now, having more than 8GB of VRAM is better than having 8GB or less.

Although you saw the same people complaining when 4GB outpaced 2GB, and then 6GB outpaced 4GB, and then 8GB outpaced 6GB; you get the picture.

I don't get it. Assuming developers properly use it, more is better; it will never not be better.


It doesn't magically make any of the stuff mentioned above unusable; it's just likely going to give you issues in certain scenarios and a lesser overall gaming experience. After all, regardless of how good a game is, if it stutters, it completely breaks the immersion for most.

Either way, for anyone who actually wants to game in 2023 and play every game, the target should be:

An 8-12 thread modern CPU with a decent amount of cache.
16GB of RAM, although 32GB isn't a bad idea.
10-12GB of VRAM.
An SSD.

For gamers trying to target 1080p 60-120 Hz at medium/high settings, an 8GB GPU is fine, but at this point in 2023, 8GB should strictly be considered entry level.

If you don't agree, awesome; Nvidia will likely have an overpriced 4060 Ti with 8GB just for you. It will be called the "I hate progress" special edition and will likely age as well as the Intel i5-7600K...
 
If you claim the testing is unrealistic because ray tracing is enabled, then you're implying one should expect a 3070 user not to use ray tracing as a result, because, you know, it's unrealistic. But then the 6800XT is considerably faster than a 3070 in raster, so going by that logic, yes, the 3070 would become effectively useless, because the one scenario where it's supposed to have an advantage is apparently too "unrealistic". Mind you, Nvidia has been hammering RT performance into everyone's head as a selling point for years.


You know what I consider crappy ports that have disappointingly become the norm as well? Games where RT causes performance to drop off a cliff into a chasm. But I'm sure you'd disagree with me, because ray tracing is amazing and people should just manage their expectations.

Games are what they are; they are now using a lot more VRAM than a couple of years ago, and they are not crappy ports because of that.
That's the funny thing: when there's enough VRAM and the RTX 3070 wins by a country mile, nVidia fanboys would be like, "See? That's why I got the card!"

But should it lose, it's suddenly down to games being 'unoptimized' (which may be true to some degree) or the test being perceived as 'unfair'. I believe Steve ran the games in a particular setting and scenario NOT to deliberately make nVidia's 8GB cards look bad, but to demonstrate that VRAM is a major factor in graphics-intensive games moving forward. If not, how was he going to demonstrate that a VRAM shortfall affects both PQ and framerate (or rather, smoothness of gameplay)?

VRAM is fast becoming a major factor in newer, more graphics-intensive games, and the fact of the matter is, nVidia has been deliberately shortchanging their customer base by putting just 8GB on many of their cards (which would be capable of better performance had they been given more VRAM); this is especially true for the RTX 3070 Ti and, to some degree, the RTX 3080 10GB. I also feel that the RTX 4070 Ti/4070 have been given just enough VRAM to possibly satisfy certain newer games for now and the immediate future, but really, they (like the RTX 3070/3070 Ti) actually deserve more VRAM. But then, that would encroach on RTX 4080 performance, because the memory interface and VRAM size would have to be adjusted, and that ain't good for business (and I understand that).

These powerful cards are kneecapped both in terms of performance and perceived 'longevity', which I see as planned early obsolescence by nVidia. Yes, you can run these games by lowering textures (like in RE4R) and in-game settings, or do without RT (as in the case of the RTX 3070 Ti). There's a modded 16GB RTX 3070, and IF the drivers can support using the full 16GB VRAM buffer, it'd most likely equal or beat the RX 6800 in VRAM-intensive games. I'd honestly love to see that card benchmarked in these newer titles...

I played DSR last night, this time with MSI AB showing both allotted VRAM and actual usage, at 3840x1080 (higher pixel count than 2560x1440, but less than 3440x1440), max in-game settings + RTAO, and it showed that >10GB was allotted, while actual VRAM usage was >9GB. I've yet to get RE4R or Hogwarts, but I'm almost certain they would chew up more VRAM than DSR.
 
That's the funny thing: when there's enough VRAM and the RTX 3070 wins by a country mile, nVidia fanboys would be like, "See? That's why I got the card!"
[...]

While I don't care what people purchased in 2020/2021, or even in 2022 if you got it cheap enough, in 2023 8GB is just not adequate anymore; it is too easy in most modern games to cripple its performance, assuming the GPU is at least as fast as a 3070... I consider the 3060 Ti to be more of an entry-level card at this point, so its 8GB is fine. Same with my 3080 Ti being strictly midrange with its 12GB...

Unfortunately, this thread has turned into an "AMD fanboys bashing Nvidia GPUs / Nvidia fanboys defending their 8GB GPU" thread. It should have been about wanting progress and not accepting 8GB on $400+ GPUs.

I get that it's way too much to ask AMD/Nvidia users to have a civil conversation about games and increasing requirements; we can't even have a civil conversation about upscaling/RT/whatever the hell else fanboys want to trigger other fanboys with. I guess it's VRAM now...

To me it doesn't matter what people buy; I actually find it more interesting why they made that choice, whether it be a 7900XTX all the way down to an RTX 3050. It makes zero difference if I think there are better alternatives; what is fascinating is when it's not the choice I would have made. I get it, some people will only buy AMD and some will only buy Nvidia, but at that point does it really matter? You just buy the best your camp offers at whatever price point you decide and roll with it, for better or for worse.

Don't get me wrong, I might take jabs at the 4070 Ti or laugh about the RT performance AMD has in Cyberpunk, but at the end of the day, if people are happy with their GPUs, it doesn't matter what camp they bought from, just whether they got their money's worth in their own minds.

Although I do hate FSR, I also hope AMD improves it to the point that DLSS is irrelevant.

I also want both companies to do well; at the same time, I won't support them just because of that if they are not offering what I need. When I say "do well" I don't care what their CEOs are making; I'm talking about them pushing technology forward, because regardless of what camp we sit in, isn't that the point, stuff getting better? We are all tech enthusiasts, after all, I would like to think.
 
My laptop has a 16 GB GPU (4090 Mobile), and I frequently see RTSS reporting 14+ GB usage by some games at 4K. I wouldn't even look at cards with less than 16 in the future, but obviously your budget, your monitor resolution and what kind of games you play mean that while 16 or higher might be something I consider a necessity, it doesn't need to be something everyone else does as well.
 
Let's take X graphics card with X amount of VRAM doing a steady 80 FPS in X game, and another graphics card with half as much VRAM, that also does a steady 80 FPS in X game. You might say that more VRAM isn't needed, and if you were talking strictly about the present moment, you'd be absolutely right. But if Y game comes out two years later, and X graphics card does a steady 50 FPS, what the other graphics card will do remains a mystery. Maybe it'll do 50 FPS as well. Maybe not. Maybe it will reach 50 FPS sometimes, but run into some stutters every now and then. There's no way to know what the future brings, but more VRAM gives you just a little bit more certainty. Let this be my bottom line. :)
 
Since the launch of Nvidia's Ampere and AMD's RDNA2, I've been kinda rolling up and down and left and right, and after trying the RTX 30 series, I finally went back to the RX 6800 XT and went down from 4K to 1440p. It's so much more satisfying and so much easier to drive.

I started out with RX 6800 Ref. --> RX 6800 XT Gaming X Trio --> RX 6800 XT Ref --> RTX 3090 Ventus --> RTX 3070 Strix --> RX 6800 XT Red Devil.

Yes, I've had 3 different RX 6800 XTs by now, and I still wish I had tried more models, but I've been happy ever since my first RX 6800 XT, and 16GB of VRAM is so nice. AMD doesn't leave the RX 6800 missing any of that, like Nvidia does with their almost-same models with different VRAM configs.
 
I think most people on here are talking about people well versed in hardware. Most of us know we are the minority, but it doesn't change the fact that we should expect a certain amount of VRAM, given a certain amount of performance, for what they want to charge for current-gen graphics cards...

That went dark pretty quick, from liking a 3080 to sucking #*##. Not sure what's more disturbing: you thinking someone liking their GPU equals sucking off a CEO, or you comparing two shit consoles with weaker-than-6700XT GPUs and mobile Zen 2 cores to a 3080 that is miles better and will run conslow settings for ages at 1440p with likely double the framerate... My PS5 is great, but I would pick a 3080 over it any day of the week... Maybe Lisa Su is sitting on your face giving you pink eye, and that's why you can't see how much better a 3080 is than a console...
Well, the point is, a console has more VRAM than a flagship GPU that is capable of delivering better performance, but Nvidia has been artificially crippling its cards with the bare minimum of VRAM for two years to make you upgrade, so Nvidia can go F itself.

Which console has more VRAM? The PS5 has 16 GB GDDR6 system memory shared between the CPU and GPU. That's as if you had 16 GB incredibly fast RAM with a powerful iGPU in your PC. It's not VRAM.

Not to butt into the conversation, but let's stay civil, shall we? :ohwell:
Even with 3 GB reserved for the system, that leaves 13 GB for games, more than a 3080 lol

A console can drive better textures than an Nvidia flagship GPU lol, that is the most ironic BS ever. Nvidia is getting away with daylight robbery
 
RTX 4070/Ti 16GB editions (Super?) when?
 
RTX 4070/Ti 16GB editions (Super?) when?

Never, as the gap in specs and performance between the 4070 Ti and the 4080 is already quite small... it would just eat into the already lacklustre 4080 sales...
 
Unrealistic because ray tracing is enabled? Then you're implying one should expect a 3070 user not to use ray tracing as a result, because, you know, it's unrealistic. But then the 6800XT is considerably faster than a 3070 in raster, so going by that logic, yes, the 3070 would become effectively useless, because the one scenario where it's supposed to have an advantage is apparently too "unrealistic".
crappy ports that have disappointingly become the norm as well? Games where RT causes performance to drop off a cliff into a chasm
So RT is disappointing for tanking performance, but 3070 users definitely bought their card for RT... Pick a side lol. The 3070 will always possess superior RT performance; if anyone who owns one is dumb enough to use it in games with a crap visual return, or to run everything on ultra and tank performance, that's on them. It's not like it plays amazingly on the competitor either; compromises are necessary for a playable experience.

Yeah, RT in half the titles HUB tested purposely to shame the 8GB of VRAM is something virtually nobody should use. I wouldn't enable RT in A Plague Tale if I had a 4090; it's an effect with a massively disproportionate hit compared to the visual payback. Also, you raise a good point: a 6800XT is considerably faster in raster, so it's not an even matchup from the outset.

Your mates can like your comment till the cows come home; it doesn't change the reality that shit RT effects aren't worth applying (good ones are), and that HUB constructed their test to make a point and they made it, to the boon of many an AMD fan. Good work, I'll clap, slowly.

I've said it before and I'll say it again: more VRAM is better, but bugger me, 3070 owners simply wouldn't put themselves into these situations; they'd use applicable settings for a great experience.

RTX 4070/Ti 16GB editions (Super?) when?
Imo, with all the backlash, I see Nvidia allowing partners to double the VRAM and charge more, perhaps after a mid-cycle 'Super' refresh? They're not usually one to let it all slide.
 
The strangest thing I find in this thread right now has got to be the "ultra-settings are dumb", yet "RT is important" line of thought.
RT will always be Ultra+. What do I care about light bounces when I have to cut down texture resolution or draw distances? Not much...
 
I voted "no", I consider that to be 12 GB.
 
Honestly, this thread sorta turned into a bash-Nvidia/defend-my-Nvidia-card thread. Don't get me wrong, Nvidia deserves to be criticized, but people defending 8GB of VRAM on a $400-500 GPU is comical at best in 2023.
[...]
If you don't agree, awesome; Nvidia will likely have an overpriced 4060 Ti with 8GB just for you. It will be called the "I hate progress" special edition and will likely age as well as the Intel i5-7600K...

So, I chalk this up as two things. One is history, and the other is a frustration with an industry that doesn't see its consumers as valuable enough to invest in.


To the former, you may be aware of the limitations of a 32-bit OS. Why bring this up? Well, for years the consoles and PCs were competing with 32-bit hardware. It had a finite limit on addressable space, and thus your system having 4 GB was literally enough. There was a magic time when developers were limited by their actual resources, so they made compromises to make better games.
This isn't a set of thick rose-tinted goggles. These games often had idiotic compromises by modern sensibilities. Try going back to System Shock 2 and telling me that the game wasn't a steaming mess sometimes. That said, you had to optimize. You had to cut what didn't work. You had to be economical and choose what was done. In another example, you have Fallout 3, which had an "open world" that loaded in tiles... and anyone who played it for any period of time probably encountered a run through the wasteland only to hit an invisible loading screen or two... all so it could load the next brown and gray tileset. Again, not perfection, but definitely built within limitations.
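
Just to spell out where the 4 GB figure above comes from, it falls straight out of 32-bit addressing; a quick sanity check in Python:

```python
# A 32-bit pointer can address 2**32 distinct bytes.
print(2 ** 32 / 2 ** 30)  # 4.0 -> a 4 GiB address space, hence the old ~4 GB ceiling
```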

The reason this is a problem is that new games simply don't require that level of optimization. I'll look at Bethesda, who are still basically running Gamebryo as their engine. In Fallout 4 you've got the same loading-cell issue and the same issues with rendering distance, and the solution wasn't to improve the system. Their solution was basically to increase hardware requirements or have terrible performance (read: downtown Boston) and call it a day. "Throw more resources at it" rather than "optimize it so that it works" is the new answer to the problem.



Now, Fallout 4 also touches on point 2. Rarely is a PC port given the love required to be awesome. Horror stories abound, but the general assumption is that because of poor optimization you need a PC not equal to, but more powerful than, a console to get the same performance. I call some of that truth, and some of that shenanigans. Consoles advertise 4K... without discussing that it's a tiled mode. They discuss 4K without noting the upscaling routines. The PC has 144+ Hz monitors to contend with... whereas 60 Hz is basically what most consoles aim for as far as refresh rates go. All of this is fine, but it's obtuse to consumers.
Let me be slightly less obtuse. The problem is that with known hardware, developers test features, enable what doesn't tank performance too much, and they're done. That's console optimization. Now, PC optimization is a joke. What hardware do you assume is running? What drivers for the GPU are there? How do a CPU, RAM, and GPU interact? Can you really assume that someone running a Sandy Bridge CPU with a 3060 Ti has roughly the same performance as a low-end 5000-series Ryzen with a 1080 Ti? All of this is why, instead of "optimization", it's more accurate to talk about a minimum feature listing and a recommended and minimum performance hardware set.

This may sound like a dichotomy. Earlier I railed against poor coding... and here I'm talking about not optimizing things at all. Marrying the two requires you to, for a moment, pretend to be a code monkey. How do you solve the impossible problem of optimizing code for the unknown? The only real option is to make the code run well enough on a test system, push it out to launch, and when it breaks, collect feedback and work on fixes for reported issues. Bad coding is often a sign of both lazy practices and intricate hardware.
Once you marry the two of these ideas it's not difficult to see why we fail, and why companies aren't willing to truly optimize for the literal myriad of configurations which can theoretically be out in the wild.



Now your final point...you're welcome to your opinions. I have nothing worth saying.
 
I frequently see RTSS reporting 14+ GB usage by some games at 4K

THAT IS NOT USAGE, THAT IS RESERVED, and the MORE YOU HAVE THE MORE IT RESERVES

Are 8 GB enough? NO
Do the arguments made here over and over make any sense? NO
 
So RT is disappointing for tanking performance but 3070 users definitely bought their card for RT... Pick a side lol.
No, you need to pick a side. Either accept the fact that the 3070, amongst other Nvidia cards, has been equipped with an inadequate amount of VRAM for what it was advertised to be able to do, or admit that its ray tracing capabilities were so bad that RT was never worth it in the first place.

The first option implies a card like the 3070 has become useless over time due to VRAM capacity, and the latter implies it has always been useless, since we both know the main selling point of these cards was RT performance. Either way, it's the same outcome: 3070 owners got "scammed", so to speak.
 
Even with 3 GB reserved for the system, that leaves 13 GB for games, more than a 3080 lol

A console can drive better textures than an Nvidia flagship GPU lol, that is the most ironic BS ever. Nvidia is getting away with daylight robbery

That's not how this works.

It's "system" RAM not only in the sense that it's used by the OS; it's also used by games, for things that aren't textures.

A PC with a 3080 has 10 GB of VRAM for the GPU alone, but this doesn't mean the game only has the equivalent of 10 GB on a console. Games use both RAM and VRAM. They don't only use VRAM.

When I play something like Hogwarts Legacy at 4K, it uses close to 20 GB of RAM + 14 GB of VRAM. The console has nowhere near 34 GB of VRAM. It doesn't even have enough VRAM to cover the system RAM the game uses.

The only way the console has more RAM than a desktop with a 3080 is if that 3080 system is running less than 8 GB of RAM, which it virtually never is. With 16 GB RAM + 10 GB VRAM, a PC has more effective memory than a console with 16 GB of shared VRAM, since games use more than 6 GB of system RAM incredibly frequently.
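
To put rough numbers on that comparison, here's a back-of-the-envelope sketch in Python; the 3 GB console OS reservation is the figure quoted earlier in the thread and the 6 GB of CPU-side game data is just an example, so treat the exact values as assumptions:

```python
# Back-of-the-envelope memory budgets, all figures in GB.
# The OS reservation and the game's CPU-side memory use are assumed example values.

console_unified = 16      # PS5-class shared GDDR6 pool
console_os_reserved = 3   # figure quoted above; treat as an estimate
game_cpu_side_data = 6    # non-texture game data that would live in system RAM on a PC

console_left_for_graphics = console_unified - console_os_reserved - game_cpu_side_data

pc_system_ram = 16
pc_vram = 10              # e.g. an RTX 3080 10 GB

print(f"Console memory left for graphics work: ~{console_left_for_graphics} GB")
print(f"PC budget: {pc_vram} GB dedicated VRAM + {pc_system_ram} GB system RAM "
      f"= {pc_vram + pc_system_ram} GB total")
```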

THAT IS NOT USAGE, THAT IS RESERVED, and the MORE YOU HAVE THE MORE IT RESERVES

Are 8 GB enough? NO
Do the arguments made here over and over make any sense? NO
I'm well aware of this. It's why I explicitly mentioned it was as reported by RTSS. There's no reliable way to measure how much the game actually needs though.

That is how much the game effectively thinks it may need, though; that's why it reserves it.

No, you need to pick a side. Either accept the fact that the 3070, amongst other Nvidia cards, has been equipped with an inadequate amount of VRAM for what it was advertised to be able to do, or admit that its ray tracing capabilities were so bad that RT was never worth it in the first place.

The first option implies a card like the 3070 has become useless over time due to VRAM capacity, and the latter implies it has always been useless, since we both know the main selling point of these cards was RT performance. Either way, it's the same outcome: 3070 owners got "scammed", so to speak.
I don't think RT is an unrealistic expectation for a 3070. RT at 4K however.....

It performs fine in games with RT, at 1080P.
 
Well, the point is, a console has more VRAM than a flagship GPU that is capable of delivering better performance, but Nvidia has been artificially crippling its cards with the bare minimum of VRAM for two years to make you upgrade, so Nvidia can go F itself, and you fanboys can stay and take it from the back while Jensen is pumping you harder, you fools.
No one denies that Nvidia isn't giving you enough VRAM in current generations. Just read back a couple of posts, you fool. ;)

Even with 3 GB reserved for the system, that leaves 13 GB for games, more than a 3080 lol

A console can drive better textures than an Nvidia flagship GPU lol, that is the most ironic BS ever. Nvidia is getting away with daylight robbery
Reserved for system? Because a game doesn't need system memory to run, surely. Why do we have 16-32 GB RAM in our gaming PCs, then? If what you're saying had any substance, 4 GB would be enough.
 
There's no reliable way to measure how much the game actually needs though.
It's irrelevant anyway; any application allocates the amount of memory it was designed to allocate. Pondering what an application might "need" is a worthless and nonsensical line of questioning.

The people writing game engines aren't idiots to be outsmarted by forum dwellers who claim to know better how much memory a game "needs". What a joke; these people have no clue about anything related to writing software.
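
For anyone who wants to see the allocation numbers for themselves, here's a minimal sketch using the pynvml bindings (assuming an Nvidia GPU and pip install pynvml); like RTSS or Afterburner, it reports what the driver has handed out on the device, not what a game strictly "needs":

```python
# Minimal sketch: query GPU memory via NVML (requires an Nvidia GPU and the pynvml package).
# "Used" here means allocated/reserved on the device, not what a game strictly needs.
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)      # first GPU in the system
info = pynvml.nvmlDeviceGetMemoryInfo(handle)      # values reported in bytes

gib = 1024 ** 3
print(f"Total VRAM:     {info.total / gib:.1f} GiB")
print(f"Allocated/used: {info.used / gib:.1f} GiB")
print(f"Free:           {info.free / gib:.1f} GiB")

pynvml.nvmlShutdown()
```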
 
So, I chalk this up as two things. One is history, and the other is a frustration with an industry that doesn't see its consumers as valuable enough to invest in.
[...]
Now, PC optimization is a joke. What hardware do you assume is running? What drivers for the GPU are there? How do a CPU, RAM, and GPU interact? Can you really assume that someone running a Sandy Bridge CPU with a 3060 Ti has roughly the same performance as a low-end 5000-series Ryzen with a 1080 Ti?
You're not wrong, but that's why standards like x86, x86-64 and instruction sets exist. As for GPUs, it's enough to buy one each from the last 3-4 generations, as you're working with the same driver and similar architectures. What runs on a 3060 should also run on a 3080. Heck, even I could supply some parts from my home collection. :D
 
No, you need to pick a side. Either accept the fact that the 3070, amongst other Nvidia cards, has been equipped with an inadequate amount of VRAM for what it was advertised to be able to do, or admit that its ray tracing capabilities were so bad that RT was never worth it in the first place.
It's not a zero-sum game, mate, and I've 'admitted' it multiple times over: the 3070 would have been better with more VRAM, there you have it yet again. Now, for people who already own a 3070 and cannot change its physical characteristics, what do you advise they do? Live a life of gimped settings, forever playing at 2 fps or not at all, or optimise settings for their setup and likely still have a great time, like literally everyone else does, and just get on with it? And yes, in the vast majority of RT games the 3070 still does great, when you don't hang your hat on one HUB video.

This debate is absolutely hilarious. It's all well and good to state the obvious for the topic du jour, but it feels like what people propose isn't helpful. 'Go back in time and choose more VRAM, never buy Nvidia because they're scum'; well, whoopie for the 80% of PC gamers today who expect that, when paying full release price for a AAA title, they get a decent experience with an 8GB card. In their shoes I can accept lower texture quality; I can't accept a shithouse experience just because they have 8GB of VRAM. That should never crash or gimp a game, period.

Never buy / pre-order shit day-1 ports, because you don't know what you're getting. That's the biggest lesson of all, imo.

Feel free to only quote and address the parts that suit you, leaving out what you can't disagree with; after all, we all do the same, right?
 