
Are game requirements and VRAM usage a joke today?

I was looking at old videos and found one where I was playing Guild Wars 2 back in 2013, barely using 300 MB of VRAM at 5040x900 resolution.


I think I was using either a Radeon HD 7850 GPU or an R9 380 at the time, and I recall GW2 being one of the best-looking games (or at least MMORPGs) at that time. It must have been relatively optimized too, for me to be pulling off 50 FPS :p

What's with some game requirements nowadays requiring 8GB or more of VRAM for 1080p? I've heard A Plague Tale and some Jedi game are big offenders.
 
The only joke is the laughably low amount of VRAM on most modern GPUs.
 
You can't trust software GPU usage numbers at all, not even in-game ones (some games show you xx/xx GB usage, for example).
Plus, many game engines just allocate xx% of VRAM by default. Tons of games allocate more than they need, like 85 or 90% of all available VRAM, yet use only half of that in reality.
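As a rough illustration of why allocated and actually-used VRAM diverge, here's a minimal sketch (purely hypothetical numbers and function names, not any real engine's code) of the kind of budget heuristic engines tend to use: reserve a fixed share of total VRAM up front as a streaming pool, then fill it on demand.

```python
# Hypothetical sketch of a VRAM budgeting heuristic; numbers are illustrative only.

def vram_budget_gb(total_vram_gb: float, reserve_fraction: float = 0.9) -> float:
    """Engine reserves a fixed share of total VRAM as its streaming pool (the "allocated" figure)."""
    return total_vram_gb * reserve_fraction

def resident_gb(asset_sizes_gb: list[float], budget_gb: float) -> float:
    """Stream assets into the pool until the budget is hit; the rest stays in RAM / on disk."""
    used = 0.0
    for size in asset_sizes_gb:
        if used + size > budget_gb:
            break
        used += size
    return used

budget = vram_budget_gb(12.0)                       # 12 GB card -> ~10.8 GB "allocated"
in_use = resident_gb([1.5, 2.0, 1.0, 0.8], budget)  # ~5.3 GB actually resident
print(f"allocated pool: {budget:.1f} GB, resident assets: {in_use:.1f} GB")
```

A monitoring tool that reports the pool size will show ~10.8 GB "used" here, even though the assets actually sitting in VRAM add up to about half that.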

Also, Nvidia has several features to limit VRAM requirements (improved cache hit/miss behaviour in the Ada architecture, for example, plus better memory compression; you can read more about this in the architecture deep dives).

Pretty much no games use more than 12GB at 4K/UHD with ultra settings. With heavy RT you might go above this, though, yet no 12 or 16 GB card will manage heavy RT at 4K/UHD anyway. Especially not AMD cards, since their RT performance is very weak. Pointless.


Most of the games you're talking about are AMD-sponsored games and rushed console ports -> https://www.amd.com/en/gaming/featured-games.html

Properly optimized games use little VRAM. Atomic Heart completely maxed out at 4K/UHD uses about 7GB and looks better than 99% of games.

Generally, pretty much no game needs more than 8-10GB at 1440p. Most hover around 4-6GB of usage. 12GB is more than enough; 16GB or more is wasted.

Look here: the 3070 8GB outperforms the 6700 XT 12GB with ease in new games at 4K/UHD in terms of minimum fps. Minimum fps would drop very low if VRAM were an issue -> https://www.techpowerup.com/review/amd-radeon-rx-7800-xt/35.html

All this VRAM talk started because of AMD marketing and a few rushed console ports (AMD-sponsored as well). Very few properly optimized games use a lot of VRAM. Like I said -> https://www.techpowerup.com/review/atomic-heart-benchmark-test-performance-analysis/5.html


AMD has been caught doing this stuff before, back with the Shadow of Mordor uncompressed texture pack that did nothing for the end user -> https://www.pcgamer.com/spot-the-di...rdor-ultra-hd-textures-barely-change-a-thing/

They only did this to make many Nvidia cards drop performance (though it also affected many AMD users).

The difference between high and ultra textures is often just slightly less compression (sometimes ultra is uncompressed), which most of the time you won't notice while actually playing the game. Dropping texture quality to low, and sometimes medium, is easy to spot, but high and ultra mostly look identical, especially without 300% zoom and in motion.
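To put rough numbers on the compression point (a back-of-the-envelope sketch, assuming uncompressed means RGBA8 at 4 bytes per texel, block-compressed means BC7 at 1 byte per texel, and a full mip chain adds about a third):

```python
# Rough texture memory estimate: uncompressed RGBA8 vs BC7 block compression.
# Purely illustrative; real games mix formats, resolutions and streaming budgets.

MIP_FACTOR = 4 / 3  # a full mip chain adds roughly one third on top of the base level

def texture_mib(width: int, height: int, bytes_per_texel: float) -> float:
    return width * height * bytes_per_texel * MIP_FACTOR / 2**20

for res in (2048, 4096):
    rgba8 = texture_mib(res, res, 4.0)  # uncompressed "ultra pack" style
    bc7 = texture_mib(res, res, 1.0)    # typical block-compressed "high" setting
    print(f"{res}x{res}: RGBA8 ~{rgba8:.0f} MiB vs BC7 ~{bc7:.0f} MiB")
```

A single 4096x4096 material jumps from roughly 21 MiB to roughly 85 MiB when you skip block compression, which is why an "uncompressed ultra" pack can balloon VRAM usage without looking meaningfully better.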


I have a 4090 and 24GB is absolutely wasted for gaming. Outside of allocation, it's simply not needed.

By the time 24GB is actually needed in demanding games maxed out, the 4090 will belong in the trash bin. The GPU will be too slow to run games maxed out anyway.


Some people think a lot of VRAM will matter eventually; they just don't account for the GPU, which will be too weak to max out games anyway, meaning less VRAM is required.


You have to be logical. Game developers know that the majority of PC gamers don't have more than 8GB.

PS5 and XSX have 16GB of shared RAM for the entire system, meaning OS and BACKGROUND TASKS, GAME and GRAPHICS, with a 4K/UHD resolution target (dynamic res, though).
 
In my opinion, it's the 95-5 rule. That is, you achieve 95% greatness in any technology with relative ease, then the other 5% takes an ever-increasing amount of effort. PC graphics has reached the 5% stage.
 
What's with some game requirements nowadays requiring 8GB or more of VRAM for 1080p? I've heard A Plague Tale and some Jedi game are big offenders.
It always baffles me how the "experts" of the PC gaming community don't know the answer to this. Maybe it's because games are being developed for current-gen consoles with 16 GB of RAM?

Maybe because at the time Guild Wars 2 came out, the 360 and PS3 both had 256 MB of VRAM, which explains why most games at the time never used more than 500 MB on PC?
 
Waiting for the first 100GB vram gaming GPU....

 
In my opinion, it's the 95-5 rule. That is, you achieve 95% greatness in any technology with relative ease, then the other 5% takes an ever-increasing amount of effort. PC graphics has reached the 5% stage.
I still think there's a better, easier way to do graphics, even with ray-chasing: PowerVR-like tile-based rendering.
 
If you'd compared games from 2000 with games from 2010 you'd have seen the same difference; things advance.
 
I was looking at old videos and found one where I was playing Guild Wars 2 back in 2013, barely using 300 MB of VRAM at 5040x900 resolution.


I think I was using either a Radeon HD 7850 GPU or an R9 380 at the time, and I recall GW2 being one of the best-looking games (or at least MMORPGs) at that time. It must have been relatively optimized too, for me to be pulling off 50 FPS :p

What's with some game requirements nowadays requiring 8GB or more of VRAM for 1080p? I've heard A Plague Tale and some Jedi game are big offenders.
Assets and shaders got BIG. Texturing these days is just a small piece of the pie. There are many more things that are real geometry now (an asset) rather than a flat wall with a texture on it. Also, view distance came into play and, with that, LOD. Models might need several versions: a super high-res one for close-ups, another one for general gameplay, etc. Heck, even GW2 has these features. You get an extra high-resolution model of your own character; the graphics options even have a toggle for it. Additionally, the game has a setting that limits the number of different character models you can see at any time, the introduction of which actually fixed the age-old MMO problem of massive FPS drops in crowded areas.
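As a toy illustration of that LOD idea (the distance thresholds and names below are made up, nothing from GW2 itself), the renderer simply swaps in a cheaper model version as the camera gets further away, so only a handful of nearby characters ever need the high-res mesh resident in VRAM:

```python
# Toy distance-based LOD selection; thresholds and names are illustrative only.

LOD_THRESHOLDS = [
    (10.0, "lod0_high"),    # close-ups: full-detail mesh and textures
    (40.0, "lod1_medium"),  # general gameplay distance
    (120.0, "lod2_low"),    # distant crowds
]

def pick_lod(distance_m: float) -> str:
    for max_dist, lod in LOD_THRESHOLDS:
        if distance_m <= max_dist:
            return lod
    return "impostor"  # beyond the last threshold: a flat billboard stand-in

print([pick_lod(d) for d in (5, 25, 90, 300)])
# ['lod0_high', 'lod1_medium', 'lod2_low', 'impostor']
```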

Guild Wars 2, you might not feel it when playing, but it is an instanced affair. There is no open world. There are just maps, and they're not all that immense; those that are generally use a fixed set of assets with some area-specific stuff on top. Every area is distinct: deserts are deserts, a jungle is a jungle, and transitions barely exist within an area. A lot of the terrain is just a height map with a texture pass; many objects are part of the terrain too, it's basically precooked. None of this consists of highly detailed shaders. So basically, to 'paint' a whole map in GW2, they use some base textures and some color variations. There is a water texture. There is a skybox. Done. On top of that you have the interactive assets the game uses everywhere. A lot of the variation is also introduced through static lighting and precooked shadow maps.

Take a look at VRAM usage during the load of a GW2 area; this is where the game replaces what's in VRAM to get shit done. This game is optimized extremely well, and for good reason, because optimization translates into low-end system players getting their fantastic experience nonetheless and staying in the game (B2P!) to use more of the MTX in it. Everything in this game is literally 'flowing' on your screen; even the UI is adapted to it.

That last bit also answers, in part, the question of why games today aren't quite as optimized. Their primary sale might just happen on the console, where performance is fixed to a target (and not always achieved). This is the baseline that's optimized towards. Everything else is a bonus, and so most of the time you just don't get it, at least not at launch, and perhaps never. There's just barely any money to be made from it. So we can whine at devs and companies all day, or we can just purchase hardware that will sufficiently stand the test of time and of the current (and perhaps next / transitional console phase) baseline performance requirement. The baseline is no longer 8GB. It's 12-14GB, and you can make do with less, but there are no guarantees. This echoes in the VRAM usage I see in many recent games: 10-12GB is not uncommon. Does the GPU need all of that all the time? Nope, but the game likes to allocate it nonetheless, so you can use it.

And frankly, back in the day it wasn't much different, except lower settings levels would impact VRAM usage more dramatically. If you look at the gap between lowest and highest settings in VRAM usage today (see TPU testing), it's more often than not less than 2GB, but curiously, the gap between midrange and top-end VRAM has doubled (even tripled if you count the x90, but I consider the 24GB a 'creator' upgrade more so than gaming oriented) over time. Kepler and its refresh offered 2~3GB across two generations, no less (!), Maxwell offered 4~6GB and Pascal moved the bar to 8~11GB. Now we're looking at anything between 8GB~24GB. Even just 8~16GB is a radical change. If you read between the lines here, you can distill that 8GB is far from mainstream for any half-decent gaming setup now and in the near future. It's the new bottom line, hopefully, for some time, at best.
 
Some people think a lot of VRAM will matter eventually; they just don't account for the GPU, which will be too weak to max out games anyway, meaning less VRAM is required.

This is (generally) true, but that doesn't mean it makes sense to skimp on VRAM. The 8GB 4060 Ti is ... not good. A 12GB 4090 would be an insult.
PS5 and XSX have 16GB of shared RAM for the entire system, meaning OS and BACKGROUND TASKS, GAME and GRAPHICS, with a 4K/UHD resolution target (dynamic res, though).

Sure, but those systems are also quite different from, say, a bog-standard Windows machine.
 
You can't trust software GPU usage numbers at all, not even in-game ones (some games show you xx/xx GB usage, for example).
Plus, many game engines just allocate xx% of VRAM by default. Tons of games allocate more than they need, like 85 or 90% of all available VRAM, yet use only half of that in reality.

Also, Nvidia has several features to limit VRAM requirements (improved cache hit/miss behaviour in the Ada architecture, for example, plus better memory compression; you can read more about this in the architecture deep dives).

Pretty much no games use more than 12GB at 4K/UHD with ultra settings. With heavy RT you might go above this, though, yet no 12 or 16 GB card will manage heavy RT at 4K/UHD anyway. Especially not AMD cards, since their RT performance is very weak. Pointless.


Most of the games you're talking about are AMD-sponsored games and rushed console ports -> https://www.amd.com/en/gaming/featured-games.html

Properly optimized games use little VRAM. Atomic Heart completely maxed out at 4K/UHD uses about 7GB and looks better than 99% of games.

Generally, pretty much no game needs more than 8-10GB at 1440p. Most hover around 4-6GB of usage. 12GB is more than enough; 16GB or more is wasted.

Look here: the 3070 8GB outperforms the 6700 XT 12GB with ease in new games at 4K/UHD in terms of minimum fps. Minimum fps would drop very low if VRAM were an issue -> https://www.techpowerup.com/review/amd-radeon-rx-7800-xt/35.html

All this VRAM talk started because of AMD marketing and a few rushed console ports (AMD-sponsored as well). Very few properly optimized games use a lot of VRAM. Like I said -> https://www.techpowerup.com/review/atomic-heart-benchmark-test-performance-analysis/5.html


AMD has been caught doing this stuff before, back with the Shadow of Mordor uncompressed texture pack that did nothing for the end user -> https://www.pcgamer.com/spot-the-di...rdor-ultra-hd-textures-barely-change-a-thing/

They only did this to make many Nvidia cards drop performance (though it also affected many AMD users).

The difference between high and ultra textures is often just slightly less compression (sometimes ultra is uncompressed), which most of the time you won't notice while actually playing the game. Dropping texture quality to low, and sometimes medium, is easy to spot, but high and ultra mostly look identical, especially without 300% zoom and in motion.


I have a 4090 and 24GB is absolutely wasted for gaming. Outside of allocation, it's simply not needed.

By the time 24GB is actually needed in demanding games maxed out, the 4090 will belong in the trash bin. The GPU will be too slow to run games maxed out anyway.


Some people think a lot of VRAM will matter eventually; they just don't account for the GPU, which will be too weak to max out games anyway, meaning less VRAM is required.


You have to be logical. Game developers know that the majority of PC gamers don't have more than 8GB.

PS5 and XSX have 16GB of shared RAM for the entire system, meaning OS and BACKGROUND TASKS, GAME and GRAPHICS, with a 4K/UHD resolution target (dynamic res, though).
God I wish more people understood this! The "VRAM" hysteria is out of control. People seem to generally be horrible at seeing the "big picture".
Much of this nonsense originates from "Hardware Unboxed" and people just continue to regurgitate it.
 
Yes and no, oddly enough. In some games, the amount of VRAM they ask for feels insulting, and in others it's extremely reasonable. Other members have covered a lot of the reasons why this can happen (it's not just textures, of course), but yeah, some of the insulting ones pair high requirements with sweet f*** all visual return, be it in textures or the overall quality of the assets and geometry.

I've been very content with 10GB for ~3 years now targeting 4K120, albeit using upscaling when available, because IMO you'd be a fool not to. But if I were buying a new card today for use over the next 3+ years at 4K, or perhaps even 1440p long term, I wouldn't buy a 10GB card. #080 is going to truck through 1080p for a damn long time after I do finally replace it.
 
Yet it still feels like it wasn't THAT long ago that people said 256MB was overkill..

I play at 4K, but since my 6700 XT isn't the fastest card out there, its 12GB is fine if I want 60 fps, because I have to drop some settings anyway.

Waiting for the first 100GB vram gaming GPU....

Well, we have 24GB on the enthusiast-level cards now, so I wouldn't be surprised if we get 96GB/128GB cards this decade :laugh: In recent times the growth of cards' VRAM has been pretty quick compared to, let's say, 10 years ago. I guess modern games with their sharp textures etc. have the most to do with it.
 
The only joke is the laughably low amount of VRAM on most modern GPUs.
Then the **90 series is especially for you; go get one, and don't forget to take two!
 
Guild Wars 2 launched in 2012 and it was an MMO, which means it has to cater to the weakest systems out there.

MMOs have to be able to run on a toaster to attract a large audience. I remember back when I played GW2 it used about 3GB of VRAM on max settings. It's not surprising that it could go down to 256 MB of VRAM, given many MMO players still had cards with that much VRAM at the time. That said, it was always a stutter-fest in world vs world vs world, even on an SSD with the fastest processor and an overkill GPU (the game was mostly CPU bound). It does use significantly more main system memory though, 8GB on average.

So is it crazy that an MMO from 2012 could work with 256 MB of VRAM and 8 GB of main system memory? No, it's expected that MMOs cater to lower specs, akin to how RPG Maker games do (they only require 512 MB of memory in total).

Some people think a lot of VRAM will matter eventually; they just don't account for the GPU, which will be too weak to max out games anyway, meaning less VRAM is required.

Let's assume you have an underpowered GPU; say it can run games at 40 FPS on ultra settings. Not that great, right? You might consider that GPU underpowered, but that same GPU without enough VRAM will dip down to 29 FPS with massive frame spikes. The experience of the latter is objectively vastly worse. On top of that, if the user wants to, they can enable DLSS (with or without frame gen). Those technologies will help the card with more VRAM, which will now be able to get above 60 FPS and potentially even 120 FPS with frame generation enabled. Meanwhile, the VRAM-starved card can enable these technologies too, only they don't help, because the card simply doesn't have enough VRAM; the frame rate goes slightly up, but the frame spikes are still terrible and the experience is awful.
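As a rough sketch of why upscaling only goes so far when VRAM is the bottleneck (the target count and formats below are assumptions for illustration, not any specific engine): the render targets shrink with the internal resolution, but textures, geometry and the RT acceleration structures stay the same size.

```python
# Back-of-the-envelope render-target memory at native 4K vs a DLSS-Quality internal resolution.
# Assumes roughly six G-buffer-style targets at 8 bytes/pixel (e.g. RGBA16F); illustrative only.

def render_targets_mib(width: int, height: int,
                       num_targets: int = 6, bytes_per_pixel: int = 8) -> float:
    return width * height * num_targets * bytes_per_pixel / 2**20

native_4k = render_targets_mib(3840, 2160)     # ~380 MiB
dlss_quality = render_targets_mib(2560, 1440)  # ~169 MiB at the internal resolution
print(f"native 4K: ~{native_4k:.0f} MiB, DLSS Quality internal: ~{dlss_quality:.0f} MiB")
```

So upscaling claws back a few hundred MiB of render targets, which helps a healthy card hit higher frame rates, but it does nothing for a texture pool that already doesn't fit.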

What you deem as "too weak to max out games anyway" is your opinion, and there's evidence that it's an incorrect one. Was the 3070 too weak to take advantage of 16GB? We already know the answer to that one because HWUB already tested it: no, it wasn't. There was an immediate and obvious benefit to the extra VRAM when comparing an 8GB vs a 16GB 3070 (i.e. a professional variant). 16GB for that card is the difference between still being able to use ray tracing in new games with a stutter-free experience, or not. That's huge.
 
Starfield was optimized around 8GB of VRAM (because of the consoles). It's a very aggressive optimization that caused a lot of bugs and even broke Intel Arc compatibility. It was somewhat fixed.


Let's go back in time (correct me if I'm wrong, it's just from memory):
2010: 512MB-1GB cards were common
2012: 1-2GB was common
2014: 2-3GB was common
2016: 4-8GB was common, but a lot of cheap cards had 6GB (1060) and the 1070 8GB was the best buy, I think
2018: the average was 8GB
That's a 1.5-2x increase every two years.
2023: average gaming cards are 8-12GB... RTX 3060-3070 with just 8GB??? 4070-4080 with 12GB
So I think in 2022-2023 an average video card should have at least 12-16GB of VRAM.
The Intel A750, RTX 3060/4060, RX 6700 and RX 7600 should have at least 12-16GB of VRAM (no need for 6-8GB models), and should never run out of VRAM at 1440p, even with RT.
 
Starfield was optimized around 8GB of VRAM (because of the consoles). It's a very aggressive optimization that caused a lot of bugs and even broke Intel Arc compatibility. It was somewhat fixed.


Let's go back in time (correct me if I'm wrong, it's just from memory):
2010: 512MB-1GB cards were common
2012: 1-2GB was common
2014: 2-3GB was common
2016: 4-8GB was common, but a lot of cheap cards had 6GB (1060) and the 1070 8GB was the best buy, I think
2018: the average was 8GB
That's a 1.5-2x increase every two years.
2023: average gaming cards are 8-12GB... RTX 3060-3070 with just 8GB??? 4070-4080 with 12GB
So I think in 2022-2023 an average video card should have at least 12-16GB of VRAM.
The Intel A750, RTX 3060/4060, RX 6700 and RX 7600 should have at least 12-16GB of VRAM (no need for 6-8GB models), and should never run out of VRAM at 1440p, even with RT.

Your numbers are pretty spot on; in 2016 there were cards in the $270 - $320 price range (where most users buy) with between 4GB and 8GB of VRAM.

I would only change 2023 from 8-12GB to 8GB as the average. You have to spend at least $500 on the Nvidia side to get more than 8GB of VRAM (and that comes with the caveat that you are getting a 4060 Ti, which is a tiny chip with a tiny memory bus). Even on the AMD side you have to spend $450 to get more than 8GB unless you go last gen. Most people cannot afford to spend that much money on a GPU alone, so they are forced to 8GB, and by extension the average remains at 8GB.

It's the largest stagnation in VRAM size the industry has ever seen. We've had affordable 8GB cards since 2016, and new cards are coming out today with the same 8GB at even higher prices in 2023. It's not like memory prices are high either; they are dirt cheap. Nvidia could certainly add more memory to its cards, but that would cut into their 70% gross margins and customers wouldn't be forced to upgrade so often.
 
Computing requirements are like goldfish: they grow to fill their environment. As capability increases, there's less incentive to write more-efficient programs, as the greater muscle enables the target performance. It also becomes more difficult to program for efficiency as the programs in question become more complex (or bloated if you're feeling unkind). Run a 5-10 year old version of a given piece of software, and I'll wager you'll be surprised at how snappy it feels. On the surface, there's no technical reason this should be the case. The most recent version of Adobe Reader, for instance, doesn't do anything markedly different from Reader 2020. They even simplified the UI. And yet, navigating the newer version feels like you're behind two VMs and a bad VPN.
 
Guild Wars 2 launched in 2012 and it was an MMO, which means it has to cater to the weakest systems out there.

MMOs have to be able to run on a toaster to attract a large audience. I remember back when I played GW2 it used about 3GB of VRAM on max settings. It's not surprising that it could go down to 256 MB of VRAM, given many MMO players still had cards with that much VRAM at the time. That said, it was always a stutter-fest in world vs world vs world, even on an SSD with the fastest processor and an overkill GPU (the game was mostly CPU bound). It does use significantly more main system memory though, 8GB on average.

So is it crazy that an MMO from 2012 could work with 256 MB of VRAM and 8 GB of main system memory? No, it's expected that MMOs cater to lower specs, akin to how RPG Maker games do (they only require 512 MB of memory in total).



Let's assume you have an underpowered GPU; say it can run games at 40 FPS on ultra settings. Not that great, right? You might consider that GPU underpowered, but that same GPU without enough VRAM will dip down to 29 FPS with massive frame spikes. The experience of the latter is objectively vastly worse. On top of that, if the user wants to, they can enable DLSS (with or without frame gen). Those technologies will help the card with more VRAM, which will now be able to get above 60 FPS and potentially even 120 FPS with frame generation enabled. Meanwhile, the VRAM-starved card can enable these technologies too, only they don't help, because the card simply doesn't have enough VRAM; the frame rate goes slightly up, but the frame spikes are still terrible and the experience is awful.

What you deem as "too weak to max out games anyway" is your opinion, and there's evidence that it's an incorrect one. Was the 3070 too weak to take advantage of 16GB? We already know the answer to that one because HWUB already tested it: no, it wasn't. There was an immediate and obvious benefit to the extra VRAM when comparing an 8GB vs a 16GB 3070 (i.e. a professional variant). 16GB for that card is the difference between still being able to use ray tracing in new games with a stutter-free experience, or not. That's huge.
so what's the point of DLSS then?
 
PS5 and XSX have 16GB of shared RAM for the entire system, meaning OS and BACKGROUND TASKS, GAME and GRAPHICS, with a 4K/UHD resolution target (dynamic res, though).
Yeah, not exactly a fair comparison though. Consoles are gaming machines and are much more efficient with their memory use, due to them all being the same and only having one pool of memory versus two... plus the fast internal SSD that the GPU has access to.

On PCs we expect better than consoles; the developers have to shuffle data around from place to place and account for the great diversity of hardware, which has overhead... and whether we like it or not, all of this IS increasing VRAM requirements for current-gen console ports, especially PS5 ports.

You can't trust software GPU usage numbers at all, not even in-game ones (some games show you xx/xx GB usage, for example).
Plus, many game engines just allocate xx% of VRAM by default. Tons of games allocate more than they need, like 85 or 90% of all available VRAM, yet use only half of that in reality.
You can easily get the actual VRAM usage in RTSS: instead of going to 'memory', scroll down to 'GPU dedicated memory usage'. The problems that not enough VRAM causes, like stuttering, crashing, or missing textures, aren't caused by over-allocation. And you shouldn't assume allocation is what people are measuring, because a lot of the time it's not.
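If you'd rather pull a similar number outside of RTSS, here's a minimal sketch using NVML through the pynvml bindings (assuming an Nvidia card and that pynvml is installed); it prints the device-wide figure plus per-process dedicated VRAM, which is closer to what's actually resident than an engine's own allocation counter:

```python
# Minimal sketch: device-wide and per-process dedicated VRAM via NVML (pynvml).
# Assumes an Nvidia GPU; per-process numbers may be unavailable on some OS/driver combos.
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)

mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
print(f"device: {mem.used / 2**20:.0f} / {mem.total / 2**20:.0f} MiB in use")

for proc in pynvml.nvmlDeviceGetGraphicsRunningProcesses(handle):
    used = proc.usedGpuMemory
    label = f"{used / 2**20:.0f} MiB" if used is not None else "n/a"
    print(f"pid {proc.pid}: {label}")

pynvml.nvmlShutdown()
```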
 
Run a 5-10 year old version of a given piece of software, and I'll wager you'll be surprised at how snappy it feels. On the surface, there's no technical reason this should be the case. The most recent version of Adobe Reader, for instance, doesn't do anything markedly different from Reader 2020. They even simplified the UI. And yet, navigating the newer version feels like you're behind two VMs and a bad VPN.
So why can't we just use the Reader from 2020 until there's a major quality update? Why update even when there's absolutely no need to?
 
So why can't we just use the Reader from 2020 until there's a major quality update? Why update even when there's absolutely no need to?
Still rocking Office XP Pro at home for exactly that reason. There is no snappier Word... Even when files get large (>300 pages full of tables and whatnot), XP remains snappy. 365 & Word Online will choke completely on this, and even the offline version is ridiculously slow; you can just feel it choke on large text files. If it's really bad, you'll even get a constant intermittent loading cursor, like once every second, continuously, and you can't even work properly. We're even moving to splitting the files in half now at work, it's that bad.
 
So why can't we just use the Reader from 2020 until there's a major quality update? Why update even when there's absolutely no need to?

We can and I do. Adobe doesn't make finding the installer particularly obvious, though. At least it shows in web search results if one searches for it specifically.
 