
Why Are Modern PC Games Using So Much VRAM?

An excellent read and explanation, even if it is on a different website


Some interesting points

But where PCs keep the meshes, materials, and buffers in VRAM, and the game engine (along with a copy of all the assets being used) in the system memory, consoles have to put everything into the same body of RAM. That means textures, intermediate buffers, render targets, and executing code all take their share of that limited space. In the last decade, when there was just 5 GB or so to play with, developers had to be extremely careful at optimizing data usage to get the best out of those machines.

This is also why it seems like games have just suddenly leaped forward in memory loads. For years, they were created with very tight RAM restrictions but as publishers have started to move away from releasing titles for the older machines, that limit is now two to three times higher....

Thus, if a next-gen console game is using, say, 10 GB of unified RAM for meshes, materials, buffers, render targets, and so on, we can expect the PC version to be at least the same. And once you add in the expected higher detail settings for such ports, then that amount is going to be much larger....

There are other, somewhat less nefarious, factors that will also play a part – development teams inexperienced in doing PC ports or using Unreal Engine, for example, and overly tight deadlines set by publishers desperate to hit sales targets before a financial quarter all result in big budget games having a multitude of issues, with VRAM usage being the least of them.

We've seen that GPU memory requirements are indeed on the rise, and it's only going to continue, but we've also seen that it's possible to make current games work fine with whatever amount one has, just by spending time in the settings menu. For now, of course.
 
High resolution textures and high-poly-count models are a cheap (work amount wise) way to improve the looks of the games. Software engineering basically has to do nothing for this kind of improvement.

Level-of-detail handling requires SWE involvement, as well as asset designers looking over the results. So it is done less.

Now you have a lot of very expensive objects drawn at distances where you can't even tell the difference between the expensive version and a (hypothetical, it isn't there) LOD version.
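
To make the LOD point concrete, here is a minimal C++ sketch of the distance-based selection an engine ends up doing; the struct, names, and thresholds are made up for illustration and not taken from any particular engine:

```cpp
#include <cstddef>
#include <vector>

// Hypothetical LOD table for one object: full-detail mesh plus cheaper versions,
// each with the camera distance beyond which it becomes acceptable to use.
struct MeshLod {
    std::size_t triangleCount;   // cost of drawing this version
    std::size_t vramBytes;       // memory its vertex/index data occupies
    float       minDistance;     // distance at which this LOD kicks in
};

// Pick the cheapest LOD whose distance threshold has been passed.
// Assumes the table is ordered LOD0..LODn by increasing minDistance.
const MeshLod& selectLod(const std::vector<MeshLod>& lods, float cameraDistance) {
    const MeshLod* best = &lods.front();           // LOD0 = full detail
    for (const MeshLod& lod : lods) {
        if (cameraDistance >= lod.minDistance) {
            best = &lod;                           // farther away -> cheaper mesh
        }
    }
    return *best;
}
```

The cheap part is adding more detail to LOD0; the expensive part is authoring and reviewing the lower entries in that table. If they are missing, the selection logic has nothing cheaper to fall back on and the full-detail mesh gets drawn at every distance.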
 
A) Sloppy/lazy coding
B) Conspiracy with the GPU mfgr's to keep prices high, thereby inflating all their bank accounts, except the consumers of course :(
C) Stupid people who keep buying/downloading/playing those games that supposedly "need" all that vram....
 
Finally an article saying what I have been trying to tell people, it is the future at least for the length of the current console gen.
 
High resolution textures and high-poly-count models are a cheap (work amount wise) way to improve the looks of the games. Software engineering basically has to do nothing for this kind of improvement.

Level-of-detail handling requires SWE involvement, as well as asset designers looking over the results. So it is done less.

Now you have a lot of very expensive objects drawn at distances where you can't even tell the difference between the expensive version and a (hypothetical, it isn't there) LOD version.
And how do you increase the quality of the closest LOD? Do you believe game devs are incapable of adjusting LOD bias to get the best results?

Really sad to see cookie-cutter "devs are incompetent/lazy/evil" posts under what is, for once, a quality article from TechSpot.
 
If the rumoured PS5 Pro comes with even more VRAM, this problem might get even worse.
 
A) Sloppy/lazy coding
B) Conspiracy with the GPU mfgr's to keep prices high, thereby inflating all their bank accounts, except the consumers of course :(
C) Stupid people who keep buying/downloading/playing those games that supposedly "need" all that vram....

I do wonder what goes on in the heads of people that use this reasoning.

Like, what exactly do you expect? Do you want games to literally never use more VRAM than they use right now? Games 5 years ago used half the VRAM they use now; go back 10 years and they used maybe a tenth, and so on.

Do you want no progress?
 
And how do you increase the quality of the closest LOD? Do you believe game devs are incapable of adjusting LOD bias to get the best results?

Really sad to see cookie-cutter "devs are incompetent/lazy/evil" posts under what is, for once, a quality article from TechSpot.

That's not what I am saying at all. What I am saying is that the companies don't give the programmers the time for optimizations like that. Not even when first playtesting shows significant problems (because then some artificial deadline or release date is too close).

You don't even get to find out whether the developers are capable of good optimization.
 
under what is, for once, a quality article from TechSpot

The author used to run the Futuremark site "YouGamers", which may pre-date many people here but was a website ahead of its time, although for a very niche audience. They not only reviewed games but also how they ran on hardware, and delved into the game engine. They would even tell you how well your hardware would run the game.
 
Did either nVidia or AMD ever "secure" the framebuffer?
I can recall, back in the early 2010s, reading about how 'open' the framebuffer is as both an attack surface and a vector for exfiltrating data.

That's a lot of memory to work within, for malware.
The same concern came up with firmware ROMs becoming bigger than needed, too. (Though, I'd like to think that's since been addressed by UEFI, Secure Boot, etc.)
 
That's not what I am saying at all. What I am saying is that the companies don't give the programmers the time for optimizations like that. Not even when first playtesting shows significant problems (because then some artificial deadline or release date is too close).

You don't even get to find out whether the developers are capable of good optimization.
Sorry for misunderstanding you then, my bad.
However, my point still stands: broadly speaking, the increase in quality does justify the increase in VRAM occupancy. You can't get around it; if you want to improve the detail of close and far-away objects alike, you need more polygons and more pixels, and thus more VRAM, and no amount of optimization will replace that. Even though you can absolutely make better or worse use of those pixels or polygons.
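
Rough numbers to back that up (back-of-the-envelope, assuming a typical 32-byte vertex of position, normal, and UV plus 32-bit indices; not taken from any specific game): a tiny program showing how just the vertex and index data of a single mesh grows with polygon count.

```cpp
#include <cstdint>
#include <cstdio>

int main() {
    // Assumed per-vertex layout: position (12 B) + normal (12 B) + UV (8 B) = 32 B.
    const std::uint64_t bytesPerVertex = 32;
    const std::uint64_t bytesPerIndex  = 4;            // 32-bit index
    const std::uint64_t triangleCounts[] = {10'000, 100'000, 1'000'000};

    for (std::uint64_t tris : triangleCounts) {
        // Very rough: with shared vertices, vertex count is roughly the triangle count.
        std::uint64_t vertices = tris;
        std::uint64_t bytes = vertices * bytesPerVertex + tris * 3 * bytesPerIndex;
        std::printf("%8llu triangles -> ~%.1f MB of vertex+index data\n",
                    (unsigned long long)tris, bytes / (1024.0 * 1024.0));
    }
}
```

Textures scale the same way, only faster: doubling the resolution quadruples the footprint, so the "more pixels" half of the argument is even more expensive than the "more polygons" half.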

They not only reviewed games but also how they ran on hardware, and delved into the game engine. They would even tell you how well your hardware would run the game.
I did not know about this site, and it seems inaccessible right now. Too bad; it sounds much more interesting than the "x number of cards tested at ultra" we get now.
 
You can't go around it if you want to improve the detail of close or far away objects alike, you need more polygons and more pixels and thus more vram, no amount of optimization will replace that.
I think it's a two-part thing: 1) obviously, more detail means more polygons and more pixels, which means more VRAM, but 2) you can optimize VRAM usage to a certain extent, as the article stated

"For example, in version 1.03 of The Last of Us, we saw average VRAM figures of 12.1 GB, with a peak of 14.9 GB at 4K Ultra settings. With the v1.04 patch, those values dropped to 11.8 and 12.4 GB, respectively. This was achieved by improving how the game streams textures and while there's possibly more optimization that can be done, the improvement highlights how much difference a little fine-tuning on a PC port can make."

Clearly you are not going to optimize the game for an 8 GB video card at those ultra settings, but they were able to knock peak usage down by about 2.5 GB (roughly 17%) with a little optimization.
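
For what that kind of streaming fix looks like in principle: a toy C++ sketch of a resident-texture cache with a fixed budget and least-recently-used eviction. The type names and the approach are illustrative only, not how the game's actual streamer works.

```cpp
#include <cstdint>
#include <list>
#include <string>
#include <unordered_map>
#include <utility>

// Toy resident-texture cache with a fixed VRAM budget and LRU eviction.
class TextureStreamer {
public:
    explicit TextureStreamer(std::uint64_t budgetBytes) : budget_(budgetBytes) {}

    // Request a texture; evict least-recently-used entries until the new one fits.
    void request(const std::string& name, std::uint64_t sizeBytes) {
        auto it = index_.find(name);
        if (it != index_.end()) {                 // already resident: mark as recently used
            lru_.splice(lru_.begin(), lru_, it->second);
            return;
        }
        while (used_ + sizeBytes > budget_ && !lru_.empty()) {
            auto& victim = lru_.back();           // least recently used entry
            used_ -= victim.second;
            index_.erase(victim.first);
            lru_.pop_back();                      // real code would free the GPU resource here
        }
        lru_.emplace_front(name, sizeBytes);
        index_[name] = lru_.begin();
        used_ += sizeBytes;
    }

    std::uint64_t usedBytes() const { return used_; }

private:
    std::uint64_t budget_;
    std::uint64_t used_ = 0;
    std::list<std::pair<std::string, std::uint64_t>> lru_;      // front = most recent
    std::unordered_map<std::string,
        std::list<std::pair<std::string, std::uint64_t>>::iterator> index_;
};
```

The per-preset budget is the knob that gets tuned; a patch that streams more aggressively effectively lowers the peak without changing what is on screen at any one moment.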
 
Sorry for misunderstanding you then, my bad.
However, my point still stands: broadly speaking, the increase in quality does justify the increase in VRAM occupancy. You can't get around it; if you want to improve the detail of close and far-away objects alike, you need more polygons and more pixels, and thus more VRAM, and no amount of optimization will replace that. Even though you can absolutely make better or worse use of those pixels or polygons.


I did not know about this site, and it seems inaccessible right now. Too bad; it sounds much more interesting than the "x number of cards tested at ultra" we get now.
What increase in quality? These titles that go overboard with VRAM usage don't look any better than other titles (some look worse).
Often, when the devs are inexperienced or sloppy, they won't be able to handle high poly counts. When that happens, the cop-out solution is to throw bigger textures on top of the low-poly meshes.

I think it's a two-part thing: 1) obviously, more detail means more polygons and more pixels, which means more VRAM, but 2) you can optimize VRAM usage to a certain extent, as the article stated

"For example, in version 1.03 of The Last of Us, we saw average VRAM figures of 12.1 GB, with a peak of 14.9 GB at 4K Ultra settings. With the v1.04 patch, those values dropped to 11.8 and 12.4 GB, respectively. This was achieved by improving how the game streams textures and while there's possibly more optimization that can be done, the improvement highlights how much difference a little fine-tuning on a PC port can make."

Clearly you are not going to optimize the game for an 8 GB video card at those ultra settings, but they were able to knock peak usage down by about 2.5 GB (roughly 17%) with a little optimization.
Which underscores the real issue: those extra 2.5 GB weren't actually used, but just allocated for no particular reason.
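
That allocated-versus-needed distinction is also why overlay numbers can mislead: most tools report what the process has been granted, not what a frame actually touches. On Windows the granted figure can be read through DXGI; a minimal sketch, with error handling mostly omitted:

```cpp
// Minimal Windows/DXGI sketch: report the process's local video memory usage vs. the OS budget.
#include <windows.h>
#include <dxgi1_4.h>
#include <cstdio>
#pragma comment(lib, "dxgi")

int main() {
    IDXGIFactory4* factory = nullptr;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory)))) return 1;

    IDXGIAdapter1* adapter = nullptr;
    factory->EnumAdapters1(0, &adapter);          // first adapter; a real tool would enumerate all

    IDXGIAdapter3* adapter3 = nullptr;
    adapter->QueryInterface(IID_PPV_ARGS(&adapter3));

    DXGI_QUERY_VIDEO_MEMORY_INFO info = {};
    adapter3->QueryVideoMemoryInfo(0, DXGI_MEMORY_SEGMENT_GROUP_LOCAL, &info);

    std::printf("budget:        %.1f MB\n", info.Budget       / (1024.0 * 1024.0));
    std::printf("current usage: %.1f MB\n", info.CurrentUsage / (1024.0 * 1024.0));

    adapter3->Release(); adapter->Release(); factory->Release();
    return 0;
}
```

CurrentUsage there is the "allocated" number, and it can sit well above what the game would need if it streamed more tightly.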
 
What increase in quality? These titles that go overboard with VRAM usage don't look any better than other titles (some look worse).
Often, when the devs are inexperienced or sloppy, they won't be able to handle high poly counts. When that happens, the cop-out solution is to throw bigger textures on top of the low-poly meshes.
I was gonna say this.

These titles kinda look like ass for the most part, Jedi Survivor and Redfall especially. Textures and models look worse than something like Fallout 4. The Last of Us had nice landscapes and vegetation, but nothing extraordinary that hasn't been seen before. Hell, Metro looks better and is far better optimized.

I think there is a lot of coping on behalf of the game developers, on this site and elsewhere. These games are previous-gen at best in looks and yet demand far too much to run.
 
ZBrush and Substance Painter are to blame. They made making normal maps and edge wear quite easy, so textures exploded...
 
Which underscores the real issue: those extra 2.5 GB weren't actually used, but just allocated for no particular reason.
True, but you could say a strong possibility is that the company that got the deal to port the game was undertrained, understaffed, and under an unrealistic launch deadline.
 
True, but you could say a strong possibility is that the company that got the deal to port the game was undertrained, understaffed, and under an unrealistic launch deadline.
I have no data to infer that. But if that was true, it begs the question: why did they sign up for the job?

I mean, I'm not expecting console-level optimization (where you only need to deal with fewer than 10 configurations) on PC. But the ability to profile memory usage is built into most tools/frameworks. However, I'm not fretting over this. Unlike most people here, I'm not afraid to turn a few knobs to find my own playable settings if I really want to play a particular game.
 
I have no data to infer that. But if that was true, it begs the question: why did they sign up for the job?
Well, considering Sony used Iron Galaxy instead of their own porting studio Nixxes, we can assume it was at least a scheduling issue (Nixxes not being able to hit a preferred launch window), or Iron Galaxy offering a lower bid, or both. In any scenario, the reason is money (shocker). Iron Galaxy has a history of port issues, like Batman: Arkham and, I believe, Uncharted as well.
 
I think it's a two-part thing: 1) obviously, more detail means more polygons and more pixels, which means more VRAM, but 2) you can optimize VRAM usage to a certain extent, as the article stated

"For example, in version 1.03 of The Last of Us, we saw average VRAM figures of 12.1 GB, with a peak of 14.9 GB at 4K Ultra settings. With the v1.04 patch, those values dropped to 11.8 and 12.4 GB, respectively. This was achieved by improving how the game streams textures and while there's possibly more optimization that can be done, the improvement highlights how much difference a little fine-tuning on a PC port can make."

Clearly you are not going to optimize the game for an 8 GB video card at those ultra settings, but they were able to knock peak usage down by about 2.5 GB (roughly 17%) with a little optimization.
What increase in quality? These titles that go overboard with VRAM usage don't look any better than other titles (some look worse).
I did not say that every game ever was perfectly optimised to minimize VRAM usage. But generally speaking, the evolution in VRAM usage is dictated by necessity, if you call the increase in visual fidelity a necessity.
I also want to stress that the VRAM allocated and used by a game on a 24 GB card does not necessarily mean it can't run with less VRAM at the same settings and performance. Many games will allocate a VRAM pool and leave unused textures in it if the memory is available, for instance.

These titles kinda look like ass for the most part, Jedi Survivor and Redfall especially. Textures and models look worse than something like Fallout 4. The Last of Us had nice landscapes and vegetation, but nothing extraordinary that hasn't been seen before. Hell, Metro looks better and is far better optimized.
You must be joking, comparing Jedi Survivor, which has extremely high asset quality and diversity, to Fallout 4.

The thing is, as rendering techniques advance, it costs more and more for a similar uptick in perceived quality. I don't think most gamers appreciate the amount of data needed for modern rendering. Most objects will need several 4K textures superimposed to create modern materials. If anything, devs have become much better at streaming assets and utilizing VRAM in the past 10 years or so.
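
Quick arithmetic on what "several 4K textures per material" costs (generic assumptions: a four-texture PBR stack, full mip chains, BC7 at 1 byte per texel; none of this is from a specific title):

```cpp
#include <cstdio>

int main() {
    const double texels      = 4096.0 * 4096.0;   // one 4K texture
    const double mipOverhead = 4.0 / 3.0;         // full mip chain adds ~33%
    const double MB          = 1024.0 * 1024.0;

    // Assumed material stack: albedo, normal, packed roughness/metallic/AO, emissive.
    const int texturesPerMaterial = 4;

    double uncompressed = texels * 4.0 * mipOverhead * texturesPerMaterial / MB;  // RGBA8, 4 B/texel
    double bc7          = texels * 1.0 * mipOverhead * texturesPerMaterial / MB;  // BC7, 1 B/texel

    std::printf("4 x 4K material, RGBA8: ~%.0f MB\n", uncompressed);  // ~341 MB
    std::printf("4 x 4K material, BC7:   ~%.0f MB\n", bc7);           // ~85 MB
}
```

Multiply that by however many distinct materials are resident at once and the jump from last-gen memory budgets stops looking mysterious.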

I think there is a lot of coping on behalf of the game developers, on this site and elsewhere.
Given the general hostility towards those that create games, and the complete lack of technical understanding of many gamers everywhere, this is quite rich.
 
More usage is better in general; by putting caps on it you get many issues.
I just wish the GPU manufacturers kept up; we should have 30 to 40 GB on a 4090 by now, and 16 GB on a 4060.

the number of PC gamers is gonna be pretty small if your thinking is correct.. :)

trog
 
the number of PC gamers is gonna be pretty small if your thinking is correct.. :)

trog

Worse than small; PC gaming would die tomorrow if they did that. Maybe not PC gaming, but AAA gaming for sure. Dead.

Anyway, they will learn that sales will be bad if they alienate 80% of their possible customers; this won't last, let alone get worse.
 
Given the general hostility towards those that create games, and the complete lack of technical understanding of many gamers everywhere, this is quite rich.

I think hostility is a bit much in most cases, and much of it is aimed at publishers pushing games out before they are ready, and in some cases broken. I think "fed up" would be a more appropriate term. Have a look at this:

 
general hostility towards those that create games
I don't believe the hostility (personally) is towards those people who create games; I still believe there is a strong level of gratitude towards them. I believe the hostility is towards the publishers and GPU tech companies that increasingly treat the consumer as a product they deliver to their shareholders (their one true consumer). You are seeing rushed products go out the door, where consumers almost act as beta testers, and hardware that costs a pretty penny, is positioned as high performance, and is outdated in 24 months. The response from these companies is "just buy more, just pay more", but consumers have the right to ask what we are getting for our money and to say we want better.
 
For NVIDIA this is pure gold: buy the 4090, that's what they want, shifting sales to higher tiers. AMD is a bit better but not blameless; they are going to release 8 GB cards too, and probably less.

Lazy game devs that refuse to optimize bad ports are the ones with the most to lose, and we are seeing that: sales of shit are getting incredibly hard to push through to distracted buyers. Games keep flopping one after another.
 