
Is 24 GB of video RAM worth it for gaming nowadays?

  • Yes

    Votes: 66 41.5%
  • No

    Votes: 93 58.5%

  • Total voters
    159
I think that when A Plague Tale at 4K ultra uses around 4.5 to 5.5 GB of VRAM, the whole VRAM discussion is pointless. For comparison, TLOU requires 9.5 GB at 720p, and it looks much worse than A Plague Tale. So the only thing having more VRAM will achieve is games hogging more and more because devs don't optimize anything. You won't get better visuals; you'll just pay more for a card with more VRAM that also uses more power. Great, isn't it?
Yes, but does it stutter?
Because I've been seeing a lot of complaints about stuttering in new games.
 
No "added" input lag, key word here is "added", game with vsync at 60fps is typically triple buffered that means 2 frames worth of delay so roughly 30ms.
No added input lag compared to the uncapped situation. What more do you want? Zero input lag doesn't exist, you know.

Say you cap at 60 with no vsync; still, every frame costs 16 ms, so the input lag is at the very least 16 ms.
That is factually wrong because input lag does not equal frame time (because of the rendering pipeline you mentioned below), but for simplicity's sake, let's say you're right. But then...

Say you don't cap and the game runs at 120 average; that's 8 ms per frame at the very least now. This isn't entirely accurate because a typical rendering pipeline has more frames rendered ahead of time, but you get the point (hopefully): the less time you spend on a frame, the lower the input lag.
If that frame ever gets displayed on your screen, which it doesn't. It's wasted effort.

Why you insist on believing that getting tearing (which, by the way, is less noticeable at higher, i.e. uncapped, framerates) while also not getting the lowest possible input lag is the superior choice is beyond me.
Where did I say that?

Enhanced/Fast sync eliminates tearing, and so does a frame rate cap at the refresh rate of your monitor.

By the way, tearing isn't less noticeable at higher framerates.

But to each their own. For the record, I never run uncapped; I don't care about input lag. It's always vsync or VRR for me.
So you've been arguing about nothing in the last 5 or so pages. Congratulations, well done! You've just wasted half a day of everyone's time. :banghead:
 
What's crazy and a little sad at the same time is that the 4060/4060 Ti will both likely come with 8 GB of VRAM, which is just pathetic... Even entry-level cards had 8 GB like 7 years ago, smh.

As much as I don't like the 7900 XT/7900 XTX, at least they come with adequate VRAM.

Yeah, even Nvidia offered 8 GB on the midrange 1070... 7 years ago! But I'm quite sure that Nvidia regrets making the 1000-series lineup as good as it did...

And Nvidia is deffo guilty of stagnating its midrange products in recent years, especially when it comes to VRAM. People who buy a 4060 Ti expecting to use it for AAA gaming for the next 3+ years will be sorely disappointed with their experience.

But ever since the Kepler days Nvidia has done stuff like this. The 960 2GB was more or less unusable for anything other than esports games even at release. The 780 Ti was also more or less a bad experience at release due to its 3 GB of VRAM, especially if you ran them in SLI at high resolutions. Remember Watch Dogs stuttering so badly due to running out of VRAM on the 780 Tis, while the 6 GB versions of the 780 provided a much better experience despite being 20% slower GPUs.
 
40-60 FPS feels pretty terrible on most monitors; G-Sync/FreeSync isn't going to fix that.
I'm usually fine with 40-60 FPS. It's around 30 and below where I start making faces.
 
I'm usually fine with 40-60 FPS. It's around 30 and below where I start making faces.

I think for me it comes from gaming at high framerates for over a decade now... I can do 60 on an OLED likely due to the vastly superior pixel response but anything lower hell na.
 
I think for me it comes from gaming at high framerates for over a decade now... I can do 60 on an OLED likely due to the vastly superior pixel response but anything lower hell no.

I think it is also highly subjective. I deffo prefer playing at 8K 60 Hz vs 4K 144 Hz (I have both). Image clarity and detail from the higher resolution make a much bigger difference to me than going above 60 FPS (58 actually, as I cap FPS there) does :)
 
I think it is also highly subjective. I deffo prefer playing at 8K 60 Hz vs 4K 144 Hz (I have both). Image clarity and detail from the higher resolution make a much bigger difference to me than going above 60 FPS (58 actually, as I cap FPS there) does :)

I'm the opposite; I'd rather game at 1440p 240 Hz than 8K 60.

That's the beauty of PC hardware, though: everyone, assuming they can afford it, can game how they'd prefer.
 
I'm the opposite; I'd rather game at 1440p 240 Hz than 8K 60.

That's the beauty of PC hardware, though: everyone, assuming they can afford it, can game how they'd prefer.

Yeah, as I said, it's highly subjective. I suppose I'm in the minority with my preference... but hot dang, does it look good in 8K... :P
 
Excellent delivery and perfection in proving!

Just check if any game consumes 20+ GB whilst staying playable (1% low strictly above 30 FPS and average strictly above 45 FPS) on a 4090.
...There's no such game.

2160p, still not all graphics options at maximum, using the 3 HD texture DLCs...

VRAM usage:

warplanes = 14 GB
wartanks = 16 GB
warships = 18 GB

Yes, there is.

At 1080p (16:9): recommended 12 GB VRAM
At 1440p (16:9): recommended 16 GB VRAM
At 2160p (16:9): recommended 20 GB VRAM

IMO.
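If you want the quoted "playable" test and the recommendations above as something you can plug your own numbers into, here's a tiny sketch. The helper name and layout are mine, and the VRAM figures are just the estimates from this post, not official guidance:

```python
# Rough sketch only: the "playable" rule from the quoted post plus the
# per-resolution VRAM suggestions above. All numbers are forum estimates.

RECOMMENDED_VRAM_GB = {  # 16:9 resolutions, as suggested above
    "1080p": 12,
    "1440p": 16,
    "2160p": 20,
}

def is_playable(avg_fps: float, one_percent_low_fps: float) -> bool:
    """Playable per the quoted definition: 1% lows strictly above 30 FPS
    and average strictly above 45 FPS."""
    return one_percent_low_fps > 30 and avg_fps > 45

# Example: a 2160p run averaging 62 FPS with 41 FPS 1% lows
print(is_playable(avg_fps=62, one_percent_low_fps=41))    # True
print(RECOMMENDED_VRAM_GB["2160p"], "GB recommended")      # 20 GB recommended
```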
 
@aus and ferret,

I don't buy on a whim, and I rarely buy at full price, so please, don't try to put me in a box just because I didn't listen to the haters. I have said COUNTLESS times that this game is an exception with the lifted 2-hour refund limit. I just didn't see any reason to refund it, and I still don't after well over 2 playthroughs now. In fact, I'm only liking it more on Survivor mode, as it ramps up the challenge with no see-through-walls "Hearing" feature and half as many available resources to scavenge. But hey, if you're only into games for the atmosphere and exploration, I can see why you'd only value it at 10 GBP. I happen to want MUCH more than that out of my games, so I get why you don't understand my rationale. :rolleyes:

Furthermore, if you guys were 65 like me, with ever fading memory, eyesight, reflexes, and dexterity, maybe you'd get that waiting a long time for price drops makes me wonder if I'll still be able to play with the challenges I put on myself when I game by the time it hits YOUR idea of an acceptable price. I am by no means a casual gamer, but I know playing on the harder modes will one day become quite the chore. I also only play select genres of games, so I don't buy nearly as many as some do, which makes near full price or even at full price shopping once in a while, very manageable for me.

It seems like you guys judge players the same way a lot of the skeptics do games, without really knowing them, and it's clear you guys don't get how business works. You can say all you want that boycotting or waiting for bargain bin prices will improve game quality, but the practical man knows it's the early months of a release at or near full price that keeps developers stable enough to even AFFORD to put out good games. They'd all go broke and stop making games if every player listened to your way of thinking.
Don't get me wrong, I don't judge you. If you feel like the game is worth its price to you, that's fine. :) It's only that it isn't worth it to me.

I was only explaining my own reasons for not yet buying it, or in fact, for not buying any remake at normal game prices. I never expected you to agree with me or follow me in my thoughts. I apologise if my comment seemed that way.

I know how much full price sales mean to devs to stay afloat - I just don't feel like they deserve that full price for a remake / platform port, nor am I a charity organisation dedicated to support game devs even when their actions / products / prices don't fully resonate with me. But this is again, personal. If you feel like they do deserve your support, and you're enjoying the game, that's all that matters, and I wish you lots of fun with it. :)

Our ways of thinking may be different, but in the end, our love of games is what counts. :toast:
 
No added input lag compared to the uncapped situation. What more do you want? Zero input lag doesn't exist, you know.
But uncapped is better, that's the point.

That is factually wrong
It literally isn't. 16 ms is the absolute lowest input lag figure achievable in that case; in an ideal situation, when there are no other frames in the queue, that's the lowest number you can expect. Who knows what the final input lag is in absolute terms, but the time it takes to finish a frame is obviously the biggest factor, and it also sets the lower bound.

If that frame ever gets displayed on your screen, which it doesn't. It's wasted effort.
It's wasted effort in terms of what? You are getting the benefit of lower input lag. What a strange statement.

By the way, tearing isn't less noticeable at higher framerates.

It most definitely is. The more frames are being sent to the display, the higher the chance that the tearing will be less jarring, because there isn't going to be as big a difference between the frame that's currently being displayed and the one that just arrived.
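To illustrate the size of the jump I'm talking about, here's the rough arithmetic; the pan speed is an assumed number, purely for illustration:

```python
# The discontinuity at a tear line for something moving across the screen is
# roughly how far it travels between two consecutive frames: speed / framerate.

SPEED_PX_PER_S = 2000   # assumed pan speed (e.g. a fast mouse flick)

for fps in (60, 120, 240):
    offset_px = SPEED_PX_PER_S / fps
    print(f"{fps:>3} FPS -> ~{offset_px:4.0f} px offset at the tear line")
# 60 FPS -> ~33 px, 120 FPS -> ~17 px, 240 FPS -> ~8 px: the higher the
# framerate, the smaller the jump between the two halves of a torn frame.
```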

Enhanced/Fast sync eliminates tearing, and so does a frame rate cap at the refresh rate of your monitor.

I just explained that capping the frame rate to that of your monitor does not eliminate tearing; you only get rid of tearing if the frames are synchronized with the monitor (vsync or VRR on). Just having the same number of frames being rendered is not enough; that's not how this works. And if you enable Enhanced/Fast sync, then you are truly getting nothing in return, because now you've added that framebuffer lag back and the game is capped. I suppose it would be better than plain vsync, but you'll get a ton of stutter; Enhanced/Fast sync is intended to work well when the framerate is many times that of your monitor.

So you've been arguing about nothing in the last 5 or so pages

What a bizarre thing to say. Just because I don't use something doesn't mean I can't argue about what is true and what isn't.
 
Wasn't Metro Exodus performance mostly a problem with your OC?
Yes my only option now is to reduce graphics settings. My overclock, like you said, obviously isn't stable with Metro:Exodus but this is the only game giving me this issue.
 
Yes my only option now is to reduce graphics settings. My overclock, like you said, obviously isn't stable with Metro:Exodus but this is the only game giving me this issue.
There's some irony here:
STALKER's X-ray engine has been a better stress test for my OCs than many utilities, yet according to metrics, it's not even really loading my hardware.
GSC Game World folks, when the studio first closed, migrated to 4A Games and helped build the engine the Metro games run on.
 
@LabRat 891
That's an old engine too. I can't get the STALKER benchmark to run on my windows10 box. I do remember the STALKER benchmark fully loaded all cores of my CPU though.
 
@LabRat 891
That's an old engine too. I can't get the STALKER benchmark to run on my windows10 box. I do remember the STALKER benchmark fully loaded all cores of my CPU though.
Odd. I was running the Call of Pripyat bench just a few months ago, comparing a Windows 7 R5 3600 + RX 580 8GB against my (at the time) Windows 10 R5 5600 + 6500 XT 4GB.

The benchmark would've been a great 'if it runs this, you're stable' kind of tool, if the X-Ray engine didn't act like The Zone itself. Even the enormously improved 64-bit recompile the community did has a LOT of "moments".
From what I recall reading in the dev(b)logs, the X-Ray engine is basically some kind of Slavic black magic, in that it work(ed)(s) at all.
 
But uncapped is better, that's the point.
Still not proved.

It literally isn't. 16 ms is the absolute lowest input lag figure achievable in that case; in an ideal situation, when there are no other frames in the queue, that's the lowest number you can expect. Who knows what the final input lag is in absolute terms, but the time it takes to finish a frame is obviously the biggest factor, and it also sets the lower bound.


It's wasted effort in terms of what? You are getting the benefit of lower input lag. What a strange statement.
If "16 ms is the absolute minimum", then how will you "get the benefit of lower input lag"? You're contradicting yourself.

It most definitely is. The more frames are being sent to the display, the higher the chance that the tearing will be less jarring, because there isn't going to be as big a difference between the frame that's currently being displayed and the one that just arrived.
That's not how it works. Screen tearing occurs any time your GPU's output image and the monitor's refresh rate are not synchronised. Higher frame rates do not eliminate it.
Read about it, seriously:
I just explained that capping the frame rate to that of your monitor does not eliminate tearing; you only get rid of tearing if the frames are synchronized with the monitor (vsync or VRR on).
And I just explained that I also use Enhanced/Fast sync.

And if you enable Enhanced/Fast sync, then you are truly getting nothing in return, because now you've added that framebuffer lag back and the game is capped. I suppose it would be better than plain vsync, but you'll get a ton of stutter; Enhanced/Fast sync is intended to work well when the framerate is many times that of your monitor.
Again, that's not how it works. Enhanced/Fast sync is essentially the same as the uncapped frame rate situation, except that the frames that your monitor can't display due to its refresh rate are dropped. There is no more framebuffer lag than there would be in an uncapped situation. You have a lot more lag in the traditional Vsync situation where your frames are queued to be displayed whenever your monitor is ready.
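To put rough numbers on that, here's a back-of-the-envelope sketch I'm making up on the spot; it ignores render-ahead queues, input sampling and scanout, and only asks how old the frame on screen is under each approach:

```python
# Back-of-the-envelope only: how old is the frame being shown at each refresh?
# Not how any driver implements it, just the two ideas compared side by side.

REFRESH_MS = 1000 / 60    # 60 Hz monitor
RENDER_MS = 1000 / 144    # GPU churning out frames at ~144 FPS, uncapped
QUEUE_DEPTH = 2           # classic vsync queues a couple of frames ahead

# Classic vsync: newer frames wait in line behind the queued ones, so the
# frame being shown was finished roughly QUEUE_DEPTH refreshes ago.
vsync_age = QUEUE_DEPTH * REFRESH_MS

# Fast/Enhanced sync: only the newest completed frame is shown and the rest
# are dropped, so on average the shown frame is about half a render interval old.
fast_sync_age = RENDER_MS / 2

print(f"classic vsync      : frame on screen is ~{vsync_age:.1f} ms old")
print(f"fast/enhanced sync : frame on screen is ~{fast_sync_age:.1f} ms old")
# Roughly 33 ms vs 3.5 ms with these made-up numbers.
```

Which is the whole point: the extra buffering lag of classic vsync is gone, just like in the uncapped case.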

What a bizarre thing to say. Just because I don't use something doesn't mean I can't argue about what is true and what isn't.
Sure, you can always argue about stuff that you have no first hand experience with. It'll just make you look like an idiot. No big deal. ;)
 
Why is this still going? It's obvious he is wrong about so many things and is not going to admit it even if you hold his feet to the fire. And, apparently, Nvidia themselves are clueless on the subject, since there is an option in the control panel that lets you choose the number of frames the CPU pre-renders. What a clueless bunch these Nvidia engineers are; they should hire the local troll.
 
Easy, you'll just have to deal with it for now....
 
A picture is worth a thousand words, I guess; Optimum Tech tested exactly that. If you don't run any of Nvidia's latency-boost tech (which basically caps your framerate, in a nutshell), these are the results: lower FPS, lower latency. "How is that possible?" :roll:

[screenshot: Optimum Tech latency test results]
 
A picture is worth a thousand words, I guess; Optimum Tech tested exactly that. If you don't run any of Nvidia's latency-boost tech (which basically caps your framerate, in a nutshell), these are the results: lower FPS, lower latency. "How is that possible?" :roll:

[screenshot: Optimum Tech latency test results]
I think I've just written the most concise, most logical post that I could explaining Enhanced/Fast sync and screen tearing. And now you've got this. There's really nothing more we can do now.
 
Yes my only option now is to reduce graphics settings. My overclock, like you said, obviously isn't stable with Metro:Exodus but this is the only game giving me this issue.
Then you probably should have said it was your OC scenario instead of a GPU-limited scenario. As I said before, I ran Metro Exodus on my GTX 1080 just fine, which I don't need to tell you is less capable than what you have.
 
@Frag_Maniac
Did you try the DLC Sam's Story without reducing graphical detail/fidelity? That's the Metro:Exodus map that's dragging my FPS down to 40.
 
That's categorically not true; the higher the framerate, the lower the average latency, so by capping your framerate you are most certainly not lowering your latency. That makes no sense.


And you are also increasing the average frametimes as a result; I don't know what use this could have for anyone. Also, I don't know where you guys got this idea that a maxed-out CPU/GPU = more even frametimes or less latency; none of that is true.



If you can't be bothered to actually explain yourself and you just post a link, I am going to assume that, as with many other things, you didn't have anything intelligent to say.
When multiple people tell you that you misunderstood, explain how and why, and then post the evidence to back it up, you say they're wrong because they posted a link to the definitive source of information on that topic?
No man, that's all you. You do not understand this topic or how a conversation works. Make a claim, and back it up with proof - or don't make that claim.


GPU bottlenecks result in uneven frametimes and render latency, as the CPU renders ahead while it's waiting. Frametimes and FPS are the same thing expressed in a different way: 1000 ms in a second, divided by the FPS... that's a frametime. This is not the same as render time.
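Spelled out as plain arithmetic, since the conversion keeps coming up in this thread (nothing measured here, just the formula above):

```python
# Frametime and FPS are two views of the same number: 1000 ms divided by one
# gives the other.

def frametime_ms(fps: float) -> float:
    return 1000.0 / fps

def fps_from_frametime(ms: float) -> float:
    return 1000.0 / ms

for fps in (30, 60, 120, 144, 240):
    print(f"{fps:>3} FPS -> {frametime_ms(fps):5.2f} ms per frame")
# 60 FPS is ~16.67 ms, 120 FPS is ~8.33 ms, 240 FPS is ~4.17 ms, and so on.
```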

CPU bottlenecks result in microstutters at worst, or a lower ceiling cap to max FPS. That said, you're going to have lower input latency in this situation.

Variable refresh rates allow the system to present new information sooner. If you had a 240 Hz display with a VRR range of 1 Hz to 240 Hz, even a 1 FPS signal is still updated at the 240 Hz speed, meaning that a new frame can be displayed at any of those 240 incremental steps, as soon as it's ready. This allows a much faster recovery from any stutters, and offers the reduced input latency of the maximum refresh rate even if the frame rate is lower.


The only reason people use uncapped framerates is because some older game engines like CS gave a competitive advantage to high frame rates as an engine bug, combined with reducing the input latency of a maxed out GPU. VRR provides that reduced latency with Vsync on, and an FPS cap gives you all the benefits without needing anything.

You'd know all this - If you weren't too lazy and arrogant to click a damned link.

It takes 10 seconds to fire up Nvidia's monitoring stats and see this in real time; it's not rocket science.

These are render latency values, reported by Nvidia at the GPU stage of the pipeline only, before the monitor is involved.
Every ms of render latency is a delay in you seeing the image, which means you are responding to an older image. It's not input latency from your input device, but it IS an added delay to you reacting to what is actually happening.



60Hz, Vsync on. Oooh yay i love 31ms of input lag!
[screenshot: Nvidia overlay, 60 Hz with Vsync on, ~31 ms render latency]



Exactly the same but with a 60FPS cap that prevents the CPU from rendering ahead of the GPU. (2 frames in DX12, so 60 +2 rendered ahead by the CPU, just like if the GPU was maxed out)
5.6ms.

Let's not use an FPS cap and enjoy 5.6x more render latency
(There is some beauty in that it's 5.6ms, and that vsync was 5.6 times higher)
[screenshot: Nvidia overlay, 60 FPS cap, ~5.6 ms render latency]



"BUT I WANT VSYNC OFF AND MAXING THE GPU IS BETTER BECAUSE THATS HOW I BENCHMARK"
[screenshot: Nvidia overlay, Vsync off, GPU uncapped]


Sure, we go from 59 FPS to 151 on those 99% lows, but... oh wait, the input latency doubled.

Letting my GPU max out its clocks to get that maximum performance managed to... still be worse.
[screenshot: Nvidia overlay, GPU clocks maxed out, uncapped]




FreeSync/Vsync is a whole 'nother bag of fun on top of this, because FreeSync behaves differently between AMD and Nvidia, and G-Sync is Nvidia-exclusive and different again.

The main key is that when they work properly (Vsync off and a maxed-out GPU is not properly), they can update at divisions of the full refresh rate.
Using Samsung's 240 Hz displays as an example, they have a minimum of 80 Hz because that's 1/3 of the max refresh rate.

At 1/2 or 1/3 of a supported refresh rate, the highest rate is used, and the request for a new frame is sent at the start of the final repetition, not at the start of the first. So 80 Hz is 12.5 ms, 160 Hz is 6.25 ms and 240 Hz is 4.166 ms.

80 FPS at 240 Hz VRR would give you better input latency and response times because it asks for the new frame at the start of the final duplicate refresh: 4.166/4.166/4.166 ms.
In a GPU-limited situation you lose that benefit; with frames rendered ahead, you end up right back at the higher latency values anyway.

Running 235 Hz and 235 FPS would give you 4.255 ms input latency. If your GPU were maxed out and the CPU had to render ahead to cover the shortfall, you'd end up at 8.51 ms or 12.765 ms, despite being at the higher framerate.
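Those figures are just the refresh period (1000 / Hz) and whole multiples of it. Here's a quick sketch if you want to reproduce them; treating render-ahead as adding one or two whole periods is a simplification on my part, not a measurement:

```python
# Refresh period plus assumed whole extra periods for frames rendered ahead.
# A rough reproduction of the numbers above, not a latency measurement.

def refresh_period_ms(hz: float) -> float:
    return 1000.0 / hz

for hz in (80, 160, 240, 235):
    base = refresh_period_ms(hz)
    print(f"{hz:>3} Hz: {base:6.3f} ms, "
          f"one frame ahead = {2 * base:6.3f} ms, "
          f"two frames ahead = {3 * base:6.3f} ms")
# 80 Hz -> 12.5 ms, 160 Hz -> 6.25 ms, 240 Hz -> ~4.167 ms,
# 235 Hz -> ~4.255 ms, doubling to ~8.51 ms and tripling to ~12.766 ms.
```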

Unlimited framerates are not a positive. Maxing out your GPU is not a positive. It needs spare performance ready to render the next frame when it's needed, because if it's even 0.000001 milliseconds late, you're getting doubled or tripled input latency, and that's the most common microstutter there is, as input latency values go crazy.

Doing stupid things like running the GPU at 100% all the time forces you into that higher latency state all the time and people mistake that consistent latency for being the best they can get.
 
Mah man :D :D :D

Couldn't have said it better myself.

237 FPS locked...

[screenshot: Nvidia overlay, 237 FPS locked]
 
Oh i forgot:

Vsync without VRR adds a latency you can't measure.

Using 60 Hz here as an example: 1000 ms (per second) divided by 60 FPS gives you 16.6 ms.
If your GPU can keep its render time under 16.6 ms, you're stutter-free. See my 60 Hz with a 60 FPS cap example above.


However, if you're at 2/3 of that value, let's say 40 FPS, things go bad at the monitor level.

40 FPS is 25 ms exactly, but the monitor can only display every 16.6 ms. Some frames line up better than others, but what happens is that every second frame has higher display latency than the previous one.
Putting a star where a frame arriving every 25 ms would line up shows the issue:

16.66 ms -> 33.32 ms* -> 49.98 ms -> 66.64 ms* -> 83.3 ms*

I'm too tired to do the damned math on this one for you, but you can see the pattern: some frames are delayed one, two, or zero refreshes, resulting in what visually appears as skipping and jumping, with erratic latency. This is the situation people run Vsync off to fix, as tearing would be far better than this level of mess.
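Here's that math done anyway, as a rough sketch. It assumes frames finish exactly every 25 ms and each one is shown at the first 16.7 ms refresh tick after it's done, with everything else in the pipeline ignored:

```python
import math

REFRESH_MS = 1000 / 60   # 60 Hz: a refresh tick every ~16.67 ms
FRAME_MS = 1000 / 40     # 40 FPS: a new frame finishes every 25 ms

for n in range(1, 7):
    finished = n * FRAME_MS
    # first refresh tick at or after the frame is finished
    shown = math.ceil(finished / REFRESH_MS) * REFRESH_MS
    print(f"frame {n}: finished {finished:6.1f} ms, "
          f"shown {shown:6.1f} ms, waited {shown - finished:4.1f} ms")
# The wait alternates (~8.3 ms, ~0 ms, ~8.3 ms, ...), and the gap between
# displayed frames alternates between ~16.7 ms and ~33.3 ms: the judder above.
```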

Combine that with the render-ahead issues of a maxed-out GPU, and you can have random lag spikes into the 100 ms+ range at any time, from the moment the CPU prepares the initial frame to it being visible on a user's display, with wild variance frame to frame, and it can't all be measured in software.


The monitor tech matters. The settings matter.
Not all of it can be seen or measured in software, and sometimes something as simple as lowering the refresh rate or an FPS cap is the answer: why run 144 Hz with stutters or tearing when 120 Hz would have neither?
 