
Is 24 GB of video RAM worth it for gaming nowadays?

  • Yes

    Votes: 66 41.5%
  • No

    Votes: 93 58.5%

  • Total voters
    159
Your logic is just bizarre, to say the least. Everything generates heat, like a faster GPU for instance. You must think that's just marketing as well, right?
Logic is just fine, thanks. The 24 GB of memory on the 3090 Ti takes 60-80 W at stock. On the 3090 I believe it's approximately double that, since it uses twice as many chips: twenty-four 1 GB chips versus the twelve 2 GB chips on the 3090 Ti. Now this is useful in that instance because the card can solidly push 4K, where large framebuffers are needed, or in 3D modelling work for example.
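A rough back-of-the-envelope sketch of that scaling (Python; the per-chip figure is inferred from the 60-80 W range above, not a datasheet value):

```python
# Back-of-the-envelope sketch: memory power scales with chip count.
# The per-chip wattage is an assumption derived from the ~60-80 W figure
# quoted above for twelve 2 GB GDDR6X packages, not a datasheet number.

def memory_power_w(num_chips: int, watts_per_chip: float) -> float:
    """Total memory power as chip count times per-chip draw."""
    return num_chips * watts_per_chip

per_chip_w = 70 / 12  # ~5.8 W per package, from the quoted range
print(memory_power_w(12, per_chip_w))  # ~70 W  -> 3090 Ti layout (12 x 2 GB)
print(memory_power_w(24, per_chip_w))  # ~140 W -> 3090 layout (24 x 1 GB), roughly double
```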

Ampere and Turing cards, for instance, were power limited in their clock algorithms, not thermally limited. That means they had headroom to boost but wouldn't do so due to the power budget. Why add more power-hungry memory to cards if they don't need it?

There's zero practical advantage to having massive amounts of memory on a GPU for gaming. It should be relative to the GPU core performance and tier - nobody cares that the Arc has 16 GB because it's beaten by a 3060 Ti with 8 GB, and neither card is suitable for 4K, where 16 GB is actually useful.
If it has N memory controllers then obviously that card was designed to have N memory chips attached. It's as simple as that. You know better than the people who design these things? You think they do that because of "marketing"?
Chips exist in different capacities, bud. You can use 1 GB or 2 GB GDDR6 chips on a card, for example, and still saturate the bus.
 
Logic is just fine, thanks. The 24 GB of memory on the 3090 Ti takes 60-80 W at stock. Now this is useful in that instance because the card can solidly push 4K, where large framebuffers are needed, or in 3D modelling work for example.
So, it's useful then? You're defeating your own logic.

You can use 1 GB or 2 GB GDDR6 chips on a card, for example, and still saturate the bus.
That's not how it works. The memory density is irrelevant; you can only saturate the bus if you use all of the memory controllers in the GPU.
 
So, it's useful then? You're defeating your own logic.
Wow, you're certainly trying hard to be pedantic. It's useful in an xx90-tier card, yes, because it has the power to drive the resolutions where large framebuffers are needed, as stated.
That's not how it works. The memory density is irrelevant; you can only saturate the bus if you use all of the memory controllers in the GPU.
Yes... and you can do so with chips of different density, bud. A hypothetical card's bus would be saturated with four 1 GB chips or four 2 GB chips; the difference is cost and whether more memory would give a performance advantage, which isn't always "yes".

You could easily make the 3060 Ti a 16 GB card, for example: just swap the 1 GB chips for 2 GB ones. It's still pointless, as the GPU isn't powerful enough to push the resolutions where 16 GB is needed. All doing that would achieve is driving up the cost.
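A minimal sketch of that arithmetic (Python; assumes the usual 32-bit interface per GDDR6 package):

```python
# Sketch: chip density sets capacity, while the bus stays fully populated.
# Each GDDR6 package has a 32-bit interface, so chip count = bus width / 32.

def vram_capacity_gb(bus_width_bits: int, chip_density_gb: int) -> int:
    """Total VRAM when every memory controller has one chip attached."""
    chips = bus_width_bits // 32
    return chips * chip_density_gb

# 3060 Ti-style 256-bit bus: same eight chips either way, only density changes.
print(vram_capacity_gb(256, 1))  # 8 GB with 1 GB chips
print(vram_capacity_gb(256, 2))  # 16 GB with 2 GB chips
```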
 
nobody cares that the Arc has 16 GB because it's beaten by a 3060 Ti with 8 GB, and neither card is suitable for 4K, where 16 GB is actually useful.
Afaik that's an example where the reason isn't the hardware but the software (bad drivers).
 
Playing Warzone with texture streaming shows dedicated VRAM usage of around 15 GB at 1080p and 21-22 GB at 1440p.
4K maxes out the 24 GB and has very small loading stutters / low-res textures when moving fast (in a vehicle, for example).

It's obviously not needed, but it makes a visible difference when every texture is at its highest resolution everywhere.
 
Nvidia is notorious for skimping on memory on all but the absolute highest-end cards, though.

People are always in denial about this sort of stuff; memory requirements increase all the time.

It's part of their monetization strategy, IMO. A lower number increases the rate of FOMO, and the eventual hitching when VRAM truly does get tight drives people to upgrade :(

Your logic is just bizarre, to say the least. Everything generates heat, like a faster GPU for instance. You must think that's just marketing as well, right?

I believe dgian has somewhat of a point - mostly about the G6X power budget taking up a very significant share of the allowed power consumption on Ampere cards. On my OG 3090, that can be up to 60% of all power consumption(!), and since GPU BIOSes are now signed and impossible to edit, that too contributes towards planned obsolescence. Not to mention that these memory chips have grown quite expensive in their own right.

I still heavily prefer fully enabled GPUs whenever possible, but those are admittedly getting out of my price range, and fast.
 
This is for the umpteenth time, but: just because a game allocates a certain amount of VRAM (usually because of caching) doesn't mean that amount of VRAM is necessary for the game to perform well.

Stop posting sense, will you, and let's get back to the fanboy, factually unsupported discussion of more RAM = better performance.
 
This is for the umpteenth time, but: just because a game allocates a certain amount of VRAM (usually because of caching) doesn't mean that amount of VRAM is necessary for the game to perform well.
If you were to look at FPS numbers you'd realise that it isn't; if it were, the performance of the lower-VRAM models with, like, 8 GiB would be falling off a cliff.

If you were to look at those Forspoken benches you'd realise that none of the GPUs exhibit this kind of behaviour.

Stop spreading FUD.

Sure, if the GPU has the power to actually use it, which isn't always the case.

It's a marketing number as much as anything is.

Arc has 16 GB of memory; does it perform better than a 10 GB 3080? I don't think so. How about an 8 GB 2080? Still no.

The way that GPU prices are heading, 8 GB SHOULD be the minimum. Also, it seems like Nvidia is doing decently with the 8 GB of VRAM on the 3070/3070 Ti. IMO, the reasons for needing more VRAM are if you're doing something outside of gaming or if you're running a truckload of mods on games (hence why I got a 6900 XT and am thinking about getting the 4090 (maybe the Ti?)).

The VRAM argument is the equivalent of the CPU core argument (...console players mostly argue this...). You can run games on a quad-core CPU if the CPU is powerful enough to handle it (Intel STILL makes quad-core CPUs), and not everyone needs 8+ cores to run games, or video cards that have 12 GB of VRAM... but it's sure nice lol
 
Wow, you're certainly trying hard to be pedantic. It's useful in an xx90-tier card, yes, because it has the power to drive the resolutions where large framebuffers are needed, as stated.
I'm not; your argument is just pointless.

A 4070 Ti is about as fast as a 3090 yet has half the VRAM. Since it has the power to push the same resolutions, and large framebuffers are needed there as you said yourself, that means the card clearly has insufficient memory.

So is it better to have the extra memory anyway or not? The answer is obvious; you're just in denial.
 
I'm not; your argument is just pointless.

A 4070 Ti is about as fast as a 3090 yet has half the VRAM. Since it has the power to push the same resolutions, and large framebuffers are needed there as you said yourself, that means the card clearly has insufficient memory.

So is it better to have the extra memory anyway or not? The answer is obvious; you're just in denial.
It also has 800% (or 8 times) more L2 cache than the RTX 3090 has. This depends on the architecture, or how it utilizes its own VRAM.
 
It also has 800% (or 8 times) more L2 cache than the RTX 3090 has. This depends on the architecture, or how it utilizes its own VRAM.
That is of no use when you run out of memory. The 4070 Ti needs that much cache because it has roughly half the bandwidth.
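For rough context, peak memory bandwidth is just bus width times per-pin data rate; a quick sketch with the published figures for both cards:

```python
# Quick sketch: peak memory bandwidth = (bus width in bits / 8) * data rate per pin.
# Bus widths and data rates below are the published specs for the two cards.

def bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak memory bandwidth in GB/s."""
    return bus_width_bits / 8 * data_rate_gbps

print(bandwidth_gbs(384, 19.5))  # ~936 GB/s  RTX 3090 (384-bit GDDR6X @ 19.5 Gbps)
print(bandwidth_gbs(192, 21.0))  # ~504 GB/s  RTX 4070 Ti (192-bit GDDR6X @ 21 Gbps), roughly half
```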
 
That is of no use when you run out of memory. The 4070 Ti needs that much cache because it has roughly half the bandwidth.
It still depends on the architecture, and on how it utilizes its own VRAM. It doesn't matter that it runs out of VRAM; it matters how and when it runs out. Then what matters is what it does after that,
because
1. it either goes to system RAM, or
2. it goes and gets it from some type of hard drive.

1. is the most likely, because it's the fastest compared to any other connection.
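A small illustrative sketch of why that fallback order matters; the figures are rough, typical peak numbers (assumptions, not measurements from any specific system):

```python
# Illustrative sketch of the spill-over hierarchy: each fallback tier is
# roughly an order of magnitude slower than the one above it.
# Numbers are rough, typical peak figures, not measurements.

tiers_gbs = {
    "On-board GDDR6X VRAM (3090-class)": 900,
    "System RAM over PCIe 4.0 x16":       32,
    "NVMe SSD (PCIe 4.0 x4)":              7,
}

for tier, gbs in tiers_gbs.items():
    print(f"{tier:35s} ~{gbs} GB/s")
```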
 
It still depends on the architecture, and on how it utilizes its own VRAM. It doesn't matter that it runs out of VRAM; it matters how and when it runs out. Then what matters is what it does after that,
because
1. it either goes to system RAM, or
2. it goes and gets it from some type of hard drive.

1. is the most likely, because it's the fastest compared to any other connection.
Or, in a special case, the cache memory in the 5800X3D and the future 3D V-Cache products from AMD.
 
That is true of course (and tbf I didn't even look at RTX numbers, so yeah, mea culpa in that case), but it is usually quite obviously noticeable when you're actually running out of VRAM.

Also, on the flip side, I'd argue that a game that does not allocate as much VRAM as a GPU has is inefficient - if there are resources you can make use of to cache more, you should.
But I digress.
That is your theory on paper, but the reality is that more VRAM on a GPU is much better... Have you tried using the HD textures in Far Cry 6 with RT on, at 2160p, using an RTX 3080 Ti 12 GB or RTX 4070 Ti 12 GB?
 
It still depends on the architecture, and on how it utilizes its own VRAM. It doesn't matter that it runs out of VRAM; it matters how and when it runs out. Then what matters is what it does after that,
because
1. it either goes to system RAM, or
2. it goes and gets it from some type of hard drive.
It does not depend on the architecture; it depends on the application. The amount of cache is not going to change anything. You either have enough memory or you don't, there is no in-between, and the performance hit that comes with running out of memory will always be the same.
 
Forspoken is apparently underperforming on 3060 Ti and 3070 against the 3060 12 GB and exhibiting texture streaming problems if raytracing is activated and higher texture quality settings are enabled even at 1080p, a few media outlets whose reviews I've read (notably Computerbase) and people I spoke to on Discord apparently ran into the problem. But given how heavy the game is, using raytracing on this class of hardware is probably a very bad idea. Luminous Studio has officially recommended a 12 GB GPU for the game, too.

I've tried it with high settings at 4K (with a 3060 Ti) and it seemed fine when just running around. For combat, "standard" settings seemed fine. But yeah, RT in that game on that card is definitely a no-go.
Stop posting sense, will you, and let's get back to the fanboy, factually unsupported discussion of more RAM = better performance.

No, but you see, an RTX 2060 12 GB is better than a 2080 Ti because in three years the 2080 Ti will run out of RAM.
That is your theory on paper, but the reality is that more VRAM on a GPU is much better... Have you tried using the HD textures in Far Cry 6 with RT on, at 2160p, using an RTX 3080 Ti 12 GB or RTX 4070 Ti 12 GB?

I won't get into it, but yeah, more memory is better; it's just not like more RAM can make up for everything. An RTX 2060 12 GB will at no point be a faster card than a 3070 Ti.
 
The way that GPU prices are heading, 8 GB SHOULD be the minimum. Also, it seems like Nvidia is doing decently with the 8 GB of VRAM on the 3070/3070 Ti. IMO, the reasons for needing more VRAM are if you're doing something outside of gaming or if you're running a truckload of mods on games (hence why I got a 6900 XT and am thinking about getting the 4090 (maybe the Ti?)).

The VRAM argument is the equivalent of the CPU core argument (...console players mostly argue this...). You can run games on a quad-core CPU if the CPU is powerful enough to handle it (Intel STILL makes quad-core CPUs), and not everyone needs 8+ cores to run games, or video cards that have 12 GB of VRAM... but it's sure nice lol

+1

The number of people that think VRAM allocation = VRAM utilization is crazy. Games and monitoring software never standardize on what they consider VRAM "usage", yet people assume that's the case.

Games often allocate the majority of VRAM. DCS is hard on VRAM. On some MW2 maps the Radeon overlay shows 19-20 GB "utilization" on my 7900 XT. The 7900 XT is a 20 GB card. Make of that what you will ("omg we need 48GB GPUs for MW2!1!1!").

It's not my call to make, but this thread is pretty pointless. Buy what's right for you. If the framebuffer size is right for your usage (when VRAM size is the issue, it's anything but subtle), then what are we bickering about?

And when performance is an issue, is VRAM beyond a reasonable doubt solely to blame or are there other factors at play? Not every card is a GT 1030 DDR4.
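For NVIDIA cards there's a quick way to see the device-level counter most overlays report; a minimal sketch, assuming the pynvml package is installed (and note it reports allocation across all processes, not what a single game actually needs):

```python
# Minimal sketch: read the device-level VRAM counter that most overlays show.
# Assumes an NVIDIA GPU and the pynvml package. The "used" figure is memory
# allocated across all processes; it says nothing about how much of it a
# game actually touches each frame.
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU
info = pynvml.nvmlDeviceGetMemoryInfo(handle)  # total/used/free in bytes
print(f"total:            {info.total / 2**30:.1f} GiB")
print(f"used (allocated): {info.used / 2**30:.1f} GiB")
print(f"free:             {info.free / 2**30:.1f} GiB")
pynvml.nvmlShutdown()
```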
 
I have the 3080 10 GB and game at 4K targeting 120 fps. From monitoring it, I've never once maxed out the frame buffer, and I don't get any hitching or stutters (that are texture-related; this is easy to check). No FOMO either; at this rate I'll be upgrading to RDNA4 or Ada's successor in late 2024. Is more better in general? Sure. Is it needed? Often, far from it.
 
Nvidia and AMD would like you to believe 24 GB is necessary :)
 
The number of people that think VRAM allocation = VRAM utilization is crazy. Games and monitoring software never standardize on what they consider VRAM "usage", yet people assume that's the case.
If a game allocates a buffer in memory, it's because it will probably use it at some point. The less available memory there is, the higher the chance of buffers being swapped in and out of memory, which can have minor or major performance implications.
 
Nvidia and AMD would like you to believe 24 GB is necessary :)
Just like the old saying, "nobody is ever going to need more than 640K of system RAM".

For longevity, yes, if you keep a card 4+ years; if you're upgrading every two years or less, no.
 
Just like the old saying, "nobody is ever going to need more than 640K of system RAM".

For longevity, yes, if you keep a card 4+ years; if you're upgrading every two years or less, no.

Yes, of course someday it will be necessary. Someday 256 GB will be necessary.
 

This is a new and very demanding game running on 2016 tech with 3 GB of VRAM at a horrible 192 GB/s (at 720p with FSR in use). Keep in mind AMD's last-gen high-end cards are at 512 GB/s, with the super-fast L3 cache to help.

VRAM used vs paged is a big thing, so the "used" amount isn't super important. A game using 11 GB on a 10 GB card will do just fine (assuming you're not on really old tech where the VRAM speed is bad); it will run very similarly. My company's iPhone has 4 GB of RAM and with 0 apps open it says I'm low on memory and can crash, but I've also had 2-4 small apps open with the warning where it didn't crash. You can't see or track paged vs used much when cards have way more than they need, and the difference isn't just 1 GB vs 2 GB more.

Also keep in mind that's just VRAM, not total RAM. The most popular gaming system of all time is the Nintendo Switch, a 2017 system with a combined total of 4 GB for VRAM and system RAM. Yes, 4 GB; cell phones have been higher for years. Keep in mind that in 2022 the top GPU on Steam had 6 GB, and many cards on that list had less than that.

Yes, of course someday it will be necessary. Someday 256 GB will be necessary.
That will NEVER be necessary for VRAM... 12K gaming now wouldn't hit that, and that's something like 120x the pixels of the Switch; for the same PPI as a 40-inch TV you'd need something like a 240+ inch TV. Are you drunk, making that comment? The human eye could never discern that difference; heck, on my 75-inch, 2K isn't that big of a difference. Get help please, most states have help for people who can't do math.

Do you know (you don't, hence my comment, hence yours) that the top GPU in 2022 was the 6 GB 1060 from 2016, with many, many cards on the Steam list having even less VRAM? The current consoles have 16 GB, and that's system memory: not just GPU memory but the entire machine. You know the most popular device in the world, the Switch, has 4 GB of system and video RAM total... 4. So 256 GB for just VRAM... wow, maybe in 2080.
 
I'm running a 4 GB RX 580 card; it runs games at 1080p at 60+ fps just fine on medium-high settings, though it does stutter just a bit in more VRAM-intensive newer titles. And medium-high is mostly barely any different from ultra settings in my eyes; your mileage may vary with yours.

I do wish I had a 24 GB VRAM NVIDIA card though... for machine learning purposes. The high VRAM count is absolutely necessary in that case.
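A rough sketch of why ML is so VRAM-hungry: just holding a model's weights scales with parameter count times bytes per parameter (illustrative arithmetic only; real training needs several times more for gradients, optimizer state and activations):

```python
# Rough sketch: VRAM needed just to hold a model's weights.
# Illustrative only; training typically needs several times this for
# gradients, optimizer state and activations.

def weights_gb(params_billions: float, bytes_per_param: int = 2) -> float:
    """Memory for weights alone, in GB (bytes_per_param=2 -> fp16/bf16)."""
    return params_billions * 1e9 * bytes_per_param / 1e9

print(weights_gb(7))   # ~14 GB for a 7B-parameter model in fp16
print(weights_gb(13))  # ~26 GB - already past a 24 GB card for weights alone
```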
 