
NVIDIA Explains GeForce RTX 40 Series VRAM Functionality

I noticed they didn't include Hogwarts Legacy in the vram talk,


And Resident Evil 4 and many others

:)
 
Hogwarts Legacy is missing from their chart because they used up all their magic putting DLSS 3 FPS in the same chart as real FPS. Cache can replace memory bandwidth, as RDNA2 has proved, but it can't replace capacity.

As far as I remember, Pascal was the generation that enabled decent 4K gaming, and now, three generations later, they want to sell 1080p again?
 
Heh, remember that the 4090 got a 60%+ performance increase for $100 more.

The 4060 Ti got a 15% increase for the same (or higher) price.

Using larger caches to offset a smaller bus is cool and all, but what's the point if your GPUs are much weaker than they could be at the same price point?
 
Don't forget his wall of leather jackets and marble-countered kitchen.
Why not a closet full of leather jackets? I joked about it in another news post; it reminds me of Mr. Rogers and his collection of sweaters. The number of silicone spatulas in his kitchen I found rather amusing.
he wants that yacht club membership next boys, so you need less vram for max profits! :roll:
Since he is worth billions, he can buy many yacht clubs! I don't want to know the cost of the membership fee for that. :laugh:
 
Imagine being an Nvidia fanboy and having Nvidia marketing tell you why you should be OK with less video memory, lol


Pft... cognitive dissonance has never been proven to have negative effects on the minds of fanboys and bigots. In fact, the ability to dismiss other beliefs, thoughts, and even common sense is seen as a strength.

"You could use facts to prove anything that's even remotely true!”
Homer
 
Hardware Unboxed and others have been heavily pushing nGreedia hardware because it has the fake frame generator.
I feel like they have been bashing NVIDIA recently with not one, not two, but at least three videos criticizing the 8 GB RTX 3070. I think they have been advocating for more VRAM instead of being satisfied with the status quo and frame smearing.
 
To be fair, those are very shitty ports: unfinished games that run like shit.
Nvidia wishes they could say their cards have enough VRAM, and they just use this sorry excuse whenever that's not the case.
Thank god the reviewers know better.
 
I agree with this.

My 5800X3D caps out around a 7900 XT... so if my 6800 XT ever sells, my plan is to go the 7900 XT route. If the 4070 Ti had 16 GB of VRAM, I would have gone with that instead. VRAM is important these days; that's been demonstrated clearly by several people.
Can vouch for the 7900 XT. It's been flawless.

Nvidia wishes they could say their cards have enough VRAM, and they just use this sorry excuse whenever that's not the case.
Thank god the reviewers know better.
Well, I have to give Nvidia some credit for their honesty. This is like a coming out: even if they don't realize it, they're confirming more suspicions than they've removed. They do that especially in the lines where they say even a 4060 Ti will benefit from 16 GB at higher IQ settings. They know they can't fight facts.

Imagine buying an 8 GB card with all this information. You'd be pretty damn stupid, especially when you know RT also demands added VRAM - the very feature Nvidia itself pushes.
 
At this point the VRAM discussion is more like a shouting match: lots of unreasonable claims, everyone has their own opinion, and me, I'm still waiting for reasonable tests in reasonable scenarios. This should be a 1440p card to be used with medium settings, and that's what I want to see. Not 1080p (even if anyone can use it for that, for sure), not ultra, not RT (no one actually takes RT seriously), not 4K, and not tested with a 4090 pretending to be a 4060.
Check here from time to time; I think English gets you far enough, and the text is irrelevant anyway.


Beyond that, medium settings are easy for most graphics cards in this segment. If you want to truly see the performance delta, you need to challenge the cards. You will see this if you click through the review up there. It's a wild orgy of CPU and pipeline / game-engine bottlenecking; you're seeing the test bed, not the GPU.
 
I will test all cards at the same settings for the foreseeable future, which are maximum

I'm hoping with the increased cache this card performs better than expected but I'm not holding my breath.
 
I will test all cards at the same settings for the foreseeable future, which are maximum
And rightly so. If people want the kiddie version, they can go to YT to have it spelled out for them.
 
That's definitely marketing speak, the way it glosses over the copy to main system RAM that always happens, going all the way out to the HDD/SSD whenever that's needed.
This quote right here.
If the data's missing from the VRAM, the GPU requests it from your system's memory. If the data is not in system memory, it can typically be loaded into system memory from a storage device like an SSD or hard drive. The data is then copied into VRAM, L2, L1, and ultimately fed to the processing cores. Note that different hardware- and software-based strategies exist to keep the most useful, and most reused, data present in caches.
It's basic misinformation.
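Misinformation or not, the fall-through order the quote describes can be sketched as a toy lookup. The level names, the fill-on-miss behavior, and the asset name below are illustrative assumptions, not actual driver internals:

```python
# Fastest to slowest, per the quoted description.
HIERARCHY = ["L1", "L2", "VRAM", "system RAM", "storage"]

def fetch(asset, contents):
    """Return (level the asset was found at, levels traversed).

    `contents` maps each level name to the set of assets resident there.
    A miss at one level falls through to the next, slower level; on a
    hit, the asset is copied back into every faster level it missed in,
    mimicking the fill path toward the processing cores.
    """
    traversed = []
    for level in HIERARCHY:
        traversed.append(level)
        if asset in contents[level]:
            for missed in traversed[:-1]:
                contents[missed].add(asset)
            return level, traversed
    raise KeyError(asset)

contents = {lvl: set() for lvl in HIERARCHY}
contents["storage"].add("texture_atlas")

# First request walks the whole chain down to storage ...
first_level, path = fetch("texture_atlas", contents)   # -> "storage"
# ... the second hits L1, because it was filled on the way back up.
second_level, _ = fetch("texture_atlas", contents)     # -> "L1"
```

The expensive case is exactly the one the VRAM debate is about: when the working set doesn't fit in VRAM, every miss takes the slow path through system RAM or storage.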
The fact that Nvidia thinks a $400 GPU should be used for 1080p medium/high settings is pretty sad.
Sadder that people are still willing to pay for Nvidia crap just for so-called features.
Nvidia is the "Apple" of GPUs.
Memory Bus Width Is One Aspect Of A Memory Subsystem
Historically, memory bus width has been used as an important metric for determining the speed and performance class of a new GPU. However, the bus width by itself is not a sufficient indicator of memory subsystem performance. Instead, it's helpful to understand the broader memory subsystem design and its overall impact on gaming performance.
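The arithmetic behind that paragraph is simple: peak DRAM bandwidth is just bus width times data rate. A back-of-the-envelope comparison (the data rates below are the commonly published GDDR6 figures for these cards; treat them as assumptions):

```python
def peak_bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak memory bandwidth in GB/s: (bits per transfer / 8) * Gbps per pin."""
    return bus_width_bits / 8 * data_rate_gbps

# RTX 3060 Ti: 256-bit bus, 14 Gbps GDDR6
old = peak_bandwidth_gb_s(256, 14)   # 448.0 GB/s
# RTX 4060 Ti: 128-bit bus, 18 Gbps GDDR6
new = peak_bandwidth_gb_s(128, 18)   # 288.0 GB/s
```

So the faster memory does not make up for the halved bus; the larger L2 cache is what has to close the remaining gap.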
Marketing BS; high-resolution gameplay shows a different story.
Due to the advances in the Ada architecture, including new RT and Tensor Cores, higher clock speeds, the new OFA Engine, and Ada's DLSS 3 capabilities, the GeForce RTX 4060 Ti is faster than the previous-generation, 256-bit GeForce RTX 3060 Ti and RTX 2060 SUPER graphics cards, all while using less power.
Let's completely disregard the fact that each new generation is on a smaller node than the previous one, which is where most of that energy efficiency comes from - almost none of it from architecture.
The only real so-called improvement Ada has over the last two generations is a ~3% increase in ray-tracing efficiency, plus a more accurate optical flow accelerator - a unit that has been present since Turing, three generations back. Which means it was just wasting silicon die space on Turing and Ampere, doing nothing.
 
To be fair, those are very shitty ports: unfinished games that run like shit.
Nothing shitty about Resident Evil; it runs and looks great. Hogwarts and the likes of TLOU are a different kettle of fish, however. I actually bought RE4 Remake at almost full price (slightly discounted, as I used CDKeys), which I rarely ever do for a new game, and it was money well spent.
 
The other side of the VRAM 'issue' is lazy console developers.
Since consoles have unified memory, the fast/cheap thing to do is just cram all the assets into memory, because it's all one very fast pool of 16 GB GDDR6 on a 256-bit/320-bit bus (no separate 'RAM' and 'VRAM'; it's all one unsegregated pool).

So come PC-port time, they don't bother to properly manage memory / I/O pressure, and everything falls apart because the way they handle assets is frankly inefficient.

Now, Nvidia knows this, and they should have made the effort to ensure that 10 GB was the minimum.
 
AMD creates Infinity Cache, it's a game changer. Nvidia creates this, it's a gimmick and they should just add more memory chips. Tech illiterates these days. :shadedshu:
 
AMD creates Infinity Cache, it's a game changer.
I don't think anyone claimed that for AMD's L3 Infinity Cache.
 
AMD creates Infinity Cache, it's a game changer. Nvidia creates this, it's a gimmick and they should just add more memory chips. Tech illiterates these days. :shadedshu:
You jest right?
There's a difference between adding extra cache in an attempt to give your cards an edge versus adding extra cache in an effort to save some money by skimping on the VRAM/bus width.
We've already seen that the 4070 still falls apart if VRAM fills up regardless of the extra cache vs last gen.

And even then, we've seen with AMD's 6000 series that the extra cache primarily improved performance at 1440p (a bit); it was never intended to compensate for VRAM.
I checked reviews again, and honestly you can't tell if it's the cache or the architecture.
 
Well then, I guess it's best to get a 7900 XT/XTX, since they have both a large amount of VRAM and a large amount of cache. Thanks, Nvidia!
Props to the PR team.
 
I mean, this article could basically be paraphrased as "Nvidia fails to divert attention away from inadequate VRAM by mansplaining how they copied AMD's Infinity Cache."

Nobody here is falling for it; 8GB was $200 mainstream 7 years ago. We need at least 12GB at $399.

I'm hoping with the increased cache this card performs better than expected but I'm not holding my breath.
What's the point of better-than-expected performance if you're forced to run low settings due to VRAM limitations?

And even then, we've seen with AMD's 6000 series that the extra cache primarily improved performance at 1440p (a bit); it was never intended to compensate for VRAM.
I checked reviews again, and honestly you can't tell if it's the cache or the architecture.
Cache helps offset less bandwidth, not less VRAM capacity. 2023's hot topic is games needing more capacity.
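That distinction can be put in numbers: a cache raises the effective bandwidth in proportion to its hit rate, but it doesn't add a single byte of capacity. A rough average-bandwidth model, with purely illustrative figures:

```python
def effective_bandwidth(hit_rate: float, cache_gb_s: float, dram_gb_s: float) -> float:
    """Average bandwidth seen by the GPU when a fraction of requests hit the on-die cache."""
    return hit_rate * cache_gb_s + (1 - hit_rate) * dram_gb_s

# Assumed figures: ~1000 GB/s on-die cache, 288 GB/s DRAM.
# A 50% hit rate more than doubles the effective bandwidth ...
boosted = effective_bandwidth(0.5, 1000.0, 288.0)   # 644.0 GB/s
# ... but once the working set exceeds VRAM capacity, misses spill to
# PCIe-speed system RAM instead of DRAM, and no hit rate fixes that.
```

In other words, the cache argument answers the bus-width criticism, not the 8 GB criticism.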
 
I mean, this article could basically be paraphrased as "Nvidia fails to divert attention away from inadequate VRAM by mansplaining how they copied AMD's Infinity Cache."

Nobody here is falling for it; 8GB was $200 mainstream 7 years ago. We need at least 12GB at $399.

For 16 GB, don't pay more than $300, and for 10-12 GB, no more than $250, personally.

:)
 
For 16 GB, don't pay more than $300, and for 10-12 GB, no more than $250, personally.

:)
A pipe dream in the current market for new GPUs; that's barely achievable even buying used, last-gen AMD cards on eBay.

That is, however, what I did: I bought a used 6800 XT and 6700 XT at the prices you suggested. The 3070 was replaced by the 6800 XT, and whilst the 3060 at least had enough VRAM, it was hot and slow (thanks, Samsung 8nm!).
 