
NVIDIA DLSS Transformer Cuts VRAM Usage by 20%

8 GB / 0.8 is just 10 GB.
Check the 3080 VRAM issue for more context.

It is a fact that many users have complained about the 3080 VRAM issue since 2021.

You can accuse me of making assumptions or anything else; it's meaningless.
A simple Google search and people quickly realize 10 GB is not enough.
There is no point debating simple facts.

You are just ignoring them and refusing to understand the reality.
All I'm seeing here is a bunch of blah, blah, blah. You are not making logical arguments, you're trying to save face and failing, badly.

That's because I slightly edited the better starting image, the one whose lighting wasn't borked to begin with, and improved it. I can still improve the one that had worse lighting as well; it's just starting from a weaker point in the first place.

Here's the DLAA with edits... it helps for certain, but I still don't like it as much overall. Some aspects are better, with a bit more crispness, but the lighting and shading weren't as good to begin with, so while it's better than it was before the edits, it's still not as good as the DLSS was with edits, given that one had better lighting to start with and was further enhanced. It's a case of more natural lighting and shading versus more refined details. Lighting and shading help a lot with general depth perception; they bring you into the scene a lot more. One feels a lot more like you could reach out and touch the stuff within the scene, while the other feels flat, a bit fake, and not so 3D in terms of perception and depth. Basically, one does a better job of tricking the eye into thinking it's a real-life environment, even though it's still very much a fake 3D world.
View attachment 405912
I mean, that image in itself exceeded the forum's 16 MB limit for image attachments, so I had to convert it to JPG (albeit still at 100% quality) to post. It's not a perfect representation of what the image looks like in its purest form, although you can see where the DLSS blur tends to show if you look really carefully: bear fur and some of the foliage, though it handles grass surprisingly well. But this is feeding only 25% of the pixel count, so three-fourths of the image is basically being inferred here. I think it's a decent result, one you'd find more than acceptable on a small monitor or laptop panel. I'm satisfied with the overall result even on a large 4K panel and with an eye for detail; I just have a problem when DLSS is flaunted as a fix for poor optimization (and sadly Oblivion is such a case).
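To put rough numbers on that "25% of the pixel count" point, here's a minimal sketch of the arithmetic, assuming a 3840x2160 output and the commonly cited 0.5 per-axis render scale for the Performance preset (the scale factor is my assumption, not something measured in this game):

```python
# Back-of-the-envelope pixel math for DLSS Performance at 4K (illustrative only).
# Assumes the commonly cited 0.5 per-axis render scale for the Performance preset.
out_w, out_h = 3840, 2160                 # output resolution
scale = 0.5                               # per-axis render scale (assumed)

in_w, in_h = int(out_w * scale), int(out_h * scale)   # 1920 x 1080 internal
rendered = in_w * in_h
output = out_w * out_h

print(f"internal render: {in_w}x{in_h} ({rendered:,} px)")
print(f"output:          {out_w}x{out_h} ({output:,} px)")
print(f"rendered fraction: {rendered / output:.0%}")        # 25%
print(f"inferred fraction: {1 - rendered / output:.0%}")    # 75%
```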
Either I'm just not that picky, or the difference is small enough not to be bothersome. Maybe it's because it's a video game; it's never going to look perfectly real (and shouldn't), and I am consciously choosing not to be bothered by imperfections. Regardless, nothing is standing out here.
 
All I'm seeing here is a bunch of blah, blah, blah. You are not making logical arguments, you're trying to save face and failing, badly.
I have no idea where the whole “Consoles shared memory pool = VRAM on PC” fallacy even comes from. Shared is shared; a not-insignificant part of that pool goes to what system RAM handles on PC. And it's not like consoles run ULTRA NIGHTMARE EPIC OVERDRIVE setting presets or whatever, with dynamic resolution often in place as well. I am fairly certain that, if one found the closest-to-console set of settings in the PC version, 8 GB cards would still run just fine, especially if your target is to replicate the console's 30 FPS.

MSAA is still perfectly usable and SSAA isn't impossible to run. I don't know what bizarro universe you come from, but I'm glad I don't live there. Sounds like hell, where you're forced to use TAA everywhere.
Brother, literally most modern games rely on having some sort of temporal AA to even look right. I live in a reality where that's an indisputable fact. UE5 is straight up MEANT to run not only with TAA, but actually upscaled with TAAU. That's the default. Forcing AA off where possible just leads to a grainy, over-dithered look.
And as for SSAA: of course you CAN use it, if you want to cut your framerate drastically. If you want to, go ahead.
 
I might be wrong, but doesn't the DLSS Transformer model also lower performance by ~10%?
I noticed ~6% on my 4090.

I'm still trying to figure out why I would want to run DLSS at all. The number of NV black-screen crashes increases dramatically, and image quality is most certainly worse than native. I get more than acceptable framerates at native res, so can someone help me understand why any of this matters? I have to be missing some key data point.
Sure, here you go

1) DLSS CNN, and even more so DLSS Transformer, can in a lot of cases offer better image quality in games that use TAA.

2) DLSS gives you the freedom to buy a higher-res monitor (4K instead of 1440p, for example); you'll end up with very similar performance at 4K DLSS Quality compared to 1440p native, but with drastically higher image quality (see the quick sketch below).
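A minimal sketch of the pixel math behind point 2, assuming the commonly cited 2/3 per-axis render scale for the Quality preset (the scale is my assumption, not a measured value):

```python
# Compare the internal render resolution of 4K DLSS Quality with native 1440p.
# The 2/3 per-axis scale for the Quality preset is an assumption.
def internal_res(out_w, out_h, scale):
    return round(out_w * scale), round(out_h * scale)

qw, qh = internal_res(3840, 2160, 2 / 3)          # ~2560 x 1440
print(f"4K DLSS Quality renders internally at ~{qw}x{qh} ({qw * qh:,} px)")
print(f"Native 1440p renders at 2560x1440 ({2560 * 1440:,} px)")
# Similar internal pixel counts -> similar GPU cost, but the DLSS Quality case
# outputs to a 4K panel with reconstructed detail.
```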

Who told you this? This has never been the case, like, ever. DLSS doesn't cause BSODs, and of course the image loses some sharpness; it's no longer rendering at native resolution (unless it is, i.e. DLAA). I took these screenshots for another thread, but I'll post them again here. You'd be hard pressed to tell the two apart on most monitors, by the way. It's DLSS at Performance (25% of the pixel count) vs. DLAA, both with 2x FG enabled.

View attachment 405884

View attachment 405883
The bottom one is clearly sharper; the top is blurrier. For DLSS Performance, the top one looks very playable.
 
I have no idea where the whole “Consoles shared memory pool = VRAM on PC” fallacy even comes from.
Exactly, it's a failure of an argument on every level. While many consoles run on the same x86-64 architecture found in PCs, with very similar hardware, that does not mean they are fully comparable. Quite the opposite. Anyone offering that argument instantly disqualifies themselves from being taken seriously.
 
I might be wrong, but doesn't the DLSS Transformer model also lower performance by ~10%?
That's an incorrect statement. The Transformer model is not as performant as the CNN model, but it looks significantly better. AMD has a similar situation with FSR 4: it's not as performant as FSR 3, but has significantly better quality. Honestly, I'd take better quality any day over a minimal difference in framerate. I mean, does 90 fps for Transformer versus 95 for CNN make any realistic difference when Transformer looks so much better? I think not. Even if it were an 80 vs 90 difference. But if you took away DLSS (or FSR) entirely, you'd have, I don't know, 40-50 fps? I think it's still worth it either way.
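For perspective, here is the frame-time arithmetic behind those example numbers (just a sketch using the figures above, not a benchmark):

```python
# Frame-time cost of the Transformer model, using the example framerates above.
def frame_time_ms(fps: float) -> float:
    return 1000.0 / fps

for cnn_fps, transformer_fps in [(95, 90), (90, 80)]:
    delta = frame_time_ms(transformer_fps) - frame_time_ms(cnn_fps)
    print(f"{cnn_fps} fps -> {transformer_fps} fps: "
          f"{frame_time_ms(cnn_fps):.1f} ms vs {frame_time_ms(transformer_fps):.1f} ms "
          f"(+{delta:.2f} ms per frame)")
```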
 
How about the Fake Frame generator? It still requires a huge amount of VRAM.

Yes, oh boy, and try to convince people that VRAM capacity matters when you use FG: running out of it will tank performance anyway, making the feature pointless.
 
Yes, oh boy, and try to convince people that VRAM capacity matters when you use FG: running out of it will tank performance anyway, making the feature pointless.
There's a simple answer to that... ready? Turn frame-generation... OFF! As in: don't use it, disable with prejudice, remove it from being applied, etc., etc.

Yes, yes.
 
All well and good, but how about, as they say in North America, 'stop nickel and diming' customers with 8 GB VRAM buffers in the first place?
 
There's a simple answer to that... ready? Turn frame-generation... OFF! As in: don't use it, disable with prejudice, remove it from being applied, etc., etc.

Yes, yes.

Well of course, you know that, I know that, they don't want to know that. :D

The point I'm trying to make is that I don't think they should advertise the feature for low-VRAM cards; it's misleading and dishonest.
 
There's a simple answer to that... ready? Turn frame-generation... OFF! As in: don't use it, disable with prejudice, remove it from being applied, etc., etc.

Yes, yes.

Honestly, frame generation doesn't consume excessive VRAM. It's usable even on 6 GB GPUs. Won't say 4, because no Ada GPU has 4 GB (the 4050 is 6 GB) - but you can usually save 300-400 MB of VRAM through settings tweaking or using a lower DLSS resolution level.

All well and good, but how about, as they say in North America, 'stop nickel and diming' customers with 8 GB VRAM buffers in the first place?

When did 8 GB become completely unusable? Sheesh you guys. Y'all keep insisting this while using 16 GB of RAM like you've been doing since, what, 2010? Scrap that, in 2010 I already had 24. That's far more of a problem than 8 GB of VRAM will ever be. These cards are for 1080p gamers; you want more, pony up. Really.
 
The point I'm trying to make is that I don't think they should advertise the feature for low-VRAM cards; it's misleading and dishonest.
Wait, frame-gen or DLSS Transformer? For the first, I agree totally; for the subject of this article, I disagree. It seems to specifically target lower-VRAM cards to improve performance.

Honestly, frame generation doesn't consume excessive VRAM. It's usable even on 6 GB GPUs. Won't say 4, because no Ada GPU has 4 GB (the 4050 is 6 GB) - but you can usually save 300-400 MB of VRAM through settings tweaking or using a lower DLSS resolution level.
While that's a fair point, there are other, better ways of improving performance. FrameGen is a neat trick that can improve smoothness if the frame rate is already high. For low frame-rate situations it's better to turn other settings down.
 
Honestly, frame generation doesn't consume excessive VRAM. It's usable even on 6 GB GPUs. Won't say 4, because no Ada GPU has 4 GB (the 4050 is 6 GB) - but you can usually save 300-400 MB of VRAM through settings tweaking or using a lower DLSS resolution level.

Depends on the game, really. Some will use way more than others. Example? Mmm, okay: Hogwarts Legacy's FG uses very little even at 4x; try that with HL2 RTX, where I use around 15.2 GB, already way too close to 16 GB for my liking.
 
Newsflash: " Nvidia Finds Cure For Cancer!"
Internet commenters: "Oh, but what about dengue fever? Ngreedia just doesn't care about that at all and is anti-human as usual! Vote with your wallet!!1!"
 
When did 8 GB become completely unusable? Sheesh you guys.
The fact you're asking is bad enough, but let me simplify why 8 GB is no longer an option, even at 1080p.
  • Most AA/AAA games will get close to 8 GB during normal use, as in no upscaling, no FG and no RT.
  • As soon as you enable just DLSS/FSR, you're instantly going to go over the 8 GB VRAM capacity -
    • This results in very low 0.1%/1% lows, which manifest as stutters and lag.
  • Once you're over your VRAM capacity you will see other issues, which further compound the above point -
    • The game will have to constantly stream textures, and even with PCIe Gen 5.0 no consumer CPU can move textures fast enough (SSD > chipset > system RAM > CPU > GPU, with CPU-to-RAM being direct) to avoid stutter and lag in-game.
    • And/or the game will downgrade the textures to fit inside the VRAM buffer, so you get an inferior visual experience compared to a GPU with even a little more VRAM, e.g. 10 GB+.
  • If you then also enable just FG (no MFG) in conjunction with upscaling, the game is leaning so heavily on system RAM that it compounds the above points, making the game unplayable. Anything less than 60 fps is unplayable and 30 fps is DOA, and 60 fps @ 1080p is something most low-end cards should be capable of handling in 2025.
If you still don't believe me, have a look at these excellent videos by Hardware Unboxed -

 
Solid, if you don't want to give your customers more VRAM while asking for more and more money.
Solid? They further compressed the data, which reduces VRAM usage by the DLSS module ONLY.
[attached screenshot]

Yep, you save 20% of the VRAM occupied by the DLSS module. That's about 80 MB saved at 4K, provided you have DLSS enabled.
80 MB freed won't save anyone who runs out of memory on an 8 GB GPU.
I don't know why this DLSS VRAM reduction is actually worth a news post.
There are always minor differences in overall VRAM usage between driver versions.
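Rough arithmetic on why the headline 20% is so small in absolute terms; the ~400 MB module footprint at 4K is an assumption back-solved from the ~80 MB figure above, so treat it as illustrative:

```python
# Illustrative only: the 20% cut applies to the DLSS module's own allocation,
# not to the game's total VRAM footprint. The 400 MB footprint is an assumption.
dlss_module_4k_mb = 400          # assumed Transformer footprint at 4K
saving_mb = dlss_module_4k_mb * 0.20
print(f"~{saving_mb:.0f} MB freed at 4K")                        # ~80 MB

budget_mb = 8 * 1024             # an 8 GB card
print(f"that is {saving_mb / budget_mb:.1%} of an 8 GB budget")  # ~1.0%
```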

Of course an NV GPU with X GB of VRAM will run out of VRAM later than an AMD GPU with the exact same amount, because of better overflow handling and overall superior VRAM management (plus higher bandwidth if we compare the 9070 vs. 5070 or the 9060 vs. 5060), but the difference isn't massive: 10 percent at the very most if we only look at the worst-case scenarios for AMD.
This is no longer a thing. Go see TPU's review of the RX 9060 XT 8 GB version, which fares better in memory-constrained scenarios (also see the RT and PT parts of the review).
Memory utilization has been reworked with RDNA 4.
The 8 GB variants of the RTX 5060 series suffer from (most probably) some sort of bug (or maybe it's a design flaw, or actually a feature, i.e. intended behavior).
When paired with an interface older than PCIe 5.0, they suffer a performance hit in a few titles much worse than just 10%.
This is the reason why I would not recommend the RTX 5060 (Ti) 8 GB variant for anyone with an older motherboard (older than PCIe 5.0).

 
Some comparison, including the Transformer model.

 
The fact you're asking is bad enough, but let me simplify why 8 GB is no longer an option, even at 1080p.
  • Most AA/AAA games will get close to 8 GB during normal use, as in no upscaling, no FG and no RT.
  • As soon as you enable just DLSS/FSR, you're instantly going to go over the 8 GB VRAM capacity -
    • This results in very low 0.1%/1% lows, which manifest as stutters and lag.
  • Once you're over your VRAM capacity you will see other issues, which further compound the above point -
    • The game will have to constantly stream textures, and even with PCIe Gen 5.0 no consumer CPU can move textures fast enough (SSD > chipset > system RAM > CPU > GPU, with CPU-to-RAM being direct) to avoid stutter and lag in-game.
    • And/or the game will downgrade the textures to fit inside the VRAM buffer, so you get an inferior visual experience compared to a GPU with even a little more VRAM, e.g. 10 GB+.
  • If you then also enable just FG (no MFG) in conjunction with upscaling, the game is leaning so heavily on system RAM that it compounds the above points, making the game unplayable. Anything less than 60 fps is unplayable and 30 fps is DOA, and 60 fps @ 1080p is something most low-end cards should be capable of handling in 2025.
If you still don't believe me, have a look at these excellent videos by Hardware Unboxed -


It was a rhetorical question. I'm well aware, and I think that HUB has played a huge part in this being blown way, way out of proportion. In fact, I even have to concede to Frank Azor defending the 8 GB 9060 XT... there is a market, and there is demand, and that is why AMD built them.

I can play most games on my laptop's 4 GB 3050 just fine. Games that this level of hardware can handle, anyway - including the games that people who buy entry-level gaming laptops actually play. You just need to swallow your pride and not play everything on ultra. If you want to play the latest games or you're a settings gourmet... then don't buy the cheapest, lowest end cards in the product stack. The pricing sucks? Sure, but we should have gotten the hint by now. $150 low-end cards are a thing of the past; $300 GPUs are now the wood tier. The 60 tier is now the lowest, and this isn't changing, since AMD agrees with it and Intel has been largely unable to keep its products in stock, so there's no pressure. And to make things worse, the technology and supply chain for that to change aren't there. They'll make a 5050... it's still gonna be $250-280 and won't deliver what this vocal minority wants.
 
You just need to swallow your pride and not play everything on ultra. If you want to play the latest games or you're a settings gourmet... then don't buy the cheapest, lowest end cards in the product stack.
This simple truth is the elephant in the room. And outlets like HU's buddies at TechSpot will absolutely tell you that you should "lower the settings" when it suits some other argument they are making. This is what makes the sensationalist "reporting" by the likes of HU really disgusting, and it is of course swallowed hook, line and sinker by posters who like to follow simple, excitable narratives (especially when they lay into Nvidia).
 
$150 low-end cards are a thing of the past; $300 GPUs are now the wood tier. The 60 tier is now the lowest, and this isn't changing, since AMD agrees with it and Intel has been largely unable to keep its products in stock, so there's no pressure. And to make things worse, the technology and supply chain for that to change aren't there. They'll make a 5050... it's still gonna be $250-280 and won't deliver what this vocal minority wants.
That's why I stopped buying cards. Will only buy a new one when my GPU dies beyond all repair. They no longer innovate, it's pure greed. To hell with this.
 
The fact you're asking is bad enough, but let me simplify why 8 GB is no longer an option, even at 1080p.
  • Most AA/AAA games will get close to 8 GB during normal use, as in no upscaling, no FG and no RT.
  • As soon as you enable just DLSS/FSR, you're instantly going to go over the 8 GB VRAM capacity -
    • This results in very low 0.1%/1% lows, which manifest as stutters and lag.
  • Once you're over your VRAM capacity you will see other issues, which further compound the above point -
    • The game will have to constantly stream textures, and even with PCIe Gen 5.0 no consumer CPU can move textures fast enough (SSD > chipset > system RAM > CPU > GPU, with CPU-to-RAM being direct) to avoid stutter and lag in-game.
    • And/or the game will downgrade the textures to fit inside the VRAM buffer, so you get an inferior visual experience compared to a GPU with even a little more VRAM, e.g. 10 GB+.
  • If you then also enable just FG (no MFG) in conjunction with upscaling, the game is leaning so heavily on system RAM that it compounds the above points, making the game unplayable. Anything less than 60 fps is unplayable and 30 fps is DOA, and 60 fps @ 1080p is something most low-end cards should be capable of handling in 2025.
If you still don't believe me, have a look at these excellent videos by Hardware Unboxed -

First of all, DLSS doesn't increase VRAM usage; it decreases it. The game will only downgrade the textures if you use a very high texture setting; if you instead turn textures down to High, it will work fine. And yes, you are correct, more expensive cards with more performance and more VRAM play games better. My 4090 plays games better than a 5060 Ti 16 GB. A 5060 Ti 16 GB plays games better than a 5060 8 GB. That's why they cost more money.
 
This simple truth is the elephant in the room. And outlets like HU's buddies at TechSpot will absolutely tell you that you should "lower the settings" when it suits some other argument they are making. This is what makes the sensationalist "reporting" by the likes of HU really disgusting, and it is of course swallowed hook, line and sinker by posters who like to follow simple, excitable narratives (especially when they lay into Nvidia).

Really, I wanna play games in their full glory, 4K with all the bells and whistles, have access to all the latest experimental features, toy around with LLMs (even if just for fun, I don't need exceptional performance in this particular aspect), and record and edit gameplay snippets... so I bought an RTX 5090. Was it expensive? Yes. Does it do everything I want and more with unrivaled performance? Yes. Should everyone buy one? No. Not even close. I'd argue most gamers are better served by a 9070 XT, and even most gamers with exquisite taste should stop at the RTX 5080 or grab a 4090 at a nicer price while they still can. This is for people who seek the ultimate, and life isn't made of that. With 12-16 GB of VRAM you're set for 1440p, and even some moderate 4K gaming, in 95%+ of scenarios today.

The laptop 3050 has treated me well; my laptop is now 4 years old and it has held up well in my opinion. I haven't run into anything that won't work on it, and whatever doesn't isn't exactly sensible to expect out of it (for example, MH Wilds doesn't run... OK, big deal, the laptop is below that game's minimum requirements after all).

That's why I stopped buying cards. Will only buy a new one when my GPU dies beyond all repair. They no longer innovate, it's pure greed. To hell with this.

I would not say that there has been no innovation and that it's pure greed; there has been solid progress. The problems are two-fold. First, since the manufacturing technology is extremely advanced, market demand is seriously skewed and R&D costs are sky high, the products are priced out of the reach of most hobbyists at the high end, with shrinkflation hitting everything below halo tier very hard (thus giving the perceived image that there has been little to no progress). Second, games have not yet truly caught up even to the RDNA 2/Ampere generation. With production costs soaring ever higher, games barely starting to make full use of the PS5's resources, and the fact that games have to run on the existing install base in order to sell, it'll be some time until we see the technological improvements of the latest GPUs reflected in games. GTA 6 may be the pilot title for the next generation, and it's not coming out for some time, with the first version on PS5/Xbox Series obviously scaled down for that hardware.
 
Frame-gen is actually not that VRAM heavy, or at least not at my niche resolution. It might be on a game-by-game basis, since I've only had time for a quick check in Cyberpunk, but in that game it's like +200-300 MB for each FG step up.
High settings / Path Tracing / Ray Reconstruction / Balanced Transformer DLSS, and Reflex was also enabled, so basically everything.
No FG:
CyberNoFG.jpg
2x FG:
CyberFG2.jpg
4x caps out around 10 GB, so I'm still well within my 12, though I would most likely stick to 2x or 3x anyway, since 4x feels a bit off to me, while 2x and 3x I'm perfectly fine with in this game.
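A quick tally of the numbers above, as a sketch only; the ~250 MB-per-step midpoint and the base usage back-solved from the ~10 GB at 4x are my assumptions:

```python
# Rough VRAM headroom per FG level on a 12 GB card, from the figures above.
# base_usage_gb is back-solved from ~10 GB at 4x FG; per-step cost is the
# midpoint of the 200-300 MB range I saw. All of this is illustrative.
vram_total_gb = 12.0
base_usage_gb = 9.25             # assumed usage with FG off
per_step_gb = 0.25               # assumed midpoint of 200-300 MB per FG step

for steps, label in [(0, "no FG"), (1, "2x FG"), (2, "3x FG"), (3, "4x FG")]:
    used = base_usage_gb + steps * per_step_gb
    print(f"{label}: ~{used:.2f} GB used, ~{vram_total_gb - used:.2f} GB headroom")
```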
 
shrinkflation hitting everything below halo tier very hard
You put it so mildly I can even see you working for a PR department. Making a $600 GPU less advanced than a $330 GPU from just four and a half years ago isn't just shrinkflation, it's a total and complete disaster.
With production costs soaring ever higher
Why should ANYONE care? Both NVIDIA and AMD are making super-duper profits either way: it's like $100 a GPU for them, all things considered. Upped from $50, it doesn't change the whole picture that much.
 
I guess this achievement will allow nVidia to put 20% less VRAM on their cards... :p
 
Guys,
Can we please keep the 8 GB debate out of this thread? Otherwise, you know what.
Thanks.
 