
NVIDIA DLSS Transformer Cuts VRAM Usage by 20%

It was a rhetorical question. I'm well aware and think that HUB has played a huge part in this being blown way, way out of proportion. In fact, I even have to concede to Frank Azor defending the 8 GB 9060 XT... there is a market, and there is demand, and that is why AMD built them.

You just need to swallow your pride and not play everything on ultra.
Ah fair enough :)

And I don't play games on Ultra or even High because, as has been shown, there isn't much visual fidelity to be had in a lot of games. That, and I like to keep my minimum average FPS at 100+ as I am using a 144 Hz VRR-capable monitor. So I tune the in-game settings to give me the best visual bang for buck while maintaining 100+ FPS.

First of all, DLSS doesn't increase VRAM usage. It decreases it.
I will wait for independent testing.
 
I will wait for independent testing.
Brother, it's 2025. All the tests were already done. It's one search away, like here for example. DLSS upscaling (frankly, ANY upscaling) lowers VRAM consumption, full stop, not up for debate. It makes perfect sense since it runs at a lower internal resolution. The only thing that has a VRAM cost in the entire DLSS suite is FG (obviously). And maybe an incredibly minor one with DLAA vs non-AA native output, but in that case it is negligible.
 
Really, I wanna play games in their full glory, 4K with all the bells and whistles, have access to all the latest experimental features, toy around with LLMs (even if just for fun, I don't need exceptional performance in this particular aspect), record and edit gameplay snippets... so I bought an RTX 5090. Was it expensive? Yes. Does it do everything I want and more with unrivaled performance? Yes. Should everyone buy one? No. Not even close. I'd argue most gamers are better served by a 9070 XT, and even then most gamers with exquisite taste should stop at the RTX 5080 or grab a 4090 at a nicer price while they still can. This is for people who seek the ultimate, and life isn't made of that. 12-16 GB of VRAM and you're set for 1440p and even some moderate 4K gaming for 95%+ of scenarios today.

The laptop 3050 has treated me well. My laptop is now 4 years old and it has held up well in my opinion - I haven't run into much that won't work on it, and whatever doesn't isn't exactly sensible to expect out of it (for example, MH Wilds doesn't run... ok, big deal, it's below minimum requirements for that game after all).



I would not say that there has been no innovation and that it's pure greed; there has been solid progress. The problems are two-fold. First, because the manufacturing technology is so advanced, market demand is seriously skewed and R&D costs are sky high, the products are priced out of the reach of most hobbyists at the high end, with shrinkflation hitting everything below the halo tier very hard (thus giving the perception that there has been little to no progress). Second, games have not yet truly caught up even to the RDNA 2/Ampere generation. With production costs soaring ever higher, games only just starting to make full use of the PS5's resources, and the fact that games have to run on an existing install base in order to sell, it'll be some time until we see the technological improvements of the latest GPUs reflected in games. GTA 6 may be the pilot title for the next generation, and it's not going to come out for some time, with the first version on the PS5/Xbox Series obviously scaled down for that hardware.
PC gaming master race means companies must not make profit or even lose money all for the glory of mUh G4m1ngPC!!!! PC gamers are owed this. Because PC gaming.
 
will wait for independent testing.
Though DLSS requires some VRAM to work, it actually lowers overall VRAM usage.
That's because DLSS/FSR/XeSS causes the game to render the frame at lower than native (or what is set as native) resolution.
This partial resolution requires less VRAM for rendering, as there is less image data to be rendered.
The image is then upscaled to native resolution, using various filters, enhancements, an AI model, etc.

Frame Generation, Ray Tracing, Path Tracing, Ray Reconstruction... all of those increase VRAM usage, some more, some less.
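
To put the lower-internal-resolution argument in concrete terms, here is a minimal back-of-envelope sketch. Only the DLSS per-axis scale factors below are published values; the bytes-per-pixel figure and the number of render targets are illustrative assumptions, not data from any real engine.

Code:
# Rough estimate of how internal render-target memory shrinks when the game
# renders at a DLSS scale factor instead of native resolution.
# BYTES_PER_PIXEL and ASSUMED_TARGETS are illustrative assumptions only.

NATIVE_W, NATIVE_H = 3840, 2160   # 4K output
BYTES_PER_PIXEL = 16              # assumed: e.g. an RGBA16F target
ASSUMED_TARGETS = 6               # assumed: color + depth + a few G-buffer targets

# Published DLSS per-axis internal-resolution scale factors
SCALES = {"DLAA/native": 1.0, "Quality": 0.667, "Balanced": 0.58, "Performance": 0.5}

for mode, s in SCALES.items():
    w, h = int(NATIVE_W * s), int(NATIVE_H * s)
    mb = w * h * BYTES_PER_PIXEL * ASSUMED_TARGETS / 1024**2
    print(f"{mode:12s} {w}x{h}  ~{mb:5.0f} MB of internal render targets")

Even with these made-up per-target numbers, dropping from native 4K to Quality mode cuts that portion of the footprint to roughly 0.667² ≈ 44% of the native figure; the final output buffer still has to exist at native resolution, which is why the overall game-level saving is smaller than that.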
 
PC gaming master race means companies must not make profit or even lose money all for the glory of mUh G4m1ngPC!!!! PC gamers are owed this. Because PC gaming.
This. Companies are in business to be in business. There is nothing wrong or evil about making a profit; it's how the world works. People need to STFU about the prices. As always (and forever), go for what can be afforded at the time the purchase is made, or wait longer and save up more money.

I will wait for independent testing.
As you should before jumping to conclusions..
 
Wait, frame-gen or the DLSS Transformer? For the first, I agree totally; for the subject of this article, I disagree. It seems to specifically target lower-VRAM cards to improve performance.

I was talking about the former, Frame Generation. As for the latter, it's a big win for everyone. As for using both with 8 GB cards, not sure yet; to be continued.
 
As you should before jumping to conclusions..
No conclusion jumping here, just weary cynicism. I just naturally assume everything ngreedia says these days is a lie, predicated on a falsehood, predicated on an anecdote..
 
No conclusion jumping here, just weary cynicism. I just naturally assume everything ngreedia says these days is a lie, predicated on a falsehood, predicated on an anecdote..
Tom-foolery is as Tom-foolery does sir... By the same measure and bar, AMD, Intel and EVERY other company doing business in tech deserves the same treatment, and yet you only seem to be brand-bashing on one of them.. and THAT speaks volumes.
 
By the same measure and bar, AMD, Intel and EVERY other company doing business in tech deserves the same treatment, and yet you only seem to be brand-bashing on one of them.. and THAT speaks volumes.
Eh no, I think if you look through my posts it's evident I have very little brand loyalty, if any at all, and trying to insinuate otherwise is banal. nGreedia have, since the 2000 series, shown a special kind of contempt for their customers. So while it's a verifiable fact that these corporations don't give a flying fcuk about me or you as their customer, ngreedia are currently worse than a lot of the others. That doesn't mean I don't buy their products, I just don't buy them new and I get a much longer warranty (cex.co.uk).

But we are so far off-topic now, I suspect the thread will be locked in due course.
 
On my 3060 Ti the 310.3 dll on DLSS Transformer (preset K) looks and performs well. Sadly it also significantly increased coil whine. I'm on Linux using DLSSTweaks so not sure if that's to blame. Anyway I reverted and all is well.
 
On my 3060 Ti the 310.3 dll on DLSS Transformer (preset K) looks and performs well. Sadly it also significantly increased coil whine. I'm on Linux using DLSSTweaks so not sure if that's to blame. Anyway I reverted and all is well.

That is odd. Unless your card is pushing tons of frames (150+ FPS), coil whine shouldn't increase. LLMs also seem to cause some audible coil whine on my 5090, but it's also pushing 160-170 tokens on most models. I actually find its whirring and purring as it thinks to be quite endearing lol
 
Is this really a transformer model improvement, or a DirectX 12 Agility SDK improvement allowing for smaller chunks of memory allocation, leading to a significant reduction in allocation for all GPU memory usage, including RT?

The transformer model, as it exists today, offers no such improvements. My bet is that this is just because of the MS Agility SDK (which came out earlier this year), and NVIDIA being able to leverage its improvements in their own new DLSS SDK.
 
@W1zzard needs to test this for science!

This debate is pointless without hard data.

Also, is this just for games built with the new SDK, or will the transformer overrides also apply this optimisation?
 
No Man's Sky on Linux at 4K, DLSS Quality, all game settings set to "enhanced". Very basic test consisting of loading a save from my settlement, spinning the camera 360° four times, then reading the VRAM usage from MangoHud.

DLSS 310.2.1.0 (included with the game) - 6.6 GB
DLSS 310.3.0.0 - 6.4 GB
 
Is this really a transformer model improvement, or a DirectX 12 Agility SDK improvement allowing for smaller chunks of memory allocation, leading to a significant reduction in allocation for all GPU memory usage, including RT?

The transformer model, as it exists today, offers no such improvements. My bet is that this is just because of the MS Agility SDK (which came out earlier this year), and NVIDIA being able to leverage its improvements in their own new DLSS SDK.
It's the actual model being smaller. Do notice that the 20% reduction is for the model itself, and not for the overall game's VRAM usage; the final impact is really small, as shown by the post above mine.
 
VRAM is not system RAM. 200 MB is the difference between a game running smoothly and being a stuttery mess; it's the difference between being able to have Chrome open or not.
 
It's the actual model being smaller. Do notice that the 20% reduction is for the model itself, and not for the overall game's VRAM usage; the final impact is really small, as shown by the post above mine.
Exactly.

Unfortunately, this was not formulated in the best way:
AleksandarK said:
This update directly addresses the needs of gamers running on 8 GB or lower graphics cards by trimming VRAM usage by 20%.

This is from the sourced news post at VideoCardz:
For Transformer models, the 310.3.0 SDK now uses 87.77 MB at 1080p resolution, which is 19.8% less than the previous version. This VRAM reduction also applies to other resolutions, including 1440p, 4K, and 8K. On average, memory requirements are around 20% lower.

This essentially means DLSS will require less VRAM to run the super resolution and ray reconstruction models. For common setups like 8 GB of VRAM and 1080p resolution, the memory used to run the model is barely 1% of the total.
For clarification, it's not trimming overall VRAM usage by 20% (as TPU's news post suggests). It's trimming just the portion of VRAM that the DLSS function requires by 20%.
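
A quick sanity check on those figures (just a sketch; the previous model size here is back-calculated from the quoted 19.8% reduction rather than taken from a measurement):

Code:
# Back-of-envelope check of the quoted DLSS transformer model footprint at 1080p.
new_mb = 87.77                     # 310.3.0 SDK figure quoted above
reduction = 0.198                  # stated reduction vs the previous SDK
old_mb = new_mb / (1 - reduction)  # implied previous footprint (~109 MB)
saved_mb = old_mb - new_mb         # ~22 MB saved

vram_mb = 8 * 1024                 # an 8 GB card
print(f"previous ~{old_mb:.1f} MB, saved ~{saved_mb:.1f} MB")
print(f"new footprint is {new_mb / vram_mb:.2%} of 8 GB")  # ~1.1%

So the headline 20% works out to roughly 20 MB of actual VRAM at 1080p, and the model occupies around 1% of an 8 GB card either way.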

Off-topic: I wonder why there's no news post on TPU regarding alleged Intel Nova Lake performance uplift (+10% ST, +60% MT).
 
Pinning this "news" is a crime.
Pretending there is any kind of impact from saving 20 MB or even 100 MB of VRAM on any current modern GPU is nonsense.
We all *know* the reserved spaces up there on the site are ads under cover, I mean, Trending Topics, generally for the 3 big brands, but it's against TPU's credibility to sell yourself so low.


The best part is they lower the VRAM usage by 20 MB but increase the amount used by several GB.
 
It's the actual model being smaller. Do notice that the 20% reduction is for the model itself, and not for the overall game's vram usage, the final impact is really small, as shown by the post above mine.

Thank you for taking the time to respond. My only question would then be: why wasn't this visible prior to the Agility SDK, in the transformer model which was announced earlier this year?

Further proof will be if they launch this after or alongside their driver which supports the Agility SDK. Let's wait and watch.

If this tagline is indeed true, and these efficiencies have nothing to do with the Agility SDK, this optimization should be observable without their Agility SDK-supporting driver, right?

I have my doubts, but that is ok, I am a bit of a cynic in general.
 
Pinning this "news" is a crime.
Pretending there is any kind of impact from saving 20 MB or even 100 MB of VRAM on any current modern GPU is nonsense.
We all *know* the reserved spaces up there on the site are ads under cover, I mean, Trending Topics, generally for the 3 big brands, but it's against TPU's credibility to sell yourself so low.

The best part is they lower the VRAM usage by 20 MB but increase the amount used by several GB.
The only one posting nonsense is you. VRAM is handled differently than system RAM; every MB counts.

Why people insist on hating on free performance is completely baffling. I guess we can't all be AMD/Intel and find a new way to crash the system every update...
 
Thank you for taking the time to respond. My only question would then be: why wasn't this visible prior to the Agility SDK, in the transformer model which was announced earlier this year?

Further proof will be if they launch this after or alongside their driver which supports the Agility SDK. Let's wait and watch.

If this tagline is indeed true, and these efficiencies have nothing to do with the Agility SDK, this optimization should be observable without their Agility SDK-supporting driver, right?

I have my doubts, but that is ok, I am a bit of a cynic in general.

Yes, I'm running the new NVIDIA GeForce driver 576.88 (DLSS 310.3.0).

Running perfectly on my MSI RTX 4080 Super 16G SUPRIM X with an MSI MEG Optix MEG381CQR Plus 3840x1600 144 Hz G-Sync Ultimate monitor.

The Transformer Model works on all RTX cards; it just takes a performance hit on older cards.

I'd been using an RTX 2080 Super 8GB NVLink setup for 5 years on the "CNN Model" until now. I upgraded last year to an RTX 4080 Super 16GB, and with the new Transformer Model I have to say it's very clean and butter smooth. Basically a fantastic free graphics upgrade. This is the only thing NVIDIA did well this year, in my opinion. Until the RTX 5000 Super Series.

The Transformer Model is awesome, and you don't need an RTX 5000 series with its Multi Frame Generation broken-fake-frames jello-artifact mess to use it.

It's a good and free upgrade, can't complain.
 