
Nvidia's Neural Texture Compression technology should alleviate VRAM concerns

Wait a minute. If they could enlarge a frame (DLSS 1), then enlarging a texture file should be a piece of cake.

Moreover, they're compressing, which means they have all the original data, so they can choose how to squeeze it down so that the most decisive parts remain.
In upscaling (DLSS 1), the missing data is simply gone, with no say in how the algorithm recovers it.
In compression, you have the luxury of selecting what to discard, which gives your algorithm the best odds of getting it back. :)
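To make that concrete, here's a toy sketch in Python (my own illustration, not whatever NVIDIA's actual algorithm does): a compressor that can see the original gets to keep the strongest frequency coefficients, while an upscaler only ever sees a shrunken copy and has to guess.

Code:
# Toy illustration only: compression with access to the original vs blind upscaling.
# Assumes numpy; the texture, keep-ratio and downsample factor are made up.
import numpy as np

y, x = np.mgrid[0:64, 0:64] / 64.0
texture = 0.5 + 0.25 * np.sin(8 * np.pi * x) + 0.25 * np.cos(6 * np.pi * y)

# "Compression": we hold the original, so we keep only its strongest
# frequency coefficients (the decisive parts) and drop everything else.
spectrum = np.fft.fft2(texture)
k = int(spectrum.size * 0.10)                      # keep the top 10%
threshold = np.sort(np.abs(spectrum).ravel())[-k]
kept = np.where(np.abs(spectrum) >= threshold, spectrum, 0)
from_compressed = np.fft.ifft2(kept).real

# "Upscaling": the detail was never stored, so it can only be guessed
# from a 4x-downsampled copy (nearest-neighbour blow-up here).
small = texture[::4, ::4]
from_upscaled = np.repeat(np.repeat(small, 4, axis=0), 4, axis=1)

print("error after compression:", np.abs(texture - from_compressed).mean())
print("error after upscaling:  ", np.abs(texture - from_upscaled).mean())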
 
Slapping on more and more gobs of VRAM generation after generation isn't a complete solution, because that VRAM costs money. It raises the price of a card at a time when prices for cards are already ridiculous. IMO there needs to be an increase in the VRAM offered, but keeping VRAM requirements from getting absurd is also part of the solution.
 
VRAM is among the cheaper components on any card, and prices go down continuously throughout the lifetime of a GDDR generation, so it's a more viable solution than the alternatives. Also, when you're doing "neural texture compression", whatever the eff that means, did anyone count the power consumption of that?

FYI, video super resolution using tensor cores on NVIDIA cards uses tons of power!
 
Be happy with what good can come from this and not look for the bad.

It has the potential to reduce VRAM usage, yes, but given the current market, the benefit for end consumers is likely to be nothing. Can you blame people for not rooting for a feature that will only be used to further increase Nvidia's margins? That's a far more realistic POV given the current market. I find it hard to enjoy new graphics technologies when they are gated to only those with fat stacks.
 
They already implement texture compression, as everyone has for decades. It's cool that, to use your analogy, they went from zip to rar, but at some point maybe they should just use more VRAM like their competitors.

The answer isn't always more hardware, but I also think there are members in this thread who cling to the 6GB days and that colors their world view. It is 2023.

Isn't AMD releasing a new card with 8GB soon? About the same performance as the old 3070?

What is this "more VRAM like the competitors"?
 
Isn't AMD releasing a new card with 8GB soon? About the same performance as the old 3070?

What is this "more VRAM like the competitors"?

It will cost $250-300 and most likely perform like a 6650 XT-6700 XT. So it will still be meh AF, but likely less embarrassing than a $400+ 8GB card.
 
What is this "more VRAM like the competitors"?

Lol.

7900XTX 24GB, same as 4090 but much cheaper.
7900XT 20GB > 12GB 4070ti

The 6800/6800 XT/6900 XT/6950 XT all have more memory than the 4070 Ti/4070, and as much as a 4080, a card that's significantly more expensive.

Yeah, that 7600 XT will have 8GB, but how much do you want to bet that it will be significantly cheaper than the 4070? Stop kidding yourself. AMD does offer much more memory at every price point; I don't know how you could refute that.
 
If you're getting something "for free"*, you are the product.



*From Nvidia
 
Yeah, that 7600 XT will have 8GB, but how much do you want to bet that it will be significantly cheaper than the 4070? Stop kidding yourself. AMD does offer much more memory at every price point; I don't know how you could refute that.

So if it's cheaper, it's OK to sell the same performance as the 3070, with stutters, unplayable gameplay, etc.?

I think the fanboys lost their collective minds. Either the VRAM is a real issue or it's not, pick a lane. Now someone can buy a broken product and it's OK if it's a couple of bucks less.
 
Imagine all textures being saved grayscale, and then we tell the AI: they're sitting on a rock, Einstein is wearing blue shorts, the sand is (specify a color gradient).

It doesn't have to predict everything, because we have the original texture; we include enough data to fully recover it from grayscale (instead of telling it what color each pixel is, one by one).



(attached image: AI-colorized example)
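For what it's worth, the non-AI version of that idea already exists as chroma subsampling: keep the grayscale (luma) at full resolution and only a heavily shrunken copy of the colour, then reattach the colour on decode. A rough Python sketch with my own toy numbers, nothing to do with NVIDIA's method:

Code:
# Toy chroma-subsampling sketch: full-res grayscale + tiny colour planes.
# Assumes numpy; the texture is random noise, so the recovered colour is poor here,
# while real textures (slowly varying colour) fare much better.
import numpy as np

def rgb_to_ycbcr(rgb):
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  = 0.299 * r + 0.587 * g + 0.114 * b      # the "grayscale" we keep in full
    cb = 0.5 + 0.564 * (b - y)
    cr = 0.5 + 0.713 * (r - y)
    return y, cb, cr

def ycbcr_to_rgb(y, cb, cr):
    r = y + 1.403 * (cr - 0.5)
    b = y + 1.773 * (cb - 0.5)
    g = (y - 0.299 * r - 0.114 * b) / 0.587
    return np.clip(np.stack([r, g, b], axis=-1), 0.0, 1.0)

texture = np.random.default_rng(1).random((256, 256, 3))

y, cb, cr = rgb_to_ycbcr(texture)
cb_small, cr_small = cb[::8, ::8], cr[::8, ::8]  # colour "hints" at 1/64 the texels

stored = y.size + cb_small.size + cr_small.size
print("stored fraction of the original:", stored / texture.size)   # ~0.35

# Decode: blow the colour hints back up and reattach them to the grayscale.
cb_up = np.repeat(np.repeat(cb_small, 8, axis=0), 8, axis=1)
cr_up = np.repeat(np.repeat(cr_small, 8, axis=0), 8, axis=1)
recovered = ycbcr_to_rgb(y, cb_up, cr_up)
print("mean error on this noise texture:", np.abs(texture - recovered).mean())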
 
So if it's cheaper, it's OK to sell the same performance as the 3070, with stutters, unplayable gameplay, etc.?

I think the fanboys lost their collective minds. Either the VRAM is a real issue or it's not, pick a lane. Now someone can buy a broken product and it's OK if it's a couple of bucks less.

I think most people have been consistent: 8GB is fine on an entry-level card. I still think even $250 is likely too much for this card, but it's much better than what Nvidia seems to want to do and stick 8GB on a $400 one.
 
So if it's cheaper, it's OK to sell the same performance as the 3070, with stutters, unplayable gameplay, etc.?

I think the fanboys lost their collective minds. Either the VRAM is a real issue or it's not, pick a lane. Now someone can buy a broken product and it's OK if it's a couple of bucks less.

It doesn't have to be either/or. How about a twofold approach to a future solution: some increase in VRAM, plus pursuing Neural Texture Compression to keep ridiculous amounts of VRAM from being required in the future?
 
So if it's cheaper, it's OK to sell the same performance as the 3070, with stutters, unplayable gameplay, etc.?

I think the fanboys lost their collective minds. Either the VRAM is a real issue or it's not, pick a lane. Now someone can buy a broken product and it's OK if it's a couple of bucks less.

It depends on the price point. Certain expectations are baked into certain price tiers as well.

Ultimately, reviews will bear out whether these 8GB cards are cheap enough to warrant such a small VRAM buffer and whether they offer enough value to customers.
 
Imagine all textures being saved grayscale, and then we tell the AI: they're sitting on a rock, Einstein is wearing blue shorts, the sand is (specify a color gradient).

It doesn't have to predict everything, because we have the original texture; we include enough data to fully recover it from grayscale (instead of telling it what color each pixel is, one by one).



(attached image: AI-colorized example)
Well, that's a pretty horrible result if you want to go by that example; also, I doubt it would be lossless.
 
I think the fanboys lost their collective minds.
No, you're just talking nonsense; AMD does offer more memory at every price point, that's just a fact.

It's one thing to sell an $800 video card and skimp on VRAM and a totally different thing to do the same on a $350 one; if you somehow think the two are the same, then you're the fanboy here. I wouldn't expect a lower-end video card to come with a ton of VRAM, but I would expect it from one that sells for the same money that used to buy you a flagship card not too long ago.
 
Imagine all textures being saved grayscale, and then we tell the AI: they're sitting on a rock, Einstein is wearing blue shorts, the sand is (specify a color gradient).

It doesn't have to predict everything, because we have the original texture; we include enough data to fully recover it from grayscale (instead of telling it what color each pixel is, one by one).
But what happens if there are actual grey colours in your black&white colour palette (e.g. grey pants, grey socks etc.)?
 
Well that's a pretty horrible result if you wanna go by that example, also I doubt if it would be lossless.
I said we will provide all the information needed to guide the model to a full retrieval of an image identical to the original.

The example: the colorful image does not exist; all we can do is GUESS.
Neural compression: how much data do you need to keep for a full recovery?
DLSS 1/2/3: make a good guess, and good luck.


Besides, in texture files, the rock and the sea are separate files; there will never be any bleeding of color from one mesh to another.

But what happens if there are actual grey colours in your black&white colour palette (e.g. grey pants, grey socks etc.)?
My example is not a texture, it is an image. Your socks will be a... okay, let's say your character has one texture file; you just tell the AI the socks are grey!
And in textures, there's nothing around the socks (UV islands have padding), so it's easy to isolate the object accurately:

(attached image: UV island example)


Although most likely we won't be doing compression for hero assets (the main character) but mostly for the environment (rocks, roads, vegetation, NPCs, bark and some litter here and there).
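If anyone wants to see how cheaply the islands can be pulled apart, here's a tiny Python sketch (toy atlas, invented labels, just my illustration of the padding idea):

Code:
# Toy sketch: because UV islands are separated by padding, each one is just a
# connected component of non-background texels. Data and labels are made up.
import numpy as np
from scipy import ndimage

atlas = np.zeros((128, 128))          # stand-in atlas; background stays 0
atlas[10:40, 10:60] = 1.0             # pretend this block is the "socks" island
atlas[70:120, 30:90] = 1.0            # and this one is the "shorts" island

islands, count = ndimage.label(atlas > 0)   # padding keeps the islands separate
print("UV islands found:", count)           # -> 2

# A colour hint could then be one tiny record per island, e.g. {1: "grey", 2: "blue"},
# instead of per-texel colour data.
for label in range(1, count + 1):
    ys, xs = np.nonzero(islands == label)
    print(f"island {label}: {len(xs)} texels, "
          f"bbox x[{xs.min()}..{xs.max()}] y[{ys.min()}..{ys.max()}]")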
 
Definitely sus, but the two things don't cancel out; that's like saying I don't need WinRAR, I can just get a bigger drive or faster internet.
or 7-zip.
 
I said we will provide all the information needed to guide the model to a full retrieval of an image identical to the original.
That still requires computational power, and if it were implemented anything like the video upscaling on tensor cores (i.e. requiring more power), then it wouldn't be worth it. Nothing is free, and you know what's cheaper? The freaking VRAM ~ not always, but often enough!
 
For those of us who feel like we got the shaft on VRAM, this is something nice.

"There is some good news on the horizon thanks to Nvidia. It's working on a new compression technology it calls Neural Texture Compression, and like most of the tech coming out of Nvidia these days, it's thanks to AI."


Someone else mentioned this a couple of days ago in some other thread. Had a brief look at it. Definitely exciting possibilities. But I hope this is not nV's ultimate response to growing VRAM demands, when the easiest method is to simply adopt increased VRAM provisions from the get-go. A combination of increased VRAM, enhanced compression techniques (not just textures), widely adopted DirectStorage (plus less of a CPU decompression performance tax), etc. are all golden, provided we're not compromising one over the other. It's a shame we're in 2023 and still looking forward to these possibilities. It's not that these achievements were previously impossible; progression was only hampered by the usual suspects: corporate self-interests.

My point is I don't need more VRAM,.....

Like you, there are plenty more people, quite possibly a majority of gamers (consoles/PC), who don't need more than 8GB. Many of them probably won't need more than 8GB for the next 5 years and beyond. Ask one of my nieces... not long ago I picked up a used but GPU-less cheapie 4790K/16GB/256GB-SSD build and slipped in an almighty, globally acceptable "4GB" RX 570. Boooom! She's over the moon! Roblox/Valorant/another less stressful weird game which I can't recall, on a 1080p display. In her current setting, if she were familiar with all the technicalities, I'm sure she'd be here today blissfully joining you in this quest, keeping expectations low with "I don't need more than 4GB of VRAM"... but that doesn't mean the rest of us fit the same 4/8GB bill.

I applaud you; nothing wrong with the ~8GB happy bunch, as long as we're not suggesting "keep incapacitating the graphics evolution", which is already hacked to pieces (or, politely put, trimmed).
 
Textures don't really use that much memory anymore in proportion to the rest of the assets and other memory the engine might need to allocate. If you don't believe me, fire up any recent game, toggle between ultra and low textures, and you'll see for yourself; it's often less than a 2GB difference at most, unless of course they are absurdly high resolution for some reason.

Shouldn't the AI futurist megaminds at Nvidia work on something more useful, like using ML to dynamically generate textures at run time, rather than trying to squeeze out some -5% in VRAM consumption?
Yeah, it is a smaller proportion, mostly due to games migrating over to a unified memory architecture, although with some games it will still be around 6-8 gigs just for textures.

So Nvidia's work here isn't going to be a huge game changer. Plus, we can rely on them to make it proprietary and only work on their latest SKUs.

It will also be countered by DirectStorage memory demands, I think; I cannot see anything other than more VRAM being needed for games utilising DS.

I think Nvidia missed the memo that when you offload things to a GPU, it also needs VRAM to store that data.
 
That still requires computational power, and if it were implemented anything like the video upscaling on tensor cores (i.e. requiring more power), then it wouldn't be worth it. Nothing is free, and you know what's cheaper? The freaking VRAM ~ not always, but often enough!
But if the compression is only done on the GPU side (e.g. the textures are pre-compressed), couldn't they incorporate a custom ASIC on the video card (like they do on consoles) to do the decompression on the fly? Depending on how large the textures are, couldn't transferring the compressed textures save you enough time over transferring uncompressed textures that the decompression could be done for free, from a time perspective?
I also thought GDDR6 and GDDR6X were notoriously power hungry -- even compared to GDDR5 and GDDR5X?
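Back-of-the-envelope, that time trade-off is easy to sanity-check. All the numbers below are assumptions I picked for illustration, not measured figures:

Code:
# Rough check on "could decompression be free from a time perspective?"
# Every constant here is an assumption for illustration only.
texture_mb        = 64.0     # uncompressed texture size
compression_ratio = 4.0      # assumed ratio for the compressed copy
pcie_gb_s         = 16.0     # assumed effective transfer rate to the GPU
decompress_gb_s   = 32.0     # assumed on-GPU decompression throughput (output bytes)

uncompressed_ms = texture_mb / 1024 / pcie_gb_s * 1000
compressed_ms = ((texture_mb / compression_ratio) / 1024 / pcie_gb_s
                 + texture_mb / 1024 / decompress_gb_s) * 1000

print(f"send uncompressed:            {uncompressed_ms:.2f} ms")   # ~3.9 ms
print(f"send compressed + decompress: {compressed_ms:.2f} ms")     # ~2.9 ms
# With these made-up numbers the compressed path wins, because moving a quarter of
# the bytes saves more time than the decompression adds; a slow decompressor or a
# small compression ratio would flip the result.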
 
I've seen some conflicting rumors about this.
One claims it's Nvidia's tensor cores doing the work for the compression; the other claims it's the BVH traversal part of the ray tracing cores, or the shaders, which would mean both AMD and possibly Intel could also use it. But if it's just the tensor cores Nvidia uses, then it will most likely be exclusive to Nvidia cards, which means they'll market it as a feature to say you won't need X amount of VRAM. I don't like the idea of more proprietary hardware that makes last-generation hardware e-waste because it can't use the so-called new software without the new hardware.
 
Imagine all textures being saved grayscale, and then we tell the AI: they're sitting on a rock, Einstein is wearing blue shorts, the sand is (specify a color gradient).

It doesn't have to predict everything, because we have the original texture; we include enough data to fully recover it from grayscale (instead of telling it what color each pixel is, one by one).



(attached image: AI-colorized example)
This... doesn't look good, tbh.
 
I've seen some conflicting rumors about this.
One claims it's Nvidia's tensor cores doing the work for the compression; the other claims it's the BVH traversal part of the ray tracing cores, or the shaders, which would mean both AMD and possibly Intel could also use it. But if it's just the tensor cores Nvidia uses, then it will most likely be exclusive to Nvidia cards, which means they'll market it as a feature to say you won't need X amount of VRAM. I don't like the idea of more proprietary hardware that makes last-generation hardware e-waste because it can't use the so-called new software without the new hardware.
Isn't the only other use of tensor cores DLSS? If they can find another use for them, all power to Nvidia -- for games that don't implement DLSS at all!
 