
Nvidia's Neural Texture Compression technology should alleviate VRAM concerns

Slapping more and more gobs of VRAM on every generation isn't a complete solution, because that VRAM costs money. It raises the price of a card at a time when card prices are already ridiculous. IMO there needs to be an increase in the VRAM offered, but keeping the VRAM requirements from getting absurd is also part of the solution.
One could apply the same argument to things like slapping on RT cores, spending loads of money to expand rendering performance.

The difference with VRAM, though, is that despite what Nvidia likes to present, it's not an expensive component. It's just given the feel of being expensive because Nvidia is so stingy with it, and of course VRAM is not modular, so one has to buy an entire new GPU to upgrade it.

If you look at it from the point of view of TFLOPS vs VRAM capacity on Nvidia, it's regressed; if you look at it in terms of what Nvidia provides vs the console market, it's regressed. At the very least I think mid and high end cards should get a bump to at least what the consoles have.

Now if Nvidia adds e.g. 8 gigs of VRAM and then adds $400 to the price, that's them being greedy; it's not the cost to them.

I think it's OK to apply minimalism (to a degree) on the budget cards whilst charging an appropriate price, so 8 gigs on a 4060 for $300. But it shouldn't be that aggressive on a 4070 Ti, same with the 3080.

Remember Intel stuck 8 gigs on a £250 card.

Isn't the only other use of tensor cores for DLSS? If they can find another use for them, more power to Nvidia -- especially for games that don't implement DLSS at all!
In my opinion the tensor cores at least have more value than the RT cores, as DLSS can be used in many games now thanks to DLDSR or whatever it's called, whilst RT is only in a tiny fraction of games, with questionable value. That said, I expect that if Nvidia wanted, they could probably implement DLDSR using normal shaders.

If I had a choice between, say, a 16 gig 4070 Ti GTX or a 12 gig 4070 Ti RTX at the same price, I'd absolutely, 100%, be picking the former. I suspect I wouldn't be alone either.
 
No, you're just talking nonsense; AMD does offer more memory at every price point, that's just a fact.

It's one thing to sell an $800 video card and skimp on VRAM and a totally different thing to sell a $350 one and do the same; if you somehow think the two are the same then you're the fanboy here. I wouldn't expect a lower-end video card to come with a ton of VRAM, but I would from one that sells for the same money that used to buy you a flagship card not too long ago.
A 60-class card is not lower end, and as for the 3070 issue we're talking about, the 3070 doesn't cost $800 now; nothing there makes sense.

I think most people have been consistent that 8GB is fine on an entry-level card. I still think even $250 is likely too much for this card, but it's much better than what Nvidia seems to want to do and stick 8GB on a $400 one.
250 usd for the 7060 and it comes with a free plane ticket to a magic land where pink elephants fly
 
250 usd for the 7060 and it comes with a free plane ticket to a magic land where pink elephants fly

If it's over $300 it's a joke for sure... it's 6nm, which is much cheaper than 5nm, and the die isn't much larger than a 6500 XT's, so AMD would be pretty bold to price it any higher than $300. Pretty sure even at that price the margins would be pretty high.
 
They will have cores for path-tracing and will call it the PTX6090; gone are the days of RTX :clap:

Or they could just call them PX and make them available in Military PX stores, because you'll probably need a hefty discount to afford them. LOL
 
If it's over $300 it's a joke for sure... it's 6nm, which is much cheaper than 5nm, and the die isn't much larger than a 6500 XT's, so AMD would be pretty bold to price it any higher than $300. Pretty sure even at that price the margins would be pretty high.

I'm sure the problem isn't your misaligned expectations. In what universe did AMD give any indication they are going to do those prices?
It's a bit like people who keep pre-ordering games and then are pissed when they all come out like crap: completely unexpected, goldfish syndrome.
 
I'm sure the problem isn't your misaligned expectations. In what universe did AMD give any indication they are going to do those prices?
It's a bit like people who keep pre-ordering games and then are pissed when they all come out like crap: completely unexpected, goldfish syndrome.

I mean, it could cost $500 for all I or anyone else knows, but the rumors floating around are $250-330, so that is all we have to go by, with performance in the neighborhood of the 6650 XT.
 
Moreover, they're compressing, which means they have all the original data,
The compression (which is lossy) is done offline at the developer's end.
The decompression is what happens on the user's side.
But your other guess isn't far off. The paper does acknowledge the possibility of "generative textures," although that's outside its scope. IIRC, they already did something similar with the Remix AI tech.

As for the grayscale idea, there are already ways to tell the engine the colour of an object without encoding it in a texture's pixels: use a material with a single RGB value for diffuse, or use vertex colors.
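Roughly like this; the names here (Material, diffuse_color, diffuse_map) are made up for illustration and not from the paper or any particular engine:

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class Material:
    # Constant RGB diffuse colour; no texture lookup or VRAM needed for a flat-coloured object.
    diffuse_color: Tuple[float, float, float] = (1.0, 1.0, 1.0)
    # Only point at a texture when the surface actually has detail worth storing.
    diffuse_map: Optional[str] = None

# A 1024x1024 RGBA8 texture that only carries one flat colour wastes ~4 MiB of VRAM:
flat_texture_bytes = 1024 * 1024 * 4
print(f"Flat-colour texture: ~{flat_texture_bytes / 2**20:.0f} MiB for zero extra information")

# The same visual result with no texture memory at all:
red_plastic = Material(diffuse_color=(0.8, 0.1, 0.1))
```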

Depending on how large the textures are, couldn't transferring the compressed textures possibly save you enough time over transferring uncompressed textures that the decompression could be done for free from a time perspective?
That's how it already works.
Major graphics APIs use block compression formats that are unpacked on the GPU. Some engines take it even further by running another layer of compression on top, which is unpacked on the CPU.
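A minimal sketch of that two-layer idea, with zlib standing in for whatever general-purpose codec an engine actually uses (Oodle, Deflate, etc.); the function names and file layout are invented for the example:

```python
import zlib

def pack_for_disk(bc_texels: bytes) -> bytes:
    # Build step, developer side: the texture is already in a GPU block-compressed
    # format (BC/DXT); wrap it in a general-purpose codec for storage and I/O.
    return zlib.compress(bc_texels, level=9)

def load_texture(disk_blob: bytes) -> bytes:
    # Runtime, user side: the CPU removes only the outer layer. The data stays
    # block-compressed and is uploaded as-is; the GPU decodes BC blocks when sampling.
    return zlib.decompress(disk_blob)

# Toy round trip with fake block-compressed data
fake_bc_data = bytes(range(256)) * 64
on_disk = pack_for_disk(fake_bc_data)
assert load_texture(on_disk) == fake_bc_data
print(f"disk: {len(on_disk)} bytes, VRAM: {len(fake_bc_data)} bytes (still BC-compressed)")
```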

It has the potential to reduce VRAM usage, yes, but given the current market, for end consumers the benefit is likely to be nothing. Can you blame people for not rooting for a feature that will only be used to further increase Nvidia's margins?
but if it's just the "tensor cores" that Nvidia uses then it will most likely be exclusive to Nvidia cards.
The concept isn't limited to Nvidia cards. While they could, in a cynic's view of the future, throw some proprietary sauce in the mix, for the method to actually be adopted it needs to be supported by major APIs. And for that, the means to implement at least the decoder must be accessible to all applicable vendors.
No developer would implement a critical pipeline stage that isn't widely supported across vendors. Unlike DLSS or whatever proprietary crap is common these days, texture decompression isn't just something you can toggle on/off. The only way they'd implement this (and we should remember that this is feasibility research, not a ready-to-use product) is either by having a custom decompressor for non-supporting hardware, or by shipping two sets of textures, in NTC and DXT/BC. Both approaches obviously mean more work, more storage requirements, and less performance.
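To make that trade-off concrete, here is a hedged sketch of what the "two sets of textures" path could look like in a loader; every name in it (supports_ntc, load_ntc, load_bc) is hypothetical, not from the paper or any real API:

```python
def load_texture(name: str, supports_ntc: bool) -> bytes:
    # Pick the asset variant the hardware can actually decode.
    if supports_ntc:
        return load_ntc(f"{name}.ntc")   # small neural-compressed variant
    return load_bc(f"{name}.dds")        # conventional BC/DXT fallback, shipped separately

def load_ntc(path: str) -> bytes:
    # Placeholder: a real implementation would run the neural decoder here
    # (on tensor cores, or as a slower compute-shader fallback).
    return b"decoded-ntc-texels"

def load_bc(path: str) -> bytes:
    # Placeholder: read a standard block-compressed texture from disk.
    return b"bc-texels"

# The cost the post is pointing at: every texture now has to exist twice on disk,
# and only users on supporting hardware see any VRAM or bandwidth benefit.
print(load_texture("brick_wall", supports_ntc=False))
```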
 
I'm all for finding ways to use hardware better, but why does Nvidia seem so reluctant to increase VRAM? Is there an industry shortage?
This was a thought I had: https://www.techpowerup.com/forums/threads/general-nonsense.232862/post-5013923

No industry shortages, just pandemic-type lockdowns (VRAM) alongside pandemic-prolonged MSRPs. Pay more, get less and repeat more often (scheming obsolescence).

The reluctance doesn't end there; there are heaps of professional/content-creation workloads which demand more than 8GB. Nvidia's got a firm footing in these types of pro-build use cases, with a pretty strong client base. The last thing they want is abundant VRAM on more affordable mid-tier consumer cards, which would consequently mean less profitability. Speaking of pro-series workloads, it's not just the pricier enterprise cards (Quadro and whatnot) but also the extortionately priced prosumer cards (very popular outside of gaming).

8GB is still highly relevant, seeing as not everyone is playing the latest and most graphically challenging titles, let alone on higher-resolution displays. It's also a good solution for tight-budget builds where compromises are accepted. All of that somewhat belongs in the ~$250/~$300 category. For a $400-$500 card, the mid-segment of high-performance cards, the 8GB rationing is disconcerting and a tough one to swallow, seeing as some games are already showing signs of tipping over the limit. That's the bit that annoys me the most... pushed to the limit before change can materialise, while the oodles of graphical splendour we could have are wished away by current VRAM limitations (tip or no tip).
 
Interesting ideas from Nvidia, but I think I share similar concerns with others in the thread in that this will be used as a shortcut for bad development, similar to upscaling now. Why bother with optimization when DLSS/FSR/Shiny-new-AI-technology-of-the-day will fix it? I guess only time will tell at this point.

I don't think this is damage control from Nvidia in response to low VRAM complaints. Rather I think that AI is currently Nvidia's focus and this is another product banking on it (Microsoft and the cloud anyone?). The timing is mildly inconvenient from a perception standpoint, but I doubt Nvidia are fussed by that.
 
They will have cores for path-tracing and will call it the PTX6090; gone are the days of RTX :clap:
Wasn't path-tracing the predecessor to ray-tracing?
 
Be happy with what good can come (from this) and not look for the bad.
Are we happy with the good Nvidia has for us today?
Are you?

I'm certainly not.

We have a live analogy to this fancy new technology: it's called DLSS 3, and Nvidia sells cards at a premium with it.
DLSS also shows how developers use it to save themselves time, instead of it making games truly run better. I'm trying not to look for the bad here, but I just don't have that much cognitive dissonance in my gut, sorry.

If you were hoping for a circlejerk topic on Nvidia's next marketing push... wow. If you were thinking this was a serious technology in its current state... wow x2. Read the article, and please get a reality check. It's full of fantasy, like most current-day articles surrounding the infinite capabilities of AI. Remember Big Data? It was going to complete our lives in much the same way. They call these buzzwords. Good for clicks & attention, and inflated shareholder value.

It can. Suppose they make a 5060 with 24 GB of physical VRAM. If it can neurally show me textures four times larger, let it.
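Back-of-the-envelope for that, assuming the roughly 4x figure actually held up in practice (the numbers are the post's, not measurements):

```python
physical_vram_gb = 24   # hypothetical 5060 from the post above
assumed_ratio = 4       # assumed effective compression of texture data
print(f"Effective texture budget: ~{physical_vram_gb * assumed_ratio} GB")  # ~96 GB
```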

I'm sure more VRAM would draw more power, so first find a way to keep things from flying over 500W, then talk about adding VRAM.
That's easy: you just clock your GPU 100-200 MHz lower instead of pulling boost frequencies way beyond the optimal curve.

There is just one card flying over 500W. It's the one that has 24GB ;) And it cán run way below 500W, still carrying 24GB, and still topping the performance charts.

I'm all for finding ways to use hardware better, but why does Nvidia seem so reluctant to increase VRAM? Is there an industry shortage?
This was a thought I had: https://www.techpowerup.com/forums/threads/general-nonsense.232862/post-5013923
Nvidia has historically always used VRAM to secure its market and limit the potential use of its products, especially on GeForce.
They have also always pushed heavily on reducing VRAM footprint and maximizing usage. So in that sense it's understandable Nvidia wants to 'cash in' on that R&D; that's why they sell their proprietary crap instead of their hardware these days. The hardware is pretty efficient. But it's also monolithic, so Nvidia is rapidly going the Intel way, and it shows, with exploding TDPs even on highly efficient Ada. If it wasn't for Ada's major efficiency gap compared to Ampere, the gen probably wouldn't even have a right to exist. There is literally nothing else in it over Ampere.

For much the same reasons AMD has stalled on RDNA3, and they've forgotten to add some spicy software to keep people interested. But there is a similar efficiency jump there too, and they've focused R&D on the hardware/packaging front, which arguably is a better long term plan to keep cost down without cutting the hardware down completely.

There are no shortages, there is only a matter of cost and profit.
 