
Gamers Reject RTX 5060 Ti 8 GB — Outsold 16:1 by 16 GB Model

There's no guarantee that the lower end cards will even be able to run Neural Texture compression to a satisfactory level.

A bit of a paper feature right now. It might be good in due time, but anyone who needs more than 8GB today, rather than down the road, would be naive to get the 8GB version. I'd have to see more working examples of it in action before I'd consider it a legitimate must-have feature to base an entire GPU purchase around, given how much it could affect the experience. It has potential, but it needs to mature, unless it turns out to be a toggle that works seamlessly across everything old and new and people are happy with the results.

If it were AVIF decoding hardware on a GPU, that would probably be worth considering, since that's at least somewhat similar and it's easy enough to convert PNG or other image formats to AVIF.
 
Maybe these console ports are just poorly made? In Spider-Man you've got the 7600 XT barely beating the 4060 and losing to the 8GB 3060-Ti. And with TLOU: P1 the 3060 12GB gets the same performance as the 4060 8GB, while the 7600 XT loses to the 8GB 3070.

These seem to be edge cases where some extra VRAM provides benefits when comparing some equally strong cards but not when comparing others. There's no apparent point where performance drops off a cliff, as you'd expect if shared memory in system RAM had to be used. TLOU: P1 also required something like 20 minutes of shader compilation, and would frequently crash on Nvidia GPUs. I don't think the devs put a ton of time or effort into these games, and it obviously shows in areas beyond pure performance. If there are no other examples beyond poorly done Sony ports, that's not a very convincing argument.

[Attached benchmark chart]

This is what an actual VRAM limitation looks like. 12GB and below, literally not working. 16GB and above, constrained by horsepower, not memory (except for the 6700 XT, you go little buddy!)
VRAM limitations don't only show up as sheer drops. They can also show up as severe frame-time spikes whenever something has to be pulled from system RAM, or as missing or heavily degraded textures, which can actually INCREASE FPS. Techspot has numerous reviews showing this over the years.
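As a toy illustration with made-up frame times (not numbers from any review), here's how the average can look fine while the 1% lows give the stutter away:

```python
# Toy sketch with invented frame times (ms): average FPS barely moves while
# the 1% lows expose the stutter caused by assets being pulled from system
# RAM over PCIe. None of these numbers come from a real benchmark.

def avg_fps(frame_times_ms):
    return len(frame_times_ms) * 1000.0 / sum(frame_times_ms)

def one_percent_low_fps(frame_times_ms):
    slowest = sorted(frame_times_ms, reverse=True)
    slowest = slowest[:max(1, len(slowest) // 100)]  # worst 1% of frames
    return len(slowest) * 1000.0 / sum(slowest)

# 990 smooth 16.7 ms frames plus 10 spikes of 120 ms each
frames = [16.7] * 990 + [120.0] * 10
print(f"avg: {avg_fps(frames):.1f} fps, 1% low: {one_percent_low_fps(frames):.1f} fps")
# average only drops from ~60 to ~56 fps, but the 1% low collapses to ~8 fps
```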

I think most of us can agree that 8GB, a VRAM capacity that first launched 12 years ago, has well and truly outlived its usefulness on anything over display adapter level. The 5060 Ti is easily capable of saturating an 8GB buffer, and I'd bet even the 5060 is held back.
 
I think most of us can agree that 8GB, a VRAM capacity that first launched 12 years ago, has well and truly outlived its usefulness on anything over display adapter level
No, I don't think most people agree on that, actually, given that over 50% of people are still using GPUs with 8GB or less.
 
Maybe these console ports are just poorly made?
Console ports aren't a valid excuse in 2025, not when x60 cards significantly outperform a PS5. Having more VRAM shows in many titles through fewer frame drops and better 1% lows.
As for AW2, it's yet another Nvidia tech demo to show off RT on the $1500+ cards.
No, I don't think most people agree on that, actually, given that over 50% of people are still using GPUs with 8GB or less.
Over 50% of people still have 8GB or less because Nvidia has been stuck on 8GB or less for so long, as was already pointed out. Those on laptop GPUs are getting shafted even harder.
 
Over 50% of people still have 8GB or less because Nvidia has been stuck on 8GB or less for so long, as was already pointed out. Those on laptop GPUs are getting shafted even harder.
So do you agree with him that 8GB "has well and truly outlived its usefulness on anything over display adapter level"?
 
A 15% performance hit on a mere 272 MB of data seems unworkable. Are you saying the performance hit isn't going to scale up when we're talking about 8-16GB of data? It will: you're feeding the NN more data, so it requires more processing power. The only fixed overhead is the VRAM footprint, which will remain constant.

The article you got this from specifically points out that this isn't anywhere close to a modern demanding game, and that it's more a proof that the tech works than a "this can run on low-end cards" thing: https://www.pcgamer.com/hardware/graphics-cards/ive-been-testing-nvidias-new-neural-texture-compression-toolkit-and-the-impressive-results-could-be-good-news-for-game-install-sizes/#:~:text=At 1080p, the non-NTC setup runs at,4K—for a 96% reduction in texture memory.

It's an amazing advancement for compression, but until there's a demonstration of it being used in a full game, the numbers so far don't bode well for it. I'd much rather have this creating compressed archives, game installer files, etc.: files I can bake once and only need to handle infrequently.
The question is: what does the scaling curve look like? If 500MB means a 30% performance hit, I can't see this tech becoming useful until they add strong dedicated hardware to handle the decompression. At 8GB the game would be a slideshow.
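Just to put rough numbers on it: if you assume, purely for illustration, that the decompression cost scales linearly with the amount of texture data (which is exactly the unknown here), the 15% hit on 272 MB mentioned above extrapolates like this:

```python
# Hypothetical extrapolation only: assumes the NTC decompression overhead
# grows linearly with the amount of compressed texture data, which has NOT
# been demonstrated anywhere. The baseline FPS is arbitrary.

baseline_fps = 100.0
hit_at_272mb = 0.15                                  # reported ~15% FPS hit at 272 MB
overhead_272mb = 1.0 / (1.0 - hit_at_272mb) - 1.0    # ~0.18x extra frame time

for textures_gb in (0.272, 2.0, 4.0, 8.0):
    scale = textures_gb / 0.272
    fps = baseline_fps / (1.0 + overhead_272mb * scale)
    print(f"{textures_gb:5.3f} GB -> {fps:5.1f} fps ({(1 - fps / baseline_fps) * 100:4.1f}% hit)")
# under this (unverified) assumption, 8 GB of textures would cost over 80% of the FPS
```

The real per-frame cost presumably depends on how much texture data is actually sampled each frame, not the total installed, so treat this as nothing more than a sketch of why the scaling curve matters.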

I've read their GitHub, and from what I understand, the compressed data is that small because it's decompressed by inferring the data, so the compression gets rid of a lot and keeps only what's needed for inference. Some types of data are handled poorly, so you need to convert them before compressing. They even advise against using NTC for certain types of data because the decoding would result in a noticeable loss of quality.

So far most of the research seems to be around pictures and video, since that's what puts the biggest strain on the internet, but I do wonder if it could work for general archives. Pictures and sounds are all about perception, but would it work as well with data that needs more precision? o_O
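As a toy illustration of that last point (plain Python standard library, nothing to do with NTC itself): lossless compression gives you the exact bytes back, while a lossy, perception-style scheme always leaves some error, which is fine for textures or audio but not for arbitrary archive contents:

```python
# Toy contrast, not NTC: lossless compression reconstructs the data
# bit-exactly, while a lossy "keep only what you can perceive" approach
# always introduces some reconstruction error.
import struct
import zlib

values = [i / 1000.0 for i in range(1000)]        # some structured float data
raw = struct.pack(f"{len(values)}f", *values)

packed = zlib.compress(raw, level=9)              # lossless round trip
assert zlib.decompress(packed) == raw             # bit-exact reconstruction

quantized = bytes(int(v * 255) for v in values)   # lossy: floats squashed to 8 bits
restored = [q / 255.0 for q in quantized]
max_err = max(abs(a - b) for a, b in zip(values, restored))
print(f"lossless: exact; lossy max error ≈ {max_err:.4f}")
```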
 
The question is: what does the scaling curve look like? If 500MB means a 30% performance hit, I can't see this tech becoming useful until they add strong dedicated hardware to handle the decompression. At 8GB the game would be a slideshow.

That would be great data to have. Hopefully we get it sometime in the near future.

I've read their GitHub, and from what I understand, the compressed data is that small because it's decompressed by inferring the data, so the compression gets rid of a lot and keeps only what's needed for inference. Some types of data are handled poorly, so you need to convert them before compressing. They even advise against using NTC for certain types of data because the decoding would result in a noticeable loss of quality.

So far most of the research seems to be around pictures and video, since that's what puts the biggest strain on the internet, but I do wonder if it could work for general archives. Pictures and sounds are all about perception, but would it work as well with data that needs more precision? o_O

Makes sense given those file types tend to take up the most space and bandwidth.

In regards to archives, I can imagine AI being immediately useful in selecting the best compression method and settings on a per-file basis, before even considering neural compression. It's usually more efficient to apply the best compression possible, tailored to each specific file type, and then put those files in a compression-less archive for maximum size savings; double-compressing both the file and the archive itself tends to result in little to no size reduction. This is more or less the method employed by repackers, who typically decompress game assets and then recompress them with settings optimized specifically for that game. AI could be useful for sampling a variety of compression encoders/settings and selecting the best one per file. Neural compression goes a step further than that with its use of variational autoencoders.
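A minimal sketch of that repacker-style workflow, with hypothetical file names and simple extension-based selection standing in for actually sampling encoders per file:

```python
# Sketch of the approach described above: compress each file individually
# with a codec suited to its type, then store the results in an archive
# with no further compression, since double-compressing gains little.
# The extension->codec table and file names are illustrative only.
import lzma
import zlib
import zipfile
from pathlib import Path

CODECS = {
    ".txt":  lambda data: lzma.compress(data, preset=9),
    ".json": lambda data: lzma.compress(data, preset=9),
    ".bin":  lambda data: zlib.compress(data, level=6),
    ".png":  lambda data: data,   # already compressed, leave untouched
    ".ogg":  lambda data: data,
}

def pack(files, archive_path):
    with zipfile.ZipFile(archive_path, "w", compression=zipfile.ZIP_STORED) as zf:
        for path in map(Path, files):
            codec = CODECS.get(path.suffix.lower(), lambda d: d)
            zf.writestr(path.name, codec(path.read_bytes()))

# pack(["readme.txt", "config.json", "level01.bin"], "assets_repack.zip")
```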
 
That would be great data to have. Hopefully we get it sometime in the near future.

Don't forget Nvidia never got DMM to work in a shipping product on Lovelace, so until I see a shipping product using it, I'll treat it as a cool tech demo.
 
Neural Texture Compression is not an end-all solution for VRAM issues.
Instead, it is a targeted solution that has to be specifically integrated into game engines.

That is to say, it will not magically reduce VRAM usage across the board, it will only apply in specific games where developers implement it.
Additionally, anything else that is stored in VRAM will still be there, taking up space.

While I am all for improved vendor-agnostic texture compression, all the Neural Texture Compression hype is just that, nVidia marketing hype to sell more cards.
I think you said it well, but to add one final point: if something is given to a developer, they will always be tempted to use it, so while the feature may free up some VRAM, it doesn't mean that VRAM won't be used by something else in the game.
 
I think you said it well, but to add one final point: if something is given to a developer, they will always be tempted to use it, so while the feature may free up some VRAM, it doesn't mean that VRAM won't be used by something else in the game.

Just like DLSS, which isn't used as a bonus anymore but almost as a requirement/crutch to get decent fps in a lot of games, on top of the absolute trash TAA implementations and the net negative that is frame gen, which is being shoved down our throats as extra performance even though it introduces more latency/artifacts...
 
So do you agree with him that 8GB "has well and truly outlived its usefulness on anything over display adapter level"?
IMO, any card above the x50 or basic entry level should have more than 8GB of VRAM. Nvidia is being egregious putting 8GB on the 5060 Ti, and this news shows that at least some buyers know better than to buy into planned obsolescence. Even Nvidia knows 8GB is holding things back, since they attempted to hide reviews of the 8GB version.
 
This headline isn't all that surprising; the 8GB model practically exists to upsell the 16GB model, with a passable use case for older games and eSports. I wonder what the sales figures will show for the 9060 XT 8/16GB.
 
This headline isn't all that surprising; the 8GB model practically exists to upsell the 16GB model, with a passable use case for older games and eSports. I wonder what the sales figures will show for the 9060 XT 8/16GB.

The 5060 vs 9060XT 8GB will be the most interesting to me.
 
Rather, they buy it for CAD or visualization: a cheaper card with the largest amount of built-in VRAM. Nobody buys a card that barely beats a 10 year old 2080 Ti.
 
The 5060 vs 9060XT 8GB will be the most interesting to me.
Indeed, I wonder if Nvidia minus some $$$ will work for them this time.
Nobody buys a card that barely beats a 10 year old 2080 Ti.
Not a concern for the 5060Ti though in your example, because we need to wait another 3.3 years for the 2080Ti to be 10 years old.
 
At this point I think it's fair to say that anyone buying a GPU is more or less informed, and the 8GB version has not been received well in reviews. The stock levels confirm it.

Yes.
Customers are getting more informed.
8GB just isn't enough for a 2025 card.
Actual sales numbers won't lie.
 
Good. 16GB should be the minimum allowed into existence. There are already games, like the latest Indiana Jones, that easily go over 16GB of VRAM consumption, and it's just the first of more to come.
So it's no wonder that 8GB is no longer an option. Sadly, most laptop GPU configurations still ship top GPUs with this crappy amount of VRAM.
 
A 15% performance hit on a mere 272 MB of data seems unworkable. Are you saying the performance hit isn't going to scale up when we're talking about 8-16GB of data? It will: you're feeding the NN more data, so it requires more processing power. The only fixed overhead is the VRAM footprint, which will remain constant.

The article you got this from specifically points out that this isn't anywhere close to a modern demanding game, and that it's more a proof that the tech works than a "this can run on low-end cards" thing: https://www.pcgamer.com/hardware/graphics-cards/ive-been-testing-nvidias-new-neural-texture-compression-toolkit-and-the-impressive-results-could-be-good-news-for-game-install-sizes/#:~:text=At 1080p, the non-NTC setup runs at,4K—for a 96% reduction in texture memory.

It's an amazing advancement for compression, but until there's a demonstration of it being used in a full game, the numbers so far don't bode well for it. I'd much rather have this creating compressed archives, game installer files, etc.: files I can bake once and only need to handle infrequently.
While it's true that more data means more processing power required, we don't know how much "processing power" this neural network is using in these tests. Presumably the neural network is running on the tensor cores, and there could well be untapped performance left.

All I'm saying is that it's too early to be making performance and hardware requirement guesses.
 
Good. 16GB should be the minimum allowed into existence.

These 8/16GB cards serve two purposes:

1. Use the 8GB one to upsell the 16GB one to informed customers.
2. Let system integrators save a few bucks using the 8GB model while still having the "5060 Ti" label on their product page, and sell to less informed customers.

In my opinion, this is actively deceiving the customer and needs to be regulated by consumer protection laws.
 
In my opinion, this is actively deceiving the customer and needs to be regulated by consumer protection laws.
Luckily for people who need the budget options the 8GB cards provide, the law (and common sense) does not agree with your opinion.

Put another way, there is no deception going on. AMD and Nvidia are offering 8GB variants of their GPU products as budget-focused offerings. This is not deceptive, nor dishonest, nor is it a new concept. Making budget cards is and has been a business practice for decades. There is ZERO reason for it to change.
 
Mindfactory buyers aren't a good representation of what actually sells. Consider the complete Ryzen CPU dominance in sales there versus the estimated global PC CPU market share, where AMD and Intel are much more even.

Sure, people who do at least a bit of research and DIY their gaming computers will choose 16 GB cards, but I'm sure Nvidia will sell truckloads of 8 GB cards in various "back to school" PCs and other cheap prebuilds. If the cards pile up in warehouses, they could even become "mandatory", forcing the partners to sell them to customers without even giving them a choice.
 
Nvidia: Here are two 5060 Ti models for you. One at $379 and one at $429 but with double the memory.
Reaction:
Reviewers and gamers bashing the 8GB model online.

AMD pricing its RX 9060 XT with 16GB of VRAM at $349, lower than the 8GB model from Nvidia, while offering comparable performance.
Reaction:
Reviewers and gamers reacting in a positive way to the 16GB model


So, what would someone expect a few weeks later as the logical result?
Someone would expect the 9060 XT to be at least ~$80 cheaper than the 5060 Ti 16GB, but it isn't, not outside Mindfactory (and Mindfactory doesn't ship outside Germany unless you're a business). In actual shops that ship across the EU, the price difference between a 5060 Ti and a 9060 XT is ~35€. Fake MSRP at its finest.

Nvidia is being egregious putting 8GB on the 5060 Ti
Surely AMD is also being egregious putting 8GB on the 9060 XT as well, right?
 
Surely AMD is also being egregious putting 8GB on the 9060 XT as well, right?
Yes, yes they are.
When the amount of VRAM included in otherwise identical products materially affects gaming performance, they should be clearly differentiated from each other.
This is regardless of the manufacturer.
 
No, no they're not.

While that's true, you're failing to understand the purpose of these cards;

Thus..
Then call it an RX 9060 XT-B (or RTX 5060 Ti-B) for Budget and I will have no issue with it.

Expecting every non-informed person out there to clearly understand the potential performance penalties between a [9060XT / 5060Ti] 8GB card and 16GB card is disingenuous.
They will simply see that it is the exact same model, and that one is cheaper.

I understand the need/desire for AMD/NV/etc. to offer budget options... but they should clearly differentiate them as such.
The consumer can still vote with their wallet, if they want the budget offering or not, but passing a budget offering off as the premium option is very scuzzy in my eyes.

Printing extra letters/numbers on boxes is cheap; they have to do it anyway to show that one is an 8GB card and the other is 16GB.
So, just spend these cheap extra letters/numbers on clearly differentiating the two products.

EDIT:

Perhaps call them "Platinum" models?
Makes them still seem high-end, but still clearly different than the standard XT/Ti designations?
 