
Gamers Reject RTX 5060 Ti 8 GB — Outsold 16:1 by 16 GB Model

Nvidia: Here are two 5060 Ti models for you. One at $379, and one at $429 but with double the memory.
Reaction:
Reviewers and gamers bashing the 8 GB model online.

AMD pricing its RX 9060 XT with 16 GB of VRAM at $349, lower than Nvidia's 8 GB model, while offering comparable performance.
Reaction:
Reviewers and gamers reacting positively to the 16 GB model.


So, what would someone expect a few weeks later as the logical result?




Gamers Reject RTX 5060 Ti 8 GB — Outsold 16:1 by 16 GB Model


At Nvidia headquarters while reading the article



At AMD's headquarters while reading the article and waiting for customers to buy the RX 9060 XT 16 GB model
Yeah, strange, isn't it... that a company that consistently shows up in marketing and releases GPUs on a pretty predictable cadence has a greater following than one that fails even to put an AMD logo on the consoles that carry its silicon, and that has de facto abandoned dGPUs on the PC, unless it decides at the last minute that it hasn't after all.

Got any more comparisons we can laugh about? :D
AMD deserves everything it gets. They're drunk when it comes to gaming GPUs. All in, except not really, depending on the mood of the day. I mean, what the fuck.
 
The greatest obstacle to AMD's success in the GPU market is AMD themselves. Their products are absolutely fine, but the overpromising every single time, followed by massive underdelivery, makes them a laughing stock.

They moved out of the high-end segment to flood us with el cheapo cards, and it turns out Nvidia had more budget offerings and even released them before AMD. That is absolutely crazy.
 
That much I agree with. AMD, especially with their latest generation, has some very compelling products. Almost every single loss in market share has been AMD's fault to some extent, give or take some events depending on the person. Failing to deliver products that can ray trace in time, failing to provide a suitable, usable alternative to CUDA (one that doesn't suck), failing to catch up in RT performance, a lot of the shenanigans there, etc., etc.
Pick and choose; you can find some that you can definitely agree are failings.

That being said, AMD is very much capable of doing great things too. They keep relapsing into the same problems over and over, and I don't personally feel they have reliable (or stable) leadership for their GPU division. They have the money and the engineers (in both senses); there is a road TO success. Easy? No, especially when NVIDIA will do what it can to keep AMD from climbing up the market-share ladder (at least not in a directly monopolistic way, of course, because this is just one section of the market), but it's possible. So where does that leave the blame? Mostly at AMD.

Considering I suspect that Intel's dedicated GPUs are going to be dead in a few years (some of the news I've heard doesn't suggest any confidence that they're a good move, especially with the loss of Pat Gelsinger, who seems to be the one who primarily pushed the concept), I'm really worried about the future of dedicated graphics cards.
 
A lot of people were saying the same about ray tracing, then the same about DLSS, then the same about frame generation. In the end, nearly all recent games have these technologies, because they work well.

You're saying this now, and in two years, when everyone is using it, so will you, and then you'll criticize whatever new technology comes out next.
Frame generation introduces input latency because the generated frames have to be inserted ahead of some of the rendered frames for everything to be presented in the correct order, which means finished frames are held back. That makes it harder to perform timing-dependent reaction commands such as Street Fighter 6's perfect parries, which give you a 2-frame window (at a standardized 60 frames per second) to actually parry a hit and potentially open the opponent up to a punish counter. There are situations where frame generation has to be rejected outright because it harms that kind of game.
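To put rough numbers on that (a back-of-the-envelope sketch; the one-held-frame model and the figures below are my own simplifying assumptions, and real pipelines add generation compute time on top):

```python
# Simplified latency model for interpolation-based frame generation:
# a finished render frame is held back so the generated in-between
# frame can be displayed first. Assumed numbers, for illustration only.

def frame_time_ms(fps: float) -> float:
    return 1000.0 / fps

render_fps = 60.0             # native render rate
display_fps = render_fps * 2  # 2x frame generation
held_frames = 1               # rendered frame waits behind one generated frame

added_latency_ms = held_frames * frame_time_ms(display_fps)
parry_window_ms = 2 * frame_time_ms(60.0)  # SF6 perfect parry: 2 frames at 60 fps

print(f"added latency:        ~{added_latency_ms:.1f} ms")  # ~8.3 ms
print(f"perfect-parry window: ~{parry_window_ms:.1f} ms")   # ~33.3 ms
```

Even in this idealized model, the extra delay eats about a quarter of the parry window before you count the time spent generating the frame itself.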

That seems to have a ~15% performance impact, but NTC-to-BCn transcoding has also been designed to work on weaker and older GPUs. The gains in memory aren't as big, but they're still massive.

But I'm not even sure that neural compression is going to be used to lower the memory requirements of current games, rather than to crank up texture detail while avoiding ballooning the VRAM requirement.
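For a sense of scale, here is the kind of VRAM math involved (the ~1 bit per texel rate for the neural format is purely my assumption for illustration, not a quoted NTC figure):

```python
# Illustrative VRAM footprint for one 4096x4096 material with 4 texture
# maps (e.g. albedo, normal, roughness/metal, AO). The NTC rate below is
# an assumed placeholder, not an official specification.

TEXELS = 4096 * 4096
MAPS = 4

uncompressed_mb = TEXELS * MAPS * 4 / 2**20        # RGBA8: 4 bytes/texel -> 256 MB
bc7_mb          = TEXELS * MAPS * 1 / 2**20        # BC7:   1 byte/texel  -> 64 MB
ntc_mb          = TEXELS * MAPS * (1 / 8) / 2**20  # assumed ~1 bit/texel -> 8 MB

print(f"uncompressed:  {uncompressed_mb:.0f} MB")
print(f"BC7:           {bc7_mb:.0f} MB")
print(f"NTC (assumed): {ntc_mb:.0f} MB")
```

On numbers like these, a developer could ship 8K maps in roughly the budget that 4K BC7 maps occupy today, which is exactly the "more detail, same footprint" scenario.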
That is a nice tech demo. However, it is a very small scene. I am concerned about how much performance will be hammered if neural texture compression is applied to a whole game scene and not just a helmet, head, and other accessories. I would imagine that would amp up the texture cache thrashing quite a bit.

As far as I understand, traditional hardware texture compression boosts texture-mapping performance by reducing texture cache thrashing, because more texels can be stored in the texture cache at the same time. When a texture map needs to be used, the required blocks of the compressed texture are loaded into the texture cache; when any texel is needed, its compressed block is read from the cache and decompressed to produce the texels, and since nearby pixels will probably sample the same block, the neighboring texels are already at hand. Compressed textures therefore help twice: less data has to be loaded from VRAM into the texture cache when a load is necessary, and because more texels fit in the cache in compressed form, fewer compressed texels need to be evicted to make room for others, reducing texture cache thrashing.
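A quick way to see the residency argument (the cache size is a made-up number, purely for illustration):

```python
# How many texels fit in a hypothetical 32 KB texture cache, by format.
# Block-compressed formats keep 4-8x more texels resident than RGBA8.

CACHE_BYTES = 32 * 1024  # assumed cache size, for illustration

formats = {
    "RGBA8 (uncompressed)": 4.0,  # bytes per texel
    "BC7":                  1.0,  # 4x4 block in 16 bytes
    "BC1":                  0.5,  # 4x4 block in 8 bytes
}

for name, bytes_per_texel in formats.items():
    texels = CACHE_BYTES / bytes_per_texel
    print(f"{name:22s}: ~{texels:8,.0f} texels resident")
```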

The potential problem that I can see with neural texture compression that requires the AI hardware to decompress textures before the texture unit can use them is that there might not be enough time to recompress the data into BCn (for Direct3D) or ASTC (for OpenGL or Vulkan) formats and still reap the benefits of traditional hardware compression. That would force the texture cache to accept uncompressed texels, which would force it to evict texels more often to make room for the decompressed NTC texels before those texels can be used. Also, if someone wanted to recompress the NTC textures on the fly, how much shader computational muscle would be needed, and would past GPUs be able to handle that load?
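To illustrate the thrashing worry, here is a toy LRU simulation (cache size, access pattern, and storage densities are all made-up assumptions; real texture caches are far more sophisticated). The same access stream misses far more often when the cached texels are uncompressed:

```python
# Toy LRU texture-cache model: replay one localized access pattern with
# texels stored at BC7 density (1 byte/texel) and uncompressed RGBA8
# (4 bytes/texel), then compare miss rates. All parameters illustrative.
from collections import OrderedDict
import random

def miss_rate(accesses, cache_bytes, bytes_per_texel, line_texels=16):
    capacity = cache_bytes // (line_texels * bytes_per_texel)  # lines that fit
    cache, misses = OrderedDict(), 0
    for texel in accesses:
        line = texel // line_texels        # the 4x4 block holding this texel
        if line in cache:
            cache.move_to_end(line)        # mark as most recently used
        else:
            misses += 1
            cache[line] = True
            if len(cache) > capacity:
                cache.popitem(last=False)  # evict least recently used line
    return misses / len(accesses)

random.seed(0)
# 100k lookups clustered around the middle of a 64k-texel working set
accesses = [int(random.gauss(32768, 8000)) % 65536 for _ in range(100_000)]

print(f"BC7   (1 B/texel): miss rate {miss_rate(accesses, 32*1024, 1):.1%}")
print(f"RGBA8 (4 B/texel): miss rate {miss_rate(accesses, 32*1024, 4):.1%}")
```

Under these assumptions, the uncompressed path holds a quarter as many texels, so the same locality produces many more evictions, which is the thrashing concern in a nutshell.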

I think that we need to see real-time demos of scene-wide neural texture compression in action on multiple GPUs from multiple vendors and generations before we can either accept neural texture compression or find out that it is an impractical technology.
 