You actually agreed with the opposing point. PC enthusiasts are willing to spend "small fortunes" on hardware they covet, regardless of how that hardware actually performs. All told, I've spent more on mechanical keyboards in the last few years than the price of a 5090. Am I an enthusiast? No one said "enthusiast = spending money" or "buying the very fastest". It's about not allowing money (and time) to stand in the way. Your enthusiast who restores the old PC may only spend "10 quid" on the purchase -- but he may very well spend hundreds of hours of his time restoring it.
The fact you consider this in any way relevant demonstrates a rather clear emotional bias.
Those who speak in such absolutes are zealots promoting a cause. For my own personal needs, Nvidia cards are orders of magnitude better than any competing offering, for the sake of CUDA alone. And I won't even mention the horrifically bad power efficiency of most AMD cards.
Puerile analogies, since card makers aren't subtracting features. A better analogy is to note that the automatic windows of today's cars roll up no faster than those from the 1980s. Outrageous! These automakers are cheating us blind!
Honestly, this hyperfixation on VRAM to the exclusion of all else is beyond absurd. Even in the face of benchmarks repeatedly showing that 8GB is plenty for 1080p and 1440p resolutions, you continue the farce. It's a mentality that explains why AMD is currently working on 32GB variants that, despite performing no better for you gamers, will still be bought in droves, simply so you can come here and boast about the size of your VRAM buffers. The customer is always right ... even when they're not.
1) The point, as I read it, was that enthusiast items are how you sell things like extra RAM, the same way people go out and buy absurdly expensive supercars. My point was that big spending is not the sole domain of enthusiasm...because even ancient cars have enthusiasts. Relative cost was not a point I made, because I also cited gutting antique cars and installing new hardware. You want to make enthusiasm about cost. My point is that enthusiasm is not defined by cost.
2) Nvidia is not constrained by funding for design. The reason their fluid capital matters is the same reason Nissan is the example here: because Nissan doesn't have a ton of money, it cannot afford the research to develop anything new. Nvidia can afford it, and instead of developing anything new they've decided they are a software company. That is a fact; how do you read it as me being against them? I do love that you read things in that aren't there to suit your bias...but the core argument that they have money is a statement that they should be doing something to move development forward, rather than simply waiting for another company to hand them a die shrink.
There is an argument to be made that they are doing this with frame generation...but it's a bad one. It's also not a great look when you, as a company, want to put most of your eggs in the AI-driven-improvement basket while offering lower-end cards with insufficient VRAM to support meaningful AI...which is the core debate of this whole thread.
3) You see, this is just bull ****. You quote without context, and offer an absolute as the answer to what you claim is an absolute. It seems really meaningful when you do it...until you think for two whole nanoseconds and realize you just wanted to sound profound. Let me simplify this for you. Near the same price point, AMD offers more VRAM of the same type. Intel offers more VRAM, and they offer competing ray tracing. Which is best? The answer is none of them. There is no clear winner. There's a case for every card in that competitive segment, which means that by definition Nvidia does not make the best part, but does control its placement in the market to make it as profitable for them as possible. This would be absolutely different if there weren't an 8 GB and a 16 GB model at the same performance level...but instead of making a product that nobody else can compete with, they, like Intel, have decided to do just enough. Great for competition, not so great for making the best.
4) Do you live under a rock? I ask because the issue with PhysX is something you should understand. The ever-shrinking memory bus, which Nvidia claims is fine because they manage memory better, is a feature being pared back. The fact that they've been behind on VRAM quantity is a missing feature. I...want to give you the benefit of the doubt, but that would also mean drinking the proverbial Kool-Aid of Nvidia's claims. It's just as bad as AMD's Kool-Aid.
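Since bus width keeps coming up, here's the back-of-the-envelope bandwidth math. The widths and data rates below are illustrative round numbers, not the spec of any particular card; the point is only that a narrower bus breaks even solely when the memory itself gets proportionally faster.

```python
# Peak memory bandwidth = (bus width in bits / 8) * effective data rate (GT/s per pin).
# Widths and data rates are illustrative round numbers, not real SKUs.
def peak_bandwidth_gbs(bus_width_bits: int, data_rate_gtps: float) -> float:
    return bus_width_bits / 8 * data_rate_gtps

print(peak_bandwidth_gbs(256, 14))  # wide bus, slower memory        -> 448.0 GB/s
print(peak_bandwidth_gbs(128, 28))  # half the bus, double the rate  -> 448.0 GB/s
print(peak_bandwidth_gbs(128, 14))  # half the bus, same memory      -> 224.0 GB/s
```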
5) Hyperfixating? That's you projecting. You came to a thread asking if 8 GB cards were worth it, and now complain that people are fixated on VRAM. Isn't that like going to a McDonald's and complaining that the dinner menu has too many hamburgers on it? If you cherry-pick and exclude all cases where VRAM is required, then you'll definitely find that it isn't a factor. I, on the other hand, look at the programs I can use professionally, and will tell you that if ZLUDA hadn't been assassinated by Nvidia, the difference between the 8 GB card and the 16 GB card would be the difference between crashing and not...so my money would be on the 9070 or 9070 XT. If I ran ray tracing, I'd tell you the extra VRAM is the difference between smooth and choppy. If I only ran games from 2015, I'd say 8 GB is more than I'll need for a long time.

People ask about this because a card with a $300 MSRP, a $500 street price, and a +$100 street premium to double the VRAM is a large question mark. It's important for people to whom a $500 expenditure is substantial to know whether that 20% increase in price matters. I say this is a bad choice that Nvidia forced upon the market segment with the 5060, the 5060 Ti 8 GB, and the 5060 Ti 16 GB. You're welcome to say otherwise, but the 2060 is a joke in terms of adoption, and the 3060 is almost beating the 4060 despite being "replaced." The 3060 8 GB vs 12 GB comparison comes down to the 12 GB pulling 25 watts less, which is attributable to not having to constantly bang against memory limits and make intensive calls.
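To put numbers on that 20%: a quick sketch using the rough street prices above. These swing by region and week, so treat them as placeholders, not quotes.

```python
# Rough cost math for the 8 GB vs 16 GB question; prices are placeholders.
street_8gb = 500     # 8 GB card, approximate street price (USD)
vram_premium = 100   # extra street cost to double the VRAM

street_16gb = street_8gb + vram_premium
premium_pct = vram_premium / street_8gb * 100

print(f"16 GB card: ${street_16gb}, a {premium_pct:.0f}% premium over the 8 GB card")
print(f"$ per GB of VRAM: {street_8gb / 8:.1f} (8 GB) vs {street_16gb / 16:.1f} (16 GB)")
```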
Last bit here...the 32 GB version of the card is not for gamers. I have no idea why you would assume that. If you look at the first few people on this forum to respond to that rumor thread, you'll understand that the 32 GB variant is AMD offering a soft solution for people who want an AI accelerator or professional hardware that burns through VRAM, without having to spend a small fortune. I'd nominate it for 3D rendering, general rendering, 3D scanning, and LLM playtime. With the jump from 16 to 32 you are not doing it for gaming unless you want 4K/8K; you are doing it to feed stupidly large amounts of data. You're welcome to believe otherwise, but I'll bet my left pinky finger that this will be the discussion again in 4-6 years when both AMD and Nvidia release crap that has to support whatever the newest toy is: TressFX, ray tracing, or whatever it's called at the time.
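For the LLM-playtime point specifically, here's the weights-only back-of-the-envelope that makes 32 GB attractive. The parameter counts and precisions are generic examples rather than any specific model or card, and real usage adds KV cache, activations, and framework overhead on top, so treat these as optimistic floors.

```python
# Weights-only VRAM estimate for running a local LLM.
# Real usage is higher (KV cache, activations, framework overhead).
BYTES_PER_PARAM = {"fp16": 2.0, "int8": 1.0, "int4": 0.5}

def weights_gib(params_billions: float, precision: str) -> float:
    return params_billions * 1e9 * BYTES_PER_PARAM[precision] / 1024**3

for size_b in (7, 13, 30):
    line = ", ".join(f"{p}: ~{weights_gib(size_b, p):.1f} GiB" for p in BYTES_PER_PARAM)
    print(f"{size_b}B model weights -> {line}")
```

A 13B model at fp16 already blows past 16 GB before a single activation is allocated, which is exactly the buyer a 32 GB card is aimed at.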