
Edward Snowden Lashes Out at NVIDIA Over GeForce RTX 50 Pricing And Value

Are you talking about ray reconstruction in Cyberpunk?
No, not ray reconstruction. This is Shader Execution Reordering (SER). It's in the NVIDIA presentations for Ada, and also for Blackwell (the next version, needed for neural rendering).
And at this point, we know for sure that it's implemented in exactly one game, CP2077, two years after launch. And no one else is going to implement it because of the way it works.
And it's not what you'd call cooperative vectors, which are included in the DirectX API description - that's a completely different feature, but also needed for neural rendering. This again suggests that it won't get any traction in the next 3-5 years, if at all, and the older games won't get it either.
 
The 5080 should have been a 320-bit card with 20GB of VRAM.
This garbage 5080 wouldn't have passed even for a 5070 a couple of years ago, both performance- and spec-wise.
True, but that hypothetical 320-bit 20GB card would have been $1200 MSRP, no doubt. We get what we pay for; here's the proof:
4080 -> 4080S: $200 cheaper, +1% perf
4080S -> 5080: +11% perf, but it overclocks like crazy for another ~10%.
 
Well, not sure if Mr. Edward Snowden is the right man to say this, but I'm not gonna disagree with him. I will go support AMD instead, because I am tired of Nvidia and have been since the RTX 20 series launch :rolleyes:

 
Just wanna reiterate that Snowden was probably not talking about the value of these cards for gaming. His shtick is digital privacy. He wants broader public access to local AI models, not HD textures.

I'm shaking my head a bit at the 5 pages of comments that may as well have been in response to MLID or some YT shoutyhead. Like, whoosh.

So, anyone here tried running Ollama?
Yes, on a 4090 and a 7900 XTX.
And you know, the 7900 is actually twice as good (perf/$), which roughly means that Nvidia has been selling garbage for two years now and is not going to improve.
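If anyone wants to try it, here's a minimal sketch of talking to a local Ollama instance from Python, assuming the server is already running on its default port (11434) and a model has been pulled; the model name below is just an example, pick whatever fits your VRAM.

import requests

# Query a local Ollama server (default port 11434).
# Assumes `ollama pull llama3.1:8b` (or any other model) has already been run;
# the model name is only an example.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3.1:8b",
        "prompt": "Explain VRAM in one sentence.",
        "stream": False,  # return a single JSON object instead of a token stream
    },
    timeout=300,
)
resp.raise_for_status()
print(resp.json()["response"])

The same script works against either card, so perf/$ comparisons like the one above are easy to reproduce yourself.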
 
If you need a card (cause you have say a 3060 or 2060 or whatever) you need a card, period. Doesn't matter how generationally lackluster the new gpus are, you still need a card. So you are going to buy what's best for the money. That's why nvidia sells, not necessarily because it makes good cards every generation, but because it makes better cards than everyone else.
But Nvidia isn't "best for the money". It's, at best, equal for the money, oftentimes worse.

In general performance, Nvidia has the better cards, yes, and if you need 4080S+ tier, there just isn't anything AMD has to offer. But below that, Nvidia isn't better, price-performance wise.
 
And they're currently selling every single 5090 and 5080 they can produce worldwide.

Laughing at everyone.
Yes. 1000 cards for at least 100 million gamers worldwide. :laugh: :laugh: :laugh: :banghead:
 
But Nvidia isn't "best for the money". It's, at best, equal for the money, oftentimes worse.

In general performance, Nvidia has the better cards, yes, and if you need 4080S+ tier, there just isn't anything AMD has to offer. But below that, Nvidia isn't better, price-performance wise.
Of course it is. When you take RT and DLSS into account, things aren't really competitive. When 4K DLSS Quality looks better than 1440p native (while having the same performance), how do you even put a numeric value on that?
 
They didn't cancel pre-orders, they cancelled the money hold and are instead moving to a waitlist system where you pay once they have the card ready to send out.
They actually cancelled both of mine, but I only ordered the second one because the first one said order failed when it actually went through. Either way it was a whole heap of mess and very few cards reached these shores anyway.

I'll see if I can get on the waitlist somehow.
 
"Lashes Out" - is there are a reason TPU are using lazy inflammatory wording. Have some respect yourselves and for the person your quoting.
 
Yes. 1000 cards for at least 100 million gamers worldwide. :laugh: :laugh: :laugh: :banghead:

Which is very strange, because the 4080 and 4090 stopped production a long time ago to ramp up the 50 series, and the result is a paper launch.
 
"Lashes Out" - is there are a reason TPU are using lazy inflammatory wording. Have some respect yourselves and for the person your quoting.

They have changed the title to "What Snowden just said is WILD (shocked face) read till the end (crying laughing emoji)".
 
Which is very strange, because the 4080 and 4090 stopped production a long time ago to ramp up the 50 series, and the result is a paper launch.

This is exactly why I say this "shortage" is a marketing ploy to keep prices higher than they should be for a mediocre generational uplift. Those shipping containers are most likely filled to the brim at the loading docks. We should ask someone working there to check the manifests; you will no doubt find thousands of RTX 5090s, 5080s, and 5070s stored there.
 
You aren't paying the same price if you're buying clusters, surely? I would have thought a 5080 would have skewed more towards gaming / gaming-adjacent. Aren't there better / cheaper / different options for running / training LLMs depending on your use cases? Was / is Snowden into mining?
My understanding: it's not a mining thing, and not really about training either, but mostly about running inference at home on standard consumer hardware. Snowden wants more people to be able to run AI models cheaply at home and not have to send their data to OpenAI / AWS / Microsoft because he's all about data privacy. Nvidia is sandbagging memory capacity and only doing the double-density memory thing for 8->16GB cards like the 4060Ti, but not 12->24 or 16->32 which would be ideal for local inference and allow running 30B-parameter models. The cheapest way to run 30B models is with a used RTX 3090, and has been that way for years now.
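To put rough numbers on that (napkin math only, assuming 4-bit quantized weights and a modest context window; actual usage varies with the quantization format and KV-cache settings):

# Rough VRAM needed to run a ~30B-parameter model locally (illustrative assumptions).
params = 30e9
bytes_per_param = 0.5                        # 4-bit quantization
weights_gb = params * bytes_per_param / 1e9  # ~15 GB of weights
overhead_gb = weights_gb * 0.15              # scales/zero-points, runtime buffers
kv_cache_gb = 3.0                            # depends on context length
total_gb = weights_gb + overhead_gb + kv_cache_gb
print(f"~{total_gb:.0f} GB -> fits on a 24 GB card, not on 16 GB")

Which is exactly why the 24GB used 3090 keeps being the budget recommendation.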

The 5080 with 16GB is a gaming card. With a 32GB variant it could have become a great inference card, and Snowden's gripe is that Nvidia is still forcing the upsell to the 5090, which has more processing and bandwidth capability than is needed for inference (the 5080 too; a 5070 with 24GB or 48GB would be the sweet spot, I think). Nvidia has announced the $3000 "DIGITS" mini-PC for later this year but is conspicuously avoiding the sub-$1000 market.

LLM support for Radeon cards is steadily improving, so maybe AMD is the answer.
 
Let's go AMD, you beat Intel in gaming this generation; time to beat NVIDIA too!
 
It would not be glossy at all with all the damn chalk writing on it, which by the way you can actually freaking see before enabling the damn glossy dog crap that you claim is "realistic". I am not complaining about "realistic", I'm complaining about it being UNREALISTIC FROM ITS ACTUAL USE.

I swear you just sit on this website to troll anyone that replies to you.
Ohhhhhhhhh.

So you were complaining about something you didn't communicate about and expected that everyone reading your post should also be able to read your mind.

That's a bold strategy, Cotton; let's see if it pays off for him.
 
Well, not sure if Mr. Edward Snowden is the right man to say this, but I'm not gonna disagree with him. I will go support AMD instead
The 9070 XT is in a similar performance class and it also has only 16GB of VRAM. But since AMD uses the older and slower GDDR6 instead of GDDR7, they can save about $80.

High Yield estimates the 5090's GPU die and its 32GB of VRAM at about $350 each. For the 5080, the VRAM is half the cost for half the capacity, but the GPU is actually less than half the cost.
So I understand why Nvidia only used 16GB of VRAM, since VRAM is actually the highest-cost component on the 5080. 3GB chips might be even costlier per GB.

That all said, I don't see great performance benefits from GDDR7 on the RTX 5000 series, so Nvidia could have gone with GDDR6(X) and made the cards cheaper, or put more VRAM on them for the same price...
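Taking those numbers at face value (they are High Yield's estimates, not a confirmed BOM, and the 5080 die figure below is just an assumed placeholder for "less than half"), the napkin math looks like this:

# Illustrative cost split only, built from the third-party estimates quoted above.
gddr7_per_gb = 350 / 32                       # ~$11/GB if 32 GB runs ~$350
vram_5090, gpu_5090 = 32 * gddr7_per_gb, 350  # ~$350 each, per the estimate
vram_5080 = 16 * gddr7_per_gb                 # ~$175: half the VRAM, half the cost
gpu_5080 = 150                                # assumed stand-in for "less than half" the 5090 die
print(f"5080: VRAM ~${vram_5080:.0f} vs GPU ~${gpu_5080} -> VRAM is the pricier part")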

Which is very strange, because the 4080 and 4090 stopped production a long time ago to ramp up the 50 series, and the result is a paper launch.
As for the shortage, it is simply Nvidia putting datacenters first... From a business standpoint I can understand it; from an RTX-customer standpoint I don't like it.
GH100/Hopper and GB100/Blackwell-datacenter are made on the same node as desktop Blackwell and Ada...

But Nvidia isn't "best for the money". It's, at best, equal for the money, oftentimes worse.
I would say Nvidia is best per GPU unit, and since your typical gaming PC can only use one of them, people tend to buy the most powerful at a higher price.

Nvidia is sandbagging memory capacity and only doing the double-density memory thing for 8->16GB cards like the 4060Ti, but not 12->24 or 16->32 which would be ideal for local inference and allow running 30B-parameter models.
It is a purely business decision to limit the VRAM on everything but the halo product, to prevent people from using those cards for AI and push them into buying the MUCH more expensive workstation cards.
As for the 8->16GB on the 4060 Ti: Nvidia does know 8GB is not enough for that amount of performance, but 12GB is only possible with a 96- or 192-bit bus; the 4060 Ti has a 128-bit bus, and everything below 128-bit crushes the performance.
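As an illustration (a rough sketch assuming 2GB / 16Gbit GDDR6 chips, one per 32-bit channel, with clamshell doubling the count), here's the capacity math behind that:

# Possible VRAM capacities for a given memory bus width, assuming 2 GB GDDR6 chips.
def vram_options(bus_width_bits, chip_gb=2):
    channels = bus_width_bits // 32           # one 32-bit channel per chip
    return (channels * chip_gb,               # normal layout
            channels * chip_gb * 2)           # clamshell: chips on both sides of the PCB

for bus in (96, 128, 192):
    print(f"{bus}-bit: {vram_options(bus)} GB")
# 96-bit  -> (6, 12) GB
# 128-bit -> (8, 16) GB   <- the two 4060 Ti configurations
# 192-bit -> (12, 24) GB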
 
Hopefully AMD prices their GPUs extremely aggressively and is able to really trounce Nvidia this generation, but as I've written already, the 9070 XT needs to be $520 or less at 4080 performance levels; that is the only way they are going to cut massively into Nvidia.

Also, make up some bullshit feature, doesn't have to be anything meaningful or useful, but advertise the hell out of it as the best thing since sliced bread and only make it available on Radeon GPUs. Additionally, bundle your graphics cards with Ryzen CPUs and watch them fly off the shelves.
 
This is exactly why I say this "shortage" is a marketing ploy to keep prices higher than they should be for a mediocre generational uplift. Those shipping containers are most likely filled to the brim at the loading docks. We should ask someone working there to check the manifests; you will no doubt find thousands of RTX 5090s, 5080s, and 5070s stored there.
I'm in a Discord server with someone who bought a 5080 that was manufactured on 17 January... sounds like Nvidia rushed the launch.
 
He's not wrong, and the 5080 we got should have been a 5070 Ti at best. Even just looking at a 5080 board, it looks like it should be a $700 GPU with a midrange-size die.
I think you hit the nail on the head. With the 40 series, Nvidia offered two 4080s until they were forced to remove one because of the backlash against the base model. This time they figured it out: just don't offer the higher model, so the crap card can't be compared to anything. Problem solved, because the masses will still buy the shit card.
 
Of course it is. When you take RT and DLSS into account, things aren't really competitive. When 4K DLSS Quality looks better than 1440p native (while having the same performance), how do you even put a numeric value on that?
RT works way better with the 7000 series (than with the 6000). FSR is fine, too. It is correct that Nvidia is oftentimes the frontrunner for new technologies, but in the long run, AMD catches up and gives better value for the money. FreeSync, FSR and FG are (or will be) good examples.

But let's be honest: if you really wanna use RT and not take a big performance hit, you still need high end (4080+). AMD has nothing there. But I think the software implementations will get better over the next few years, so that you can enable RT without the performance completely tanking.
 
No, not ray reconstruction. This is Shader Execution Reordering (SER). It's in the NVIDIA presentations for Ada, and also for Blackwell (the next version, needed for neural rendering).
And at this point, we know for sure that it's implemented in exactly one game, CP2077, two years after launch. And no one else is going to implement it because of the way it works.
And it's not what you'd call cooperative vectors, which are included in the DirectX API description - that's a completely different feature, but also needed for neural rendering. This again suggests that it won't get any traction in the next 3-5 years, if at all, and the older games won't get it either.
Sounds like a glaring oversight has been made, then. If SER hardware is mandatory to enable neural rendering, then it should have become a standard feature of DirectX. And how is Intel also working on neural rendering if they are not going to implement something similar to SER?
 
Doesn't G-Sync (specifically dedicated hardware) work better for flickering? I'm not sure how much difference these new Mediatek scalers will make?! It's a bridge I'll have to cross at some point in the near future.

Sounds like a glaring oversight has been made, then. If SER hardware is mandatory to enable neural rendering, then it should have become a standard feature of DirectX. And how is Intel also working on neural rendering if they are not going to implement something similar to SER?
They certainly like to use the term "suite of techniques"
 
Doesn't G-Sync (specifically dedicated hardware) work better for flickering? I'm not sure how much difference these new Mediatek scalers will make?! It's a bridge I'll have to cross at some point in the near future.
No idea. I use FreeSync in Windows and Linux and had no problems with that in-game. I had that problem in loading screens, but you know... who cares ^^
 
I could turn that around and ask why you have such a problem with upscalers, but I won't because it's not liable to generate a productive discussion. Instead I highly suggest you do some research on how rasterisation fundamentally works, the inherent problems WRT lights and shadows that are caused by how it works, and how and why ray-tracing doesn't (cannot) suffer from the same drawbacks.

If I had to sum it up in the context of this thread, though: rasterisation is to rendering what upscaling is to native resolution, whereas ray-tracing is rendering at native quality.
What I mean is that the term "tricks and fakery" implies that something is wrong with the way we render our images. As has been proven from time to time, traditional raster can look good.

Ultra realism and photo-accuracy isn't the only way to produce a good-looking image in a game. It's a brush in the painter's hand, just like any other.

If you actually want to learn how game rendering works, start here:
I know how game rendering works. Read my post above to see what I meant earlier.
 