
NVIDIA GeForce RTX 5090 Features 575 W TDP, RTX 5080 Carries 360 W TDP

This looks like just one more point against the claims of the 5080 being faster than the 4090. It's the same process node, only 65% of the cores, a lower power rating, lower memory bandwidth, less memory... the only way this even competes with the 4090 is if there's new DLSS tech, or if they made the 5000 series better at frame-gen and that's how they're compared. Raw raster power, no way.
 
Just buy a 5080 and save $1000+. The performance between a 5090 at 320 W and a 5080 at 360 W is going to be about the same.
But I'd also run the 5080 at 320 W, so the performance difference will still be whatever it ends up being.
 
Well, there is over a 1 GHz difference in core speed between my 4070 Ti and my 3070 Ti, plus more cache and other tweaks. I am sure they have good AI to design these things :D
 
But I'd also run the 5080 at 320 W, so the performance difference will still be whatever it ends up being.
Sure, a 5090 at 320 W will probably be a little bit faster than a 5080 at 320 W, but is it worth the massive difference in price?
 
Sure, a 5090 at 320 W will probably be a little bit faster than a 5080 at 320 W, but is it worth the massive difference in price?
Yes, because when running a GPU like this, your only concern is whether you have a big enough PSU.
 
Sure, a 5090 at 320 W will probably be a little bit faster than a 5080 at 320 W, but is it worth the massive difference in price?
I think the confusion lies in the process nodes. The move from the 2000 series to the 3000 series was horrible because of the 8 nm Samsung node. The situation greatly improved with the 4000 series on the 4 nm TSMC node. That's why the 4090 is such a great performer. But now the 2000-to-3000-series situation is happening again, as the 5000 series is on the same 4 nm TSMC node. Efficiency can only go down if you add 30% more transistors on the same process node, unless you greatly decrease the clock speed, which negates any performance improvement over the previous generation.
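If you want a rough back-of-the-envelope for that: dynamic power goes roughly as switched capacitance × voltage² × clock, so adding transistors on the same node means either more power or lower clocks. A toy Python sketch, with every number made up purely for illustration (not real 40/50-series figures):

Code:
# Toy model of dynamic power: P ~ C * V^2 * f.
# All numbers below are made up for illustration, not real 40/50-series figures.

def relative_power(transistors, voltage, clock):
    # Relative dynamic power, normalised to the baseline chip.
    return transistors * voltage**2 * clock

base = relative_power(1.00, 1.00, 1.00)                  # previous-gen chip on the same node
bigger_same_clock = relative_power(1.30, 1.00, 1.00)     # +30% transistors, same voltage/clock
bigger_downclocked = relative_power(1.30, 0.95, 0.90)    # +30% transistors, -5% V, -10% clock

print(f"+30% transistors, same clock:        {bigger_same_clock / base:.2f}x power")
print(f"+30% transistors, lower V and clock: {bigger_downclocked / base:.2f}x power")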
 
Yes, because when running a GPU like this, your only concern is whether you have a big enough PSU.
The same way when you're buying a Ferrari, your only concern is whether you have space for it in your garage next to your other Ferraris? Um, maybe.

I'm still thinking that if a 5090 performs at 100%, and a 5080 at 320 W performs at 50%, and you can get 60% by running your 5090 at 320 W, then the other 40% is wasted money.

Edit: Then, you basically paid double price for 20% more performance.
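Putting rough numbers on that (the prices are pure guesses, since nothing is official; the performance figures are just the ones from my example above):

Code:
# Hypothetical figures from the example above; the prices are guesses, not real MSRPs.
perf_5080_320w = 50       # relative performance (%)
perf_5090_320w = 60
price_5080 = 1000         # assumed
price_5090 = 2000         # assumed "double price"

extra_perf = perf_5090_320w / perf_5080_320w - 1      # +20%
extra_cost = price_5090 / price_5080 - 1              # +100%
print(f"+{extra_perf:.0%} performance for +{extra_cost:.0%} price")
print(f"Dollars per performance point: 5080 = {price_5080 / perf_5080_320w:.0f}, "
      f"5090 = {price_5090 / perf_5090_320w:.0f}")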

I think the confusion lies in the process nodes. The move from the 2000 series to the 3000 series was horrible because of the 8 nm Samsung node. The situation greatly improved with the 4000 series on the 4 nm TSMC node. That's why the 4090 is such a great performer. But now the 2000-to-3000-series situation is happening again, as the 5000 series is on the same 4 nm TSMC node. Efficiency can only go down if you add 30% more transistors on the same process node, unless you greatly decrease the clock speed, which negates any performance improvement over the previous generation.
I completely agree, although this wasn't my question.
 
Doom is mostly a shooter on rails, or by sections; they couldn't pull that off in an open world, for example. Not to say they didn't do a great job, but not all games are equal.
*cough cough* RAGE *cough cough*

I agree. I want to see speed increases due to advancements in GPU architecture, like I did in the Pascal years, and not due to cramming more parts into a chip and increasing power (what I call brute forcing).
Pascal's main advancements came from cramming significantly more transistors onto a chip with a higher power limit, especially given that Maxwell was its predecessor. Rose-colored glasses and all that.
 
Pascal's main advancements came from cramming significantly more transistors onto a chip with a higher power limit, especially given that Maxwell was its predecessor. Rose-colored glasses and all that.
Well, Maxwell wasn't a bad architecture, either, imo... but I get what you mean.
 
The same way when you're buying a Ferrari, your only concern is whether you have space for it in your garage next to your other Ferraris? Um, maybe.

I'm still thinking that if a 5090 performs at 100%, and a 5080 at 320 W performs at 50%, and you can get 60% by running your 5090 at 320 W, then the other 40% is wasted money.

Edit: Then, you basically paid double price for 20% more performance.
Except it's usually the opposite: the 5090 at 320 W would be putting out 60%, while the 5080 at 320 W would be putting out 50%.

Besides, if you are buying the 5090, it's because the 5080 isn't enough for what you want. For most who want high-end hardware, drawing 525 W isn't a concern. The high end has always had huge power draw (hello, SLI era).
Well, Maxwell wasn't a bad architecture, either, imo... but I get what you mean.
No, it wasn't bad. It was great. My point was that, overall, most GPU generations are defined by MOAR COARS and more power, with the power being offset by smaller nodes. IPC is far less important to GPUs than it is to CPUs; parallelism and clock speeds make a much larger difference. That's been true for a long time.
 
Will I be chill with a 1000 W ATX 3.1 PSU for the 5090? (paired with a 7800X3D)
 
Just buy a 5080 and save $1000+. The performance between a 5090 at 320 W and a 5080 at 360 W is going to be about the same. Maybe, and this is a big maybe, the 5090 will be a little faster, but don't forget that Nvidia is using the same node as the 4000 series. This means the efficiency of the 5000 series will go down as more transistors are added.

This is a buyer beware situation and no company logo on the box beats physics.
You seem to be under the assumption that performance scales linearly with power.
A 5090 at a lower power budget than a 5080 is still going to have almost double the memory bandwidth, and way more cores, even if those are clocked lower.
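A crude sketch of why "wide and slow" tends to win at the same power budget. Every number here is invented for illustration, not real 5080/5090 figures; throughput is assumed to scale with cores × clock and power with cores × voltage² × clock, which ignores memory and static power:

Code:
# Invented numbers: a "narrow" chip pushed hard vs. a "wide" chip at lower voltage/clock,
# both landing at roughly the same power. Not real 5080/5090 figures.

def power(cores, volts, clock):
    return cores * volts**2 * clock

def throughput(cores, clock):
    return cores * clock

p_narrow, t_narrow = power(1.0, 1.05, 1.00), throughput(1.0, 1.00)
p_wide,   t_wide   = power(2.0, 0.90, 0.68), throughput(2.0, 0.68)

print(f"narrow chip: power {p_narrow:.2f}, throughput {t_narrow:.2f}")
print(f"wide chip:   power {p_wide:.2f}, throughput {t_wide:.2f}")   # ~same power, ~36% more throughput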
The same way when you're buying a Ferrari, your only concern is whether you have space for it in your garage next to your other Ferraris? Um, maybe.

I'm still thinking that if a 5090 performs at 100%, and a 5080 at 320 W performs at 50%, and you can get 60% by running your 5090 at 320 W, then the other 40% is wasted money.

Edit: Then, you basically paid double price for 20% more performance.


I completely agree, although this wasn't my question.
Your assumptions are also wrong. A 5090 at 320W is likely to only be 10~20% slower than the stock setting.
The 5080 math is also not that simple because things (sadly) often do not scale linearly like that.
 
Should I write "in my opinion" in front of every post I make? :confused:

I am an Nvidia user, by the way, just not in my main gaming rig at the moment. I've got two HTPCs that both have Nvidia GPUs in them. Does that make me more qualified to comment here?
It seems to be getting to that point; people take the system specs too seriously, lol.
Let me disagree there. The 5090 has double of everything compared to the 5080 (shaders, VRAM, etc.), which means it's already going to be a stupidly expensive card. The 5090 is only GeForce by name, to sell it to gamers. But it is not a card that your average gamer needs. Otherwise, there wouldn't be such a gigantic gap between it and the 5080 in specs.
The 5090 is more of an RTX A-series card than a GeForce card. Double the shaders and VRAM likely means double the price as well; I doubt Jensen is going to be generous, since businesses bought up the 4090.
Maybe it's just me missing the pricing structure of Pascal; there was only a $200 difference between the x80 and the x80 Ti, and the Titan XP wasn't something gamers with money to waste were buying.
 
Sure, a 5090 at 320 W will probably be a little bit faster than a 5080 at 320 W, but is it worth the massive difference in price?
I don't think the performance loss from dropping to 320 W will be even 5%. It's the same with CPUs: you can push 50% extra power for single-digit performance gains.

I'm currently running 320 W with overclocked memory, and it's around 2-3% faster than stock 450 W, so I don't think the 5090 will be any different.
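For anyone who wants to try the same cap on their card, it's one nvidia-smi call (needs admin rights, and the driver clamps the value to what the board allows). A minimal sketch wrapping it in Python:

Code:
# Minimal sketch: set a GPU power limit via nvidia-smi (needs admin/root rights).
# 320 is just the cap I mentioned above; adjust to taste.
import subprocess

def set_power_limit(watts, gpu_index=0):
    subprocess.run(["nvidia-smi", "-i", str(gpu_index), "-pl", str(watts)], check=True)

set_power_limit(320)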
 
the 5090 at 320 W would be putting out 60%, while the 5080 at 320 W would be putting out 50%.
That's exactly what I said.

Besides, if you are buying the 5090, it's because the 5080 isn't enough for what you want. For most who want high-end hardware, drawing 525 W isn't a concern. The high end has always had huge power draw (hello, SLI era).
That's what I think, too. If the 5080 isn't enough, I'm not gonna spend double and then limit my 5090 to be only a little bit faster than the 5080. That would be a huge waste of money.

No, it wasn't bad. It was great. My point was that, overall, most GPU generations are defined by MOAR COARS and more power, with the power being offset by smaller nodes. IPC is far less important to GPUs than it is to CPUs; parallelism and clock speeds make a much larger difference. That's been true for a long time.
Then why do we have massive differences between GPUs such as the 5700 XT vs the Vega 64, the former of which was faster with only 62% of the cores, despite having not much of a clock speed difference?
 
Your assumptions are also wrong. A 5090 at 320W is likely to only be 10~20% slower than the stock setting.
The 5080 math is also not that simple because things (sadly) often do not scale linearly like that.
10-20% is still huge; I don't think it will be over 5%, honestly.
 
I don't think the performance loss from dropping to 320 W will be even 5%. It's the same with CPUs: you can push 50% extra power for single-digit performance gains.

I'm currently running 320 W with overclocked memory, and it's around 2-3% faster than stock 450 W, so I don't think the 5090 will be any different.
You may be right, then. ;)

It seems to be getting to that point; people take the system specs too seriously, lol.
Does it matter, though? Can current AMD users not have an opinion on an Nvidia card and vice versa? Do people sign their souls away when they choose Coca-Cola instead of Pepsi one day? I don't think so.

The 5090 is more of an RTX A-series card than a GeForce card. Double the shaders and VRAM likely means double the price as well; I doubt Jensen is going to be generous, since businesses bought up the 4090.
Maybe it's just me missing the pricing structure of Pascal; there was only a $200 difference between the x80 and the x80 Ti, and the Titan XP wasn't something gamers with money to waste were buying.
Exactly. But now, Nvidia wants even gamers to buy the Titan, ehm... x90 card, despite its price.
 
Just tested it in CP2077: 73 fps @ 440-450 W, 71 fps @ 320 W. That's at 4K native.
That's awesome! :) Sometimes it's nice to be proven wrong. :ohwell:
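Running the numbers on that quick test (taking ~445 W as the midpoint of the stock reading), the efficiency gain is far bigger than the 2 fps drop suggests:

Code:
# Figures from the CP2077 test quoted above; 445 W is the midpoint of the 440-450 W reading.
fps_stock, watts_stock = 73, 445
fps_capped, watts_capped = 71, 320

print(f"fps lost:     {1 - fps_capped / fps_stock:.1%}")                      # ~2.7%
print(f"power saved:  {1 - watts_capped / watts_stock:.1%}")                  # ~28%
print(f"fps per watt: {fps_stock / watts_stock:.3f} -> {fps_capped / watts_capped:.3f}")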

It raises the question, though: why does the 4090 have to be a 450 W card by default if that doesn't bring any extra performance to the table? What is Nvidia aiming at with such a high power consumption?
 
Haven't the official/public TDP numbers technically been TGPs - as in whole-card consumption - for a while now? For both AMD and Nvidia, the power consumption numbers measured in reviews are within measuring error of the power limit that is set to the TDP. There was a point where GPU manufacturers tried to make things complicated, but that did not last long.
TGP is only for the GPU chip. TBP is Total Board Power.
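Whatever we call it, the number the driver actually enforces is easy to check; nvidia-smi reports both the live draw and the enforced limit. A quick sketch (the printed line is only an example of the output format):

Code:
# Quick sketch: read the current draw and the enforced power limit from nvidia-smi.
import subprocess

out = subprocess.run(
    ["nvidia-smi",
     "--query-gpu=name,power.draw,power.limit,power.max_limit",
     "--format=csv,noheader"],
    capture_output=True, text=True, check=True,
)
print(out.stdout.strip())
# prints something like: NVIDIA GeForce RTX 4090, 98.00 W, 450.00 W, 600.00 W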
 
That's awesome! :) Sometimes it's nice to be proven wrong. :ohwell:

It raises the question, though: why does the 4090 have to be a 450 W card by default if that doesn't bring any extra performance to the table? What is Nvidia aiming at with such a high power consumption?
I think the 450 W makes the card faster in other workloads (compared to being restricted to 320 W), but as far as I've tested, games seem to be limited by memory bandwidth, so they don't scale that much with power. If there are non-gaming workloads that don't depend on memory as much, I guess the 450 W will give better performance. Still, I don't expect anything over 10% in either case. What's Nvidia thinking? Probably the same thing Intel is thinking when they decide to ship CPUs at 400 watts :D

OCing the VRAM gives me ~8-9% more performance; overclocking the core to 3000 MHz gives me ~2%.
 
That's awesome! :) Sometimes it's nice to be proven wrong. :ohwell:

It raises the question, though: why does the 4090 have to be a 450 W card by default if that doesn't bring any extra performance to the table? What is Nvidia aiming at with such a high power consumption?
So that for some very specific cases it can stretch its legs all the way, using an extra 100 W for a 100 MHz bump for those synthetic benchmark scores.
Same goes for the 600 W limit some models have: really pushing the power envelope for minor clock gains. A reminder that, past a certain point, the power cost of each extra bit of performance grows steeply.

Both my 3090s have a default power limit of 370 W, whereas at 275 W I lose less than 10% of the performance.

Here's a simple example of power scaling for some AI workloads on a 3090; you can see that after some point you barely get any extra performance when increasing power:
[Chart: performance vs. power limit for an AI workload on a 3090]


That has been the case since... always. Here's another example with a 2080 Ti:
[Chart: performance vs. power limit on a 2080 Ti]


Games often don't really push a GPU that hard, so the consumption while playing is often lower than the actual limit.
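If anyone wants to reproduce curves like those on their own card, it's just a loop: set a limit, run whatever workload you care about, record the result. A rough sketch; "./my_benchmark" is a placeholder you'd swap for your own workload, and the limit changes need admin rights:

Code:
# Rough sketch of a power-limit sweep. "./my_benchmark" is a placeholder for
# whatever workload you want to measure; swap in your own and parse its output.
import subprocess, time

def set_power_limit(watts):
    subprocess.run(["nvidia-smi", "-pl", str(watts)], check=True)    # needs admin rights

results = {}
for watts in range(220, 401, 30):                                    # 220 W .. 400 W in 30 W steps
    set_power_limit(watts)
    start = time.time()
    subprocess.run(["./my_benchmark"], check=True)                   # placeholder workload
    results[watts] = time.time() - start                             # seconds; lower is better

for watts, seconds in results.items():
    print(f"{watts:3d} W -> {seconds:6.1f} s")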
 
So that for some very specific cases it can stretch its legs all the way, using an extra 100 W for a 100 MHz bump for those synthetic benchmark scores.
Same goes for the 600 W limit some models have: really pushing the power envelope for minor clock gains. A reminder that, past a certain point, the power cost of each extra bit of performance grows steeply.

Both my 3090s have a default power limit of 370 W, whereas at 275 W I lose less than 10% of the performance.

Here's a simple example of power scaling for some AI workloads on a 3090; you can see that after some point you barely get any extra performance when increasing power:
[Chart: performance vs. power limit for an AI workload on a 3090]
What that diagram tells me is that the 3090 should be a 250-260 Watt card. There is no need for it to eat more than that out-of-the-box. Overclockers would be happy with that, too.
 