
NVIDIA GeForce RTX 5090 and RTX 5080 Specifications Surface, Showing Larger SKU Segmentation

If it had a 24V output rail, 1200W would be enough. But with the standard 12V rail, I guess 1500-1600W is a safe minimum, plus a pair of thick cables of course.
You should just accept that 24v won't become a thing anytime soon in PCs lol
 
You should just accept that 24v won't become a thing anytime soon in PCs lol
Yeah, at least in the consumer market. Servers may be a different thing (I have no idea whether they already use it?)
 
You should just accept that 24v won't become a thing anytime soon in PCs lol
Frankly speaking, I don't care right now, since I'm not interested in high-power GPUs at the moment. But if I were, and had to pay more than a grand for a GPU or a high-power GPU/NPU combo, I would stay away until 24V becomes a standard.
So it's not my problem now.

edit
The other solution is just modding the PSU and adding a 24V/12V DC/DC converter right at the power socket(s) of the GPU.
 
Does anyone know why the performance doesn't scale linearly with core counts? The 4080 vs 4090 has about a 60% core difference, but the performance delta is only 25% at 4K.
 
The funniest thing was when it was going to be released as "RTX 4080 12GB" first. :D


x90 is the Titan and x90 Ti is the Titan Black. ;) Remember, the first Titan didn't even come with a full die; hell, even the 780 Ti had a full die (but only 3GB VRAM).

Though they did the same milking with Titan X (Pascal) and Titan Xp.


How fortunate that Seasonic just released a new 2200W unit. :rolleyes:

The x90 and x90 Ti are not TITANs, because TITANs usually have 2x the amount of VRAM. If they made a TITAN Ada, it would have had 48GB of GDDR6X.

Regarding Seasonic, they also have a 1600W unit that is 80+ Titanium (the 2200W is surprisingly only Platinum, even though there's not much difference between them), but I think the 1600W is enough! I wish Corsair would release a new AX1600i with 2x 16-pin connectors! I have an AX1500i and love it!

Does anyone know why the performance doesn't scale linearly with core counts? The 4080 vs 4090 has about a 60% core difference, but the performance delta is only 25% at 4K.

It is due to a memory bandwidth bottleneck.
FYI, the 4090 has a bandwidth of 1,008GB/s whereas the 4080 has 717GB/s, i.e. ~40% more bandwidth despite 68% more CUDA cores...
Also, the 4090 has only 72MB of L2 cache (out of the 96MB of a full AD102 die) and the 4080 has 64MB, so only 12.5% more...
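Those percentages are easy to sanity-check from the published specs; a quick sketch (the CUDA core counts are from public spec listings, not from this thread, so treat them as my assumption):

```python
# Spec deltas between the RTX 4090 and RTX 4080.
# Core counts (16384 vs 9728) are from public spec listings, not this thread.
specs = {
    "cuda_cores": (16384, 9728),
    "bandwidth_gb_s": (1008, 717),
    "l2_cache_mb": (72, 64),
}

for name, (rtx_4090, rtx_4080) in specs.items():
    delta_pct = (rtx_4090 / rtx_4080 - 1) * 100
    print(f"{name}: +{delta_pct:.1f}%")
# cuda_cores: +68.4%
# bandwidth_gb_s: +40.6%
# l2_cache_mb: +12.5%
```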
 
The x90 and x90 Ti are not TITAN yet because the TITAN usually have 2x the amount of VRAM. If they made a TITAN Ada it would have had 48GB

It is due to a memory bandwidth bottleneck.
FYI, the 4090 has a bandwidth of 1,008GB/s whereas the 4080 has 717GB/s, i.e. ~40% more bandwidth despite 68% more CUDA cores...
Also, the 4090 has only 72MB of L2 cache (out of the 96MB of a full AD102 die) and the 4080 has 64MB, so only 12.5% more...
68% more cores and 40% more bandwidth, but it only yields a 25% gain.
Is the L2 cache really bottlenecking the 4090, and will this plateau affect the 5090 as well?



Update: it's official, anyone postulating Blackwell's high prices for likes is a paid troll!
 
68% more cores and 40% more bandwidth, but it only yields a 25% gain.
Is the L2 cache really bottlenecking the 4090, and will this plateau affect the 5090 as well?

I bet power supply and heat dissipation are the real bottlenecks.
 
The x90 and x90 Ti are not TITAN yet because the TITAN usually have 2x the amount of VRAM. If they made a TITAN Ada it would have had 48GB GDDR6X.
What about 3080 12GB and 3080 Ti? :rolleyes:

And the x90 is a Titan replacement, as Nvidia themselves made clear with the 3090's release back then.
 
Even if it's $4000 or more?

I hope I'm wrong, but we might be underestimating how much Nvidia doesn't need Gaming any more.
GeForce GPUs still bring them a lot of money, even if it's true that A.I. brings them a lot more due to its insane margins... their H100s are selling for $30K to $40K per chip!
Nvidia is still a gaming brand, and they know that if the A.I. bubble burst tomorrow, they would have to go back to gaming as their main revenue...

What about 3080 12GB and 3080 Ti? :rolleyes:

And the x90 is a Titan replacement, as Nvidia themselves made clear with the 3090's release back then.
There is a reason why the 3090 and 3090 Ti were not called TITAN, and that's because they are not! TITANs also pack FP64 cores and usually have 2x more VRAM: the 780/Ti had 3GB whereas the TITAN had 6GB, and the 2080 Ti had 11GB whereas the TITAN RTX had 24GB.

68% more cores and 40% more bandwidth, but it only yields a 25% gain.
Is the L2 cache really bottlenecking the 4090, and will this plateau affect the 5090 as well?



Update: it's official, anyone postulating Blackwell's high prices for likes is a paid troll!
Performance never scales linearly, and yes, the L2 cache plays a big role in the Lovelace architecture, hence the "only" 28% more performance at 4K Ultra, but sometimes closer to 40% in Ray Tracing/Path Tracing because that relies on RT core performance.

Ps: we don't know how much L2 cache the 5090 will have, but it could have 96MB this time... the full GB202 has 128MB, so it might still create a bottleneck somewhere, even though the memory bandwidth should be much higher than the 4090's (almost 1.8TB/s vs 1TB/s)
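For what it's worth, the ~1.8TB/s figure follows directly from the rumored 512-bit bus and 28Gbps GDDR7 (both leaked numbers, not confirmed); a quick sketch:

```python
# GPU memory bandwidth = (bus width in bits / 8) * per-pin data rate in Gbps.
# The 5090 numbers (512-bit bus, 28Gbps GDDR7) are rumors; the 4090 ones are official.
def bandwidth_gb_s(bus_bits: int, gbps_per_pin: float) -> float:
    return bus_bits / 8 * gbps_per_pin

print(bandwidth_gb_s(512, 28))  # 1792.0 -> ~1.8TB/s (rumored 5090)
print(bandwidth_gb_s(384, 21))  # 1008.0 -> ~1TB/s (4090, GDDR6X at 21Gbps)
```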

I bet power supply and heat dissipation are the real bottlenecks.
The power is not a limiting factor, because even with the 600W BIOS you don't get a lot more performance!
Overclocking the GDDR6X memory without raising the power limit can sometimes bring you a lot more fps than core overclocking!
God of War: Ragnarök, for example, is very memory-bandwidth bound! I have OC'd the GDDR6X on my 4090 to 25Gbps and it gave me 7% more performance without any core OC.
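For context, bandwidth scales linearly with the memory data rate, so that OC is a bigger bump than it might sound (the 21Gbps stock speed is the standard 4090 figure, not stated above):

```python
# A 4090's GDDR6X runs at 21Gbps stock; overclocking it to 25Gbps raises
# memory bandwidth by the same ratio, since the bus width stays fixed.
stock_gbps, oc_gbps = 21, 25
gain_pct = (oc_gbps / stock_gbps - 1) * 100
print(f"+{gain_pct:.0f}% memory bandwidth")  # +19% bandwidth, for ~7% more fps
```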
 
There are benchmarks where the TITAN RTX is faster than the 3090. Nvidia explained back then that the Titan class gets some of the Quadro features while the GeForce lineup does not.
So no, the x90s are not Titans.
 
Nvidia keeps repeating the 3090 as the 4080 and 5080. Granted, there's the 4nm node, the L2$, and double the clock speed, but that's a given every other gen. 400W is strictly water-cooling territory. Too bad it's not 3nm.

The memory bandwidth increase alone can give a double-digit performance increase over the previous gen (15-20%), even if the rest of the specs are similar. High power consumption might mean either crazy-high GPU clocks, or that it's packed with Tensor cores and RT cores, since the CUDA core and SM numbers are mostly the same.

The performance gap between the 4080 and 4090 is enormous, and the missing 4080 Ti design is obvious there. So the 5080 will fill that gap pretty nicely. The MSRP might match or be slightly lower than the 4090's, though. And retailers will surely price the new gen based on raster performance, not MSRPs.
 
The power is not a limiting factor, because even with the 600W BIOS you don't get a lot more performance!
Overclocking the GDDR6X memory without raising the power limit can sometimes bring you a lot more fps than core overclocking!
God of War: Ragnarök, for example, is very memory-bandwidth bound! I have OC'd the GDDR6X on my 4090 to 25Gbps and it gave me 7% more performance without any core OC.
Yes, you are right: my statement was misleading, I see. Let me explain what I had in mind.

If they put more resources on the silicon, they would have a lot more problems supplying power to them properly, and dissipating the heat as well.

So power delivery and the thermal envelope were limiting factors at the design stage, I'm betting; that's what I should have written in the previous sentence.
 
Yes, you are right: my statement was misleading, I see. Let me explain what I had in mind.

If they put more resources on the silicon, they would have a lot more problems supplying power to them properly, and dissipating the heat as well.

So power delivery and the thermal envelope were limiting factors at the design stage, I'm betting; that's what I should have written in the previous sentence.
Well, the 4090/4080 cooler was already made to sustain 500W easily, and up to 600W too! The 4090 almost never even reaches 450W, so it's almost overkill already.
The 4090 Ti was supposed to be a 600W GPU with a 4-slot cooler... but even the 4090 with a 600W BIOS and fully overclocked doesn't get very high temperatures, so I'm not worried about the 5090. Blackwell is supposed to be a brand-new architecture, whereas Lovelace was more of an Ampere+ architecture. The biggest change was the process node: going from Samsung 8nm (enhanced 10nm) to TSMC 4N (enhanced 5nm) was a big jump!
 
Clearly some 5090s can go up to 800W, and the dual 12V-2x6 connectors are a must, as they double the 5080 in every possible way. Otherwise it gets stuck at 2.4GHz while the 5080 can average up to 3GHz.
 
Well, the 4090/4080 cooler was already made to sustain 500W easily, and up to 600W too! The 4090 almost never even reaches 450W, so it's almost overkill already.
The 4090 Ti was supposed to be a 600W GPU with a 4-slot cooler... but even the 4090 with a 600W BIOS and fully overclocked doesn't get very high temperatures, so I'm not worried about the 5090. Blackwell is supposed to be a brand-new architecture, whereas Lovelace was more of an Ampere+ architecture. The biggest change was the process node: going from Samsung 8nm (enhanced 10nm) to TSMC 4N (enhanced 5nm) was a big jump!
I'm not worried either. Just trying to point out why you can't get significant extra performance without risking substantially shortening the lifespan of the chip, imho.
 
I bet power supply and heat dissipation are the real bottlenecks.
With ideal linear scaling, the performance delta would be close to the delta in cores/bandwidth, or somewhere in the middle, at similar clock speeds. Although my 4090 Suprim Liquid at 3GHz with +100MHz on the VRAM gets about 10 to 15% gains in RT titles over factory settings.
Reminds me of the SLI scaling BS, where 2 GPUs didn't scale 100%, haha; "the monolithic GPU is superior to 2 GPUs" was only half true.
One would hope that scaling would be linear, or close to it, especially at an almost 100% premium, outside a few outliers, just like SLI. Hopefully that 512-bit bus improves scaling for Blackwell.


Update: but then again, if power were the issue, the 4080 at 3GHz with a memory OC would also show a significant performance delta, so you have to look at it at factory settings. Tweaking is not part of the equation, because both sides improve.
 
Pretty much the same, maybe $350. As much as these cards cost, I can buy 2 decent gaming laptops.
Ya, $350 is probably acceptable. I'd pay more if there were BIOS support like motherboards have, and if it were possible to have a SO-DIMM slot so it would be easy to upgrade the RAM capacity. Reality is, all the games I have could run on an RX 6600; the other part is fewer friends gaming, and less time and desire to game.
 
68% more cores and 40% more bandwidth, but it only yields a 25% gain.
Is the L2 cache really bottlenecking the 4090, and will this plateau affect the 5090 as well?



Update: it's official, anyone postulating Blackwell's high prices for likes is a paid troll!

The performance scaling is very poor because of the changes to the SM. From the 3000 series onward, there is one FP32 datapath and one FP32/INT datapath, which Nvidia counts as 2 cores, sharing resources. The 2000 series, though, had one FP32 datapath and one INT datapath, but technically counts as only 1 core. In both versions of the SM, the two datapaths can run operations simultaneously. What this means is that for a workload with a 50/50 mix of INT and FP32, both versions of the SM will be equally occupied (assuming no bottlenecks in other parts of the pipeline).

That said, games do not run 50/50 INT/FP32. They run roughly 23/77, which essentially lines up perfectly with the expected performance uplift of adding FP32 capability to your INT datapath (assuming no other bottlenecks). The INT datapath, which would otherwise sit idle most of the time, can now handle FP32 work, which increases your performance in gaming workloads.

Nvidia has a whitepaper on the 3000 series here: https://www.nvidia.com/content/PDF/nvidia-ampere-ga-102-gpu-architecture-whitepaper-v2.pdf
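A back-of-the-envelope model of that dual-datapath effect (an idealized sketch assuming a perfect scheduler and no other bottlenecks; the 77/23 FP32/INT mix is from the post above):

```python
# Idealized throughput model for one SM partition.
# Work mix per the post: 77% FP32 instructions, 23% INT.
fp32, integer = 0.77, 0.23

# Turing-style partition: one FP32 path + one INT-only path.
# Both run concurrently, so the FP32 path is the bottleneck.
turing_time = max(fp32, integer)

# Ampere-style partition: one FP32 path + one FP32/INT path.
# INT must go to the second path; the FP32 work splits across both.
ampere_time = max((fp32 + integer) / 2, integer)

print(f"ideal uplift: {turing_time / ampere_time:.2f}x")  # 1.54x, before real-world limits
```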
 
Yeah, at least in the consumer market. Servers may be a different thing (I have no idea whether they already use it?)
Some stuff like Nvidia's SXM and its OAM counterpart use 48V to power those 700W+ accelerators.

This is way easier to pull off on a platform where you don't need to care that much about standards and can make your own (such as SXM itself). SXM3 even hinted to manufacturers that they could use a 12V-to-48V booster in their designs to update legacy projects.

TITAN also pack FP64 cores
FP64 on consumer GPUs hasn't been a thing since Kepler. FP64 cores are only a thing on x100 chips now.
The Titan V had them since it used the V100 chip, but the later TITAN RTX did not.
and the 2080 Ti had 11GB whereas the TITAN RTX had 24GB.
The 3080 Ti had 12GB whereas the 3090 had 24GB.
 
There is a reason why the 3090 and 3090 Ti were not called TITAN, and that's because they are not! TITANs also pack FP64 cores and usually have 2x more VRAM: the 780/Ti had 3GB whereas the TITAN had 6GB, and the 2080 Ti had 11GB whereas the TITAN RTX had 24GB.
Did any Titan after the GK110-based ones have any special FP64 performance? Nope. ;)

They've been just glamorized halo-tier cards with an (almost) full die, full memory bandwidth, and a larger VRAM amount. That's why the x90 is the Titan these days, just branded for gamers.

FP64 on consumer GPUs hasn't been a thing since Kepler. FP64 cores are only a thing on x100 chips now.
You were faster; looks like we said the same things.
 
The x90 and x90 Ti are not TITANs, because TITANs usually have 2x the amount of VRAM. If they made a TITAN Ada, it would have had 48GB of GDDR6X.

Regarding Seasonic, they also have a 1600W unit that is 80+ Titanium (the 2200W is surprisingly only Platinum, even though there's not much difference between them), but I think the 1600W is enough! I wish Corsair would release a new AX1600i with 2x 16-pin connectors! I have an AX1500i and love it!



It is due to a memory bandwidth bottleneck.
FYI, the 4090 has a bandwidth of 1,008GB/s whereas the 4080 has 717GB/s, i.e. ~40% more bandwidth despite 68% more CUDA cores...
Also, the 4090 has only 72MB of L2 cache (out of the 96MB of a full AD102 die) and the 4080 has 64MB, so only 12.5% more...
I wouldn't dare to try and find some sort of reason or logic within Nvidia's naming schemes.

The first, foremost, and dare I say only aspect that determines what Nvidia calls A, B or C is marketing strategy. Every single Titan was created with that express purpose: marketing. GTX and RTX were created for marketing purposes, too. They call it whatever it is they want to sell you. It's not necessarily a different product. It's just whatever's deemed popular.
 
Intel: here is a CPU that needs 300W
Nvidia: here is a GPU that needs 600W
Intel: challenge accepted
Me on the sidelines with a power-limited 4090 Suprim Liquid and a 7800X3D with a -25 PBO offset, at 95% of the performance and half the power. :cool:
 
Clearly some 5090s can go up to 800W, and the dual 12V-2x6 connectors are a must, as they double the 5080 in every possible way. Otherwise it gets stuck at 2.4GHz while the 5080 can average up to 3GHz.
If the leak from kopite7kimi is true, the 5090 is a dual-slot GPU, and therefore it's probably liquid-cooled like the MSI 4090 SUPRIM LIQUID X! If so, then AIBs are going to struggle even more to make buyers want theirs. I guess people who want air cooling will go for AIBs, but water cooling is definitely going to become a standard sooner or later if GPUs start pulling 600W+.

I wouldn't dare to try and find some sort of reason or logic within Nvidia's naming schemes.

The first, foremost, and dare I say only aspect that determines what Nvidia calls A, B or C is marketing strategy. Every single Titan was created with that express purpose: marketing. GTX and RTX were created for marketing purposes, too. They call it whatever it is they want to sell you. It's not necessarily a different product. It's just whatever's deemed popular.
As much as naming doesn't mean anything, the TITAN line is definitely aimed at professionals. They have some FP64 cores that consumer GPUs do not have, and they usually have 2x the amount of VRAM for professional workloads, too.
 