Thursday, September 26th 2024

NVIDIA GeForce RTX 5090 and RTX 5080 Specifications Surface, Showing Larger SKU Segmentation

Thanks to the renowned NVIDIA hardware leaker kopite7kimi on X, we are getting information about the final configurations of NVIDIA's first wave of upcoming GeForce RTX 50 series "Blackwell" graphics cards. The two leaked GPUs are the GeForce RTX 5090 and RTX 5080, which now show a more significant gap between the xx80 and xx90 SKUs. For starters, we have the highest-end GeForce RTX 5090. NVIDIA has decided to use the GB202-300-A1 die and enable 21,760 FP32 CUDA cores on this top-end model. Accompanying the massive 170 SM GPU configuration, the RTX 5090 has 32 GB of GDDR7 memory on a 512-bit bus, with each GDDR7 die running at 28 Gbps. This translates to 1,792 GB/s of memory bandwidth. All of this is confined to a 600 W TGP.

When it comes to the GeForce RTX 5080, NVIDIA has decided to further separate its xx80 and xx90 SKUs. The RTX 5080 has 10,752 FP32 CUDA cores paired with 16 GB of GDDR7 memory on a 256-bit bus. With GDDR7 running at 28 Gbps, the memory bandwidth is also halved, at 896 GB/s. This SKU uses a GB203-400-A1 die, which is designed to run within a 400 W TGP power envelope. For reference, the RTX 4090 has 68% more CUDA cores than the RTX 4080, while the rumored RTX 5090 has around 102% more CUDA cores than the rumored RTX 5080, meaning NVIDIA is separating its top SKUs even further. We are curious to see at what price points NVIDIA places its upcoming GPUs, so that we can compare the generational updates and the widened gap between the xx80 and xx90 models.
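As a quick sanity check, peak memory bandwidth follows directly from the bus width and the per-pin data rate. A minimal sketch of that arithmetic (the helper name is ours; the figures are from the leak above):

```python
def mem_bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak bandwidth in GB/s: bus width (bits) x per-pin rate (Gbps) / 8 bits per byte."""
    return bus_width_bits * data_rate_gbps / 8

# Leaked RTX 5090: 512-bit bus at 28 Gbps
print(mem_bandwidth_gbs(512, 28))  # 1792.0 GB/s
# Leaked RTX 5080: 256-bit bus at 28 Gbps
print(mem_bandwidth_gbs(256, 28))  # 896.0 GB/s
```

Halving the bus width at the same data rate halves the bandwidth, which is where the RTX 5080's figure comes from.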
Sources: kopite7kimi (RTX 5090), kopite7kimi (RTX 5080)

181 Comments on NVIDIA GeForce RTX 5090 and RTX 5080 Specifications Surface, Showing Larger SKU Segmentation

#126
pk67
igormpYou should just accept that 24v won't become a thing anytime soon in PCs lol
Frankly speaking, I don't care right now, since I'm not interested in high-power GPUs at the moment. But if I were paying more than a grand for a GPU, or a high-power GPU + NPU combo, I would stay away until 24 V becomes a standard.
So it's not my problem for now.

Edit:
The other solution is simply modding the PSU and adding a 24 V/12 V DC/DC converter right at the power socket(s) of the GPU.
Posted on Reply
#127
Godrilla
Does anyone know why the performance doesn't scale linearly with core counts? 4080 vs 4090 has about 60% core difference but the performance delta is only 25% at 4k.
Posted on Reply
#128
x4it3n
RuruThe funniest thing was when it was going to be released as "RTX 4080 12GB" first. :D


x90 is the Titan and x90 Ti is the Titan Black. ;) Remember, the first Titan didn't even have the full die; hell, even the 780 Ti had a full die (but only 3GB VRAM).

Though they did the same milking with Titan X (Pascal) and Titan Xp.


How fortunate that Seasonic just released a new 2200W unit. :rolleyes:
The x90 and x90 Ti are still not TITANs, because a TITAN usually has 2x the amount of VRAM. If they had made a TITAN Ada, it would have had 48GB of GDDR6X.

Regarding Seasonic, they also have a 1600 W unit that is 80+ Titanium (the 2200 W is surprisingly only Platinum, even though there's not much difference), but I think the 1600 W is enough! I wish Corsair would release a new AX1600i with 2x 16-pin connectors; I have an AX1500i and love it!
GodrillaDoes anyone know why the performance doesn't scale linearly with core counts? 4080 vs 4090 has about 60% core difference but the performance delta is only 25% at 4k.
It is due to a memory bandwidth bottleneck.
FYI, the 4090 has 1,008 GB/s of bandwidth whereas the 4080 has 717 GB/s, i.e. only ~40% more bandwidth despite having 68% more CUDA cores...
Also, the 4090 has only 72 MB of L2 cache (out of the 96 MB of a full AD102 die) while the 4080 has 64 MB, so only 12.5% more...
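A quick sketch of those ratios (bandwidth and L2 figures as quoted above; the core counts of 16,384 and 9,728 are the known 4090/4080 specs):

```python
# (CUDA cores, memory bandwidth in GB/s, L2 cache in MB)
rtx_4090 = (16384, 1008, 72)
rtx_4080 = (9728, 717, 64)

for label, a, b in zip(("CUDA cores", "bandwidth", "L2 cache"), rtx_4090, rtx_4080):
    print(f"4090 has {100 * (a / b - 1):.1f}% more {label}")
# 4090 has 68.4% more CUDA cores
# 4090 has 40.6% more bandwidth
# 4090 has 12.5% more L2 cache
```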
Posted on Reply
#129
Godrilla
x4it3nThe x90 and x90 Ti are not TITAN yet because the TITAN usually have 2x the amount of VRAM. If they made a TITAN Ada it would have had 48GB

It is due to a Memory Bandwidth bottleneck.
FYI the 4090 has a bandwidth of 1,008GB/s whereas the 4080 has 717GB/s aka ~40% more Bandwidth when it has 68% more CUDA Cores...
Also the 4090 has only 72MB L2 Cache (out of 96MB of a full AD102 die) and the 4080 has 64MB, so only 12.5% more...
68% more cores and 40% more bandwidth, yet it only yields a 25% delta in gains.
Is the L2 cache really bottlenecking the 4090, and will this plateau affect the 5090 as well?



Update: it's official, anyone postulating Blackwell's high prices for likes is a paid troll!
Posted on Reply
#130
pk67
Godrilla68% more cores at 40% more bandwidth but yields 25% delta gains.
Does the L2 cache really bottleneck the 4090, and will this plateau affect the 5090 as well?
I bet power supply and heat dissipation are really bottlenecking.
Posted on Reply
#131
Ruru
S.T.A.R.S.
x4it3nThe x90 and x90 Ti are not TITAN yet because the TITAN usually have 2x the amount of VRAM. If they made a TITAN Ada it would have had 48GB GDDR6X.
What about 3080 12GB and 3080 Ti? :rolleyes:

And the x90 is a Titan replacement, as Nvidia themselves made that clear with the 3090's release back then.
Posted on Reply
#132
x4it3n
BwazeEven if it's $4000 or more?

I hope I'm wrong, but we might be underestimating how much Nvidia doesn't need Gaming any more.
GeForce GPUs still bring them a lot of money, even if it's true that A.I. brings them a lot more money due to its insane margins... their H100s are selling for $30K to $40K per chip!!!
Nvidia is still a gaming brand, and they know that if the A.I. bubble burst tomorrow, they would have to go back to gaming as their main source of revenue...
RuruWhat about 3080 12GB and 3080 Ti? :rolleyes:

And the x90 is a Titan replacement as Nvidia made that clear themselves with 3090's release back then.
There is a reason why the 3090 and 3090 Ti were not called TITAN, and that's because they are not! TITANs also pack FP64 cores and usually have 2x the VRAM: the 780/780 Ti had 3GB whereas the TITAN had 6GB, and the 2080 Ti had 11GB whereas the TITAN RTX had 24GB.
Godrilla68% more cores at 40% more bandwidth but yields 25% delta gains.
Does the L2 cache really bottleneck the 4090, and will this plateau affect the 5090 as well?

Update: it's official, anyone postulating Blackwell's high prices for likes is a paid troll!
Performance never scales linearly, and yes, the L2 cache plays a big role in the Lovelace architecture, hence the "only" 28% more performance at 4K Ultra, but sometimes closer to 40% in ray tracing/path tracing because that relies on RT core performance.

PS: we don't know how much L2 cache the 5090 will have, but it could have 96MB this time... whereas the full GB202 has 128MB, so it might still create a bottleneck somewhere, even though the memory bandwidth should be much higher than the 4090's (almost 1.8TB/s vs 1TB/s)
pk67I bet power supply and heat dissipation are really bottlenecking.
Power is not the limiting factor, because even with the 600W BIOS you don't get a lot more performance!
GDDR6X memory overclocking without raising the power limit can sometimes bring you a lot more fps than core overclocking!
God of War: Ragnarök, for example, is very memory bandwidth bound! I OC'd the GDDR6X on my 4090 to 25Gbps and it gave me 7% more performance without any core OC.
Posted on Reply
#133
gffermari
There are benchmarks where the RTX Titan is faster than the 3090. Nvidia explained back then that the Titan class gets some of the Quadro features while the GeForce lineup does not.
So no, the x90s are not Titans.
Posted on Reply
#134
kawice
N/ANvidia keeps repeating the 3090 as the 4080, 5080. Granted, 4nm node, L2$, double the clock speed, but that's a given every other gen. 400W is strictly water cooling territory. Too bad it's not 3nm.
The memory bandwidth increase alone can give a double-digit performance increase over the previous gen (15-20%) even if the rest of the specs are similar. The high power consumption might mean it either has crazy high GPU clocks or is packed with Tensor cores and RT cores, since the CUDA core and SM counts are mostly the same.

The performance gap between the 4080 and 4090 is enormous, and the missing 4080 Ti design is obvious there. So the 5080 will fill that gap pretty nicely. The MSRP might match or be slightly lower than the 4090's, though. And retailers will surely price the new gen based on raster performance and not on MSRPs.
Posted on Reply
#135
pk67
x4it3nThe power is not a limiting factor because even with the 600W BIOS you don't get a lot more performance!
GDDR6X memory Overclocking without raising the power limit can sometimes bring you a lot more fps than Core Overclocking!
God of War: Ragnarök for example is very Memory Bandwidth bound! I have OC'd my GDDR6X to 25Gbps on my 4090 and it gave me 7% more performance without any Core OC for example.
Yes, you are right; I see my statement was misleading. Let me explain what I had in mind.

If they put more resources on the silicon, they would have a lot more trouble supplying power to them properly and dissipating the heat as well.

So power delivery and the thermal envelope were limiting factors at the design stage, I'm betting; that's what I should have written in the previous sentence.
Posted on Reply
#136
Minus Infinity
potsdaman70 series again with 12Gb :mad::rolleyes:
Yes, and the 5070 Super will get 3GB memory dies and come with 18GB on a 192-bit bus for $999.
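As a back-of-the-envelope check, each GDDR die sits on its own 32-bit channel, so the die count is just bus width / 32. A minimal sketch (the helper name is ours; clamshell, i.e. double-sided, layouts are ignored):

```python
def vram_capacity_gb(bus_width_bits: int, die_capacity_gb: int) -> int:
    """Total VRAM: one die per 32-bit channel, ignoring clamshell (double-sided) layouts."""
    return (bus_width_bits // 32) * die_capacity_gb

print(vram_capacity_gb(192, 3))  # 18 GB with 3GB dies, as speculated above
print(vram_capacity_gb(192, 2))  # 12 GB with today's 2GB dies
```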
Posted on Reply
#137
x4it3n
pk67Yes, you are right; I see my statement was misleading. Let me explain what I had in mind.

If they put more resources on the silicon, they would have a lot more trouble supplying power to them properly and dissipating the heat as well.

So power delivery and the thermal envelope were limiting factors at the design stage, I'm betting; that's what I should have written in the previous sentence.
Well, the 4090/4080 cooler was already made to sustain 500W easily, and up to 600W too! The 4090 almost never even reaches 450W, so it's almost overkill already.
The 4090 Ti was supposed to be a 600W GPU with a 4-slot cooler... but even a 4090 with a 600W BIOS and a full overclock doesn't get very high temperatures, so I'm not worried about the 5090. Blackwell is supposed to be a brand-new architecture, whereas Lovelace was more of an Ampere+ architecture. The biggest change was the process node: going from Samsung 8nm (enhanced 10nm) to TSMC 4N (enhanced 5nm) was a big jump!
Posted on Reply
#138
N/A
Clearly some 5090s can go up to 800W, and the dual 12V-2x6 connectors are a must, as they double the 5080 in every possible way. Otherwise it gets stuck at 2.4GHz while the 5080 can average up to 3GHz.
Posted on Reply
#139
pk67
x4it3nWell, the 4090/4080 cooler was already made to sustain 500W easily, and up to 600W too! The 4090 almost never even reaches 450W, so it's almost overkill already.
The 4090 Ti was supposed to be a 600W GPU with a 4-slot cooler... but even a 4090 with a 600W BIOS and a full overclock doesn't get very high temperatures, so I'm not worried about the 5090. Blackwell is supposed to be a brand-new architecture, whereas Lovelace was more of an Ampere+ architecture. The biggest change was the process node: going from Samsung 8nm (enhanced 10nm) to TSMC 4N (enhanced 5nm) was a big jump!
I'm not worried either. Just trying to point out why you can't get significant extra performance without risking substantially shortening the lifespan of the chip, IMHO.
Posted on Reply
#140
Godrilla
pk67I bet power supply and heat dissipation are really bottlenecking.
With ideal linear scaling, the performance delta would be close to the delta in cores or bandwidth, or somewhere in the middle, at similar clock speeds. That said, my 4090 Suprim Liquid at 3GHz with +100MHz on the VRAM gets about 10 to 15% gains in RT titles over factory settings.
Reminds me of the SLI scaling BS where 2 GPUs didn't scale 100%, haha; the claim that monolithic is superior to 2 GPUs was only half true.
One would hope that scaling would be linear or close to it, especially at an almost 100% premium, outside a few outliers, just like with SLI. Hopefully that 512-bit bus improves scaling for Blackwell.


Update: but then again, if power were the issue, the 4080 at 3GHz with a memory OC would also show a significant performance delta, so you have to look at it at factory settings. Tweaking is not part of the equation because both sides improve.
Posted on Reply
#141
mechtech
RandallFlaggPretty much the same, maybe $350. As much as these cards cost, I can buy 2 decent gaming laptops.
Yeah, $350 is probably acceptable. I'd pay more if there were BIOS support like motherboards have, and if it were possible to have a SODIMM slot so it would be easy to upgrade the RAM capacity. The reality is, all the games I have could run on an RX 6600; the other part is fewer friends gaming, and less time and desire to game.
Posted on Reply
#142
evernessince
Godrilla68% more cores at 40% more bandwidth but yields 25% delta gains.
Does the L2 cache really bottleneck the 4090, and will this plateau affect the 5090 as well?

Update: it's official, anyone postulating Blackwell's high prices for likes is a paid troll!
The performance scaling is very poor because of the changes to the SM. From the 3000 series onward, there is one FP32 datapath and one FP32/INT datapath, which Nvidia counts as two cores' worth per SM, sharing resources. The 2000 series, though, had one FP32 core and one INT core per pairing, but only technically counted as one core. In both versions of the SM, operations can run on the two datapaths simultaneously. What this means is that for operations with a 50/50 mix of INT and FP32, both datapaths will be equally occupied (assuming no bottlenecks in other parts of the pipeline).

That said, games do not run 50/50 INT/FP32. They run about 23/77, which essentially lines up with the expected performance uplift of adding FP32 capability to your INT datapath (assuming no other bottlenecks): the ~27% of your INT datapath that would have otherwise remained idle can now handle FP32, which increases your performance in gaming workloads.

Nvidia has a whitepaper on the 3000 series here: www.nvidia.com/content/PDF/nvidia-ampere-ga-102-gpu-architecture-whitepaper-v2.pdf
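The 23/77 arithmetic above can be sketched with a toy throughput model of the two 64-wide datapaths. This is our simplification assuming perfect scheduling and no other bottlenecks, not anything from the whitepaper, and `sm_issue_rate` is an illustrative name:

```python
def sm_issue_rate(fp_frac: float, shared_path_does_fp: bool) -> float:
    """Ideal ops/clock for an SM with two 64-wide datapaths.
    Path A issues FP32 only; path B issues INT only (Turing-style),
    or INT plus leftover FP32 (Ampere-style)."""
    int_frac = 1.0 - fp_frac
    if shared_path_does_fp:
        # Path B must take all INT work; its spare capacity takes FP32,
        # so the busier path carries max(int_frac, half of everything).
        return 64 / max(int_frac, 0.5)
    # Turing-style: each instruction type is pinned to its own path.
    return 64 / max(fp_frac, int_frac)

turing = sm_issue_rate(0.77, shared_path_does_fp=False)  # ~83 ops/clock
ampere = sm_issue_rate(0.77, shared_path_does_fp=True)   # 128 ops/clock
print(f"uplift at a 77/23 FP/INT mix: {ampere / turing:.2f}x")
```

At a 23% INT share, the shared path spends 23% of its slots on INT and the remaining 27% of all work runs as FP32 alongside it, which is where the 27% figure above comes from.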
Posted on Reply
#143
igormp
RuruYeah, at least in the consumer market. Servers may be a different thing (I have no idea whether they already use it?)
Some hardware, like Nvidia's SXM and its OAM counterpart, uses 48 V to power those 700 W+ accelerators.

This is way easier to pull off on a platform where you don't need to care that much about standards and can make your own (such as SXM itself). SXM3 even hinted to manufacturers that they could use a 12 V to 48 V booster in their designs to update legacy projects.
x4it3nTITANs also pack FP64 cores
FP64 on consumer GPUs hasn't been a thing since Kepler. FP64 cores are only found on x100 chips now.
The Titan V had them since it used the V100 chip, but the later Titan RTX did not.
x4it3nthe 2080 Ti had 11GB whereas the TITAN RTX had 24GB.
The 3080 Ti had 12GB whereas the 3090 had 24GB.
Posted on Reply
#144
Ruru
S.T.A.R.S.
x4it3nThere is a reason why the 3090 and 3090 Ti were not called TITAN, and that's because they are not! TITANs also pack FP64 cores and usually have 2x the VRAM: the 780/780 Ti had 3GB whereas the TITAN had 6GB, and the 2080 Ti had 11GB whereas the TITAN RTX had 24GB.
Did any Titan after the GK110 based ones have any special FP64 performance? Nope. ;)

They've just been glamorized halo-tier cards with an (almost) full die, full memory bandwidth, and a larger VRAM amount. That's why the x90 is the Titan these days, just branded for gamers.
igormpFP64 on consumer GPUs haven't been a thing since Kepler. FP64 cores are only a thing on x100 chips now.
You were faster; looks like we said the same things.
Posted on Reply
#145
Vayra86
x4it3nThe x90 and x90 Ti are not TITAN yet because the TITAN usually have 2x the amount of VRAM. If they made a TITAN Ada it would have had 48GB GDDR6X.

Regarding the Seasonic they also have a 1600W that is 80+ Titanium (the 2200W is surprisingly Platinum even though there's not much difference) but I think the 1600W is enough! I wish Corsair would release a new AX1600i with 2x 16-pin connectors! I have a AX1500i and love it!



It is due to a Memory Bandwidth bottleneck.
FYI the 4090 has a bandwidth of 1,008GB/s whereas the 4080 has 717GB/s aka ~40% more Bandwidth when it has 68% more CUDA Cores...
Also the 4090 has only 72MB L2 Cache (out of 96MB of a full AD102 die) and the 4080 has 64MB, so only 12.5% more...
I wouldn't dare to try and find some sort of reason or logic within the Nvidia naming schemes.

The first, foremost, and dare I say only aspect that determines what Nvidia calls A, B, or C is marketing strategy. Every single Titan was created with that express purpose: marketing. GTX and RTX were created for marketing purposes, too. They call it whatever they want to sell you. It's not necessarily a different product. It's just whatever's deemed popular.
Posted on Reply
#146
phints
Intel: our CPU needs 300W
Nvidia: our GPU needs 600W
Intel: challenge accepted
Posted on Reply
#147
Godrilla
phintsIntel: here is a CPU that needs 300W
Nvidia: here is a GPU that needs 600W
Intel: challenge accepted
Me on the sidelines with a 4090 Suprim Liquid with a power limit, and a 7800X3D with a -25 PBO offset, at 95% performance and half the power. :cool:
Posted on Reply
#148
x4it3n
N/AClearly some 5090s can go up to 800W and the dual 12V-2x6 connectors are a must as they double the 5080 in every possible way. or gets stuck at 2.4GHz while the 5080 can average up to 3GHz.
If the leak from kopite7kimi is true, the 5090 is a dual-slot GPU, and therefore it's probably liquid-cooled like the MSI 4090 SUPRIM LIQUID X! If so, then AIBs are going to struggle even more to make buyers want theirs. I guess people who want air cooling will go for AIBs, but water cooling is definitely going to become a standard sooner or later if GPUs start pulling 600W+.
Vayra86I wouldn't dare to try and find some sort of reason or logic within the Nvidia naming schemes.

The first, foremost, and dare I say only aspect that determines what Nvidia calls A, B, or C is marketing strategy. Every single Titan was created with that express purpose: marketing. GTX and RTX were created for marketing purposes, too. They call it whatever they want to sell you. It's not necessarily a different product. It's just whatever's deemed popular.
As much as naming doesn't mean anything, the TITAN line is definitely aimed at professionals. They have some FP64 cores that consumer GPUs do not have, and they usually have 2x the amount of VRAM for professional workloads too.
Posted on Reply
#149
Godrilla
x4it3nIf the leak from kopite7kimi is true, the 5090 is a dual-slot GPU, and therefore it's probably liquid-cooled like the MSI 4090 SUPRIM LIQUID X! If so, then AIBs are going to struggle even more to make buyers want theirs. I guess people who want air cooling will go for AIBs, but water cooling is definitely going to become a standard sooner or later if GPUs start pulling 600W+.

As much as naming doesn't mean anything, the TITAN line is definitely aimed at professionals. They have some FP64 cores that consumer GPUs do not have, and they usually have 2x the amount of VRAM for professional workloads too.
A 2-slot all-in-one hybrid design would be atypical for a vanilla flagship and has never been done before. Still, that would be the most likely design, unless they somehow were able to take advantage of the new active silicon cooling; but it's doubtful they would take such a gamble with a halo flagship.

Posted on Reply
#150
Lycanwolfen
Honestly, at this point I hope this is a make-or-break moment for Nvidia graphics cards. In my view, Nvidia should sell off its video card department. Maybe then we might see some massive improvements. Looking at the specs so far, it's just another machine-learning upscaler with DLSS. I do not want an upscaler; I want a video card that can run native 4K or 8K with no upscaling at all! Nvidia used to make graphics cards like that. They had a catchy slogan too: "Nvidia, the way it's meant to be played." GTX vs RTX, well, now it's all about the ray tracing. Funny, I remember games having ray tracing without a special video card needed. I have looked at some ray-tracing conversions, and quite frankly the games look worse than the originals. Why would I want reflections off everything, the walls, the floors? If you fire a weapon in a coal mine with the lights off, guess what, there are no reflections; black objects absorb light, they don't reflect it.
Posted on Reply