Thursday, September 26th 2024
NVIDIA GeForce RTX 5090 and RTX 5080 Specifications Surface, Showing Larger SKU Segmentation
Thanks to the renowned NVIDIA hardware leaker kopite7kimi on X, we are getting information about the final versions of NVIDIA's first upcoming wave of GeForce RTX 50 series "Blackwell" graphics cards. The two leaked GPUs are the GeForce RTX 5090 and RTX 5080, which now show a much larger gap between the xx80 and xx90 SKUs. For starters, we have the highest-end GeForce RTX 5090. NVIDIA has decided to use the GB202-300-A1 die and enable 21,760 FP32 CUDA cores on this top-end model. Accompanying the massive 170 SM GPU configuration, the RTX 5090 carries 32 GB of GDDR7 memory on a 512-bit bus, with each GDDR7 die running at 28 Gbps. This translates to 1,792 GB/s of memory bandwidth. All of this is confined to a 600 W TGP.
When it comes to the GeForce RTX 5080, NVIDIA has decided to further separate its xx80 and xx90 SKUs. The RTX 5080 has 10,752 FP32 CUDA cores paired with 16 GB of GDDR7 memory on a 256-bit bus. With GDDR7 running at 28 Gbps, the memory bandwidth is also halved, to 896 GB/s. This SKU uses a GB203-400-A1 die, which is designed to run within a 400 W TGP power envelope. For reference, the RTX 4090 has 68% more CUDA cores than the RTX 4080, while the rumored RTX 5090 has around 102% more CUDA cores than the rumored RTX 5080, meaning NVIDIA is separating its top SKUs even more. We are curious to see at what price points NVIDIA places its upcoming GPUs, so we can compare the generational update and the widened gap between the xx80 and xx90 models.
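For anyone who wants to verify the arithmetic, here is a minimal Python sketch. The bandwidth formula is the standard bus-width-times-data-rate calculation applied to the leaked figures, and the 4090/4080 core counts (16,384 and 9,728) are the known Ada numbers; everything else is rumored and may not match shipping parts.

```python
# Rough sanity check on the leaked figures. Theoretical peak bandwidth is
# (bus width in bits / 8) * per-pin data rate in Gbps, giving GB/s.

def bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Theoretical peak memory bandwidth in GB/s."""
    return bus_width_bits / 8 * data_rate_gbps

# Leaked/rumored specs via kopite7kimi; shipping parts may differ.
print(bandwidth_gb_s(512, 28))  # RTX 5090: 1792.0 GB/s
print(bandwidth_gb_s(256, 28))  # RTX 5080: 896.0 GB/s

# Core-count gap between the xx90 and xx80, last generation vs. this leak.
print(f"4090 vs 4080: +{16_384 / 9_728 - 1:.0%}")   # +68%
print(f"5090 vs 5080: +{21_760 / 10_752 - 1:.0%}")  # +102%
```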
Sources:
kopite7kimi (RTX 5090), kopite7kimi (RTX 5080)
181 Comments on NVIDIA GeForce RTX 5090 and RTX 5080 Specifications Surface, Showing Larger SKU Segmentation
RTX 5080, 16 GB GDDR7, 256-bit, 400 W TDP = P O S!
Oh my gosh..... are they still defending 16 GB of VRAM?
Don't they know that at 4K resolution, VRAM usage in AAA PC games can be more than 20 GB?
Why, NVIDIA??
All the other cards will stay in their raster place (see the 3060 vs. 4060 and get the picture for the 4080 vs. 5080), while the new line gets DLSS X (synthetically better FPS), which will make for the only difference in performance.
We might also see a new tier structure to fatten NV's $$$ bank account.
'Ordinary buyers with limited budgets?' Lol. I think you don't realize what you are saying. We have all already conceded that idea by continuously buying high-end GPUs. There is nothing ordinary about it. You could buy a 4090. You just don't want to. NVIDIA has you by the balls already. It's either an artificially designed lock or a well-placed incentive, but yeah, that's the M.O., and it always has been... except now the actual hardware stagnates hard on top of it.
So it will be: 4080 (320 W, $1,200), 5080 (400 W, $???), 4090 (450 W, $1,600), 5090 (600 W, $???).
We won't get a massive "free" performance increase like with the 10xx series.
If the rumors are true, the 5080 with 16 GB of VRAM, a 256-bit memory bus, and ~11K CUDA cores seems really castrated. And all this running at a 400 W power level at minimum; assuming AIBs add 30-50 W on top of that, it doesn't look good. I think 4080 Super owners might be laughing right now. Don't worry, it will be "5 times faster than the previous generation"* on NVIDIA's charts.
* - in Cyberpunk 2077 at 8K, FrameGen 2, DLSS 5, Meme GPU Cache v3.0, VRAM Compression 5.0, RTX dirt road finding, when you look at the wall.
And reviewers will be competing over who can declare raster performance in games more dead and outdated.
And we will get a price increase; even without the AI craze we would see a 50% price increase for a 50% performance increase (never mind that it's not across the board).
With the current market situation, where making too many gaming cards would actually eat into NVIDIA's data center profitability (now the only thing really bringing in money), I predict NVIDIA will try to profit from people who will use these new cards for AI productivity and will be willing to fork over serious money for their tools, while limiting the actual volume of cards made. And they can do both of these things with a single tweak: make the price ridiculously high.
So I don't think +100% is out of the question.
Hopefully the 5090 will be >60% faster than the 4090, so 4K 240 Hz screens make more sense :D
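A quick worked example of what that wish implies; the baseline frame rate here is a hypothetical round number, not a benchmark:

```python
# Hypothetical: a 4090 averaging 150 fps at 4K would need the hoped-for
# ~1.6x generational uplift to saturate a 240 Hz panel.
baseline_fps = 150            # assumed 4090 figure, illustrative only
uplift = 1.60                 # the ">60% faster" hope
print(baseline_fps * uplift)  # 240.0
```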
There's no reason to even think of RDNA5 as something that can change the market. We can only hope RDNA4 won't be AMD's swan song.
There are many here: old.reddit.com/r/LocalLLaMA/
That 5080 should be sold as a 5070, considering the specs compared to the 5090. My old 3080 has a 320-bit bus and 760 GB/s of bandwidth....
Seriously nGreedia??
Just like with phones, brand clothes, and accessories, people just buy them on credit, with money they don't have.
Why do better if it still won't have any trouble selling?
Except now you need to isolate the VRMs on the GPU so 24 V doesn't find its way onto the 12 V rail; right now they use a common 12 V ground. That will make GPUs more expensive. And now you need a new connector, because otherwise you're going to have people plugging 24 V cables into 12 V cards and causing some fireworks. Not to mention your PSUs are now more expensive because, well, you still need the 12 V lines.
All this for what? To have some slightly cooler-running cables? Congrats. There's a reason we never swapped out the 5 V and 3.3 V lines for 12 V on the ATX standard... the juice ain't worth the squeeze. 400 W is not strictly water-cooling territory. IDK what you're smoking, but the 3090 Ti pulled 525 W, peaked at over 600 W, and ran fine on its air cooler.
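The electrical trade-off this poster describes is easy to quantify with a back-of-the-envelope sketch; the cable resistance below is an assumed, illustrative value, not a measured one. Doubling the rail voltage halves the current and quarters the resistive loss, which is exactly the "slightly cooler cables" benefit being weighed against the connector and VRM costs:

```python
# Back-of-the-envelope numbers for a 600 W card fed at 12 V vs. a
# hypothetical 24 V rail. The cable resistance is an assumed, illustrative
# value, not a measurement.

CABLE_RESISTANCE_OHMS = 0.01  # assumed round-trip conductor resistance

def cable_stats(power_w: float, rail_v: float) -> tuple[float, float]:
    current_a = power_w / rail_v                     # I = P / V
    loss_w = current_a ** 2 * CABLE_RESISTANCE_OHMS  # I^2 * R conductor loss
    return current_a, loss_w

for rail in (12.0, 24.0):
    amps, loss = cable_stats(600.0, rail)
    print(f"{rail:.0f} V rail: {amps:.0f} A, ~{loss:.1f} W lost in the cable")

# 12 V rail: 50 A, ~25.0 W lost in the cable
# 24 V rail: 25 A, ~6.2 W lost in the cable
```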
I personally play with my headphones on, so I can't hear it (neither can my neighbours and roommates); that's why the fans can blow as hard as they like. However, I'm not the only valid metric here.
Now, for GPUs: if computers/gaming are your main hobby, I feel like it's not that hard to put 1k aside every four years. In three months I've already saved 200€ towards my next tech purchase. I've met few people who buy the high end of every generation without having the means to do so; modest people tend to hold onto those items for a few years.
But credit in the EU and related places also seems to work differently than in the US: 3x or 4x installment payments require authorization for the full price of the item, credit cards have fairly low limits, and bank loans involve a ton of background checks.