NVIDIA GeForce RTX 5080 Founders Edition

A time-traveling idiot too!

:p

But I'd be surprised if the majority of RTX 5080 cards don't actually cost more, as we've already heard from AIBs that the MSRP is unrealistically low.

Maybe if they dispense with the unnecessary quad-slot coolers & RGB puke, they might just manage to release an MSRP card.
 
It's not the same price; the 4080 was $1,200. The 5080 in fact offers over 40% better perf/$.
So we are supposed to ignore that the 4080 Super happened? Even if we pretend that we are in la-la land and the 4080S doesn't exist, are we to pretend that the 4080 was good value at $1,200? Are we to pretend that that level of performance is worth $1,200?

Even today the 4080 Super is overpriced by about $200, and the 5080 should have come out at least 20% faster at $800 to be a good product; anything less than that and it's an abject failure.
 
You compared it to the 4080 by saying that this launched two years later. The 4080S didn't launch two years ago, but last year.
 
The 4080S was a brutal rebrand with a needless discount (needless for the Nvidia buyers, anyway, because they pay whatever large amount is asked of them).
The 4080S offers a 0% difference from the original 4080.

The 5080 is what the 4080S should have been. A real "5080" doesn't exist.
 
Huh. Somebody needs to update Wikipedia then.
Wikipedia is (unsurprisingly) up to date, but you need to compare AMD and Nvidia die sizes on the same node.

AMD were putting out 334mm^2 Cypress dies on 40nm in 2010 when Nvidia's GF100 dies were 529mm^2.
For the next node, AMD were putting out 352mm^2 Tahiti dies on 28nm when Nvidia's GK110 was 561mm^2.

You can only compare die sizes on the same node, and Wikipedia is quite clear that AMD's largest dies were significantly smaller than Nvidia's largest dies on each node from that era, hence all the talk of the "small die strategy".

OK, I see what you're talking about now. And this goes along with the statement that this should not have been a 5080, but instead a 5070 or 5070 Ti.

A thought about that: perhaps there's a yield problem at TSMC?
Aye, that was my supporting argument for why this is the true 5070 Ti: it's the exact same rinse-and-repeat as the 40-series launch, and NOBODY accepted the halved 4090 as an 80-class GPU, hence the unlaunch and rebrand to the 4070 Ti.

As for yields of the 5090, that's not really a TSMC problem; it's just expected when the die size gets this large. For any given process defect density, a larger die has a much higher chance of catching a defect.

A 4090 is 608mm^2, a 5090 is 750mm^2.

If you put that into a yield calculator you'll get exact numbers, but as a rule of thumb if the die area goes up by 25% like that the chance of a defect goes up by 25% squared. Nvidia and TSMC keep quiet about exact yields, but estimates say 50-60% yields on the 4090 - meaning 40-50% defect rate. For a 5090, on the same process node, that extra die area means the expected defect rate is 40-50% * 1.25^2 = 62-78%, aka yields of 22-38%, down from 50-60% of the 4090. Hopefully it's not that bad, but these are just estimates based on the process node being identical.
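
For the curious, here's the back-of-the-envelope above as a quick Python sketch; the die sizes are the published figures, but the 40-50% defect rate and the "25% squared" rule are just the rough estimates from this post, not official numbers or a proper yield model:

```python
# The rule-of-thumb arithmetic from the paragraph above, nothing more:
# scale the estimated 4090 defect rate by (area increase)^2 for the 5090.
die_4090, die_5090 = 608.0, 750.0      # mm^2, published die sizes
area_ratio = die_5090 / die_4090       # ~1.23, treated as ~1.25 above

for defect_rate_4090 in (0.40, 0.50):  # i.e. the estimated 50-60% yield
    defect_rate_5090 = defect_rate_4090 * 1.25 ** 2
    print(f"{defect_rate_4090:.0%} defects -> {defect_rate_5090:.0%} defects "
          f"({1 - defect_rate_5090:.0%} yield)")
# -> 62% and 78% defect rates, i.e. 38% and 22% yield, matching the text above
```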

Presumably Nvidia are stockpiling defective dies to make partially-working RTX Blackwell workstation/server compute GPUs that will replace the RTX 5000 and 5880 Ada generation; either that, or a 5080 Ti down the line.

Could the bottleneck be that the L2 cache size wasn't doubled?

Or someone could test at native 8K; maybe the gap between the 5080 and 5090 would widen.
That's not an unreasonable thought, but L2 cache hasn't scaled with core count or core config in forever. I don't know why; that's just my observation.

If I had to guess (and it is a guess), I'd expect L2 cache to scale with VRAM bandwidth and total latency, but it doesn't really.
 
Lots of OC headroom; it gets close to the 4090 when overclocked.

It should have been faster than the 4090, but then the price would be $200-300 more, so better to OC it and enjoy the lower price.

At least now we know the 5070 Ti will land somewhere around the 4080/4080S, with maybe 12-15% OC headroom.

The 5080 is the real 5070 Ti, that's how it goes, but performance is what we pay for, not the number on the box.
Everyone has the free will to buy or not to buy this.
 
TTL did a review, but he didn't tear the ZOTAC card down, so we couldn't get a closer look at the components and how they are cooled.

GALAX & ZOTAC have MSRP cards so far, but I want to see what they look like under the hood. ZOTAC is at the front of my mind, with the juicy 3-year warranty + 2 more years if you register; well, assuming they honour their warranty claims if something goes bad.

Actually, I just had a decent experience with Zotac very recently when I did an RMA on my RTX 3090; the process is simple enough, and I got my card back about 2 weeks from the day I sent it in. That being said, a lot does depend on where you are located and your local Zotac distributor.

The 2 years of extended warranty (years 4 and 5) are not transferable; the standard 3 years is.
 

Oh, that's more than fine, thank you for the information; I don't mind that the other 2 years aren't transferable, as I usually keep my GPUs for a long time, the RTX 3070 Ti I had being an exception because of its limited VRAM. The assurance is very welcome. I always say: if you want to ask this much for a product, you might as well put in a good warranty duration. After all, if you aren't selling cheap crap, you'll have faith in your own product line and it shouldn't bother you. This is already a +1 in my book.

I really think I'm going to try ZOTAC for a change. I've dealt with MSI & ASUS before; the badge doesn't matter much: my MSI experience was good, my ASUS one bad. You just overpay for their badges. I don't subscribe to fanboyism; I just want a good working product that will last me a while.
 
Regarding the DeepSeek-R1 discussion: I saw that you can run it from an SSD (the data is mmap'ped, so no wear/write cycles take place, only reads). Supposedly it's loaded into RAM first, with the rest left on the SSD (AFAIK). DeepSeek-R1, being an MoE model, has only 37B active parameters, which makes it run much faster than its size would suggest. PCIe 5.0 NVMe SSDs in RAID 0 may then become a cheap (V)RAM alternative, but it's still better to have RAM first, because it's faster. If the full version is too big, there are quants (don't confuse DeepSeek-R1 with the Distill versions; they are not DeepSeek-R1).
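
As a minimal illustration of the mmap point, here's a generic Python sketch (not the actual inference code; "model.gguf" is a placeholder path): mapping the file read-only means pages are faulted in from the SSD on demand and nothing is ever written back, so no write/wear cycles occur.

```python
import mmap

# Map a model file read-only: the OS pages data in from the SSD on demand,
# and ACCESS_READ guarantees the mapping never writes back to the drive.
with open("model.gguf", "rb") as f:                     # placeholder file name
    with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mm:
        header = mm[:8]   # touching bytes triggers reads only, never writes
        print(header)
```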
 
You know, as much as I like the 40 Supers and the 20 Supers, I don't really want there to be product refreshes like that every single generation. There's obviously a practical reason to do product refreshes, which is what the Supers mostly are, besides some tweaked performance. If they're gonna do 50 Supers, they'd better look as awesome as the 40 Supers and 20 Supers did. If you're gonna be a product refresh, at least make the FE look good lol.
Not all Supers were good; the 2080 Super and 4080 Super were letdowns, hardly much of an improvement, and I think the 5080 Super will be the same, with maybe 3-5% improvements.
 
Well, even the AIBs are scalping; just no. Even a supposed "MSRP" ZOTAC is $500+ overpriced. Pass. Enjoy, folks.
 
A bit of context for the young people on this thread.

Nvidia price hiking strategy
[chart attachment]
 
Well, it was either a paper launch or bots bought everything. Good luck to scalpers selling the 5080; somebody didn't tell them it's not mining times anymore.
 
Hopefully they will lose money on that.
 
The temps table, cooler comparison page, and OC page have been updated with data from today's custom-design reviews.

@Mr. Perfect
 
Feels more like a rebadged 4080S with a few tweaks. The GPU market has hit stagnation in both performance and power efficiency. This is what happens when there is no competition. NV makes so much profit selling these GPUs to AI datacentres that they don't give a damn about complaints or criticism when selling a new generation of products with only a 10% improvement in 3 years.
 
The RTX 40 series was already like that from the start: faster gen over gen, but so much more expensive at the same time that the speed advantage just evaporates. People should skip Nvidia's whole RTX 50 series lineup (the only way out). If that happens, at least there will be a price drop...
 
For AMD to release a high-end card this gen, it would need to beat the 4090, not just be a few percent faster than the 5080, which is still slower than the 4090.
In the 5090 review, the 4090 is 30% faster than the 7900 XTX at 4K. We don't know how good RDNA4 will be, but if they manage a 35% uplift over the 7800 XT with a GPU that's only 6.7% larger, then an RDNA4 card scaled up to 7900 XTX size would certainly have beaten the 4090.

Fair enough, my mistake. Though you have to admit, that's an easy mistake to make, given that the general topic of discussion is performance.

OK, I see what you're talking about now. And this goes along with the statement that this should not have been a 5080, but instead a 5070 or 5070 Ti.

A thought about that: perhaps there's a yield problem at TSMC?
Even four years ago, TSMC's yields for N5 were good. N4 is just a tweaked N5, so it's unlikely to have substantially different yields.
 
I'm so happy right now with my 4080 lol. This is more like a 4080 Ti. Thanks, Nvidia!
 
The 4080S was a brutal rebrand with a needless discount (needless for the Nvidia buyers, anyway, because they pay whatever large amount is asked of them).
The 4080S offers a 0% difference from the original 4080.

The 5080 is what the 4080S should have been. A real "5080" doesn't exist.
That discount was enough to make people like me go and buy it; 20% isn't a needless discount. Although I get your point as well: the 5090 should be a 4090 Ti, and the 5080 a 4080 Super or a 4080 Ti. How well does the 5080 run, though, at the 4080 Super's 320 W power limit?
 
So in the context of performance per dollar... well, the 4080 and 4080 Super are discontinued, so yes, the 5080 is currently the best ~$1k card you can get. And yes, at the same MSRP as the 4080 Super, it technically improves performance per dollar in that price segment a little. Technically.

But in the historical context of what an 80-class card is, this has to be the single worst 80-class release ever from a gen-over-gen performance perspective relative to the card it replaces. We can talk about how the 680 shifted die tiering and such, but it still outperformed the previous-gen flagship by 30%. The 2080 and most of Turing were also derided for not moving the needle on performance per dollar over Pascal, but still, I think the 2080 at least matched the 1080 Ti at launch.

So how is the 5080 "Highly Recommended"? I feel like a review award like that just lacks any thought about this context. If anything, it's "Highly Disappointing, but what else are you going to get at $1k".

EDIT:

As for this conclusion on the VRAM:
[screenshot of the review's conclusion on VRAM]

That's not entirely accurate. GDDR7 will feature higher-density 3GB modules, so a 24GB configuration is entirely possible over a 256-bit bus: eight 32-bit modules × 3GB = 24GB (see the 5090 mobile config). At 4K I need more than the 12GB I have now, but I'm not thrilled about 16GB for an upgrade, given it may run into the same problems I have now a couple of years into its life if, say, I owned it for five-ish years.
 
Last edited:
If you put that into a yield calculator you'll get exact numbers, but as a rule of thumb if the die area goes up by 25% like that the chance of a defect goes up by 25% squared. Nvidia and TSMC keep quiet about exact yields, but estimates say 50-60% yields on the 4090 - meaning 40-50% defect rate. For a 5090, on the same process node, that extra die area means the expected defect rate is 40-50% * 1.25^2 = 62-78%, aka yields of 22-38%, down from 50-60% of the 4090. Hopefully it's not that bad, but these are just estimates based on the process node being identical.
It isn't the number of defects that goes up with the square of the increased size; rather, the number of dies you can get out of a single 300 mm wafer goes down as the die size goes up. The number of defects per mm^2 stays the same whether you have a 1 mm^2 die or an 800 mm^2 die.

Of course you have a higher chance of a defect hitting a bigger die than a smaller one, but for a mature process the actual number is quite low.

Disabling units benefits them because there's a fixed number of people who will buy a GPU at a particular price point, so it's beneficial to disable units and call the result a lower-end GPU even if there's no technical reason (like defective units) to do so, because you can't have a product line full of xx90 GPUs. By calling it an xx80 you get to sell to those buyers instead. You can't rely solely on defective dies to supply lower-end cards, because they are significantly higher volume.
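
To put numbers on the two effects described above, here's a rough sketch using the textbook Poisson yield model; the defect density is an assumed, illustrative value (TSMC doesn't publish theirs), and the dies-per-wafer count ignores edge loss and scribe lines:

```python
import math

WAFER_DIAMETER_MM = 300.0
WAFER_AREA_CM2 = math.pi * (WAFER_DIAMETER_MM / 20) ** 2  # ~706.9 cm^2
DEFECT_DENSITY = 0.1  # defects per cm^2 -- assumed for illustration, not TSMC data

def dies_per_wafer(die_area_cm2: float) -> int:
    """Crude upper bound: wafer area / die area (ignores edge loss, scribe lines)."""
    return int(WAFER_AREA_CM2 / die_area_cm2)

def poisson_yield(die_area_cm2: float) -> float:
    """Poisson model: fraction of dies that catch zero defects at a fixed density."""
    return math.exp(-DEFECT_DENSITY * die_area_cm2)

for name, area_cm2 in [("4090, 608 mm^2", 6.08), ("5090, 750 mm^2", 7.50)]:
    print(f"{name}: ~{dies_per_wafer(area_cm2)} candidate dies/wafer, "
          f"~{poisson_yield(area_cm2):.0%} defect-free")
# Bigger die -> fewer candidates per wafer AND a higher chance any one die
# catches a defect, while defects per cm^2 stay constant, as noted above.
```

With these assumed numbers the 4090-sized die comes out around 54% defect-free and the 5090-sized die around 47%, which is how a bigger die hurts yield without the per-area defect rate changing at all.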
 