
New Leak Reveals NVIDIA RTX 5080 Is Slower Than RTX 4090

It includes 1080p and 1440p, where these cards get heavily bottlenecked. Stop living in denial.
Oh, duh, because no one owns a 9800X3D and a high-refresh 1440p/1080p display. Who is in denial again?
 
They need to start reviewing these cards at resolutions above 2160p. 5K and 6K are needed, but also two 4K screens side by side (7680 x 2160).
 
You replied to a post about the x080, in a topic about the 5080, with a general comment about all NVIDIA cards in response to my comment on the 80-class card. Then you pretend I mentioned all NVIDIA cards, ask a silly question, and go off on whatever you want to talk about despite replying to something else. It's a true work of art of brain malfunction.
Just buy your RTX 5080 and stay quiet, as most "smart" people do. The RTX 4080 12GB was a joke, but the RTX 4080 16GB at $1,200 was still extremely bad value (the worst x80-series graphics card ever released; even the RTX 2080 was better in price/performance terms). NVIDIA is straight up laughing in consumers' faces with improvements of ~15-20%, yet you're the one asking me strange questions.
 
Oh, duh, because no one owns a 9800X3D and a high-refresh 1440p/1080p display. Who is in denial again?

Talk about not understanding what is being said.
 
For those blaming AMD for not competing, hence the higher prices, I saw this from someone on X:

[attached screenshot]
 
AMD was never a solution, on price or performance.
Only crazy people who buy a new card every year care about year-on-year performance; it seems like a stupid point to make. It's a nice upgrade with bad pricing if you're stuck on a 1000, 2000 or 3000 series card.
 
This is 9800 GTX vs. 8800 GTX all over again.

Another 2 years wasted...
 
So the 5080 really is just a 4080 Super Duper with updated frame generation and no price increase.

Nvidia swears they can't add more raster because it's too hard, but not adding any raster improvement per watt is pretty silly because it lays bare the whole con. They're updating the AI hardware, not the graphics engine, and they're desperate to convince everyone that they had to do it. They couldn't be expected to improve raster meaningfully. It was just impossible.

They could. They just didn't and they didn't because they're an AI company now and not a graphics company.
It is indeed hard to add raster improvements, but not impossible. You need a better node in the first place (5 nm to 4 nm), an improved architecture for the "primary" compute units, more compute units, and higher clocks.

The thing is that most of the crowd forgets or neglects the fact that nVidia uses this very same CUDA/Tensor core architecture for its professional line of GPUs and compute systems for AI and machine learning applications. Those don't need (or care about) more raster gaming performance.
They desperately need higher AI/ML capabilities.

It's a unified architecture, and they found a way to make the AI/ML capabilities useful for good old gaming through DLSS and frame generation.

Gamers still think that nVidia is all about gaming… but the clouds are thick and pink.
AMD will soon (try to) follow with UDNA.

The big money is in the industry, not in $1,000~2,000 gaming GPUs. You don't get to be a $3.4 trillion company (2nd on the planet after Apple) by selling GPUs to gamers.
It’s a joke to even think about it.
 
This is 9800 GTX vs. 8800 GTX all over again.

Another 2 years wasted....

This time it's more about the death of Moore's Law, which means progress will slow to a crawl.
I'm sure that in the process of backporting from TSMC 3 nm to the old 5 nm node, the parts lost quite a few shaders, and performance with them.

Gamers still think that nVidia is all about gaming… but the clouds are thick and pink.
AMD will soon (try to) follow with UDNA.

The big money is in the industry, not in $1,000~2,000 gaming GPUs. You don't get to be a $3.4 trillion company (2nd on the planet after Apple) by selling GPUs to gamers.
It’s a joke to even think about it.

If AMD and Nvidia follow this (stupid) logic, it means that the whole universe revolves around "the industry", and everything else - all other chips (CPUs, consoles, phones, refrigerators, washing machines, etc.) must die.

:kookoo:
 
This is 9800 GTX vs. 8800 GTX all over again.

Another 2 years wasted....

Or the 480 to 580 more recently. It just means that people on mid-tier GPUs will have to wait at least 5 years between upgrades to get a worthwhile one.

But I do feel that NVIDIA is trying to force people into buying more expensive GPUs, now that you can't get any meaningful performance upgrade within the same tier on a new generation.
 
This is 9800 GTX vs. 8800 GTX all over again.

Another 2 years wasted....
The 8800 GTX was a November 2006 484 mm² chip. The 9800 GTX in March 2008 shrank it to 324 mm² at 1.64x the density, but what came next in June was the GTX 280, a big 576 mm² chip with a 512-bit bus.
This would imply that in 2027 we get a 5080-like chip shrunk to 290 mm² with a 192-bit bus as the 6070 Ti.
But what about the 5090? That's here now; you don't have to wait 2 years, and there isn't going to be a massive shrink.
We're probably looking at a 24,576-CUDA 576 mm² 6090 that's already maxed out, so that's 4 years wasted until they get a more advanced node and the 1.6x shrink.
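Quick napkin math on that shrink comparison (the 378 mm² figure for the 5080's GB203 die is my own assumption here, not from the post; the rest uses the numbers above):

```python
# Back-of-the-envelope on the shrink comparison above.
# 484 / 324 mm^2 are the 8800 GTX / 9800 GTX die sizes from the post;
# 378 mm^2 for the RTX 5080 (GB203) is assumed, 290 mm^2 is the projection.
g80_mm2, g92_mm2 = 484, 324
gb203_mm2, projected_mm2 = 378, 290

print(f"G80 -> G92 area reduction: {g80_mm2 / g92_mm2:.2f}x")                    # ~1.49x
print(f"Implied 5080 -> '6070 Ti' reduction: {gb203_mm2 / projected_mm2:.2f}x")  # ~1.30x
```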
 
Oh, duh, because no one owns a 9800X3D and a high-refresh 1440p/1080p display. Who is in denial again?
What? It's not just CPU performance holding these GPUs back; it's an engine limitation more often than not. Mate... get real. You're trying to spin a weird guess your way; it ain't happening.

Also

[attached performance chart]


Now, take a long look at the near-performance parity of the 4090 versus the 5090 at 1080p, versus the 30-40% gap at 4K.

Again: time to take a breather... you clearly haven't got a clue what you're looking at.
 
It includes 1080p and 1440p, where these cards get heavily bottlenecked. Stop living in denial.
They get bottlenecked, but it's not always due to a CPU bottleneck either. NVIDIA is famous for terrible driver overhead too, hence AMD GPUs sometimes performing better at 1080p and 1440p.
 
They get bottlenecked, but it's not always due to a CPU bottleneck either. NVIDIA is famous for terrible driver overhead too, hence AMD GPUs sometimes performing better at 1080p and 1440p.
Regardless, you're not getting or seeing the full power of a 4090 or 5090 at these resolutions, which is the initial point that seems hard to grasp for some ;)
 
What? It's not just CPU performance holding these GPUs back; it's an engine limitation more often than not. Mate... get real. You're trying to spin a weird guess your way; it ain't happening.

Also

[attached performance chart]

Now, take a long look at the near-performance parity of the 4090 versus the 5090 at 1080p, versus the 30-40% gap at 4K.

Again: time to take a breather... you clearly haven't got a clue what you're looking at.

Bro, I think you are either confused or trolling me, one or the other. Or let me know if you are just looking at outliers; I'm looking at "relative performance" again. Makes sense so far? Also, do you understand what the word "parity" means? Here's the link I'm using, but if you are looking at some other review, let me know.


The 5090 is 10% faster than a 4090 at 1080p, not EQUAL. Okay, moving on.
The 5090 is 17% faster than a 4090 at 1440p, not EQUAL. Okay, moving on.
The 5090 is 26% faster than a 4090 at 4K, not EQUAL. That's all the resolutions they tested. Hope you're following so far.

If you're saying that, say, 10% at 1080p is parity, well: 1) that's wrong, because that's not what parity means, and 2) realistically it's a meaningless comparison between different resolutions; it's just another data point. Most gamers play at native resolution and high refresh rates. Just because this card scales better at 4K doesn't mean anyone playing at 1440p or below won't, or shouldn't, buy it.

Now explain, without using big-boy words, what exactly you are trying to convince me of.
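For what it's worth, here are those "X% faster" figures flipped into the relative-performance numbers you'd read off a TPU-style chart (a quick sketch using the percentages quoted above; the review figures are the source of truth):

```python
# Convert "the 5090 is X% faster than the 4090" into the 4090's relative
# performance with the 5090 pinned at 100% (how TPU's charts read).
faster_by = {"1080p": 0.10, "1440p": 0.17, "4K": 0.26}

for res, gap in faster_by.items():
    rel_4090 = 100 / (1 + gap)
    print(f"{res}: 4090 at ~{rel_4090:.0f}% of the 5090")
# 1080p: ~91%, 1440p: ~85%, 4K: ~79% -- close at 1080p, but not parity.
```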
 
For those blaming AMD for not competing, hence the higher prices, I saw this from someone on X:

[attached screenshot]
Unfortunately, a lot of people are like that. I have both AMD (laptop) and NVIDIA (desktop), but as long as AMD doesn't beat NVIDIA on price and performance, I think most people will not buy AMD. Maybe if NVIDIA became lazy like Intel and AMD came up with a secret "weapon" like its X3D CPUs, but I don't see that happening anytime soon.

Regardless, you're not getting or seeing the full power of a 4090 or 5090 at these resolutions, which is the initial point that seems hard to grasp for some ;)
Whoever buys those cards for 1080p and 1440p is an idiot, yeah. They're definitely 4K GPUs.

The 5090 is 26% faster than a 4090 at 4K, not EQUAL. That's all the resolutions they tested. Hope you're following so far.
To be fair, most reviewers online find an average of +32% at 4K, with very demanding RT/PT games getting closer to 40%.
 
Bro, I think you are either confused or trolling me, one or the other. Or let me know if you are just looking at outliers; I'm looking at "relative performance" again. Makes sense so far? Also, do you understand what the word "parity" means?


The 5090 is 10% faster than a 4090 at 1080p, not EQUAL. Okay, moving on.
The 5090 is 17% faster than a 4090 at 1440p, not EQUAL. Okay, moving on.
The 5090 is 26% faster than a 4090 at 4K, not EQUAL. That's all the resolutions they tested. Hope you're following so far.

If you're saying that, say, 10% at 1080p is parity, well: 1) that's wrong, and 2) realistically it's a meaningless comparison between different resolutions; it's just another data point. Most people play at native resolution and high refresh rates. Just because this card scales better at 4K doesn't mean no one playing at 1440p will or should buy it.

Now explain, without using big-boy words, what exactly you are trying to convince me of.
*near parity, which I think applies if you talk about 10-17%, no?

The original statement was, I believe, that the 5080 will be as fast as a 4090. Then we started pulling in weird numbers from TPU's reviews to make that point, and I'm saying you're wrong.
 
This time it's more about the death of Moore's Law, which means progress will slow to a crawl.
I'm sure that in the process of backporting from TSMC 3 nm to the old 5 nm node, the parts lost quite a few shaders, and performance with them.



If AMD and Nvidia follow this (stupid) logic, it means that the whole universe revolves around "the industry", and everything else - all other chips (CPUs, consoles, phones, refrigerators, washing machines, etc.) must die.

:kookoo:
Very nice examples... you bring apples and compare them with things that aren't even oranges, except for desktop CPUs, because those are already industry (server) oriented. Talking about AMD, of course.
Do you think we live under a rock?

Gaming/multimedia consoles and all kinds of devices, and you set them against a chip designer. A chip designer with a top-tier, premium architecture for AI/ML at the dawn of the AI era. Yeah... super successful example.
You compare things as different as night and day. Your choices couldn't be poorer. I don't expect you to find anything else, though...

Keep defending nVidia by distorting the facts.
AMD is already on that train, even before this, with the Ryzen architecture and the Xilinx acquisition, and will be on GPUs (again) with UDNA.

We are talking about chip designers here and you are bringing devices into the conversation.

Can't say about the universe, but on this little Earth almost everything revolves around profit, power and control, and for an AI chip designer the profits are in the industry and the pros.
You can try to bring Apple, which is the No. 1 company, into this, but it will still be night and day.
Let's all think about how many gamers there are who need a gaming GPU, versus how many people need a phone, a wearable, or a PC in general.
I own 1 GPU but 4 Apple devices, and I also subscribe to services.

nVidia's market cap, revenue and earnings have grown 5-10x over the last couple of years because they sell 5-10x more GPUs to gamers... (my turn... :kookoo: )
Good one! Very good laughs...

Does Apple grow more steadily over the years because it sells more devices to users year after year? Nope... they found another way. Feel free to look it up...
 
*near parity, which I think applies if you talk about 10-17%, no?

The original statement was, I believe, that the 5080 will be as fast as a 4090. Then we started pulling in weird numbers from TPU's reviews to make that point, and I'm saying you're wrong.

Not how I would describe 10-17%, but I am with you now, bro!! 100%

Yes, I pulled out my crystal ball and wrote exactly that. I should have added that I meant it in terms of "relative performance".
Basically, on Monday or whenever TPU posts their review, we will look here >>>> https://www.techpowerup.com/gpu-specs/geforce-rtx-5080.c4217
And I would expect to see the relative performance chart showing the 5080 and 4090 at parity. TBD.
 
Not how I would describe 10-17%, but I am with you now, bro!! 100%

Yes, I pulled out my crystal ball and wrote exactly that. I should have added that I meant it in terms of "relative performance".
Basically, on Monday or whenever TPU posts their review, we will look here >>>> https://www.techpowerup.com/gpu-specs/geforce-rtx-5080.c4217
And I would expect to see the relative performance chart showing the 5080 and 4090 at parity. TBD.
Keep in mind that even at 4K, TPU's bench suite includes some older games that run into walls that aren't GPU walls. That's why those numbers should be taken with a grain of salt, and it's also part of the reason you see bigger gaps in tests elsewhere: a smaller game suite made up of more recent titles. I'm just looking at shaders right now, and there's no way I see the 5080 bridging a nearly 6k-shader gap with clocks.

On TPU's testing, we will see the 5090 extend its lead as the bench suite gets newer titles over time.
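Rough numbers behind that shader-gap point (the CUDA core counts, 16384 for the 4090 and 10752 for the 5080, are my assumed spec figures, not from the review):

```python
# Clock-for-shader trade-off behind the "nearly 6k shader gap" comment.
# Assumed CUDA core counts: RTX 4090 = 16384, RTX 5080 = 10752.
cores_4090, cores_5080 = 16384, 10752
shader_gap = cores_4090 - cores_5080            # ~5632, the "nearly 6k" gap

# Clock ratio the 5080 would need to match on raw cores x clock alone
needed_clock_ratio = cores_4090 / cores_5080
print(f"Shader gap: {shader_gap}")
print(f"5080 would need ~{(needed_clock_ratio - 1) * 100:.0f}% higher clocks, "
      "ignoring IPC, bandwidth and everything else")
```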
 
They get bottlenecked, but it's not always due to a CPU bottleneck either. NVIDIA is famous for terrible driver overhead too, hence AMD GPUs sometimes performing better at 1080p and 1440p.

The driver overhead IS de facto a CPU bottleneck...
 
So compared with my Suprim X 4080, that'll be only ~9% better, taking into account the higher AIB clocks versus a stock 4080? Wow, RTX 50 is not looking good; I hope this is just an outlier and not a trend :p
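The napkin math behind that ~9%, assuming a ~15% 5080-over-stock-4080 uplift (the ballpark tossed around earlier in the thread) and a ~5% factory OC on the AIB card (both assumptions, not measured numbers):

```python
# Illustrative arithmetic for the "only ~9% better" remark.
uplift_vs_stock_4080 = 1.15   # assumed: 5080 vs. stock 4080
aib_oc_over_stock    = 1.05   # assumed: Suprim X 4080 vs. stock 4080

gap_vs_aib = uplift_vs_stock_4080 / aib_oc_over_stock - 1
print(f"5080 vs. factory-OC 4080: ~{gap_vs_aib * 100:.0f}%")   # ~10%, close to 9%
```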
 