
GPU IPC Showdown: NVIDIA Blackwell vs Ada Lovelace; AMD RDNA 4 vs RDNA 3

I don't even understand why comparing Nvidia vs AMD is relevant when Nvidia has over 90% of the consumer GPU market share, and they keep winning more of it.

AMD is out of the game at this level. They need a breakthrough if they want to come back.

Also, I'm not an Nvidia fan; I've had some ATI/AMD cards in the past, and Nvidia having a monopoly is not good for retail prices. But you can't deny the gap is abysmal.
 
Man, Ada was way ahead of its time, even more so than Pascal was.
 
I think it's pretty clear what the bug was: RDNA3 was the first (and thus far the only) chiplet-based gaming dGPU. Naturally, such innovations have growing pains. It never quite reached its true potential; my guess is that's due to chiplet communication issues. Only the 7600 series in that lineup was fully monolithic.

Nope. The 7600 is the very example given for how much slower RDNA3 is than RDNA4.

The 9060 XT has the same core count (2048) as the monolithic 7600 XT and is 38% faster at 1440p.
The midpoint between the 9070 and 9070 XT is the same core count (3840) as the chiplet 7800 XT and their average is 39% faster at 4K.

There's no performance difference between monolithic and chiplet RDNA3 so that's not the "bug".
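For the curious, here's a minimal sketch of the core-count arithmetic behind that comparison (shader counts are the published configurations; the ~39% figure is the TPU 4K average quoted above):

```python
# Check the core-count midpoint argument from the post above.
cores_9070, cores_9070xt = 3584, 4096   # RX 9070 / RX 9070 XT shader counts
cores_7800xt = 3840                     # RX 7800 XT shader count

midpoint = (cores_9070 + cores_9070xt) // 2
assert midpoint == cores_7800xt  # 3840: same core count as the chiplet 7800 XT

# With core counts matched, the quoted ~39% average lead at 4K reflects
# architecture and clocks rather than extra shader width.
print(f"matched core count: {midpoint}, RDNA4 average lead at 4K: ~39%")
```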
 
IPC stands for instructions per cycle, so higher clocks do not contribute to the IPC metric, only to overall performance. So if the clocks are 20% higher and we're seeing 144% of the original overall performance, we have roughly a 20% IPC gain generation-to-generation.
Apologies. I corrected my previous post. It was meant to say: the RX 9070 XT shows a 44% overall performance improvement; 20% comes from higher clocks and the rest is IPC. Thanks.
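As a minimal sketch of that arithmetic (the figures are the ones quoted above; treat it as an illustration, not a measurement):

```python
# IPC gain = overall performance gain with the clock-speed gain factored out.
def ipc_gain(perf_ratio: float, clock_ratio: float) -> float:
    """Isolate the per-clock (IPC) improvement from an overall speedup."""
    return perf_ratio / clock_ratio - 1.0

# ~44% overall uplift at ~20% higher clocks, per the post above.
print(f"{ipc_gain(1.44, 1.20):.0%}")  # -> 20%, matching the IPC estimate
```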

Also, I said from the start that the 5080 would never reach 4090 performance. Some people here genuinely believed it would.
Because they got tricked by Nvidia's own narrative about how amazing Blackwell is in terms of progression over Ada.
Blackwell shows progress only in AI workloads, thanks to new instruction support in the tensor cores and faster, AI-focused memory.

I think it's pretty clear what the bug was: RDNA3 was the first (and thus far the only) chiplet-based gaming dGPU. Naturally, such innovations have growing pains. It never quite reached its true potential; my guess is that's due to chiplet communication issues. Only the 7600 series in that lineup was fully monolithic.
Plausible theory. Also, RDNA3 was supposed to run at 10-15% higher clocks than it ended up with.
AMD also needs to figure out better chiplet interconnects in CPUs. They are changing the communication interface with Zen 6, IIRC.

Doubtful. The 5080 Super, as currently speculated, will equal the memory capacity and speed of the 4090 (24GB, ~1TB/s), but still be a far cry from the 4090's core config.
In order to truly equal the 4090, the 5080 Super/Ti would have to be based on the RTX Pro 5000's GB202 at the very minimum (with 24GB, naturally), and I suspect even that would fall short without a significant clock speed bump. https://www.techpowerup.com/gpu-specs/rtx-pro-5000-blackwell.c4276
Yep, the RTX 5080S will most likely retain the same core configuration. That would be the first time in Nvidia's history that a Super die is configured the same way as the non-Super one (if I'm correct).

I don't even understand why comparing Nvidia vs AMD is relevant when Nvidia has over 90% of the consumer GPU market share, and they keep winning more of it.

AMD is out of the game at this level. They need a breakthrough if they want to come back.

Also, I'm not an Nvidia fan; I've had some ATI/AMD cards in the past, and Nvidia having a monopoly is not good for retail prices. But you can't deny the gap is abysmal.
It's always good to have such comparisons, so that you know how a particular GPU architecture has progressed. This is entirely independent of market share.
If AMD had decided to go high-end with RDNA4, they could have made an RTX 4090/5090 competitor: around 550-600 W, a 500-600 mm² die, and roughly 7k shaders (about 108 CUs) required.
It's a damn pity they missed this opportunity. Maybe the cost of making such big dies was too high, yielding poor projected returns.
Anyway, a 108 CU monolithic RDNA4 die, paired with enough VRAM and reasonable efficiency, with performance between the RTX 4090 and RTX 5090, would sell well at $1,500.
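A back-of-the-envelope sketch of that hypothetical part, assuming naive CU scaling from the 9070 XT with an arbitrary efficiency discount (both are assumptions, not leaks):

```python
# Naive scaling estimate for the hypothetical 108 CU RDNA4 die above.
cu_baseline, cu_hypothetical = 64, 108  # RX 9070 XT vs the speculated part
scaling_efficiency = 0.85  # assumed; real scaling depends on clocks, memory, power

relative_perf = (cu_hypothetical / cu_baseline) * scaling_efficiency
print(f"~{relative_perf:.2f}x a 9070 XT")  # ~1.43x, roughly 4090-to-5090 territory
```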
 
Nope. The 7600 is the very example given for how much slower RDNA3 is than RDNA4.

The 9060 XT has the same core count (2048) as the monolithic 7600 XT and is 38% faster at 1440p.
The midpoint between the 9070 and 9070 XT is the same core count (3840) as the chiplet 7800 XT and their average is 39% faster at 4K.

There's no performance difference between monolithic and chiplet RDNA3 so that's not the "bug".
The 7600 (Navi 33) is not full RDNA3; it reuses the cache system from RDNA2 on the 6 nm process. We'd need to compare equivalent cache systems and CU structures, but that's hard: Navi 32 uses a 20 CU per shader engine structure, and Navi 31 starts from 80 CUs... Or just wait for microbenchmarks from Chips and Cheese.
 
While I was expecting very low IPC gains with Nvidia's Blackwell, I wasn't expecting it to be this bad. Nvidia is too busy to do anything for gaming-grade GPUs. And you can tell how messed up Blackwell's launch was: broken MFG, black screens, melting connectors (the same problem as the Ada RTX 4090), crashes, etc. And some of these software-related issues persist even now, 4 or 5 months after launch.
 
A better question is: what was Nvidia doing these past few years after the Lovelace launch?

As I understand the development process, most companies have leapfrogging design teams for their architectures. For example, the team that made Zen 4 also makes Zen 6, while Zen 5 was made by another team that is presumably already working on Zen 7, etc.
 
This is what I have been saying all along. nGreedia has not made a new architecture in 6 years now! It's all minor tweaks, blown and unblown fuses, process improvements, brute-force clocks, and adding more of the same to the die. nGreedia cannot keep doing this for much longer.
 
A better question is: what was Nvidia doing these past few years after the Lovelace launch?

As I understand the development process, most companies have leapfrogging design teams for their architectures. For example, the team that made Zen 4 also makes Zen 6, while Zen 5 was made by another team that is presumably already working on Zen 7, etc.
They are focusing on the market that brings them the most revenue. Nvidia is not about gaming anymore. You can't blame them when all the money comes from the AI bubble. Blackwell was a scaled-up Ada with a (problematic) implementation of PCIe 5.0. This way they reduced development costs. I'm afraid AMD will fare similarly with UDNA.
 
nGreedia cannot keep doing this for much longer.
It all depends on Nvidia consumers' IQs :laugh: I already skipped Ada and Blackwell.

AMD gets better, Nvidia gets worse... over time.
 
They are focusing on the market that brings them the most revenue. Nvidia is not about gaming anymore. You can't blame them when all the money comes from the AI bubble. Blackwell was a scaled-up Ada with a (problematic) implementation of PCIe 5.0. This way they reduced development costs. I'm afraid AMD will fare similarly with UDNA.
Nvidia still brings in nearly $10 billion from gaming GPUs. They arguably sell MORE GeForce cards than they do compute cards. People need to get over this "we're not the only market so nobody cares" schtick.

This is what I have been saying all along. nGreedia has not made a new architecture in 6 years now! It's all minor tweaks, blown and unblown fuses, process improvements, brute-force clocks, and adding more of the same to the die. nGreedia cannot keep doing this for much longer.
This is objectively wrong.

I don't even understand why comparing Nvidia vs AMD is relevant when Nvidia has over 90% of the consumer GPU market share, and they keep winning more of it.

AMD is out of the game at this level. They need a breakthrough if they want to come back.
They're relevant because they sell cards in a competitive market. Duh.
Also, I'm not an Nvidia fan; I've had some ATI/AMD cards in the past, and Nvidia having a monopoly is not good for retail prices. But you can't deny the gap is abysmal.
The gap has shrunk significantly this generation, and given the attitudes of the respective companies involved, it's likely we're going to see the same thing repeat. UDNA is very promising.
 
Nvidia still brings in nearly $10 billion from gaming GPUs. They arguably sell MORE GeForce cards than they do compute cards. People need to get over this "we're not the only market so nobody cares" schtick.


This is objectively wrong.


They're relevant because they sell cards in a competitive market. Duh.

The gap has shrunk significantly this generation, and given the attitudes of the respective companies involved, it's likely we're going to see the same thing repeat. UDNA is very promising.
The market is no longer competitive, as Nvidia has a monopoly in the GPU space. This is not a criticism but a reality. It will take some time, and quite possibly government involvement depending on how bad things get, to undo the monopoly.

Nvidia brings in $10B per year from GeForce but almost $150B per year from data center GPUs. I don't think it is completely out of the question that Nvidia might shift priorities a bit.
 
From RDNA3 to RDNA4, going from the 7600 to the 9060 XT, the transistor count more than doubled... plus a 30% faster base clock and a 10% faster boost clock.
 
The market is no longer competitive, as Nvidia has a monopoly in the GPU space. This is not a criticism but a reality. It will take some time, and quite possibly government involvement depending on how bad things get, to undo the monopoly.
A couple generations of good cards would easily fix that.
Nvidia brings in $10B per year from GeForce but almost $150B per year from data center GPUs. I don't think it is completely out of the question that Nvidia might shift priorities a bit.
Wrong, wrong, SO hilariously wrong. :laugh: :roll: :laugh: :slap:


Nvidia's 2024 full-year revenue was $60.9 billion, of which $10.4 billion came from gaming and $47.5 billion came from data center. Given the price of a GeForce GPU vs. a data center GPU, they actually do ship more GeForces than they do Quadro/A/H/whatever chips.
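As a rough sketch of that units argument (the ASPs below are assumptions picked for illustration, not reported figures):

```python
# Rough unit-volume estimate from segment revenue, per the argument above.
gaming_revenue = 10.4e9       # gaming revenue, from the post (USD)
datacenter_revenue = 47.5e9   # data-center revenue, from the post (USD)

asp_geforce = 500        # assumed average GeForce selling price (USD)
asp_datacenter = 25_000  # assumed average data-center GPU selling price (USD)

geforce_units = gaming_revenue / asp_geforce            # ~20.8 million
datacenter_units = datacenter_revenue / asp_datacenter  # ~1.9 million
print(f"GeForce: ~{geforce_units / 1e6:.1f}M units, "
      f"data center: ~{datacenter_units / 1e6:.1f}M units")
```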
 
Nvidia still brings in nearly $10 billion from gaming GPUs. They arguably sell MORE GeForce cards than they do compute cards. People need to get over this "we're not the only market so nobody cares" schtick.
Is that revenue including the RTX 4090(D) and RTX 5090(D) cards that found their way to China, or not?
Selling something as gaming hardware when it actually ends up being used for AI surely helps raise gaming-sector revenue, doesn't it?

Wrong, wrong, SO hilariously wrong. :laugh: :roll: :laugh: :slap:
You're living in the past.

 
Is that revenue including the RTX 4090(D) and RTX 5090(D) cards that found their way to China, or not?
Selling something as gaming hardware when it actually ends up being used for AI surely helps raise gaming-sector revenue, doesn't it?
It probably does, so how many are there? What portion of GeForce GPUs are 4090s and 5090Ds? The revenue from such cards is a drop in the bucket compared to $40k+ H-series accelerators.

I'm going to go out on a limb and say not many. The majority of those gaming sales are normal GeForce cards. If there really were such a drought of GeForce GPUs as everyone claims, AMD would be climbing steadily on lists like the Steam survey. That hasn't happened.
You're living in the past.

Context. The complaint that "nVidia doesn't care about muh gamurz" in regards to Blackwell's design would be determined by revenue in late 2023 and 2024, when Blackwell was being designed, not in 2025.
 
The 7600 (Navi 33) is not full RDNA3; it reuses the cache system from RDNA2 on the 6 nm process. We'd need to compare equivalent cache systems and CU structures, but that's hard: Navi 32 uses a 20 CU per shader engine structure, and Navi 31 starts from 80 CUs... Or just wait for microbenchmarks from Chips and Cheese.

From the article you linked:

"As a result, the RX 7600’s WGPs may not be able to keep as many waves active, particularly if shaders ask for a lot of vector registers. … Rasterization will likely see minimal differences, as pixel shaders dominate rasterization workloads and tend to use very few vector registers."

The numbers I mention are from rasterization-only tests here at TPU, which is the only reasonable gaming use for a low-end GPU.
 
Wrong, wrong, SO hilariously wrong. :laugh: :roll: :laugh: :slap:
Nvidia's revenue for the trailing 12 months was $148.5B.


Come on now. Let's not fight over easy-to-look-up facts.
 
I recall reading, at the time of RDNA3's release, that the bug had to do with the shader prefetcher. Someone went through the Mesa driver code, which showed software fixes for it.
 
+20% means being on par with the 5080 two years later.
On an xx70 XT segment product?! I wish this to be the case! That would be a massive performance improvement, whereas how much faster would a 6080 be over its predecessor, 12% (as the 5080 was over the 4080)?
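A tiny sketch of that generational math (the current 5080-over-9070-XT gap is an assumed ~20% for illustration; the +20% and 12% uplifts come from the posts):

```python
# Generational math from the exchange above.
gap_5080 = 1.20              # assumed: 5080 ~20% ahead of the 9070 XT today
next_xx70xt = 1.00 * 1.20    # hypothetical xx70 XT successor at +20%
next_6080 = gap_5080 * 1.12  # a "6080" at +12%, as the 5080 was over the 4080

print(f"xx70 XT successor: {next_xx70xt:.2f} vs today's 5080: {gap_5080:.2f}")
print(f"hypothetical 6080: {next_6080:.2f}")  # the gap narrows but doesn't close
```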
 
Nvidia hits 92% GPU market share, and AMD for some strange reason is overpricing its RX 9060 XT series GPUs; the value just isn't there, unlike with the RX 9070 XT, for example. AMD hits its lowest point ever in history! Let's wait until 95%, maybe then AMD wakes up. :kookoo: AMD is dumb to miss such an opportunity, because the RTX 50 series is one of the worst GPU generations Nvidia has ever released.
[chart: AMD vs. Nvidia GPU market share, JPR]


When monopoly rules over minds.
 
Nvidia hits 92% GPU market share, and AMD for some strange reason is overpricing its RX 9060 XT series GPUs; the value just isn't there, unlike with the RX 9070 XT, for example. AMD hits its lowest point ever in history! Let's wait until 95%, maybe then AMD wakes up. :kookoo: AMD is dumb to miss such an opportunity, because the RTX 50 series is one of the worst GPU generations Nvidia has ever released.
[chart: AMD vs. Nvidia GPU market share, JPR]


When monopoly rules over minds.
You have to consider that both AMD (on the GPU side) and Nvidia now make over 90% of their revenue from selling AI chips to the likes of Meta, Microsoft, etc.

They're not missing out on much if they just continue being lackluster in gaming.
 
Nvidia hits 92% GPU market share, and AMD for some strange reason is overpricing its RX 9060 XT series GPUs; the value just isn't there, unlike with the RX 9070 XT, for example. AMD hits its lowest point ever in history! Let's wait until 95%, maybe then AMD wakes up. :kookoo: AMD is dumb to miss such an opportunity, because the RTX 50 series is one of the worst GPU generations Nvidia has ever released.
[chart: AMD vs. Nvidia GPU market share, JPR]


When monopoly rules over minds.
At this point I would say what AMD is doing is deliberate. They simply do not want to increase their market share; if they wanted to, they would reduce the price of their cards and would have manufactured more. They do not have the AI chip demand that nGreedia has, so shifting production allocation away from high-profit SKUs is not such a big deal to AMD.

There have been whispers for years of collusion between the family members heading these two companies. But another, less conspiratorial take would be that AMD management is simply incompetent...
 