Thursday, November 3rd 2022

AMD Radeon RX 7900 XTX Performance Claims Extrapolated, Performs Within Striking Distance of RTX 4090

AMD on Thursday announced the Radeon RX 7900 XTX and RX 7900 XT RDNA3 graphics cards. With these, the company claims to have repeated its feat of a 50+ percent performance/Watt gain over the previous generation, which propelled the RX 6000 series to competitiveness with NVIDIA's fastest RTX 30-series SKUs. AMD's performance claims for the Radeon RX 7900 XTX put the card anywhere between 50% and 70% faster than the company's current flagship, the RX 6950 XT, when tested at 4K UHD resolution. Digging through these claims, and piecing together relevant information from the Endnotes, HXL was able to draw an extrapolated performance comparison between the RX 7900 XTX, the real-world tested RTX 4090, and the previous-generation flagships RTX 3090 Ti and RX 6950 XT.

The graphs put the Radeon RX 7900 XTX menacingly close to the GeForce RTX 4090. In Watch Dogs: Legion, the RTX 4090 is 6.4% faster than the RX 7900 XTX. Cyberpunk 2077 and Metro Exodus see the two cards evenly matched, with a delta under 1%. The RTX 4090 is 4.4% faster in Call of Duty: Modern Warfare II (2022). Accounting for the pinch of salt usually associated with launch-date first-party performance claims, the RX 7900 XTX would end up within 5-10% of the RTX 4090, but pricing changes everything. The RTX 4090 is a $1,599 (MSRP) card, whereas the RX 7900 XTX is $999. Assuming the upcoming RTX 4080 (16 GB) is around 10% slower than the RTX 4090, the main clash this generation will be between the RTX 4080 and the RX 7900 XTX. Even here, AMD gets ahead on pricing, as the RTX 4080 was announced with an MSRP of $1,199 (exactly 20% pricier than the RX 7900 XTX). With the FSR 3.0 Fluid Motion announcement, AMD also blunted NVIDIA's DLSS 3 Frame Generation performance advantage.
Source: harukaze5719 (Twitter)

164 Comments on AMD Radeon RX 7900 XTX Performance Claims Extrapolated, Performs Within Striking Distance of RTX 4090

#151
medi01
ModEl4: -20% slower vs
You said the 4090 was 140% if the 4080 was 100%, which was wrong; it was 166%.

As for "I can grab a random leak on the internet that shows the 4090 is only 100/73 => 37% faster than the 4080", oh well.
#152
ModEl4
medi01: You said the 4090 was 140% if the 4080 was 100%, which was wrong; it was 166%.

As for "I can grab a random leak on the internet that shows the 4090 is only 100/73 => 37% faster than the 4080", oh well.
Where did you find the 166%?
My original post (the one you replied to) was about what potential performance an AD103-based RTX 4080 model could achieve if Nvidia decided to change the specs (a full die and +4% higher clocks than the current config was my proposal). For that RTX 4080 config (my original proposal) I said the 4090 should have been +39% faster based on specs, but in reality the difference would be only +25% in TPU's 5800X testbed with the current games selection, because in that particular setup the 4090 realizes around 10% less than its true potential.
+25% means 4090 = 125% and full-AD103 4080 = 100% (or 4090 = 100% and full-AD103 4080 = 80%; it's the same thing).
The Time Spy results that I quoted as an indication, if valid, show that even in synthetic results the difference between the 4090 (100) and the current, slower 4080 config (73) is much less than what you claim.
If TPU doesn't change its testbed, the average difference in games will be even smaller (around 74-75%).
No point in arguing; reviews will come in a few weeks anyway and we will see whose assumption proves true.
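
A minimal sketch of that reasoning (all figures are the hypothetical estimates from this thread, not measured data):

# Hypothetical figures from the discussion above, not measurements.
spec_delta = 1.39      # 4090 vs a full-AD103 4080, estimated from raw specs
realization = 0.90     # assume the 4090 only realizes ~90% of its potential in that testbed

measured_delta = spec_delta * realization            # ~1.25, i.e. about +25%
print(f"Expected real-world gap: +{measured_delta - 1:.0%}")

# Equivalent framing with the 4090 as the 100% baseline:
print(f"Full-AD103 4080 relative to the 4090: {1 / measured_delta:.0%}")   # ~80%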
#153
medi01
Shader distribution in NV's lineups highlights how terrible things are in the green camp; it has NEVER been this bad. The "unlaunched" 4080 12GB is basically at 2060 levels in terms of % of the halo card:

ModEl4: Where did you find the 166%?
If the 4080 is a 40% cut-down from the 4090, then the 4090 is 166% of the 4080.

As for "what is faster" and magical shaders that do much better in 4080 (with slower mem and what not) than in 4090, we'll see soon enough.
#154
efikkan
medi01: Shader distribution in NV's lineups highlights how terrible things are in the green camp; it has NEVER been this bad. The "unlaunched" 4080 12GB is basically at 2060 levels in terms of % of the halo card:
But does that really matter though?
Isn't it far more important how much performance you get per Dollar and how it compares vs. the competition and its predecessor?

I find it funny how the typical complaint over the years has been the opposite: too little difference between the three highest tiers. Quite often, a 60/60 Ti model has been "too close" to the 70/80 models (before the 90 models existed), and sometimes the 70 model has been very close to the 80 model (e.g. the GTX 970).

These days the 90 model is a much bigger step up than the old Titan models used to be. But they haven't done that by making the mid-range models worse, so what's the problem then?
#155
ModEl4
medi01: Shader distribution in NV's lineups highlights how terrible things are in the green camp; it has NEVER been this bad. The "unlaunched" 4080 12GB is basically at 2060 levels in terms of % of the halo card:

If the 4080 is a 40% cut-down from the 4090, then the 4090 is 166% of the 4080.

As for "what is faster" and magical shaders that do much better in 4080 (with slower mem and what not) than in 4090, we'll see soon enough.
I didn't say the 4080 was -40% slower than the 4090 (the base of comparison is the 4090 in that case: if 4090 = 100%, then 4080 = 100 - 40 = 60%).
I said the 4090 is +40% faster than the 4080 (the base of comparison is the 4080 in that case: if 4080 = 100%, then 4090 = 100 + 40 = 140%).
It's very basic math really; I don't know why it confuses you...
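
A small worked example of the two baselines being mixed up here (the relative scores are hypothetical, only to show the arithmetic):

# Hypothetical relative scores, only to illustrate the two baselines.
perf_4090 = 140.0    # assume the 4090 is +40% faster than the 4080
perf_4080 = 100.0

faster = (perf_4090 / perf_4080 - 1) * 100   # 4080 as the baseline
slower = (1 - perf_4080 / perf_4090) * 100   # 4090 as the baseline

print(f"4090 is +{faster:.1f}% faster than the 4080")   # +40.0%
print(f"4080 is -{slower:.1f}% slower than the 4090")   # about -28.6%

# A 4080 cut down to 60% of the 4090 is a different claim:
# then 4090 / 4080 = 100 / 60, which is where the ~166% figure comes from.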
#156
Zach_01
ModEl4: All GPCs are active in the RTX 4080; they just disabled some SMs. All they have to do is re-enable them for the AD103 dies that can be fully utilized, and the rest can be used in future cut-down AD103-based products (and also increase the clocks for the full AD103 parts).
And anyway, my point wasn't what Nvidia will do but what it could achieve based on AD103's potential...

According to the leak, even an OC'd, cut-down RTX 4080 (304 TCs enabled vs the 336 TCs of my higher-clocked full-AD103 config...) appears to be only -20% slower than the RTX 4090 in the 3DMark Time Spy Performance preset and -27% in the Extreme 4K preset...
You do your math, I will do mine!
For example, the theoretical shading performance delta alone is useless for extracting the performance difference between 2 models; it's much more complex than that...

You do realize that in those synthetic scores the CPU is involved too. Even if the CPU is the same for all of them, it contributes a different percentage to each card's final score.
To evaluate the GPU, you should look at the GPU score only.
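
A rough sketch of that effect, assuming the overall score is a weighted harmonic mean of the graphics and CPU sub-scores (the weights and sub-scores below are illustrative assumptions, not UL's published formula or real results):

# Illustrative only: how a shared CPU sub-score compresses the gap in a combined total.
# Weights and sub-scores are assumptions for the sketch, not official 3DMark values.
def combined_score(gpu_score, cpu_score, w_gpu=0.85, w_cpu=0.15):
    # weighted harmonic mean of the two sub-scores
    return 1.0 / (w_gpu / gpu_score + w_cpu / cpu_score)

cpu = 14000                    # same hypothetical CPU sub-score for both cards
gpu_a, gpu_b = 19000, 14000    # hypothetical graphics sub-scores

gap_gpu_only = gpu_a / gpu_b - 1
gap_combined = combined_score(gpu_a, cpu) / combined_score(gpu_b, cpu) - 1

print(f"GPU-score gap:      {gap_gpu_only:.1%}")   # ~35.7%
print(f"Combined-score gap: {gap_combined:.1%}")   # noticeably smaller, diluted by the CPU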
#157
ratirt
efikkan: But does that really matter though?
Isn't it far more important how much performance you get per Dollar and how it compares vs. the competition and its predecessor?

I find it funny how the typical complaint over the years has been the opposite: too little difference between the three highest tiers. Quite often, a 60/60 Ti model has been "too close" to the 70/80 models (before the 90 models existed), and sometimes the 70 model has been very close to the 80 model (e.g. the GTX 970).

These days the 90 model is a much bigger step up than the old Titan models used to be. But they haven't done that by making the mid-range models worse, so what's the problem then?
I see a slight problem with performance per $ recently; I think the companies know how to exploit that metric. Picture this:
$300 for a card that gets 100 FPS on average in a game (it doesn't matter which game). That will be our starting point.
Then a new generation releases and the same-tier card costs $420 and gets you 150 FPS.
Another gen: $540 for 200 FPS. And another: $660 for 250 FPS. Performance per $ gets better every generation, but it is still a mid-range card, the same tier you paid $300 for merely 4 years ago. The other aspect is that the four-year-old game has had 2 newer releases, and each one normally halves a graphics card's FPS. That means you don't get 250 FPS with your $660 card, which is a mid-range card nonetheless. Don't get me wrong, you still have plenty of FPS, but the problem is that you paid $660 for a card that gets around 125 FPS in the current game, compared to $300 for 100 FPS in a game 4 years ago.
Check the Far Cry franchise (from Far Cry 4 to 6) and the 980 vs 1080 vs 2080, with MSRPs of $550 (dropped to $500 in 6 months), $600, and $799 (dropped to $700 a year later) respectively. This is just to illustrate the problem.
That is exactly what NV has been doing for years. Now you get a mid-range card like the 4070 for how many $$$ today? Advertised as a 4080, to be exact. You can still say the performance per $ is good, but is it worth paying that much for the card?
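
A quick sketch of that arithmetic (all numbers are the hypothetical ones from the scenario above):

# Hypothetical numbers from the scenario above.
price_old, fps_old = 300, 100            # older mid-range card, in its contemporary game
price_new, fps_new_old_game = 660, 250   # same tier 4 years later, in that same old game
fps_new_current_game = 125               # roughly what's left once newer, heavier games arrive

print(f"Old card, old game: {fps_old / price_old:.3f} FPS/$")               # 0.333
print(f"New card, old game: {fps_new_old_game / price_new:.3f} FPS/$")      # 0.379
print(f"New card, new game: {fps_new_current_game / price_new:.3f} FPS/$")  # 0.189

# Judged against the old game, perf/$ improved; judged against current games,
# the pricier mid-range card actually delivers fewer frames per dollar.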
#158
HTC
I don't remember where exactly I saw it, but someone made an interesting observation, and I haven't been able to confirm or dispute it: during the presentation, were the 7900 GPUs referred to SPECIFICALLY as using the N31 chip?

I didn't notice myself either way but, IF IT'S TRUE, then that would explain why the XTX isn't called a 7950. Could AMD be trolling us and nVidia, and planning to launch THE REAL N31 chip at a later date?

That would also mean higher prices for the lower cards though, and that isn't a good prospect to look forward to ...
#159
Zach_01
HTC: I don't remember where exactly I saw it, but someone made an interesting observation, and I haven't been able to confirm or dispute it: during the presentation, were the 7900 GPUs referred to SPECIFICALLY as using the N31 chip?

I didn't notice myself either way but, IF IT'S TRUE, then that would explain why the XTX isn't called a 7950. Could AMD be trolling us and nVidia, and planning to launch THE REAL N31 chip at a later date?

That would also mean higher prices for the lower cards though, and that isn't a good prospect to look forward to ...
I don't understand what exactly suggests that the 7900 XTX isn't Navi31. And what would it be instead? A Navi32?
And the RDNA2 6900 XT and 6950 XT are both Navi21 chips.

On the contrary, I do believe that AMD will introduce a bigger, more expensive die down the road into 2023, but I don't know what it might be called.
#160
HTC
Zach_01: I don't understand what exactly suggests that the 7900 XTX isn't Navi31. And what would it be instead? A Navi32?
That was what I understood from what the dude said.

Like I said, I didn't notice if they specifically referred to the two 7900 cards as using N31 chips or not, but it would make SOME sense, I think.

Such a stunt would MOST CERTAINLY catch nVidia with their pants down ...
Zach_01: I do believe that AMD will introduce a bigger, more expensive die down the road into 2023
That's the most likely scenario, I agree.
#161
ratirt
HTC: I don't remember where exactly I saw it, but someone made an interesting observation, and I haven't been able to confirm or dispute it: during the presentation, were the 7900 GPUs referred to SPECIFICALLY as using the N31 chip?

I didn't notice myself either way but, IF IT'S TRUE, then that would explain why the XTX isn't called a 7950. Could AMD be trolling us and nVidia, and planning to launch THE REAL N31 chip at a later date?

That would also mean higher prices for the lower cards though, and that isn't a good prospect to look forward to ...
Maybe a design with 2 core chiplets can be released later. The new AMD design is chiplet-based, but the core is a single chiplet and the memory dies are separate ones.
#162
N3M3515
Zubasa: If it works on a similar principle, I am afraid not much can be done on latency.
They actually did say that latency will be lower on their tech.
#163
redlock81
fancucker: Honestly, the fact they didn't compare it directly to the 4090 shows you it's beneath it. And the aggressive pricing tells the story of the bad ray tracing performance. Pretty much another Nvidia win across the board this generation. Sorry AMD.
It's so close in performance that I don't think people will mind. As inflation keeps going up, I believe the almighty dollar will win: performance is so close and AMD is $600 cheaper for a reference card... that difference can buy a CPU and motherboard. RT is still in its growing stages and it really depends on how it's implemented; there are far more games without RT than with it, and plenty of games that don't look any different with the setting on, only to tank performance. Seriously, who cares about RT? Besides, Unreal Engine 5 has already demonstrated that you don't need the graphics card to even do RT, lol. Frames matter more at this point as monitors keep upping the limits. Also, AIBs have more freedom making cards for AMD: they can raise the power limit to 450 W and use GDDR7 (yes, that is a thing and available), and easily match the performance of the 4090, at least in rasterized performance. RT is dumb; until it's a total game changer, no one cares!