
AMD Radeon RX 7900 XTX Performance Claims Extrapolated, Performs Within Striking Distance of RTX 4090

Nvidia has every opportunity to tweak the lineup, and the better half isn't even out... They always ran the risk of misfires because they release first.
I don't think that was a misfire, since that would suggest something unpredictable they were trying to tackle. Nvidia's actions were intentional, both in the pricing and in releasing the 4090 as the first card, and it's obvious why. From what they've said so far about pricing, all of these graphics cards are a joke. Then they pulled the 4080 12GB from the launch, because that was literally a flying circus.
 
It shows you've been retired for a while now, because this is absolute nonsense.
Even if what I say is untrue (which I kind of doubt, considering I follow the Linux community), doesn't it suck to have stigmas?
 
-20% slower vs
You said the 4090 was 140% if the 4080 was 100%; that was wrong, it was 166%.

As for "I can grab a random leak off the internet that shows the 4090 is only 100/73 => 37% faster than the 4080", oh well.
 
You said the 4090 was 140% if the 4080 was 100%; that was wrong, it was 166%.

As for "I can grab a random leak off the internet that shows the 4090 is only 100/73 => 37% faster than the 4080", oh well.
Where did you find the 166%?
My original post (the one you replied to) was about the potential performance an AD103-based RTX 4080 could achieve if Nvidia decided to change the specs (my proposal was the full die with ~4% higher clocks than the current config). For that RTX 4080 config, I said the 4090 should have been +39% faster based on specs alone, but in reality the difference would be only about +25% on TPU's 5800X testbed with the current game selection, because in that particular setup the 4090 realizes around 10% less than its true potential.
+25% means 4090 = 125% and full-AD103 4080 = 100% (or 4090 = 100% and full-AD103 4080 = 80%; it's the same thing).
The Time Spy results I quoted as an indication, if valid, show that even in synthetics the difference between the 4090 (100) and the current, slower 4080 config (73) is much smaller than what you claim.
If TPU doesn't change the testbed, the average difference in games will be even smaller (slightly different, around 74-75%).
No point arguing; reviews will come in a few weeks anyway and we will see whose assumption proves true.
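A minimal sketch in Python of the arithmetic in this post, using only the hypothetical numbers above (the +39% spec delta and the ~10% loss of potential are the poster's assumptions, not measurements), showing how a spec-based gap gets compressed in a given testbed:

```python
# Hypothetical figures from the post above: a full-AD103 RTX 4080 config,
# with the RTX 4090 +39% faster on paper but realizing ~10% less of its
# potential on the 5800X testbed.

spec_delta = 1.39          # 4090 vs full-AD103 4080, based on specs alone
realization = 0.90         # fraction of its potential the 4090 reaches in this testbed

effective_delta = spec_delta * realization
print(f"Effective 4090 advantage: +{(effective_delta - 1) * 100:.0f}%")   # ~ +25%

# Same gap expressed with the 4090 as the 100% baseline:
print(f"4080 relative to 4090: {100 / effective_delta:.0f}%")             # ~ 80%
```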
 
Shader distribution in NV lineups highlights how terrible things are in the green camp; it has NEVER been this bad. The "unlaunched" 4080 12GB is basically at 2060 level in terms of % of the halo card:

[Chart: shader counts across Nvidia lineups, shown as a percentage of each generation's halo card]
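As a rough illustration of the "percentage of the halo" point, here is a small Python sketch using publicly listed CUDA core counts; the choice of halo card per generation and the exact figures are my assumptions, not taken from the chart:

```python
# Shader (CUDA core) count of a mid-tier SKU expressed as a share of that
# generation's halo card. Counts are the commonly published specs.
lineups = {
    "Turing": {"halo": ("Titan RTX", 4608), "mid": ("RTX 2060", 1920)},
    "Ada":    {"halo": ("RTX 4090", 16384), "mid": ("RTX 4080 12GB", 7680)},
}

for gen, parts in lineups.items():
    halo_name, halo_cores = parts["halo"]
    mid_name, mid_cores = parts["mid"]
    share = 100 * mid_cores / halo_cores
    print(f"{gen}: {mid_name} has {share:.1f}% of the {halo_name}'s shaders")

# Turing: RTX 2060 has 41.7% of the Titan RTX's shaders
# Ada:    RTX 4080 12GB has 46.9% of the RTX 4090's shaders
```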



Where did you find the 166%?
If the 4080 is a 40% cutdown from the 4090, then the 4090 is 166% of the 4080.

As for "what is faster" and magical shaders that do much better in 4080 (with slower mem and what not) than in 4090, we'll see soon enough.
 
Shader distribution in NV lineups highlights how terrible things are in the green camp; it has NEVER been this bad. The "unlaunched" 4080 12GB is basically at 2060 level in terms of % of the halo card:
But does that really matter though?
Isn't it far more important how much performance you get per Dollar and how it compares vs. the competition and its predecessor?

I find it funny how the typical complaint over the years has been the opposite: too little difference between the three highest tiers. Quite often, a 60/60 Ti model has been "too close" to the 70/80 models (before the 90 models existed), and sometimes the 70 model has been very close to the 80 model (e.g. the GTX 970).

These days the 90 model is a much bigger step up than the old Titan models used to be. But they haven't done that by making the mid-range models worse, so what's the problem then?
 
Shader distribution in NV lineups highlights how terrible things are in the green camp; it has NEVER been this bad. The "unlaunched" 4080 12GB is basically at 2060 level in terms of % of the halo card:

[Attachment 269212: shader distribution chart]



If the 4080 is a 40% cutdown from the 4090, then the 4090 is 166% of the 4080.

As for "what is faster" and magical shaders that do much better in 4080 (with slower mem and what not) than in 4090, we'll see soon enough.
I didn't say the 4080 was -40% slower than the 4090 (the base of comparison would be the 4090 in that case: if 4090 = 100%, then 4080 = 100 - 40 = 60%).
I said the 4090 is +40% faster than the 4080 (the base of comparison is the 4080 in this case: if 4080 = 100%, then 4090 = 100 + 40 = 140%).
It's very basic math/logic stuff really; I don't know why it confuses you...
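For anyone skimming the back-and-forth, a tiny sketch of why "-40% slower" and "+40% faster" are not the same gap; the asymmetry comes purely from which card is the 100% base:

```python
# "4080 is 40% slower than the 4090"  ->  4080 = 0.60 of the 4090
# The same gap with the 4080 as the base:
print(1 / 0.60)   # 1.666...  ->  the 4090 would be ~166% of the 4080

# "4090 is 40% faster than the 4080"  ->  4090 = 1.40 of the 4080
# The same gap with the 4090 as the base:
print(1 / 1.40)   # 0.714...  ->  the 4080 would be ~71% of the 4090

# The leaked Time Spy ratio quoted earlier (4090 = 100, cut-down 4080 = 73):
print(100 / 73)   # 1.369...  ->  ~ +37% with the 4080 as the base
```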
 
All GPCs are active in the RTX 4080; they just disabled some SMs. All Nvidia would have to do is re-enable them for the AD103 dies that can be fully utilized, use the rest for future cut-down AD103-based products, and also increase the clocks for the full AD103 parts.
And anyway, my point wasn't what Nvidia will do, but what it could achieve based on AD103's potential...


According to the leak, even an OC'd cut-down RTX 4080 (304 TCs enabled vs the 336 TCs of my higher-clocked full-AD103 config...) appears to be only -20% slower than the RTX 4090 in the 3DMark Time Spy Performance preset and -27% in the Extreme 4K preset...
You do your math, I will do mine!
For example, the theoretical shading performance delta alone is useless for extracting the performance difference between two models; it's much more complex than that...

[Attachment 268945: leaked 3DMark Time Spy scores]
You do realize that the CPU is involved in those synthetic overall scores too. Even assuming the CPU is the same for all of them, it contributes a different percentage to each final score.
To evaluate the GPU you should look at the GPU score only.
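To illustrate why an overall Time Spy number understates a GPU-only gap, here is a small sketch. It assumes the commonly cited weighting of roughly 85% graphics / 15% CPU combined as a weighted harmonic mean; the exact formula and weights are UL's, so treat them as an assumption here, and the scores below are made-up round numbers:

```python
# Overall score modeled as a weighted harmonic mean of graphics and CPU scores
# (weights are an assumption; the scores are illustrative, not real results).
def overall(graphics: float, cpu: float, w_gpu: float = 0.85, w_cpu: float = 0.15) -> float:
    return 1.0 / (w_gpu / graphics + w_cpu / cpu)

cpu_score = 14_000                     # same CPU in both runs
gpu_fast, gpu_slow = 19_000, 13_800    # ~ +38% graphics-score gap

fast = overall(gpu_fast, cpu_score)
slow = overall(gpu_slow, cpu_score)

print(f"Graphics-only gap: +{(gpu_fast / gpu_slow - 1) * 100:.0f}%")   # ~ +38%
print(f"Overall-score gap: +{(fast / slow - 1) * 100:.0f}%")           # ~ +30%, noticeably smaller
```

The shared CPU term drags the two overall scores toward each other, which is why the GPU-only (graphics) score is the number to compare.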
 
But does that really matter though?
Isn't it far more important how much performance you get per Dollar and how it compares vs. the competition and its predecessor?

I find it funny how the typical complaint over the years has been the opposite: too little difference between the three highest tiers. Quite often, a 60/60 Ti model has been "too close" to the 70/80 models (before the 90 models existed), and sometimes the 70 model has been very close to the 80 model (e.g. the GTX 970).

These days the 90 model is a much bigger step up than the old Titan models used to be. But they haven't done that by making the mid-range models worse, so what's the problem then?
I find a slight problem with performance per dollar recently; I think the companies know how to exploit that metric. Picture this:
$300 for a card that gets 100 FPS on average in a game (it doesn't matter which game). That will be our starting point.
Then a new generation releases and the same-tier card costs $420 and gets 150 FPS.
Another generation: $540 for 200 FPS. And another: $660 for 250 FPS. Performance per dollar improves every generation, but it is still a mid-range card, the same tier you paid $300 for merely four years ago. The other aspect is that the four-year-old game has had two new releases, and each one normally halves a card's FPS. That means you don't actually get 250 FPS with your $660 card, which is a mid-range card nonetheless. Don't get me wrong, you still have plenty of FPS, but the problem is that you paid $660 for a card getting around 125 FPS in the current game, compared to $300 for 100 FPS four years ago.
Check the Far Cry franchise (Far Cry 4 through 6) and the 980 vs 1080 vs 2080, with MSRPs of $550 (dropped to $500 within 6 months), $600, and $799 (dropped to $700 a year later) respectively. This is just to illustrate the problem.
That is exactly what NV has been doing for years. Now you get a mid-range card like the 4070 for how many dollars today? Advertised as a 4080, to be exact. You can still say the performance per dollar is good, but is it worth paying that much for the card?
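A quick Python sketch of the scenario in this post. All figures are the hypothetical ones above; the ~125 FPS end point is the post's own number, corresponding to roughly one halving of the 250 FPS figure:

```python
# Hypothetical data from the post: price of the same-tier card and its FPS
# in the original game, for four successive generations.
gens = [(300, 100), (420, 150), (540, 200), (660, 250)]

for i, (price, fps) in enumerate(gens, start=1):
    print(f"Gen {i}: ${price} -> {fps} FPS  ({fps / price:.3f} FPS per $)")
# Perf per dollar does improve: 0.333 -> 0.357 -> 0.370 -> 0.379,
# but the absolute price of the tier has more than doubled.

# Factor in heavier game releases (the post's rule of thumb: a new entry
# roughly halves the frame rate). Using the post's ~125 FPS end point:
fps_in_newest_game = 250 / 2
print(f"${gens[-1][0]} now buys ~{fps_in_newest_game:.0f} FPS in the current game, "
      f"vs $300 for 100 FPS in the original game four years earlier.")
```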
 
I don't remember where exactly I saw it, but someone made an interesting observation, and I haven't been able to confirm or dispute it: during the presentation, were the 7900 GPUs referred to SPECIFICALLY as using the N31 chip?

I didn't notice myself either way but, IF IT'S TRUE, that would explain why the XTX isn't called a 7950. Could AMD be trolling us and nVidia and planning to launch THE REAL N31 chip @ a later date?

That would also mean higher prices for the lower cards though, and that isn't a good prospect to look forward to ...
 
I don't remember where exactly I saw it, but someone made an interesting observation, and I haven't been able to confirm or dispute it: during the presentation, were the 7900 GPUs referred to SPECIFICALLY as using the N31 chip?

I didn't notice myself either way but, IF IT'S TRUE, that would explain why the XTX isn't called a 7950. Could AMD be trolling us and nVidia and planning to launch THE REAL N31 chip @ a later date?

That would also mean higher prices for the lower cards though, and that isn't a good prospect to look forward to ...
I don't understand what exactly suggests that the 7900 XTX isn't Navi 31. And what might it be? A Navi 32?
Also, the RDNA2 6900 XT and 6950 XT are both Navi 21 chips.

On the contrary, I do believe AMD will introduce a bigger, more expensive die down the road into 2023, but I don't know what it might be called.
 
I don't understand what exactly suggests that the 7900 XTX isn't Navi 31. And what might it be? A Navi 32?

That was what I understood from what the dude said.

Like I said, I didn't notice whether they specifically referred to the two 7900 cards as using N31 chips or not, but it would make SOME sense, I think.

Such a stunt would MOST CERTAINLY catch nVidia with their pants down ...

I do believe AMD will introduce a bigger, more expensive die down the road into 2023

That's the most likely scenario, I agree.
 
I don't remember where exactly I saw it, but someone made an interesting observation, and I haven't been able to confirm or dispute it: during the presentation, were the 7900 GPUs referred to SPECIFICALLY as using the N31 chip?

I didn't notice myself either way but, IF IT'S TRUE, that would explain why the XTX isn't called a 7950. Could AMD be trolling us and nVidia and planning to launch THE REAL N31 chip @ a later date?

That would also mean higher prices for the lower cards though, and that isn't a good prospect to look forward to ...
Maybe a design with two compute chiplets can be released later. The new AMD design is chiplet-based, but the compute is a single chiplet (the GCD) and the memory/cache controllers are separate chiplets (the MCDs).
 
If it works on a similar principle, I'm afraid not much can be done about latency.
They actually did say that latency will be lower with their tech.
 
Honestly the fact they didn't compare it directly to the 4090 shows you it's beneath it. And the aggressive pricing tells the story of the bad ray tracing performance. Pretty much another Nvidia win across the board this generation. Sorry AMD.
It's so close in performance that I don't think people will mind. As inflation keeps going up, I believe the almighty dollar will win: with performance that close and AMD $600 cheaper for a reference card, the difference can buy a CPU and motherboard.

RT is still in its growing stages and really depends on how it's implemented. There are far more games without RT than with it, and many of the games that do have it look no different with the setting on, only tanking performance. Seriously, who cares about RT? Besides, Unreal Engine 5 has already shown that you don't even need the graphics card's RT hardware to do ray tracing, lol. Frames matter more at this point, as monitors keep upping the limits.

Also, AIBs have more freedom making cards for AMD: they can raise the power limit to 450 W and use GDDR7 (yes, that is a thing and available) and easily match the 4090's performance, at least in rasterization. RT is dumb; until it's a total game changer, no one cares!
 