
NVIDIA AD103 and AD104 Chips Powering RTX 4080 Series Detailed

The 6 extra chips are just infinity cache, apparently.

I wonder how well the GPU would work without it. Unlike RDNA2, the RDNA3 flagship will have huge bandwidth to play with.
No, they aren't just Infinity Cache.
At 7nm, 128MB of Infinity Cache was around 78mm², so 96MB would be around 58.5mm². At 6nm with the same T libraries it would be around 51-54mm², or in that ballpark.
Even if they targeted much higher throughput using higher-T libraries, I don't see them more than doubling that, so 108mm² at most.
The die area of the chiplets in Navi31 will, according to rumors, be at least 225mm², so what you're saying doesn't add up imo.
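The area estimate above can be sketched numerically. A minimal Python sketch, assuming simple linear area scaling with cache capacity and an assumed ~10% area reduction for N6; the 78 mm² reference figure is the poster's claim, not an official number:

```python
# Rough sketch of the Infinity Cache die-area estimate (poster's figures, not official).
AREA_128MB_7NM = 78.0  # mm², claimed area of 128 MB Infinity Cache on N7

def cache_area(capacity_mb, node_shrink=1.0):
    """Linear capacity scaling from the 128 MB / 78 mm² reference point."""
    return AREA_128MB_7NM * (capacity_mb / 128.0) * node_shrink

area_96mb_7nm = cache_area(96)        # 78 * 96/128 = 58.5 mm²
area_96mb_6nm = cache_area(96, 0.90)  # assumed ~10% N6 shrink -> ~52.7 mm²
print(f"96 MB @ 7nm: {area_96mb_7nm:.1f} mm², @ 6nm: {area_96mb_6nm:.1f} mm²")
```

The 6nm result lands inside the 51-54 mm² ballpark quoted above.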
 
The 6 extra chips are just infinity cache, apparently.

I wonder how well the GPU would work without it. Unlike RDNA2, the RDNA3 flagship will have huge bandwidth to play with.
The six extra chips are cache AND memory controllers. These must be counted as die area, as they would typically be part of a monolithic chip, and the GPU wouldn't work without them.
 
So they pushed out three different chips right from the start? That's new, as is the 4080 not using the top 102 die.

Also, wasn't Jensen saying Moore's Law is dead? Seems pretty alive to me when the 3080 used a 600+mm² chip and the 4080 is using a 300+mm² chip :D
 
A 295 mm², 192-bit-bus card for €1100? Good luck with that one, NV.

For reference, the previous biggest (consumer) 104-die cards were:

GA104 - RTX 3070 Ti: 392 mm², 256-bit, ~€600
TU104 - RTX 2080 Super: 545 mm², 256-bit, ~€699
 
That is a lot of assumptions; it will be interesting to see if you are correct.
The die size differences (12-12.5% for AD102/Navi31 and 8-9% for AD103/Navi32) are based on the figures that leakers claimed for AMD.
The performance/W is just my estimate (the 4090 will be at most ~10% less efficient if compared at the same TBP).
AMD fans saying otherwise aren't doing AMD a favour, because promising anything more will only lead to disappointment.
Even what I'm saying is probably too much: if you take a highly overclocked Navi31 flagship partner card like the PowerColor Red Devil, ASUS Strix, or ASRock Formula, with a TBP close to 450W, what I just said means the Navi31 flagship would be at 100% performance and the 4090 at 90%, which probably isn't going to happen...
 
AD103 was supposed to be the RTX 4070 and AD104 the RTX 4060, but as there is no competition, they renamed them upward and bumped the prices up threefold.
 
AD103 was supposed to be the RTX 4070 and AD104 the RTX 4060, but as there is no competition, they renamed them upward and bumped the prices up threefold.
No competition? What do you mean by that? RDNA2 matched or beat the 30 series in raster, FSR 2.0 has great reviews, and RDNA3 will most certainly compete. Because AMD's chiplet approach should be cheaper to manufacture, RDNA3 should offer better performance per dollar. But despite all of that, everyone will buy Nvidia, reward their behavior, and perpetuate Nvidia's constant price increases.

Let's be honest, everyone: AMD could release a GPU that matched Nvidia in every way including ray tracing, with FSR equal to DLSS in every way, and charge less for it, and everyone would STILL buy Nvidia (which only proves consumer choices are quite irrational and are NOT decided by simply comparing specs, as the existence of fanboys testifies). As long as that's true, the GPU market will ALWAYS be hostile to consumers. The ONLY way things are going to improve for consumers is if AMD starts capturing market share and Nvidia is punished by buyers, but based on historical precedent, I have no hope for that...

And I don't believe Intel's presence would have improved the situation much, not as much as a wholly new company in the GPU space would have. Intel would have leveraged success in the GPU market to further marginalize AMD in the x86 space, for example by using its influence with OEMs to pair an Intel CPU with an Intel GPU and further diminish AMD's position among OEMs, which is how Intel devastated AMD in the 2000s, BTW. That GPU share would probably have been carved away from AMD's limited market share instead of Nvidia's, leaving Nvidia at 80% while AMD's 20% got divided between AMD and Intel. It would have been trading a marginally better GPU market for a much worse CPU market, imo. Although it'd never happen, what would really improve the market is if Nvidia got broken up like AT&T was in the '80s...
 
They're marketing the card as the RTX 4080 12GB to take more money from buyers when in reality it's the RTX 4070. It's time to tell people the truth and not take any more bullshit from Nvidia.
... the 4080 16 GB variant is barely a 4070, tbh, much less the 12 GB variant.
 
AD103 was supposed to be the RTX 4070 and AD104 the RTX 4060, but as there is no competition, they renamed them upward and bumped the prices up threefold.

How so?

GK104 GTX 680 @ 294 mm² full die with 1536 CUDA cores = $499; adjusted for inflation, $645 USD.

GP104 GTX 1080 @ 314 mm² full die with 2560 CUDA cores = $599; adjusted for inflation, $740 USD.


I'll agree the 4080 12GB is overpriced, but it's not the first time Nvidia has done this, relatively speaking. :) Not to defend Nvidia, but they've been doing this crap for years.

If they had priced it at $700, it would have been more in line with some of their previous G104 full-die x80-class GPUs. Margins are obviously higher now, and hardware EE is more expensive too.
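For what it's worth, the inflation multipliers implied by those figures can be back-derived in a couple of lines of Python. Note these are derived from the quoted numbers themselves, not from an independent CPI lookup:

```python
# Launch price vs. quoted 2022-dollar equivalent, as given in the post above.
cards = {
    "GTX 680 (2012)": (499, 645),
    "GTX 1080 (2016)": (599, 740),
}

for name, (launch, adjusted) in cards.items():
    mult = adjusted / launch  # implied cumulative inflation multiplier
    print(f"{name}: ${launch} x {mult:.2f} = ${adjusted} "
          f"(implied cumulative inflation {mult - 1:.0%})")
```

This puts cumulative inflation at roughly 29% since 2012 and 24% since 2016 on the poster's numbers.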
 
How so?

GK104 GTX 680 @ 294 mm² full die with 1536 CUDA cores = $499; adjusted for inflation, $645 USD.

GP104 GTX 1080 @ 314 mm² full die with 2560 CUDA cores = $599; adjusted for inflation, $740 USD.


I'll agree the 4080 12GB is overpriced, but it's not the first time Nvidia has done this relatively speaking. :)
Look at the sheer difference in SM counts between the 4080 16 GB and the 4090. The 4090 has a lot more SMs than the 4080 16 GB (128 vs 76). So the 4080 16 GB variant is around 60% of the 4090, and the 4080 12 GB variant, with 60 SMs, is around 47% of the 4090.

Meanwhile, the 3090 vs the 3080: 82 vs 68, which means the 3080 has around 83% of the 3090's SM count activated. And the 3070 Ti has 58% of the SM count of the 3090.

So, yes. You know what, the 4080 16 GB variant is actually a 4070 Ti. And the 4080 12 GB variant actually reminds me of the 3060 Ti (since both have around 47% of their respective lineup's 90-class card's core count).

So Nvidia basically named them both "4080" just so it didn't look like they were asking 1000+ euros for a 60- or 70-class card.
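The percentages above can be checked with a few lines of Python, using the SM counts as quoted in the posts:

```python
# SM counts per card, as stated in the thread.
sm = {"4090": 128, "4080 16GB": 76, "4080 12GB": 60,
      "3090": 82, "3080": 68, "3070 Ti": 48, "3060 Ti": 38}

def share_of_flagship(card, flagship):
    """Percentage of the flagship's SM count that a given card carries."""
    return sm[card] / sm[flagship] * 100

print(f"4080 16GB: {share_of_flagship('4080 16GB', '4090'):.0f}% of 4090")  # ~59%
print(f"4080 12GB: {share_of_flagship('4080 12GB', '4090'):.0f}% of 4090")  # ~47%
print(f"3080:      {share_of_flagship('3080', '3090'):.0f}% of 3090")       # ~83%
print(f"3060 Ti:   {share_of_flagship('3060 Ti', '3090'):.0f}% of 3090")    # ~46%
```

As the reply below notes, SM share is not the same thing as performance share, so this only compares where each card sits in its own lineup.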
 
I have some performance figures:

Cyberpunk 2077 @3840x2160 RT on:

RTX 4090: 46.5 FPS (+97% higher performance)
RTX 3090 Ti: 23.6 FPS
ASRock Radeon RX 6950 XT OC Formula Review - Ray Tracing | TechPowerUp
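The "+97%" figure is just the ratio of the two quoted FPS numbers:

```python
# FPS numbers as quoted above; the uplift is the simple ratio minus one.
fps_4090, fps_3090ti = 46.5, 23.6
uplift = fps_4090 / fps_3090ti - 1
print(f"RTX 4090 vs RTX 3090 Ti: +{uplift:.0%}")
```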


GeForce RTX 4090 Performance Figures and New Features Detailed by NVIDIA (wccftech.com)
 
Worthless comparison, because the test systems are completely different. W1zzard was using a Ryzen 5800X with 16 GB of RAM on Windows 10.

Those guys were using an unknown version of Windows 11, a Core i9-12900K, and 32 GB of RAM, with speed and timings unknown.
 
Worthless comparison, because the test systems are completely different. W1zzard was using a Ryzen 5800X with 16 GB of RAM on Windows 10.

Those guys were using an unknown version of Windows 11, a Core i9-12900K, and 32 GB of RAM, with speed and timings unknown.

At 4K it doesn't matter. :D
 
I’m assuming once 3000-series stock sells out, Nvidia will quietly release a 4070 that is identical to the 4080 12GB and replaces it.
 
I am looking forward to the 4080 10GB and the 4080 8GB. I think one of those would be a good upgrade for my wife, or maybe a 4080 6GB...
 
Look at the sheer difference in SM counts between the 4080 16 GB and the 4090. The 4090 has a lot more SMs than the 4080 16 GB (128 vs 76). So the 4080 16 GB variant is around 60% of the 4090, and the 4080 12 GB variant, with 60 SMs, is around 47% of the 4090.

Meanwhile, the 3090 vs the 3080: 82 vs 68, which means the 3080 has around 83% of the 3090's SM count activated. And the 3070 Ti has 58% of the SM count of the 3090.

So, yes. You know what, the 4080 16 GB variant is actually a 4070 Ti. And the 4080 12 GB variant actually reminds me of the 3060 Ti (since both have around 47% of their respective lineup's 90-class card's core count).

So Nvidia basically named them both "4080" just so it didn't look like they were asking 1000+ euros for a 60- or 70-class card.

So you do realize that SM count isn't linear performance generation to generation, right? Nvidia has also moved around the "class" of GPUs for multiple generations.

Like I said, the 4080 12GB isn't too far off from what cards like the GTX 680 or GTX 1080 were if you factor in inflation. The only difference these days is that Nvidia moved the "top end" to a higher goalpost. That's it.

Is the 4080 12GB overpriced? Yes, but it isn't too far off from certain previous x80 GPUs with full G104 dies. Like I said, EE design/cooling is also WAY more expensive these days. We're not talking about 150-200W cards anymore.

Am I the only one who realizes Nvidia has been doing this shit for years?
 
GTX680 or GTX1080

Speaking of which... if history is anything to go by, then we won't see competition from Radeon. Those were exactly the worst times for Radeon, with Vega 64 and the HD 7970.

What is AMD preparing to counter the RTX 4090 launch? :confused:
 
Speaking of which... if history is anything to go by, then we won't see competition from Radeon. Those were exactly the worst times for Radeon, with Vega 64 and the HD 7970.

What is AMD preparing to counter the RTX 4090 launch? :confused:

Who knows. I hope the leaks aren't true. AMD seems to be going the Nvidia route by downgrading specs per generation to ensure people "upgrade" sooner.

And I don't trust AMD to be a savior either. MSRP pricing on later-released RX 6000 cards during the mining crisis was a joke.

The truth is, both these companies only care about your dollar. Let them fight for it.
 
..are the gaming tech review sites going to persist in playing along with Nvidia's sham naming of this card or will they have the integrity to call it out for what it actually is?

First of all, TPU is not a "gaming" tech reviewer; it is an information website centered on technology.

Secondly, reviewers are legally bound to call a given product what the manufacturer calls it in its presentations; it's not like they can go out and call it random names... They can express their opinion on the naming/segmentation, but they can't make up names, so there's no need to make a fuss about it.

As long as they provide accurate information about the performance and the price/performance ratio, they've done their job. It's up to the customer to make the final decision based on this information.
 
And I don't believe Intel's presence would have improved the situation much, not as much as a wholly new company in the GPU space would have. Intel would have leveraged success in the GPU market to further marginalize AMD in the x86 space, for example by using its influence with OEMs to pair an Intel CPU with an Intel GPU and further diminish AMD's position among OEMs, which is how Intel devastated AMD in the 2000s, BTW. That GPU share would probably have been carved away from AMD's limited market share instead of Nvidia's, leaving Nvidia at 80% while AMD's 20% got divided between AMD and Intel. It would have been trading a marginally better GPU market for a much worse CPU market, imo. Although it'd never happen, what would really improve the market is if Nvidia got broken up like AT&T was in the '80s...
To be fair, I don't think I've ever seen an Intel CPU paired with an AMD dGPU in a laptop.
 
Insane world when a 4090 is the best value GPU.

There is no information out there to allow for an educated opinion about value; for that you need reviews first...
 
So, they are basically 4070 and 4060.

Yes, the confusion comes from the seemingly large leap but bear in mind that the old generation was built on Samsung 8N, which is a 12nm process node in reality.
 
Yes, the confusion comes from the seemingly large leap but bear in mind that the old generation was built on Samsung 8N, which is a 12nm process node in reality.
I don't think we are confused about anything here. There were node leaps in the past as well (case in point, Maxwell to Turing), but we've never seen the core ratio of the Titan (the biggest chip) to the 1080 be this skewed.
 
Yes, the confusion comes from the seemingly large leap but bear in mind that the old generation was built on Samsung 8N, which is a 12nm process node in reality.
It is closest to TSMC's 10 nm; TSMC's 12 nm is 16 nm with different standard cells.
 