
NVIDIA AD103 and AD104 Chips Powering RTX 4080 Series Detailed

Which means a 4080 Ti (when it arrives) will probably be only ~10% faster than the 4080 16 GB (76/80 SMs enabled), using a maxed-out AD103 die (with 80/80 SMs enabled).
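A quick sanity check on that figure (a back-of-the-envelope sketch; the ~5% clock bump for the better bin is my assumption, and linear SM scaling ignores bandwidth and power limits):

```python
# Hypothetical full-AD103 "4080 Ti" vs the 4080 16 GB.
# Assumes performance scales linearly with SM count times clock speed.
sm_4080 = 76       # 4080 16 GB: 76 of AD103's 80 SMs enabled
sm_full = 80       # a maxed-out AD103
clock_bump = 1.05  # assumed ~5% higher clocks for the better bin

uplift = (sm_full / sm_4080) * clock_bump - 1
print(f"Estimated uplift: {uplift:.1%}")  # ~10.5%
```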

Which will still leave a huge gap to the 4090...

I guess they could cut down the AD102 die a ton to create something truly in the middle of the 4090 and the 4080 16 GB. But I think it's just going to be a maxed-out AD103 die. :/
 
It is closest to TSMC's 10 nm; TSMC's 12 nm is 16 nm with different standard cells.

No..

It is closest to 12 nm Ground Rules:

[attached screenshot: 12 nm ground-rules comparison table]

10 nm process - Wikipedia
 
...are the gaming tech review sites going to persist in playing along with Nvidia's sham naming of this card, or will they have the integrity to call it out for what it actually is?

Yep, I cannot believe they are charging $900 for a 295.4 mm² die. That's a massive reduction in size compared to the 3070, and less than half the size of the 3080. Mind you, Nvidia is charging $200 more than for a 3080. It's got to be a joke.
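The arithmetic checks out; here's a quick sketch using the commonly listed die sizes (treat the figures as approximate):

```python
# Die areas in mm^2, as commonly listed publicly.
ad104 = 295.4  # the "4080 12 GB"
for name, area in [("GA104 (3070)", 392.5), ("GA102 (3080)", 628.4)]:
    print(f"AD104 is {ad104 / area:.0%} the size of {name}")
# -> ~75% of GA104, ~47% of GA102
```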
 
If you consider that the 3080 was superior to the previous-gen 2080 Ti by around 25-30%, it'll be telling where the 4080 12 GB comes in. Notably, it'd have to be apples to apples, without artificially constrained software implementations (i.e. DLSS 3).

But if it's not similarly superior to the 3080 Ti, then I'll consider the 4080 12 GB to be a ludicrous proposition.
 
In the end, as always, what matters is performance per $.

The name is the least important part, followed by the memory bus.

I'd say that die size is pretty important as well. If you know the die size of a product, you can see how much value you are getting relative to how much it costs Nvidia and relative to the rest of the stack. This "4080" has an extremely small die size compared to past generations and to the 4090, so it stands to reason that at its current price it's terrible value, and that Nvidia could have provided far more value for the customer, even in comparison to just last generation.
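Die size translates almost directly into how many chips Nvidia gets per wafer. A rough sketch using the textbook dies-per-wafer approximation (die sizes are the commonly cited figures; scribe lines, edge exclusion and yield are ignored):

```python
import math

def dies_per_wafer(die_area_mm2: float, wafer_diameter_mm: float = 300) -> float:
    """Gross die candidates per wafer; ignores scribe lines and yield."""
    r = wafer_diameter_mm / 2
    return (math.pi * r**2 / die_area_mm2
            - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

for name, area in [("AD104", 295.4), ("AD102", 608.4)]:
    print(f"{name} ({area} mm^2): ~{dies_per_wafer(area):.0f} candidates per wafer")
# AD104 yields roughly twice as many candidates per 300 mm wafer as AD102.
```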
 
No they don't. Horseshit naming schemes need pointing out to noobs, and only an Nvidia shareholder or fan would disagree.

Nvidia makes the confusion, not the press.

If the press causes more confusion, good. It might make buyers think more before purchasing.

A 104 die has never been an x80-class GPU in Nvidia's entire lifespan, until today.

Wuuuut? GTX 1080 begs to differ... or GTX 980... or the 680, or...

The new kid on the block here is the fact there is a 103 SKU in between the 102 and 104. And even the 102 is a spin-off from the Titan's 110.

The fact is we're looking at the exact opposite situation: they can't place a 104 that high in the stack anymore. They need TWO bigger SKUs to cater to the top end of the stack, and the 104 is upper midrange at best - it no longer stands in as the early high end. It used to get succeeded by a 102 only later in the gen; now they have to place it at the front of the gen to make even a tiny bit of impact compared to the last one. Ada's current positioning is the best example: they're not even removing the 3xxx cards below it; we're looking at ONLY 102 dies populating half their new stack from gen to gen for a while. These are all signs Nvidia is running into new territory wrt their wiggle room: we're paying for that wiggle room now on the GeForce stack; the 102/103 SKUs are simply new tiers, also in terms of price, and they need every single piece of it to survive against the competition.

Back in the HD days, they could make do with a 104 and then destroy AMD with a 102, either cut down or full, later on. Which is what they did, up until they started pushing RT. Ever since, the changes happened: prices soared and VRAM magically went poof. The price of RT... again... ;) Are we still laughing about this fantastic tech?
 
No they don't. Horseshit naming schemes need pointing out to noobs, and only an Nvidia shareholder or fan would disagree.

Nvidia makes the confusion, not the press.

If the press causes more confusion, good. It might make buyers think more before purchasing.

A 104 die has never been an x80-class GPU in Nvidia's entire lifespan, until today.
There's quite a justifiable riot around the not-RTX 4070, and rightly so, but calling it a name it's not called by its makers might confuse people who are completely out of the loop. We're talking about an official capacity here, not the depths of tech forums.

This will only get worse when an actual RTX 4070 (or not-RTX 4060) comes out.

The press will share their opinions on the naming in their own content pieces.
 
Let's see how this plays out... "Insert popcorn meme here"
We have seen it in 2012 when Kepler debuted: x80 naming for a 104 GPU. Move along, nothing new to see here.
 
RTX 4080 12 gigs = GTX 1060 3 gigs: déjà vu all over again, and we all know how poorly the 1060 3 gigs aged over time. Nvidia's greed just hit new highs. I really hope AMD shoots them down, but I'm not holding my breath, as industry rumors suggest AMD has been allocating its wafers from the Radeon division to Zen 4 and Epyc CPUs lately. Navi 3 might be awesome, but it looks like there won't be enough of it around to eat into Ngreedia's market share, and Huang knows that; that's why he's confidently showing the middle finger to value buyers :banghead:
 
Exactly. All chips and chiplets should be added together for total die area.
The point is that they are manufactured separately, on different nodes, and thus both have better yields and lower cost than one monolithic chip manufactured on a more expensive process with worse yields. In this sense it is disingenuous to count them all together as one big die.
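A toy defect-density model shows why; the 0.1 defects/cm² and the die sizes below are illustrative assumptions, not foundry data:

```python
import math

def poisson_yield(die_area_mm2: float, d0_per_cm2: float = 0.1) -> float:
    """Poisson yield model: Y = exp(-A * D0), with A in cm^2."""
    return math.exp(-(die_area_mm2 / 100) * d0_per_cm2)

print(f"600 mm^2 monolithic die: {poisson_yield(600):.0%} yield")  # ~55%
print(f"300 mm^2 chiplet:        {poisson_yield(300):.0%} yield")  # ~74%
print(f"150 mm^2 chiplet:        {poisson_yield(150):.0%} yield")  # ~86%
# Bad chiplets are binned out before packaging, so each defect costs a
# small chiplet rather than one huge monolithic die.
```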
 
Which means a 4080 Ti (when it arrives) will probably be only ~10% faster than the 4080 16 GB (76/80 SMs enabled), using a maxed-out AD103 die (with 80/80 SMs enabled).

Unless NVIDIA decides to use the 4090's AD102 die instead for the 4080 Ti.

They actually did this with the previous Ampere generation. The 3080 Ti uses the same GPU die as the 3090 and 3090 Ti.

Most PC hardware reviewers erroneously compared the 3080 Ti to the 3080. The 3080 Ti was essentially a slightly binned 3090 with half the VRAM.
 
[ ... ]
A 104 die has never been an x80-class GPU in Nvidia's entire lifespan, until today.
untrue.

historically, the x80 (which had always been an anemic idiot choice tbh - not to be confused w/ the x80ti) had always been the 104 die. maxwell, pascal, turing - you name it.
but those were also the days when the x80ti was like twice as powerful, and the gap between the x80 and the x80ti was larger than between the x60 and the x80.

'twas not until ampere that nv decided to change their tiering and abolish the x80ti as the halo card.
 
Wuuuut? GTX 1080 begs to differ... or GTX 980... or the 680, or...

The new kid on the block here is the fact there is a 103 SKU in between the 102 and 104. And even the 102 is a spin-off from the Titan's 110.

The fact is we're looking at the exact opposite situation: they can't place a 104 that high in the stack anymore. They need TWO bigger SKUs to cater to the top end of the stack, and the 104 is upper midrange at best - it no longer stands in as the early high end. It used to get succeeded by a 102 only later in the gen; now they have to place it at the front of the gen to make even a tiny bit of impact compared to the last one. Ada's current positioning is the best example: they're not even removing the 3xxx cards below it; we're looking at ONLY 102 dies populating half their new stack from gen to gen for a while. These are all signs Nvidia is running into new territory wrt their wiggle room: we're paying for that wiggle room now on the GeForce stack; the 102/103 SKUs are simply new tiers, also in terms of price, and they need every single piece of it to survive against the competition.

Back in the HD days, they could make do with a 104 and then destroy AMD with a 102, either cut down or full, later on. Which is what they did, up until they started pushing RT. Ever since, the changes happened: prices soared and VRAM magically went poof. The price of RT... again... ;) Are we still laughing about this fantastic tech?
Most of the criticism is due to the price. Ampere has created the discontinuity, because from Maxwell to Turing, the 2nd largest die was used for the x80 GPU while the largest was used for the x80 Ti. Ampere used the largest die for both the x80 and what used to be the x80 Ti tier (x90 now). When the smaller die was used for the x80 tier, the gap in prices was also greater than is the case now. Moreover, AD102 has 80% more SMs than AD103; this hasn't been the case in a long time. The last time this happened, we had the 770 and the 780/780 Ti. There the gap was 87.5% in favour of the larger die, but the gap in price was also much greater than now. The 770 was selling for $330 by the time the 780 Ti was being sold for $700. The 1080 was sold for $500 when the 1080 Ti was $700, and the 2080 was sold for $699-$799 compared to $999-$1199 for the 2080 Ti.
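The ratios drop straight into a few lines of Python (SM/SMX counts per die; the Kepler prices are the ones quoted above, the Ada prices are the 4090 and 4080 16 GB MSRPs):

```python
# (generation, big-die SMs, x80-die SMs, big-card price, x80-card price)
gens = [
    ("Kepler: GK110 (780 Ti) vs GK104 (770)",   15,   8,  700,  330),
    ("Ada: AD102 (4090) vs AD103 (4080 16 GB)", 144, 80, 1599, 1199),
]
for name, big_sm, small_sm, big_price, small_price in gens:
    print(f"{name}: +{big_sm / small_sm - 1:.1%} SMs, "
          f"+{big_price / small_price - 1:.0%} price")
# Kepler: +87.5% SMs at +112% price; Ada: +80.0% SMs at only +33% price.
```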
 
'twas not until ampere that nv decided to change their tiering and abolish the x80ti as the halo card.

The x90 cards are really Titans in all but name. Whether they are called Titan or 4090 is a marketing decision that now has a habit of changing without notice.

It's not wise to compare NVIDIA's graphics card generations by comparing model numbers since they aren't consistent with what each model number represents. It's really just a numerical slot relative to a given card's placement within that generation's product stack.

Clearly NVIDIA's current strategy is to use binning to maximize gross margin. They're not going to sell Joe Gamer an $800 graphics card with a GPU that can be overclocked +30%. Those days are over. They're going to keep those binned chips, relabel them, and sell them at a higher price like $1200.
 
Unless NVIDIA decides to use the 4090's AD102 die instead for the 4080 Ti.

They actually did this with the previous Ampere generation. The 3080 Ti uses the same GPU die as the 3090 and 3090 Ti.

Most PC hardware reviewers erroneously compared the 3080 Ti to the 3080. The 3080 Ti was essentially a slightly binned 3090 with half the VRAM.
The 3080 is also derived from the same die; they used one die for a ridiculous number of SKUs:
  • 3080 10 GB
  • 3080 12 GB
  • 3080 Ti
  • 3090
  • 3090 Ti
 
There's quite a justifiable riot around the not-RTX 4070, and rightly so, but calling it a name it's not called by its makers might confuse people who are completely out of the loop. We're talking about an official capacity here, not the depths of tech forums.

This will only get worse when an actual RTX 4070 (or not-RTX 4060) comes out.

The press will share their opinions on the naming in their own content pieces.
This could all have been avoided, but... Nvidia.

untrue.

historically, the x80 (which had always been an anemic idiot choice tbh - not to be confused w/ the x80ti) had always been the 104 die. maxwell, pascal, turing - you name it.
but those were also the days when the x80ti was like twice as powerful, and the gap between the x80 and the x80ti was larger than between the x60 and the x80.

'twas not until ampere that nv decided to change their tiering and abolish the x80ti as the halo card.
Show me a TPU review of an x80-class card, any of them, with a 104 chip then.
 
The 3080 is also derived from the same die; they used one die for a ridiculous number of SKUs:
  • 3080 10 GB
  • 3080 12 GB
  • 3080 Ti
  • 3090
  • 3090 Ti
Ack, you're right. I must have been thinking about something else.

Anyhow, the way NVIDIA binned their GA102 GPUs, the 3080 Ti still ended up much closer to the 3090 (CUDA, RT, Tensor) than to the 3080.

The main point is that, regardless of the given die, NVIDIA is going to bin the foundry's output and differentiate GPUs to maximize gross margin, whatever final model number they slap on the chip.

One thing that is becoming increasingly evident is that their best silicon is destined for datacenter systems, not consumer gaming PCs.
 
No competition? What do you mean by that? RDNA2 matched or beat the 30 series in raster, FSR 2.0 has great reviews, and RDNA3 will most certainly compete; and because AMD's chiplet approach should be cheaper to manufacture, RDNA3 should offer better performance per dollar... but despite all of that, everyone will buy Nvidia, rewarding their behavior and perpetuating Nvidia's constant price increases.

Let's be honest, everyone: AMD could release a GPU that matched Nvidia in every way, including ray tracing, have FSR equal to DLSS in every way, and charge less than Nvidia for it, and everyone would STILL buy Nvidia (which only proves consumer choices are quite irrational and are NOT decided by simply comparing specs, as the existence of fanboys testifies)... and as long as that's true, the GPU market will ALWAYS be hostile to consumers. The ONLY way things are going to improve for consumers is if AMD starts capturing market share and Nvidia is punished by consumers... but based on historical precedent, I have no hope for that...

And I don't believe Intel's presence would have improved the situation much, not as much as a wholly new company in the GPU space would have, because Intel would have leveraged success in the GPU market (which would probably have been carved away from AMD's limited market share instead of Nvidia's, leaving Nvidia's share at 80% and AMD's 20% divided between AMD and Intel) to further marginalize AMD in the x86 space (for example, by using its influence with OEMs to pair an Intel CPU with an Intel GPU and further diminish AMD's position among OEMs, which is how Intel devastated AMD in the 2000s, BTW). It would have been trading a marginally better GPU market for a much worse CPU market, imo. Although it'd never happen, what would really improve the market would be if Nvidia got broken up like AT&T was in the 80s...
Nope, AMD did not match Nvidia in raster. Not in 4K, where it matters. Also, there was no FSR back in 2020, and RT performance is mediocre. WHEN and IF AMD has a competitive product, people will buy AMD cards. Pretending RDNA2 was comparable to Ampere doesn't cut it. It just wasn't.
 
Show me a TPU review of an x80-class card, any of them, with a 104 chip then.

The only post-Fermi x80s NOT based on a 104 were the 780 and 3080.
 
SM count, CUDA cores, Tensor cores, RT cores, memory bus width, L2 cache size, etc. - all those specs are secondary to actual performance per watt and the real-world smackeroonies needed to buy the product. To make an informed judgement on the current gen of Nvidia graphics cards, we need reliable tests. Then we can have a fruitful discussion about Ada.
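Once reviews land, the comparison is trivial to run; a toy sketch where the FPS figures are pure placeholders (only the MSRPs are real):

```python
# Performance per dollar from review data; FPS values here are made up.
cards = {
    "RTX 3080 10 GB": {"avg_fps_4k": 60.0, "price_usd": 699},  # placeholder FPS
    "RTX 4080 12 GB": {"avg_fps_4k": None, "price_usd": 899},  # no tests yet
}
for name, c in cards.items():
    if c["avg_fps_4k"] is None:
        print(f"{name}: waiting for reliable benchmarks")
    else:
        print(f"{name}: {c['avg_fps_4k'] / c['price_usd']:.3f} FPS per dollar")
```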
 
I'd say that die size is pretty important as well. If you know the die size of a product, you can see how much value you are getting relative to how much it costs Nvidia and relative to the rest of the stack. This "4080" has an extremely small die size compared to past generations and to the 4090, so it stands to reason that at its current price it's terrible value, and that Nvidia could have provided far more value for the customer, even in comparison to just last generation.
Why the flying f* do you care about the manufacturer's profit?? How can it be a factor at all?
If the product suits your needs, fits your budget, and it's the best price/performance in its segment, then get it.
Simple as that.
 