Monday, November 7th 2022

NVIDIA GeForce RTX 4080 Founders Edition PCB Pictured, Revealing AD103 Silicon

Here's the first picture of the PCB of the NVIDIA GeForce RTX 4080 Founders Edition. With NVIDIA cancelling the AD104-based GeForce RTX 4080 12 GB, the significantly buffed, AD103-based RTX 4080 16 GB is now referred to simply as the RTX 4080. The picture reveals an asymmetric PCB shape that fits the Founders Edition dual-axial flow-through cooler design. The card pulls power from a 16-pin ATX 12VHPWR connector and appears to use roughly a 16-phase VRM. The PCB has many blank VRM phase pads, but just eight memory-chip pads to go with the 256-bit wide GDDR6X memory interface of the AD103 silicon.

The AD103 silicon features a rectangular die on a fiberglass substrate that looks about the same size as that of past-generation NVIDIA GPUs with 256-bit wide memory interfaces, such as the GA104. The AD103 is probably pin-compatible with the smaller AD104, at least as far as substrate size is concerned, so minimal PCB design effort was needed to cover both the 12 GB and 16 GB variants of the RTX 4080. The RTX 4080 12 GB is now gone, and the AD104 will instead power 70-class SKUs with fewer shaders than what would've been the RTX 4080 12 GB. The display output configuration remains the same as the RTX 4090: three DisplayPort 1.4a and one HDMI 2.1a. NVIDIA is expected to launch the GeForce RTX 4080 on November 16, priced at $1,199 (MSRP).
Sources: KittyYYuko (Twitter), VideoCardz

40 Comments on NVIDIA GeForce RTX 4080 Founders Edition PCB Pictured, Revealing AD103 Silicon

#26
rusTORK
$1200 and no secure frame around GPU die? WTH...
Posted on Reply
#27
Bwaze
"Finally, we have the Vulkan benchmarks where the NVIDIA GeForce RTX 4080 graphics card ends up being only 5.5% faster than the RTX 3090 Ti"

I'm pretty sure we'll see games where the RTX 4080 16 GB actually loses to the RTX 3090 Ti due to its much lower memory bandwidth!
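The bandwidth gap is easy to quantify. A minimal sketch, assuming the commonly published specs (256-bit bus at 22.4 Gbps per pin for the RTX 4080 16 GB, 384-bit at 21 Gbps for the RTX 3090 Ti):

```python
def bandwidth_gbps(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak memory bandwidth in GB/s: bus width in bytes times per-pin data rate."""
    return bus_width_bits / 8 * data_rate_gbps

rtx_4080 = bandwidth_gbps(256, 22.4)     # 716.8 GB/s
rtx_3090_ti = bandwidth_gbps(384, 21.0)  # 1008.0 GB/s
print(f"{rtx_4080 / rtx_3090_ti:.0%}")   # the 4080 has ~71% of the 3090 Ti's bandwidth
```

A roughly 29% deficit is large enough that bandwidth-bound workloads could plausibly close or invert the gap between the two cards.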
Posted on Reply
#28
Dirt Chip
There was a time, way way back, when you had 16-bit vs. 32-bit charts (it is a fun nostalgic read). Playing games in 16-bit was faster until a point when new tech came along and 32-bit was faster than 16-bit. That was the tipping point, and 16-bit died at that moment.

Some day in the future, at least 5 if not 10 years from now, a time will come when RT on will be faster than RT off. Then I will use it. Until then, leave it off. It isn't worth the perf hit.

Anyway, and to the point: the 4080 is a nice but pointless GPU outside of professional CUDA usage, like the 4090, at its current price.
For gaming, wait for AMD's offer.
Posted on Reply
#29
Keullo-e
S.T.A.R.S.
TheDeeGee: Wish that was possible; in the Netherlands that means a 1030 or something, I may as well ditch my PC then.

And no, I will never buy second hand.
I've been a PC hobbyist for almost 20 years and I may have bought about 5 cards new.. :laugh:
Posted on Reply
#30
Arkz
Crackong: Same size as the 4090 = same 12VHPWR placement problem as the 4090
It won't be a problem on this card though.
Posted on Reply
#31
Crackong
Arkz: It won't be a problem on this card though.
Fire hazard aside, it is still a problem for average-sized PC case users trying to close their side panels.
Posted on Reply
#33
Sisyphus
At least there is a chance the 4080 16 GB will be available at around $1,200.
Posted on Reply
#34
Vayra86
Dirt Chip: There was a time, way way back, when you had 16-bit vs. 32-bit charts (it is a fun nostalgic read). Playing games in 16-bit was faster until a point when new tech came along and 32-bit was faster than 16-bit. That was the tipping point, and 16-bit died at that moment.

Some day in the future, at least 5 if not 10 years from now, a time will come when RT on will be faster than RT off. Then I will use it. Until then, leave it off. It isn't worth the perf hit.

Anyway, and to the point: the 4080 is a nice but pointless GPU outside of professional CUDA usage, like the 4090, at its current price.
For gaming, wait for AMD's offer.
Minor exception: in that 16-to-32-bit age we still had many node shrinks ahead of us.

But today? Already the whole thing is escalating just to meet gen-to-gen perf increases... 5-10 years had better bring a game changer in that sense, or RT is dead or of too little relevance. Besides, it's not 'raster OR RT', it's 'and'. Another thing the 16-to-32-bit transition was not. So devices will still need raster perf...
Posted on Reply
#35
dir_d
Hxx: All these 4080s will be OOS in 5 minutes on launch day. Plenty of gamers with more money than sense out there.
Don't get me wrong, I do believe these cards will sell out in record time, but not bought by consumers. I believe they will be bought up by scalpers right away and the cards' prices will rise. This will make it seem like this card has sold out and is a hit for NVIDIA, but in actuality it's scalpers trying to make a profit. To be fair, NVIDIA does not care whether scalpers or gamers are buying these cards; they just want them sold out, like the 4090. You can buy plenty of 4090s now, they are just all scalped.
Posted on Reply
#36
tvshacker
TheDeeGee: Wish that was possible; in the Netherlands that means a 1030 or something, I may as well ditch my PC then.

And no, I will never buy second hand.
Bwaze: And are now probably wondering if they should buy a two-year-old Ampere card at launch MSRP
Here in Portugal there are a lot of 1660 Supers at about €300. Even though they're better than a 1030, it's insane to think one might still cost more than the launch MSRP of a 3-year-old, low-midrange card...

Oh, and BTW, the ones marked "Melhor Preço" ("best price") are the lowest prices recorded (for a specific SKU) on this price-comparison website, which I can tell you I've been using to compare this kind of stuff for well over 3 years...
Posted on Reply
#37
Minus Infinity
Bwaze: There are tons of people who gave the RTX 20 (Turing) series a miss since it didn't bring any price/performance increase over GTX 10 (Pascal), and who then couldn't get an RTX 30 (Ampere) card because of the crypto idiocy.

And are now probably wondering if they should buy a two-year-old Ampere card at launch MSRP, or wait for a full RTX 40 (Ada) release and pay even more money for the same performance...
My 2080 Super kicks the crap out of my 1070; it was a plenty big enough upgrade for me for 1440p gaming. I got it for less than half the price 3070s were going for at the time. Now I will upgrade my 1080 Ti to a 7900 XT(X) and give Lovelace a wide berth.
Posted on Reply
#38
The Von Matrices
People need to keep in mind just how absurdly expensive a 4nm wafer is.

If AD103 is 372mm^2 like the TPU database says, there are ~140 dies per 300 mm wafer (using an online wafer layout calculator). At $18,000 per 5nm/4nm wafer, that's about $130 just for the silicon (even assuming no defective dies). Then you have to include the cost of assembling the chip package, the cost of the memory chips and all the other stuff on the PCB, and the cost of mounting everything to the PCB.

In the end there's no way that a card based on an AD103 chip costs less than $250 just to manufacture, let alone pay for R&D and marketing. $1,200 may be a bit much to ask for a 4080, but unless TSMC drops its wafer prices, these simple calculations make me conclude that AD103-based cards can't be priced below $650 while still selling at a profit. AMD pricing their new GPU at $1,000 is about right to keep the same profit margins as in the past.

Perhaps this demonstrates that there needs to be a new paradigm of rebranding previous generations of high-end GPUs to sell into the mid range and low end. Making mid-range and low-end GPUs on the latest process node doesn't seem to make financial sense anymore.
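The back-of-the-envelope math above can be sanity-checked with the standard dies-per-wafer approximation. A minimal sketch, taking the $18,000 wafer price and 372 mm² die area from the comment; note the formula ignores yield, scribe lines, and edge exclusion, so it comes out a bit more optimistic than the ~140 figure from the layout calculator:

```python
import math

def dies_per_wafer(die_area_mm2: float, wafer_diameter_mm: float = 300.0) -> int:
    """Approximate whole dies per wafer: wafer area over die area, minus an
    edge-loss term for partial dies along the wafer circumference."""
    radius = wafer_diameter_mm / 2
    return int(math.pi * radius**2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

dies = dies_per_wafer(372)        # AD103 die area per the comment
cost_per_die = 18_000 / dies      # assumed price per 5nm/4nm wafer
print(dies, round(cost_per_die))  # ~155 dies, ~$116 before any yield loss
```

Real layouts lose more dies to defects and edge effects, which pushes the count down toward the ~140 cited and the per-die silicon cost up toward $130.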
Posted on Reply
#39
Dirt Chip
The Von Matrices: People need to keep in mind just how absurdly expensive a 4nm wafer is.

If AD103 is 372mm^2 like the TPU database says, there are ~140 dies per 300 mm wafer (using an online wafer layout calculator). At $18,000 per 5nm/4nm wafer, that's about $130 just for the silicon (even assuming no defective dies). Then you have to include the cost of assembling the chip package, the cost of the memory chips and all the other stuff on the PCB, and the cost of mounting everything to the PCB.

In the end there's no way that a card based on an AD103 chip costs less than $250 just to manufacture, let alone pay for R&D and marketing. $1,200 may be a bit much to ask for a 4080, but unless TSMC drops its wafer prices, these simple calculations make me conclude that AD103-based cards can't be priced below $650 while still selling at a profit. AMD pricing their new GPU at $1,000 is about right to keep the same profit margins as in the past.

Perhaps this demonstrates that there needs to be a new paradigm of rebranding previous generations of high-end GPUs to sell into the mid range and low end. Making mid-range and low-end GPUs on the latest process node doesn't seem to make financial sense anymore.
Indeed.
I think we will see a change in the near future: the same architecture on different process nodes for different performance tiers.
It would be good if only the top tier used the latest process to achieve maximum absolute perf. The people who buy those will 'gladly' pay the extra to be on the bleeding edge of tech, and will also pay for the extra work of designing for two process nodes within the same architecture.
The mid and low tiers would use an older, more mature, higher-yield process.
No need for the xx30/xx50/xx60 to use 4 nm if cost-to-perf is what you're after and new wafer costs are skyrocketing.
7/6 nm is very much fine with me right now for any mid-level GPU, as long as it comes with enough memory.
Architecture improvements, process refinement and new software tech (DLSS, FSR, XeSS, etc.) will take care of the perf improvement.
Basically a one-process lag for the mid and low tiers, so when the NV 5xxx series is out on a better 2/3 nm node for the 5080/5090, we will have the 5030/5050/5060 on a 4/5 nm node.

And with that segmentation, we will be one step closer to a 'GAMERS master race' who pay big, and all the others who make the economic decision and don't care about races.
Posted on Reply
#40
toooooot
Dirt Chip: Indeed.
I think we will see a change in the near future: the same architecture on different process nodes for different performance tiers.
It would be good if only the top tier used the latest process to achieve maximum absolute perf. The people who buy those will 'gladly' pay the extra to be on the bleeding edge of tech, and will also pay for the extra work of designing for two process nodes within the same architecture.
The mid and low tiers would use an older, more mature, higher-yield process.
No need for the xx30/xx50/xx60 to use 4 nm if cost-to-perf is what you're after and new wafer costs are skyrocketing.
7/6 nm is very much fine with me right now for any mid-level GPU, as long as it comes with enough memory.
Architecture improvements, process refinement and new software tech (DLSS, FSR, XeSS, etc.) will take care of the perf improvement.
Basically a one-process lag for the mid and low tiers, so when the NV 5xxx series is out on a better 2/3 nm node for the 5080/5090, we will have the 5030/5050/5060 on a 4/5 nm node.

And with that segmentation, we will be one step closer to a 'GAMERS master race' who pay big, and all the others who make the economic decision and don't care about races.
I wholeheartedly support this. With a mid-range CPU costing $300, a mid-range GPU shouldn't be $700-800.
Posted on Reply