
NVIDIA AD103 and AD104 Chips Powering RTX 4080 Series Detailed

ARF

Joined
Jan 28, 2020
Messages
4,060 (2.57/day)
Location
Ex-usa
Man, people don't realize that TSMC is charging way more dough than Samsung: Samsung 8N costs ~$4k per wafer while TSMC 5nm costs ~$17k, according to 2020 data.

The AD104 chip could cost 2x as much as the GA102 chip used on the 3090 Ti/3090/3080 Ti/3080.

That means Ampere will have the cost benefit for now, while Ada will have the performance and efficiency benefit.

This means that NVIDIA should have carefully analysed the situation and stopped the move to TSMC N4 entirely.
Instead, they could have made more significant IPC/architectural improvements and backported Ada Lovelace to Samsung N8, so that the lineup would look something like this:

chip | die size | pricing (USD)
GAL104-2 | 750 mm^2 | 1200
GAL104-2 | 750 mm^2 | 1000
GAL104-2 | 750 mm^2 | 800
GAL106-2 | 520 mm^2 | 660
GAL106-2 | 520 mm^2 | 500
GAL107-2 | 400 mm^2 | 400
 
Joined
Aug 21, 2013
Messages
1,709 (0.43/day)
It's paid, it's proprietary and it's worse in every way. Why pay for it? The industry should completely abolish HDMI.
You're preaching to the choir, my friend. Unfortunately, if there's one thing we can be sure of in our lifetime, it's taxes, death and a million different standards to do the same thing.
 
Joined
Nov 11, 2016
Messages
3,141 (1.14/day)
System Name The de-ploughminator Mk-II
Processor i7 13700KF
Motherboard MSI Z790 Carbon
Cooling ID-Cooling SE-226-XT + Phanteks T30
Memory 2x16GB G.Skill DDR5 7200Cas34
Video Card(s) Asus RTX4090 TUF
Storage Kingston KC3000 2TB NVME
Display(s) LG OLED CX48"
Case Corsair 5000D Air
Audio Device(s) KEF LSX II LT speakers + KEF KC62 Subwoofer
Power Supply Corsair HX850
Mouse Razor Viper Ultimate
Keyboard Corsair K75
Software win11
This means that NVIDIA should have carefully analysed the situation and stopped the move to TSMC N4 entirely.
Instead, they could have made more significant IPC/architectural improvements and backported Ada Lovelace to Samsung N8, so that the lineup would look something like this:

chip | die size | pricing (USD)
GAL104-2 | 750 mm^2 | 1200
GAL104-2 | 750 mm^2 | 1000
GAL104-2 | 750 mm^2 | 800
GAL106-2 | 520 mm^2 | 660
GAL106-2 | 520 mm^2 | 500
GAL107-2 | 400 mm^2 | 400

Not sure how people would react to an Ampere 4080 12GB using 500 W, only to save $100-150 compared to a TSMC 4080 12GB @ 285 W.
 

ARF

Joined
Jan 28, 2020
Messages
4,060 (2.57/day)
Location
Ex-usa
Not sure how people would react to an Ampere 4080 12GB using 500 W, only to save $100-150 compared to a TSMC 4080 12GB @ 285 W.

It doesn't work like that. First of all, 12 GB is not enough and will not be enough for a high-end SKU from 2023 onwards. And second, why do you presume that power consumption would increase that drastically? IPC/architectural improvements are made exactly to counter power increases.

Also, look at how AMD manages to solve the problem of high wafer costs. First, it's called a multi-chip module, and second, AMD uses the less expensive N5:


AMD's main Navi 31 GPU reportedly has 96MB of Infinity Cache, Navi 32 is up to 7680 cores - VideoCardz.com
 
Joined
Nov 11, 2016
Messages
3,141 (1.14/day)
System Name The de-ploughminator Mk-II
Processor i7 13700KF
Motherboard MSI Z790 Carbon
Cooling ID-Cooling SE-226-XT + Phanteks T30
Memory 2x16GB G.Skill DDR5 7200Cas34
Video Card(s) Asus RTX4090 TUF
Storage Kingston KC3000 2TB NVME
Display(s) LG OLED CX48"
Case Corsair 5000D Air
Audio Device(s) KEF LSX II LT speakers + KEF KC62 Subwoofer
Power Supply Corsair HX850
Mouse Razor Viper Ultimate
Keyboard Corsair K75
Software win11
It doesn't work like that. First of all, 12 GB is not enough and will not be enough for a high-end SKU from 2023 onwards. And second, why do you presume that power consumption would increase that drastically? IPC/architectural improvements are made exactly to counter power increases.

Also, look at how AMD manages to solve the problem of high wafer costs. First, it's called a multi-chip module, and second, AMD uses the less expensive N5:

AMD's main Navi 31 GPU reportedly has 96MB of Infinity Cache, Navi 32 is up to 7680 cores - VideoCardz.com

LOL, Navi 31 still uses 533 mm^2 of N5+N6 silicon; let's see where the performance ends up vs. Ada before making any judgement here.
 
Joined
Nov 11, 2016
Messages
3,141 (1.14/day)
System Name The de-ploughminator Mk-II
Processor i7 13700KF
Motherboard MSI Z790 Carbon
Cooling ID-Cooling SE-226-XT + Phanteks T30
Memory 2x16GB G.Skill DDR5 7200Cas34
Video Card(s) Asus RTX4090 TUF
Storage Kingston KC3000 2TB NVME
Display(s) LG OLED CX48"
Case Corsair 5000D Air
Audio Device(s) KEF LSX II LT speakers + KEF KC62 Subwoofer
Power Supply Corsair HX850
Mouse Razor Viper Ultimate
Keyboard Corsair K75
Software win11
Wrong. It is split into two:
GCD 308mm^2 on N5 + 6 x MCD 37.5mm^2 on N6.

+ substrate cost to interconnect all the things
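
For what it's worth, here's a quick back-of-the-envelope sketch (Python) of that split. The die areas are the ones quoted above, while the per-wafer prices are purely illustrative assumptions, and yield, edge loss, packaging and the substrate are all ignored:

```python
import math

# Navi 31 silicon split as quoted above; the wafer prices below are assumptions, not official figures.
GCD_AREA = 308.0          # graphics compute die, mm^2, on N5
MCD_AREA = 37.5           # each memory cache die, mm^2, on N6
MCD_COUNT = 6

total_silicon = GCD_AREA + MCD_COUNT * MCD_AREA
print(f"Total silicon: {total_silicon:.0f} mm^2")   # 533 mm^2, matching the figure above

# Hypothetical wafer prices (USD) just to illustrate why moving the cache dies to N6 helps.
N5_WAFER, N6_WAFER = 17_000, 10_000
WAFER_AREA = math.pi * (300 / 2) ** 2               # 300 mm wafer, ignoring edge loss

naive_cost = (GCD_AREA / WAFER_AREA) * N5_WAFER + (MCD_COUNT * MCD_AREA / WAFER_AREA) * N6_WAFER
print(f"Naive silicon cost: ~${naive_cost:.0f} (excludes yield, packaging and substrate)")
```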
 
Joined
Jan 14, 2019
Messages
10,080 (5.15/day)
Location
Midlands, UK
System Name Holiday Season Budget Computer (HSBC)
Processor AMD Ryzen 7 7700X
Motherboard MSi PRO B650M-A WiFi
Cooling be quiet! Dark Rock 4
Memory 2x 16 GB Corsair Vengeance EXPO DDR5-6000
Video Card(s) Sapphire Pulse Radeon RX 6500 XT 4 GB
Storage 2 TB Corsair MP600 GS, 2 TB Corsair MP600 R2, 4 + 8 TB Seagate Barracuda 3.5"
Display(s) Dell S3422DWG, 7" Waveshare touchscreen
Case Kolink Citadel Mesh black
Audio Device(s) Logitech Z333 2.1 speakers, AKG Y50 headphones
Power Supply Seasonic Prime GX-750
Mouse Logitech MX Master 2S
Keyboard Logitech G413 SE
Software Windows 10 Pro
A card based on a GPU smaller than 300 mm^2, sold for nearly a grand. Now that's what I call an insult!
 
Joined
Aug 25, 2021
Messages
1,061 (1.06/day)
The A380 uses the ACM-G11 chip. The rest will use ACM-G10. Specs aren't fully announced yet.

It's paid, it's proprietary and it's worse in every way. Why pay for it? The industry should completely abolish HDMI.
Oh, please... More than a billion HDMI devices ship every year around the globe. The consumer electronics industry seems happy with it, including big companies like LG, Samsung, Sony and Yamaha, the semiconductor makers, and others who run it in the background.

Plus, no consumer group has ever organised to mount a legal challenge against the HDMI Administrator for changing the naming scheme without educating consumers properly and without mandating vendors to be more transparent about features and signalling in advertisements.

Perhaps you could be the first to organise a consumer group and a legal team?

+ substrate cost to interconnect all the things
Exactly. All of those things make it cheaper and ensure better margins for AIBs, so there is more room for price tuning, unlike with Ada.
 
Joined
Oct 27, 2020
Messages
789 (0.60/day)
The wafer cost increase alone is nearly inconsequential when we are talking about a $1,600 SRP.
If the Samsung 8LPP wafer cost was $4,000, as some posters said, that means $50 or less per die on a 12" (300 mm) wafer. Even if TSMC 4N has 3x the wafer cost, we are talking $150 or less.
In the AD103 and AD104 cases, wafer cost alone is even less of a deciding factor relative to GA104/GA106 once you factor in the insane price increase.
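
A minimal sketch of that arithmetic, using the classic dies-per-wafer approximation. The die areas are the public figures, the wafer prices are the rumoured numbers from this thread, and yield is ignored, so treat the outputs as order-of-magnitude only:

```python
import math

def dies_per_wafer(die_area_mm2: float, wafer_diameter_mm: float = 300.0) -> float:
    """Standard approximation: wafer area / die area, minus an edge-loss correction."""
    radius = wafer_diameter_mm / 2
    return (math.pi * radius ** 2) / die_area_mm2 \
         - (math.pi * wafer_diameter_mm) / math.sqrt(2 * die_area_mm2)

def silicon_cost(wafer_price_usd: float, die_area_mm2: float) -> float:
    """Raw silicon cost per die, ignoring defect yield."""
    return wafer_price_usd / dies_per_wafer(die_area_mm2)

# Die areas are public; the wafer prices are the rumoured figures quoted in this thread.
print(f"GA104 (~392 mm^2) on 8LPP @ $4,000/wafer : ~${silicon_cost(4_000, 392):.0f} per die")
print(f"AD104 (~295 mm^2) on 4N   @ $17,000/wafer: ~${silicon_cost(17_000, 295):.0f} per die")
```

Even with those assumptions the delta is a few tens of dollars per die, which is the point: wafer cost alone cannot explain the retail price jump.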
 
Joined
Jun 18, 2021
Messages
2,311 (2.16/day)

So if I read this correctly, it affects the 3 series in that AIBs need to include an extra conversion chip. The 5 and 7 series, per Ryan Shrout, natively include this and thus should be 2.1 compliant. It just seems like an odd omission, even on budget cards; HDMI 2.1 is not exactly new, and AMD's 6400 card even supports it.

It was announced and explained from the start, and it also applies to the 5 and 7 series. Intel's cards ("Limited Edition" or whatever) will have it; for others it remains to be seen, but the prospect of there being many cards other than Intel's (or ones that use its reference PCB) is doubtful anyway :D

Thank you for this. I really appreciate it.
Wow! I cannot believe that Intel waited this long to tell us that only the A750 and A770 Limited Edition cards have a PCON converter chip on the PCB for the HDMI 2.1 FRL signal (unclear if 24 Gbps, 32 Gbps, 40 Gbps or 48 Gbps), and all other cards are HDMI 2.0 at 18 Gbps.

This is what happens when the HDMI Administrator publishes their decision to rebrand 2.0 as 2.1. And companies sell us a total mess...

It doesn't really have anything to do with the HDMI Forum shenanigans; they just fucked up, I guess. The cards have been a LONG time coming, and when that simpler part was done, maybe early on, they thought they could get away with it. If they had launched within 2020 it would have been true; HDMI 2.1 only really became a thing after the new console launch.

It's paid, it's proprietary and it's worse in every way. Why pay for it? The industry should completely abolish HDMI.

It's not like DisplayPort is free; they just have different licensing schemes. HDMI was able to corner the market, and even though DisplayPort is superior in most instances, you'll have a tough time convincing everyone to switch over (currently it also allows manufacturers to deprecate products because of HDCP, which is an advantage for them I guess; that's an even bigger crime, IMO).
 
Joined
Aug 25, 2021
Messages
1,061 (1.06/day)
It doesn't really have anything to do with the HDMI Forum shenanigans
It does. Ever since the first scandal occurred in October 2020 (reported by TPU), when new AV receivers could not pass HDMI 2.1 signals through from the new Xbox Series X to 4K/120 TVs, the companies standing behind HDMI told their administrator to nullify the 2.0 spec, and all devices were suddenly to be called 2.1, regardless of the bandwidth provided on the port.

That's the moment any vendor could publish on their website that their old 2.0 port is a 2.1 port. This created an explosion of deceptive marketing, from motherboards to CPUs and GPUs. Vendors do not seem obliged to specify HDMI 2.1 TMDS (2.0 at 18 Gbps) versus HDMI 2.1 FRL (real 2.1, up to 48 Gbps). Some of course do, and they are honest.

TFT Central wrote an extensive article about it in December 2021. Have a look when you get a spare moment.
 
Joined
Jun 18, 2021
Messages
2,311 (2.16/day)
It does. Ever since the first scandal occurred in October 2020 (reported by TPU), when new AV receivers could not pass HDMI 2.1 signals through from the new Xbox Series X to 4K/120 TVs, the companies standing behind HDMI told their administrator to nullify the 2.0 spec, and all devices were suddenly to be called 2.1, regardless of the bandwidth provided on the port.

That's the moment any vendor could publish on their website that their old 2.0 port is a 2.1 port. This created an explosion of deceptive marketing, from motherboards to CPUs and GPUs. Vendors do not seem obliged to specify HDMI 2.1 TMDS (2.0 at 18 Gbps) versus HDMI 2.1 FRL (real 2.1, up to 48 Gbps). Some of course do, and they are honest.

TFT Central wrote an extensive article about it in December 2021. Have a look when you get a spare moment.

But that's not what's happening with Intel: they clearly state in their spec that the cards run natively with HDMI 2.0 and can support 2.1 through protocol conversion. I don't know what the bandwidth or feature limitations will look like, but they didn't hide that their solution is not "native", or at least not as native as usual, I guess (I don't know how it's usually done anyway; I think the display protocol driver is not part of the GPU die, or we wouldn't have GPUs with different numbers of DP/HDMI ports, namely on workstation cards, but whatever).
 
Joined
Aug 25, 2021
Messages
1,061 (1.06/day)
But that's not what's happening with Intel: they clearly state in their spec that the cards run natively with HDMI 2.0 and can support 2.1 through protocol conversion. I don't know what the bandwidth or feature limitations will look like, but they didn't hide that their solution is not "native", or at least not as native as usual, I guess (I don't know how it's usually done anyway; I think the display protocol driver is not part of the GPU die, or we wouldn't have GPUs with different numbers of DP/HDMI ports, namely on workstation cards, but whatever).
I know there was this video where Intel's rep finally clarified the situation surrounding HDMI. But this was only after questions were explicitly asked; they would not tell us on their own. You have to beg them to publish and challenge them to provide basic connectivity information.

On Intel's official website, the situation is still confusing, and they should be called out for this by the tech community. Here is a screenshot of the official spec.

[Screenshot: Intel product specifications page, captured 2022-09-24]

As you can see, both HDMI 2.0b and HDMI 2.1 have a star "*" attached to them, but the small print for that star is nowhere to be found at the bottom of the page. Intel, WTF?! DP has two stars "**" and there is a clear explanation. This is what always makes me suspicious that something is being hidden from the public.

They should simply say the following, for the sake of simplicity and transparency:
*HDMI 2.0b TMDS, 18 Gbps - A380 and A580 cards
*HDMI 2.1 FRL, 40/48 Gbps - installed on the A750 and A770 Limited Edition founders cards only; AIBs have the option to add FRL support via PCON

There are two ways to get an FRL signal out of a GPU:
1. install a native HDMI 2.1 chip on the PCB (Nvidia and AMD GPUs)
2. install a PCON chip on the PCB that is fed a DP 1.4 signal and converts it into the HDMI FRL protocol (A750 and A770)
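
As a rough sanity check on why those bandwidth tiers matter, here's a back-of-the-envelope calculation of raw pixel data rates. It ignores blanking intervals, FEC and audio overhead, so real link requirements are somewhat higher:

```python
def raw_pixel_rate_gbps(h: int, v: int, refresh_hz: int, bits_per_pixel: int) -> float:
    """Raw RGB/4:4:4 pixel data rate, ignoring blanking and link-layer overhead."""
    return h * v * refresh_hz * bits_per_pixel / 1e9

print(f"4K/120, 8-bit : {raw_pixel_rate_gbps(3840, 2160, 120, 24):.1f} Gbps")   # ~23.9
print(f"4K/120, 10-bit: {raw_pixel_rate_gbps(3840, 2160, 120, 30):.1f} Gbps")   # ~29.9

# An 18 Gbps TMDS link (HDMI 2.0) carries roughly 14.4 Gbps of payload after 8b/10b coding,
# so 4K/120 needs FRL (40/48 Gbps) or Display Stream Compression; hence the PCON question matters.
```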
 
Joined
Jul 15, 2020
Messages
982 (0.70/day)
System Name Dirt Sheep | Silent Sheep
Processor i5-2400 | 13900K (-0.025mV offset)
Motherboard Asus P8H67-M LE | Gigabyte AERO Z690-G, bios F26 with "Instant 6 GHz" on
Cooling Scythe Katana Type 1 | Noctua NH-U12A chromax.black
Memory G-skill 2*8GB DDR3 | Corsair Vengeance 4*32GB DDR5 5200Mhz C40 @4000MHz
Video Card(s) Gigabyte 970GTX Mini | NV 1080TI FE (cap at 85%, 800mV)
Storage 2*SN850 1TB, 230S 4TB, 840EVO 128GB, WD green 2TB HDD, IronWolf 6TB, 2*HC550 18TB in RAID1
Display(s) LG 21` FHD W2261VP | Lenovo 27` 4K Qreator 27
Case Thermaltake V3 Black|Define 7 Solid, stock 3*14 fans+ 2*12 front&buttom+ out 1*8 (on expansion slot)
Audio Device(s) Beyerdynamic DT 990 (or the screen speakers when I'm too lazy)
Power Supply Enermax Pro82+ 525W | Corsair RM650x (2021)
Mouse Logitech Master 3
Keyboard Roccat Isku FX
VR HMD Nop.
Software WIN 10 | WIN 11
Benchmark Scores CB23 SC: i5-2400=641 | i9-13900k=2325-2281 MC: i5-2400=i9 13900k SC | i9-13900k=37240-35500
As I pointed out in my last comment, it lets you know how much value you are getting relative to the entire stack or the market at large.

If there is a large difference between SKUs die-size-wise (and by extension manufacturing cost), that would indicate to the customer that Nvidia is likely to release products to fill that gap, or AMD will do it for them. In addition, comparing the die sizes of the 3080 and 4080 shows that you are getting less than half the die area. Even accounting for inflation and the cost increases of smaller nodes, it does not come close to making a die less than 300 mm^2 in size worth $900, especially when you compare it to last-gen products.

I think just about any customer would be mad to know they are getting much less relative to the 4090 compared to the prior GPU generation, while also being charged $200 more.

Your criteria for what product to buy are simply far too naive. You advise customers to just blindly buy without considering factors that could net them massive savings or a better end product.
Unless you have an unlimited budget, why is value (other than performance/$) a factor at all?

Why do you care how the product name stacks up against the previous gen as long as it fits your needs? How does die size affect the usage of the product?

I can understand your point if you need to decide whether to buy now or wait for a future product in 6-12 months.
As I pointed out in my last comment, it lets you know how much value you are getting relative to the entire stack or the market at large.

If there is a large difference between SKUs die-size-wise (and by extension manufacturing cost), that would indicate to the customer that Nvidia is likely to release products to fill that gap, or AMD will do it for them. In addition, comparing the die sizes of the 3080 and 4080 shows that you are getting less than half the die area. Even accounting for inflation and the cost increases of smaller nodes, it does not come close to making a die less than 300 mm^2 in size worth $900, especially when you compare it to last-gen products.

I think just about any customer would be mad to know they are getting much less relative to the 4090 compared to the prior GPU generation, while also being charged $200 more.

Your criteria for what product to buy are simply far too naive. You advise customers to just blindly buy without considering factors that could net them massive savings or a better end product.
If you buy a GPU by die size then, well, good luck.
I advise buying according to pragmatic factors: price to performance within a given budget, factoring in specific needs (say, you must have CUDA cores or need FreeSync).

You factor in your feelings about the market status and the company's general pricing. That might get you a bad deal; involving your feelings in it can do that.
 
Joined
Jun 21, 2021
Messages
2,903 (2.72/day)
System Name daily driver Mac mini M2 Pro
Processor Apple proprietary M2 Pro (6 p-cores, 4 e-cores)
Motherboard Apple proprietary
Cooling Apple proprietary
Memory Apple proprietary 16GB LPDDR5 unified memory
Video Card(s) Apple proprietary M2 Pro (16-core GPU)
Storage Apple proprietary onboard 512GB SSD + various external HDDs
Display(s) LG 27UL850W (4K@60Hz IPS)
Case Apple proprietary
Audio Device(s) Apple proprietary
Power Supply Apple proprietary
Mouse Apple Magic Trackpad 2
Keyboard Keychron K1 tenkeyless (Gateron Reds)
Software macOS Ventura 13.6 (with latest patches)
Benchmark Scores (My Windows daily driver is a Beelink Mini S12 Pro. I'm not interested in benchmarking.)
How does die size affect the usage of the product?

I can understand your point if you need to decide whether to buy now or wait for a future product in 6-12 months.

If you buy a GPU by die size then, well, good luck.
I advise buying according to pragmatic factors: price to performance within a given budget, factoring in specific needs (say, you must have CUDA cores or need FreeSync).

Choosing a graphics card based directly on die size might be a bit silly, but die size certainly affects cost and heat. Remember that there are some who look at metrics like performance-per-watt and performance-per-dollar, especially datacenter customers.

For sure, bigger dies will generate more heat, which requires bigger and more expensive thermal solutions. Even if you're not taking the die's physical dimensions directly into account, you are most likely looking at them indirectly when you check power specifications.
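
For anyone who wants to compare cards that way, here's a trivial sketch of both metrics; the names, scores, wattages and prices below are made-up placeholders, not real benchmark data:

```python
# Hypothetical cards; replace with real review scores, board power and street prices.
cards = {
    "Card A": {"score": 100, "watts": 285, "price": 900},
    "Card B": {"score": 80,  "watts": 320, "price": 650},
}

for name, c in cards.items():
    perf_per_watt = c["score"] / c["watts"]
    perf_per_dollar = c["score"] / c["price"]
    print(f"{name}: {perf_per_watt:.3f} perf/W, {perf_per_dollar * 100:.1f} perf per $100")
```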
 
Joined
Jul 15, 2020
Messages
982 (0.70/day)
System Name Dirt Sheep | Silent Sheep
Processor i5-2400 | 13900K (-0.025mV offset)
Motherboard Asus P8H67-M LE | Gigabyte AERO Z690-G, bios F26 with "Instant 6 GHz" on
Cooling Scythe Katana Type 1 | Noctua NH-U12A chromax.black
Memory G-skill 2*8GB DDR3 | Corsair Vengeance 4*32GB DDR5 5200Mhz C40 @4000MHz
Video Card(s) Gigabyte 970GTX Mini | NV 1080TI FE (cap at 85%, 800mV)
Storage 2*SN850 1TB, 230S 4TB, 840EVO 128GB, WD green 2TB HDD, IronWolf 6TB, 2*HC550 18TB in RAID1
Display(s) LG 21` FHD W2261VP | Lenovo 27` 4K Qreator 27
Case Thermaltake V3 Black|Define 7 Solid, stock 3*14 fans+ 2*12 front&buttom+ out 1*8 (on expansion slot)
Audio Device(s) Beyerdynamic DT 990 (or the screen speakers when I'm too lazy)
Power Supply Enermax Pro82+ 525W | Corsair RM650x (2021)
Mouse Logitech Master 3
Keyboard Roccat Isku FX
VR HMD Nop.
Software WIN 10 | WIN 11
Benchmark Scores CB23 SC: i5-2400=641 | i9-13900k=2325-2281 MC: i5-2400=i9 13900k SC | i9-13900k=37240-35500
Choosing a graphics card based directly on die size might be a bit silly, but die size certainly affects cost and heat. Remember that there are some who look at metrics like performance-per-watt and performance-per-dollar, especially datacenter customers.

For sure, bigger dies will generate more heat, which requires bigger and more expensive thermal solutions. Even if you're not taking the die's physical dimensions directly into account, you are most likely looking at them indirectly when you check power specifications.
So you shop by performance per watt or per dollar, not by die size.
The same way you don't shop by memory bus and, in most situations, not by memory size.
 
Joined
Jan 14, 2019
Messages
10,080 (5.15/day)
Location
Midlands, UK
System Name Holiday Season Budget Computer (HSBC)
Processor AMD Ryzen 7 7700X
Motherboard MSi PRO B650M-A WiFi
Cooling be quiet! Dark Rock 4
Memory 2x 16 GB Corsair Vengeance EXPO DDR5-6000
Video Card(s) Sapphire Pulse Radeon RX 6500 XT 4 GB
Storage 2 TB Corsair MP600 GS, 2 TB Corsair MP600 R2, 4 + 8 TB Seagate Barracuda 3.5"
Display(s) Dell S3422DWG, 7" Waveshare touchscreen
Case Kolink Citadel Mesh black
Audio Device(s) Logitech Z333 2.1 speakers, AKG Y50 headphones
Power Supply Seasonic Prime GX-750
Mouse Logitech MX Master 2S
Keyboard Logitech G413 SE
Software Windows 10 Pro
Unless you have an unlimited budget, why is value (other than performance/$) a factor at all?

Why do you care how the product name stacks up against the previous gen as long as it fits your needs? How does die size affect the usage of the product?

I can understand your point if you need to decide whether to buy now or wait for a future product in 6-12 months.

If you buy a GPU by die size then, well, good luck.
I advise buying according to pragmatic factors: price to performance within a given budget, factoring in specific needs (say, you must have CUDA cores or need FreeSync).

You factor in your feelings about the market status and the company's general pricing. That might get you a bad deal; involving your feelings in it can do that.
Die size does little to affect usage (heat dissipation, maybe?), but it does affect manufacturing costs. The 2070 has a 1.5x larger GPU die, a 256-bit memory bus, and had a starting MSRP of $499. In comparison, the 4080 12 GB has a 192-bit bus, a smaller GPU die, and nearly double the starting MSRP. The only thing it has more of is memory chips (and maybe VRM components?). It's either TSMC increasing manufacturing costs significantly, or Nvidia trying to milk consumers as much as possible (or both).
 
Joined
Jul 15, 2020
Messages
982 (0.70/day)
System Name Dirt Sheep | Silent Sheep
Processor i5-2400 | 13900K (-0.025mV offset)
Motherboard Asus P8H67-M LE | Gigabyte AERO Z690-G, bios F26 with "Instant 6 GHz" on
Cooling Scythe Katana Type 1 | Noctua NH-U12A chromax.black
Memory G-skill 2*8GB DDR3 | Corsair Vengeance 4*32GB DDR5 5200Mhz C40 @4000MHz
Video Card(s) Gigabyte 970GTX Mini | NV 1080TI FE (cap at 85%, 800mV)
Storage 2*SN850 1TB, 230S 4TB, 840EVO 128GB, WD green 2TB HDD, IronWolf 6TB, 2*HC550 18TB in RAID1
Display(s) LG 21` FHD W2261VP | Lenovo 27` 4K Qreator 27
Case Thermaltake V3 Black|Define 7 Solid, stock 3*14 fans+ 2*12 front&buttom+ out 1*8 (on expansion slot)
Audio Device(s) Beyerdynamic DT 990 (or the screen speakers when I'm too lazy)
Power Supply Enermax Pro82+ 525W | Corsair RM650x (2021)
Mouse Logitech Master 3
Keyboard Roccat Isku FX
VR HMD Nop.
Software WIN 10 | WIN 11
Benchmark Scores CB23 SC: i5-2400=641 | i9-13900k=2325-2281 MC: i5-2400=i9 13900k SC | i9-13900k=37240-35500
Die size does little to affect usage (heat dissipation, maybe?), but it does affect manufacturing costs. The 2070 has a 1.5x larger GPU die, a 256-bit memory bus, and had a starting MSRP of $499. In comparison, the 4080 12 GB has a 192-bit bus, a smaller GPU die, and nearly double the starting MSRP. The only thing it has more of is memory chips (and maybe VRM components?). It's either TSMC increasing manufacturing costs significantly, or Nvidia trying to milk consumers as much as possible (or both).
Ok, but how is it relevant to your purchase?
 
Joined
Mar 10, 2010
Messages
11,878 (2.29/day)
Location
Manchester uk
System Name RyzenGtEvo/ Asus strix scar II
Processor Amd R5 5900X/ Intel 8750H
Motherboard Crosshair hero8 impact/Asus
Cooling 360EK extreme rad+ 360$EK slim all push, cpu ek suprim Gpu full cover all EK
Memory Corsair Vengeance Rgb pro 3600cas14 16Gb in four sticks./16Gb/16GB
Video Card(s) Powercolour RX7900XT Reference/Rtx 2060
Storage Silicon power 2TB nvme/8Tb external/1Tb samsung Evo nvme 2Tb sata ssd/1Tb nvme
Display(s) Samsung UAE28"850R 4k freesync.dell shiter
Case Lianli 011 dynamic/strix scar2
Audio Device(s) Xfi creative 7.1 on board ,Yamaha dts av setup, corsair void pro headset
Power Supply corsair 1200Hxi/Asus stock
Mouse Roccat Kova/ Logitech G wireless
Keyboard Roccat Aimo 120
VR HMD Oculus rift
Software Win 10 Pro
Benchmark Scores 8726 vega 3dmark timespy/ laptop Timespy 6506
Die size does little to affect usage (heat dissipation, maybe?), but it does affect manufacturing costs. The 2070 has a 1.5x larger GPU die, a 256-bit memory bus, and had a starting MSRP of $499. In comparison, the 4080 12 GB has a 192-bit bus, a smaller GPU die, and nearly double the starting MSRP. The only thing it has more of is memory chips (and maybe VRM components?). It's either TSMC increasing manufacturing costs significantly, or Nvidia trying to milk consumers as much as possible (or both).
That's known; nodes are effectively doubling the cost of manufacturing each time. Half nodes and enhanced nodes are less of a jump, but the die you mentioned could be expected to cost twice as much if not more, because it's Samsung vs. TSMC, and TSMC rules the market so it prices higher and has raised prices for everyone; the actual wafer has gone up in cost too.

This is why big chips have a limited future: yield rates don't go up with node swaps, only down.
 
Joined
Jun 21, 2021
Messages
2,903 (2.72/day)
System Name daily driver Mac mini M2 Pro
Processor Apple proprietary M2 Pro (6 p-cores, 4 e-cores)
Motherboard Apple proprietary
Cooling Apple proprietary
Memory Apple proprietary 16GB LPDDR5 unified memory
Video Card(s) Apple proprietary M2 Pro (16-core GPU)
Storage Apple proprietary onboard 512GB SSD + various external HDDs
Display(s) LG 27UL850W (4K@60Hz IPS)
Case Apple proprietary
Audio Device(s) Apple proprietary
Power Supply Apple proprietary
Mouse Apple Magic Trackpad 2
Keyboard Keychron K1 tenkeyless (Gateron Reds)
Software macOS Ventura 13.6 (with latest patches)
Benchmark Scores (My Windows daily driver is a Beelink Mini S12 Pro. I'm not interested in benchmarking.)
So you shop by performance per watt or per dollar, not by die size.
The same way you don't shop by memory bus and, in most situations, not by memory size.

It depends. I have more than one computer and I have different primary usage cases for each one.

For my primary gaming build, I went for something with high performance, performance-per-watt metric be damned. If it's not as efficient as another system, that's fine while I'm gaming.

And I don't run my gaming computer 24x7. A lot of my desktop computing (like answering your post) is done on a Mac mini 2018 -- not on the build described in my System Specs. If I'm just surfing the 'net (couch, bed, etc.) I often just pick up my iPad.
 
Joined
Jan 14, 2019
Messages
10,080 (5.15/day)
Location
Midlands, UK
System Name Holiday Season Budget Computer (HSBC)
Processor AMD Ryzen 7 7700X
Motherboard MSi PRO B650M-A WiFi
Cooling be quiet! Dark Rock 4
Memory 2x 16 GB Corsair Vengeance EXPO DDR5-6000
Video Card(s) Sapphire Pulse Radeon RX 6500 XT 4 GB
Storage 2 TB Corsair MP600 GS, 2 TB Corsair MP600 R2, 4 + 8 TB Seagate Barracuda 3.5"
Display(s) Dell S3422DWG, 7" Waveshare touchscreen
Case Kolink Citadel Mesh black
Audio Device(s) Logitech Z333 2.1 speakers, AKG Y50 headphones
Power Supply Seasonic Prime GX-750
Mouse Logitech MX Master 2S
Keyboard Logitech G413 SE
Software Windows 10 Pro
Ok, but how is it relevant to your purchase?
GPU classes that I used to be able to afford when I was at university with no job are out of my budget now even with a full-time job. That's all.

Besides, instead of blindly accepting what I see on price tags, I like at least trying to understand why things are the way they are. Is that bad?

That's known; nodes are effectively doubling the cost of manufacturing each time. Half nodes and enhanced nodes are less of a jump, but the die you mentioned could be expected to cost twice as much if not more, because it's Samsung vs. TSMC, and TSMC rules the market so it prices higher and has raised prices for everyone; the actual wafer has gone up in cost too.

This is why big chips have a limited future: yield rates don't go up with node swaps, only down.
But new nodes aren't new forever, so maybe it's time for the industry to sit back a bit and wait for these nodes to come down in costs? They surely can't double the price of every new generation of GPUs forever.
 
Joined
Jul 15, 2020
Messages
982 (0.70/day)
System Name Dirt Sheep | Silent Sheep
Processor i5-2400 | 13900K (-0.025mV offset)
Motherboard Asus P8H67-M LE | Gigabyte AERO Z690-G, bios F26 with "Instant 6 GHz" on
Cooling Scythe Katana Type 1 | Noctua NH-U12A chromax.black
Memory G-skill 2*8GB DDR3 | Corsair Vengeance 4*32GB DDR5 5200Mhz C40 @4000MHz
Video Card(s) Gigabyte 970GTX Mini | NV 1080TI FE (cap at 85%, 800mV)
Storage 2*SN850 1TB, 230S 4TB, 840EVO 128GB, WD green 2TB HDD, IronWolf 6TB, 2*HC550 18TB in RAID1
Display(s) LG 21` FHD W2261VP | Lenovo 27` 4K Qreator 27
Case Thermaltake V3 Black|Define 7 Solid, stock 3*14 fans+ 2*12 front&buttom+ out 1*8 (on expansion slot)
Audio Device(s) Beyerdynamic DT 990 (or the screen speakers when I'm too lazy)
Power Supply Enermax Pro82+ 525W | Corsair RM650x (2021)
Mouse Logitech Master 3
Keyboard Roccat Isku FX
VR HMD Nop.
Software WIN 10 | WIN 11
Benchmark Scores CB23 SC: i5-2400=641 | i9-13900k=2325-2281 MC: i5-2400=i9 13900k SC | i9-13900k=37240-35500
It depends. I have more than one computer and I have different primary usage cases for each one.

For my primary gaming build, I went for something with high performance, performance-per-watt metric be damned. If it's not as efficient as another system, that's fine while I'm gaming.

And I don't run my gaming computer 24x7. A lot of my desktop computing (like answering your post) is done on a Mac mini 2018 -- not on the build described in my System Specs. If I'm just surfing the 'net (couch, bed, etc.) I often just pick up my iPad.
Performance per watt within its segment, unless you have an unlimited budget.

GPU classes that I used to be able to afford when I was at university with no job are out of my budget now even with a full-time job. That's all.

Besides, instead of blindly accepting what I see on price tags, I like at least trying to understand why things are the way they are. Is that bad?


But new nodes aren't new forever, so maybe it's time for the industry to sit back a bit and wait for these nodes to come down in costs? They surely can't double the price of every new generation of GPUs forever.
It is not bad, just plain irrelevant to the purchase. I'm also very into the 'why', but when out shopping I put it aside, because it will only hinder me.
 
Joined
Jan 14, 2019
Messages
10,080 (5.15/day)
Location
Midlands, UK
System Name Holiday Season Budget Computer (HSBC)
Processor AMD Ryzen 7 7700X
Motherboard MSi PRO B650M-A WiFi
Cooling be quiet! Dark Rock 4
Memory 2x 16 GB Corsair Vengeance EXPO DDR5-6000
Video Card(s) Sapphire Pulse Radeon RX 6500 XT 4 GB
Storage 2 TB Corsair MP600 GS, 2 TB Corsair MP600 R2, 4 + 8 TB Seagate Barracuda 3.5"
Display(s) Dell S3422DWG, 7" Waveshare touchscreen
Case Kolink Citadel Mesh black
Audio Device(s) Logitech Z333 2.1 speakers, AKG Y50 headphones
Power Supply Seasonic Prime GX-750
Mouse Logitech MX Master 2S
Keyboard Logitech G413 SE
Software Windows 10 Pro
It is not bad, just plain irrelevant to the purchase. I'm also very into the 'why', but when out shopping I put it aside, because it will only hinder me.
It's not irrelevant.
Realising an item's real worth vs. what the price tag says is part of what makes you an informed consumer. Just because something costs $900 doesn't mean it's actually worth $900, even if you can afford it.
 