
NVIDIA "Blackwell" GeForce RTX to Feature Same 5nm-based TSMC 4N Foundry Node as GB100 AI GPU

dgianstefani

TPU Proofreader
Staff member
Joined
Dec 29, 2017
Messages
4,431 (1.90/day)
Location
Swansea, Wales
System Name Silent
Processor Ryzen 7800X3D @ 5.15ghz BCLK OC, TG AM5 High Performance Heatspreader
Motherboard ASUS ROG Strix X670E-I, chipset fans removed
Cooling Optimus AMD Raw Copper/Plexi, HWLABS Copper 240/40+240/30, D5, 4x Noctua A12x25, Mayhems Ultra Pure
Memory 32 GB Dominator Platinum 6150 MHz 26-36-36-48, 56.6ns AIDA, 2050 FLCK, 160 ns TRFC
Video Card(s) RTX 3080 Ti Founders Edition, Conductonaut Extreme, 18 W/mK MinusPad Extreme, Corsair XG7 Waterblock
Storage Intel Optane DC P1600X 118 GB, Samsung 990 Pro 2 TB
Display(s) 32" 240 Hz 1440p Samsung G7, 31.5" 165 Hz 1440p LG NanoIPS Ultragear
Case Sliger SM570 CNC Aluminium 13-Litre, 3D printed feet, custom front panel with pump/res combo
Audio Device(s) Audeze Maxwell Ultraviolet, Razer Nommo Pro
Power Supply SF750 Plat, transparent full custom cables, Sentinel Pro 1500 Online Double Conversion UPS w/Noctua
Mouse Razer Viper Pro V2 Mercury White w/Tiger Ice Skates & Pulsar Supergrip tape
Keyboard Wooting 60HE+ module, TOFU Redux Burgundy w/brass weight, Prismcaps White & Jellykey, lubed/modded
Software Windows 10 IoT Enterprise LTSC 19053.3803
Benchmark Scores Legendary
Nvidia can't come close to a 40% increase on the same node, & has never achieved this.
Second, it's not happening. Not if they're planning to increase the front end's L1 cache.
The increase in L1 cache suggests their shaders or other parts are just sitting idle & still going mostly unused.
The best Nvidia can do right now is a 20% increase in performance over the 4090 at the same power.
The ray tracing side hasn't increased more than ~6% per clock from generation to generation. The only time ray tracing has seen a major increase on Nvidia hardware is when rasterization was massively increased above it.
I guess we'll see, won't we.
 
Joined
Apr 14, 2022
Messages
670 (0.88/day)
Location
London, UK
Processor AMD Ryzen 7 5800X3D
Motherboard ASUS B550M-Plus WiFi II
Cooling Noctua U12A chromax.black
Memory Corsair Vengeance 32GB 3600Mhz
Video Card(s) Palit RTX 4080 GameRock OC
Storage Samsung 970 Evo Plus 1TB + 980 Pro 2TB
Display(s) Asus XG35VQ
Case Asus Prime AP201
Audio Device(s) Creative Gigaworks - Razer Blackshark V2 Pro
Power Supply Corsair SF750
Mouse Razer Viper
Software Windows 11 64bit
If they manage to extract 20-25% more performance on the same node, it would be a success.
Price-wise, they should be cheaper than Ada if the tick-tock pattern holds (cheap Pascal, expensive Turing, cheap Ampere, expensive Ada...).

What I don't want to see is bullshXt software tweaks made just for the charts.
OK, they may introduce a DLSS 4+, available only on the 5000 series, but I don't want the performance jump to come from that alone.
 
Joined
Jul 13, 2016
Messages
2,887 (1.01/day)
Processor Ryzen 7800X3D
Motherboard ASRock X670E Taichi
Cooling Noctua NH-D15 Chromax
Memory 32GB DDR5 6000 CL30
Video Card(s) MSI RTX 4090 Trio
Storage Too much
Display(s) Acer Predator XB3 27" 240 Hz
Case Thermaltake Core X9
Audio Device(s) Topping DX5, DCA Aeon II
Power Supply Seasonic Prime Titanium 850w
Mouse G305
Keyboard Wooting HE60
VR HMD Valve Index
Software Win 10
For their GPUs? Highly unlikely.

Hmm, depends. AMD's GCD die is 304mm2, which isn't huge. When you factor the node shrink in you are talking about a lot of performance on a pretty manufacturable die size. One of the advantages to the chiplet based approach is that each individual die is smaller and thus will yield better on nodes with higher defect rates (newer nodes). The only reason GPUs tend to use more mature nodes is due to their size but a chiplet based architecture may enable them to have an advantage over Nvidia in regards to node.
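As a minimal sketch of that yield argument (the defect density and die areas below are illustrative assumptions, not TSMC figures), the standard Poisson yield approximation shows how quickly large monolithic dies lose out:

```python
import math

def poisson_yield(die_area_mm2: float, d0_per_cm2: float) -> float:
    """Fraction of defect-free dies under the Poisson model: Y = exp(-A * D0)."""
    return math.exp(-(die_area_mm2 / 100.0) * d0_per_cm2)  # 100 mm^2 per cm^2

D0 = 0.10  # assumed defects per cm^2 for a young node (illustrative only)

for area in (600, 300, 150):  # roughly: big monolithic die, GCD-sized die, small chiplet
    print(f"{area:>3} mm^2 -> ~{poisson_yield(area, D0):.0%} good dies")
```

The exact percentages depend entirely on the real defect density, but the shape of the curve is why smaller chiplet dies are attractive on immature nodes.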
 
Joined
Nov 26, 2021
Messages
1,372 (1.52/day)
Location
Mississauga, Canada
Processor Ryzen 7 5700X
Motherboard ASUS TUF Gaming X570-PRO (WiFi 6)
Cooling Noctua NH-C14S (two fans)
Memory 2x16GB DDR4 3200
Video Card(s) Reference Vega 64
Storage Intel 665p 1TB, WD Black SN850X 2TB, Crucial MX300 1TB SATA, Samsung 830 256 GB SATA
Display(s) Nixeus NX-EDG27, and Samsung S23A700
Case Fractal Design R5
Power Supply Seasonic PRIME TITANIUM 850W
Mouse Logitech
VR HMD Oculus Rift
Software Windows 11 Pro, and Ubuntu 20.04
They have two levers to increase performance with a bigger die:
  1. Increase the power limit to just a hair under 600 W so that clocks don't decrease.
  2. Keep the power limit the same as the 4090, but increase the die size to 2080 Ti proportions. This would allow more cores, perhaps up to the 192 in the rumours.
Another option is Chip-on-Wafer-on-Substrate-L (CoWoS-L), which is used for GB200, but that seems unlikely for a gaming GPU.
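A hedged back-of-envelope of those two levers, using the usual first-order assumptions that performance scales with cores × clock and dynamic power with cores × clock × voltage²; every number below is illustrative, not a leaked spec:

```python
# Lever 1: same die, raise the power limit so boost clocks stop drooping.
# Assume the stock power limit was costing ~8% clock speed under load (illustrative).
lever1_perf = 1.0 * 1.08                      # cores unchanged, clocks recovered

# Lever 2: hold the same power, add ~33% more cores, drop clock/voltage to fit the budget.
cores = 1.33                                  # illustrative increase in SM count
clock = 0.93                                  # assumed clock reduction
volt = (1.0 / (cores * clock)) ** 0.5         # voltage scale needed to keep power ~1.0x
lever2_perf = cores * clock

print(f"Lever 1 (more power, same die):  ~{lever1_perf:.2f}x perf")
print(f"Lever 2 (same power, wider die): ~{lever2_perf:.2f}x perf at ~{volt:.2f}x voltage")
```

Wider-but-slower wins on perf/W, which is why the bigger-die lever tends to be the stronger one at a fixed power limit.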
 
Joined
Dec 31, 2020
Messages
785 (0.64/day)
Processor E5-2690 v4
Motherboard VEINEDA X99
Video Card(s) 2080 Ti WINDFROCE OC
Storage NE-512 KingSpec
Display(s) G27Q
Case DAOTECH X9
Power Supply SF450
OC models could definitely benefit from a 600 W power limit. It would be the same situation as the 980 Ti and 2080 Ti, which gained ~25% performance from overclocking alone; both were second-generation parts on the same node. CUDA cores can be added at very little die-area cost, for example going from 1536 to 2048 per GPC at around 750 mm². But if kopite still fantasizes about a 512-bit memory bus, GB202 is definitely a maxed-out, reticle-sized GPU.
 
Joined
Aug 10, 2020
Messages
96 (0.07/day)
Wait, isn't TSMC 4N what Lovelace is on? If Blackwell is on it too, it'll be a very weak new generation. Probably large increases in TGP, as you can only do so much with layout optimization and GDDR7. It's too early to tell, but I might be waiting for an RTX 6000 now.
 
Joined
Nov 27, 2023
Messages
1,149 (6.72/day)
System Name The Workhorse
Processor AMD Ryzen R9 5900X
Motherboard Gigabyte Aorus B550 Pro
Cooling CPU - Noctua NH-D15S Case - 3 Noctua NF-A14 PWM at the bottom, 2 Fractal Design 180mm at the front
Memory GSkill Trident Z 3200CL14
Video Card(s) NVidia GTX 1070 MSI QuickSilver
Storage Adata SX8200Pro
Display(s) LG 32GK850G
Case Fractal Design Torrent
Audio Device(s) FiiO E-10K DAC/Amp, Samson Meteorite USB Microphone
Power Supply Corsair RMx850 (2018)
Mouse Razer Viper (Original)
Keyboard Cooler Master QuickFire Rapid TKL keyboard (Cherry MX Black)
Software Windows 11 Pro (23H2)
Hmm, depends. AMD's GCD die is 304mm2, which isn't huge. When you factor the node shrink in you are talking about a lot of performance on a pretty manufacturable die size. One of the advantages to the chiplet based approach is that each individual die is smaller and thus will yield better on nodes with higher defect rates (newer nodes). The only reason GPUs tend to use more mature nodes is due to their size but a chiplet based architecture may enable them to have an advantage over Nvidia in regards to node.
That’s absolutely true, but that’s not why I was skeptical. I just have strong doubts that any 3nm allocation will be left this year for AMD after Apple places the orders it needs.
 
Joined
Mar 15, 2023
Messages
885 (2.07/day)
System Name Stugots V
Processor Ryzen 7 5800X3D
Motherboard MSI MAG B550 Tomahawk
Cooling Thermalright PA-120 Black
Memory 2 x 16GB G.Skill 3600Mhz CL16
Video Card(s) ASUS Dual RTX 4070
Storage 500GB WD SN750 | 2TB WD SN750 | 6TB WD Red +
Display(s) Dell S2716DG (1440p / 144Hz)
Case Fractal Meshify 2 Compact
Audio Device(s) JDS Labs Element | Audioengine HD3 + A8 | Beyerdynamic DT-990 Pro (250)
Power Supply Seasonic Focus Plus 850W
Mouse Logitech G502 Lightspeed
Keyboard Leopold FC750R
Software Win 10 Pro x64
Prices went down with the Supers, so there's hope that at least price/performance won't just keep scaling upward. I guess we'll see.

A $1000 5080 would be nice, and possible, unlike the $600 5080 some people seem to expect/want.

The 3080 was $700 at MSRP, which is ~$836 today; Jensen shifted the goalposts big time with the 4080 by asking $1200.
Four figures for an 80-series card is still too high IMO; you shouldn't break that barrier until the 90-series.
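The ~$836 figure is just a consumer-price-index adjustment of the 3080's launch price; a quick sketch, with approximate CPI-U values assumed here for illustration:

```python
# Rough CPI adjustment of the RTX 3080's $700 launch MSRP (late 2020) to early 2024.
cpi_2020 = 258.8   # approximate 2020 annual-average CPI-U
cpi_2024 = 308.4   # approximate early-2024 CPI-U
msrp_2020 = 700

adjusted = msrp_2020 * cpi_2024 / cpi_2020
print(f"${msrp_2020} in 2020 is roughly ${adjusted:.0f} in today's dollars")  # ~$834
```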
 
Joined
Jul 13, 2016
Messages
2,887 (1.01/day)
Processor Ryzen 7800X3D
Motherboard ASRock X670E Taichi
Cooling Noctua NH-D15 Chromax
Memory 32GB DDR5 6000 CL30
Video Card(s) MSI RTX 4090 Trio
Storage Too much
Display(s) Acer Predator XB3 27" 240 Hz
Case Thermaltake Core X9
Audio Device(s) Topping DX5, DCA Aeon II
Power Supply Seasonic Prime Titanium 850w
Mouse G305
Keyboard Wooting HE60
VR HMD Valve Index
Software Win 10
That’s absolutely true, but that’s not why I was skeptical. I just have strong doubts that any 3nm allocation will be left this year for AMD after Apple places the orders it needs.

That's a valid concern, although the way the industry works is that silicon design companies like AMD sign contracts years in advance guaranteeing a certain amount of allocation at a specified rate. This ensures that years spent designing a product aren't foiled by market demand swings or competitors, and it gives foundries time to build out production to meet demand. It wouldn't make much sense for TSMC or its design partners if allocation were determined dynamically, as the cost of designing a product is extremely high and both silicon products and fabs take years to bring up. No company is going to invest years and billions in designing a chip it cannot guarantee wafers for.

In other words, AMD would have to specifically forget to include 3nm capacity for its GPUs in order not to get any allocation (assuming they are going 3nm, of course), which would be the highest level of incompetence possible given that securing wafers is essential to operating a silicon design business. A company like AMD does not simply forget to secure wafers for its products and then ignore that deficiency over the multi-year design period of a product.

What could be a problem for AMD, though, is if they didn't purchase enough allocation in advance, i.e. their sales exceed their expectations. In that instance they would be competing with other companies like Apple for any un-earmarked capacity TSMC has. There's also the possibility that TSMC could not meet AMD's required 3nm wafer allotment, which would be a reason AMD could not go 3nm, although I doubt that possibility given that TSMC's capacity has increased while demand has simultaneously decreased.
 
Joined
Nov 27, 2023
Messages
1,149 (6.72/day)
System Name The Workhorse
Processor AMD Ryzen R9 5900X
Motherboard Gigabyte Aorus B550 Pro
Cooling CPU - Noctua NH-D15S Case - 3 Noctua NF-A14 PWM at the bottom, 2 Fractal Design 180mm at the front
Memory GSkill Trident Z 3200CL14
Video Card(s) NVidia GTX 1070 MSI QuickSilver
Storage Adata SX8200Pro
Display(s) LG 32GK850G
Case Fractal Design Torrent
Audio Device(s) FiiO E-10K DAC/Amp, Samson Meteorite USB Microphone
Power Supply Corsair RMx850 (2018)
Mouse Razer Viper (Original)
Keyboard Cooler Master QuickFire Rapid TKL keyboard (Cherry MX Black)
Software Windows 11 Pro (23H2)
@evernessince
That’s how it usually works, yes, but Apple is a special case in terms of their relationship with TSMC. They are a VVIP customer and they get first dibs on any new node, no ifs or buts. Everyone else has to fight for scraps and if there ARE no scraps… tough. The fact that NV didn’t manage to get any or elected not to speaks volumes to me. Even if AMD does get some allocation, there is absolutely no way they would have decided to spend it on the GPU part of the business and not what actually makes them money - CPUs and/or CDNA HPC accelerators. As such, while it’s theoretically possible that we will see a case of NV being on an older node while new AMD GPUs get their chiplets made on 3nm, I just don’t see it. Especially since in the Ada vs RDNA 3 it was AMD who used an older (if very marginally) node for their GPUs.
 
Joined
Jul 13, 2016
Messages
2,887 (1.01/day)
Processor Ryzen 7800X3D
Motherboard ASRock X670E Taichi
Cooling Noctua NH-D15 Chromax
Memory 32GB DDR5 6000 CL30
Video Card(s) MSI RTX 4090 Trio
Storage Too much
Display(s) Acer Predator XB3 27" 240 Hz
Case Thermaltake Core X9
Audio Device(s) Topping DX5, DCA Aeon II
Power Supply Seasonic Prime Titanium 850w
Mouse G305
Keyboard Wooting HE60
VR HMD Valve Index
Software Win 10
@evernessince
That’s how it usually works, yes, but Apple is a special case in terms of their relationship with TSMC. They are a VVIP customer and they get first dibs on any new node, no ifs or buts. Everyone else has to fight for scraps and if there ARE no scraps… tough. The fact that NV didn’t manage to get any or elected not to speaks volumes to me. Even if AMD does get some allocation, there is absolutely no way they would have decided to spend it on the GPU part of the business and not what actually makes them money - CPUs and/or CDNA HPC accelerators. As such, while it’s theoretically possible that we will see a case of NV being on an older node while new AMD GPUs get their chiplets made on 3nm, I just don’t see it. Especially since in the Ada vs RDNA 3 it was AMD who used an older (if very marginally) node for their GPUs.

I wouldn't say the GPU segment makes them no money; the MI300 and MI300X are both big-margin parts and include GCDs. This is particularly important if AMD wants to break into the AI market, which just so happens to carry huge margins for them. It's also counter-intuitive to approach the market with "well, we don't earn much from it now, so it's not worth investing in or pushing for 3nm". That's a mentality that begets losing. AMD has massive revenue-earning potential in both the AI and consumer graphics markets.

In addition, a good part of AMD's investment into GPUs is shared with their CPUs. Any improvements to Infinity Fabric and the additional modularization that come with GPU chiplets undoubtedly help advance the packaging of their CPUs as well. Even if AMD only performs so-so in AI / gaming GPUs, that technical expertise and knowledge is invaluable to the company as a whole. The MI300 and MI300X are excellent examples of that.

At the end of the day, regardless of how many Vs Apple has in front of its VIP status, AMD and TSMC would have known the wafer allotments years in advance. If TSMC told AMD straight up years back that they couldn't fill a theoretical 3nm allotment for GPUs, that would be a failure on TSMC's part for sure. Again, Apple can eat up all the uncontracted capacity, but TSMC being unable to build out capacity? I'm not seeing it, given the lowered demand and TSMC's massive investments to do exactly that: build out capacity.

Again, this all assumes that AMD wanted to go with 3nm for its GPUs, but I think if they wanted to they could.
 
Joined
Nov 26, 2021
Messages
1,372 (1.52/day)
Location
Mississauga, Canada
Processor Ryzen 7 5700X
Motherboard ASUS TUF Gaming X570-PRO (WiFi 6)
Cooling Noctua NH-C14S (two fans)
Memory 2x16GB DDR4 3200
Video Card(s) Reference Vega 64
Storage Intel 665p 1TB, WD Black SN850X 2TB, Crucial MX300 1TB SATA, Samsung 830 256 GB SATA
Display(s) Nixeus NX-EDG27, and Samsung S23A700
Case Fractal Design R5
Power Supply Seasonic PRIME TITANIUM 850W
Mouse Logitech
VR HMD Oculus Rift
Software Windows 11 Pro, and Ubuntu 20.04
@evernessince
That’s how it usually works, yes, but Apple is a special case in terms of their relationship with TSMC. They are a VVIP customer and they get first dibs on any new node, no ifs or buts. Everyone else has to fight for scraps and if there ARE no scraps… tough. The fact that NV didn’t manage to get any or elected not to speaks volumes to me. Even if AMD does get some allocation, there is absolutely no way they would have decided to spend it on the GPU part of the business and not what actually makes them money - CPUs and/or CDNA HPC accelerators. As such, while it’s theoretically possible that we will see a case of NV being on an older node while new AMD GPUs get their chiplets made on 3nm, I just don’t see it. Especially since in the Ada vs RDNA 3 it was AMD who used an older (if very marginally) node for their GPUs.
I suspect N3 yields are still too low for anyone but Apple to bother with it, but we'll know by the time Zen 5c rolls out as that's rumoured to be on N3. If that rumour turns out to be true, then that would lend more weight to the hypothesis that N3 isn't mature enough for Nvidia's giant datacenter dies.
 
Joined
Nov 27, 2023
Messages
1,149 (6.72/day)
System Name The Workhorse
Processor AMD Ryzen R9 5900X
Motherboard Gigabyte Aorus B550 Pro
Cooling CPU - Noctua NH-D15S Case - 3 Noctua NF-A14 PWM at the bottom, 2 Fractal Design 180mm at the front
Memory GSkill Trident Z 3200CL14
Video Card(s) NVidia GTX 1070 MSI QuickSilver
Storage Adata SX8200Pro
Display(s) LG 32GK850G
Case Fractal Design Torrent
Audio Device(s) FiiO E-10K DAC/Amp, Samson Meteorite USB Microphone
Power Supply Corsair RMx850 (2018)
Mouse Razer Viper (Original)
Keyboard Cooler Master QuickFire Rapid TKL keyboard (Cherry MX Black)
Software Windows 11 Pro (23H2)
I wouldn't say the GPU segment makes them no money; the MI300 and MI300X are both big-margin parts and include GCDs. This is particularly important if AMD wants to break into the AI market, which just so happens to carry huge margins for them. It's also counter-intuitive to approach the market with "well, we don't earn much from it now, so it's not worth investing in or pushing for 3nm". That's a mentality that begets losing. AMD has massive revenue-earning potential in both the AI and consumer graphics markets.
I specifically mentioned CDNA accelerators in my post; I am not sure what you are disagreeing with. I remind you that AMD's HPC cards run an explicitly different architecture from their gaming offerings. Yes, even the GCDs are different. They would not be in the same equation for AMD.

At the end of the day, regardless of how many Vs Apple has in front of its VIP status, AMD and TSMC would have known the wafer allotments years in advance. If TSMC told AMD straight up years back that they couldn't fill a theoretical 3nm allotment for GPUs, that would be a failure on TSMC's part for sure. Again, Apple can eat up all the uncontracted capacity, but TSMC being unable to build out capacity? I'm not seeing it, given the lowered demand and TSMC's massive investments to do exactly that: build out capacity.
I am actually not sure about this. TSMC had yield issues with 3N up until last summer and I have no idea if they still are having them. There might not have been any offers to anyone except Apple at all.

I suspect N3 yields are still too low for anyone but Apple to bother with it, but we'll know by the time Zen 5c rolls out as that's rumoured to be on N3. If that rumour turns out to be true, then that would lend more weight to the hypothesis that N3 isn't mature enough for Nvidia's giant datacenter dies.
That, and historically early node iterations aren’t good for anything that’s not fairly low-power. This is fine for Apple with their SoCs. This is absolutely not fine for GPUs.
 
Joined
Aug 26, 2021
Messages
290 (0.29/day)
Man the doom and gloom on this thread is hilarious. Hopefully the product won't be bad and if it is don't buy it.
 
Joined
Sep 17, 2014
Messages
21,049 (5.96/day)
Location
The Washing Machine
Processor i7 8700k 4.6Ghz @ 1.24V
Motherboard AsRock Fatal1ty K6 Z370
Cooling beQuiet! Dark Rock Pro 3
Memory 16GB Corsair Vengeance LPX 3200/C16
Video Card(s) ASRock RX7900XT Phantom Gaming
Storage Samsung 850 EVO 1TB + Samsung 830 256GB + Crucial BX100 250GB + Toshiba 1TB HDD
Display(s) Gigabyte G34QWC (3440x1440)
Case Fractal Design Define R5
Audio Device(s) Harman Kardon AVR137 + 2.1
Power Supply EVGA Supernova G2 750W
Mouse XTRFY M42
Keyboard Lenovo Thinkpad Trackpoint II
Software W10 x64
Guys, node wars don't really matter anymore. It's the performance they can extract out of the nodes they're using.

On that point, I have no doubt NVIDIA will be able to extract enough performance from the node.
This was always the case. There are no node wars. Old node: you can make a bigger GPU at better yields and put it on the market cheaper. New node: you make a smaller die because it's f'ing expensive. If you can keep some semblance of performance parity you're good.

There are, however, bad nodes, like Samsung's 8nm. As a result, while Ampere wasn't bad in performance, its TDP was absolutely stellar. Not in a good sense.
 
Joined
Dec 31, 2020
Messages
785 (0.64/day)
Processor E5-2690 v4
Motherboard VEINEDA X99
Video Card(s) 2080 Ti WINDFROCE OC
Storage NE-512 KingSpec
Display(s) G27Q
Case DAOTECH X9
Power Supply SF450
Man the doom and gloom on this thread is hilarious. Hopefully the product won't be bad and if it is don't buy it.
Kopite said TSMC N3 back in November, and now this; let's see if the 512-bit bus holds true. But he has already flip-flopped on that too, maybe twice. The future is written; in the end it only remains for events to play out. I don't mind, as long as there is a good-performing card at $799.
 
Joined
Nov 26, 2021
Messages
1,372 (1.52/day)
Location
Mississauga, Canada
Processor Ryzen 7 5700X
Motherboard ASUS TUF Gaming X570-PRO (WiFi 6)
Cooling Noctua NH-C14S (two fans)
Memory 2x16GB DDR4 3200
Video Card(s) Reference Vega 64
Storage Intel 665p 1TB, WD Black SN850X 2TB, Crucial MX300 1TB SATA, Samsung 830 256 GB SATA
Display(s) Nixeus NX-EDG27, and Samsung S23A700
Case Fractal Design R5
Power Supply Seasonic PRIME TITANIUM 850W
Mouse Logitech
VR HMD Oculus Rift
Software Windows 11 Pro, and Ubuntu 20.04
This was always the case. There are no node wars. Old node: you can make a bigger GPU at better yields and put it on the market cheaper. New node: you make a smaller die because it's f'ing expensive. If you can keep some semblance of performance parity you're good.

There are, however, bad nodes, like Samsung's 8nm. As a result, while Ampere wasn't bad in performance, its TDP was absolutely stellar. Not in a good sense.
I'm surprised by how many of our forum members think that process nodes are unimportant and can be worked around. There's no substitute for smaller, faster, and lower-power transistors. None of Nvidia, AMD, or Apple would be where they are now without continuously improving nodes. As for the flatlining cost per transistor, that is a new phenomenon; for most of the history of making microprocessors, newer processes have also brought cost reductions.
 
Joined
Jul 13, 2016
Messages
2,887 (1.01/day)
Processor Ryzen 7800X3D
Motherboard ASRock X670E Taichi
Cooling Noctua NH-D15 Chromax
Memory 32GB DDR5 6000 CL30
Video Card(s) MSI RTX 4090 Trio
Storage Too much
Display(s) Acer Predator XB3 27" 240 Hz
Case Thermaltake Core X9
Audio Device(s) Topping DX5, DCA Aeon II
Power Supply Seasonic Prime Titanium 850w
Mouse G305
Keyboard Wooting HE60
VR HMD Valve Index
Software Win 10
I am actually not sure about this. TSMC had yield issues with 3N up until last summer and I have no idea if they still are having them. There might not have been any offers to anyone except Apple at all.

Apple was not the only one granted access, given that MediaTek has already publicly announced a 3nm chip slated for sometime in 2024: https://www.neowin.net/news/mediatek-develops-its-first-3nm-chip-using-tsmc-process-coming-in-2024/

TSMC is targeting 80% yield for 3nm: https://www.androidheadlines.com/2024/02/tsmc-double-3nm-production-2024.html

They are also looking at doubling capacity with Qualcomm, MediaTek, and others having 3nm chips in the pipe.

A GPU in late 2024 would not be infeasible at an 80% yield rate, particularly when we are talking about a chiplet-based GPU where the individual dies are smaller and thus fare better on lower-yield nodes. There are no official die-size numbers for the M3 Max, Apple's largest 3nm product, but it has 3.7 times the transistors of the base M3, which has an approximate die size of 150-170 mm². Based on the lowest estimate, a rough guess for the M3 Max would be around 555 mm². Even if that estimate is significantly off, it shows that if Apple can get high enough yields to make the M3 Max viable, a 304 mm² GPU die should definitely be workable.

I also think it's possible that the rumors of AMD ceding the high end are really people confusing a smaller GCD with giving up the high end. AMD could shrink the GCD to well under 304 mm² to further improve yields and then simply use multiple GCDs in its higher-end products. AMD could keep the MCDs on an older node, given those don't benefit much from a shrink, although they are extremely tiny and would yield well on newer nodes too. Using different nodes would also let them leverage more total capacity.
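A sketch of that back-of-envelope math, assuming (as above) that the M3 Max keeps roughly the same transistor density as the base M3; all inputs are the approximations quoted in the paragraph:

```python
# Rough M3 Max die-size estimate from the transistor ratio and base-M3 die size above.
base_m3_mm2 = (150, 170)     # approximate base M3 die-size range
transistor_ratio = 3.7       # M3 Max transistor count vs base M3

low, high = (a * transistor_ratio for a in base_m3_mm2)
print(f"M3 Max estimate: ~{low:.0f}-{high:.0f} mm^2 (vs a ~304 mm^2 RDNA 3 GCD)")
```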

I don't know the probability that AMD goes 3nm; without access to both TSMC's numbers and AMD's numbers, it's very hard to say.
 
Joined
Nov 27, 2023
Messages
1,149 (6.72/day)
System Name The Workhorse
Processor AMD Ryzen R9 5900X
Motherboard Gigabyte Aorus B550 Pro
Cooling CPU - Noctua NH-D15S Case - 3 Noctua NF-A14 PWM at the bottom, 2 Fractal Design 180mm at the front
Memory GSkill Trident Z 3200CL14
Video Card(s) NVidia GTX 1070 MSI QuickSilver
Storage Adata SX8200Pro
Display(s) LG 32GK850G
Case Fractal Design Torrent
Audio Device(s) FiiO E-10K DAC/Amp, Samson Meteorite USB Microphone
Power Supply Corsair RMx850 (2018)
Mouse Razer Viper (Original)
Keyboard Cooler Master QuickFire Rapid TKL keyboard (Cherry MX Black)
Software Windows 11 Pro (23H2)
@evernessince
We have no idea what the yields are like on Apple silicon, true, but Apple did announce that it is buying 50% more 3nm capacity in 2024. I have no idea what this tells us about yields (do they expect more demand, or are yields still poor even for the demand that's there), but it means they will gobble up a lot of extra capacity. Then there is Intel, who apparently has also already booked quite a bit. MediaTek and Qualcomm are in line; I guess they will get what is left. Again, the fact that after all the rumors and leaks NV turned out to be unable or unwilling to get some for themselves is, IMO, telling. It's obviously not a question of money, not for NV. Whether the reason is a lack of capacity or current 3nm being unsuitable for their needs is a separate question.

I do want to note that even the M3 Max is a fairly low-power chip. It very well might be that, until N3E is off the ground, producing high-wattage parts, which desktop GPUs absolutely are, is just off the table. If AMD can design the chiplets in a way that makes them suitable, they MIGHT have a chance to use 3nm, but those parts would be quite limited in how capable they could be, I think. So far all the AMD-focused 3nm rumors have been about Zen 5c, another low-power part. I think we will know more in approximately May, when it's speculated that Zen 5 proper will be revealed. If regular Zen 5 isn't on 3nm (or at least initially isn't; shrinks are possible), then I'd say no way in hell is RDNA 4 on 3nm.
It doesn't really matter in the end. I have no faith that even with a hypothetical node advantage AMD could actually make NV stumble and present themselves as a peer competitor. It would take a miracle at this point; the last time AMD could be one was what, Hawaii, arguably? Or hell, the HD 5000 series, where they capitalized on the NV eff-up that was OG Fermi?
 
Joined
Oct 31, 2022
Messages
138 (0.25/day)
It really doesn't look good. Ada is already close to B100 in density.
What could happen is that GB202 uses 2x GB203. :D That does make a bit of sense if GB202 is 512-bit and GB203 is 256-bit... But I doubt it.
How much extra performance is there from a 30% density and shader increase? The extra VRAM bandwidth from GDDR7 also gives some performance, but +70% in total like some rumors said???
I won't buy a GPU that uses more power than the 4090; I even undervolted mine below 300 W...

If the latest rumors are true, Blackwell might be dead for consumers...

If they manage to extract 20-25% more performance on the same node, it would be a success.
I doubt many enthusiasts would buy a 5090 for over $1700 at that point...
 
Joined
Jan 11, 2022
Messages
498 (0.58/day)
If they changed the architecture for better rasterization they might just be able to pull it off
 
Joined
Sep 9, 2022
Messages
42 (0.07/day)
IF the rumors are true: Very smart move by nVidia. They are going to make dozens of billions by allocating all 3nm capacities to the professional AI/datacenter stuff while gaming GPUs will not compete over scarce 3nm capacities because they are going to be made on the older 4N node. At the same time, the 4N node is super-mature so nVidia will make very nice and juicy margins from the gaming products.

It would be pretty unfortunate for us gamers if these rumors are really true but it is kind of inevitable that gaming is taking a backseat for now if you look at the sheer numbers. nVidia made more than SIX times more revenue from AI/datacenter than from gaming ($18.404bn vs. $2.865bn). And yet another ~$0.75bn was made from Automotive and Professional Visualization.

nVidia have certainly almost exclusively been focusing on datacenter for quite a while by now and moved all of their top talent (engineering *and* software) to the datacenter segment. That is what makes this rumor pretty plausible. They are probably putting a pretty low effort into the next gaming generation. Compared to the previous gens that had lots of surprises on the feature/software side (DLSS3/FG/Remix etc.) the RTX 5000 series will probably be a disappointment but that's how it goes when the spotlight shifts in such a MASSIVE way...
 
Joined
Sep 15, 2011
Messages
6,487 (1.40/day)
Processor Intel® Core™ i7-13700K
Motherboard Gigabyte Z790 Aorus Elite AX
Cooling Noctua NH-D15
Memory 32GB(2x16) DDR5@6600MHz G-Skill Trident Z5
Video Card(s) ZOTAC GAMING GeForce RTX 3080 AMP Holo
Storage 2TB SK Platinum P41 SSD + 4TB SanDisk Ultra SSD + 500GB Samsung 840 EVO SSD
Display(s) Acer Predator X34 3440x1440@100Hz G-Sync
Case NZXT PHANTOM410-BK
Audio Device(s) Creative X-Fi Titanium PCIe
Power Supply Corsair 850W
Mouse Logitech Hero G502 SE
Software Windows 11 Pro - 64bit
Benchmark Scores 30FPS in NFS:Rivals
We will see if this new "Jensen's Law" looks like this now:

2020, RTX 3080 - $700
2022, RTX 4080 - $1200
2024, RTX 5080 - $2060
No gaming card is worth more than $1000, with all taxes included. Not even the top dog.
But hey, people are paying $1000+ for the same phones every year, so there is no lack of idiots with more money than common sense...
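For reference, the $2,060 figure in the list above is just the 3080-to-4080 price multiplier applied one more time:

```python
# Extrapolating the 80-class launch MSRP if the 3080 -> 4080 jump repeats.
msrp_3080, msrp_4080 = 700, 1200
gen_multiplier = msrp_4080 / msrp_3080            # ~1.71x per generation
msrp_5080_guess = msrp_4080 * gen_multiplier
print(f"Hypothetical 5080 MSRP: ~${msrp_5080_guess:.0f}")  # ~$2057
```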
 
Joined
Jan 11, 2022
Messages
498 (0.58/day)
IF the rumors are true: Very smart move by nVidia. They are going to make dozens of billions by allocating all 3nm capacities to the professional AI/datacenter stuff while gaming GPUs will not compete over scarce 3nm capacities because they are going to be made on the older 4N node. At the same time, the 4N node is super-mature so nVidia will make very nice and juicy margins from the gaming products.

It would be pretty unfortunate for us gamers if these rumors are really true but it is kind of inevitable that gaming is taking a backseat for now if you look at the sheer numbers. nVidia made more than SIX times more revenue from AI/datacenter than from gaming ($18.404bn vs. $2.865bn). And yet another ~$0.75bn was made from Automotive and Professional Visualization.

nVidia have certainly almost exclusively been focusing on datacenter for quite a while by now and moved all of their top talent (engineering *and* software) to the datacenter segment. That is what makes this rumor pretty plausible. They are probably putting a pretty low effort into the next gaming generation. Compared to the previous gens that had lots of surprises on the feature/software side (DLSS3/FG/Remix etc.) the RTX 5000 series will probably be a disappointment but that's how it goes when the spotlight shifts in such a MASSIVE way...
That might be a good thing, splitting the segments completely and allowing them to diverge.
$3bn isn't peanuts
 
Joined
May 11, 2018
Messages
996 (0.45/day)
The 3080 was $700 at MSRP, which is ~$836 today; Jensen shifted the goalposts big time with the 4080 by asking $1200.
Four figures for an 80-series card is still too high IMO; you shouldn't break that barrier until the 90-series.

Yeah, but the usual Nvidia-apologist line goes:

- "The RTX 3080 was never really a $700 card; for most of its lifetime it was sold for over $1500, and people (not gamers, though) were buying it!"

Even the original TechPowerUp review told us to compare it to the crypto-inflated RTX 3090 Ti, which officially launched at an insane $2000! And then crypto collapsed.

And when the RTX 5080 hits at close to $2000, we'll hear that we can't compare it to the RTX 4080 SUPER's $1000 MSRP or other discounts, only to the original generation's $1200 launch price, plus two years' worth of inflation, which in this economy is whatever people think it is. So in the view of many, a $2000 card at the start of 2025 will be exactly the same value as a $1200 card in 2022, and they'll have charts to prove it to you! It's not even more expensive!
 