
TSMC Reportedly Surpasses 90% Production Yield Rate with 2 nm Process

T0@st

News Editor
At the tail end of Q1 2025, industry whispers suggested that TSMC's premier facilities had completed leading-edge 2 nm (N2) trial production runs. By early April, company insiders alluded to a confident push into preparations for a future 1.4 nm node at the "P2" Baoshan plant. That remains a distant prospect; watchdogs envision a 2028 release window. According to expert predictions, cross-facility 2 nm wafer mass production is expected to start by the end of this year. Foundry staff appear to be actively pursuing yield improvements; earlier estimates indicated the crossing of a 70% milestone, good enough for full-blown manufacturing runs.

Fresher musings point to staff and advanced equipment achieving, and stepping just beyond, an impressive 90% mark, albeit with silicon intended for "memory products." As of mid-May, Samsung's competing "SF2" process allegedly remains in testing phases. South Korean insider reports posit 2 nm GAA trial yields passing 40%, a significant development for the megacorp's foundry business. Roughly a month ago, TSMC leadership spoke publicly about unprecedented demand for 2 nm wafers. Due to rumors of greater-than-anticipated pricing for important TSMC clients, Samsung Semi's top brass is supposedly trying to woo the likes of NVIDIA and Qualcomm.



View at TechPowerUp Main Site | Source
 
Ok, this is impressive. But how well does 2nm work?

I want to see what backside power delivery at 1.6 gets us. If they can eliminate power delivery traces by just adding some vias, and it gives significantly more stable power, it will hopefully ramp the Giggle hurts even more. The last report I saw on backside power delivery for 2nm was from 2023, and I can't find anything newer mentioning it. Maybe it will magically appear in the "P" performance segment.

 
Some of the other reports pointed out that the over-90% yield is for SRAM. I don't remember if they mentioned die size of any sort.
Especially since this seems to come directly from TSMC, we should be careful with how we interpret it.
 
I want to see what backside power delivery at 1.6 gets us. If they can eliminate power delivery traces by just adding some vias, and it gives significantly more stable power, it will hopefully ramp the Giggle hurts even more. The last report I saw on backside power delivery for 2nm was from 2023, and I can't find anything newer mentioning it. Maybe it will magically appear in the "P" performance segment.

Or they wait until Samsung or Intel have wasted the billions to get that going, and simply copy them.
 
What will be the next step after 2nm? I guess we are not very far from the point at which quantum effects take over.
 
Gimme sub-sub-sub nm, or gimme death :D
 
Unlikely to happen. That would get into the subatomic range, so unless ICs go down to the scale of quarks, it's just not going to happen.
If/when we reach that point, my guess is it will be atomic assembly: nano-machines building atomic transistors.
 
Ok, this is impressive. But how well does 2nm work?

I can only surmise that it's going to be amazing, considering what we know about the performance/power of the current 4nm. A 90% production yield rate is off the charts. I would like to say that we might see products at a lower cost as a result, but I am not holding my breath on that anymore.

There is a reason I didn't upgrade to the 9800X3D, impressive a chip as it is (still tempted, but the 5900X is sufficient for now). The 10800X3D, however, will no doubt be on 2nm: a single 12-core CCD, a 6GHz monster, at like, what, 50W? Not to mention, I am not too sure if they will keep using the same socket. Probably, but I'm not risking it; motherboards have become stupidly expensive for no reason. And RAM, mmmm, I want to see what the new chip can take for a 1:1 command rate between memory and CPU.

Not enthusiastic on the GPU side, as nGreedia will just make an RTX 6090 for $10,000 and call it a day, something with what, a 10% performance increase, and tell you it's the best, and money will just be thrown at it. As for AMD, they will look at what big daddy nGreedia does price/performance-wise and follow suit, instead of just knocking them on their asses.
 
Giggle hurts
As a non-English speaker, that one left me like "WTF??" for about 10 minutes... Now I can't stop giggling.
 
I can only surmise that it's going to be amazing, considering what we know about the performance/power of the current 4nm. A 90% production yield rate is off the charts. I would like to say that we might see products at a lower cost as a result, but I am not holding my breath on that anymore.

There is a reason I didn't upgrade to the 9800X3D, impressive a chip as it is (still tempted, but the 5900X is sufficient for now). The 10800X3D, however, will no doubt be on 2nm: a single 12-core CCD, a 6GHz monster, at like, what, 50W? Not to mention, I am not too sure if they will keep using the same socket. Probably, but I'm not risking it; motherboards have become stupidly expensive for no reason. And RAM, mmmm, I want to see what the new chip can take for a 1:1 command rate between memory and CPU.

Not enthusiastic on the GPU side, as nGreedia will just make an RTX 6090 for $10,000 and call it a day, something with what, a 10% performance increase, and tell you it's the best, and money will just be thrown at it. As for AMD, they will look at what big daddy nGreedia does price/performance-wise and follow suit, instead of just knocking them on their asses.
This post is so chock-full of current internet tech bullshit that I could not just let it go. Please take mainstream sites and especially headlines - even more so techtubers and their clickbait titles - with a grain of salt.

- Performance/power improvements have been slowing down. There are a number of technical reasons why that is the case. 4nm is a variant of 5nm, and its gains over 7/6nm are good but not on the scale of node shrinks of old. The same seems to apply to 3nm: there are good improvements there, but they have slowed down a lot. 2nm is the next generation after that, with no signs of this getting better.
- The 90% yield rate claim, according to some other news coverage, is for SRAM. This is a very regular array of transistors, relatively dense but also easy to manufacture. Also, I have not seen anything about die sizes, and without that the 90% is meaningless (see the yield sketch after this list). And "off the charts"? For reasonably sized dies - think a mobile SoC, an AMD CCD or an Intel tile - on a mass-production-ready node this is more of a prerequisite than off the charts.
- TSMC has stated that 2nm will be more expensive, compared to 3nm, which is already more expensive than 5/4nm and more expensive than processes this mature have historically been. No, there will not be lower cost. Maybe later, when 2nm becomes mainstream, but that is years away.
- The 9800X3D is on 4nm. AMD has not even used 3nm for mainstream products so far, and 3nm has been in mass production since late 2022. 2nm is a generation newer than 3nm; there is a while until mass production, and the delay for something like CPUs or GPUs quite likely comes on top of that. The 10800X3D is said to be on 3nm, but afaik AMD has not officially confirmed that yet. It is rumored to be shrunk to 2nm later, but see above - that will be quite a while later, if it happens at all.
- 6GHz seems to be moving further away rather than coming closer. It's about power. Intel got burnt, and AMD is still keeping official boost clocks in the latter half of the 5GHz range. There is a reason for that. And remember that Intel did 6GHz on 14nm; there have been 10nm, 7nm, 5nm and 3nm processes since, and the frequencies have not increased. Even getting to 5+GHz needs the performance variant of a manufacturing process (which is not power efficient at all at the top end) and specific tweaking of the (micro)architecture.
- The point about 50W might be a reasonable one, in the sense that there will be efficiency improvements, and 50W is in the range where they should apply quite nicely to a CPU with a desktop amount of cores.
- More expensive motherboards aren't exactly for no reason. There is PCI Express 5.0, there is DDR5, and both are faster and need better, more reliable signals running across the motherboard. This in turn makes the board more complex and more expensive in various ways. Also, CPUs have become power hogs. Intel was like that for a while, but AMD followed suit with their own 220+W CPUs. Any motherboard a manufacturer builds for the socket should be able to run any CPU that runs in that socket, so essentially that cheap A620 board needs to be able to run the 220W 9950X properly or the manufacturer gets ridiculed online. Better, bigger VRM = more cost. Manufacturers did abuse the new tech as an excuse for price hikes, but at the end of the day it is clear that all of this raised the baseline price of a motherboard significantly.

- GPUs are not exempt from the same problems, only it is much worse due to their larger size. In quite a few recent generations AMD did not even field a response to Nvidia's flagship, competing from one step lower instead. For example, the 5090 stands alone, while the 9070 (AMD's biggest chip) competes against the 5080 (Nvidia's second biggest). Same for the 4090, 7900XTX and 4080. Yes, Nvidia's flagships are expensive, but they are also huge and are pushing limits - often enough the reticle size, for one. Power consumption is another, and this ties into the problem I mentioned before with performance/power improvements having slowed down.
- The impression that AMD could just sail in and give us consumers GPUs that are awesome and cheap is utopian. It has no touchpoints with reality. AMD has been struggling with exactly the same problems.
- Chiplets for GPUs are probably the future, but the problem is that nobody has figured out how to use them efficiently for consumer GPUs yet. AMD's RDNA3, with its separated memory controllers and attached cache, was the best attempt, and it unfortunately showcased the efficiency and slight performance hits that are to be expected. Chiplets are not a better solution than monolithic and never have been. Chiplets are good for one thing: splitting up the die so it can be manufactured more efficiently/cheaply. That allows you to reduce cost and/or create an ASIC that would not otherwise be possible - the canonical example is a chip so large that it exceeds the reticle limit, or one whose yields would not be usable. The downside is that moving data between dies costs power (read: a hit to efficiency) and can come with (largely mitigatable) hits to latency.
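
To make the die-size point concrete, here is a rough sketch of the standard Poisson yield model. The defect density and die areas are purely illustrative assumptions on my part, not TSMC figures:

```python
# Rough Poisson yield model: yield = exp(-D0 * A).
# D0 (defects per cm^2) and the die areas are illustrative assumptions,
# not TSMC numbers.
import math

def poisson_yield(defects_per_cm2: float, die_area_mm2: float) -> float:
    """Fraction of dies expected to come out defect-free."""
    return math.exp(-defects_per_cm2 * die_area_mm2 / 100.0)

d0 = 0.1  # assumed defect density for a maturing node
for name, area_mm2 in [("small SRAM test die", 25),
                       ("mobile SoC / CCD-ish", 100),
                       ("big GPU", 600)]:
    print(f"{name:>22} ({area_mm2:3d} mm^2): {poisson_yield(d0, area_mm2):.1%}")
```

Same assumed defect density, wildly different yield depending on die area, which is why a 90% headline number without a die size says very little.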

/rant
 
Compared to N3, its density increases by 15%, then by another 7% with A16.

Nvidia had better be doubling down on MFG 8X to transform 15 FPS into 120 FPS as soon as possible.
Progress is slowing to a halt, and will do so more abruptly very soon as the reticle size is halved to 429 mm² on High-NA.
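
Compounding those quoted figures (my simplification: treating them as straight multiplicative density gains) shows how little headroom is left once High-NA halves the reticle:

```python
# Compound the density gains quoted above: N3 -> N2 (+15%), N2 -> A16 (+7%).
# Treating them as simple multiplicative factors is my assumption.
n2_vs_n3 = 1.15
a16_vs_n2 = 1.07
a16_vs_n3 = n2_vs_n3 * a16_vs_n2
print(f"A16 vs N3 density: {a16_vs_n3:.2f}x")  # ~1.23x over two generations

# High-NA roughly halves the exposure field (~858 mm^2 -> ~429 mm^2),
# so the transistor budget of a single max-size die actually shrinks.
print(f"Max monolithic-die transistor budget: {a16_vs_n3 * 429 / 858:.2f}x")
```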
 
2nm is just a callous marketing term. "2nm" is not actually 2nm; the node is typically characterized by a gate pitch of 40nm and a metal pitch of 18nm (TSMC).

Source: (image attachment)


So going beyond "2nm" is no issue yet. There is a long way to go before we reach picometre or atomic scale. Don't let the marketing fool you.
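
For a feel of how far the name is from the geometry, here is a naive footprint comparison using those pitches. Treating gate pitch times metal pitch as a per-transistor footprint is a crude simplification on my part:

```python
# Naive per-transistor "footprint" from the pitches quoted above (TSMC N2).
# Using gate pitch x metal pitch as the footprint is a crude simplification.
gate_pitch_nm = 40    # contacted gate pitch
metal_pitch_nm = 18   # minimum metal pitch

footprint_nm2 = gate_pitch_nm * metal_pitch_nm
literal_nm2 = 2 * 2   # what a literal 2 nm x 2 nm feature would occupy

print(f"Pitch-based footprint:     {footprint_nm2} nm^2")
print(f"Literal '2 nm' square:     {literal_nm2} nm^2")
print(f"Gap to the marketing name: {footprint_nm2 // literal_nm2}x")
```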
 
Compared to N3, its density increases by 15%, then by another 7% with A16.

Nvidia had better be doubling down on MFG 8X to transform 15 FPS into 120 FPS as soon as possible.
Progress is slowing to a halt, and will do so more abruptly very soon as the reticle size is halved to 429 mm² on High-NA.

They haven't even released Reflex 2.0 with Frame Warp yet. I don't see how they can make the smearing/input lag from the 8x delay any better without it. You can already feel the 2x/3x in some titles, and I haven't found a game where 4x works/feels great.

That being said, not everyone has the same sensitivity to input latency or to noticing frame irregularities. A new can of worms has been opened; just look at the audiophile side of things. What shall we call this? :P

Oh yeah, and lol, good luck cramming 8x additional frames into an 8GB VRAM buffer. :roll:
 
Personally, I think that would be going in the wrong direction. We need to focus on a better way to do computing.
The current quantum computing machines are BS according to the engineering teams that work on them. The biggest issue holding us back is thermal flux: how to get heat out of tiny metallic traces on a silicon substrate. Stacking dies would work a lot better if we could cool both sides, or add a power delivery/thermal transfer layer, which is why I hope backside power delivery with a copper plate turns out to be feasible to help with cooling as well; better yet, print on both sides of the die and cool both.

Imagine a die stack of cache and cores separated by copper just thick enough to transfer the heat out.
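
As a sanity check on the "just thick enough copper" idea, here is a 1-D Fourier conduction estimate through a thin copper layer between two stacked dies. All numbers are my own illustrative assumptions, and it ignores interface resistance and lateral spreading:

```python
# 1-D Fourier conduction through a thin copper layer: Q = k * A * dT / t.
# All values are illustrative assumptions, not measured data.
k_copper = 400.0       # W/(m*K), bulk thermal conductivity of copper
die_area_m2 = 100e-6   # 100 mm^2 die
thickness_m = 100e-6   # 100 um copper layer
delta_t_k = 5.0        # assumed 5 K drop across the layer

q_watts = k_copper * die_area_m2 * delta_t_k / thickness_m
print(f"Heat carried through the layer: {q_watts:.0f} W")
# ~2000 W for only a 5 K drop: bulk copper is not the bottleneck,
# the die-to-die interfaces and getting heat out of the stack are.
```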
This post is so chock-full of current internet tech bullshit that I could not just let it go. Please take mainstream sites and especially headlines - even more so techtubers and their clickbait titles - with a grain of salt.

- Performance/power improvements have been slowing down. There are a number of technical reasons why that is the case. 4nm is a variant of 5nm, and its gains over 7/6nm are good but not on the scale of node shrinks of old. The same seems to apply to 3nm: there are good improvements there, but they have slowed down a lot. 2nm is the next generation after that, with no signs of this getting better.
- The 90% yield rate claim, according to some other news coverage, is for SRAM. This is a very regular array of transistors, relatively dense but also easy to manufacture. Also, I have not seen anything about die sizes, and without that the 90% is meaningless. And "off the charts"? For reasonably sized dies - think a mobile SoC, an AMD CCD or an Intel tile - on a mass-production-ready node this is more of a prerequisite than off the charts.
- TSMC has stated that 2nm will be more expensive, compared to 3nm, which is already more expensive than 5/4nm and more expensive than processes this mature have historically been. No, there will not be lower cost. Maybe later, when 2nm becomes mainstream, but that is years away.
- The 9800X3D is on 4nm. AMD has not even used 3nm for mainstream products so far, and 3nm has been in mass production since late 2022. 2nm is a generation newer than 3nm; there is a while until mass production, and the delay for something like CPUs or GPUs quite likely comes on top of that. The 10800X3D is said to be on 3nm, but afaik AMD has not officially confirmed that yet. It is rumored to be shrunk to 2nm later, but see above - that will be quite a while later, if it happens at all.
- 6GHz seems to be moving further away rather than coming closer. It's about power. Intel got burnt, and AMD is still keeping official boost clocks in the latter half of the 5GHz range. There is a reason for that. And remember that Intel did 6GHz on 14nm; there have been 10nm, 7nm, 5nm and 3nm processes since, and the frequencies have not increased. Even getting to 5+GHz needs the performance variant of a manufacturing process (which is not power efficient at all at the top end) and specific tweaking of the (micro)architecture.
- The point about 50W might be a reasonable one, in the sense that there will be efficiency improvements, and 50W is in the range where they should apply quite nicely to a CPU with a desktop amount of cores.
- More expensive motherboards aren't exactly for no reason. There is PCI Express 5.0, there is DDR5, and both are faster and need better, more reliable signals running across the motherboard. This in turn makes the board more complex and more expensive in various ways. Also, CPUs have become power hogs. Intel was like that for a while, but AMD followed suit with their own 220+W CPUs. Any motherboard a manufacturer builds for the socket should be able to run any CPU that runs in that socket, so essentially that cheap A620 board needs to be able to run the 220W 9950X properly or the manufacturer gets ridiculed online. Better, bigger VRM = more cost. Manufacturers did abuse the new tech as an excuse for price hikes, but at the end of the day it is clear that all of this raised the baseline price of a motherboard significantly.

- GPUs are not exempt from the same problems, only it is much worse due to their larger size. In quite a few recent generations AMD did not even field a response to Nvidia's flagship, competing from one step lower instead. For example, the 5090 stands alone, while the 9070 (AMD's biggest chip) competes against the 5080 (Nvidia's second biggest). Same for the 4090, 7900XTX and 4080. Yes, Nvidia's flagships are expensive, but they are also huge and are pushing limits - often enough the reticle size, for one. Power consumption is another, and this ties into the problem I mentioned before with performance/power improvements having slowed down.
- The impression that AMD could just sail in and give us consumers GPUs that are awesome and cheap is utopian. It has no touchpoints with reality. AMD has been struggling with exactly the same problems.
- Chiplets for GPUs are probably the future, but the problem is that nobody has figured out how to use them efficiently for consumer GPUs yet. AMD's RDNA3, with its separated memory controllers and attached cache, was the best attempt, and it unfortunately showcased the efficiency and slight performance hits that are to be expected. Chiplets are not a better solution than monolithic and never have been. Chiplets are good for one thing: splitting up the die so it can be manufactured more efficiently/cheaply. That allows you to reduce cost and/or create an ASIC that would not otherwise be possible - the canonical example is a chip so large that it exceeds the reticle limit, or one whose yields would not be usable. The downside is that moving data between dies costs power (read: a hit to efficiency) and can come with (largely mitigatable) hits to latency.

/rant

The dies are designed for a frequency, and then respins of the lithography help tune to those frequencies. If you think Intel or AMD haven't been paying attention to how nodes behave in their performance, standard or efficiency variants, there is a lot to read. Part of the verification of a new node is architecture design for clients (AMD, Apple, Intel, Nvidia, etc.), and the clock domain is set by finite measures of resistance, capacitance and switching (quantum tunneling). There is a reason designs all "mysteriously" reach essentially the same clock speeds. At operating temps, resistance and capacitance are the biggest hurdles, and the half-size shrinks that allowed progressive jumps are essentially over. Until we print truly 3D chips, 6GHz is our ceiling, and reaching it comes at the cost of latency in hardware. Thus the push for keeping the transistors busier, which is harder to do as dies become more complex and we monkeys can't, as individuals, keep the millions or billions of traces and transistors in our heads; we become more reliant on the rocks to do the thinking about how to make a better, faster rock.
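
A toy distributed-RC (Elmore) estimate shows why wire resistance and capacitance, rather than raw transistor speed, end up pinning the clock ceiling. The per-micron R and C values are illustrative assumptions for a narrow lower-metal wire, not any foundry's real numbers:

```python
# Toy Elmore delay for a distributed RC wire: tau ~= 0.38 * R_total * C_total.
# R and C per micron are illustrative assumptions, not foundry data.
R_PER_UM = 50.0       # ohms per um (narrow wires are very resistive)
C_PER_UM = 0.2e-15    # farads per um

def wire_delay_ps(length_um: float) -> float:
    r_total = R_PER_UM * length_um
    c_total = C_PER_UM * length_um
    return 0.38 * r_total * c_total * 1e12  # seconds -> picoseconds

for length_um in (10, 100, 1000):
    d = wire_delay_ps(length_um)
    print(f"{length_um:5d} um wire: ~{d:7.2f} ps "
          f"(~{1000.0 / d:.2f} GHz if RC alone set the cycle time)")
```

Delay grows with the square of wire length, which is why long paths need repeaters or pipelining and why clocks have stalled even as transistors keep shrinking.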
 
The current quantum computing machines are BS according to the engineering teams that work on them.
Quantum computing has its place, and the machines are real. They're just not very viable for a desktop PC or handheld device.

The biggest issue holding us back is thermal flux: how to get heat out of tiny metallic traces on a silicon substrate. Stacking dies would work a lot better if we could cool both sides, or add a power delivery/thermal transfer layer, which is why I hope backside power delivery with a copper plate turns out to be feasible to help with cooling as well; better yet, print on both sides of the die and cool both.

Imagine a die stack of cache and cores separated by copper just thick enough to transfer the heat out.
Those are all good points. To be fair, I have no idea which direction we need to go, only that we can't keep going in the direction we have been.

We need new substrate materials and circuit designs so that resistance is reduced and heat is no longer produced as a byproduct of the compute process: better semiconductors that conduct better in their conducting state and insulate better in their non-conducting state. We also need to get better at solving electromigration.
 
Really glad to hear this is going well. 2nm might be THE magical go-to node throughout 2028 and I am so here for it.
Still a DDR2/DDR3/DDR4 user and kind of married to some mediocre flash technologies. We need memory products.
A lot of people cannot justify running the kinds of old tech that I do, and I fully empathize with that; they need MORE.
This year I dipped into GDDR6 and scurried back to some GDDR5 product. We might see some new GDDR7 soon™.
While I can still justify old USB flash, SATA SSDs and g3x4 M.2 cards, I'm only now dipping into g4x4 M.2 RAID.
We may very well be staring down a future where a single g5x4 or even g6x4 M.2 runs circles around that. Good.
 