Wednesday, June 4th 2025

TSMC Reportedly Surpasses 90% Production Yield Rate with 2 nm Process
At the tail end of Q1 2025, industry whispers suggested that TSMC's premier facilities had completed leading-edge 2 nm (N2) trial production runs. By early April, company insiders alluded to a confident push into preparations for a futuristic 1.4 nm node at the "P2" Baoshan plant; that remains a distant prospect, with watchdogs envisioning a 2028 release window. According to expert predictions, cross-facility 2 nm wafer mass production is expected to start by the end of this year. Foundry staff appear to be actively pursuing yield improvements; earlier estimates indicated the crossing of a 70% milestone, good enough for full-blown manufacturing runs.
Fresher musings point to staffers and advanced equipment stepping just beyond an impressive 90% mark, albeit with silicon intended for "memory products." As of mid-May, Samsung's competing "SF2" process allegedly remains in testing phases. South Korean insider reports posit 2 nm GAA trial yields passing 40%, a significant development for the megacorp's foundry business. Roughly a month ago, TSMC leadership spoke publicly about unprecedented demand for 2 nm wafers. Due to rumors of greater-than-anticipated charges for important TSMC clients, Samsung Semi's top brass is supposedly trying to woo the likes of NVIDIA and Qualcomm.
Sources:
Economic Daily TW, TSMC
21 Comments on TSMC Reportedly Surpasses 90% Production Yield Rate with 2 nm Process
www.anandtech.com/show/21370/tsmc-2nm-update-n2-in-2025-n2p-loses-bspdn-nanoflex-optimizations
Especially since this seems to be directly from TSMC we should be careful with how we interpret that.
www.tomshardware.com/tech-industry/semiconductors/tsmc-could-charge-up-to-usd45-000-for-1-6nm-wafers-rumors-allege-a-50-percent-increase-in-pricing-over-prior-gen-wafers
There is a reason I didn't upgrade to the 9800X3D, impressive a chip as it is (still tempted, but the 5900X is sufficient for now). The 10800X3D, however, will no doubt be on 2 nm, with a single 12-core CCD, a 6 GHz monster at, what, 50 W? Not to mention, I am not too sure if they will keep using the same socket. Probable, but I'm not risking it; motherboards have become stupidly expensive for no reason, and RAM, mmmm, I want to see what the new chip can take for a 1:1 ratio between memory and CPU clocks.
Not enthusiastic on the GPU side, as nGreedia will just make an RTX 6090 for $10,000 and call it a day, something with, what, a 10% performance increase, and tell you it's the best, and money will just be thrown at it. As for AMD, they will look at what big daddy nGreedia does price/performance-wise and follow suit, instead of just knocking them on their asses.
- Performance/power improvements have been slowing down, and there are a number of technical reasons why. 4nm is a variant of 5nm, and its gains over 7/6nm are good but not on the scale of node shrinks of old. The same seems to apply to 3nm: there are solid improvements, but they have slowed down a lot. 2nm is the next generation after that, with no signs of this getting better.
- The 90% yield rate claim, according to some other news coverage, is from SRAM. This is a very regular set of transistors, relatively dense but also easy to manufacture. I have also not seen anything about die sizes, and without that the 90% figure is meaningless. And off the charts? For reasonably sized dies - think a mobile SoC, an AMD CCD, or an Intel tile - on a mass-production-ready node this is more of a prerequisite than "off the charts".
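The die-size caveat above can be sketched with the simple Poisson yield model, Y = exp(-D0 * A). The defect density used here is an assumed, illustrative number, not anything TSMC has disclosed:

```python
import math

def poisson_yield(defect_density_per_cm2: float, die_area_mm2: float) -> float:
    """Poisson yield model: Y = exp(-D0 * A), with A converted from mm^2 to cm^2."""
    area_cm2 = die_area_mm2 / 100.0
    return math.exp(-defect_density_per_cm2 * area_cm2)

# Assumed defect density of 0.1 defects/cm^2 (illustrative only).
d0 = 0.1
for area in (20, 70, 600):  # tiny SRAM test chip, mobile-SoC-class die, near-reticle GPU
    print(f"{area:4d} mm^2 -> {poisson_yield(d0, area):.1%}")
```

The same defect density that yields 90%+ on a small SRAM test chip collapses to roughly half on a near-reticle die, which is exactly why a yield number without a die size says little.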
- TSMC has stated that 2nm will be more expensive, compared to 3nm, which is already more expensive than 5/4nm and more expensive than processes this mature have been historically. No, there will not be lower costs. Maybe later, when 2nm becomes mainstream, but that is years away.
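To connect pricing and yield, here is a rough cost-per-good-die sketch using the classic dies-per-wafer approximation. The wafer price, die size, and yield below are purely illustrative assumptions, not TSMC figures:

```python
import math

def dies_per_wafer(die_area_mm2: float, wafer_diameter_mm: float = 300.0) -> int:
    """Classic approximation: gross dies minus the partial dies lost at the wafer edge."""
    radius = wafer_diameter_mm / 2.0
    gross = math.pi * radius ** 2 / die_area_mm2
    edge_loss = math.pi * wafer_diameter_mm / math.sqrt(2.0 * die_area_mm2)
    return int(gross - edge_loss)

# Purely illustrative numbers: $30,000 per wafer, 300 mm^2 die, 75% yield.
wafer_cost, die_area_mm2, yield_rate = 30_000, 300, 0.75
good_dies = dies_per_wafer(die_area_mm2) * yield_rate
print(f"~{good_dies:.0f} good dies -> ${wafer_cost / good_dies:,.0f} per good die")
```

Note how the wafer price divides over fewer good dies as either the die grows or the yield drops, so a pricier node hits big dies twice.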
- The 9800X3D is on 4nm. AMD has not even used 3nm for mainstream products so far, and 3nm has been in mass production since late 2022. 2nm is a generation newer than 3nm; mass production is still a while away, and the delay for something like CPUs or GPUs likely comes on top of that. The 10800X3D is said to be on 3nm, but afaik AMD has not officially confirmed that yet. It is rumored to be shrunk to 2nm later, but see above - it'll be quite a while if it happens at all.
- 6GHz seems to be moving further away rather than coming closer. It's about power. Intel got burnt, and AMD is keeping official boost clocks in the latter half of 5GHz for a reason. And remember that Intel did 6GHz on 14nm; there have been 10nm, 7nm, 5nm and 3nm processes since, and frequencies have not increased. Even getting to 5+GHz needs the high-performance variant of a manufacturing process (which is not power efficient at all at the top end) and specific tweaking of the (micro)architecture.
- The point about 50W might be a reasonable one, in the sense that there will be efficiency improvements, and 50W is in the range where those should apply quite nicely for a CPU with a desktop amount of cores.
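The frequency/power point in the two bullets above follows from the standard dynamic-power relation P ≈ C·V²·f. The capacitance and voltage figures below are illustrative assumptions, chosen only to show the shape of the curve:

```python
def dynamic_power(c_eff_nf: float, volts: float, freq_ghz: float) -> float:
    """Switching power P = C * V^2 * f, in watts (C in nF, f in GHz)."""
    return c_eff_nf * 1e-9 * volts ** 2 * freq_ghz * 1e9

# Illustrative: pushing from 5.2 GHz to 6.0 GHz usually also demands more voltage.
base = dynamic_power(10, 1.20, 5.2)
fast = dynamic_power(10, 1.35, 6.0)
print(f"{fast / base:.2f}x the switching power for ~15% more frequency")
```

Because voltage enters squared and higher clocks need higher voltage, a ~15% frequency bump can cost well over 40% more switching power, which is the wall both vendors keep hitting.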
- More expensive motherboards aren't exactly for no reason. There is PCI Express 5.0, there is DDR5, and both are faster and need better, more reliable signaling across the motherboard. This in turn makes the board more complex and more expensive in various ways. CPUs have also become power hogs. Intel was like that for a while, but AMD followed suit with their own 220+W CPUs. Any motherboard you build for a socket should be able to run any CPU that uses that socket, so essentially a cheap A620 mobo needs to run a 220W 9950X properly or the manufacturer gets ridiculed online. A better, bigger VRM means more cost. Manufacturers did abuse the new tech as an excuse for price hikes, but at the end of the day it is clear that all this raised the baseline price of a motherboard significantly.
- GPUs are not exempt from the same problems - only much worse due to larger sizes. In the last few generations AMD did not even field a response to Nvidia's flagship, competing instead from one step lower. For example, the 5090 stands alone; the 9070 (AMD's biggest chip) competes against the 5080 (Nvidia's second biggest). Same story with the 4090, 7900 XTX and 4080. Yes, Nvidia's flagships are expensive, but they are also huge and pushing limits - the reticle size, for one. Power consumption is another, and this ties into the problem I mentioned before with performance/power improvements having slowed down.
- The impression that AMD could just sail in and give us consumers GPUs that are awesome and cheap is utopian. It has no touchpoint with reality; AMD has been struggling with the exact same problems.
- Chiplets for GPUs are probably the future, but the problem is that nobody has figured out how to use them efficiently for consumer GPUs yet. AMD's RDNA3, with its separated memory controllers and attached cache, was the best attempt, and it unfortunately showcased the expected efficiency and slight performance hits. Chiplets are not a better solution than monolithic and never have been. Chiplets are good for one thing: splitting up the die so it can be manufactured more efficiently/cheaply. That reduces cost and/or makes possible an ASIC that could not be built otherwise - the canonical example is a chip so large that it exceeds the reticle limit, or whose yields would not be usable. The downside: moving data between dies costs power (read: a hit to efficiency) and may come with (largely mitigable) hits to latency.
/rant
Nvidia better be doubling down on the MFG 8X to transform 15 FPS into 120 FPS as soon as possible
progress is slowing to a halt, and it will slow more abruptly very soon as the reticle size is halved to 429 mm² on High-NA.
So going beyond "2nm" is no issue yet. There is a long way to go before picometer or atomic scales. Don't let the marketing fool you.
That being said, not everyone has the same sensitivity to input latency or to noticing frame irregularities. A new can of worms has been opened; just look at the audiophile side of things. What shall we call this? :P
Oh yeah, and lol, good luck cramming 8x additional frames into an 8GB VRAM buffer. :roll:
Imagine a die stack of cache and cores separated by just thick enough copper to transfer out the heat. /rant
The dies are designed for a frequency, and then respins of the lithography help tune them toward those frequencies. If you think Intel or AMD haven't been paying attention to how nodes behave in their performance, normal or efficiency flavors, there is a lot to read. Part of verifying a new node is architecture design for clients (AMD, Apple, Intel, Nvidia etc...), and the clock domain is set by finite measures of resistance, capacitance, and switching (quantum tunneling). There is a reason designs all "mysteriously" reach essentially the same clock speeds. At operating temps, resistance and capacitance are the biggest hurdles, and the half-size shrinks that allowed progressive jumps are essentially over. Until we print truly 3D chips, 6GHz is our ceiling, and reaching it comes at the cost of latency in hardware. Thus the push for keeping the transistors busier, which is harder to do as dies become more complex and we monkeys can't, as individuals, keep the millions or billions of traces and transistors in our heads, so we become more reliant on the rocks to do the thinking about how to make a better, faster rock happen.
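A back-of-the-envelope look at the resistance/capacitance point above: the RC time constant of a wire eats into the clock period. The resistance and capacitance values here are illustrative assumptions, not data from any specific node:

```python
def rc_delay_ps(resistance_ohm: float, capacitance_farad: float) -> float:
    """RC time constant tau = R * C, returned in picoseconds."""
    return resistance_ohm * capacitance_farad * 1e12

# Illustrative long on-chip wire: 1 kOhm of thin-metal resistance driving 100 fF of load.
tau_ps = rc_delay_ps(1_000, 100e-15)
cycle_ps = 1e12 / 6e9  # one clock period at 6 GHz, about 167 ps
print(f"tau = {tau_ps:.0f} ps vs {cycle_ps:.0f} ps per cycle at 6 GHz")
```

With a single wire's RC soaking up more than half the cycle, pushing clocks further means shorter wires and more pipeline stages, which is the latency cost in hardware mentioned above.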
We need new substrate materials and circuit designs so that resistance is reduced and heat is not produced as a byproduct of the compute process: better semiconductors that conduct better in a conducting state and insulate better in a non-conducting state. We also need to better solve the problem of electromigration.
Still a DDR2/DDR3/DDR4 user and kind of married to some mediocre flash technologies. We need memory products.
A lot of people cannot justify running the kinds of old tech that I do and I fully empathize with that, they need MORE.
This year I dipped into GDDR6 and scurried back to some GDDR5 product. We might see some new GDDR7 soon™.
While I can still justify old USB flash, SATA SSDs and g3x4 M.2 cards, I'm just finally dipping into g4x4 M.2 RAID.
We may very well be staring down a future where single g5x4 or even g6x4 M.2 runs circles around that. Good.