
Samsung Reportedly Progressing Well with 2 nm GAA Yields, Late 2025 Mass Production Phase Looms

T0@st

News Editor
Samsung's foundry operation has experienced many setbacks over the past six months, according to a steady feed of insider reports. Last November, industry moles leaked details of an apparent abandonment of the company's 3 nm Gate-All-Around (GAA) process. Significant yield problems prompted an alleged shift into 2 nm territories, with a next-gen flagship Exynos mobile processor linked to this cutting-edge node. According to a mid-week Chosun Daily article, Samsung and its main rival—TSMC—are in a race to establish decent yields of 2 nm wafers, ahead of predicted "late 2025" mass production kick-offs. The publication's inside track points to the Taiwanese foundry making the most progress (with an estimated 60%), but watchdogs warn that it is too early to bet against the South Korean competitor.

Despite murmurs of current 20-30% yields, Samsung's Hwaseong facility is touted to make "smooth" progress over the coming months. Chosun's sources believe that Samsung engineers struggled to get 3 nm GAA "up to snuff," spending around three years on development endeavors (in vain). In comparison, the making of 2 nm GAA is reported to be less bumpy. A fully upgraded "S3" foundry line is expected to come online by the fourth quarter of this year. An unnamed insider commented on rumors of better-than-anticipated forward motion chez Samsung Electronics: "there are positive aspects to this as it has shown technological improvements, such as the recent increase in the yield of its 4 nm process by up to 80%." Recent-ish reports suggest that foundry teams have dealt with budget cuts, as well as mounting pressure from company leadership to hit deadlines.



View at TechPowerUp Main Site | Source
 
A successful Samsung foundry means customers not eating into TSMC capacity which in the short term means more chips which means available graphics cards.
 
Just remember, early Samsung 2 nm is more like N3P. So is Intel's 18A, kinda-sorta. And Rapidus.

None of them are *exactly* the same, but you get my point. They are all trying to get that approximate level of p/p/a (different ratios) to a practical yield... don't put a ton of stock in the node name.

Beyond that, it's pretty expensive to develop (with little advancement), so that's where I think a lot of people are going to see returns diminish to the point they don't upgrade most of their stuff (for a long time).

In some ways that sucks for us (as nerds that love watching this stuff advance), but in some ways I'm ready to invest in something that, if it's going to be expensive, at least will last for a good, long while.
 
So did I understand this correctly?
Once we've completely mastered the art of the Gate-All-Around transistor, we're ready for the B2B Tunnel FET, or am I still missing something?
 
Just remember, early Samsung 2 nm is more like N3P. So is Intel's 18A, kinda-sorta. And Rapidus.
We can expect significant gains from accelerators for specific tasks; compute-in-memory will also make things quite a bit faster, and stacking logic (not just memory) will reduce distances between components and increase speed. Stuff like Intel's PowerVia will allow for higher clocks.
So I kinda think there is still plenty of headroom even without easy node shrinks.

There is the problem for the industry that a big chunk of users are perfectly fine doing their thing on 15-year-old hardware, as they just don't need anything more.
 
A successful Samsung foundry means customers not eating into TSMC capacity which in the short term means more chips which means available graphics cards.
Much more important is that it would mean TSMC is not a monopolist in advanced fab tech.

I will keep my fingers crossed.
 
Much more important is that it would mean TSMC is not a monopolist in advanced fab tech.
Somehow it seems like exactly 0 foundry customers seem to be concerned about a TSMC monopoly. Hopefully my perception is wrong but I don't know of any major processor from the last year built primarily outside of TSMC other than Intel server chips and one Samsung watch chip. So this news is a welcome change.
 
Somehow it seems like exactly 0 foundry customers seem to be concerned about a TSMC monopoly. Hopefully my perception is wrong but I don't know of any major processor from the last year built primarily outside of TSMC other than Intel server chips and one Samsung watch chip. So this news is a welcome change.
Those interested in the latest and greatest nodes are still on the AI high; I doubt those in other sectors are so happy with the high prices in a global economy that has been practically dead since 2022.
 
I wish for the Samsung team to get all the required support to mount a credible challenge to TSMC - it will be much better for our wallets if serious competition is reestablished.
 
We can expect significant gains from accelerators for specific tasks; compute-in-memory will also make things quite a bit faster, and stacking logic (not just memory) will reduce distances between components and increase speed. Stuff like Intel's PowerVia will allow for higher clocks.
So I kinda think there is still plenty of headroom even without easy node shrinks.

There is the problem for the industry that a big chunk of users are perfectly fine doing their thing on 15-year-old hardware, as they just don't need anything more.

Sure, that's fair enough. Stacking, chiplets/interposers, etc. do allow for a different kind of advancement, and BSPD is a small bump to perf, making things that perhaps were not practical before more so; I agree.
But those things add a lot of extra cost, and truly, IMO, they just shift advancement from impossible to impractical. R&D is expensive, and the market is smaller for the cost of those advancements.

I just wonder how many consumers need a (just an example of something that could exist) 12-chiplet/48GB Radeon that uses 675W and costs a ton, even if they can build it and wouldn't have before.

I personally think the second coming of Conroe/Merom is happening, but for personal enthusiasts. I truly think any further enhancements will be cost-prohibitive, with consumers instead transferring to the cloud.
Again, maybe not tomorrow, as the worldwide infrastructure isn't there, but in the next decade or so.

We live in a world where (last I checked) the average PC core usage in general tasks was just over 6 (something like 6.18x?) and even wrt GPUs most are just looking to scale the next consoles to 1440p/HFR.

It used to be there were practical hurdles towards parity with certain ecosystems like that; either core counts, RAM, segment tiers, or resolution-chasing that were simply not practical for most consumers.

Now, with AMD moving to 12-core chiplets, and the real practical reality that there might be two whole GPU tiers above the next console at launch, if not three soon after, with platform costs like those rising...
...I just don't see people truly upgrading once they make an investment. There is no new 4K; there is no need for higher core counts. There is RT (etc.), but even that feels like a push to keep the industry afloat.

Barring some strange instances of greater than 32 GB of VRAM/RAM being used, which itself somewhat feels like an artificial limitation on consumer products, I think many will figure out what it takes to scale the next consoles and then sit.

Potentially for the better part of a decade, if not more. While it is true things are good enough for the general population in many sectors already, I think it's a relatively new notion to our hobby.
At least this quickly, and great advancements are likely not going to come to new tiers over time at the same rate as they did before. Smaller improvements; perhaps mostly gimmicks.
Things that perhaps allow those tier systems to continue to exist, but without any radical advancement or sincere trickle-down, as we've already seen happen to some extent.

JMO.

If there is any saving grace, it will be there are and/or will be many companies (Samsung/Rapidus/IFG/?), not just TSMC, with similar-performing nodes available to customers; perhaps driving costs down.
 
Somehow it seems like exactly 0 foundry customers seem to be concerned about a TSMC monopoly
You mean companies don't post "WTF TSMC BBQ" on Twitter?

Yeah, I also see that as proof of "no problem".

Record TSMC quarter after record quarter, combined with price hikes on electronics, is just a coincidence.
 