
Intel 18A Process Node Clocks an Abysmal 10% Yield: Report

I remember getting some hate for pointing out that Broadcom's disappointment was likely due to this failure.
 
Are you saying that AMD has never had this issue?
Sorry if it came out weird, but I meant that AMD faced tons of issues on Windows, but not on Linux. Same goes for Intel.
 
If this doesn't put Intel underground, I don't know what will. The whole company is a disaster by now, with no saving grace in sight. CPUs? Too slow, too inefficient. GPUs? Too slow, too late. Foundry? Broken and expensive. They have nothing left. There are scraps of data center stuff and that's it, still selling only because AMD doesn't get enough wafers from TSMC.
 
You are correct. But something doesn't add up. When Intel cancelled 20A, they said they were doing it because 18A was doing so well that 20A wasn't needed anymore.

Come on. Everyone knew that was bullshit. They were also supposed to introduce BSPDN (PowerVia) and GAA (RibbonFET) with Intel 20A on Arrow Lake CPUs but didn't. They used to call Intel 20A an important "bridge" towards 18A.

So, 20A was an important bridge to the 18A process that Gelsinger bet the whole company on (his words) and it was supposed to introduce not one but TWO very important future KEY technologies. Then this process gets cancelled. What does that tell us?

Right.

Intel is super-fucked. Their foundry is producing nothing but garbage, but they cannot sell the foundry because of the CHIPS Act. They are an unattractive acquisition target as long as they cannot spin off the foundry business. It's a vicious circle and the writing is on the wall: "Intel ded" is no longer a joke but a factual statement.
They are a living dead company on life support. They are not too big to fail. When Intel goes tits up, there will be a spinoff of the critical divisions (military etc.) and the rest will be flushed down the shitter.

The only remaining question is: How much longer can they sustain their zombie-like existence? A year, two years, three? Who knows? But, barring a biblical miracle, the outcome is inevitable. Intel is gone.
 
It's not like new nodes start with great yields...
It takes time for them to improve yields. This is true for TSMC/Samsung/everyone else as well.

In a way, I do hope people get their wish of Intel being no more. Less competition is great for everyone!
 
The only remaining question is: How much longer can they sustain their zombie-like existence? A year, two years, three? Who knows? But, barring a biblical miracle, the outcome is inevitable. Intel is gone.
If their next architecture isn't good they're over, but who knows, maybe the US will keep them on life support forever.
It's not like new nodes start with great yields...
No, but TSMC's nodes usually start with way better yields than this, as far as I know. There's been no alarming news about those nodes either, while here we have multiple reports. So it's not the same situation. Samsung also got into the news because of yield issues, only TSMC has no bad news.
 
Samsung also got into the news because of yield issues, only TSMC has no bad news.
Don't you think it's odd where the information is coming from? How would South Korean media know about Intel's foundry yields?
Who is to gain from this type of article? Are there any South Korean Foundries trying to move attention away from their low yields? ;)
Since the flavor of the month is to dump on Intel, no one is going to question the legitimacy of such an article or sources. :rolleyes:

[Image attachment: 1733474146424.png]


Broadcom's switches use enormous dies which are suspected to be running into the reticle limit. They should be at least 600 mm^2. In fact, plugging an 800 mm^2 die with a defect density of 0.4 per square cm into isine's die yield calculator results in a yield of 9%. Pat Gelsinger claimed in September that the defect density for 18A was below 0.4. For context, TSMC's defect density for N10 was also above 0.4 three quarters before mass production; N5 and N7 fared better.

View attachment 374517

Yeah here they mention the defect density being <0.4
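
For anyone who wants to sanity-check that 9% figure, here's a quick back-of-the-envelope sketch. It assumes the calculator uses something like Murphy's classic yield model (that's my assumption, the post doesn't say which model the tool uses); the plain Poisson model is included for comparison.

```python
import math

def murphy_yield(die_area_mm2: float, defect_density_per_cm2: float) -> float:
    """Murphy's yield model: Y = ((1 - e^(-D*A)) / (D*A))^2."""
    da = defect_density_per_cm2 * (die_area_mm2 / 100.0)  # mm^2 -> cm^2
    return ((1 - math.exp(-da)) / da) ** 2

def poisson_yield(die_area_mm2: float, defect_density_per_cm2: float) -> float:
    """Plain Poisson yield model: Y = e^(-D*A); harsher on very large dies."""
    da = defect_density_per_cm2 * (die_area_mm2 / 100.0)
    return math.exp(-da)

# Numbers quoted above: an ~800 mm^2 die at 0.4 defects/cm^2
print(f"Murphy:  {murphy_yield(800, 0.4):.1%}")   # ~9.0%
print(f"Poisson: {poisson_yield(800, 0.4):.1%}")  # ~4.1%
```

With the quoted 800 mm^2 and 0.4 defects/cm^2, Murphy's model lands right around 9%, so the numbers in the post are at least internally consistent.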
 
Don't you think it's odd where the information is coming from? How would South Korean media know about Intel's foundry yields?
No problem, this is all somewhat speculative, but given that Intel has had a lot of issues with nodes in recent years, I find it highly likely that these rumors or reports will turn out to be true. This is also the second piece of bad news about the 1.8nm node alone. And then there's the fact that Intel is skipping nodes, which also makes it likely they simply aren't able to pull it off. They already made this mistake with 10nm (later renamed Intel 7), when they wanted too much density and it simply didn't work out, making the node release super late. This time it's not about ambition towards themselves but ambition towards the market, to be competitive and possibly number one again; a different situation, but again Intel trying too hard. Why would a company that has had a lot of issues with nodes try to skip multiple nodes? Where's Intel 5 and Intel 3nm? It's insane that they're trying to skip multiple nodes and overtake TSMC, so I'm not surprised if it doesn't work out. They'll probably be years late with this, just as with 10nm back then, and TSMC will stay ahead. To me, TSMC is simply the Nvidia of foundries; they're just better at it.
 
But 14A will be nice and shiny, Just Wait©
They'll just borrow the pluses from 14nm again and ride that sucker for a decade now!

That is pretty abysmal and explains a lot. Right now Intel really needs some changes to bring them back into the game. I mean, they are not making money hand over fist like they were 7 years ago, and they really have no excuse. They are nowhere near as bad as AMD was with Bulldozer, but man, it's amazing how many issues Intel currently has.
You say that, but if you look at their enterprise/server chips, I think Epyc has certainly opened up a gap similar to what Bulldozer faced against Sandy Bridge on consumer platforms back then.

It's not like new nodes start with great yields...
It takes time for them to improve yields. This is true for TSMC/Samsung/everyone else as well.

In a way, I do hope people get their wish of Intel being no more. Less competition is great for everyone!
To clarify, I don't want to see Intel go, but they definitely need a good shake-up to get them back on track. Apparently it's a human thing: shit needs to decisively hit the fan for us to improve.

I used to say, despite various failures, Intel has enough talent to turn things around. I'm not so sure anymore. They just seem increasingly overconfident. Or clueless. Or both.
Well, the gap's just been widening, and they've never had the guts to just stop the train from rolling and truly get back to the drawing board. They keep releasing upgrades in tiny iterations, they have numerous development tracks, and it's all one big entangled mess. It's like they're Agile-developing Core and that shit just ain't working. You can also see that they've tried to diversify in recent years by buying up several companies, and all of those attempts failed. They're doing far too much, while their core business (dat pun) deserved full attention.
 
To clarify, I don't want to see Intel go, but they definitely need a good shake-up to get them back on track. Apparently it's a human thing: shit needs to decisively hit the fan for us to improve.

In the jungle, it's like this: the law of natural selection. The weak will go anyway. Especially the weak that was previously a criminal. What goes around comes around.
 
Ah, okay - I didn't spot that it was for monster dies, I assumed it was for Intel CPUs.
Another unknown variable is how many defects per chip Broadcom was willing to tolerate. A CPU monolithic chip or chiplet or a GPU chip with a defect or two can often make it into a lesser grade product. NAND chips have many defects each, the yield would be close to 0% if you only counted defect-free ones. But Broadcom may have developed a many-port fast switch or something, with some CPU cores and cache, some specific functional blocks and lots of complex I/O units, and they want all of that to be 100% operative. They could have designed a chip with defects in mind but Pat promised them great yields, so they didn't. That's a speculation for sure but quite plausible.
 
Another unknown variable is how many defects per chip Broadcom was willing to tolerate. A CPU monolithic chip or chiplet or a GPU chip with a defect or two can often make it into a lesser grade product. NAND chips have many defects each, the yield would be close to 0% if you only counted defect-free ones. But Broadcom may have developed a many-port fast switch or something, with some CPU cores and cache, some specific functional blocks and lots of complex I/O units, and they want all of that to be 100% operative. They could have designed a chip with defects in mind but Pat promised them great yields, so they didn't. That's a speculation for sure but quite plausible.
Definitely, Broadcom's switches would be far more IO rich than your typical CPU or GPU.
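
To put a rough number on the defect-tolerance point above: a minimal sketch, again assuming defects land on a die following a simple Poisson distribution (real processes cluster defects, so treat this as illustrative only).

```python
import math

def yield_with_tolerance(die_area_mm2: float, defect_density_per_cm2: float,
                         max_defects: int) -> float:
    """Fraction of dies with at most `max_defects` defects, assuming defects
    follow a simple Poisson distribution with lambda = D * A."""
    lam = defect_density_per_cm2 * (die_area_mm2 / 100.0)  # mm^2 -> cm^2
    return sum(math.exp(-lam) * lam ** k / math.factorial(k)
               for k in range(max_defects + 1))

# Same hypothetical ~800 mm^2 die at 0.4 defects/cm^2
for tolerated in range(4):
    print(f"<= {tolerated} defects: {yield_with_tolerance(800, 0.4, tolerated):.1%}")
# Roughly 4%, 17%, 38%, 60% - being able to eat one or two repairable
# defects per die changes the picture dramatically, which is the point above.
```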
 
I kinda wonder if this is one of the reasons they dropped SMT. One more layer of complexity. There's nothing like opening Task Manager on my i7 and seeing 2 physical cores at 100% while the other 10 "cores" are twiddling their thumbs as system responsiveness falls off.
I think that too. A P-core with HT behaves like two types of core. P without HT is high performance. P with HT is 2x medium performance, with thread priority being unpredictable and outside the control of the OS. E-cores would be a third type of core in the same CPU here.

Funnily enough, those things seem to just werk on Linux (not that surprising given how big.LITTLE has been a thing in phones for ages).
The Thread Director barely brings any benefit, and even AMD hasn't had the same issues on Linux that they had on Windows with core parking and whatnot.
Can the Thread Director be disabled, so we could measure its effectiveness with everything else being equal?
Sure, a CPU company should work with the vendor of the OS their CPUs will mostly be used with in the consumer space to make things work fine, but given how both companies have had their fair share of issues, I really wonder how bad Windows' scheduler is.
Yes. I never accepted the argument that Apple controls the entire HW-SW stack, so they can have everything integrated and optimised for smoothness and best performance and low power and better use of memory and et cetera ... while Intel + AMD + Microsoft + Linux community can't possibly do that. Turns out, Intel can't even properly collaborate with MS. I'm wondering if they even have a joint group of twenty engineers and twenty testers or something similar.
Now imagine Meteor Lake, which had E-cores in a low power island, E-cores in the main compute tile, P-cores, and its SMT threads. 4 levels of different logical cores, fun :laugh:
At least the island has a specific purpose, which is listening to the user 24/7. OS services and apps shouldn't run there.

They'll just borrow the pluses from 14nm again and ride that sucker for a decade now!
Let's wait and see what new strategy and PR style Pat++ will bring.

No, but TSMC's nodes usually start with way better yields than this, as far as I know. There's been no alarming news about those nodes either, while here we have multiple reports. So it's not the same situation. Samsung also got into the news because of yield issues, only TSMC has no bad news.
TSMC's N3 launch wasn't smooth. Not even close. N3B kept Apple happy for a while - because they have no choice but to be happy - then it died.
 
Well, the gap's just been widening, and they've never had the guts to just stop the train from rolling and truly get back to the drawing board. They keep releasing upgrades in tiny iterations, they have numerous development tracks, and it's all one big entangled mess. It's like they're Agile-developing Core and that shit just ain't working. You can also see that they've tried to diversify in recent years by buying up several companies, and all of those attempts failed. They're doing far too much, while their core business (dat pun) deserved full attention.
The way I understood it, they had separate teams working on the next node and the one after that. Presumably, if one team faltered, the other would learn from those mistakes and be able to do something about it in time. Maybe not keeping the schedule untouched, but, you know, in the same ballpark. I don't know what became of all that.
 
In the jungle, it's like this: the law of natural selection. The weak will go anyway. Especially the weak that was previously a criminal. What goes around comes around.
Natural selection with a twist: the US prevents it by lending them endless amounts of money, the "too big to fail" paradox.
TSMC's N3 launch wasn't smooth. Not even close.
Never heard anything bad about it.
 
Seems there are a lot of people without a clue in this thread, which is completely to be expected given the quality of news articles involving Intel over the last year or so. A lot of stuff seems to have been seeded by upset investors looking for it to be sold for parts. Fixing Intel was always going to take a long time, but if there's one thing Wall Street hates, it's spending a lot of money without short-term returns. That seems to be the actual breaking point here, and given who's on the board, that should be unsurprising (for anyone who doesn't know, here's some insight: https://www.fabricatedknowledge.com/p/the-death-of-intel-when-boards-fail). Intel is sadly likely doomed now and won't exist as it has.

As for this article specifically: yield rates depend on die size and design, so some outlet in Korea making these claims doesn't actually hold any weight. We'll all find out next year when PTL/CWF are supposed to launch and what nodes they're using. Barring anything official coming out before then, it'll be hard to say with any certainty where things are.
I dunno, I can actually see them selling or spinning off their foundries.
None of Intel's DUV nodes are viable for third-party usage, and they won't have one that is until the UMC partnership bears fruit, which is supposed to be in 2027. This makes selling/spinning off a losing proposition, and you only have to look at GloFo to see how that plays out.
Intel 7 is a straight up name change of 10 nm
No, it was a node refinement, so think of it more along the lines of TSMC N6 and N4 being refinements of N7 and N5 respectively.
Intel 4 only used for a few laptop CPUs
A limited number of SKUs perhaps, but that was still tens of millions of CPU tiles. This node was always going to be a one-and-done node due to its limited PDK.
Intel 3 only used for a few data center CPUs
All Xeon 6 CPUs are on Intel 3, and this should be a long-term node, but it also uses the same equipment as Intel 4, which has undoubtedly hampered capacity until Intel could stop making MTL cores. This is the idiotic decision not to buy EUV machines having far-reaching consequences beyond the 10nm failure.
Did I miss anything or get something wrong because the above means that nothing really came after 10 nm from Intel?
You missed a lot, but the GPU/CPU TSMC use is spot on. I'd expect GPUs to remain on TSMC for the time being, but with the board causing the messes they have this might not change at all.
I guess they didn't learn much from 10nm... They are trying to get PowerVia done, which was supposed to come with 20A. With that being canned, there still isn't an implementation on an otherwise mature node.

Having a shrink and PowerVia on a single process may be too much to swallow at once.
BSPDN was developed on a custom Intel 4 process, so if it didn't pan out it wouldn't impact 20A/18A development, as those were implementing GAAFET. It should have no bearing on the progress of 18A, as they could have just dropped it if they couldn't get it working.
I don't know much about Intel's planned nodes, but they're already behind schedule, and failing on yields - 18A should have been out in the second half of this year.
No, 18A was never a 2024 node; even the branding of "5N4Y" says that: Gelsinger wasn't hired until 2021, so it was always a 2025 node. Intel 4 had a pretty big delay, and Intel 3 took as long after Intel 4 as it was originally supposed to, but the Intel 4 delay factors in here. In theory, if Intel wasn't lying about the 20A/18A situation, 20A would have been mostly on time and 18A will be, but we won't know any of this until next year, or unless Intel states otherwise on the record.
 
No, it was a node refinement, so think of it more along the lines of TSMC N6 and N4 being refinements of N7 and N5 respectively.
No, Intel 10nm was later renamed to "Intel 7" for marketing reasons, mainly to avoid "appearing too far behind TSMC", among other things. This has nothing to do with any improvements they made to the Intel 10nm node.
A lot of stuff seems to have been seeded by upset investors looking for it to be sold for parts. Fixing Intel was always going to take a long time, but if there's one thing Wall Street hates, it's spending a lot of money without short-term returns. That seems to be the actual breaking point here, and given who's on the board, that should be unsurprising (for anyone who doesn't know, here's some insight: https://www.fabricatedknowledge.com/p/the-death-of-intel-when-boards-fail). Intel is sadly likely doomed now and won't exist as it has.
I read through the link up to the paywall. I wasn't convinced by the sacking of Pat in general; now I'm even less convinced. At least the incompetence of Intel over the last several years is now explained. Doubtful another CEO can do better than Pat did.
 
No, Intel 10nm was later renamed to "Intel 7" for marketing reasons, mainly to avoid "appearing too far behind TSMC", among other things. This has nothing to do with any improvements they made to the Intel 10nm node.
This is correct about why the rename happened, but incorrect regarding node advancement: TGL was on 10SF and then ADL was on 10ESF, and it's this latter node that was renamed to Intel 7.
I read through the link up to the paywall. I wasn't convinced by the sacking of Pat in general; now I'm even less convinced. At least the incompetence of Intel over the last several years is now explained. Doubtful another CEO can do better than Pat did.
That's precisely how I feel, and why I don't believe Intel will exist at the scale they do today much longer.
 
No, 18A was never a 2024 node; even the branding of "5N4Y" says that: Gelsinger wasn't hired until 2021, so it was always a 2025 node. Intel 4 had a pretty big delay, and Intel 3 took as long after Intel 4 as it was originally supposed to, but the Intel 4 delay factors in here. In theory, if Intel wasn't lying about the 20A/18A situation, 20A would have been mostly on time and 18A will be, but we won't know any of this until next year, or unless Intel states otherwise on the record.
That image I posted showing 18A as a 2024 node is from Intel's own presentation. Sure, delays happen, but don't say it was "never a 2024 node" when Intel announced it would be done in 2024.
 
That image I posted showing 18A as a 2024 node is from Intel's own presentation. Sure, delays happen, but don't say it was "never a 2024 node" when Intel announced it would be done in 2024.
You picked a bad slide to suit the nonsense you're saying. Pat Gelsinger was hired in 2021, and that's when "5 nodes in 4 years" was coined. 2021 + 4 years = 2025; this isn't rocket science.

Here's the first time it all came up, and as you can see it was not a 2024 node (this very much predates the slide you picked): https://www.anandtech.com/show/1682...nm-3nm-20a-18a-packaging-foundry-emib-foveros
 
Can the Thread Director be disabled, so we could measure its effectiveness with everything else being equal?
Yup, just blacklist the intel_hfi module.
Yes. I never accepted the argument that Apple controls the entire HW-SW stack, so they can have everything integrated and optimised for smoothness and best performance and low power and better use of memory and et cetera ... while Intel + AMD + Microsoft + Linux community can't possibly do that. Turns out, Intel can't even properly collaborate with MS. I'm wondering if they even have a joint group of twenty engineers and twenty testers or something similar.
To be honest, the Apple argument is a valid one, since they can easily impose the use of frameworks like CoreML (to make use of ML accelerators), their specific encoders/decoders and whatnot, whereas you barely see programs in Windows land that support the NPUs out there. Same goes for those scheduling issues.
On the other hand, as I said before, Linux doesn't present such issues; it seems to be a problem on the Windows side, and I bet there's some bureaucracy that makes it really hard for engineers from either AMD or Intel to collaborate.
At least the island has a specific purpose, which is listening to the user 24/7. OS services and apps shouldn't run there.
"Shouldn't" doesn't mean that's what actually happens, given that you're at the mercy of the scheduler.
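
For what it's worth on the intel_hfi suggestion earlier: whether HFI support is even built as a loadable module depends on the kernel config, so here's a minimal sketch (assuming a Linux system with /proc/modules) to check before trying the blacklist route.

```python
# Minimal check (Linux only): is intel_hfi currently loaded as a module?
# If it isn't listed, it may be built into the kernel (or absent entirely),
# in which case blacklisting it via /etc/modprobe.d/ won't change anything.
from pathlib import Path

loaded = {line.split()[0] for line in Path("/proc/modules").read_text().splitlines()
          if line.strip()}
print("intel_hfi is loaded as a module" if "intel_hfi" in loaded
      else "intel_hfi not listed (built-in or not present)")
```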
 