
Intel Considers Abandoning 18A Node for 14A Chipmaking Process

I could write a long diatribe about how Intel just needs to shut down, but I'll let this graphic explain how this is all playing out. I hope some of you older peeps get it.

View attachment 406239
Intel is a real blockhead.
I want competition in the x86 market, but Intel did this to themselves.
In April, 18A was supposed to be Intel's holy grail, outperforming the competition's 2 nm-class nodes. What changed so drastically in the last 3 months? I guess the shareholders are being lied to.


My standing opinion is that Intel should spin off its fabs.
The 20A node process was also supposed to save them, but was cancelled before it could be used for Arrow Lake.
IMO, Intel should've sold their fabs after being stuck on 14nm for so long.
 
On the positive side, I hope they succeed. More choice in competitive manufacturing processes is always welcome.
This I agree with. While we have plenty of competition in the CPU space, we have very little competition in the fabrication space. This is why I and others hope that Intel fabs will get spun off and run by a more competent entity or persons. Even if the fabs were just used for the Intel 3 process and higher that would be something.
 
...I was just getting excited the other day about what a great deal the 265K is. What node was that on?
 
AI analysis of Intel nodes.

BEGIN
    // Step 1: Announce new node with fanfare
    DECLARE newNode
    SET newNode = GenerateNodeName() // e.g., "Intel 7", "Intel 4", "Intel 3"
    PROMOTE newNode WITH {
        ClaimIndustryLeadingPerformance()
        PromiseBreakthroughEfficiency()
        AnnounceAmbitiousRoadmap()
        HighlightTechnologicalAdvancements()
    }

    // Step 2: Develop and release limited products
    DECLARE productsReleased = []
    FOR i = 1 TO smallBatchSize // e.g., 1-3 products
        product = DevelopProduct(newNode)
        IF product.MeetsExpectations() THEN
            ADD product TO productsReleased
        ELSE
            DELAY product
            INCREMENT engineeringChallenges
        END IF
    END FOR

    // Step 3: Evaluate strategic value
    strategicValue = AssessNodePerformance(newNode)
    IF strategicValue < expectedThreshold THEN
        DECLARE statement = "newNode is not the strategic opportunity we sought"
        COMMUNICATE statement TO stakeholders
    END IF

    // Step 4: Shift focus to next node
    DECLARE nextNode = GenerateNodeName() // e.g., "Intel 20A", "Intel 18A"
    REDIRECT resources TO nextNode
    UPDATE roadmap TO PRIORITIZE nextNode
    REPEAT ALGORITHM WITH nextNode
END
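For fun, the satirical loop above can be sketched as a runnable toy script. Everything here (the node list, the `success_rate`, the "strategic value" threshold) is made up purely for the joke and has nothing to do with real Intel data or roadmaps:

```python
import random

# Toy simulation of the satirical "node cycle" pseudocode above.
# All names, probabilities, and thresholds are invented for illustration.

NODE_NAMES = ["Intel 7", "Intel 4", "Intel 3", "Intel 20A", "Intel 18A", "Intel 14A"]

def develop_products(node, batch_size=3, success_rate=0.5):
    """Step 2: return (products that met expectations, number of delayed products)."""
    released = []
    delays = 0
    for i in range(1, batch_size + 1):
        product = f"{node} product {i}"
        if random.random() < success_rate:
            released.append(product)
        else:
            delays += 1  # INCREMENT engineeringChallenges
    return released, delays

def node_cycle():
    """Steps 1-4: announce, release a small batch, evaluate, pivot to the next node."""
    for node, next_node in zip(NODE_NAMES, NODE_NAMES[1:]):
        print(f"Announcing {node}: industry-leading performance promised!")
        released, delays = develop_products(node)
        strategic_value = len(released) - delays
        if strategic_value < 1:  # expectedThreshold
            print(f"  '{node} is not the strategic opportunity we sought'")
        print(f"  Redirecting resources to {next_node}...")

node_cycle()
```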
 
This is different from the strategy of the former CEO, Pat Gelsinger.
Is it? He partially abandoned 4 and 3 and completely abandoned 20A. Now 18A is going to be partially abandoned.
 
Serious question: when was the last time we heard good news about Intel's nodes?
 
It feels like I'm watching HGTV. The new head honcho walks into the house, says, "I don't like this, this, and this; let's change everything," spends nearly the home's entire original value on renovations, substituting steel for the wood because the changes jeopardized the original structure, only for it to end up as some tasteless "open-concept modern home" wannabe that absolutely no one else would want to live in.

At a time like this, I would imagine everyone is clasping their hands together, praying that Intel will keep the promises it made before the throne was handed over. Pessimistically speaking, this is the first in a series of financially devastating disappointments.
 
22FF was a good node, not many problems. 14 nm was a good node too, despite a six-month delay; after that, crickets.
I disagree. Ivy Bridge wasn't a lot better than Sandy Bridge; 22 nm was only good by the Haswell refresh, and that's 2 years of delay. 14 nm was good at Coffee Lake; that's 4 years of delay. 10 nm was only good after they renamed it to Intel 7; that's 5 years of delay. Intel 4 (initially 7 nm) is now 7 years past its initial expected date and still isn't good. So for me, the last good one was 32 nm. On the other hand, they also had problems with 90 nm. Netburst was not a good architecture, but Prescott's problems were partially caused by 90 nm not handling the higher frequencies initially expected. Back then, they were able to overcome this by changing the architecture. That approach worked up to 32 nm.
 
I disagree. Ivy Bridge wasn't a lot better than Sandy Bridge; 22 nm was only good by the Haswell refresh, and that's 2 years of delay. 14 nm was good at Coffee Lake; that's 4 years of delay. 10 nm was only good after they renamed it to Intel 7; that's 5 years of delay. Intel 4 (initially 7 nm) is now 7 years past its initial expected date and still isn't good. So for me, the last good one was 32 nm. On the other hand, they also had problems with 90 nm. Netburst was not a good architecture, but Prescott's problems were partially caused by 90 nm not handling the higher frequencies initially expected. Back then, they were able to overcome this by changing the architecture. That approach worked up to 32 nm.
14 nm Skylake-X was a very good node.
I still use an i9-7900X as my daily driver.
My own benchmarks show it is 20-30% slower than newer CPUs with comparable core counts on real work.
8 years and only a 20-30% boost in productivity.
That is the real issue.
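To put that in perspective, a 20-30% total gap over 8 years implies only a 2-3% compound improvement per year. A quick back-of-the-envelope check (the 8-year span and the 25% midpoint are just the rough figures from the post above):

```python
# Back-of-the-envelope: annualized (compound) performance growth implied by
# a ~25% total speedup over 8 years. Figures are the post's rough estimates.
total_speedup = 1.25   # newer CPUs ~25% faster (midpoint of 20-30%)
years = 8

annual_growth = total_speedup ** (1 / years) - 1
print(f"Implied compound improvement: {annual_growth:.1%} per year")
# -> Implied compound improvement: 2.8% per year
```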
 
14nm was easily Intel's best node. Too good.

Hard times forge strong corporations, which drive prosperity. Good times breed weak corporations, whose failures spark hard times.
 
I disagree. Ivy Bridge wasn't a lot better than Sandy Bridge; 22 nm was only good by the Haswell refresh, and that's 2 years of delay. 14 nm was good at Coffee Lake; that's 4 years of delay. 10 nm was only good after they renamed it to Intel 7; that's 5 years of delay. Intel 4 (initially 7 nm) is now 7 years past its initial expected date and still isn't good. So for me, the last good one was 32 nm. On the other hand, they also had problems with 90 nm. Netburst was not a good architecture, but Prescott's problems were partially caused by 90 nm not handling the higher frequencies initially expected. Back then, they were able to overcome this by changing the architecture. That approach worked up to 32 nm.
Ivy Bridge wasn't much faster than Sandy (IIRC it actually overclocked a bit worse), but it did bring feature updates like PCIe 3.0 and native USB 3.0, and it had lower idle power consumption.
 
Serious question: when was the last time we heard good news about Intel's nodes?
Easy answer!

Coffee Lake Refresh
My old 9900KS (Intel 14nm++)

Raptor Lake Refresh
My current 14900KS (Intel 7)

These are Intel's best nodes; without these two, Intel would not exist today.

Intel 3 is for the Intel Xeon 6 processor family!

Both the Intel 18A-P and Intel 18A-PT nodes are very good, especially Intel 18A-PT with 3D chiplets (Intel Foveros).

Definitely looking forward to Intel Razer Lake.
1.8 nm (Intel 18A-PT with PowerVia backside power delivery and 3D Intel Foveros); 18A-PT is Intel's secret weapon.

Cheers
 
I would like to know about the progress in shrinking SRAM (cache memory) cell size on these new nodes.
 
In April, 18A was supposed to be Intel's holy grail, outperforming the competition's 2 nm-class nodes. What changed so drastically in the last 3 months? I guess the shareholders are being lied to.
My standing opinion is that Intel should spin off its fabs.

Isn't that the bottom line with everything coming from China? Just look at the car industry.
Did anybody read past the headline? They are considering no longer offering 18A to foundry customers and trying to win customers for 14A instead. This has little to do with Intel's own production, which seems to be going ahead as planned on 18A; for the consumer space, that means Panther Lake at the end of this year. Foundry has been struggling to find customers for a multitude of reasons, and given the lead times on these contracts, if they started onboarding someone now, production would likely begin around the time 14A is estimated to be in HVM.

It's not that Intel has shown any clear signs of things going particularly well, but everyone just jumps on the usual "Intel fail!" comments, which get old and really have nothing to do with the contents of the piece.
 
Until I see external customers and mass production on their promised node (and I don't mean just one internal mobile chip), then it *IS* an Intel fail to me.
They have failed too many times in a row for me to trust them when they say 14A is going to solve all their problems. Why should I believe that, when 20A didn't, and 18A is already restricted to one internal mobile chip and thus likely low-volume/low-yielding?
 
Until I see external customers and mass production on their promised node (and I don't mean just one internal mobile chip), then it *IS* an Intel fail to me.
They have failed too many times in a row for me to trust them when they say 14A is going to solve all their problems. Why should I believe that, when 20A didn't, and 18A is already restricted to one internal mobile chip and thus likely low-volume/low-yielding?
Intel as we knew it is dead; it's going to be a slow, pathetic bleed into irrelevance, like IBM.

I hope some of the talent finds a good place
 
Intel as we knew it is dead; it's going to be a slow, pathetic bleed into irrelevance, like IBM.

I hope some of the talent finds a good place
Is IBM bleeding into irrelevance? When did that happen?

IBM has always worked in, and made billions of dollars from, large-scale computing. What company do you think the US government and large companies (like banks) hire for their operations?
 
...I was just getting excited the other day about what a great deal the 265K is. What node was that on?
Yep, it's faster than $10k CPUs from the competition; imagine what they could achieve with a superior node.

untitled.JPG

 
Yep, it's faster than $10k CPUs from the competition; imagine what they could achieve with a superior node.

View attachment 406406

Where is that chart referenced from? There are a lot of chips faster than the $10K chips, including EPYC, but those expensive chips are built for massive throughput, not raw speed.
 
Is IBM bleeding into irrelevance? When did that happen?

IBM has always worked in, and made billions of dollars from, large-scale computing. What company do you think the US government and large companies (like banks) hire for their operations?
Then read some history about IBM and what they used to be. Even in the server market they are pretty much irrelevant.
Where is that chart referenced from? There are a lot of chips faster than the $10K chips, including EPYC, but those expensive chips are built for massive throughput, not raw speed.
Looks like a review from the Tech Notice YouTube channel.

Another weird take from an ignored user: posting a chart from Premiere Pro testing, where the old Zen 4-based Threadripper loses to the Core Ultra series in a program that does not scale to anywhere near 100 cores.

Then claiming "it's faster than $10k CPUs from the competition", when the only $10k CPU in the chart is a 96-core/192-thread part.
I guess it goes over his head that Premiere does not scale to that many cores/threads, or to the max RAM (2 TB) that TR can support.
It's as silly as claiming that 2 TB of RAM is pointless because 128 GB in Premiere performs faster at a much lower price.

The 7995WX is obviously thrown into the chart to illustrate that paying more for the CPU does not guarantee the best performance there.
I don't think anyone buying a $10k server CPU is running Premiere all day.
 