
Intel "Nova Lake-S" Tapes Out on TSMC N2 Node

"difficult manufacturing target due to the heterogenous complexity." Ka-ching... :pimp:

Intel is playing with fire... Zen 6 X3D chips will steamroll this out of relevance in the DIY channel if it's not priced well, even if they turn out to be slower overall.

This definitely needs to be a banger. People on either side will have to buy a whole new, likely expensive platform, so it has to be clearly the best option; it can't be barely faster than Raptor Lake at gaming lol. Just looking at this, though, gaming seems near the bottom of the list of priorities, unless they release a really nice Ultra 5 or Ultra 7 with mostly P cores and maybe the cache tile I keep hearing about.

I don't have a ton of faith in Zen 6 like some do, given how mediocre Zen 5 was, but then Intel came out with Arrow Lake and said "hold my beer." So while I want to be excited for both, I'll wait until W1z tests them to form an opinion, likely late next year assuming they actually launch on time.
 
Windows 10 is being decommissioned this October. Despite wanting to hold out, I think all my computers will have to move to PopOS or Windows 11 at that time.
Yeah, there are a lot of people still clinging to Windows 10, and not just masses of enterprise users. The ever-increasing bloat takes a big toll on older hardware.

While Linux is in a completely different league when it comes to performance and stability, crap is really starting to seep in there too. I've been used to Linux keeping old hardware usable as long as possible, but as was evident when upgrading my older i7-3930K from Ubuntu 18.04(?) to 24.04, the desktop went from smooth to noticeably laggy. So there is some increasing bloat in the kernel/drivers/DE, not just applications. Normally "light" applications like LibreOffice have become a glitchy mess too, even on more recent hardware. It's sad to see bad code creeping in everywhere, hindering the progress of technology. There is no good reason why a modern PC should struggle more than my old Pentium 90 running Excel…

But this reluctance to upgrade has been a recurring issue with every other Windows release or so. It seems the last Windows version people eagerly upgraded to was Windows 98; since then, users have clung to XP, 7, and 10, not because any of them were good, but because the successor was so much worse.

Recent AGESA firmware updates and drivers try to park the 2nd CCD (the non-3D V-Cache one) as much as possible when detecting gaming loads (either via driver hooks or via the Windows Xbox Game Bar), but it still dips. That's the only gripe I've had for the past few weeks; I just went back to my Intel platform to rid myself of the stress.
I believe people are attributing far too much of their experienced issues to the chiplet layout. While there are cases where it matters, even single-CCD CPUs have their latency issues, and that's also why messing with drivers etc. will only get you so far. (Also remember big Xeons have different mesh layouts without that causing too severe issues.) When performance issues caused by core layout/buses do happen, they are usually very severe: if a game's thread or driver lands on the "wrong" core, the performance penalties stack up quickly, and it takes a while before the OS scheduler rebalances it.

If it's more "minor" but very frequent stutter or latency, then it's probably due to bottlenecks internal to the core. When it comes to low-level stuff, the performance characteristics of Intel's and AMD's CPUs are a bit different. In very broad strokes, AMD excels with "simpler" but more computationally dense loads, while Intel handles logic better thanks to a much stronger front-end. I've seen this when doing low-level optimization of code: some operations are seemingly almost twice as fast on AMD, but it's far more "sensitive" to having just the right combination of instructions to saturate the core, whereas Intel handles a wider range of loads better.
 
It might as well be Celestial for all we know.

Yeah this could be any number of things. Could be a beefed up SoC with a monster NPU, or it could also be an Xe4 GPU tile. For that matter, it could be a Celestial dGPU, since it is also expected in H2 2026.
 
What I meant is LPE sticks out on the desktop like a sore thumb.
LPE cores feel like Intel's desperate way to advertise a higher core count. The problem with removing SMT/HT is that you need more physical cores to make up for the lack of threads. With AMD expected to increase core counts with Zen 6, Intel is clearly trying to keep up. On a desktop, one wonders what the point of the LPE cores is. Lunar Lake replaced Meteor Lake and dropped the LPE cores, which are exceptionally slow and high in latency. They're probably only useful if you leave your computer at idle with minimal background activity most of the time.

The focus on NPUs will probably do Intel in as well. In truth, most people use ChatGPT etc., which doesn't benefit from having an NPU. If I were keen to run AI workflows locally, the likes of Strix Halo or other onboard dGPUs would run circles around an NPU built into the CPU. It's a quarter-baked solution, and we can already observe the lack of interest over the past two years.
 
LPE cores feel like Intel's desperate way to advertise a higher core count. The problem with removing SMT/HT is that you need more physical cores to make up for the lack of threads. With AMD expected to increase core counts with Zen 6, Intel is clearly trying to keep up. On a desktop, one wonders what the point of the LPE cores is. Lunar Lake replaced Meteor Lake and dropped the LPE cores, which are exceptionally slow and high in latency. They're probably only useful if you leave your computer at idle with minimal background activity most of the time.

Considering it's 16P+32E+4LP, I don't necessarily think it's a desperate attempt to advertise a higher core count, but genuinely a power-saving measure. One that doesn't make much sense on desktops, but it was likely carried over from their mobile design. As long as the OS can address these cores properly, it's a welcome addition. You can always disable that domain otherwise, with no meaningful performance losses.
 
Considering it's 16P+32E+4LP, I don't necessarily think it's a desperate attempt to advertise a higher core count, but genuinely a power-saving measure. One that doesn't make much sense on desktops, but it was likely carried over from their mobile design. As long as the OS can address these cores properly, it's a welcome addition. You can always disable that domain otherwise, with no meaningful performance losses.

I'm 100% interested in how Windows handles this; going that wide with different core types can't be easy, and we all know games don't always behave well on hybrid architectures.

It'll also be interesting to see how large the die is and how expensive it is for Intel to manufacture on N2.

I honestly thought they'd have their foundry issues sorted by now.

I'm sure it'll wreck at Cinebench though.
 
I'm 100% interested in how Windows handles this; going that wide with different core types can't be easy, and we all know games don't always behave well on hybrid architectures.

It'll also be interesting to see how large the die is and how expensive it is for Intel to manufacture on N2.

I honestly thought they'd have their foundry issues sorted by now.

I'm sure it'll wreck at Cinebench though.

Linux has been able to handle heterogeneous processors for quite some time now; the issue has always been Windows. The Windows NT kernel is an actual fossil at this point, and Windows is in dire need of a development reset, kind of like they attempted with Longhorn. The problem is that this will inevitably cause another Vista to happen: even if the OS is executed well (as Vista was), device drivers and software developers will never be ready for it.
 
".. difficult manufacturing target due to the heterogenous complexity."
Get that right out of my life! AVOID.

It's a 10-year-old deprecated MS OS; who would use 2026 hardware on ancient garbage?
A group here on TPU are die-hard Windows 7 & 10 users.
Recent AGESA firmware updates and drivers try to park the 2nd CCD (the non-3D V-Cache one) as much as possible when detecting gaming loads (either via driver hooks or via the Windows Xbox Game Bar), but it still dips. That's the only gripe I've had for the past few weeks; I just went back to my Intel platform to rid myself of the stress.
Cmd prompt code can force Windows to stop that behaviour. Look it up.
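For reference, the usual trick is the core-parking setting in powercfg. This is a minimal sketch, assuming the standard `CPMINCORES` alias (the minimum percentage of cores Windows keeps unparked); it needs an elevated prompt, and you should verify the exact knob on your own system:

```python
# Sketch of the core-parking tweak, assuming powercfg's CPMINCORES alias
# ("Processor performance core parking min cores"). Setting it to 100%
# effectively tells Windows never to park cores.
import subprocess
import sys

def core_parking_commands(min_cores_pct: int = 100) -> list:
    """Build the powercfg command lines; building them executes nothing."""
    return [
        ["powercfg", "/setacvalueindex", "scheme_current",
         "sub_processor", "CPMINCORES", str(min_cores_pct)],
        ["powercfg", "/setactive", "scheme_current"],  # re-apply the scheme
    ]

if sys.platform == "win32":  # only meaningful on Windows, elevated prompt
    for cmd in core_parking_commands():
        subprocess.run(cmd, check=True)
```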
 
Cmd prompt code can force Windows to stop that behaviour. Look it up.
I use my PC like a normal person, and that's the way it should work out of the box. Those who have a lot of free time to run scripts before using the computer, or to skim the BIOS to enable/disable something, may do so, but not me. Not every day is a tweaking day; I just want to enjoy what I built and paid for, not beta test it until it's EOL in the cycle.

I do know that trick, as I use it when benching, but for normal usage I just use my stuff as it is.
 
Though in honesty, if you really want a problem-free experience, you don't fully avoid that with AMD either. They don't have different core types, but they have had different core arrangements that caused problems. It's why I picked the 5800X: all big cores, single CCD, so there is nothing that can be scheduled the wrong way. Initial Ryzens had quite a lot of issues when the same workload was split between two CCDs. Even with the latest 9950X3D, the pinnacle of AMD's tech, everyone hyped that the latest Windows scheduler updates and AMD's new chipset drivers addressed it, but can you ever be sure it's working exactly as it should? You can say that with certainty for a 5800X/X3D or a 9800X3D, but not a 9950X.
You can see with a real-time overlay which cores are being used, and even force which cores are used with process-affinity software. It shouldn't be needed, but at least the option is there.
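For anyone wanting to script the affinity approach rather than use a GUI tool, Windows' built-in `start /affinity <hexmask>` takes a hex bitmask of allowed logical CPUs. A small helper to build that mask (the core numbering here is only an example; logical CPU indices depend on the system):

```python
# Build the hex bitmask that `start /affinity` expects: bit N set means
# the process may run on logical CPU N.
def affinity_mask(cores) -> str:
    """Convert an iterable of logical core indices to a hex affinity mask."""
    mask = 0
    for c in cores:
        mask |= 1 << c
    return hex(mask)

# Example: pin a game to the first 8 logical CPUs:
#   start /affinity 0xff game.exe
print(affinity_mask(range(8)))  # 0xff
```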

I'm not sure how I feel about all these different types of cores being packaged together for desktop chips; personally, I'd rather have all large cores. How many desktop apps actually see significant performance increases with them? It's more of an enthusiast or work-from-home benefit that your average Joe probably doesn't need or care about.
 
LPE cores feel like Intel's desperate way to advertise a higher core count. The problem with removing SMT/HT is that you need more physical cores to make up for the lack of threads. With AMD expected to increase core counts with Zen 6, Intel is clearly trying to keep up. On a desktop, one wonders what the point of the LPE cores is. Lunar Lake replaced Meteor Lake and dropped the LPE cores, which are exceptionally slow and high in latency. They're probably only useful if you leave your computer at idle with minimal background activity most of the time.
But it's working, isn't it? Since Alder Lake (8P/16T+8E), Intel on desktop has kept up with AMD's multicore performance, even with the loss of hyperthreading this generation. And since Ryzen 3000 (8P/16T+8P/16T) and its chiplet architecture, AMD's idle power consumption has been atrocious. Intel's idle power with Arrow Lake's (8P+16E) tile architecture on desktop, while not as bad, is still much worse than Raptor Lake's (8P/16T+16E). That's what LPE cores fix, which is why Arrow Lake mobile (2LPE+6P+8E), Meteor Lake (2LPE+6P+8E), and even Lunar Lake (4LPE+4P) have them. And yes, they're only useful for near-idle workloads, so they don't even contribute meaningfully to multicore workloads like the E cores do.
I'm not sure how I feel about all these different types of cores being packaged together for desktop chips; personally, I'd rather have all large cores. How many desktop apps actually see significant performance increases with them? It's more of an enthusiast or work-from-home benefit that your average Joe probably doesn't need or care about.
If you want to zip or unzip a large file, encode a video, or run an LLM, more cores are nice. Intel's all-P-core server chips still use older Redwood Cove cores, and yet they're a lot more expensive. For roughly the same cost as 12 P cores, Arrow Lake desktop has 24 cores. If you really need more than what 8 P cores can provide, odds are 16 E cores are more useful than 4 more P cores. Even if you only needed 12 cores total, E cores won't hurt, because the last 4 P cores wouldn't have the power or thermal budget to operate much beyond their base clock, so they might as well be E cores. I suspect the reason the 9950X isn't 8 Zen 5 + 16 Zen 5c is that AMD sees the little cores as bad for PR rather than bad for performance.
 
If you want to zip or unzip a large file, encode a video, or run an LLM, more cores are nice. Intel's all-P-core server chips still use older Redwood Cove cores, and yet they're a lot more expensive. For roughly the same cost as 12 P cores, Arrow Lake desktop has 24 cores. If you really need more than what 8 P cores can provide, odds are 16 E cores are more useful than 4 more P cores. Even if you only needed 12 cores total, E cores won't hurt, because the last 4 P cores wouldn't have the power or thermal budget to operate much beyond their base clock, so they might as well be E cores. I suspect the reason the 9950X isn't 8 Zen 5 + 16 Zen 5c is that AMD sees the little cores as bad for PR rather than bad for performance.
Exactly. For anything outside of those use cases, the cores aren't of much use to the average desktop user, as the 285K for example proves. I can't help but think it was only intended as a stopgap on the way to hybrid cores, but unfortunately that's now cancelled, which is a pity, because they would have been, IMO, the future.

I can't help but feel this will only get worse as Zen 6, 7, and beyond continue to expand how many P cores are offered to average Joes like us. I guess it will depend on how afraid AMD is of these parts eating into entry-level Threadripper, but one thing's for sure: AMD is fully going to exploit Intel being on the back foot by jumping to N2X and N2P.
 
The Panther Lake CPU tile is confirmed to use 18A tech, so this TSMC 2 nm tech should be used in the Celestial GPU tile.
 
The Panther Lake CPU tile is confirmed to use 18A tech, so this TSMC 2 nm tech should be used in the Celestial GPU tile.

N3 at best; it's a tiny tile. There is no good reason to port it to the newest node.

I'm not sure how I feel about all these different types of cores being packaged together for desktop chips; personally, I'd rather have all large cores. How many desktop apps actually see significant performance increases with them? It's more of an enthusiast or work-from-home benefit that your average Joe probably doesn't need or care about.

I'm willing to trade 32 E-cores for 12 P-cores because that's the equivalent chip area. The 4-core E cluster is 1.66 times larger than one P core, and from a performance perspective it's 3.3 times faster.
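As a quick sanity check of those figures (the 1.66x area and 3.3x throughput numbers are the claims above, not measured data), the E-core cluster works out to roughly double the throughput per unit area of a P core:

```python
# Back-of-the-envelope check of the figures quoted above: a 4-E-core
# cluster at ~1.66x the area of one P core and ~3.3x its throughput
# means about 2x performance per unit of die area.
area_ratio = 1.66   # 4 E cores vs 1 P core (claimed)
perf_ratio = 3.3    # multithreaded throughput vs 1 P core (claimed)

perf_per_area = perf_ratio / area_ratio
print(round(perf_per_area, 2))  # ~1.99
```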
 
Intel's idle power with Arrow Lake's (8P+16E) tile architecture on desktop, while not as bad, is still much worse than Raptor Lake's (8P/16T+16E).
How is it worse?
It's the same.
Or do you mean that it's bad that it's the same and not lower?
 
Considering it's 16P+32E+4LP, I don't necessarily think it's a desperate attempt to advertise a higher core count, but genuinely a power-saving measure. One that doesn't make much sense on desktops, but it was likely carried over from their mobile design. As long as the OS can address these cores properly, it's a welcome addition. You can always disable that domain otherwise, with no meaningful performance losses.
The main motivation for Intel to push hybrid designs on the desktop is the big PC vendors (Dell, Lenovo, HP, etc.), which mostly sell people and businesses upgrades based on specs like "GHz" and "cores", which is why these companies advertise the hybrid CPUs like "52 cores, up to 6.0 GHz at 65 W". The secondary motivation is that it looks good in some synthetic benchmarks.

Just imagine for a moment if they managed to create a new 4-core with massive IPC gains, like >2x; it would sell terribly thanks to low-information buyers. Many forum users here probably wouldn't even trade more cores for faster cores, even though performance scaling would be superior.
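The faster-cores-versus-more-cores point can be sketched with Amdahl's law. This is an illustration under an assumed parallel fraction, not a benchmark:

```python
# Amdahl's-law sketch: with an assumed 80%-parallel workload, a
# hypothetical 4-core chip at 2x IPC beats an 8-core chip at 1x IPC.
def speedup(ipc: float, cores: int, parallel_fraction: float) -> float:
    """Overall speedup vs one baseline core (Amdahl's law scaled by IPC)."""
    return ipc / ((1 - parallel_fraction) + parallel_fraction / cores)

p = 0.8  # assumed parallel fraction; real workloads vary widely
fast_quad = speedup(ipc=2.0, cores=4, parallel_fraction=p)   # ~5.0
wide_octa = speedup(ipc=1.0, cores=8, parallel_fraction=p)   # ~3.33
print(fast_quad, wide_octa)
```

Only at a very high parallel fraction (near 1.0) does the wider chip catch up, which is why the trade favors faster cores for most interactive desktop software.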

Linux has been able to handle heterogeneous processors for quite some time now; the issue has always been Windows. The Windows NT kernel is an actual fossil at this point, and Windows is in dire need of a development reset, kind of like they attempted with Longhorn. The problem is that this will inevitably cause another Vista to happen: even if the OS is executed well (as Vista was), device drivers and software developers will never be ready for it.
Most of that is right. While we don't know whether Longhorn was good or not, Vista was bad because they abandoned it and stuck with the old kernel plus lots of fancy bloat.
But yeah, the "New Technology" kernel is an ancient relic at this point. I kind of would expect them to continue the patchwork in perpetuity, until they're forced to switch to a Linux kernel or something out of desperation…

As for being able to "handle" heterogeneous CPUs, it probably depends on what you mean by that. If the expectation is that most user-interactive applications should stay on fast cores, and efficiency cores are used for background, idle, or intentionally for some batch loads (when the application is aware of it), then yeah, I believe Linux is already there. The problems arise when applications aren't aware of the differences between cores, just see "40 threads", and spawn too many threads for synchronous loads, ending up causing delays or latency. In a way, I would argue this is the "fault" of the application, and such problems are inevitable when most just query for thread count. I'm not aware of any other option than to query CPUID for the different cores' abilities and determine how many are fast, etc.
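On Linux specifically, recent kernels expose the split directly in sysfs on hybrid Intel parts, which is friendlier than raw CPUID. A minimal sketch, with the caveat that the `/sys/devices/cpu_core` and `/sys/devices/cpu_atom` paths (the ones perf uses on hybrid systems) are kernel-version dependent:

```python
# Sketch of hybrid-core discovery on Linux: hybrid-aware kernels expose
# /sys/devices/cpu_core/cpus and /sys/devices/cpu_atom/cpus as cpulist
# strings like "0-15" or "16-23,32". Paths are an assumption to verify.
from pathlib import Path

def parse_cpu_list(text: str) -> list:
    """Expand a kernel cpulist string ("0-3,8") into individual CPU ids."""
    cpus = []
    for part in text.strip().split(","):
        if not part:
            continue
        if "-" in part:
            lo, hi = part.split("-")
            cpus.extend(range(int(lo), int(hi) + 1))
        else:
            cpus.append(int(part))
    return cpus

def hybrid_topology() -> dict:
    """Map core type -> logical CPUs; empty dict on non-hybrid systems."""
    topo = {}
    for name in ("cpu_core", "cpu_atom"):
        path = Path(f"/sys/devices/{name}/cpus")
        if path.exists():
            topo[name] = parse_cpu_list(path.read_text())
    return topo
```

An application could use something like this to size its worker pool by P-core count instead of blindly spawning one thread per logical CPU.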
 
How is it worse?
It's the same.
Or do you mean that it's bad that it's the same and not lower?
I didn't notice these results from TechPowerUp. They're at odds with Tom's Hardware

and with PC Mag.
 
Most of that is right. While we don't know whether Longhorn was good or not, Vista was bad because they abandoned it and stuck with the old kernel plus lots of fancy bloat.
But yeah, the "New Technology" kernel is an ancient relic at this point. I kind of would expect them to continue the patchwork in perpetuity, until they're forced to switch to a Linux kernel or something out of desperation…

As for being able to "handle" heterogeneous CPUs, it probably depends on what you mean by that. If the expectation is that most user-interactive applications should stay on fast cores, and efficiency cores are used for background, idle, or intentionally for some batch loads (when the application is aware of it), then yeah, I believe Linux is already there. The problems arise when applications aren't aware of the differences between cores, just see "40 threads", and spawn too many threads for synchronous loads, ending up causing delays or latency. In a way, I would argue this is the "fault" of the application, and such problems are inevitable when most just query for thread count. I'm not aware of any other option than to query CPUID for the different cores' abilities and determine how many are fast, etc.

There are plenty of builds of Longhorn floating around the internet, both pre- and post-reset, as well as videos on YouTube for you to check out. But I can make a case with something even more rudimentary: Neptune, the earliest prototype of an NT-based home operating system, largely based on Windows 2000. Even in its unfinished state, it's a far more cohesive operating system than Windows 11 has EVER been. While I do believe the "under the hood" parts of Windows are in healthy shape, the workflow is terrible, and the OS has started to drop essential functionality, often replacing it with bloat, AI, or cloud apps (see: the new Notepad).



Also, IIRC the NT 5.2 (Server 2003 R2) to 6.0 transition signified a major kernel overhaul, which stuck through OS version 6.2 (Windows 8). With that final revision being reported as kernel 6.3 starting with Windows 8.1, Microsoft then changed the internal OS version to NT 10.0 with Windows 10, so since then the internal version number has no meaning in that regard anymore. Quite the mess.
 
Seems like Intel's Valleytronics can also wait for another 10 years .....

P.S: Intel, just die.
 
Seems like Intel's Valleytronics can also wait for another 10 years .....

P.S: Intel, just die.

Yeah, die Intel, so we can see just how consumer friendly AMD truly is :fear:

You should be praying for Intel to overcome their difficulties every single day, for their next processor to lay a smackdown on Ryzen they won't soon forget. Competition is good. Monopoly, bad.
 
I'm willing to trade 32 E-cores for 12 P-cores because that's the equivalent chip area. The 4-core E cluster is 1.66 times larger than one P core, and from a performance perspective it's 3.3 times faster.
If you use applications that use them, absolutely; if not, they're a waste of sand, and that's mostly the case. Honestly, talking to a lot of customers, they think a bigger number is better; they flat out have no clue what the difference is between P, E, LPE, or any of the other cores. If you're an informed consumer and buy these products for a specific purpose, congratulations, you're in the tiny minority.
 
I didn't notice these results from TechPowerUp. They're at odds with Tom's Hardware

and with PC Mag.
Well, I don't know what to say, except that the truth might be somewhere in the middle?
 
If you use applications that use them, absolutely; if not, they're a waste of sand, and that's mostly the case. Honestly, talking to a lot of customers, they think a bigger number is better; they flat out have no clue what the difference is between P, E, LPE, or any of the other cores. If you're an informed consumer and buy these products for a specific purpose, congratulations, you're in the tiny minority.
I'm just mad at Intel .... I would like them to get off the ground. But honestly!
Do you still believe in miracles at your age?

P.S: 10 years of continuous degradation won't be forgiven even by God, so good luck with Intel.
 
It might as well be Celestial for all we know.
It could also just be a tile for Nova Lake, like the GPU or the PCH, whereas the cores and interposer may be on Intel nodes.
 