
The true nature of E-cores: how effective are they?

Everything on that list can be fixed with kernel scheduling, software optimisation, and BIOS limits. The hardware is fine.
But it won't be, and that's why this implementation won't win me over.

Their true nature is a spec-list booster, and I'll admit they work well in that role.

They're effective, all right; without them Intel would be f£#@ed, simple.
 
But it won't be, and that's why this implementation won't win me over.
So your argument is that Microsoft won't continue to update Windows 11 and its core scheduler, Intel will no longer release BIOS updates for its motherboards, and software development will completely stagnate as of the release of 12th/13th-gen Intel?
 
So your argument is that Microsoft won't continue to update Windows 11 and its core scheduler, Intel will no longer release BIOS updates for its motherboards, and software development will completely stagnate as of the release of 12th/13th-gen Intel?
No, my argument is that Intel won't put that effort in.
That is not how they are using those cores.

This isn't Arm's big.LITTLE; theirs works.
 
No, my argument is that Intel won't put that effort in.
That is not how they are using those cores.

This isn't Arm's big.LITTLE; theirs works.
So Intel's long-term strategy of parallel core development, which they are specifically betting on with many (if not all) of their upcoming architectural designs, including tile-based chiplets, is also to, wait for it, completely discontinue optimizations for said designs?

Interesting opinion.
 
Some things can't be fixed with kernel optimizations. To give you an example, AMD's Bulldozer wasn't really "fixed" until Windows 8, IIRC.
AMD's Bulldozer was more like bull**** tbh, a joke architecture that almost bankrupted AMD; Zen literally saved them. Not sure the comparison is fair here, but I take your point.
 
So Intel's long-term strategy of parallel core development, which they are specifically betting on with many (if not all) of their upcoming architectural designs, including tile-based chiplets, is also to, wait for it, completely discontinue optimizations for said designs?

Interesting opinion.
How you turn what I say into your own theories is amusing.

So, put simply:

They work EXACTLY as Intel intended already.

Yes, they will continue to develop them, and things could change.

But as it is now:

They're pushing E-cores to higher core counts.

And higher frequencies.

Solely to compete on multi-core performance and core count.

They're not using them right, IMHO, and never will in this design, with or without Microsoft's assistance.

Because if they did, their performance would be subpar versus AMD.

Later designs might differ, but that's not likely, since they cannot economically make a core-count-equal, performance-parity part that competes in any other way ATM.
 
So your argument is that Microsoft won't continue to update Windows 11 and its core scheduler, Intel will no longer release BIOS updates for its motherboards, and software development will completely stagnate as of the release of 12th/13th-gen Intel?
Whoa, slow down there, maestro. There's a Windows 11? I think you meant to say Windows 1, 16-bit, right? Software and hardware upgrades, patches, new releases? When has that ever happened?
 
Whoa, slow down there, maestro. There's a Windows 11? I think you meant to say Windows 1, 16-bit, right? Software and hardware upgrades, patches, new releases? When has that ever happened?
I know, right? Progress? Is that what they call what those pesky developers do?
 
Mobile phones have had it for at least a decade already...
And Qualcomm says they want to enter the desktop market in 2024. It will be interesting to see what design they (along with Nuvia, which they purchased) may bring.
 
How you turn what I say into your own theories is amusing.

So, put simply:

They work EXACTLY as Intel intended already.

Yes, they will continue to develop them, and things could change.

But as it is now:

They're pushing E-cores to higher core counts.

And higher frequencies.

Solely to compete on multi-core performance and core count.

They're not using them right, IMHO, and never will in this design, with or without Microsoft's assistance.

Because if they did, their performance would be subpar versus AMD.

Later designs might differ, but that's not likely, since they cannot economically make a core-count-equal, performance-parity part that competes in any other way ATM.

This is true for the K and highest-end SKUs; they have to yeet those products for that 2% win. But the i5s and non-K SKUs, where they're actually tuned properly, really do benefit. It's just that no one ever pays attention to those; in those cases the performance per dollar is massive and the power is completely under control.
 
And Qualcomm says they want to enter the desktop market in 2024.
Where QC is at today, they'd be pouring unleaded gasoline on their investments with anything in the desktop space. It's already shrinking massively, and unless they have an x86-Apple-killer of a chip, they won't even get a consolation prize for competing against AMD or Intel at the time.
 
I've brought it up before, but I strongly believe that the E-cores are a step Intel has taken towards increasing processor density in the same package area. The main target of this would obviously be the server market, but largely thanks to Ryzen, we've also seen common desktop tasks beginning to take advantage of 8+ core processors. The idea is ingenious: increase density and develop the architecture into a high-performance one at the same time, reaping the rewards of both.

Currently, the E-cores perform worse, but as time goes on, Intel will increase the density and performance of these cores, eventually coupling this with Foveros 3D packaging and ever more advanced lithography nodes that permit transistor densities not currently feasible with 7 nm-class technology such as Intel 10 or TSMC N7. The result is that you may some day have a veritable multithreading monster within a regular desktop footprint. They already managed to double E-core density in one generation with Raptor Lake, and that's just the desktop market. I would not be surprised if Intel eventually offered an advanced processor with, say, 8 P-cores and 120 E-cores, targeted at something like the workstation/HEDT market, just like AMD does with Threadripper Pro today. Eventually, those will be our i9s, too.
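To make the density argument concrete, here's a back-of-envelope sketch in Python. The area and throughput ratios are purely illustrative assumptions I picked for the example, not measured figures for any real chip:

```python
# Back-of-envelope comparison using ILLUSTRATIVE ratios only:
# assume one P-core occupies roughly the die area of four E-cores,
# and one E-core delivers roughly half a P-core's MT throughput.
P_AREA, E_AREA = 4.0, 1.0   # relative die area (assumption)
P_PERF, E_PERF = 1.0, 0.5   # relative throughput (assumption)

def throughput_per_area(n_p: int, n_e: int) -> float:
    """Aggregate multithreaded throughput divided by total core area."""
    area = n_p * P_AREA + n_e * E_AREA
    perf = n_p * P_PERF + n_e * E_PERF
    return perf / area

print(throughput_per_area(8, 0))    # 8 P-cores only -> 0.25
print(throughput_per_area(4, 16))   # same total area, hybrid -> 0.375
```

Under those assumptions, a hybrid layout buys roughly 50% more multithreaded throughput from the same silicon area, which is the whole appeal for density-driven markets like servers.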
 
I've brought it up before, but I strongly believe that the E-cores are a step Intel has taken towards increasing processor density in the same package area. The main target of this would obviously be the server market, but largely thanks to Ryzen, we've also seen common desktop tasks beginning to take advantage of 8+ core processors. The idea is ingenious: increase density and develop the architecture into a high-performance one at the same time, reaping the rewards of both.

Currently, the E-cores perform worse, but as time goes on, Intel will increase the density and performance of these cores, eventually coupling this with Foveros 3D packaging and ever more advanced lithography nodes that permit transistor densities not currently feasible with 7 nm-class technology such as Intel 10 or TSMC N7. The result is that you may some day have a veritable multithreading monster within a regular desktop footprint. They already managed to double E-core density in one generation with Raptor Lake, and that's just the desktop market. I would not be surprised if Intel eventually offered an advanced processor with, say, 8 P-cores and 120 E-cores, targeted at something like the workstation/HEDT market, just like AMD does with Threadripper Pro today. Eventually, those will be our i9s, too.
I would take it even further and say the E-cores do not perform worse at all; they do exactly what they're designed to do: increase multithreaded performance at the expense of much less die space than P-cores, which realistically do not scale past eight, in both chip-area cost and actual need for them.
 
I would take it even further and say the E-cores do not perform worse at all; they do exactly what they're designed to do: increase multithreaded performance at the expense of much less die space than P-cores, which realistically do not scale past eight, in both chip-area cost and actual need for them.

Agreed, E-cores are probably the future, and I think AMD will also eventually adopt a similar strategy with a hybrid-architecture processor. Zen 2 is a great candidate for the efficient cores in a Ryzen: as we've seen, it's a relatively unassuming yet high-performance architecture, and Mendocino showed us that Zen 2 on modern lithography nodes makes for exceptionally small chips that can pack a punch. Assuming AMD's engineers are able to modify and package it in such a way, I would definitely expect a product like that to eventually exist: say, a hypothetical processor mixing 16 Zen 4 cores with another 16 3D-stacked Zen 2 cores.
 
I know, right? Progress? Is that what they call what those pesky developers do?
Not once have any of your counterarguments in any way pushed aside any of my opinions or claims.

Instead you went with trying to prove me irrational.

All while the best you've got is "Microsoft will fix it."

We're six months in; no, no they won't.

It's working AS intended; Intel fanboy types have the core-parity and highest-IPC arguments in the bag.

What else mattered to Intel when making efficiency cores?
Sure as shit wasn't power use, temperature, or efficiency, FTM.

You've got nothing but a laugh. Great counter. Not.
 
Or how about X3D with regular Zen cores? The issue with Intel's approach isn't just the E-cores or mismatched instruction-set support; they're also dealing with something like switching from 1c/2t HT cores to normal ones without HT, which IMO will also affect performance and efficiency negatively.
 
The global cabal is hell-bent on getting off oil and gas, and therefore energy prices are expected to skyrocket. Consumers won't be purchasing a new phone every year or a new PC every 3-4 years, and enterprises won't be replacing servers every 4-5 years if their power bills all double or triple.
Cool, but what does that have to do with E-cores?

They aren't used for gaming anyway.
And never will be if this keeps up.

they're also dealing with something like switching from 1c/2t HT cores to normal ones without HT
That's not how it works. You switch a thread, not a core.
 
Cool, but what does that have to do with E-cores?


And never will be if this keeps up.


That's not how it works. You switch a thread, not a core.
I think his point is going from two interdependent threads on one HT core to two separate cores.
 
Not once have any of your counterarguments in any way pushed aside any of my opinions or claims.

Instead you went with trying to prove me irrational.

All while the best you've got is "Microsoft will fix it."

We're six months in; no, no they won't.

We only use Windows in our day-to-day lives because of its large back catalog of supported legacy software and the commercial software pledged to it. It has always been glacial in the pace at which it adapts to modern computing, and it carries decades of baggage in legacy code it simply cannot get rid of. I mean, really: Windows 11 22H2 still ships with the phone dialer application introduced in NT 4.0.

If mostly everything I've ever used weren't designed squarely for Microsoft Windows, I'd be a long-time Linux user by now, and it is on Linux that you should expect to see proper support for the bleeding edge, for the corner cases, and for all sorts of wacky hardware that may appear someday. I don't think this is a discredit to Intel, but rather to Microsoft, and even then, it's not like Microsoft can do much about it. You can already imagine the endless whining and complaints if they ever decided to axe software back-compat and limit it to, say, apps designed for Windows 8.1 and later only. Damned if they do, damned if they don't.

Or how about X3D with regular Zen cores? The issue with Intel's approach isn't just the E-cores or mismatched instruction-set support; they're also dealing with something like switching from 1c/2t HT cores to normal ones without HT, which IMO will also affect performance and efficiency negatively.

A properly optimized operating system should be able to discern between these two types of cores and assign tasks suitable to each. Android phones have been doing it for years, and now there's hardware-assisted thread scheduling in Alder Lake, so it really should be a matter of software optimization, which... Windows is just not good at. You step outside its comfort zone and anything can happen.
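The scheduler aside, an application can also steer itself onto specific cores. A minimal Linux-only sketch using Python's `os.sched_setaffinity`; note that which logical CPU numbers correspond to P-cores versus E-cores varies by system, so treating "the lowest-numbered CPU" as a stand-in here is purely an assumption:

```python
import os

def pin_to(cpus):
    """Restrict the calling process to the given set of logical CPUs.

    Linux-only: wraps the sched_setaffinity syscall and returns the
    affinity mask actually in effect afterwards.
    """
    os.sched_setaffinity(0, cpus)
    return os.sched_getaffinity(0)

# Save the current mask, confine ourselves to one CPU, then restore.
# (Which CPU is a P-core or an E-core is system-specific.)
everything = os.sched_getaffinity(0)
one_cpu = {min(everything)}

assert pin_to(one_cpu) == one_cpu        # now confined to that CPU
assert pin_to(everything) == everything  # original mask restored
```

Windows offers the equivalent through its own affinity APIs; the point is just that the mechanism for "put this work on those cores" already exists, and the hard part is deciding which cores deserve which threads.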
 
That's not how it works. You switch a thread, not a core.
You're switching the thread from a P-core to an E-core; all ADL chips have HT on the P-cores, IIRC. How is that wrong? I didn't say switching from just one core to another, more like from P-cores to E-cores, which are quite different.

A properly optimized operating system should be able to discern between these two types of cores and assign tasks suitable to each. Android phones have been doing it for years, and now there's hardware-assisted thread scheduling in Alder Lake, so it really should be a matter of software optimization, which... Windows is just not good at. You step outside its comfort zone and anything can happen.
Optimizations only come after the hardware is out in the market, and this is their first real test case on a full-fledged desktop chip. Android is quite different in that regard and addresses such needs differently; you're, after all, doing it on a phone.
 
I think his point is going from two interdependent threads on one HT core to two separate cores.
If anything, that would be a benefit.

You're switching the thread from a P-core to an E-core; all ADL chips have HT on the P-cores, IIRC.
Yes, but it's still just one thread. Not two. Not more.
 
That depends on the application. Some applications spawn one thread per core even if it's an HT core; 7-Zip, for example, and probably WinRAR as well. If threading and handling multiple cores were so easy, we would've had 1000c/2000t (E?) cores by now.
 
You're switching the thread from a P-core to an E-core; all ADL chips have HT on the P-cores, IIRC. How is that wrong? I didn't say switching from just one core to another, more like from P-cores to E-cores, which are quite different.


Optimizations only come after the hardware is out in the market, and this is their first real test case on a full-fledged desktop chip. Android is quite different in that regard and addresses such needs differently; you're, after all, doing it on a phone.

I agree in general, and that also applies to Arc: hardware will only ever mature and kinks will only ever be fixed if there is wide deployment of any given hardware architecture. Alder Lake is the first such widely deployed product, and any inefficiencies should be iterated upon as newer generations roll out.

Limited field testing with hybrid architectures before Alder Lake had been done with the Lakefield CPUs, though that was restricted to just a few laptop models, mostly from Samsung. The i5-L16G7, for example, actually pioneered more than just the hybrid architecture (it was a 5-core chip with 1 Sunny Cove P-core and 4 Tremont E-cores); it was also the first Foveros 3D SKU to ship, complete with 8 GB of 3D-stacked, on-package LPDDR4X DRAM, and this was back in 2020.


Microsoft likely had a sample of this even earlier, as they no doubt have an interest in supporting Intel's latest technological advancements. So, all things considered, they have known about this for some time, and it's been about a year since it became a mass-deployed product; being slow to adapt is entirely on MSFT if you ask me.

That depends on the application. Some applications spawn one thread per core even if it's an HT core; 7-Zip, for example, and probably WinRAR as well. If threading and handling multiple cores were so easy, we would've had 1000c/2000t (E?) cores by now.

Software should be able to adapt if the constraints of the operating system are dropped. 7-Zip and other archivers like RAR are quite multi-core friendly, and I suspect the developers of both can optimize priorities to make the best of such an architecture; the same should go for video encoders (which already do this based on CPU architecture and instruction set).
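The "one worker per logical core" pattern the archivers use is easy to sketch. This is a Python stand-in, not 7-Zip's actual code, and `compressed_size` is a hypothetical helper standing in for real per-block compression work:

```python
import os
import zlib
from concurrent.futures import ThreadPoolExecutor

def compressed_size(chunk: bytes) -> int:
    # Stand-in for real per-block compression work.
    # zlib releases the GIL, so threads genuinely run in parallel here.
    return len(zlib.compress(chunk))

def compress_all(chunks):
    # Like the archivers discussed above: spawn one worker per logical
    # CPU, counting HT siblings and E-cores alike, and let the OS
    # scheduler decide which worker lands on which core.
    workers = os.cpu_count() or 1
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(compressed_size, chunks))

sizes = compress_all([b"x" * 4096] * 8)
assert len(sizes) == 8 and all(s > 0 for s in sizes)
```

Note that from the application's side there is nothing hybrid-aware here: it just saturates every logical CPU and relies on the scheduler to map the work sensibly, which is exactly why scheduler quality matters so much on these chips.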
 
PLEASE, NO INTEL/AMD FANBOY ANTAGONISM!

What is your opinion on Intel's motivation for introducing E-cores? Some time ago I met a self-confessed AMD jock who suggested Intel's E-cores are just a poor attempt to overshadow AMD's core count, which can't be matched with performance cores alone. Although my buying decisions are mostly based on benchmarks and price, I have to admit that now, every time I come across E-cores, I end up second-guessing the purpose behind them.

Whether the above statement is true or not, are E-cores effectively achieving what they're designed for?

- If yes, how effectively, and in what types of workloads?

- If no, might this just be down to a poor implementation in ADL, with positive signs going forward (next gen/gens)?

In my view, the small-core strategy is directly linked to Intel's slowness in advancing chip manufacturing. With no extra space from a denser lithography node, they needed effective cores per unit of area to match the massive MT performance of the Ryzen lineup.

This strategy is limited by TDP, heat, etc.
 