
True nature of E-cores and how effective are they?

PLEASE, NO INTEL/AMD FANBOY ANTAGONISM!

What is your opinion on Intel's motivation for introducing E-cores? Some time ago I met a self-confessed AMD jock who suggested Intel's E-cores are just a poor attempt to overshadow AMD's core counts, which can't be matched with performance cores alone. Although my buying decisions are mostly based on benchmarks and price, I have to admit that every time I come across E-cores I end up second-guessing the purpose behind them.

Whether the above statement is true or not, are E-cores effectively achieving what they're designed for?

- If yes, how effectively, and in what types of workloads?

- If no, might this just be down to a poor implementation in ADL, with positive signs going forward (next gen/gens)?
 
While I don't know how Intel's E-core implementation works, they are surely motivated to improve performance-per-watt, which is a super important metric for their datacenter business.

Apple's philosophy is that performance cores handle intensive, latency-sensitive workloads, and efficiency cores handle background and threaded workloads. This M-series SoC philosophy carries over from their long history of doing the same with A-series mobile SoCs. I remember Apple making some sort of claim that their efficiency (Blizzard) cores provide about 70% of the performance of their performance (Avalanche) cores at a fraction of the power.

I surmise that a lot of the actual performance gains are heavily tied to the CPU task scheduler and its ability to accurately assign workloads to the right silicon.
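As a toy illustration of that idea, here's a minimal Python sketch of the kind of heuristic a hybrid-aware scheduler applies. The task names and the classification rule are invented for the example; Intel's actual Thread Director is hardware-assisted and far more sophisticated than this:

```python
# Toy hybrid-core scheduling heuristic (illustration only).
# Real schedulers use hardware feedback, not a two-flag rule like this.

def assign_core(task):
    """Route latency-sensitive foreground work to P-cores,
    background/throughput work to E-cores."""
    if task["foreground"] and task["latency_sensitive"]:
        return "P-core"
    return "E-core"

tasks = [
    {"name": "game render thread", "foreground": True,  "latency_sensitive": True},
    {"name": "game download",      "foreground": False, "latency_sensitive": False},
    {"name": "antivirus scan",     "foreground": False, "latency_sensitive": False},
]

for t in tasks:
    print(f'{t["name"]:>20} -> {assign_core(t)}')
```

The hard part in practice is the classification step: the OS has to guess (or be told by hardware) which bucket a thread belongs in, and a wrong guess puts latency-sensitive work on a slow core.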

Of course, Apple takes this a step further by putting ML cores and GPU cores on the same package, all of them drawing from the same RAM. Apple also has hardware media transcoders, security silicon, and I believe some signal processing stuff. They introduced these on their homegrown T2 Security Chip before they unveiled the M-series silicon.

For Apple, it's important to acknowledge that over 85% of their Mac unit sales are notebook models. They are supremely motivated to offer great performance-per-watt for battery powered devices.

It's important to recognize that datacenter is the fastest-growing business for the big three: Intel, AMD, and Nvidia. Much of their focus on silicon features addresses that business instead of the traditional PC (desktop or notebook) market, which was pretty stagnant before pandemic-driven work-at-home policies temporarily bolstered PC sales.

For sure, Datacenter business growth prospects are also driving Intel's dive into discrete GPU development far more than PC gaming.
 
I believe TPU already has an article on this topic.
 
Intel needed them to compete with AMD's multi-threaded performance.
Intel's cores are less power efficient, and in order to compete they had to keep adding cores or raising clock speeds - and they're already pushing wattages far higher than we've ever seen before.

In order to keep the ST performance and gain MT performance, they added power-efficient cores to boost the MT throughput.


For most users, they're pretty useless. Gamers get no benefit from them, and neither do home users - it's not like Windows will shut down to E-cores only to save power on the desktop or anything; they just kick in to help with heavily threaded workloads.
 
They are space-efficient cores that add multithreaded oomph, plus additional threads for background tasks, for the least amount of die space. There's a piece of silicon (Thread Director) that automatically runs background tasks on them while the foreground application runs on the performance cores.

In practice the implementation works well - especially for a first gen product.

I would disagree with @Mussels - for gamers, they let the P-cores run the game while providing additional threads for background tasks, making the cost to the foreground application much smaller.

12700k maxing the E-cores when downloading a game in the background and still having P-cores for gaming ! : intel (reddit.com)

The end result is that you get the same or better MT and extremely high ST on a less dense and less power-efficient node than TSMC 5/7. So I would say they work well enough that, without them, Intel would not be competitive in the desktop space.
 
What is your opinion on Intel's motivation for introducing E-cores?
A desperate attempt to not lose by a LOT in any kind of multithreaded workload against a 142 W Zen 2 Ryzen from 3 years ago.
More cores are barely possible when 8 cores already pull north of 200 W.

In my opinion, E-cores have absolutely no reason to exist in anything that does not run off a battery.
 
It's worth pointing out that the US federal government has power-efficiency mandates for computing equipment, starting with the power supply unit but realistically encompassing everything in a PC (as well as peripherals) from a holistic viewpoint. It's safe to assume that the feds will continue to push for higher efficiency levels in the future.

Remember that electricity is money. If you have 10,000 people at the General Accounting Office step away from their desktop PCs a couple of hours a day for meals, breaks, meetings, whatever, that's power/money to be saved.

There's an EnergyStar sticker on your monitor for a reason.

Let's say the policy is to leave a desktop PC on 24x7 for maintenance (software updates, security scans) and backup purposes. A full-time employee (220 days a year, 8 hours/day) is only on their computer 20% of the time. Okay, maybe it'll go to sleep/hibernate, but "Wake on LAN" will revive the system. Still, any time a system is idling, it's drawing power while doing tasks in the background. Even if the PCs are shut off on weekends, employees are still at their computers only about a third of the time.
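The utilization figures above work out as follows (the ~261 weekdays per year is my assumption for the weekends-off case):

```python
# Back-of-envelope math for desktop utilization: a full-time employee
# (220 workdays x 8 h) at a PC left on 24x7, vs. one powered off on weekends.

hours_per_year = 365 * 24      # 8760 h in a year
work_hours     = 220 * 8       # 1760 h actually at the keyboard

always_on = work_hours / hours_per_year
print(f"always-on PC in use: {always_on:.0%}")        # ~20% of the time

weekday_hours = 261 * 24       # assumed ~261 weekdays/year, PC off otherwise
weekends_off = work_hours / weekday_hours
print(f"weekends-off PC in use: {weekends_off:.0%}")  # ~28%, roughly a third
```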

And as E-core/P-core silicon becomes more prevalent, it is likely that Microsoft will optimize Windows 11, Windows 12, and beyond to take better advantage of the differences between these cores. How Windows handles E-cores today is unlikely to be how it handles them five years from now. Alder Lake is the first generation of consumer CPUs with this technology, but it's here to stay.

It is foolish to look at Alder Lake and say, "This is how it will be forever." We are already surrounded by other instances of differentiated silicon. CPU E-cores are the latest and most prominent consumer-facing development, but they are certainly neither the first nor the last.

Somewhere in a lab, there's a function being done on prototype silicon that is currently being handled in software by your typical CPU. I don't know what it is but people are working on it. AV1 decoding? Its successor? Its successor's successor?
 
They are space-efficient cores that add multithreaded oomph, plus additional threads for background tasks, for the least amount of die space. There's a piece of silicon (Thread Director) that automatically runs background tasks on them while the foreground application runs on the performance cores.

In practice the implementation works well - especially for a first gen product.

I would disagree with @Mussels - for gamers, they let the P-cores run the game while providing additional threads for background tasks, making the cost to the foreground application much smaller.

12700k maxing the E-cores when downloading a game in the background and still having P-cores for gaming ! : intel (reddit.com)

The end result is that you get the same or better MT and extremely high ST on a less dense and less power-efficient node than TSMC 5/7. So I would say they work well enough that, without them, Intel would not be competitive in the desktop space.

If you need to use Process Lasso and manually screw with things to get the E-cores to help, they're not useful - you could do the same on any regular CPU.
The comments section on that very post says that Windows already does this for dual-CCX Ryzen.


snip for the lazy:
(screenshot of the linked Reddit thread's comments)


Edit: the poster was playing StarCraft II, a game that uses one thread for the game logic and one for graphics rendering.
He said he notices lag when downloading while gaming on an antique DX9 title - because his P-cores lose their boost frequencies when they multitask.
 
I feel like most buyers of ADL seem pretty happy with their purchases, and from the outside looking in, it seems the E-cores accomplished what Intel set out for them to do: boost MT performance without making the CPU consume 400 W. Alder Lake is the first Intel arch that has been remotely exciting in forever, though, and both the 12600K and 12700K are hard to beat on price/performance if you're building a new system.

Personally, I'm not a huge fan, and it will likely be at least Meteor Lake before I feel comfortable supporting Intel's hybrid arch - assuming it's actually better than Zen 4 X3D, or Zen 5 if it slips even further.
 
E-cores IMO should really be focused on laptops/ultra-small/small-form-factor PCs because of power efficiency and cooling limitations. IF and WHEN operating systems AND software become more aware of big.LITTLE-style architecture, it will have relevance in higher-end desktops/workstations etc.

Intel, however, has shoehorned them into high-end desktop chips purely to make up for core-count/efficiency claims, as they are currently fighting a node deficiency and will be two nodes down when Zen 4 releases in <4 weeks.
 
If you need to use Process Lasso and manually screw with things to get the E-cores to help, they're not useful - you could do the same on any regular CPU.
That's poor support by the operating system today, not inferiority of the E-core technology as a concept. In this case, Intel can build it but Microsoft (or the Linux developers) need to properly implement the feature.

I don't recall Apple nailing E-core support the first time around on whatever iPhone SoC it debuted on.

Eventually Microsoft will figure this out. I don't know when but they will.
 
Simplistically, e-cores/e-peen don't do wonders.
 
Intel "Meteor Lake" 2P+8E Silicon Annotated

Using Meteor Lake as an example:

4 E-core threads ~ 4.7 mm², and that's without an AVX-512 unit
4 P-core threads ~ 8.6 mm², including the AVX-512 unit that is disabled on Alder Lake

4 E-core threads @ 4.0 GHz score ~2000 in CPU-Z
4 P-core threads @ 5.0 GHz also score ~2000 in CPU-Z, but they need ~100% more power and close to double the real estate to do so
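Treating those figures as rough community-sourced numbers rather than official specs, the perf-per-area comparison works out like this:

```python
# Back-of-envelope perf/area from the figures quoted above
# (die-area annotation + CPU-Z scores; rough numbers, not official specs).

e_area, e_score = 4.7, 2000   # 4 E-core threads @ 4.0 GHz
p_area, p_score = 8.6, 2000   # 4 P-core threads @ 5.0 GHz, ~2x the power

print(f"E-core score per mm^2: {e_score / e_area:.0f}")   # ~426
print(f"P-core score per mm^2: {p_score / p_area:.0f}")   # ~233

# Same throughput, ~1.8x the silicon (and ~2x the power) for the P-cores:
print(f"area ratio P/E: {p_area / e_area:.2f}")           # ~1.83
```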
 
I think it doesn't matter this generation which CPU you go for; they're going to be very close in performance regardless.

To the OP's original question: are E-cores good at what they're designed for? I think everyone who has used them would say yes. Benchmarks say yes. The fact that Intel is competitive at all points to yes.

People who haven't used them, or those who hate Intel because it was run by a bunch of anti-competitive shysters and BS artists for over a decade, would say no (and I kind of get it).

That being said, if you're on the fence between AM5 and Raptor Lake, I don't think E-cores are really the deciding factor. I would go AM5: you'll get a platform that lasts more than one generation, and when X3D lands - that's where the performance will be.
 
Anecdotal, but my 12900K system is the smoothest and most responsive system I have ever had - and that includes my previous 10900K and 3900X machines - so I guess the E-cores are doing a good job with background tasks.
 

Here is the fundamental model taught to EEs about how much power a transistor uses. It's 16 pages, dense, and definitely needs above-average math skills to make it through.

One of the major formulas is as follows (there are other contributions to power, but I think this is one of the most significant pieces of power consumption; we'll ignore the other bits of the document for simplicity):

P_dynamic ≈ N · C_L · V_dd² · f

(where N is the number of output bits switching per clock tick, C_L the load capacitance, V_dd the supply voltage, and f the clock frequency)


C_L, IIRC, is a function of how small the transistors are. So the smaller the transistor, the less capacitance, and the less power they use. This, along with density (i.e., packing a million transistors into smaller and smaller areas), is why advanced nodes are such a big deal: less capacitance means less power, and more transistors mean more parallelism.

That being said, if we assume the same process, we are stuck as far as capacitance is concerned.

------------

The things we can control as engineers are:

1. Voltage - the lower the voltage, the slower the clock has to run. But notice the square on voltage: 1.5 V will use 2.25x more power than 1 V.

2. Frequency - the higher the frequency, the more power is used. 3000 MHz will use 50% more power than 2000 MHz.

3. Number of outputs - the number of bits that change each clock tick is more of a software thing than a hardware thing. But different hardware designs could use fewer bits per clock tick; they'd just be slower.
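The voltage and frequency knobs above can be sketched numerically. With capacitance fixed by the process, power relative to a baseline scales as (V/V0)² · (f/f0):

```python
# Dynamic-power scaling: P is proportional to C_L * V^2 * f, so with
# capacitance fixed by the node, compare voltage and frequency effects.

def rel_power(v, f, v0=1.0, f0=2000):
    """Power relative to a (v0 volts, f0 MHz) baseline,
    same capacitance and switching activity."""
    return (v / v0) ** 2 * (f / f0)

print(rel_power(1.5, 2000))   # 2.25  -- voltage alone, squared
print(rel_power(1.0, 3000))   # 1.5   -- frequency alone, linear
print(rel_power(1.5, 3000))   # 3.375 -- combined
```

The asymmetry is why undervolting pays off so much more than underclocking: the voltage term is squared while the frequency term is linear.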

Fundamentally, if you're aiming for low power usage, you'll make a dramatically different design than if you're aiming for absolute speed.

---------

E-cores are probably designed to run at much lower clocks, at much lower voltage, with fewer bits changing per clock tick. This dramatically slows down code, but the power decrease is multiplicative. If you can "speed the code up" with multiple threads - say, using 4 cores, each at only ~10% power - you'll use 40% of the power overall, rather than 100% of the power of one core running as fast as possible.

That's the general idea. Of course there are more complications, but this should give you at least an idea of what EEs are thinking with these chip designs. Overall, the physics makes sense, but there's a big question of whether the software will be written correctly for this new model of computation. After all, we already know that it's not possible to turn all algorithms into parallel forms. And in many cases, 4x threads may only lead to a 2x speedup, so the power savings won't be as good as expected.
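The energy side of that trade-off, including the imperfect-scaling caveat, works out like this (the 10% per-core power and 2x speedup are the assumptions from the paragraphs above, not measurements):

```python
# "Many slow cores" energy arithmetic, with the imperfect-scaling caveat:
# 4 E-cores at ~10% of a P-core's power, but only a 2x real-world speedup.

p_power = 1.00   # one fast core flat-out (normalized)
e_power = 0.10   # assumed per-E-core power, from the discussion above
speedup = 2.0    # 4 threads often deliver only ~2x

serial_energy   = p_power * 1.0               # power x (normalized) time
parallel_energy = (4 * e_power) * (1.0 / speedup)

print(f"serial energy:   {serial_energy:.2f}")    # 1.00
print(f"parallel energy: {parallel_energy:.2f}")  # 0.20 -- 5x less, but 2x slower
```

So even when parallel scaling is poor, the quadratic voltage savings can leave the many-slow-cores approach well ahead on energy, just not on latency.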
 
Anecdotal, but my 12900K system is the smoothest and most responsive system I have ever had - and that includes my previous 10900K and 3900X machines - so I guess the E-cores are doing a good job with background tasks.
I mean, sure, but I also felt the same way when I upgraded to my Ryzen 1400 back in the day :p


You can't be sure it's the E-cores that make it smoother, and not just a faster CPU overall with more cores/threads.
 
I appreciate everyone's input on this. I understand that for a gamer, basic business applications, Photoshop, occasional MT video editing/rendering, and general use, the E-cores are uncooperative and hence not a feature worth aspiring to. If correct, this makes things a little easier to understand. Although multithreaded video rendering demands may benefit to some degree, these are just home videos I sometimes work on (maybe once or twice a month).

As someone rightly guessed, I'm looking into E-cores and sharpening up on the know-how prior to pulling the trigger on Z4/RPL. I've been leaning more towards the AM5 socket for that gen-to-gen support, which was a mouthful with AM4, although I'm more than happy to wait and see everything come to fruition in the benchmarks (Zen 4, RPL, and 40-series cards). My 9700K and 2080 system is holding up well, so no hurry!
 
I have used a laptop for a long time now, and for me E-cores are worthless. Sure, I have a fast laptop with 6 P-cores and 8 E-cores, but battery life on Intel laptops is way worse than on laptops with modern AMD processors.

I couldn't wait any longer for the AMD 6000-series processors to come out, so I bought a laptop with an Intel processor. And to this day, not enough laptops have the newer 6000-series APUs, and the ones that do aren't in my market or aren't aimed at my type of customer.
 
Intel's cores are less power efficient, and in order to compete they had to keep adding cores or raising clock speeds... For most users, they're pretty useless. Gamers have no benefits from them, home users have no benefits
But why the general obsession over core counts, threads, E-cores, P-cores, i-cores, x-cores, etc.? Shouldn't we be judging CPUs as a whole, not just parts of them? As one of your fellow Aussies, TechSpot/Hardware Unboxed, has stated many times: "Games don't require a certain number of cores, they never have and they never will. Games require a certain level of CPU performance, it's really that simple."
 
That approach is the future, and AMD will have to adapt.
We only need 6-8 extremely fast cores at ridiculously high frequency, plus a truckload of small ones to kick in when needed (MT).

The all-P-core concept is a dead end.
 
Anecdotal, but my 12900K system is the smoothest and most responsive system I have ever had - and that includes my previous 10900K and 3900X machines.

You can't be sure it's the E-cores that make it smoother, and not just a faster CPU overall with more cores/threads.

I'm currently all Intel (12600K, 9700K, 10400) plus laptops, although 20 years ago it was the exact opposite. For work, I can't tell the difference between any of the CPUs; for gaming, I can't tell the difference between the 9700K and 12600K (the 10400 is only used for work).
 
I swear some of you people love to turn something harmless into something controversial. E-cores are fantastic at managing background processes, leaving more room for the P-cores to stay free for heavier tasks.

This isn't about power usage. It's not about the Skylake-equivalent of these smaller cores being, well, not as good. They're just support cores, in a sense.
 
Hi,
If an OS actually optimizes for these E-core threads, let me know :laugh:
Until then, I'll stick to real cores with two threads, thank you very much :cool:
 
E-cores are only as good as the software support for them (looking at you, MS).
 