
Intel to Go Ahead with "Meteor Lake" 6P+16E Processor on the Desktop Platform?

I really do understand the use of e-cores in laptops to extend battery life.

And possibly in corporate office desktops in their thousands to reduce power consumption.

But for workstation users - I see no benefit going beyond 4 E-cores to cover standby and background tasks, with the scheduler providing binding. All silicon and second processor sockets should go to P-cores! ie. keep E-cores off Xeon, thank you!

=====
6P + 16E is just nonsense, pure marketing and PR
moar cores
lower power usage per core
show benchmarks? scratch that

Anyone that "needs" 6P cores would get better performance with 8P+8E than 6P+16E, but one says "16 cores" the other says "22 cores" and on retail shelves that might make a difference to Joey and Granddad.
That's not quite right: it's 6/12 + 16E for 28 logical cores versus 8/16 + 8E for 24.
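The logical-core arithmetic above is easy to check; here is a quick sketch, assuming 2-way Hyper-Threading on P-cores and none on E-cores, as on current hybrid parts:

```python
# Logical processor counts for hybrid configs:
# P-cores are 2-way SMT, E-cores are single-threaded.
def logical_threads(p_cores: int, e_cores: int) -> int:
    return p_cores * 2 + e_cores

print(logical_threads(6, 16))  # 28 (6P/12T + 16E, the rumored Meteor Lake config)
print(logical_threads(8, 8))   # 24 (8P/16T + 8E, a 12900K-style config)
```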

Useful but not exactly enticing compared to last generation; no thanks, Intel.
 
Is it the case that it is just too hot for 8 performance cores this time around?

Increased density of transistors and desktop power requirements could be challenging here, but I am curious to know why an 8 performance core design suddenly doesn't work.
 
Just 6 P-cores, IPC must be really good...
 
The E-cores are not without their merits. Essentially, you have more physical cores when running applications that are optimized for multithreaded performance. I don't believe they only excel in benchmarks; they should also deliver in real-life usage, again if you are running software that can utilise all the cores. In games, the E-cores don't contribute much, since they likely only keep the background processes from tapping the P-cores. 6 cores is sufficient for PC games, even with current titles.
What I don't like are,
1. Intel pitched these E-cores as being efficient. They are for sure, but what is the point of 16 E-cores? Efficient or just an excuse to bump performance on the cheap?

2. Intel is effectively selling cheap E-cores at higher prices. Think Raptor Lake, and you can see Intel basically justifying a higher price tag with a bunch of E-cores, whereas you are getting 16 cutting-edge cores from AMD. Again, not to say E-cores are worthless, but one is paying a lot for what should have been some Celeron/Pentium Silver class processors.

I am using an i7 Alder Lake now and used a Comet Lake before, and I am not impressed with Raptor Lake. It seems like Intel is doubling down on this path of spamming E-cores while keeping the cutting-edge and expensive P-cores low in number. Considering that AMD previously patented some P- and E-core CPU designs, I guess it may only be a matter of time before AMD eventually goes down this path.
That's great and all, but 99% of consumer tasks do not need 16 cores hammering away to function. Most stuff will run on a potato. 4 e cores would be plenty for consumers, intel is shoveling in 16 e cores to boost benchmarks, not improve the user experience.

Is it the case that it is just too hot for 8 performance cores this time around?

Increased density of transistors and desktop power requirements could be challenging here, but I am curious to know why an 8 performance core design suddenly doesn't work.
Difficulty in building a larger die when Intel is already struggling to move past 10 nm. 20A or whatever it's called may have terrible yields, and P-cores are big.

I eagerly await people defending this rebranded i5 as the best i9 intel has ever made.
 
That's great and all, but 99% of consumer tasks do not need 16 cores hammering away to function. Most stuff will run on a potato. 4 e cores would be plenty for consumers, intel is shoveling in 16 e cores to boost benchmarks, not improve the user experience.
I never understood that argument. OBVIOUSLY lots of cores are good for - who would have guessed - multithreaded performance. The same applies to AMD cpus. Most people do not need 2xCCD's with 8 cores each. 4 cores would be enough for them as well. So what are you trying to say, that CPUs with lots of cores and lots of multithreaded performance should not exist? I'm not sure I get your point
 
I'm a little surprised there is so much pushback about E-cores. Sure, if Intel released a 10/12 P-core CPU I'd be on Alder Lake/Raptor Lake, but the majority of people I personally know that have 12th/13th gen Intel are very happy with their systems, same with people I've personally done AM5 systems for... Don't get me wrong, I'm not thrilled about Meteor Lake being 6 P-cores, but I'd rather see benchmarks before deciding if it's not for me. My biggest worry is that if it does suck, we will just get AMD pricing 6 cores at 300+ and 8 cores at 400+ again, which is worse than Intel using E-cores imo.
Unless your friends are all geeks who understand computers, 99% of the majority can barely define what a processor is. You mention architecture and "E" cores and they go full DUMMY MODE: ON.
 
So they are pulling another Rocket Lake. With 11th gen, they went from 10 cores down to 8 cores. Now it is 8 cores down to 6 cores.

I hope Intel doesn't plan to call a 6 P-core CPU an i9, or an i7 for that matter. That would make some people look at it with wrinkled foreheads. I would for sure. We must see what they plan to do.
 
I never understood that argument.
That is what happens when you don't think about what you have read.
OBVIOUSLY lots of cores are good for - who would have guessed - multithreaded performance.
Nobody is saying otherwise. If you are running rendering workloads or code compilations, that's fantastic. The keyword here is "CONSUMER" workloads. For home use, most consumer software does not use 16 cores, or 8 cores. Games are the most demanding, and those rarely benefit past 6 cores on their own.
The same applies to AMD cpus. Most people do not need 2xCCD's with 8 cores each. 4 cores would be enough for them as well.
Demonstrably wrong. There is a significant benefit from having 6 cores, and a smaller but still noticeable benefit from 8 cores. A single 8-core CCD does great for consumers, see also the 5800X3D. The 4 cores I am referring to are the "E" cores, not the P cores. Not total cores.
So what are you trying to say, that CPUs with lots of cores and lots of multithreaded performance should not exist? I'm not sure I get your point
You need to work on your reading comprehension skills, and stop coming to conclusions full of whataboutisms or red herrings.

Consumer work loads do not need 16 cores. Having 16 e cores only benefits benchmarks. For your average consumer, running average software in the background, 4 e cores are sufficient to run everything they need without taking up so much silicon room. Focusing on a smaller e core cluster, coupled with a faster ring bus and L3 timings, would do far more for end users than shoveling 16 cores in, and would help with the 1% and 0.1% frametime inconsistencies that occur when a game accidentally shifts back and forth to e cores. 16 cores will not run your Chrome browser any faster, or stream your Spotify faster. For those who run demanding multi-threaded software, the 16 e cores are a great idea; given how many SKUs Intel makes, they could easily make a model for both markets.

Hope this helps :)
 
Is it the case that it is just too hot for 8 performance cores this time around?

Increased density of transistors and desktop power requirements could be challenging here, but I am curious to know why an 8 performance core design suddenly doesn't work.
Meteor lake can cut power consumption by 30-40% compared to RPL with the same clock and number of cores based on the theoretical performance of Intel4, so heat dissipation problems are unlikely. If it is possible, I think the goal is to ensure the same or better benchmark performance as the 13900K while keeping the base die small.

In any case, I also doubt the 6+16E exists; judging by the sales of the RPL and Zen4, I don't see how high-end can sell in a highly inflated world.
MTL's 6P+8E driven at 120W is expected to be close to the performance of a 13700K or 7900X driven at 180W, and that seems sufficient until Arrow lake.
 
Meteor lake can cut power consumption by 30-40% compared to RPL with the same clock and number of cores based on the theoretical performance of Intel4, so heat dissipation problems are unlikely. If it is possible, I think the goal is to ensure the same or better benchmark performance as the 13900K while keeping the base die small.

In any case, I also doubt the 6+16E exists; judging by the sales of the RPL and Zen4, I don't see how high-end can sell in a highly inflated world.
MTL's 6P+8E driven at 120W is expected to be close to the performance of a 13700K or 7900X driven at 180W, and that seems sufficient until Arrow lake.
We heard the same claims about Intel 10 nm, and it has produced some of the most power-hungry designs to date.
 
I find it hard to fathom in 2023 how some would go into battle in favor of reducing the number of CPU cores. These are the same type of people who, after we went to the moon 6 times, turned their gaze back to the primitivism at their feet.
 
Having 16 e cores only benefits benchmarks
The same way the 2nd ccd on the 7950x only benefits benchmarks?

We heard the same claims about Intel 10 nm, and it has produced some of the most power-hungry designs to date.
In what way? Both 12th and 13th gen are way way way way more efficient than anything before that.
 
We heard the same claims about Intel 10 nm, and it has produced some of the most power-hungry designs to date.
Alder lake (12900K) achieves 1.5 times the performance of Rocket lake (11900K) with almost identical power consumption, and 13900K is even 1.5 times the performance of 12900K. The improvement in power efficiency is offset by the increase in the number of cores. The 12900H, with fewer cores, achieves more performance than the 11900K with less than half the power consumption of the 11900K.

Intel was pushing too hard to compete with Ryzen 16-core, but it finally caught up, so AMD raised the power consumption to 230W for the Ryzen 7000 series.
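Taking the post's 1.5x-per-generation figures at face value (they are the post's claims, not independent measurements here), the compounded gain works out as:

```python
# Compounded generational uplift claimed above, at similar package power.
rocket_to_alder = 1.5   # 12900K vs 11900K (claimed)
alder_to_raptor = 1.5   # 13900K vs 12900K (claimed)
print(rocket_to_alder * alder_to_raptor)  # 2.25, i.e. 13900K ~2.25x an 11900K
```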
 
That is what happens when you dont think about what you have read.

Nobody is saying otherwise. If you are running rendering workloads or code compilations, that's fantastic. The keyword here is "CONSUMER" workloads. For home use, most consumer software does not use 16 cores, or 8 cores. Games are the most demanding, and those rarely benefit past 6 cores on their own.

Demonstrably wrong. There is a significant benefit from having 6 cores, and a smaller but still noticeable benefit from 8 cores. A single 8-core CCD does great for consumers, see also the 5800X3D. The 4 cores I am referring to are the "E" cores, not the P cores. Not total cores.

You need to work on your reading comprehension skills, and stop coming to conclusions full of whataboutisms or red herrings.

Consumer work loads do not need 16 cores. Having 16 e cores only benefits benchmarks. For your average consumer, running average software in the background, 4 e cores are sufficient to run everything they need without taking up so much silicon room. Focusing on a smaller e core cluster, coupled with a faster ring bus and L3 timings, would do far more for end users than shoveling 16 cores in, and would help with the 1% and 0.1% frametime inconsistencies that occur when a game accidentally shifts back and forth to e cores. 16 cores will not run your Chrome browser any faster, or stream your Spotify faster. For those who run demanding multi-threaded software, the 16 e cores are a great idea; given how many SKUs Intel makes, they could easily make a model for both markets.

Hope this helps :)
But you are talking as if Intel is going to shove 16 E-cores into EVERY single Meteor Lake CPU. 16 is a maximum, not a baseline. The average consumer can still get the Core i5 that's going to be plenty fast and cheap for his needs.
 
AMD's 8P + 8P (dual-CCD) setup performs similarly to Intel's 8P + 16E setup. The key difference of an E-core is that it is missing features that allow it to reach high clock speeds efficiently, but AMD's extra P-cores rarely use those features, because when all the cores are loaded they operate at a lower clock speed.

Redwood Cove and Crestmont will be new cores on a new node which will probably mean the most practical P:E ratio will be different. So was 6P + 16E chosen because it's the sweetspot or because this is best for a mobile lineup and Intel isn't bringing a dedicated desktop CPU this time? Will it have enough cache for a desktop CPU? Also will Crestmont be a lot faster? Will it support AVX-512?
 
Hi,
Intel selling thermally defective threads as progress :laugh:
 
Anyone thinking that e-cores are useless for gaming, is not thinking about every aspect of a gaming rig.
When I'm gaming with friends (100% of the time), I'm also:
- Talking on Discord
- Streaming on Discord
- Chrome's open
- Downloads running in the background (torrent)
- Possible game updates running on Steam/Origin/Epic ...

Sometimes they drag down gaming performance in isolated game benchmark reviews but who's doing that in real life?
Same goes for HT on AMD.

This review from Tweakers clearly shows how it matters in a real life scenario (especially compared to an i5 CPU without e-cores):

[image: Tweakers gaming + streaming benchmark chart]


Source: https://tweakers.net/reviews/10506/...-4-pijlsnel-of-bloedheet-games-streaming.html
 
The MT performance of 8P+8E is equivalent to 12P+0E, and 6P+16E is equivalent to 14P+0E in MT perf. If someone is using it mainly for rendering and encoding, 6P+16E will work better. And for most people, there will not be much difference.
That’s not what you said/showed in your earlier post with the benchmarks. You had
[screenshot: benchmark comparison table]

If that table is a fair assessment of performance, then 8E is equivalent to 2P in both the ST and MT benchmarks.

So 8P+8E == 10P

And 6P+16E == 10P

However, these are not my benchmarks, nor do I have direct experience of using E-cores. So, I'm not sure I'm able to add anything more insightful until I see or experience more objective performance data.
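As a sanity check, the two competing equivalence claims in this thread reduce to a single ratio: how much MT throughput one E-core delivers relative to one P-core. A sketch, treating that ratio as a fixed constant (which is itself a simplification):

```python
# P-core-equivalent MT throughput: P-cores count as 1, E-cores as e_ratio.
def p_equiv(p: int, e: int, e_ratio: float) -> float:
    return p + e * e_ratio

# The earlier post's figures imply roughly 1E ~ 0.5P:
print(p_equiv(8, 8, 0.5), p_equiv(6, 16, 0.5))    # 12.0 14.0
# The table quoted above ("8E equivalent to 2P") implies 1E ~ 0.25P:
print(p_equiv(8, 8, 0.25), p_equiv(6, 16, 0.25))  # 10.0 10.0
```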

My contribution to this thread is as follows:

  • Horses for courses, let the E-cores be prevalent where they make the greatest impact
  • I see their merit in laptops and low power desktops and certain server applications, e.g. webservers, domain servers, firewall, etc. where there are large n users requiring relatively low-complexity work
  • They don't help workstation or HTPC in any way except reducing power in idle(-ish) situations
  • Thread schedulers are getting more complicated: needing to "know" the content of a thread and allocate them to appropriate p- or e-core otherwise they become horrendously inefficient through misallocation. The scheduler therefore needs to start creating databases of performance logs to "learn" where to allocate certain threads. You probably need a whole e-core just to run the thread scheduling optimisation
  • It's like NetBurst - there is so much thread management / rescheduling that it becomes an overhead burden
  • We are at a point that it might be better to "bind" various applications/threads to single cores, ie. certain activities to e-cores, like background tasks (in the workstation HTPC environment), even though this results in redundant silicon, a bit like sound, network, or GPU silicon sitting idle when not in use
  • Dynamic allocation of threads across asymmetric cores is not easy, and never optimal, and will differ depending on the mix of p- to e- cores, their versions, whether intel or amd etc. Whilst we might get it to "work" now, just imagine the complexity of thread optimisation in 5 years time when there are x versions of p-core, y versions of e-core, and the number of permutations of this mix given relative core counts. It will be a nightmare and will become suboptimal
  • If we like the concept of asymmetric cores, a bit like 8086+8087 FPU from 1980's, then conceptually, this is like having two processor sockets, and stuff one with p-core exclusive, and the other with e-core exclusive. What core-count would you put in each socket? (This is a thought experiment, not actually suggesting they should build it.)
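For what it's worth, the "bind" idea in the list above can already be done from user space on Linux via CPU affinity. The sketch below is a minimal illustration, not how Thread Director actually works, and the E-core CPU ids are an assumption: on a real hybrid chip you would read the topology from /sys/devices/system/cpu/ rather than hard-code them.

```python
import os

ASSUMED_E_CORES = {12, 13, 14, 15}  # hypothetical logical CPU ids for four E-cores

def pin_process(cores: set, pid: int = 0) -> set:
    """Restrict a process (pid 0 = the calling process) to the given CPUs
    and return the resulting affinity mask. Linux-only."""
    os.sched_setaffinity(pid, cores)
    return os.sched_getaffinity(pid)

# Example: pin_process(ASSUMED_E_CORES) before spawning a background task
# such as a torrent client, keeping the P-cores free for the game.
```

Windows exposes the same idea through SetProcessAffinityMask (or Task Manager's affinity dialog), though the whole point of hybrid schedulers is to make this automatic.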
 
That’s not what you said/showed in your earlier post with the benchmarks. You had
[screenshot: benchmark comparison table]

If that table is a fair assessment of performance, then 8E is equivalent to 2P in both the ST and MT benchmarks.

So 8P+8E == 10P

And 6P+16E == 10P

However, these are not my benchmarks, nor do I have direct experience of using E-cores. So, I'm not sure I'm able to add anything more insightful until I see or experience more objective performance data.

My contribution to this thread is as follows:

  • Horses for courses, let the E-cores be prevalent where they make the greatest impact.
  • I see their merit in laptops and low power desktops and certain server applications, e.g. webservers
  • They don't help workstation or HTPC in any way except reducing power in idle(-ish) situations
  • Thread schedulers are getting more complicated: needing to "know" the content of a thread and allocate them to appropriate p- or e-core otherwise they become horrendously inefficient through misallocation. The scheduler therefore needs to start creating databases of performance logs to "learn" where to allocate certain threads
  • It's like NetBurst - there is so much thread management / rescheduling that it becomes an overhead burden
  • We are at a point that it might be better to "bind" various applications/threads to single cores, ie. certain activities to e-cores, like background tasks (in the workstation HTPC environment).
  • Dynamic allocation of threads across asymmetric cores is not easy, and never optimal
  • If we like the concept of asymmetric cores, a bit like 8086+8087 FPU from 1980's, then conceptually, this is like having two processor sockets, and stuff one with p-core exclusive, and the other with e-core exclusive. What core-count would you put in each socket? (This is a thought experiment, not actually suggesting they could build it - although it is no different in concept from CUDA/GPU compute).
Think about why CPU Package Power is listed together: 4P+8E (16T) has the same MT performance at the same power consumption as 8C16T Zen3.

Each case is based on an actual report of CPU Package Power:
12500H (12C16T) @ 45W 11124 (GIGABYTE G5 entertainment mode)
12500H (12C16T) @ 95W 14435 (HP OMEN 16 performance mode)
5700X (8C16T) @ 76W 13802 (TDP65W, PPT76W)
5800X (8C16T) @ 130W 15228 (TDP105W, PPT142W)
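Dividing the quoted scores by the reported package power gives the points-per-watt comparison the post is driving at (taking the figures above at face value; the benchmark itself isn't named in this excerpt):

```python
# Efficiency (score per watt) from the figures quoted above.
results = {
    "12500H @ 45W":  (11124, 45),
    "12500H @ 95W":  (14435, 95),
    "5700X @ 76W":   (13802, 76),
    "5800X @ 130W":  (15228, 130),
}

# Print configs from most to least efficient.
for name, (score, watts) in sorted(results.items(),
                                   key=lambda kv: -kv[1][0] / kv[1][1]):
    print(f"{name}: {score / watts:.0f} pts/W")
```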
 
Anyone thinking that e-cores are useless for gaming, is not thinking about every aspect of a gaming rig.
When I'm gaming with friends (100% of the time), I'm also:
- Talking on Discord
- Streaming on Discord
- Chrome's open
- Downloads running in the background (torrent)
- Possible game updates running on Steam/Origin/Epic ...

Sometimes they drag down gaming performance in isolated game benchmark reviews but who's doing that in real life?
Same goes for HT on AMD.

This review from Tweakers clearly shows how it matters in a real life scenario (especially compared to an i5 CPU without e-cores):

[attachment: Tweakers gaming + streaming benchmark chart]

Source: https://tweakers.net/reviews/10506/...-4-pijlsnel-of-bloedheet-games-streaming.html
When I see your nice list of gaming activities, I feel like I should be in the gamer hall of shame. :shadedshu: I simply just turn on the PC and play a game.
It's nice to see some kind of positive impact from E-cores. I don't have an E-core Intel CPU, and I don't get (and/or forget) what all the fuss is about.
 
Let's just hope it's real 4 nm lithography, at least for the P-cores, and not Intel's lying "4" that's really 14 or 10 nm; we don't need another 250 W furnace, especially now that AVX-512 instructions have been found to be so useful, by Intel no less.
 
Yeah so what, 6+8 is very different than 6+16/16+6. I don't mind some e-cores; I'd prefer to have everything full power, but if they're able to leverage better efficiency and price (by using a smaller die), sure, give me a couple of e-cores that can run things in the background. I'm into virtualization, so I could for example have the e-cores running less demanding stuff and/or the hypervisor while the p-cores do the demanding stuff like run a game or whatever.

But 16 e-cores!? What am I (or anyone) supposed to do with that? Servers will probably eat up this increased reliance on e-cores, it seems like it's the only response Intel has to massive epyc/threadripper core counts, but for client? It just doesn't compute for me
It means I could have many poorly threaded Adobe apps open at once rather than 1 or 2.

Not everything is about gaming.
 
Let's just hope it's real 4 nm lithography, at least for the P-cores, and not Intel's lying "4" that's really 14 or 10 nm; we don't need another 250 W furnace, especially now that AVX-512 instructions have been found to be so useful, by Intel no less.
There's no real 4 nm lithography anywhere. TSMC lies, Samsung lies, and Intel lies, but it doesn't matter. The processes continue to get better even though SRAM scaling has reached a dead end that's hopefully temporary. Intel's 10 nm process, now renamed Intel 7, is equivalent to TSMC's N7. That justifies the name change.
 
I don't know if you misspoke or don't understand even simple arithmetic, but the 6C-12T of Zen3 and the 12T of Alder lake (in any core configuration) are tuned to be basically equal.

For example, in Cinebench, Geekbench, Blender Benchmark, and 7-zip benchmarks, the 1235U (10C12T) and 5600U (6C12T), strictly limited to 15W, shows almost the same MT score, while the 12500H (12C16T) and 5800H (8C16T) show almost the same MT.

P.S.: 12500H can score comparable to 5600X even when turned down to 45W.

Each case is based on an actual report of CPU Package Power:
12500H (12C16T) @ 45W 11124 (GIGABYTE G5 entertainment mode)
12500H (12C16T) @ 95W 14435 (HP OMEN 16 performance mode)
5700X (8C16T) @ 76W 13802 (TDP65W, PPT76W)
5800X (8C16T) @ 130W 15228 (TDP105W, PPT142W)
Yeah I misspoke, meant #cores in the case of AMD.
Still though - 12T vs 12T; so whichever way it goes, on a straight bench that can use every core thrown at it, E cores don't extract a real advantage. 'Tuned to meet' - rather, I'd say, E cores are tuned to meet a TDP target to ensure the chips don't straight up burn in hell. Of course, this happens on the AMD side too, except they're a lot smarter about it now.

But the fact still remains, that on an AMD CPU, you can use every core for every task without scheduling or other shenanigans. The fact is also that in any full-blown, unlimited load the Intels go straight into crazy land wrt power usage, while AMD's recent 7950X3D peaks at half TDP of a top- and even subtop- Intel part.

So that's where we see the real thing. In any limited scenario, Intel can keep up. Remove limitations and the AMD parts deliver peak performance at fantastic efficiency, while the Intel parts start showing their true, excessive TDPs. And the difference, it seems, is mind-blowing.

So the matchup is pretty much equal die space for similar performance, but twice the power usage at peak due to 'Efficient Cores'. Well played, Intel, well played indeed, gullible consumers buying the marketing. And why? So Intel can 'keep up'. Yeah, that Big little sure is a winner on the eternally rehashed Core CPUs, go go.

[images: power consumption charts]


The efficiency chart is hilarious, even. Tell us again Intel didn't get stuck on quad, maybe hexa-core, since forever; the only parts that have any semblance of efficiency in the current day are low core count parts. Apparently mix & match your old crap to make ends meet doesn't quite suffice against actual technological progress :)

[image: efficiency chart]


Anyone thinking that e-cores are useless for gaming, is not thinking about every aspect of a gaming rig.
When I'm gaming with friends (100% of the time), I'm also:
- Talking on Discord
- Streaming on Discord
- Chrome's open
- Downloads running in the background (torrent)
- Possible game updates running on Steam/Origin/Epic ...

Sometimes they drag down gaming performance in isolated game benchmark reviews but who's doing that in real life?
Same goes for HT on AMD.

This review from Tweakers clearly shows how it matters in a real life scenario (especially compared to an i5 CPU without e-cores):

[attachment: Tweakers gaming + streaming benchmark chart]

Source: https://tweakers.net/reviews/10506/...-4-pijlsnel-of-bloedheet-games-streaming.html
You either have sufficient core count or you don't, it's that simple, and it always has been. But then again, there's a lot of blundering going on @ Tweakers, be wary of taking those reviews too seriously. They're Hardware.info level now - bottom of the barrel, up to and including straight up wrong results. I've had my share of experiences. Even prior to HWInfo invading to take over the abysmal review quality, they 'oopsied' on for example The Witcher 3 testing with Hairworks on. Yes, you read that right. It took some heavy complaining from this person to correct that nonsense. Reviewers are liable to speak for the very thing they spoke against less than a month ago, etc. It's a mess.

Also, interesting that you do full blown downloads in background while gaming, that'll be some enjoyable ping!
 