
Intel Core i5-13600K

I'm not really understanding how that relates to my post, which was about the difference between P-cores and E-cores within the same CPU in the same system.
 
Is it?

I don't think I've ever seen a reviewer talk about controlling for the many factors that influence power consumption results. It isn't just a simple matter of swapping out the CPU.

Unfortunately, most of these sites and tubers justify unequal platforms by saying something about the 'out of the box' experience. The problem with that logic is they have just switched from analyzing the power characteristics of a *CPU* to evaluating a *motherboard's default settings*.

Even if you use the same motherboard, do we know what its default power limits and V/F curve look like? My Asus TUF, for example, came out of the box completely power-unlocked.

I mean, without knowing all those details, you really don't know anything about what you just saw or what the reviewer did. This is especially true when testing between different CPUs and different vendors (AMD/Intel).

To give an example of what I'm talking about, study these two charts. Yes, that's a 116 W difference for negative performance; even the MSI MAG B660 Tomahawk is drawing 67 W more for about a 1.2% performance loss in CB MC:

View attachment 267085



View attachment 267086

If you're going to compare CB scores, you should also be comparing CB power draw. Unless it's been otherwise proved that AIDA stability and CB scale consistently relative to each other.
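One quick way to make that comparison concrete is to divide the CB score by the measured package power. A minimal sketch; all numbers below are made-up placeholders standing in for the reviewer's figures, not actual measurements:

```python
# Points-per-watt sketch: two boards with near-identical CB MC scores
# but a 67 W gap in power draw diverge sharply in efficiency.
# All figures here are illustrative assumptions, not measured data.
def points_per_watt(score: float, watts: float) -> float:
    return score / watts

board_a = points_per_watt(24000, 160)          # hypothetical baseline board
board_b = points_per_watt(24000 * 0.988, 227)  # ~1.2% slower, +67 W
print(round(board_a, 1), round(board_b, 1))    # 150.0 104.5
```

A roughly 30% efficiency gap from the motherboard alone is exactly the kind of thing a score-only chart hides.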
 
I am not really understanding how that relates to my post, talking about the difference between P-cores and E-cores within the same CPU in the same system.

The same motherboard can give different results with different CPUs, especially from different generations.


If you're going to compare CB scores, you should also be comparing CB power draw. Unless it's been otherwise proved that AIDA stability and CB scale consistently relative to each other.

It's not incumbent on me, the reader, to determine that. The reviewer showed that the motherboards had wildly different power draws with the same CPU under load, while the CB scores were largely the same.

From that I can pretty much conclude that the bulk of the disparity in power draw between two different motherboards comes from the motherboard itself and its configuration.

For example, what was the load-line calibration set to on the CPUs in that comparison? I can instantly overheat my rig and greatly increase power draw by setting it to max. Does anyone ever even mention these settings? Nope.
 
The same motherboard can give different results with different CPUs, especially from different generations.

But it is the same CPU, the 13600K. Three tests: all cores, just P-cores, and just E-cores. Did you not see which video I was quoting? The 12600K comparison there is irrelevant; I was just talking about the differences between the three configurations of the 13600K.


This can actually be tested with the 13700K. One can compare 8P+0E to 6P+8E: gaming performance, productivity performance, and power draw.
 
Wow, gaming performance is really impressive.

But I am actually surprised at the productivity performance. In the 13600K, the E-cores deliver basically 50% of the P-cores' performance at slightly less than 50% of the power. Doesn't that confirm it would be better to have 2 extra P-cores instead of 8 E-cores?
6 P-cores consume 118 W, so 8 would consume about 157 W, and the stock hybrid 13600K consumes 149 W in their testing. What is the point?

This really is a gimmick on desktop, where they do not care about efficiency anyway. Applications that can utilize 20+ threads will get slightly more performance at slightly less power, but that really seems irrelevant.
That's the thing, it IS a gimmick, a gimmick no one asked for that's useful ONLY for laptops.

Now, I understand why Intel is shoving it down everyone's throat: if they didn't put it across their whole desktop product stack, neither OS makers nor developers would give a frick about a laptop-only feature with so many problems, and it would end up a "nice but flawed" feature. This way everyone becomes a beta tester for something only useful for laptops.

Yes, the silicon space taken by all that e-trash would be much better served by huge P-cores. In fact, E-cores should be capped at 4 at most. I mean, aren't they for "background tasks" and "power efficiency"? Then why does a "high-end" CPU have 16 "background" cores and only 8 performance ones? And the further up the stack you go, the P-core count stays stagnant while only the E-core count increases.
It's the P-core count that should increase, leaving the E-cores fixed at 4, maaaybe 6 for the i9.

It makes no sense at all, and it honestly annoys me that reviewers completely ignore these issues.
 
The same here.
With a 4090, the 7600X is faster.

View attachment 266756
The 7600X is still not fast enough, IMO. The 13600K is a little faster than the 7600X, but it's based on a dead-end platform.
I think I'll hold on to my 8700K until the 7800X3D comes out. Investing in the Z790 platform at this point is pretty dumb. Besides, the X3D should beat any CPU in gaming, even the upcoming 13900KS.

Yeah I have the review almost finished
Excellent work you've been doing so far! Please do 4090 coverage with the 13900K and 7950X (and the 13900KS & 7800X3D when they come out).
 
That's the thing, it IS a gimmick, a gimmick no one asked for that's useful ONLY for laptops.
I don't think so. If you can have efficiency cores handle the operating system itself and all of the associated background processes and services, the performance cores can concentrate on handling your heavy-lifting tasks like running your games and such. That essentially means that the performance cores can keep on trucking along with your game or whatever heavy-lifting task you're running instead of having to handle stupid things like your operating system.
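On Linux you can experiment with that split yourself by restricting a process to chosen cores. A minimal sketch, assuming a Linux system; which logical CPU indices map to P-core threads versus E-cores varies per machine, so check `lscpu --extended` before hard-coding any:

```python
import os

# Restrict the current process to a subset of logical CPUs
# (sched_setaffinity is a Linux-only API). On a hybrid chip you'd
# pass the E-core indices here to keep background work off P-cores;
# this demo just pins to one CPU from whatever is currently allowed.
available = sorted(os.sched_getaffinity(0))  # CPUs we may run on now
subset = {available[0]}                      # demo: pin to one logical CPU
os.sched_setaffinity(0, subset)
print(os.sched_getaffinity(0) == subset)     # True
```

Windows exposes the same idea through Task Manager's "Set affinity" dialog, which is how most people test E-core-only or P-core-only behavior by hand.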

I think I'll hold on my 8700K until 7800X3D comes out.
What gets me is why AMD didn't come even close to the amount of cache that the 5800X3D had. If they had, we wouldn't have the kind of issue we have today.
 
What gets me is why didn't AMD come even close to the amount of cache that the 5800X3D version had. If they had, they wouldn't have had the kind of issue that we have today.
I believe they thought releasing only an 8C/16T part with a huge 3D V-Cache was the most sensible choice for gaming. They left the gap between the 7700X and 7900X open on purpose: if the 7700X and 7950X are beaten by the 13700K and 13900K, the 7800X3D can come to the rescue and let them overtake Intel's best flagship in gaming. That could well happen, based on how the 5800X3D kept AMD neck and neck with the 12900K and 12900KS.

The 7800X3D's gaming performance should match, and likely exceed, that of the 13900KS.
 
That's the thing, it IS a gimmick, a gimmick no one asked for that's useful ONLY for laptops.

Now, I understand why Intel is shoving it down everyone's throat: if they didn't put it across their whole desktop product stack, neither OS makers nor developers would give a frick about a laptop-only feature with so many problems, and it would end up a "nice but flawed" feature. This way everyone becomes a beta tester for something only useful for laptops.

Yes, the silicon space taken by all that e-trash would be much better served by huge P-cores. In fact, E-cores should be capped at 4 at most. I mean, aren't they for "background tasks" and "power efficiency"? Then why does a "high-end" CPU have 16 "background" cores and only 8 performance ones? And the further up the stack you go, the P-core count stays stagnant while only the E-core count increases.
It's the P-core count that should increase, leaving the E-cores fixed at 4, maaaybe 6 for the i9.

It makes no sense at all, and it honestly annoys me that reviewers completely ignore these issues.

They can already barely cool eight highly-clocked P-cores. How are they going to deal with more?
 
I don't think so. If you can have efficiency cores handle the operating system itself and all of the associated background processes and services, the performance cores can concentrate on handling your heavy-lifting tasks like running your games and such. That essentially means that the performance cores can keep on trucking along with your game or whatever heavy-lifting task you're running instead of having to handle stupid things like your operating system.

Yes. The E-cores also perform extremely well in many multi-threaded tasks. Alder Lake took back the MT performance crown from Zen 3 largely on the strength of E-cores. Calling them a "gimmick," or "only useful for laptops," is a biiiig stretch.

If all you care about is gaming, then sure, the E-cores don't do a whole lot for you. You're free to disable them or to buy a CPU that doesn't have them, but there's no downside to having them active either. Yeah there were some scheduling hiccups at first, but as far as I know they've been ironed out by now.
 
I don't think so. If you can have efficiency cores handle the operating system itself and all of the associated background processes and services, the performance cores can concentrate on handling your heavy-lifting tasks like running your games and such. That essentially means that the performance cores can keep on trucking along with your game or whatever heavy-lifting task you're running instead of having to handle stupid things like your operating system.


What gets me is why didn't AMD come even close to the amount of cache that the 5800X3D version had. If they had, they wouldn't have had the kind of issue that we have today.
And why would I want efficiency cores for that? Simply put in more performance cores, which handle the same thing with greater performance. On a laptop it makes sense; on a desktop? Nah, just give me more big cores.

If all you care about is gaming, then sure, the E-cores don't do a whole lot for you. You're free to disable them or to buy a CPU that doesn't have them, but there's no downside to having them active either. Yeah there were some scheduling hiccups at first, but as far as I know they've been ironed out by now.
Except they haven't, and there are plenty of downsides: you need to run Windows 11, and even then they're problematic with older software (and even with some newer software), and Zen 4 feels smoother and more responsive.
Also, I'm not going to pay Intel for something I'll have to disable on first boot (it was bad enough with integrated GPUs; at least they added the F CPUs for that). Give me a pure, homogeneous, ultra-high-performance classic CPU; that's what I pay for, not some laptop gimmick.
I don't only care about gaming, but it's my main concern. I do all sorts of stuff, and I'm a very heavy multitasker (think games + browser(s) with 1,400 tabs + WA desktop + PDFs + Illustrator/InDesign + Discord + assorted TSRs, all running at the same time), and I won't downgrade to Windows 11.
 
Well I've never touched Windows 11 and my i7-12700 performs just fine. Take it for what it's worth, but for my money you're obsessing over something that really isn't a big deal.
 
I don't think so. If you can have efficiency cores handle the operating system itself and all of the associated background processes and services, the performance cores can concentrate on handling your heavy-lifting tasks like running your games and such. That essentially means that the performance cores can keep on trucking along with your game or whatever heavy-lifting task you're running instead of having to handle stupid things like your operating system.

You are literally just quoting the Intel marketing that so many people are buying into. This only applies to a situation where a game uses 100% of your P-cores. But guess what? If you add 2 more P-cores, suddenly they can handle those background tasks too, which amount to single-digit percentages of CPU usage.
This is not an issue with multi-core CPUs. Back in the day of slow single-core CPUs, an extra E-core would have made a big difference off-loading all the background stuff to it. But these days common CPUs have 12 or 16 extremely fast threads. Background tasks will not cause stutters or hitches with such CPUs.

They can already barely cool eight highly-clocked P-cores. How are they going to deal with more?

Have you seen the P-core vs. E-core comparison?
6 fully loaded P-cores in the 13600K consume 118 W. How much would 12 P-cores consume? Using simple math, I would guess about 236 W.
How much does the 13900K consume? 300-400 W. With just 8 P-cores. Where does that power consumption come from? From those super efficient E-cores, I guess.
16 P-cores would consume about 315 W (at 13600K clocks, so ~5.1 GHz) using the same simple math. Would that be harder to cool than the actual 13900K, which consumes way more?
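The simple per-core math above can be written out; it assumes power grows linearly with core count at fixed clocks and voltage, which real silicon only approximates:

```python
# Linear per-core extrapolation from the 13600K measurement:
# 118 W across 6 fully loaded P-cores at ~5.1 GHz. Assumes clocks
# and voltage stay fixed as cores are added -- an idealization.
def scaled_power(watts: float, cores: int, target_cores: int) -> float:
    return watts / cores * target_cores

for n in (8, 12, 16):
    print(f"{n} P-cores ~ {scaled_power(118, 6, n):.0f} W")
# 8 P-cores ~ 157 W, 12 ~ 236 W, 16 ~ 315 W
```

In practice, the 13900K's 300 W+ draw comes mostly from pushing clocks and voltage far past this efficient point, so the linear estimate is a floor, not a prediction.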

I get it. E-cores are good for productivity. But my point is that including E-cores in lower SKUs is just marketing (and benchmark scores).
The 13600K would be an even better gaming CPU if it had 8 P-cores and 0 E-cores. It would still beat the 7600X, but I guess it would only match the 7700X instead of beating it in Cinebench.
 
You are literally just quoting Intel's marketing that so many people are buying into. This only applies to a situation where a game uses 100% of your P-cores. But guess what? If you add 2 more P-cores, suddenly they will be able to perform those background tasks too, which consist of single digit percentages when it comes to CPU usage.

This is not an issue with multi-core CPUs. Back in the day of slow single-core CPUs, an extra E-core would have made a big difference off-loading all the background stuff to it. But these days common CPUs have 12 or 16 extremely fast threads. Background tasks will not cause stutters or hitches with such CPUs.
I've not read any marketing material from Intel at all. I'm basing what I said on theories about how the processor handles tasks: each time it changes task it has to do what is known as a context switch, and every time a processor does one of those, you lose five to ten clock cycles.

Again, I've not read a single damn line of Intel marketing. I'm simply basing it on my own theories.
 
You are literally just quoting Intel's marketing that so many people are buying into. This only applies to a situation where a game uses 100% of your P-cores. But guess what? If you add 2 more P-cores, suddenly they will be able to perform those background tasks too, which consist of single digit percentages when it comes to CPU usage.
This is not an issue with multi-core CPUs. Back in the day of slow single-core CPUs, an extra E-core would have made a big difference off-loading all the background stuff to it. But these days common CPUs have 12 or 16 extremely fast threads. Background tasks will not cause stutters or hitches with such CPUs.



Have you seen the P-core vs. E-core comparison?
6 fully loaded P-cores in the 13600K consume 118 W. How much would 12 P-cores consume? Using simple math, I would guess about 236 W.
How much does the 13900K consume? 300-400 W. With just 8 P-cores. Where does that power consumption come from? From those super efficient E-cores, I guess.
16 P-cores would consume about 315 W (at 13600K clocks, so ~5.1 GHz) using the same simple math. Would that be harder to cool than the actual 13900K, that consumes way more?

I get it. E-cores are good for productivity. But my point is that including E-cores in lower SKUs is just marketing (and benchmark scores).
The 13600K would be an even better gaming CPU if it had 8 P-cores and 0 E-cores. It would still beat the 7600X, but I guess it would only match the 7700X instead of beating it in Cinebench.

I've read a handful of P-vs-E articles, but can't remember/find one specifically about power. If you've got a link, I'd definitely be interested to read (or possibly re-read) it.

Let's operate for the moment on the assumption that 12 P-cores (the number that would hypothetically fit on the die, based on an eyeball analysis of the below diagram) would operate within the same thermal envelope. We now have a 12c/24t chip instead of a 24c/32t one and have lost thread-count parity with AMD's top desktop model. Would it be functionally superior? Maybe. The number of MSDT use cases where that's true is fairly small, I'd wager. But it could easily be mostly about the marketing. In any case, we don't have a 12 P-core RPL processor to prove any of this.

[attached image: 1666802718338.png — die diagram]
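The eyeball math behind that 12-core figure is just an area swap. A sketch, assuming the common rule of thumb that a 4-E-core cluster occupies roughly the die area of one P-core (an assumption, not a measured figure):

```python
# Hypothetical area swap on the 13900K: trade its E-core clusters
# for P-cores, assuming one P-core ~ one 4-E-core cluster in area
# (a rough rule of thumb, not a die-shot measurement).
p_cores, e_cores = 8, 16
freed_p_cores = e_cores // 4       # clusters freed -> extra P-cores
print(p_cores + freed_p_cores)     # 12
```

That same ratio is why Intel's MT pitch works: the 16 E-cores deliver well over 4 P-cores' worth of throughput in the area 4 P-cores would occupy.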
 
I've read a handful of P-vs-E articles, but can't remember/find one specifically about power. If you've got a link, I'd definitely be interested to read (or possibly re-read) it.

Let's operate for the moment on the assumption that 12 P-cores (the number that would hypothetically fit on the die based on an eyeball analysis of the below diagram) would operate within the same thermal envelope. We now have a 12c/24t chip instead of a 24c/32t and have lost thread-count parity with AMD's top desktop model. Would it be functionally superior? Maybe. The number of MSDT use cases where that's true is fairly small, I'd wager. But it could easily be mostly about the marketing. In any case, we don't have a 12 P-core RPL processor to prove any of this.

View attachment 267341


Alder Lake's design would have started 5 years before release, meaning 2016. It's unlikely that Intel decided to use E-cores as a response to anything regarding process nodes or high-core-count competition.

Intel and AMD are both on different tracks to reach the same destination. They are both trying to get more efficiency while increasing compute capacity.

AMD used chiplets in its first phase, Intel used hybrid.

Intel's next step is disaggregation: chiplets, just without separating the compute chiplet the way AMD did (yet).

AMD's next phase is likely to have different types of cores on a chiplet, and to mix those chiplets. These would be like E-cores and P-cores.

Linux patches to support this on AMD systems were released early this year.



 
Alder Lake would have started 5 years before release, meaning 2016. It's unlikely that Intel was deciding to use E-Cores as a response to anything regarding process nodes or high core count competition.

Intel and AMD are both on different tracks to reach the same destination. They are both trying to get more efficiency while increasing compute capacity.

AMD used chiplets in its first phase, Intel used hybrid.

Intel's next step is disaggregation - chiplets just not separating the compute chiplet like AMD did (yet).

AMDs next phase is likely to have different types of cores on a chiplet, and mixing those chiplets. These would be like e-cores and p-cores.

Linux patches to support this on AMD systems were released early this year.




Not disputing any of that. The point was supposed to be that a hypothetical RPL CPU that uses the E-core die space for P-cores wouldn't be the ball of amazing that some like to think. Or maybe it would. I'm no CPU expert.
 
And also, i'm not going to pay intel for something i will have to disable on first boot(it was bad enough with integrated gpus, at least they added the F cpus for that)
Wait till you hear what AMD forces you to pay for on new 7000-series CPUs (or are they APUs now?) :roll:
 
Wait till you hear what AMD forces you to pay for on new 7000-series CPUs (or are they APUs now?) :roll:
The "barely 3D" integrated graphics are actually a good thing, especially for troubleshooting. If they offered a cheaper non-GPU version like Intel's F series, it would be great.
Also, several gimped, unwanted cores are hardly the same as a block that does something no other part of the chip can do.
 
Not disputing any of that. The point was supposed to be that a hypothetical RPL CPU that uses the E-core die space for P-cores wouldn't be the ball of amazing that some like to think. Or maybe it would. I'm no CPU expert.

I was just providing context, because there is a tale being spun in this same thread that Intel spammed E-cores in response to Zen.

The first Zen came out in 2017, a year after Alder Lake was started (2016). It wasn't on chiplets, topped out at 8/16, was on an inferior process node, and didn't perform nearly as well as its Coffee Lake competitor (8700K) despite its 2-core advantage. This persisted even with Zen+ on TSMC "12nm" vs. 9th gen (9900K), which went to 8/16 cores.

It really wasn't until Zen 2 and its chiplets that core counts shifted significantly (3900X / 3950X), and that was 2019, 3 years after Alder Lake would have been started.

So no, Alder Lake and its E-cores are not a knee-jerk reaction to AMD's more-cores strategy. They were in the pipeline long before that. If there was such a reaction to AMD's Zen 2 surprise, it was probably 10th Gen.
 
This review has been updated with new performance numbers for the 13900K. Due to an OS issue the 13900K ran at lower than normal performance in heavily multi-threaded workloads. All 13900K test runs have been rebenched (the 13900K review has been updated, too).
 
Wizzard,
is there a way for you to add P-core-only results to some benchmarks, preferably on Win 10? (Tangent: would it also be possible to do a Win 10 vs. Win 11 analysis piece with RKL and Zen 4, now that Win 11 has been out for quite some time?)
 
Best for gaming, and it also has good app performance. I'm not sure it's worth switching from my 12600K, though. It seems more logical to wait for the 14th generation.
 