
Intel "Alder Lake-S" Comes in a 6+0 Core Die Variant

Part of me kinda agrees, but I also don't want to see what 2 P-cores + the E-core gang looks like in demanding games like MW2019 (and probably Vanguard and MW2022, since it's the same engine). On every reinstall/major update(?)/driver change, it runs balls-to-the-wall AVX on every core it can get (dual-CCD Ryzen will only fill one CCD, but it's still ridiculous heat). Everything is a stuttery mess until it finishes a few minutes later. Should it be better optimized? Definitely, but you know that's not going to happen. BF1 and (especially) BFV are also aggressive with AVX past 2 cores, so BF2042 is surely going to continue that.

Ngl, kinda frustrated the E-cores play the same role on desktop (i.e. relied on for 90% of the work until something is deemed "deserving" of a P-core). Since 8th gen there's been a huge discrepancy in Intel's MT perf between PL2 and PL1. The E-cores should help bridge that perf gap - but only as a performance reserve, instead of this insane E-cores-all-the-time BS. It's the opposite of AMD - at the slightest twitch or click of the mouse, Zen 3 will aggressively turbo for a responsive experience. Intel did the same, right up until it suddenly decided we should pay P-core money for E-core usage :confused:
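To illustrate the PL1/PL2 thing: the turbo budget is roughly an exponentially weighted moving average of power draw - the chip holds PL2 until the average climbs to PL1, then clamps. A toy model (the numbers below are made up for illustration, not any specific SKU's limits):

```python
# Toy model of Intel's PL1/PL2 turbo budget: the CPU may draw PL2 watts
# until the exponentially weighted moving average of power reaches PL1,
# then it is clamped to PL1. Numbers are illustrative, not any real SKU.
PL1, PL2, TAU = 65.0, 117.0, 28.0   # watts, watts, seconds
dt, avg = 0.1, 0.0                  # timestep (s), running EWMA (W)

for step in range(int(120 / dt)):   # simulate 2 minutes of full load
    draw = PL2 if avg < PL1 else PL1
    avg += (draw - avg) * dt / TAU  # EWMA update
    if step % int(10 / dt) == 0:
        print(f"t={step*dt:5.1f}s  draw={draw:5.1f} W  avg={avg:5.1f} W")
```

Run that and you see the gap: a couple of minutes at PL2, then sustained MT perf falls to whatever PL1 allows.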

Even 14 nm Intel had better idle power than chiplet Ryzen (not the APUs), so chasing idle/low-load power on desktop is a solution in search of a problem - it's the free MT perf that Intel's after with the E-cores. Mobile power obviously always matters, but frankly, for a 12400F-class desktop chip, it's irrelevant. So, 6+0 is great.
It's a compromise, like most things in CPU design. How much performance can you fit on 215 mm² if you have a choice of either 2+32 or 4+24 or 6+16 or 8+8 or 10+0? (The ratios aren't exact - a 4×E cluster is a bit bigger than 1×P - Locuza on Twitter now has exact measurements.)
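For anyone who wants to play with the budgeting themselves, a back-of-envelope sketch - the per-core areas and budget below are placeholder assumptions, not Locuza's measured figures, so swap in his die-shot numbers:

```python
# Back-of-envelope core budgeting. Areas are placeholder guesses, NOT
# Locuza's measurements. One E-cluster = 4 E-cores sharing an L2 slice.
P_AREA = 7.0    # mm^2 per P-core incl. private L2 (assumed)
E_AREA = 8.8    # mm^2 per 4-E-core cluster incl. shared L2 (assumed)
BUDGET = 50.0   # mm^2 reserved for cores in this toy exercise (assumed)

for p_cores in range(9):
    for clusters in range(9):
        area = p_cores * P_AREA + clusters * E_AREA
        # show only configs that fill the budget (can't fit another core)
        if BUDGET - min(P_AREA, E_AREA) < area <= BUDGET:
            print(f"{p_cores}P + {clusters * 4}E -> {area:5.1f} mm^2")
```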
 
Just seems really weird that these SKUs aren't all some configuration of P+E cores. It waters down the entire product line, IMO. I figured they'd do something like 2P+4E for i3, 4P+4E for i5, and 6P and 8P for i7 and i9, respectively. I mean, if you're marketing this, you have to play up the value of the E-cores. Because they are less performant than the P-cores, you'd emphasize energy savings. Leaving them out of the lesser lines makes it look like saving energy is exclusive to the premium buyer. I guess if they did what I mention, we'd end up with performance regressions over 11th gen? Just seems to devalue the purpose and merits of the E-cores, IMO.
 
Could you please use proper terminology instead of "... don't cannibalize sales of ..." in your articles? Thanks.
How would you suggest the author phrase it?

That sentence makes perfect sense to me: the author is postulating that the product segmentation is set up so that the xx400 SKUs don't take away sales from the xx600 SKUs. Not sure what "proper terminology" you're referring to here.
 
Just seems really weird that these SKUs aren't all some configuration of P+E cores. It waters down the entire product line, IMO. I figured they'd do something like 2P+4E for i3, 4P+4E for i5, and 6P and 8P for i7 and i9, respectively. I mean, if you're marketing this, you have to play up the value of the E-cores. Because they are less performant than the P-cores, you'd emphasize energy savings. Leaving them out of the lesser lines makes it look like saving energy is exclusive to the premium buyer. I guess if they did what I mention, we'd end up with performance regressions over 11th gen? Just seems to devalue the purpose and merits of the E-cores, IMO.

I am of the exact same opinion and it damages Alder Lake as an architecture too. Market share matters for the technology going forward. It just looks... messy, not credible.
 
I would like an overclockable 6+0 CPU please, Intel. Thank you.
 
I am of the exact same opinion and it damages Alder Lake as an architecture too. Market share matters for the technology going forward. It just looks... messy, not credible.
It's really no different to Rocket Lake, where the lower-end i3-11xxx chips were just rebadged Comet Lakes. Personally I think it's a good thing these chips exist, as it'll make it a lot easier to differentiate where the performance gains are coming from (do you really need more "energy" cores, or is a large chunk of the claimed gains just IPC gains from the larger cores + DDR5?) in real-world applications and games.
 
Time to stop talking about that Alder Lake, guys. Ark says the new CPUs are "Products formerly Alder Lake". I've already completely forgotten what Alder Lake was; I can barely remember ever hearing that name - it must have been many years ago.
 
Just seems really weird that these SKUs aren't all some configuration of P+E cores. It waters down the entire product line, IMO. I figured they'd do something like 2P+4E for i3, 4P+4E for i5, and 6P and 8P for i7 and i9, respectively. I mean, if you're marketing this, you have to play up the value of the E-cores. Because they are less performant than the P-cores, you'd emphasize energy savings. Leaving them out of the lesser lines makes it look like saving energy is exclusive to the premium buyer. I guess if they did what I mention, we'd end up with performance regressions over 11th gen? Just seems to devalue the purpose and merits of the E-cores, IMO.
I'd be very surprised if Intel haven't tried that during development.
What I suspect it comes down to is which combination gives the best performance to stay competitive vs AMD while still not going absolutely overboard with power draw.

From what I can read from early numbers and rumours, it doesn't look that brilliant for Intel.
Intel seems to have the edge in single-core performance, but multi-core is a different matter. Intel looks able to beat AMD's Ryzen 5800X, maybe the 5900X, and come close to the 5950X ... as long as it can run in its high-power mode, but as soon as the heat goes up and clocks come down, AMD has the upper hand - and it looks like Intel can only do this at significantly higher power draw.

But that is only the impression I get from early numbers, we will soon see what's what when reviews drop come Thursday.
 
Alder Lake NDA lifts on a Monday. I boycott purchases from anyone that lifts NDAs on a Monday...

I want my reviews to drop on the weekend :)
 
Both of these claims are entirely wrong. I'm not allowed to tell you why (yet), but you can take or leave my opinion on it.

There aren't many options in this one.
Intel demands the Win11 scheduler for a reason.

The Win10 scheduler problem must be one of these:

1. Focus on the E-cores and leave the P-cores idle
2. Focus on the P-cores and leave the E-cores idle
3. A complete mess in the scheduler choosing between P- and E-cores

You kinda spoiled the entire picture by saying No. 1 is wrong.

No. 3 seems like a logical guess for why Intel refuses to "leak" any Win10-related scores.
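If it's No. 2 behaviour you want to force by hand, affinity masks get you most of the way there. A minimal sketch with psutil, assuming logical CPUs 0-15 are the P-core threads of an 8P+8E part with HT - that enumeration is an assumption, so verify your SKU's layout first:

```python
import psutil

# Manual workaround for a misbehaving scheduler: pin this process to the
# P-cores only. The 0-15 range assumes an 8P+8E chip where the P-core
# hyperthreads are enumerated first -- check your SKU before relying on it.
P_CORE_CPUS = list(range(16))

proc = psutil.Process()          # current process; pass a PID for another
proc.cpu_affinity(P_CORE_CPUS)   # restrict scheduling to the P-cores
print("now pinned to:", proc.cpu_affinity())
```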
 
There aren't many options in this one.
Intel demands the Win11 scheduler for a reason.

The Win10 scheduler problem must be one of these:

1. Focus on the E-cores and leave the P-cores idle
2. Focus on the P-cores and leave the E-cores idle
3. A complete mess in the scheduler choosing between P- and E-cores

You kinda spoiled the entire picture by saying No. 1 is wrong.

No. 3 seems like a logical guess for why Intel refuses to "leak" any Win10-related scores.
Sounds like Intel's version of FX (the scheduler)! Like when AMD wasn't using SMT - the clustered-multithreading tech AMD used for FX, where the modules were interpreted as full cores by software.
 
Look at Lakefield reviews to see how poorly x86 big.LITTLE performs on Windows 10. I recall one reviewer saying that when running a single-threaded benchmark (Cinebench), the load would jump around across all the CPUs instead of sticking to the one performance core, resulting in poor performance. I can't see how Alder Lake would do much better. Given Lakefield's short lifespan and the extremely limited number of devices that used it (two made production; the Surface Neo never did), Lakefield might have just existed to give MS and Intel something to work with for the new scheduler. It was clearly a bad product that most OEMs never used.
 
Look at Lakefield reviews to see how poorly x86 big.LITTLE performs on Windows 10. I recall one reviewer saying that when running a single-threaded benchmark (Cinebench), the load would jump around across all the CPUs instead of sticking to the one performance core, resulting in poor performance. I can't see how Alder Lake would do much better. Given Lakefield's short lifespan and the extremely limited number of devices that used it (two made production; the Surface Neo never did), Lakefield might have just existed to give MS and Intel something to work with for the new scheduler. It was clearly a bad product that most OEMs never used.
I have no problem throwing the Windows scheduler under the bus for that.
 
No reason to go with Alder Lake if you won't buy a K SKU or pay a high price for something like a 12600 -
better to take Socket 1200 with a 10400 or 11400, and the boards are cheaper too.

In the class where E-cores would be useful (non-K 65 W, 35 W) they don't include them, but at 125 W you get E-cores :kookoo:
Is there any logic at Intel? It seems not.
 
Look at Lakefield reviews to see how poorly x86 big.LITTLE performs on Windows 10. I recall one reviewer saying that when running a single-threaded benchmark (Cinebench), the load would jump around across all the CPUs instead of sticking to the one performance core, resulting in poor performance. I can't see how Alder Lake would do much better. Given Lakefield's short lifespan and the extremely limited number of devices that used it (two made production; the Surface Neo never did), Lakefield might have just existed to give MS and Intel something to work with for the new scheduler. It was clearly a bad product that most OEMs never used.

Lakefield got a boost with Win 11 - only 2% in single-core but about 5.5% in multi-core.

For big.LITTLE the differences between Win10 and 11 are significant, but for other architectures, not so much. We're talking losing 2 FPS out of 187 on Win 11 with Zen 3 - and no, the patch didn't make any difference in the aggregate.

For that matter, Rocket Lake lost 1 FPS. That kind of difference is really within the margin of error (185 vs 187 on the 5900X, and 173 vs 174 on RKL):

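For perspective, those deltas expressed as percentages (numbers straight from the figures above):

```python
# The Win10 -> Win11 FPS deltas quoted above, expressed as percentages.
results = {"Zen 3 (5900X)": (187, 185),  # (Win10 FPS, Win11 FPS)
           "Rocket Lake":   (174, 173)}

for chip, (w10, w11) in results.items():
    print(f"{chip}: {100 * (w11 - w10) / w10:+.1f}%")  # well inside run-to-run noise
```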
 
There aren't many options in this one.
Intel demands the Win11 scheduler for a reason.

The Win10 scheduler problem must be one of these:

1. Focus on the E-cores and leave the P-cores idle
2. Focus on the P-cores and leave the E-cores idle
3. A complete mess in the scheduler choosing between P- and E-cores

You kinda spoiled the entire picture by saying No. 1 is wrong.

No. 3 seems like a logical guess for why Intel refuses to "leak" any Win10-related scores.
Look at Lakefield reviews to see how poorly x86 big.LITTLE performs on Windows 10. I recall one reviewer saying that when running a single-threaded benchmark (Cinebench), the load would jump around across all the CPUs instead of sticking to the one performance core, resulting in poor performance. I can't see how Alder Lake would do much better. Given Lakefield's short lifespan and the extremely limited number of devices that used it (two made production; the Surface Neo never did), Lakefield might have just existed to give MS and Intel something to work with for the new scheduler. It was clearly a bad product that most OEMs never used.

According to Ian, the Win10 problem is that it's not quite as aware as Win11 of "efficiency" in addition to load and capability:


Alder Lake isn't quite the same as Lakefield in that the hardware-based Thread Director means that it technically isn't wholly at the mercy of Microsoft like Lakefield was.

Allegedly, the core switching based on window focus goes away when you use a High Performance power plan. I guess it's time for Intel users to share in the angst of Windows' load balancing shenanigans.

Unfortunately, if Ian is right, the end result looks much the same as Ryzen 3000/5000 and Lakefield. The Windows scheduler still reigns supreme over Thread Director; the latter only "suggests" changes to the former. Same deal as Ryzen, where the first-ranked CPPC core may be significantly better quality, but Windows demands two "first-ranked" cores be available, and will still use Core 0 whenever it pleases, even if CPPC ranks it dead last.

And it looks like Thread Director's logic isn't dynamic (it doesn't learn as it goes with, say, DL), but much like AGESA it can probably be improved continuously through microcode updates. So we could either see both companies focus on long-term firmware support, or Intel goes back to its old ways and Alder Lake quickly falls into neglect in favour of the next "latest and greatest".
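If the power-plan claim holds, flipping to the stock High Performance plan is an easy thing to test. A small sketch - the GUID is Windows' built-in one, and "powercfg /l" lists what's actually present on a given machine:

```python
import subprocess

# Activate the stock Windows "High performance" power plan, then print
# the active scheme to confirm. GUID is the Windows built-in default.
HIGH_PERFORMANCE = "8c5e7fda-e8bf-4a96-9a85-a6e23a8c635c"

subprocess.run(["powercfg", "/setactive", HIGH_PERFORMANCE], check=True)
subprocess.run(["powercfg", "/getactivescheme"], check=True)  # confirm the switch
```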
 
No reason to go with Alder Lake if you won't buy a K SKU or pay a high price for something like a 12600 -
better to take Socket 1200 with a 10400 or 11400, and the boards are cheaper too.

In the class where E-cores would be useful (non-K 65 W, 35 W) they don't include them, but at 125 W you get E-cores :kookoo:
Is there any logic at Intel? It seems not.

The logic is that the stack is built to win benchmarks where it matters, and the rest is lucky to get an IPC bump.

Not bad, but nothing game-changing either, and it speaks volumes about Intel's faith in/long-term plan for this stuff. They're in wait-and-see mode, I reckon, while at the same time it's the best they can come up with.

But then again, mainstream desktop is also not the best view of the market, let's face it. We're in no-man's land here, DIYing with our PCs. No allegiance to anyone. We get scraps and leftovers, which is ADL-S.
 
So, interestingly, I stumbled on something about that latency issue.

Apparently it hits not so much when you do a fresh install of Win 11, but when you subsequently swap in a different AMD CPU.

This may explain why some people are reporting really bad results and others are reporting no or negligible differences.

Edit: Further down, he traces this back to what appears to be the Radeon drivers also affecting Win 11.
So you have to have swapped an AMD CPU and have an AMD GPU, or so it appears right now.

 
So, interestingly, I stumbled on something about that latency issue.

Apparently it hits not so much when you do a fresh install of Win 11, but when you subsequently swap in a different AMD CPU.

This may explain why some people are reporting really bad results and others are reporting no or negligible differences.

Edit: Further down, he traces this back to what appears to be the Radeon drivers also affecting Win 11.
So you have to have swapped an AMD CPU and have an AMD GPU, or so it appears right now.

What a nightmare for reviewers.

Another Windows 11 exclusive.

 
So, interestingly, I stumbled on something about that latency issue.

Apparently it hits not so much when you do a fresh install of Win 11, but when you subsequently swap in a different AMD CPU.

This may explain why some people are reporting really bad results and others are reporting no or negligible differences.

Edit: Further down, he traces this back to what appears to be the Radeon drivers also affecting Win 11.
So you have to have swapped an AMD CPU and have an AMD GPU, or so it appears right now.

Shouldn't the conclusion there be that the issue is with the Adrenalin driver, not Windows 11? Unless there is some proof that Windows 10 with/without display drivers has notably lower latency. I'm also unsure how increased L3 cache and DRAM latency would lead to a 10-20% performance loss in games, which traditionally aren't bound by memory speed.
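For anyone who wants to sanity-check the latency claims on their own install: a crude pointer-chase sketch. Python interpreter overhead swamps the absolute numbers (the charts above were presumably from AIDA64-style tools), but comparing the same box on Win10 vs Win11, or before and after a CPU swap, should still show a relative shift:

```python
import random
import time

def sattolo(n):
    """Random single-cycle permutation: the chase visits every slot once
    before repeating, so short hot-in-cache loops can't form."""
    p = list(range(n))
    for i in range(n - 1, 0, -1):
        j = random.randrange(i)    # j < i guarantees one big cycle
        p[i], p[j] = p[j], p[i]
    return p

N = 1 << 22                        # ~4M slots, far larger than any L3
HOPS = 1_000_000
perm = sattolo(N)

idx = 0
t0 = time.perf_counter()
for _ in range(HOPS):
    idx = perm[idx]                # dependent loads defeat the prefetcher
t1 = time.perf_counter()

print(f"{(t1 - t0) / HOPS * 1e9:.1f} ns per hop (incl. interpreter overhead)")
```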
 
Shouldn't the conclusion there be that the issue is with the Adrenalin driver, not Windows 11? Unless there is some proof that Windows 10 with/without display drivers has notably lower latency. I'm also unsure how increased L3 cache and DRAM latency would lead to a 10-20% performance loss in games, which traditionally aren't bound by memory speed.

Yes, I think that is where he was going. Some kind of interplay between swapping an AMD CPU, the new AMD GPU drivers, and upgrading to Win 11 resulted in a latency penalty.

Tom's did a bunch of tests that showed effectively no difference between Win 11 and Win 10 on Zen 3, but they probably have a cleaner process than the majority of YouTube reviewers - i.e. clean installs, or even separate but identical test rigs differing only in CPU, with a totally clean install on each. YouTubers, having fewer resources, are more likely to be reusing the same gear and taking shortcuts without totally clean installs, so they're the ones stumbling onto this.
 
Shouldn't the conclusion there be that the issue is with the Adrenalin driver, not Windows 11? Unless there is some proof that Windows 10 with/without display drivers has notably lower latency. I'm also unsure how increased L3 cache and DRAM latency would lead to a 10-20% performance loss in games, which traditionally aren't bound by memory speed.
Why the Adrenalin driver? He said the CPU memory latency is what's growing, so it's not Adrenalin-related - it's Windows-related. Somehow, when you swap a CPU, Windows doesn't seem to handle that change correctly. At least that's how I read it.
 
Shouldn't the conclusion there be that the issue is with the Adrenalin driver, not Windows 11? Unless there is some proof that Windows 10 with/without display drivers has notably lower latency. I'm also unsure how increased L3 cache and DRAM latency would lead to a 10-20% performance loss in games, which traditionally aren't bound by memory speed.
Same setup, same process, only Windows 11 had issues.
Why would it be an Adrenalin driver problem?
 
Shouldn't the conclusion there be that the issue is with the Adrenalin driver, not Windows 11? Unless there is some proof that Windows 10 with/without display drivers has notably lower latency. I'm also unsure how increased L3 cache and DRAM latency would lead to a 10-20% performance loss in games, which traditionally aren't bound by memory speed.
That's what happens when you're an AMD-centric channel and use a 6900 XT (I presume) instead of the actual fastest GPU - the 3090.
 
That's what happens when you're an AMD-centric channel and use a 6900 XT (I presume) instead of the actual fastest GPU - the 3090.
The 3090 the fastest?

In 4K, YES
At 1080p, nope
 