
Intel Core 12th Gen Alder Lake Preview

How exciting this is. I don't feel the need to upgrade my 10700KF, but I want to get one of these all the same!

Wizzard, if you feel too tired or worn out from doing the review, send the hardware my way, I'll .....review it..... for you. ;)
 
Are you seriously suggesting Intel did this on purpose? That's an issue for AMD and Microsoft to fix, and those tests were run on Oct 1st. Even Intel could have improved some things with BIOS updates and the like between then and now.
Yes absolutely, chipzilla knows exactly what it is doing. They could have benched with Windows 10 but chose to put out false results to make their new chips look even better. AMD probably would have done the same thing if the shoe was on the other foot.
 
It's 4 E-cores vs 1 P-core in the same area, at the same 5.2 GHz clocks. If the P-core has 1.27x the IPC and gains 1.25x from Hyper-Threading, that's 4x throughput from the E-core cluster vs roughly 1.6x from the single P-core at the same power.
I don't think it's quite that simple - the E core cluster is visibly wider than a P core in the die shot Intel used for their review packaging, and while it is a bit smaller in the other dimension, I would think the overall area is a bit larger. Still far more perf/area/clock, of course. But I sincerely doubt these cores can reliably clock to 5.2 GHz. Given that they are designed for efficiency, it would make sense for less effort to be put into making them clock far beyond 4 GHz, and it's quite common for architectures to hit hard clock limitations anywhere from 3 GHz to 5 GHz - so they might not go much higher than the 3.9 GHz they are rated at on the 12900K.
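For what it's worth, here is a minimal sketch of that throughput arithmetic, assuming the quoted 1.27x IPC and 1.25x Hyper-Threading factors and normalizing one E-core to 1.0 (the equal-area and equal-clock premises are the poster's assumptions above, not confirmed figures):

```python
# Throughput comparison under the assumptions quoted above:
# 4 Gracemont E-cores vs 1 Golden Cove P-core, same die area, same clock.
E_CORE_THROUGHPUT = 1.0      # baseline: one E-core
P_CORE_IPC_FACTOR = 1.27     # claimed per-clock advantage of the P-core
P_CORE_SMT_FACTOR = 1.25     # claimed extra throughput from Hyper-Threading

e_cluster = 4 * E_CORE_THROUGHPUT                 # -> 4.0x
p_core = P_CORE_IPC_FACTOR * P_CORE_SMT_FACTOR    # -> ~1.59x

print(f"4 E-cores: {e_cluster:.2f}x  |  1 P-core (HT on): {p_core:.2f}x")
print(f"E-core cluster advantage: {e_cluster / p_core:.2f}x")  # ~2.5x
```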
Yes absolutely, chipzilla knows exactly what it is doing. They could have benched with Windows 10 but chose to put out false results to make their new chips look even better. AMD probably would have done the same thing if the shoe was on the other foot.
Given that W10 lacks the scheduler to handle ADL properly, that would have been quite problematic though - you try to make an even playing field, and only test across OSes if you really have to. Still, they had time to re-test, especially with the Insider patch being out for a while now. I think this is "opportunistic laziness" rather than malevolence though. "Oops, we didn't have time to fix that before our huge event, sorry" is far more likely than some grinning PR Disney villain being behind it.
 
Win10 will more likely use only the P cores and ignore the E ones, at least until MS updates the CPU scheduler there as well. For now the big push for their Win10.5 = 11 OS is only on the ADL platform (maybe that's why they collaborated heavily with Intel and messed up Ryzen CPU performance), so they will leave Win10 on the back seat for a while, methinks.
 
I'm not sure I understand the need for E cores on a desktop. Laptops I understand; there it's all about efficiency. But desktops just want raw power.

Looking forward to reviews
I think it is about marketing. Intel thinks most people are too dumb to know the difference and buy CPUs based solely on core counts now. They can say "look, we have 16 cores too" and at the same time profess their advantage in gaming, all without making a GPU-sized processor with a 500 W peak TDP.

Given that W10 lacks the scheduler to handle ADL properly, that would have been quite problematic though - you try to make an even playing field, and only test across OSes if you really have to. Still, they had time to re-test, especially with the Insider patch being out for a while now. I think this is "opportunistic laziness" rather than malevolence though. "Oops, we didn't have time to fix that before our huge event, sorry" is far more likely than some grinning PR Disney villain being behind it.
You are right, I totally forgot about that. So it's not 100% malicious and devious. I'm not planning on embracing MS spyware OS 2.0, so I guess no 12xxx CPU for me. Maybe on Linux instead...

Am I the only one wondering what an all E-core chip would be like? At that size, with those power requirements, and only 1% slower than Skylake. They could have made a 40-core chip in the same die area (this assumes 1P = 4E: 8P x 4 = 32, plus the 8 existing E cores = 40 cores). Maybe that's a good server strategy for them.
 
Win10 will more likely use only the P cores and ignore the E ones, at least until MS updates the CPU scheduler there as well. For now the big push for their Win10.5 = 11 OS is only on the ADL platform (maybe that's why they collaborated heavily with Intel and messed up Ryzen CPU performance), so they will leave Win10 on the back seat for a while, methinks.
You think W10 will get the new scheduler? Call me a pessimist, but I doubt it.
Am I the only one wondering what an all E-core chip would be like? At that size, with those power requirements, and only 1% slower than Skylake. They could have made a 40-core chip in the same die area (this assumes 1P = 4E: 8P x 4 = 32, plus the 8 existing E cores = 40 cores). Maybe that's a good server strategy for them.
Nope, others have asked more or less the same question. 40 cores with that little L3, 2 MB of L2 per 4 cores, and only a single link to the L3 and ring bus per four cores would likely be a pretty mixed bag in terms of performance though, and many server workloads want tons of cache. The 4-core cluster would be kind of like an L3-less CCX, but on a monolithic die with a single, small L3... I could definitely see this being done in the server space, but they might also change the clusters (2-core clusters? Single cores? Four, but with more shared L2?) and stick them in a mesh for really high core counts.
 
I think it is about marketing. Intel thinks most people are too dumb to know the difference and buy CPUs based solely on core counts now. They can say "look, we have 16 cores too" and at the same time profess their advantage in gaming, all without making a GPU-sized processor with a 500 W peak TDP.

It's funny to watch AMD fans talk about marketing a CPU based only on core count.

So tell me, serious question, was there a worse x86 processor than Zen 2 released in the past 4 years for games?

And how many AMD fans here and elsewhere pushed Zen 2 onto gamers for its future proofing due to high core counts?
 
It's funny to watch AMD fans talk about marketing a CPU based only on core count.

So tell me, serious question, was there a worse x86 processor than Zen 2 released in the past 4 years for games?

And how many AMD fans here and elsewhere pushed Zen 2 onto gamers for its future proofing due to high core counts?
Not everyone who dislikes the way Intel has historically operated their business is an AMD "fan". I buy strictly on value.

This may shock you but gaming isn't the only purpose for a high-performance processor.

I still believe core counts will matter more for gaming in the coming years. The current generation of consoles embracing 8 core processors will have an impact.
 
Not everyone who dislikes the way Intel has historically operated their business is an AMD "fan". I buy strictly on value.
If the company has 'Inc' at the end it is fundamentally evil. See Google.

This may shock you but gaming isn't the only purpose for a high-performance processor.

I still believe core counts will matter more for gaming in the coming years. The current generation of consoles embracing 8 core processors will have an impact.

That's called a red herring and goalpost shifting. I was responding to a post about Intel core counts and #1 gaming performance as marketing hijinks. We weren't talking about running CineBench. I don't run that but I do run a lot of MS Office apps and browser based apps and...

Oh wait!

[screenshot attachments]
 
Now that would be telling, wouldn't it?
8 P cores running an application vs 8 E cores running an application, with information on the power consumption difference between the two.

Theoretically, 1.27x IPC * (5 GHz / 3.9 GHz) * 1.25x Hyper-Threading = 2.03x, i.e. 8 P cores running 16 threads should deliver about twice the multithreaded performance of 8 E cores running 8 threads.
Yes it would. Great minds think alike. ;)
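A quick sanity check on that estimate - this is purely the arithmetic from the post above; the 5 GHz / 3.9 GHz clocks and the 1.27x / 1.25x factors are assumptions from this thread, not measured results:

```python
# Multithreaded estimate: 8 P-cores (16 threads) vs 8 E-cores (8 threads),
# using the IPC, clock, and SMT scaling factors assumed in the post above.
ipc_ratio = 1.27          # Golden Cove vs Gracemont, clock for clock
clock_ratio = 5.0 / 3.9   # assumed all-core clocks, P-cores vs E-cores
smt_gain = 1.25           # assumed Hyper-Threading uplift on the P-cores

p_over_e = ipc_ratio * clock_ratio * smt_gain
print(f"8 P-cores (HT on) vs 8 E-cores: ~{p_over_e:.2f}x")  # ~2.04x (the post rounds to 2.03x)
```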
 
[Intel slide: Gracemont vs Golden Cove performance comparison]


If Intel is telling the truth (which is very doubtful) - Gracemont is only 20% behind Golden Cove?! Damn, 8x Gracemont is looking like a very impressive processor for budget laptops. Hell, even budget gaming laptops with RTX 3050/3060s would be well served by 8 Gracemont cores.


They're not - the frequency for Golden Cove is still almost 2x Gracemont's (hence the clock-for-clock comparisons).

So you still need nearly twice as many cores to match Golden Cove! The real-world performance is about 60% of Golden Cove's.
 
How exciting this is. I don't feel the need to upgrade my 10700KF, but I want to get one of these all the same!

Wizzard, if you feel too tired or worn out from doing the review, send the hardware my way, I'll .....review it..... for you. ;)
me first.

you do need the upgrade....riiiiiiight? :laugh:
 
They're not - the frequency for Golden Cove is still almost 2x Gracemont's (hence the clock-for-clock comparisons).

So you still need nearly twice as many cores to match Golden Cove! The real-world performance is about 60% of Golden Cove's.
2x? 5.1 vs 3.9 GHz is a 30.8% advantage (or a 23.5% disadvantage). You're right other than that, but it's nowhere near 2x.
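For reference, here is the arithmetic behind those percentages and the earlier "~60% of Golden Cove" estimate, assuming Intel's claimed ~20% clock-for-clock gap and the 5.1 / 3.9 GHz rated clocks mentioned in this thread:

```python
# Clock-ratio arithmetic behind the percentages above.
p_clock = 5.1   # GHz, Golden Cove boost clock cited in the thread
e_clock = 3.9   # GHz, Gracemont boost clock cited in the thread

advantage = p_clock / e_clock - 1       # ~0.308 -> 30.8% P-core clock advantage
disadvantage = 1 - e_clock / p_clock    # ~0.235 -> 23.5% E-core clock disadvantage
print(f"P-core clock advantage:    {advantage:.1%}")
print(f"E-core clock disadvantage: {disadvantage:.1%}")

# Combining Intel's claimed ~20% per-clock deficit with the clock gap:
per_clock_ratio = 0.80                  # Gracemont vs Golden Cove, clock for clock
real_world = per_clock_ratio * e_clock / p_clock
print(f"Estimated real-world Gracemont performance: {real_world:.0%} of Golden Cove")  # ~61%
```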
 
If the company has 'Inc' at the end it is fundamentally evil. See Google.



That's called a red herring and goalpost shifting. I was responding to a post about Intel core counts and #1 gaming performance as marketing hijinks. We weren't talking about running CineBench. I don't run that but I do run a lot of MS Office apps and browser based apps and...

Oh wait!

[screenshot attachments]

It's not shifting goalposts in the argument about core count. Chronology matters and the market has shifted.

- Prior to Zen, quad core was where Intel sat and stayed for years. Gaming performance relied on single-thread efficiency, and the market ran stuff on DX9-11. The hardware was clearly fighting a losing battle against increasing gaming demands. Threading was fundamentally problematic for the industry. Nvidia won GPU comparisons in part by virtue of higher single-thread efficiency with an API like DX11 (CPU driver overhead).

- Leading into Zen we had the Mantle and DX12 initiatives, finally making the API more flexible for threading.

- Consoles adopted x86

- Zen and post-Skylake generations started pushing core counts to 8+ for MSDT.

- Gaming is now once more as it was during the Sandy Bridge days - except now both camps offer highly capable CPU stacks for it. You can game perfectly fine on anything midrange and up, even from yesteryear. This is the norm that Sandy Bridge and its successors established and that turned Intel into the gaming king, and it readily applies to AMD's latest 2-3 generations of Zen.

So right now, for desktop gaming purposes, anything works and new releases barely matter. The new ground covered here is higher peak performance for HEDT-like, more parallel workloads. Again - both ADL and Zen are perfectly tuned for that, albeit different under the hood, and DDR5 is the big enabler for both to take it further.
 
It's not shifting goalposts in the argument about core count. Chronology matters and the market has shifted.

- Prior to Zen, quad core was where Intel sat and stayed for years. Gaming performance relied on single-thread efficiency, and the market ran stuff on DX9-11. The hardware was clearly fighting a losing battle against increasing gaming demands. Threading was fundamentally problematic for the industry. Nvidia won GPU comparisons in part by virtue of higher single-thread efficiency with an API like DX11 (CPU driver overhead).

- Leading into Zen we had the Mantle and DX12 initiatives, finally making the API more flexible for threading.

- Consoles adopted x86

- Zen and post-Skylake generations started pushing core counts to 8+ for MSDT.

- Gaming is now once more as it was during the Sandy Bridge days - except now both camps offer highly capable CPU stacks for it. You can game perfectly fine on anything midrange and up, even from yesteryear. This is the norm that Sandy Bridge and its successors established and that turned Intel into the gaming king.

So right now, for desktop gaming purposes, anything works and new releases barely matter. The new ground covered here is higher peak performance for HEDT-like, more parallel workloads. Again - both ADL and Zen are perfectly tuned for that, albeit different under the hood, and DDR5 is the big enabler for both to take it further.

It's goalpost shifting when someone is talking about one topic / set of metrics - gaming performance and core counts - and someone chimes in:

  • "This may shock you but gaming isn't the only purpose for a high-performance processor."
    • The above is a goalpost shift; we aren't talking about productivity apps - and those were never a slam dunk with Zen 2 either
  • "I still believe core counts will matter more for gaming in the coming years. The current generation of consoles embracing 8 core processors will have an impact."
    • This is a red herring, talking about what *may* be; it's exactly the same type of statement that had a bunch of people buying Zen 1 and Zen 2 *for gaming*.
    • Reality check - they are the two worst series of CPUs to have bought in the last 4 years *for a gamer*.
 
It's goalpost shifting when someone is talking about one topic / set of metrics - gaming performance and core counts - and someone chimes in:

  • "This may shock you but gaming isn't the only purpose for a high-performance processor."
    • The above is a goalpost shift; we aren't talking about productivity apps - and those were never a slam dunk with Zen 2 either
  • "I still believe core counts will matter more for gaming in the coming years. The current generation of consoles embracing 8 core processors will have an impact."
    • This is a red herring, talking about what *may* be; it's exactly the same type of statement that had a bunch of people buying Zen 1 and Zen 2 *for gaming*.
    • Reality check - they are the two worst series of CPUs to have bought in the last 4 years *for a gamer*.

The real reality check that I am pointing at is to stop looking at what niches of gaming still offer tangible benefits and stop diving into what is nearly margin-of-error territory... and instead zoom out a little bit and see what is new to the MSDT market with the current advances. The notable advances of ADL don't really point to better gaming in any way. Not the increased core count, not DDR5, nor the better MT efficiency.

Good gaming was never about the highest benchmark result; the GPU is pivotal, while the CPU, if good enough, is simply irrelevant beyond that point. Zen from the 2xxx series onwards offered that - and here is the kicker - alongside a set of other USPs, like (initially) price, and core count.

It's good to recognize that, because the mainstream market is going to move in kind. You only upgrade a CPU when it starts to show its age.
 
The real reality check that I am pointing at is to stop looking at what niches of gaming still offer tangible benefits and stop diving into what is nearly margin-of-error territory... and instead zoom out a little bit and see what is new to the MSDT market with the current advances.

Good gaming was never about the highest benchmark result; the GPU is pivotal, while the CPU, if good enough, is simply irrelevant beyond that point. Zen from the 2xxx series onwards offered that - and here is the kicker - alongside a set of other USPs, like (initially) price, and core count.

It's good to recognize that, because the mainstream market is going to move in kind. You only upgrade a CPU when it starts to show its age.

The pre-Zen 3 CCX architecture was the worst choice for gaming; that's all I said. This isn't philosophy: if you choose a CPU for gaming and that choice becomes hopelessly obsolete in under a year, then you made a bad choice. The main thing that kept the Zen 2 gaming situation from being in everyone's face was COVID and the GPU price jump / scarcity.

Recent benchmarks don't even list Zen 2 on the charts anymore - gen 9 is usually bottom of the list. And here's why -

The difference is not margin of error; that's something people who can't read a chart repeated until they all believed it (a feedback loop).

This is with a 3080. The contemporaries of Zen 2 were the gen 9 and gen 10 Intel CPUs, and here you have 15% higher FPS with a 9900K vs a 3900X and almost 20% higher with a 10900K.

Zen 2 was mostly fine with 2XXX series Nvidia cards but that fell apart in the space of 12 months. Again, Zen 1 and Zen 2 were demonstrably two of the worst CPUs one could have bought for gaming in the past 3-4 years.

[benchmark chart attachment]
 
The pre-Zen 3 CCX architecture was the worst choice for gaming; that's all I said. This isn't philosophy: if you choose a CPU for gaming and that choice becomes hopelessly obsolete in under a year, then you made a bad choice. The main thing that kept the Zen 2 gaming situation from being in everyone's face was COVID and the GPU price jump / scarcity.

Recent benchmarks don't even list Zen 2 on the charts anymore - gen 9 is usually bottom of the list. And here's why -

The difference is not margin of error; that's something people who can't read a chart repeated until they all believed it (a feedback loop).

This is with a 3080. The contemporaries of Zen 2 were the gen 9 and gen 10 Intel CPUs, and here you have 15% higher FPS with a 9900K vs a 3900X and almost 20% higher with a 10900K.

Zen 2 was mostly fine with 2XXX series Nvidia cards but that fell apart in the space of 12 months. Again, Zen 1 and Zen 2 were demonstrably two of the worst CPUs one could have bought for gaming in the past 3-4 years.

[benchmark chart attachment]

While I see the point you are trying to make, calling it the worst for gaming in 3-4 years while showing it doing 200 FPS is a bit of a stretch.

I would believe that more if it were a matter of going from unplayable FPS to playable. Yes, prior to Zen 3 Intel provided more FPS, but some people were fine with that. A side-by-side test between the 10900K in the chart and the 3900X in the chart would not be noticeable to anyone while playing without an FPS counter up.

How deep does this argument need to go when you are talking about 200+ fps from both chips?
 
The pre-Zen 3 CCX architecture was the worst choice for gaming; that's all I said. This isn't philosophy: if you choose a CPU for gaming and that choice becomes hopelessly obsolete in under a year, then you made a bad choice. The main thing that kept the Zen 2 gaming situation from being in everyone's face was COVID and the GPU price jump / scarcity.

Recent benchmarks don't even list Zen 2 on the charts anymore - gen 9 is usually bottom of the list. And here's why -

The difference is not margin of error; that's something people who can't read a chart repeated until they all believed it (a feedback loop).

This is with a 3080. The contemporaries of Zen 2 were the gen 9 and gen 10 Intel CPUs, and here you have 15% higher FPS with a 9900K vs a 3900X and almost 20% higher with a 10900K.

Zen 2 was mostly fine with 2XXX series Nvidia cards but that fell apart in the space of 12 months. Again, Zen 1 and Zen 2 were demonstrably two of the worst CPUs one could have bought for gaming in the past 3-4 years.

[benchmark chart attachment]
I suppose serious gamers like you, who need the best CPU to play at high settings and high res, will feel inferior when their games run on average 10% slower, eh? Kudos then! Because that is the difference at stock between the best CPUs back in 2019 (3950X vs 11900K).
[benchmark chart attachment]
 
While I see the point you are trying to make, calling it the worst for gaming in 3-4 years while showing it doing 200 FPS is a bit of a stretch.

That's true insofar as it applies to the games and settings Tom's used, which I chose to show. It's not true of newer / higher-end games though.

To illustrate - Cyberpunk 2077 is a heavily threaded game, tested here at a setting I would think is common for fairly dedicated gamers. This should be ideal for all Zen, but it isn't quite. You've got the 10600K, 10700K and 10900K all beating the top Zen 2 3950X by wide margins. Interestingly, the 9900K ties it. Again, this should be an ideal game for high-thread-count Zen, and the test is at a very reasonable 1440p medium that many gamers will expect:

[Cyberpunk 2077 benchmark chart attachment]
 
Piece of shit, self-builders again only get a 32 EU IGP :laugh: but now at 1450 MHz instead of 1300 MHz like Rocket Lake.

In SFF and notebooks there are 80-96 EUs :kookoo:

I'm also not buying an R5 5600G for 260€ if I can get a GTX 970 for 130€ :p


What would be faster, a config of:
i5 12600K/5600G, B450 board, IGP, 16 GB of RAM for 420€, or
A8 5500, A68H board, 16 GB and a GTX 970 for about 300€ (in my case a GTX 970 for 130€)?
 
If the company has 'Inc' at the end it is fundamentally evil. See Google.



That's called a red herring and goalpost shifting. I was responding to a post about Intel core counts and #1 gaming performance as marketing hijinks. We weren't talking about running CineBench. I don't run that but I do run a lot of MS Office apps and browser based apps and...

Oh wait!

My last comment on this because I'm pretty sure you just like to argue.

There are a lot more things to do outside the examples you've provided. Personally, I used to do a lot of software video encoding. Note the software part: it typically yields superior quality and compression to the hardware encoders in some CPUs and graphics cards. That and the former price advantage got me to buy into Ryzen processors. I could encode faster and still play some games. I don't play at 4K or 500 FPS, so losing a bit of gaming performance was fine by me.
 
My last comment on this because I'm pretty sure you just like to argue.

There are a lot more things to do outside the examples you've provided. Personally, I used to do a lot of software video encoding. Note the software part: it typically yields superior quality and compression to the hardware encoders in some CPUs and graphics cards. That and the former price advantage got me to buy into Ryzen processors. I could encode faster and still play some games. I don't play at 4K or 500 FPS, so losing a bit of gaming performance was fine by me.

You are the one that posted something about nebulous 'productivity' when the subject was gaming. Now we get the stereotypical 'I do rendering and encoding'.
 
My last comment on this because I'm pretty sure you just like to argue.

There are a lot more things to do outside the examples you've provided. Personally, I used to do a lot of software video encoding. Note the software part: it typically yields superior quality and compression to the hardware encoders in some CPUs and graphics cards. That and the former price advantage got me to buy into Ryzen processors. I could encode faster and still play some games. I don't play at 4K or 500 FPS, so losing a bit of gaming performance was fine by me.
That approach is totally ill-advised; my IGP, in the form of the HD 6550D (A8 3800), can even render faster than any AM4 CPU :kookoo:
 