
Intel Core i9-12900K

I see where you are going with this.
What you're actually proposing is that Intel should cut the P cores altogether and glue together as many E cores as possible.
For example, 32 E cores on a single die.

And see what happens in a 105-watt power budget :D
LOL I'd buy that. Extrapolating from the Anandtech review, a 32 E-Core processor will roughly use 192W @ Max utilization
 
PC World on power with the 12900K.

Are we done yet?

[attached chart]
 
So +11% faster (1080p), dropping to +7% faster (1440p), in games on average, for +23% higher power consumption on a newer 10 nm process vs the two-generation-old i9-10900K on 14 nm, and 92-100°C temps even with a Noctua NH-U14S? That's... not very impressive...
Uhm, it OBVIOUSLY does not consume 23% more power OR run at 100°C in gaming. It actually consumes less than or the same power as AMD CPUs in gaming, and with pretty much the same temperatures.
 
Eggs were once fried on Fermi (GTX 480); now it will be boiling water for coffee or tea on the 12900K (no need to thank me for the idea, just run such a test - whoever actually has this CPU, of course).
Will it boil the coolant in the custom loop that's needed to cool it? Probably. Only time will tell.
 
Uhm, it OBVIOUSLY does not consume 23% more power OR run at 100°C in gaming. It actually consumes less than or the same power as AMD CPUs in gaming, and with pretty much the same temperatures.
People buy i9s and 24-thread CPUs in general for more than just gaming (and TPU isn't just a pure gaming site). Many gamers do video editing and have other mixed workloads too. If I wanted just a pure gaming chip, given that barely 0.4-4.0% separates the i5-12600K and the i9-12900K (down to just 0.4% at 4K resolution), I'd buy the cheaper chip and spend more on the GPU. Or not upgrade at all and spend even more on the GPU. Put under heavy productivity load though, the temps and power are what they are, and I don't see why they should be arbitrarily excluded simply because "it's not gaming", or why a GPU bottleneck should be used to measure CPU power usage in a CPU review. The flip side of that is to declare the RTX 3070 a "73 W card like the 1050 Ti" by pairing it with a really slow CPU that matches the 60 Hz VSync numbers, cherry-pick that as the "real power figures", and throw out of the window all the other measurements that actually load the component being tested to 100%...
 
Apple has had experience with big.LITTLE for close to a decade. Yes, it isn't iOS here, but you're telling me that their experience with Axx chips and ARM over the years won't help them? Yes, technically MS also had Windows on ARM, but we know where that went.
CPU development cycles for a new arch are in the ~5 year range. In other words, MS has known for at least 3+ years that Intel is developing a big+little-style chip. Test chips have been available for at least a year. If MS haven't managed to make the scheduler work decently with that in that time, it's their own fault.
No, of course not, but without the actual chips out there, how can MS optimize for it? You surely don't expect Win11 to be 100% perfect right out of the gate with something that's basically releasing after the OS was RTMed? Real-world user feedback & subsequent telemetry data will be needed to better tune for ADL ~ that's just a reality. Would you say that testing AMD with those skewed L3 results was also just as fair?
Perfect? No. Pretty good? Yes. See above.

And ... the AMD L3 bug is a bug. A known, published bug. Are there any known bugs for ADL scheduling? Not that I've heard of. If there are, reviews should be updated. Until then, the safe assumption is that the scheduler is doing its job decently, as performance is good. These aren't complex questions.
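Worth adding that "making the scheduler work" isn't only on Microsoft and Intel - applications can volunteer hints too. Below is a rough sketch of my own (not anything from the review) using the documented Windows power-throttling / EcoQoS hint, which tells the scheduler a thread prefers efficiency over raw speed; as I read Microsoft's guidance, such threads tend to get steered toward the E-cores on Alder Lake, but treat that part as my interpretation rather than a guarantee:

```c
// Sketch: mark the current thread as efficiency-preferred (EcoQoS) so the
// Windows scheduler may place it on E-cores. Needs a Windows 10 1709+ SDK.
#include <windows.h>
#include <stdio.h>

int main(void)
{
    THREAD_POWER_THROTTLING_STATE state;
    ZeroMemory(&state, sizeof(state));
    state.Version     = THREAD_POWER_THROTTLING_CURRENT_VERSION;
    state.ControlMask = THREAD_POWER_THROTTLING_EXECUTION_SPEED;
    state.StateMask   = THREAD_POWER_THROTTLING_EXECUTION_SPEED; // enable the throttling hint

    if (!SetThreadInformation(GetCurrentThread(), ThreadPowerThrottling,
                              &state, sizeof(state))) {
        printf("SetThreadInformation failed: %lu\n", GetLastError());
        return 1;
    }

    puts("Current thread marked as efficiency-preferred (EcoQoS).");
    /* ... background / batch work would go here ... */
    return 0;
}
```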
[attached screenshots]
So why does the GPU test bench use 4000 MHz modules with the 5800X? Also, previous benchmarks show even higher FPS: 112 vs 96.
Because the GPU test bench is trying to eliminate CPU bottlenecks, rather than present some sort of representative example of CPU performance? My 5800X gives me WHEA errors at anything above 3800, so ... yeah.
According to Igor's Lab's review (<- linked here), where they measure CPU power consumption when gaming -
[Igor's Lab chart: CPU gaming power consumption at 1440p]

and measure watts consumed per fps
[Igor's Lab chart: watts per FPS efficiency at 1440p]

Alder Lake is doing very very well.
That looks pretty good - if that's representative, the E cores are clearly doing their job. I would guess that is highly dependent on the threading of the game and how the scheduler treats it though.
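Small aside on the metric itself: "watts per FPS" is really just energy per frame, since W divided by frames/s gives J/frame. A trivial sketch of the calculation, with made-up example numbers rather than Igor's actual data:

```c
// Efficiency metric used in charts like the above: average package power
// divided by average frame rate, i.e. energy spent per rendered frame.
#include <stdio.h>

static double joules_per_frame(double avg_package_watts, double avg_fps)
{
    return avg_package_watts / avg_fps;   /* W / (frames/s) = J/frame */
}

int main(void)
{
    /* Example numbers only - not Igor's Lab measurements. */
    printf("80 W at 144 FPS  -> %.2f J/frame\n", joules_per_frame(80.0, 144.0));
    printf("110 W at 150 FPS -> %.2f J/frame\n", joules_per_frame(110.0, 150.0));
    return 0;
}
```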
Anandtech does that IIRC, but I feel that for our enthusiast audience it's reasonable to go beyond the very conservative memory spec and use something that's fairly priced and easily attainable.
Yep, as I was trying to say I see both as equally valid, just showing different things. It's doing anything else - such as pushing each chip as far as it'll go - that I have a problem with.
Wait, are those light blue numbers idle numbers? How on earth are they managing 250 W idle power draw? Or are those ST load numbers? Why are there no legends for this graph? I can't even find them on their site, wtf? If the text below is supposed to indicate that the light blue numbers are indeed idle, there is something very wrong with either their configurations or how they measure that. Modern PC platforms idle in the ~50 W range, +/- about 20 W depending on the CPU, RAM, GPU and so on.

LOL I'd buy that. Extrapolating from the Anandtech review, a 32 E-Core processor will roughly use 192W @ Max utilization
Well, you'd need to factor in a fabric capable of handling those cores, so likely a bit more. Still, looking forward to seeing these in mobile applications.
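For what it's worth, the back-of-the-envelope math behind that 192 W figure plus a fabric allowance looks something like the sketch below. The ~6 W per E-core and the uncore number are my own assumptions loosely extrapolated from the AnandTech data, not measured values:

```c
// Back-of-the-envelope estimate for a hypothetical 32 E-core part.
// The per-core and uncore figures below are assumptions, not measurements.
#include <stdio.h>

int main(void)
{
    const double watts_per_ecore = 6.0;   /* assumed full-load power per E-core */
    const double ecores = 32.0;
    const double uncore_watts = 25.0;     /* assumed fabric + memory controller overhead */

    double core_power = watts_per_ecore * ecores;   /* ~192 W, matching the estimate above */
    double total = core_power + uncore_watts;

    printf("Cores: %.0f W, with fabric/uncore: ~%.0f W\n", core_power, total);
    return 0;
}
```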
 
No, it's not, DDR4 vs DDR5.

And it's not about it being unfair, having just one platform on DDR5 isn't enough to infer how good these CPUs actually are. Any CPU with faster memory will also perform better, nothing new here.
ComputerBase has the answer, DDR5 6200 vs DDR4 3800, practically no difference.
 
People buy i9s and 24-thread CPUs in general for more than just gaming (and TPU isn't just a pure gaming site). Many gamers do video editing and have other mixed workloads too. If I wanted just a pure gaming chip, given that barely 0.4-4.0% separates the i5-12600K and the i9-12900K (down to just 0.4% at 4K resolution), I'd buy the cheaper chip and spend more on the GPU. Or not upgrade at all and spend even more on the GPU. Put under heavy productivity load though, the temps and power are what they are, and I don't see why they should be arbitrarily excluded simply because "it's not gaming", or why a GPU bottleneck should be used to measure CPU power usage in a CPU review. The flip side of that is to declare the RTX 3070 a "73 W card like the 1050 Ti" by pairing it with a really slow CPU that matches the 60 Hz VSync numbers, cherry-pick that as the "real power figures", and throw out of the window all the other measurements that actually load the component being tested to 100%...
LOL. But YOU mentioned only the gaming performance and then tossed in the power consumption and temperatures from Blender. Now you are telling me CPUs aren't just for gaming. Then why did you use the gaming numbers?

CPUs aren't just for n-threaded workloads either. If my job consists of lightly threaded tasks (like Photoshop / Premiere and the like), that single-thread performance of the 12900K is king - without the power consumption and temperature baggage either. If your workload scales across n threads, then you should be looking at the Threadrippers, I guess.
 
So +11% faster (1080p), dropping to +7% faster (1440p), in games on average, for +23% higher power consumption on a newer 10 nm process vs the two-generation-old i9-10900K on 14 nm, and 92-100°C temps even with a Noctua NH-U14S? That's... not very impressive...
So you first state the (supposedly small) increase in gaming performance, then in the same sentence you quote power and temp figures from an all-core stress test? To use your own phrase - that's not very impressive argumentation...
 
CPU development cycles for a new arch are in the ~5 year range. In other words, MS has known for at least 3+ years that Intel is developing a big+little-style chip. Test chips have been available for at least a year. If MS haven't managed to make the scheduler work decently with that in that time, it's their own fault.

This isn't even the first big.little CPU from Intel either, Lakefield shipped in Q2'20 with 1P+4E ;)

And ... the AMD L3 bug is a bug. A known, published bug. Are there any known bugs for ADL scheduling? Not that I've heard of. If there are, reviews should be updated. Until then, the safe assumption is that the scheduler is doing its job decently, as performance is good. These aren't complex questions.

The hotfix for the AMD L3 bug isn't perfect either:
[Chart: L3 cache latency with the Windows 11 hotfix]

There are latency regressions even with the update applied, especially for dual-chiplet models.
[Chart: L3 cache bandwidth with the Windows 11 hotfix]

Bandwidth is not at Win10 levels either, but it is dramatically better than on the original Win11 release.
Edit: broken graphs.
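If anyone wants to sanity-check those latency numbers on their own machine before and after the hotfix, a simple pointer-chasing loop already shows the effect. Rough sketch below - the buffer size and iteration count are arbitrary choices of mine, and this measures dependent-load latency in plain C rather than replicating AIDA64's exact methodology:

```c
// Minimal pointer-chasing sketch: walk a random cycle through a buffer that is
// larger than L2 but fits in L3, and report average time per dependent load.
// Standard C only, so it builds on both Windows and Linux; numbers are rough.
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define ENTRIES (4u * 1024 * 1024 / sizeof(size_t))  /* ~4 MB working set */
#define ITERS   50000000UL

int main(void)
{
    size_t *buf = malloc(ENTRIES * sizeof(size_t));
    if (!buf) return 1;

    /* Build a random single-cycle permutation (Sattolo's algorithm) so every
       load depends on the previous one and the prefetcher can't help much. */
    for (size_t i = 0; i < ENTRIES; i++) buf[i] = i;
    srand(1);
    for (size_t i = ENTRIES - 1; i > 0; i--) {
        size_t j = (size_t)rand() % i;
        size_t tmp = buf[i]; buf[i] = buf[j]; buf[j] = tmp;
    }

    clock_t t0 = clock();
    size_t p = 0;
    for (unsigned long i = 0; i < ITERS; i++)
        p = buf[p];                          /* serialized, latency-bound chase */
    clock_t t1 = clock();

    double secs = (double)(t1 - t0) / CLOCKS_PER_SEC;
    printf("~%.2f ns per dependent load (p=%zu)\n", secs * 1e9 / ITERS, p);

    free(buf);
    return 0;
}
```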
 
People buy i9s and 24-thread CPUs in general for more than just gaming (and TPU isn't just a pure gaming site). Many gamers do video editing and have other mixed workloads too. If I wanted just a pure gaming chip, given that barely 0.4-4.0% separates the i5-12600K and the i9-12900K (down to just 0.4% at 4K resolution), I'd buy the cheaper chip and spend more on the GPU. Or not upgrade at all and spend even more on the GPU. Put under heavy productivity load though, the temps and power are what they are, and I don't see why they should be arbitrarily excluded simply because "it's not gaming", or why a GPU bottleneck should be used to measure CPU power usage in a CPU review. The flip side of that is to declare the RTX 3070 a "73 W card like the 1050 Ti" by pairing it with a really slow CPU that matches the 60 Hz VSync numbers, cherry-pick that as the "real power figures", and throw out of the window all the other measurements that actually load the component being tested to 100%...

LOL. But YOU mentioned only the gaming performance and then tossed in the power consumption and temperatures from Blender. Now you are telling me CPUs aren't just for gaming. Then why did you use the gaming numbers?

CPUs aren't just for n-threaded workloads either. If my job consists of lightly threaded tasks (like Photoshop / Premiere and the like), that single-thread performance of the 12900K is king - without the power consumption and temperature baggage either. If your workload scales across n threads, then you should be looking at the Threadrippers, I guess.
Bingo! It's always the same with that lot - when trying to make Intel look bad and AMD good, any and every tactic is fair, and the dirtier the better, actually... :rolleyes:
 
ComputerBase has the answer, DDR5 6200 vs DDR4 3800, practically no difference.

Yeah, but DDR4-3200 is a bit too slow. From different reviews it seems like if you are running DDR4-3600 or higher with decent latency (like CL16) then it's fine - zero or almost zero difference - but the tests with DDR4-3200 on ADL are highly variable vs DDR5.
 
Compared to this power consumption Bulldozer seems like a good CPU. It wasn't as fast as Intel's offerings at the time, but then again it wasn't trying to burn your house down either.
 
Bingo! It's always the same with that lot - when trying to make Intel look bad and AMD good, any and every tactic is fair, and the dirtier the better, actually... :rolleyes:
Considering I own a 10th-gen Intel, I've no idea who this dumb anti-fanboyism fanboyism of yours is even aimed at. I just have zero interest in space heaters of either brand, and 100°C with an $80 Noctua NH-U14S is piss-poor thermals... :rolleyes:
 
Considering I own a 10th-gen Intel, I've no idea who this dumb anti-fanboyism fanboyism of yours is even aimed at. I just have zero interest in space heaters of either brand, and 100°C with an $80 Noctua NH-U14S is piss-poor thermals... :rolleyes:

Then set the power limit to 88 W on ADL and still walk all over your neighbor's 5900X.

Computerbase.de :

[ComputerBase chart]
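On the "just cap it at 88 W" point - on a desktop board you'd normally do that via PL1/PL2 in the BIOS or XTU, but for anyone on Linux the same limit is exposed through the RAPL powercap interface in sysfs. Rough sketch below; the intel-rapl:0 path and the constraint numbering can vary between kernels and boards (check the constraint_*_name files first), and it needs root:

```c
// Sketch: cap the CPU package long-term power limit (PL1) via Linux RAPL powercap.
// Paths and constraint numbering may vary - verify with the constraint_*_name files.
// Must run as root; board firmware settings can still override this.
#include <stdio.h>

int main(void)
{
    const char *pl1_path =
        "/sys/class/powercap/intel-rapl:0/constraint_0_power_limit_uw";
    const long long limit_uw = 88LL * 1000 * 1000;   /* 88 W in microwatts */

    FILE *f = fopen(pl1_path, "w");
    if (!f) { perror("fopen"); return 1; }

    if (fprintf(f, "%lld\n", limit_uw) < 0) perror("write");
    fclose(f);

    printf("Requested PL1 = %lld uW\n", limit_uw);
    return 0;
}
```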
 
Next week :) Intel CPUs arrived yesterday. I've been rebenching everything else since the W11 AMD L3 cache fix came out. Will rebench Zen 2 and 9th gen too and add them to the reviews.

I understand how much trouble it is to rebench everything. Thanks for the extra effort, we really do appreciate your thoroughness.
 
This is not going to change my plans to upgrade to Ryzen someday, but good job, Intel.
P.S.
I will never understand complaints about high power draw and heat when you are spending cash on a top-end product. Does the electricity bill really matter to someone who can shell out the cash for this?! I doubt it very much...
 
This is not going to change my plans to upgrade to Ryzen someday, but good job, Intel.
P.S.
I will never understand complaints about high power draw and heat when you are spending cash on a top-end product. Does the electricity bill really matter to someone who can shell out the cash for this?! I doubt it very much...

I'm more concerned with the heat output in one room. Though I'm in central Texas, so it's a bigger concern for me than for others. (My solution was a mini-split in my server room and switching to a mini PC at my desk, from which I remote into my gaming system. It stays cool and I don't care how much heat the gaming box generates.)
 
"Fighting for the Performance Crown"

And yet it fails to beat the one-year-old rival that has the same number of cores, with half the efficiency. The i7 model is also 5% behind the 5900X while having the same number of cores. The only model able to beat its rival is the i5 - and it has 4 extra cores compared to the 5600X.
 

- Did you use a U14s or a U12s for overclocking? @W1zzard :D
 
Very good work, but if you allow me, may I ask why you changed the Zen setup from the EVGA X570 DARK with 4000 MHz memory @ 2000 IF that you used in your last reviews, to the MSI X570 with 3600 MHz memory @ 1800 IF?
 
Software has always been behind but maybe this transition to big.LITTLE will change that.
I have my doubts. Because this is the first x86 product like this on Windows, and the hybrid approach is limited to only part of the 12th-gen lineup, 99% of the hardware out there will still be a homogeneous CPU architecture, and will be for many years to come given how long our hardware can now last. It's going to be on Intel to make this work, then MS, and maybe developers will jump in. I could easily see developers just saying "use different hardware" if you have issues, at least for a while.
ComputerBase has the answer, DDR5 6200 vs DDR4 3800, practically no difference.
Anandtech came to the conclusion that DDR5 does contribute to the performance increase. They have two pages of DDR4 vs DDR5 results that show measurable gains. It's not across the board, but it is significant, especially in multithreaded workloads. They even concede that AMD should see similar gains when they implement DDR5, though they are obviously further out.
 
I have my doubts. Because this is the first x86 product like this on Windows, and the hybrid approach is limited to only part of the 12th-gen lineup, 99% of the hardware out there will still be a homogeneous CPU architecture, and will be for many years to come given how long our hardware can now last. It's going to be on Intel to make this work, then MS, and maybe developers will jump in. I could easily see developers just saying "use different hardware" if you have issues, at least for a while.

It's actually the second - Lakefield was released in Q2 2020 with 1P+4E. The difference here is that Intel Thread Director is present to help the Windows scheduler make sensible decisions. The AnandTech article explains in detail what is happening behind the scenes, especially on Windows 10, which lacks ITD support.

I don't think software vendors will ignore the potential issues, but the worst solution would probably be "disable E-cores in BIOS" instead of "use different hardware", because the P-cores are superior to previous Intel cores ;)
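On the point of software being able to tell the core types apart: the hybrid topology is exposed directly through CPUID, independent of Thread Director. A minimal sketch for GCC/Clang on x86 below - note it reports the type of whichever core the thread happens to be running on, so you'd pin it to each logical CPU in turn to map the full topology:

```c
// Minimal sketch: query the hybrid core type of the current logical CPU.
// Assumes GCC/Clang on x86-64; run pinned to different cores to map the topology.
#include <stdio.h>
#include <cpuid.h>

int main(void)
{
    unsigned int eax, ebx, ecx, edx;

    /* CPUID.07H:EDX bit 15 indicates a hybrid (P+E) part. */
    if (!__get_cpuid_count(0x07, 0, &eax, &ebx, &ecx, &edx) || !(edx & (1u << 15))) {
        puts("Not a hybrid CPU (or CPUID leaf unavailable).");
        return 0;
    }

    /* CPUID leaf 0x1A: EAX[31:24] is the core type of the *current* logical CPU. */
    __get_cpuid_count(0x1A, 0, &eax, &ebx, &ecx, &edx);
    unsigned int core_type = eax >> 24;

    if (core_type == 0x40)
        puts("Running on a P-core (Core)");
    else if (core_type == 0x20)
        puts("Running on an E-core (Atom)");
    else
        printf("Unknown core type 0x%02x\n", core_type);

    return 0;
}
```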
 
ComputerBase has the answer, DDR5 6200 vs DDR4 3800, practically no difference.
Only half the story, still need to see how AMD will perform under DDR5.

Software has always been behind but maybe this transition to big.LITTLE will change that.
Doubt it. big.LITTLE will always produce terrible results in certain situations; it's a problem that's impossible to solve without negative side effects. It's just that on mobile those side effects aren't that noticeable, whereas on desktop PCs it has now become obvious that they are.
 