
Intel Core i9-12900K 36% Faster Than Stock in Maximum Turbo Power Mode

Yes, peak power consumption might not matter in many users' day-to-day use, but I suspect that even in random workloads, boosting a single core puts similar strain on the power delivery. Basically, motherboards still must be designed to accommodate the CPU's boosting behavior to achieve that very top performance. That adds complexity, which adds cost, so you get to pay for it either way. I think the real takeaway is that this looks like the future of high-end desktop computing: power boosting can deliver more performance, but we're already running into a thermal-density cooling limit, one that will likely only get worse on smaller nodes. This is a one-time grab for extra performance that will require huge architectural gains to walk back from. I think this is just how CPUs are going to behave from now on.
 
The Intel Core i9-12900K scores 7492 points when running at its TDP of 125 W, and 10180 points (36% faster) when operating at the MTP of 241 W.
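For anyone who wants to sanity-check the scaling, here's quick arithmetic on the two figures above (scores and power limits as quoted in the article):

```python
# Sanity check on the article's figures: score scaling vs. power scaling.
tdp_score, tdp_watts = 7492, 125    # i9-12900K at 125 W TDP
mtp_score, mtp_watts = 10180, 241   # i9-12900K at 241 W MTP

speedup = mtp_score / tdp_score - 1          # ~0.359 -> the quoted 36%
power_increase = mtp_watts / tdp_watts - 1   # ~0.93 -> nearly double the power

# Points per watt drop sharply in MTP mode:
print(round(tdp_score / tdp_watts, 1))  # ~59.9 pts/W at TDP
print(round(mtp_score / mtp_watts, 1))  # ~42.2 pts/W at MTP
```

So MTP mode buys ~36% more score for ~93% more power, which is the efficiency trade-off the rest of the thread argues about.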

Use those electrons, think of all the power utility workers you will help support :)
 
142 W PPT is required for advertised (stock) boost clocks on the 5950X in the AM4 socket.

A stock (non-OC board) VRM is designed to deliver ~142 W. AMD doesn't just sell these things to enthusiasts; vendors designing boards for non-OC purposes build the VRM to AMD spec to curb costs: greater than or equal to 142 W PPT.

The boards everyone uses here deliver more than 142 W to the socket for 5xxx processors. All-core OCs on 5xxx processors use more than 142 W, and PBO2 uses more than the stock-spec wattage per individual core.

Shocker: the AM4 socket's successor will deliver greater than 142 W PPT at stock config.
 
Intel CPUs always run cool, even with high power draw.

They run cool because they are mostly used for tasks that don't use the same power as Prime95 or Cinebench: useless applications that most buyers don't even install on their machines.

Show me a game using 150 W on Intel and then I'll start to worry.

For the "but rendering! encoding!" guys, just get out. If your life depends on those kinds of tasks, then you have way better options. Don't bother with mainstream platforms.
 
So it takes a 241 W 12900K (basically OC'd) to match a stock 5950X... oh dear
Oh dear indeed... for AMD, considering it will be priced much closer to the 5900X, not to mention the 12600K can even whoop the 5800X's ass and therefore beat the 5600X to a pulp. Once again the redsters will have a single argument left: Moar coars!™, of course applicable only to the top-of-the-line 5950X, which is out of reach and/or out of practicality for like 99% of home users.
 
Turbo 36% faster than stock ... isn't that a fancy way of saying sustainable speed is 27% slower than momentary peak turbo?
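That framing checks out: "X% faster" and "Y% slower" are reciprocal views of the same ratio, which a one-liner confirms:

```python
# "X% faster" one way is "Y% slower" the other way: Y = 1 - 1/(1 + X).
faster = 0.36                     # turbo vs. sustained, from the article
slower = 1 - 1 / (1 + faster)     # sustained vs. turbo
print(round(slower * 100, 1))     # -> 26.5, i.e. roughly the "27% slower" above
```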
 
So you double the power (125 W to almost 250 W) to get 36% more performance.
"Sounds like the best deal in the history of deals, maybe ever." - Donald Intel Trump
 
Nice. I don't give a shit about power use, as long as my custom loop keeps it cool, and my PSU is enough. If it's the fastest CPU for the cost, who gives a shit how much power it uses, unless you can't cool it, or pay your electricity bill.

Go Intel
 
Just for giggles I just ran R20 on my 5950X, and HWiNFO was reporting 125 W PPT during the run; it scored 9934, or 2442 points higher than a 12900K at 125 W.

Removing power limits: 10789, with HWiNFO reporting 220 W PPT, or 609 points higher than the 12900K at 241 W.
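For what it's worth, the deltas quoted here line up exactly with the article's 12900K figures:

```python
# R20 scores: this post's 5950X runs vs. the article's 12900K figures.
r20_5950x_stock, r20_5950x_unlimited = 9934, 10789   # 125 W / 220 W PPT
r20_12900k_tdp, r20_12900k_mtp = 7492, 10180         # 125 W TDP / 241 W MTP

print(r20_5950x_stock - r20_12900k_tdp)        # 2442 points ahead at ~125 W
print(r20_5950x_unlimited - r20_12900k_mtp)    # 609 points ahead, limits off
```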
 
Oh dear indeed... for AMD, considering it will be priced much closer to the 5900X, not to mention the 12600K can even whoop the 5800X's ass and therefore beat the 5600X to a pulp. Once again the redsters will have a single argument left: Moar coars!™, of course applicable only to the top-of-the-line 5950X, which is out of reach and/or out of practicality for like 99% of home users.
Yeah, but no, because it doesn't matter anyway: AMD has been clearing most of what they made this last year. Even if they do have to drop prices on a year-old product line, they have a refresh on the way and they've already banked the change.

And you're hyping Intel's moar-core answer days before reviews; seems wise to me.

Those pointing at goggles rarely look over the rim of their own.

I would suggest neither Intel nor AMD requires any pity or scorn ATM; for a change, they're both on the job.
 
Oh dear indeed... for AMD, considering it will be priced much closer to the 5900X, not to mention the 12600K can even whoop the 5800X's ass and therefore beat the 5600X to a pulp. Once again the redsters will have a single argument left: Moar coars!™, of course applicable only to the top-of-the-line 5950X, which is out of reach and/or out of practicality for like 99% of home users.
So one company can use “moar” power consumption to achieve performance that 99% of users practically don’t need, but it’s dubious for the other company to add cores in order to add performance. Makes perfect sense. You are, of course, forgetting that AMD can simply set new pricing. They’ve had a year to sell Zen3 at pretty much whatever pricing they want, and now they can use Zen 3 to squeeze Intel pricing at various price points. We also have no idea how supply constrained 12900K will be. 10900K was pretty hard to get at launch, so much so that Intel pushed out the 10850K a little later to supply a 10C at better volume. It all depends heavily on how much they are pushing their design. Based on power consumption ratings, I’d say pretty hard.
 
I don't think 241 watts is a good idea even for the highest end CPU coolers :fear:
 
To keep the playing field painfully balanced, let's discuss 3DV on Zen3. Ain't competition fun!?

That much extra L3 is going to take up a large percentage of the package power budget. It will most likely be power-managed dynamically depending on how much L3 is in use.

The upside: the extra cache will mean the CPU can achieve similar metrics (game, app, synthetic) at lower clock speeds, in many cases, compared to non-3DV SKUs.

The downside: the 142 W stock PPT will most likely mean lower advertised boost clocks compared to current Zen 3 SKUs. Maybe? Maybe a ranged boost clock depending on how much L3 is addressed? Not sure.

Given most expect faster base and boost clocks from one release to the next, some might view this as bad even though it's not.

...

Just for giggles I just ran R20 on my 5950X, and HWiNFO was reporting 125 W PPT during the run; it scored 9934, or 2442 points higher than a 12900K at 125 W.

Removing power limits: 10789, with HWiNFO reporting 220 W PPT, or 609 points higher than the 12900K at 241 W.
This thread is useless without posted results.

Updated today:
 
Any leaked reviews yet? :)
Wanna see that 12600K go
Yes, that's the one I think will be most interesting for most people and gamers. If you need a lot of cores for productivity, I think AMD's 5900X and 5950X will turn out to be the better choice against Intel's 12700K and 12900K.

Haven't seen any leaked reviews or trustworthy numbers yet, but hey, only three more days to go.
 
Just for giggles I just ran R20 on my 5950X, and HWiNFO was reporting 125 W PPT during the run; it scored 9934, or 2442 points higher than a 12900K at 125 W.

Removing power limits: 10789, with HWiNFO reporting 220 W PPT, or 609 points higher than the 12900K at 241 W.

Waiting for reviews, but that sounds accurate; if Intel could beat the 5950X, they would for sure price the 12900K higher.
 
You mean temperature right ?

Since there is no physical work done within a CPU, essentially all energy input is converted to heat. So power in equals heat out for a CPU.
I didn't mean to upset any physicists. I'm not one, and I do apologise.

To put it in PC terms, the watts a CPU consumes don't translate directly into its core/package temperature. There's a whole bunch of other factors at play.
 
I didn't mean to upset any physicists. I'm not one, and I do apologise.

To put it in PC terms, the watts a CPU consumes don't translate directly into its core/package temperature. There's a whole bunch of other factors at play.
It's a leading cause of it, but not the final answer, and a lot of people don't get that.

Energy produced over X area dissipated by Y cooler at Z rate
 
It's a leading cause of it, but not the final answer, and a lot of people don't get that.

Energy produced over X area dissipated by Y cooler at Z rate
Exactly! That's why I stand by my findings that Rocket Lake is a lot easier to cool than Zen 2/3. Current gen Intel chips do not have heat issues. Power is a different story.
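The "energy produced over X area dissipated by Y cooler at Z rate" point can be sketched as a toy steady-state model. The thermal-resistance values below are made-up illustrations, not measurements of any real chip or cooler:

```python
# Toy steady-state thermal model: die temperature depends on power AND the
# thermal resistances in the heat path, not on watts alone.
# All resistance values (degC per watt) are illustrative assumptions only.
def die_temp(power_w, r_die_to_cooler, r_cooler_to_air, t_ambient=25.0):
    """Approximate die temperature in degrees C for a given thermal path."""
    return t_ambient + power_w * (r_die_to_cooler + r_cooler_to_air)

# The same 142 W through two different die-to-cooler paths:
big_die = die_temp(142, r_die_to_cooler=0.15, r_cooler_to_air=0.20)
dense_die = die_temp(142, r_die_to_cooler=0.35, r_cooler_to_air=0.20)
print(round(big_die, 1), round(dense_die, 1))  # the dense die runs much hotter
```

Identical wattage, very different die temperatures: that's the whole "power is a different story from heat" argument in three lines.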
 
They run cool because they are mostly used for tasks that don't use the same power as Prime95 or Cinebench: useless applications that most buyers don't even install on their machines.

Show me a game using 150 W on Intel and then I'll start to worry.

For the "but rendering! encoding!" guys, just get out. If your life depends on those kinds of tasks, then you have way better options. Don't bother with mainstream platforms.
I edit videos on an Intel mobile CPU that draws 80 W. It's good enough for the job it does.

It's going to be interesting when the 12th gen releases for the mobile market.
 
They run cool because they are mostly used for tasks that don't use the same power as Prime95 or Cinebench: useless applications that most buyers don't even install on their machines.

Show me a game using 150 W on Intel and then I'll start to worry.

For the "but rendering! encoding!" guys, just get out. If your life depends on those kinds of tasks, then you have way better options. Don't bother with mainstream platforms.
It's not just that. Even when configured to draw the same amount of power as a current-gen Ryzen CPU, the Intel one will run cooler, probably due to the larger surface area of the chip. It's not theory but fact, which I can confirm from having owned several Ryzen 3000/5000 CPUs and now an i7-11700 in my main rig. I run the i7 with a 125 W PL1, while I struggled to run an R5 3600 at stock settings (88 W PPT) with the same cooling setup.
 
It's not just that. Even when configured to draw the same amount of power as a current-gen Ryzen CPU, the Intel one will run cooler, probably due to the larger surface area of the chip. It's not theory but fact, which I can confirm from having owned several Ryzen 3000/5000 CPUs and now an i7-11700 in my main rig. I run the i7 with a 125 W PL1, while I struggled to run an R5 3600 at stock settings (88 W PPT) with the same cooling setup.
Yes, you're correct.

AMD chips run hotter in some cases because they have the higher heat density: roughly 80 mm² per CCD vs 200+ mm² for Intel's monolithic die.
I mean... that alone explains a lot, heat-wise.
On top of that, they measure temps differently. Intel likes to report a more averaged temp, while AMD reports the peak temp, and far more often.
So if both chips measured 60 °C with a spike to 70 °C for 5 ms, the Intel wouldn't report that spike (Zen 2 and 3 report every 1 ms, way more often than Intel; I can't find much, but estimates seem to be between 15 ms and 30 ms).
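The heat-density point is easy to quantify with the die areas quoted above. Note this ignores that AMD's package power is actually split across two CCDs and an I/O die, so it's a rough illustration only:

```python
# Rough heat-flux comparison using the die areas quoted above.
# Ignores power splitting across AMD's chiplets: illustration only.
power_w = 142
zen3_ccd_mm2 = 80        # ~80 mm^2 Zen 3 CCD, as quoted
intel_die_mm2 = 200      # 200+ mm^2 monolithic Intel die, as quoted

print(power_w / zen3_ccd_mm2)    # W/mm^2 if all power went through one CCD
print(power_w / intel_die_mm2)   # W/mm^2 spread over the larger die
```

Even as a worst-case bound, the small chiplet ends up with more than double the watts per square millimetre, which is why the same cooler has a harder time on it.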
 
They run cool because they are mostly used for tasks that don't use the same power as Prime95 or Cinebench: useless applications that most buyers don't even install on their machines.

Show me a game using 150 W on Intel and then I'll start to worry.

For the "but rendering! encoding!" guys, just get out. If your life depends on those kinds of tasks, then you have way better options. Don't bother with mainstream platforms.
The title of this thread is about the i9-12900K.
If games and browsing the internet are all you ever do, why bother with the i9? An i5 will do everything you want with less power draw.
If you are not interested in multi-core performance, then you have no use for an i9; the i7 will have the same number of P-cores.
And since, according to you, nobody ever needs more than gaming, why should Intel bother releasing anything more than an i5?
 