
Intel Core i9-12900K

That's why I mentioned heat transfer efficiency.
But what is "heat transfer efficiency" if not that compound measurement of many different factors that I mentioned? And how do you define it in a way that accounts for variables like hotspot placement? In short: you can't. So you have to compromise in some way.
Seems like average temperature of whole IHS would be the best way to do that, while also stating hotspot temp.
But then you have three numbers: power draw (W), tIHSavg and tIHSpeak. How do you balance the three when designing a cooler? And how do you measure the three at all? With a reference cooler? Without a cooler?
I would argue that they should do it anyway, as OEMs like Dell, HP or Acer have a really poor reputation for overheating machines. They cannot keep making crap forever; at some point it will hurt their sales.
You can argue that all you want, the most important priority for them is simplifying their production lines and system configurations to increase profit margins. You're not going to convince them to invest millions in complex thermal testing regimes. The only effective way of doing this is enforcing this on a component manufacturer level, ideally through either industry body or government standardization.
Watts aren't a problem. If you measure watts and then the efficiency of heat transfer from chip to IHS, what is then left unclear or misinforming? That covers odd chips like Ryzens.
Again: "heat transfer efficiency" is an incredibly complex thing, and cannot be reduced to a simple number or measurement without opening the door for a lot of variance. And it doesn't cover odd chip placements unless that number has huge margins built in, in which case it then becomes misleading in the first place.
GN did a video about AMD's TDP and asked Cooler Master about this; they said that TDP doesn't mean much and that it's certainly not clear how capable their coolers should be.
That's because all CPUs boost past TDP and people expect that performance to last forever. This is down to the idiotic mixed usage of TDP (cooler spec and marketing number), as well as the lack of any TDP-like measure for peak/boost power draws, despite all chips far exceeding TDP. If CM (or anyone else) built a cooler following the TDP spec for 105W, it would be fully capable of running a stock 5950X, but the CPU wouldn't be able to maintain its 141W boost spec for any significant amount of time, instead throttling back to whatever it can maintain within thermal limits at the thermal load the cooler can dissipate - i.e. base clock or a bit more. The issue here is that nobody would call that acceptable - you'd have a CPU running at near 90 degrees under all-core loads and losing significant performance. Most would call that thermal throttling, despite this being inaccurate (it would need to go below base clock for that to be correct), but either way it's easy to see that the cooler is insufficient despite fulfilling the spec. That is why TDP is currently useless for manufacturers - not because the measurement doesn't work, but because it doesn't actually cover the desired use cases and behaviours of customers.
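To put rough numbers on that reasoning (my own illustrative sketch, not AMD's or any cooler vendor's method - the ambient, temperature limit and thermal resistance below are assumptions):

```python
# Rough steady-state sketch of why a cooler sized exactly for a 105 W TDP
# can't hold a 141 W boost indefinitely. All figures are illustrative.

T_AMBIENT = 25.0   # deg C, assumed room temperature
T_LIMIT = 90.0     # deg C, assumed throttle point

# Assume the "105 W" cooler is sized so that 105 W lands right at the limit:
theta_ca = (T_LIMIT - T_AMBIENT) / 105.0   # ~0.62 C/W case-to-ambient

def steady_temp(power_w):
    """Steady-state package temperature at a given sustained power draw."""
    return T_AMBIENT + power_w * theta_ca

def sustainable_power():
    """Highest power the cooler can shed without crossing T_LIMIT."""
    return (T_LIMIT - T_AMBIENT) / theta_ca

print(f"141 W boost would settle at ~{steady_temp(141):.0f} C")  # ~112 C -> must back off
print(f"Cooler tops out around {sustainable_power():.0f} W")     # ~105 W, i.e. near base clock
```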
 
Just keep in mind the DDR5 used here is 6000.

Computerbase.de shows DDR4-3200 C12 and DDR4-3800 walking all over DDR5-4400. This is not surprising, but I doubt early adopters are going to be running DDR5-4400.

Tom's used DDR5-4800 C36 and DDR4-3200 on Alder Lake and the older platforms, a bit more realistic but they didn't specify the DDR4 settings that I saw. The kit was DDR5-6000 but they set it to one of the more normal speeds.

So far my take is that 'good' DDR4 is faster than DDR5, at least the obtainable DDR5-5200. I think that is not necessarily true of the DDR5-6000+ when it is full speed, but you can't actually buy that stuff.

Yeah, I do usually consider latency more important than bandwidth; I remember my old 3000CL12 config outperforming 3200CL14. Some workloads do benefit a lot from bandwidth though, so as you said, it will depend.
 
Just keep in mind the DDR5 used here is 6000.

Computerbase.de shows DDR4-3200 C12 and DDR4-3800 walking all over DDR5-4400. This is not surprising, but I doubt early adopters are going to be running DDR5-4400.

Tom's used DDR5-4800 C36 and DDR4-3200 on Alder Lake and the older platforms, a bit more realistic but they didn't specify the DDR4 settings that I saw. The kit was DDR5-6000 but they set it to one of the more normal speeds.

So far my take is that 'good' DDR4 is faster than DDR5, at least the obtainable DDR5-5200. I think that is not necessarily true of the DDR5-6000+ when it is full speed, but you can't actually buy that stuff.
3200c12 is pretty unrealistic though - yes, you can OC there, but is there even a single kit on the market with those timings? It's clear that available DDR5 is slower than available DDR4, simply because available DDR4 is highly mature and DDR5 is not. What becomes clear from more balanced testing, like AnandTech's comprehensive testing at JEDEC speeds (3200c20 and 4800c40), is that the latency disadvantage expected from DDR5 is much smaller in practice than the numbers would seem to indicate (likely down to more channels and other differences in how data is transferred), and that at those settings - both of which are bad, but of which the DDR5 settings ought to be worse - DDR5 mostly outperforms DDR4 by a slim margin.

That still means that fast DDR4 will be faster until we get fast(er) DDR5 on the market, but it also means that we won't need DDR5-8000c36 to match the performance of DDR4-4000c18.
 
3200c12 is pretty unrealistic though - yes, you can OC there, but is there even a single kit on the market with those timings? It's clear that available DDR5 is slower than available DDR4, simply because available DDR4 is highly mature and DDR5 is not. What becomes clear from more balanced testing, like AnandTech's comprehensive testing at JEDEC speeds (3200c20 and 4800c40), is that the latency disadvantage expected from DDR5 is much smaller in practice than the numbers would seem to indicate (likely down to more channels and other differences in how data is transferred), and that at those settings - both of which are bad, but of which the DDR5 settings ought to be worse - DDR5 mostly outperforms DDR4 by a slim margin.

That still means that fast DDR4 will be faster until we get fast(er) DDR5 on the market, but it also means that we won't need DDR5-8000c36 to match the performance of DDR4-4000c18.

Latency is just one factor, and the CL specifically is how long (in clock cycles, not time) it takes for the first word of a read to be available on the output pins of the memory. The first number (3200, 4400, 4800, etc.) is how fast the data transfers after that.
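For anyone who wants to compare kits on that basis, here's a quick back-of-the-envelope helper (my own, not taken from any review - the example kits below are just illustrations):

```python
# Convert CAS latency (clock cycles) to first-word latency in nanoseconds.
# The memory clock is half the transfer rate, since DDR moves two words per clock.

def first_word_latency_ns(transfer_rate_mt_s, cas_cycles):
    clock_mhz = transfer_rate_mt_s / 2      # e.g. DDR4-3200 -> 1600 MHz
    return cas_cycles / clock_mhz * 1000    # cycles / MHz = microseconds, so *1000 -> ns

for rate, cl in [(3200, 14), (3600, 16), (4800, 40), (6000, 36)]:
    print(f"DDR-{rate} CL{cl}: {first_word_latency_ns(rate, cl):.2f} ns")
# DDR-3200 CL14:  8.75 ns
# DDR-3600 CL16:  8.89 ns
# DDR-4800 CL40: 16.67 ns
# DDR-6000 CL36: 12.00 ns
```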

I think in general for 'normal' applications high MT/s (like 5200) is better, for games lower latency is better. There are plenty of exceptions, especially when you get into the 'scientific' side of 'applications', but for normal user apps I think high MT/s helps.

So just to note, here at TPU they used DDR5-6000 C36 Gear 2 (1:2 ratio). This is some freaky fast DDR5 for now, probably more reflective of what will be available in 1H 2022. The DDR4 used on the older platforms is quite good too, though: DDR4-3600 C16-20-20-34 1T in Gear 1, with 1:1 IF for AMD, is no slouch. I think these put the older platforms pretty close to their best footing, at settings that 90% of folks can get to run properly.
 
Latency is just one factor, and the CL specifically is how long (in clock cycles, not time) it takes for the first word of a read to be available on the output pins of the memory. The first number (3200, 4400, 4800, etc.) is how fast the data transfers after that.

I think in general for 'normal' applications high MT/s (like 5200) is better, for games lower latency is better. There are plenty of exceptions, especially when you get into the 'scientific' side of 'applications', but for normal user apps I think high MT/s helps.

So just to note, here at TPU they used DDR5-6000 C36 Gear 2 (1:2 ratio). This is some freaky fast DDR5 for now, probably more reflective of what will be available in 1H 2022. The DDR4 used on the older platforms is quite good too, though: DDR4-3600 C16-20-20-34 1T in Gear 1, with 1:1 IF for AMD, is no slouch. I think these put the older platforms pretty close to their best footing, at settings that 90% of folks can get to run properly.
Uhm ... what, exactly, in my post gave you the impression that you needed to (rather poorly, IMO) explain the difference between RAM transfer rates and timings to me? And even if this was necessary (which it really wasn't), how does this change anything I said?

Your assumption is also wrong: Most consumer applications are more memory latency sensitive than bandwidth sensitive, generally, though there are obviously exceptions. That's why something like 3200c12 can perform as well as much higher clocked memory with worse latencies. Games are more latency sensitive than most applications, but there are very few realistic consumer applications where memory bandwidth is more important than latency. (iGPU gaming is the one key use case where bandwidth is king outside of server applications, which generally love bandwidth - hence why this is the focus for DDR5, which is largely designed to align with server and datacenter owners' desires.)

And while DDR5-6000 C36 might be fast for now (it's 6 clock cycles faster than the JEDEC 6000A spec, though "freaky fast" is hardly suitable IMO), it is slow compared to the expected speeds of DDR5 in the coming years. That's why I was talking about mature vs. immature tech. DDR5 JEDEC specifications currently go to DDR5-6400, with standards for 8400 in the works. For reference, the absolute highest DDR4 JEDEC specification is 3200. That means we haven't even seen the tip of the iceberg yet of DDR5 speed. So, again, even DDR5-6000c36 is a poor comparison to something like DDR4-3600c16, as one is below even the highest current JEDEC spec (let alone future ones), while the other is faster than the highest JEDEC spec several years into its life cycle.

The comment you responded to was mainly pointing out that the comparison you were talking about from Computerbase.de is deeply flawed, as it compares one highly tuned DDR4 kit to a near-base-spec DDR5 kit. The DDR4 equivalent of DDR5-4400 would be something like DDR4-2133 or 2400. Also, the Computerbase DDR5-4400 timings are JEDEC 4400A timings, at c32. That is a theoretical minimum latency of 14.55 ns, compared to 7.37 ns for DDR4-3800c14. You see how that comparison is extremely skewed? Expecting anything but the DDR4 kits winning in those scenarios would be crazy. So, as I said, mature, low latency, high speed DDR4 will obviously be faster, especially in (mostly) latency-sensitive consumer workloads. What more nuanced reviews show, such as AnandTech's more equal comparison (both at JEDEC speed), is that the expected latency disadvantage of DDR5 is much less than has been speculated.
 
But then you have three numbers: power draw (W), tIHSavg and tIHSpeak. How do you balance the three when designing a cooler? And how do you measure the three at all? With a reference cooler? Without a cooler?
Cooler makers should only specify what they can dissipate. You as a consumer would buy a chip and calculate what cooler you need from the TDP (fixed) and the efficiency. That's all. You as a consumer are free to accommodate the peak or not.

You can argue that all you want, the most important priority for them is simplifying their production lines and system configurations to increase profit margins. You're not going to convince them to invest millions in complex thermal testing regimes. The only effective way of doing this is enforcing this on a component manufacturer level, ideally through either industry body or government standardization.
It's not that expensive to determine what coolers they would need, and the savings on metals would quickly outweigh the modest R&D costs.

Again: "heat transfer efficiency" is an incredibly complex thing, and cannot be reduced to a simple number or measurement without opening the door for a lot of variance. And it doesn't cover odd chip placements unless that number has huge margins built in, in which case it then becomes misleading in the first place.
I don't see that happening tbh.

That's because all CPUs boost past TDP and people expect that performance to last forever. This is down to the idiotic mixed usage of TDP (cooler spec and marketing number), as well as the lack of any TDP-like measure for peak/boost power draws, despite all chips far exceeding TDP. If CM (or anyone else) built a cooler following the TDP spec for 105W, it would be fully capable of running a stock 5950X, but the CPU wouldn't be able to maintain its 141W boost spec for any significant amount of time, instead throttling back to whatever it can maintain within thermal limits at the thermal load the cooler can dissipate - i.e. base clock or a bit more. The issue here is that nobody would call that acceptable - you'd have a CPU running at near 90 degrees under all-core loads and losing significant performance. Most would call that thermal throttling, despite this being inaccurate (it would need to go below base clock for that to be correct), but either way it's easy to see that the cooler is insufficient despite fulfilling the spec. That is why TDP is currently useless for manufacturers - not because the measurement doesn't work, but because it doesn't actually cover the desired use cases and behaviours of customers.
Might as well educate buyers that boost is not guaranteed, but Intel has been doing it for at least a decade and it ended up this way. Perhaps new Alder Lake measurements just make sense.
 
@W1zzard , did you enable SAM in the AM4 board's UEFI when testing? I am almost sure Intel doesn't support it as much.
 
This will, no doubt, finish AMD not.

I'm not even convinced AMD needs to change pricing on any of its products.
 
This will, no doubt, finish AMD not.

I'm not even convinced AMD needs to change pricing on any of its products.
That's kinda obvious, but techtube says otherwise and many people listen to them.
 
Anyone know where Intel is fabbing these?
 
Yeah, because no other CPU has been worth upgrading to over that old CPU. lol, get real. And a solid 144 is important to you, yet you held on this long to a CPU that can't even maintain 60 fps in some games.
This is why you never bother with these low-spec 4K60 peasants. Their statements are so dumb that they could actually work as bait. As soon as you see their 4K monitor and their 10 y/o CPU paired with a 2080 you just know they live in fantasy land. They are so deluded that they lose basic understanding of how tech works. They actually believe that their CPU is still good enough and nothing you do or say will change their mind because keeping that CPU for so long is the only meaningful achievement in their life so they have to defend it. It's like arguing with women. Just don't do it. Total waste of time, especially since staff and other members will always defend them for some reason.

On topic: The 12900K is great & efficient, hail Intel, AMD sucks, yadda yadda yadda. Gonna go buy an i9 right now and keep it for 15 years.
 
From a business view, and looking at the die size, Intel's new arch does cost more than Zen 3, therefore Intel had to choose a path that is less profitable than Zen 3.
 
At least this time they have the balls to admit that they can use nearly 300 watts of power. Not sure about you, but I treat TDP or base power as the maximum expected power usage at base clocks without any turbo clocks. But I tested my own i5, and in prime95 small FFTs it uses less than 65 watts (I think it was up to 50 watts) with turbo off, so I guess any power number that Intel or AMD releases doesn't mean anything.
So far, TDP on Intel has meant PL1, that is, the long-term power limit enforced by the motherboard by default. My 11700 can do 2.8 GHz (300 MHz above base clock) in Cinebench all-core while maintaining the factory 65 W limit. I'm not sure about Alder Lake, though.
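As a rough illustration of what PL1/PL2/tau mean in practice (my own simplification of the documented behaviour, not Intel's code - the 65 W / 154 W / 28 s values are just typical-looking assumptions): the chip may draw up to PL2 as long as a moving average of recent power stays under PL1, and once the average catches up, sustained draw falls back to PL1.

```python
# Simplified PL1/PL2/tau sketch: boost at PL2 until an exponentially weighted
# average of package power reaches PL1, then hold PL1. Values are illustrative.

PL1, PL2, TAU = 65.0, 154.0, 28.0   # watts, watts, seconds (assumed)
DT = 1.0                            # simulation step in seconds

avg = 0.0                           # moving average of package power
alpha = DT / TAU

for t in range(0, 61):
    power = PL2 if avg < PL1 else PL1   # boost while the averaged budget allows it
    avg += alpha * (power - avg)
    if t % 10 == 0:
        print(f"t={t:2d}s  power={power:5.1f} W  avg={avg:5.1f} W")
```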

As for AMD, TDP is nothing more than a recommendation for cooler manufacturers (bull****). It has nothing to do with power.

In the FX 9590 era, we called that desperate; in 2021 we call that excellent. "Editor's Choice" and "Highly Recommended".
"Highly recommended"... to slap a huge liquid cooler on it. :D
 
@W1zzard , did you enable SAM in the AM4 board's UEFI when testing? I am almost sure Intel doesn't support it as much.
Enabled on all platforms, it's supported just fine everywhere
 
Enabled on all platforms, it's supported just fine everywhere

By any chance is Alder Lake ever going to be benched using DDR4?
 
Alder Lake is still vulnerable to attack; what's the performance going to be when they fix it?

All CPUs featuring out of order speculative execution are vulnerable to Spectre class attacks. No matter if they are Intel, AMD, ARM or MIPS.

Each review of new Intel CPUs has seen at least one person blaming Intel for not fixing HW vulnerabilities. It's a sort of tradition nowadays.

A nice overview of affected CPU architectures and their status is on Wikipedia.
 
"Highly recommended"... to slap a huge liquid cooler on it. :D
I really wonder why it was given that award. It's more or less the same as recommending the FX 9590, except this time Intel at least has the performance edge - but unlike the FX 9590, the i9 is uncoolable. If a 280 mm AIO and a D15 fail to cool it, then what can? Is the minimum spec for it now a 360 mm AIO or a custom loop? Good one, Intel. I will wait till their flagship chips need an LN2 pot as the minimum cooler.
 
"the new Socket AM5. An LGA package with 1,718 pins, AM5"

AM5 will have more pins therefore it must be faster ;)
 
Mind your attitude. :slap::slap:

@Valantar Thanks for the technical explanation, it does sound about right. I knew it was for reasons along these lines and said so in my post.

And yes, it's funny how some people get all personal over a friggin' CPU. :laugh:

And yeah, it's been wondrous for my wallet. Contrary to that immature child above, my CPU does well over 60fps in all the games I play, even the latest, but it can't reach the magic 144fps, or even 120fps in many cases, although the experience is still surprisingly smooth. This thing probably has something like an 80-100% performance increase over my aged CPU, so it will have no trouble at all hitting those highs. Can't wait! :cool:
Can maintain well over 60 fps in the latest games with a 2700K? Thanks for proving just how delusional I thought you were with that asinine claim. You are so full of it that it is laughable. Even my OCed 4700K, which is quite a bit faster, could not maintain 60 fps in some games even 3 years ago, and most certainly had plenty of drops well below 60 fps. Knock yourself out with the last word, as there's no point in fooling with someone like you.
 
A nice overview of affected CPU architectures and their status is on Wikipedia.
Sadly that's incomplete, missing 7 CVEs from Intel guidance and a few recent microarchitectures.

Edit: Looking closer at the Intel site, it looks like Alder Lake is indeed vulnerable to CVE-2020-24511 and CVE-2020-8698, which Rocket Lake wasn't. They are supposedly fixed in microcode and hardware respectively, so most likely the release BIOSes are safe.
 
Fair enough advice, I suppose (if you don't need/want a new PC right now, especially in light of ridiculous graphics card prices - though I suspect in this particular case he'll hold on to his 2080 for the time being anyway). However, the problem is that we don't know which SKUs will get the 3D treatment; some indications say only octa-cores and up, some even only the 12 and 16 core parts, and none mention the 6-core - and the latter is just mind-boggling. Not only have they already pretty much abandoned the sub-$300 class so far, but with the 5600X remaining all they will offer here, they'll lose it completely. Even if they drop its price to $200 (unlikely), there still won't be any competition for the 12600K, especially given that motherboard availability and pricing will only improve with time.
The 5600X looks to be dead in the water if AMD doesn't lower its price.

 
Well, reviews are out, although for the wrong contender ...
Nonetheless, after "reviewing the reviews" ( errrr ... :laugh: :oops: ) a 5600/5600X will be more profitable for me if I want an upgrade (not that the 3600 is a slouch for my usage). The $300 12600K is, well ... ~$50 more than the cheapest 5600X I can find ...

Add to that the fact that I can keep my mobo ... and avoid Win 11 (for now) and avoid the skittish big.INTEL, eerrrr, I mean LITTLE scheduler, and all the cons I see in the reviews kinda outweigh the pros ...
Kinda ironic: "improved efficiency"/"not as efficient as Zen 3" (not the only one, but the most striking for me).

Wait and see is the right thing to do at the moment (and keep some upgrade path, I guess ... without going full throttle on a new platform).


The 5600X looks to be dead in the water if AMD doesn't lower its price.

Well, that's quite true ... ahhhh, whatever ... at least I could be a buyer of a dead-in-the-water CPU: I found some listings at 249 for a 5600X, and even cheaper second hand ('round 149/199). I guess future early adopters are selling; not gonna complain (it will still be cheaper than a new mobo+CPU :ohwell: )

Mmhhh, Intel innovated, I reckon, although it's not really appealing enough to make the switch again ... later maybe, who knows ...
And no, the gap between the two is not abyssal; they retook the crown, but looking at the whole picture (regardless of the next AMD product) the advantage is not that clear (pros/cons taken into account).
The 12600K is above a 5600X, but consumption is higher too, and price wise it's same same (not factoring in a new mobo/RAM ofc - well, my previous 6600K was priced around that, as it was 289 at the time I got it).

Always take everything into account before forming a choice/opinion.
 
That 4K gaming summary, everything 95-100%. I'll be using my 9900KS for a long time. Because it doesn't matter how fast my Excel is or isn't, it's fast enough.
 