
Intel lying about their CPUs' TDP: who's not surprised?

Out of curiosity, do the PSU calculator websites take this real power draw into account?
Most probably just use TDP.

Logic and proof are here.
It could be a poorer-binned chip being tested, but I believe Intel keeps the better-binned dies for the K-series chips, since those are meant to run at high clock speeds and be "overclockable" too, whereas the non-K version runs at a lower clock speed and is locked out of overclocking.
The results are no doubt correct, and a lower-binned non-K CPU getting worse efficiency is not surprising.

I would still suspect this motherboard is doing something wrong. I would have liked Anandtech to look into that a little. Just turning off the limits (or moving them to where they do not matter) is an "interesting" approach. Based on what I have seen on previous sockets/platforms, non-Z motherboards usually do not pull these shenanigans, and in most cases non-K CPUs get stock settings or close to them. Anandtech is running a high-end board (and those are known to take an aggressive approach). I cannot help thinking about MCE back when Coffee Lake came out, when motherboards started applying their own interpretation of the settings unless you very specifically asked for stock :)
 
Well, that is because you got yourself a POS. You need to get yourself a real PSU, a Corsair TX or CX Gold! You've got the PSU jitters is all, and Corsair will take them all away for good! ;):)
I trust my Thermaltake Litepowers better than anything on the market.
Big-name 80+ Gold 600W PSU: 5 years.
Thermaltake Litepower:
11+ years.
I just think it's a little under-specced for what it's doing.
It's not, but I'm overly worried.
 
And you're just assuming CPU power draw is 0 when idle? Without knowing idle power draw, this method gives no usable information about CPU power draw under load, and isn't any more accurate than software methods for reading CPU power. In fact, it's probably less accurate than software readings.

The best way to get CPU power draw is to put a multi-meter in line with the 12V CPU plug and directly measure the amps going to the CPU. But even that isn't perfect, as it won't account for the inefficiency of the VRMs.
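The arithmetic behind that inline measurement can be sketched in a few lines. The readings and the 90% VRM efficiency below are illustrative assumptions, not measured values:

```python
# Sketch of the inline-measurement arithmetic described above.
# Measured volts/amps and the VRM efficiency are illustrative assumptions.

def cpu_power_from_eps(volts: float, amps: float, vrm_efficiency: float = 0.90) -> float:
    """Estimate power delivered to the CPU die from a reading taken
    at the 12V EPS connector, discounting VRM conversion losses."""
    input_power = volts * amps           # power entering the VRM stage
    return input_power * vrm_efficiency  # portion that reaches the CPU

# Example: 12.1 V at 14.0 A measured inline, assuming ~90% VRM efficiency
print(round(cpu_power_from_eps(12.1, 14.0), 1))  # ~152.5 W at the die
```

The efficiency term is the caveat from the post above: the meter sees what enters the VRMs, not what the silicon actually receives.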



This is a gray area. The TDP rating of the CPU includes the iGPU as well. It's a package rating. So technically, if you are trying to compare it to the rating on the box, the iGPU is included. But on the other hand, when will you ever see both the CPU cores and the iGPU fully loaded? Even during gaming that isn't going to happen.

Exactly... multimeter and segmented load comparisons get you closest, but still aren't entirely accurate.

But... the calculated draw from within software based on the actual processor metrics might still turn out to be a more accurate display of the actual power draw of that specific component, given those caveats.

This was the point @weekendgeek was trying to make. All measuring methods are in some way inaccurate because we're talking about a box of components linked together, AND with variable loads. This is the exact same thing that muddies the waters with Intel's TDP spec. They use different TDPs now, for peak draw for example, and they renamed the turbos to fit that new limit.

Still, the per-core suspected/calculated wattages and their totals from, say, HWInfo provide a very plausible measure of the actual power drawn by that specific component. After all, the CPU 'knows' what it needs, so why would the software not report that with accuracy? The polling is done at a pretty high rate. I think you're getting just as good an impression of the power draw when you use software, especially if you're just comparing to a Kill-a-Watt measuring from the wall (EVEN if you load separate components).
 
Not at all. I have done enough testing to know that regardless of the system being tested, the idle power usage is always very low, most of the time less than 50 watts. Therefore, testing power usage for one component or another is simple after a baseline is established.

And that doesn't tell you the idle power of the CPU itself so you have no baseline to tell CPU power draw. All your method tells you is how much extra the CPU draws under load, not how much it is actually drawing.
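The gap between the two positions comes down to simple arithmetic. A quick sketch, with all numbers being illustrative assumptions (and PSU efficiency ignored for simplicity):

```python
# Why a wall-meter delta understates absolute CPU power: it measures only
# the increase over idle, and the CPU is not at 0 W when idle.
# All numbers below are illustrative assumptions; PSU efficiency is ignored.

idle_wall = 50.0   # whole-system idle draw at the wall (W)
load_wall = 175.0  # whole-system draw during an all-core load (W)
cpu_idle = 10.0    # what the CPU itself draws at idle (invisible to the delta)

delta = load_wall - idle_wall   # what the delta method reports
actual_cpu = delta + cpu_idle   # roughly what the CPU actually draws

print(delta)       # 125.0 -> the "extra" power under load
print(actual_cpu)  # 135.0 -> closer to true package power
```

The delta method is fine for comparing loads against each other, but the unknown `cpu_idle` term is exactly the missing baseline being argued about here.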
 
To be honest, I don't see what's wrong with what Intel is doing here.

Using this technique, in the future, they can build 32-core monster CPUs with 10W TDP and then just blame the user when their rig melts down trying to pull 700W. What's not to like?
 
On these new CPUs, you should only be looking at the PL2 TDP. Because at that point nothing else matters.
 
On these new CPUs, you should only be looking at the PL2 TDP. Because at that point nothing else matters.
I'd like to see a rig built to withstand the harshest of heat producing benchmarks (Furmark for CPUs, effectively) at a big overclock, with lowish temperatures and just what kind of hardware it takes to do this. Might have to go cryo.
 
And that doesn't tell you the idle power of the CPU itself so you have no baseline to tell CPU power draw. All your method tells you is how much extra the CPU draws under load, not how much it is actually drawing.
When the CPU is idle, its power draw is minimal. The comparison between idle and full load is what matters. It's a very simple concept.
 
It's not even the wattage that's brutal; wattage was easy to deal with on certain previous generations. The cores are so small and dense now, but the power is still there... and that's what makes it hard to tame. Being stuck on their node obviously doesn't help.

Just think: if it wasn't for TSMC, AMD could be where Intel is sitting, because we all know they don't fab their own wares.

I don't know... to me it's not a big deal. I know to expect heat, and I am prepared for it.
 
When the CPU is idle, its power draw is minimal. The comparison between idle and full load is what matters. It's a very simple concept.
Doesn't sound so simple.
If so, why six pages to describe it?
 
So can we now state that no one (including Intel) is lying? That heat and power go hand in hand, and that this is all so confusing that strapping a two-pound ball of metal with fans onto it (or going with water) is all we have to cool with?
 
Work in this context doesn't have anything to do with CPU performance. Work in a physical system is all about the conversion of energy. What you're considering work here is only work in the conceptual sense.
:( No I'm not. Please read the entire exchange that prompted my comments to understand what I was saying instead of just pulling out of context my comment then claiming what I said is wrong!

When I said "performance" I specifically said, "Performance determines how much "work" can be accomplished in a given amount of time with a given amount of energy." So "performance" in this context was about the amount of "work" being done, not how fast it was performed.

And yes, it is indeed the conversion of energy. I agree completely.

Had you taken the time to read and understand the entire exchange, you would have seen that I was responding to the incorrect claim that "energy in equals dissipation". That is, dissipation in the form of heat. While heat is a big part of that "conversion of energy", it does not "equal" the energy "in" (being consumed), because some of that energy in is being converted into "work", with "work" being the crunching of numbers - running the program (flipping and flopping gates).

****

@Mods - Since it is apparent many are not reading the entire thread before commenting, and as such are taking comments out of context, and because it has now splintered off into many OT tangents, I propose the thread be closed.
 
By TDP they refer to the PL1 power draw, which is the maximum sustainable power draw of the CPU. PL2 is much higher than that, but by default only for a certain period of time called "Tau"; the PL2/Tau pair is different for every CPU.
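The PL1/PL2/Tau relationship can be sketched as a very coarse model. The real hardware tracks an exponentially weighted moving average of power rather than a hard timer, so this is a deliberate simplification; the wattages below match the 10900K defaults quoted later in the thread:

```python
# Simplified model of Intel's PL1/PL2/Tau behaviour: the CPU may draw up to
# PL2 for roughly Tau seconds of sustained load, after which draw is clamped
# to PL1. (The real algorithm uses an exponentially weighted moving average;
# this step model is a deliberately coarse sketch.)

PL1, PL2, TAU = 125.0, 250.0, 56.0  # watts, watts, seconds (10900K defaults)

def allowed_power(seconds_under_load: float) -> float:
    """Power ceiling at a given point into a sustained all-core load."""
    return PL2 if seconds_under_load < TAU else PL1

for t in (0, 30, 56, 120):
    print(t, allowed_power(t))
```

The point of the model: any benchmark shorter than Tau only ever sees PL2, which is why short and long tests disagree so sharply about "the" power draw.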

Talking about PL1 here, not PL2.
I limited PL1 to 100W in the BIOS and got a max of 98.9W in CoreTemp, so I consider that an accurate enough indicator of actual CPU package power draw.
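For anyone who wants to check numbers like that themselves on Linux, the package power these tools report comes from the CPU's RAPL energy counter, which can be sampled directly. A minimal sketch, assuming an Intel CPU with the powercap sysfs interface exposed (the path can differ per system, and reading it may require elevated permissions):

```python
# Sample the RAPL package energy counter (microjoules) twice and divide by
# the interval to get average watts. This is the same counter that tools
# like CoreTemp and HWiNFO report from. Linux-only; path is an assumption
# and may need adjusting for your system.

import time

RAPL_PATH = "/sys/class/powercap/intel-rapl/intel-rapl:0/energy_uj"

def counter_to_watts(e0_uj: int, e1_uj: int, interval_s: float) -> float:
    """Convert two energy-counter samples (microjoules) into average watts."""
    return (e1_uj - e0_uj) / 1e6 / interval_s

def package_power(interval_s: float = 1.0) -> float:
    """Average package power over the sampling interval, in watts."""
    with open(RAPL_PATH) as f:
        e0 = int(f.read())
    time.sleep(interval_s)
    with open(RAPL_PATH) as f:
        e1 = int(f.read())
    return counter_to_watts(e0, e1, interval_s)
```

Note this still measures at the package level, so it inherits the same iGPU-included caveat discussed earlier in the thread.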

"Machine 1" consumes 100W of energy per minute and gives off 95W in the form of heat. It moves 10 buckets of water 10 feet in that minute.

"Machine 2" consumes 100W of energy per minute and gives off 95W in the form of heat. But it moves 20 buckets of water 10 feet in that minute.
There's no real way to tell how much of the energy consumed by the CPU is utilized the way you state and how much is not.
So I agree with what was said: best to assume all the power going into the CPU comes out as heat (for designing a cooler, anyway).

some of that energy in is being converted into "work", with "work" being the crunching of numbers - running the program (flipping and flopping gates).
True, but how much? There's no way to know.
 

Some light reading:

 
Seems if you really want to know that much about power usage, well then, maybe go to school and figure it out!
Six going on seven pages of TDP, and it's not like you can fix it or do anything about it save one thing: DEAL WITH IT AND MOVE ON!
 
There's no real way to tell how much energy consumed by the CPU is utilized the way you state, and how much is not.
Huh? I NEVER, NOT ONCE, stated any way. Those were just examples that you could [hopefully, but apparently couldn't :(] use to picture the issue.
So I agree with what is said - best to assume all the power going in the CPU comes out as heat(for designing a cooler anyway).
"All" of the power? NO! And again, that is just wrong! If "all" of the power was being converted to heat, no "work" would be getting done.

Did you not understand my analogy using the incandescent light bulbs in post #68 above? Certainly "most" of the power going in is converted to heat. But "some" of that power going in is being converted into light. You can NOT leave that conversion to light out of the equation.
 
When the CPU is idle, its power draw is minimal. The comparison between idle and full load is what matters. It's a very simple concept.

No, the comparison between idle and full load is not what matters. We are talking about the actual power draw of the CPU, not the difference between idle and load. Those are two very different numbers. And you suggested this method as an alternative to software readings that you claim are less accurate. Sorry, but you're wrong. Your method is worse than the software readings (which are reading hardware sensors, by the way). Your method will not give actual CPU power draw, and actual CPU power draw is what matters.
 
No, the comparison between idle and full load is not what matters. We are talking about the actual power draw of the CPU, not the difference between idle and load. Those are two very different numbers. And you suggested this method as an alternative to software readings that you claim are less accurate. Sorry, but you're wrong. Your method is worse than the software readings (which are reading hardware sensors, by the way). Your method will not give actual CPU power draw, and actual CPU power draw is what matters.
The CPU manufacturer has a tool (it's called a gauge) and they use it to calculate the TDP, I am sure of it. The only way the consumer is ever going to know exactly how much the CPU alone is drawing is to isolate it and test it. Can you do this?
If not, all of this is just pure BS, and it's now off the rails.
Six pages, and it's just as confusing as the first post!
Your CPU takes in power; that power is expelled as heat that must be removed in some way. Once it maxes out or reaches the TDP limit, the CPU will cook and you will be pissed, so figure it out!
 
The CPU manufacturer has a tool (it's called a gauge) and they use it to calculate the TDP, I am sure of it. The only way the consumer is ever going to know exactly how much the CPU alone is drawing is to isolate it and test it. Can you do this?
If not, all of this is just pure BS, and it's now off the rails.
Six pages, and it's just as confusing as the first post!
Your CPU takes in power; that power is expelled as heat that must be removed in some way. Once it maxes out or reaches the TDP limit, the CPU will cook and you will be pissed, so figure it out!

The gist and conclusion of this topic is that if you want to be safe with Intel, take the highest TDP they publish for that CPU as your target to base cooling on.
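That rule of thumb amounts to a one-line calculation: size the cooler for the biggest number Intel publishes (PL2), plus some margin, rather than the headline TDP. The 20% margin below is an illustrative assumption, not an official figure:

```python
# Rule of thumb from the conclusion above: base cooling on the highest
# published power number (PL2), not the headline TDP.
# The 20% safety margin is an illustrative assumption.

def minimum_cooler_rating(pl2_watts: float, margin: float = 0.20) -> float:
    """Suggested minimum cooler TDP rating for sustained boost loads."""
    return pl2_watts * (1 + margin)

# 10900K-class chip with a 250W PL2:
print(round(minimum_cooler_rating(250.0)))  # 300 -> look for a ~300W-rated cooler
```

Sizing for PL1 (125W here) instead would be exactly the trap the thread describes: fine for the spec sheet, throttle city under sustained boost.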

Ergo, Intel is producing 125W-and-up CPUs across virtually half to two-thirds of the stack. That would be being honest with each other. They report those numbers only for the K CPUs, but those actually go further on peak boost. With 11th gen, they settled on 125W because there was no way back, but 10th gen...

Let's look at the 65W TDP 10900.

[screenshot attachment: measured power draw for the 10900]



Meanwhile, with 50W at idle... so there's at least 5W in there already from the CPU, being generous:

Somebody tell me how 140 minus 50 ends up being 'somewhere around 65'.

And let's not even mention 'max turbo' ;)
Oh yeah, and let's not use the IGP either, because that won't end well.

Also, strange how they can mention 'Up to' with every clock except Base, but they can't mention 'Up To' when it comes to TDPs. Very strange indeed.

Also somebody explain how I should view that 125W TDP given that Max Turbo already hits royally over that number.

[screenshot attachment: spec listing with 125W TDP and Max Turbo figures]
 
Hi,
Default clocks/turbo on the 10900K are pretty bad.
Just running R20, it will throttle like grandma's wheelchair before it ends, lol.
You at the least have to activate MCE (multi-core enhancement) and remove all limits, or you'd be very disappointed in your score.
 
Also somebody explain how I should view that 125W TDP given that Max Turbo already hits royally over that number.

140W is royally over 125W? And those are whole-system numbers, not just the CPU.
 
Just running R20 it will throttle like grandmas wheel chair before it ends
Here is a 10850K set to run at the same default speed as a 10900K. As R20 is just finishing, the CPU is still running at its full rated speed.

[screenshot attachment: 10850K finishing R20 still at its full rated speed]


The Intel recommended default turbo power limits for the 10900K are 125W long term and 250W short term. The default turbo time limit is 56 seconds. R20 is a short test. A 10900K should have no problem completing R20 at full speed without a hint of throttling.
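Using the default limits quoted above, the short-test-vs-long-test difference falls straight out of the arithmetic. A sketch using the same coarse step model (run at PL2 until Tau expires, then PL1), which simplifies Intel's actual moving-average algorithm:

```python
# Why a short benchmark looks great and a long one throttles: average power
# over a run of length T, with a Tau-second boost window (coarse step model
# of Intel's algorithm; values are the 10900K defaults quoted above).

PL1, PL2, TAU = 125.0, 250.0, 56.0  # watts, watts, seconds

def average_power(run_seconds: float) -> float:
    """Average package power over a sustained all-core run of given length."""
    if run_seconds <= TAU:
        return PL2                          # the whole run fits in the boost window
    boosted = PL2 * TAU                     # energy spent at PL2 (joules)
    sustained = PL1 * (run_seconds - TAU)   # energy spent clamped at PL1
    return (boosted + sustained) / run_seconds

print(average_power(45))             # 250.0 -> a short R20-style run never throttles
print(round(average_power(600), 1))  # 136.7 -> a long R23-style run averages near PL1
```

So a 45-second R20 pass runs flat-out at PL2, while a ten-minute R23 run spends over 90% of its time pinned at PL1: both results are "correct", they just measure different regimes.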

In a longer test like R23, then the turbo power limit will drop to 125W and it will be throttle city.

Rather than "Intel is lying about TDP", the real problem is that Intel CPUs cannot deliver their full rated performance indefinitely once they drop down to their rated TDP. Most consumers do not understand this. Their mobile CPUs do the same thing: long-term throttling so they do not exceed rated TDP.

Intel is like a shady used car salesman. They only tell you what you want to hear. Run a quick R20 test in the store and everything looks great. Head out to the mountains and try to go up a long grade and your shiny new car will be throttling along in the slow lane.
 
In my head, a CPU should not go over its TDP.
Sure, by 10-30W for turbo,
but these new Intel CPUs are like double their TDP.
That means if you don't have enough headroom in your PSU: POOF.
 