
How to do Power Consumption Testing in CPU Reviews?

I don't really think you need to change your methodology on power consumption. I feel like anyone with half a brain can look at 4-5 different reviews, compare the power numbers, and make an informed decision. Personally, within reason, I don't care what a CPU draws power-wise as long as it can be cooled without exotic methods and the performance justifies it.
 
Last edited:
This thread is about how we test CPU power consumption. As long as we control the other variables, wall power is good at finding the difference in power draw between CPUs, but we don't get an exact number. Total system usage on a normalized system is what matters most in my opinion, but if you prefer CPU power alone, I respect that. Finding an exact consumption through software is not so interesting, I think, since different load scenarios can increase motherboard consumption etc.
Traditional motherboards also have 4/8-pin power connectors. If the CPU only gets its power through there, I'm sure something can be rigged up to measure this power consumption.

Taking the measurement from here should remove PSU efficiency from the equation, but not motherboard VRM efficiency, I think.
 
Last edited:
CPU/GPU alone results (HWiNFO/self-reported values are helpful here, or measuring at the power connectors if possible to verify them)

There's only one good way to measure power, and that's with a high-accuracy resistor. In particular: you put an accurate, but low-valued, resistor inline with the circuit (say 0.001 ohms), and then measure the voltage drop (with a very accurate amplification circuit). I have severe doubts that the current-sensing resistors on a CPU or GPU have any degree of real accuracy. Lithography can make good capacitors and/or transistors, but it's just not set up well to make accurate resistors.
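Just to put numbers on that idea, here's a rough sketch of the arithmetic; the shunt value, amplifier gain, and example reading below are made up for illustration, not taken from any real board:

```python
# Power measurement with a current-sense (shunt) resistor: the shunt sits in
# series with the rail, an amplifier boosts the tiny voltage drop across it,
# and Ohm's law gives the current.

SHUNT_OHMS = 0.001   # example: 1 mOhm precision shunt (assumption)
AMP_GAIN   = 50.0    # example gain of the sense amplifier (assumption)
RAIL_VOLTS = 12.0    # measured rail voltage

def power_from_shunt(amplified_volts):
    """Return power in watts drawn through the shunt."""
    drop = amplified_volts / AMP_GAIN   # actual drop across the shunt
    current = drop / SHUNT_OHMS         # I = V / R
    return RAIL_VOLTS * current         # P = V * I

# e.g. 0.6 V out of the amplifier -> 0.012 V drop -> 12 A -> 144 W
print(f"{power_from_shunt(0.6):.1f} W")
```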

Even if the resistors inside the chip were accurate (and again: they're not accurate at all), there's still the problem that those power-measurement circuits are "controlled" by the company themselves. If review sites rely upon the self-reported power-levels of CPUs / GPUs, there are subtle games that companies can play.

-------------

I feel like the only thing I'd ever trust is the raw power-levels (voltages and amps) running through the wires from the power-supply.

------------

The Kill-a-watt (or similar) seems most friendly. You hook up an accurate current-sensor inline to the 120V wall-outlet, and measure everything that goes into the power-supply. You calculate the idle of the machine, and assume idle is the "baseload". You then run the benchmark, and subtract out the "idle" energy. This would get things like PCIe 4.0 (which is well known to be more power-hungry than PCIe 3.0), but maybe that's worth reporting. There are already a wide variety of 120V / wall-outlet measurement devices (such as Kill-a-watt). And given the simplicity of this methodology, I'd be in favor of it.

The downside to Kill-a-watt / wall-outlet measurement is that the PSU itself consumes power (converting power to different voltages incurs losses, and your PSUs will change over time). If you isolated the 12V, 5V, and 3.3V lines and hooked up current sensors to each of those, you'd be more accurate (at least, you'd remove the differences between PSUs), and only have the motherboard, PCIe, RAM, and CPU getting measured. It'd be a lot more work to pull these power readings, but I'd appreciate it if that work was done.
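For what it's worth, the bookkeeping for that per-rail approach (and the idle-subtraction trick above) is trivial once you have the readings; here's a minimal sketch with invented sample numbers:

```python
# Per-rail power with an idle baseline subtracted, as described above.
# Each reading is (volts, amps) from a current sensor on that rail.

def rail_power(readings):
    """Sum V*I over the measured rails (12V, 5V, 3.3V)."""
    return sum(volts * amps for volts, amps in readings)

idle = rail_power([(12.0, 3.1), (5.0, 1.4), (3.3, 0.9)])   # example idle numbers
load = rail_power([(12.0, 14.6), (5.0, 1.6), (3.3, 1.0)])  # example benchmark numbers

print(f"idle baseline: {idle:.1f} W")
print(f"under load:    {load:.1f} W")
print(f"benchmark delta (load - idle): {load - idle:.1f} W")
```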
 
Last edited:
@chrcoluk you made some great posts that you deleted. I feel they add to this topic and are relevant. Want me to restore them?
Sure, go ahead. I deleted them as I didn't want to come across too strongly, and I'd already had my say in an earlier post.
 
Traditional motherboards also have 4/8-pin power connectors. If the CPU only gets its power through there, I'm sure something can be rigged up to measure this power consumption.

Taking the measurement from here should remove PSU efficiency from the equation, but not motherboard VRM efficiency, I think.
Yeah, this works well with a clamp meter, and it was how I tested motherboards, ensuring that the CPU was drawing the same on each board. It's also very useful in OC, as you can see the power jumps when increasing clocks very easily.
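In case anyone wants to turn a clamp reading into watts, the conversion is just P = V x I on the 12V cable; the 90% VRM efficiency below is an assumed placeholder, not a measured figure:

```python
# Convert a clamp-meter reading on the EPS 4/8-pin cable into power.
# This captures CPU + VRM losses; dividing by an assumed VRM efficiency
# gives only a rough estimate of what the CPU itself receives.

EPS_VOLTS = 12.0

def eps_power(clamped_amps, vrm_efficiency=0.90):
    """Return (power into the connector, rough power delivered to the CPU)."""
    into_connector = EPS_VOLTS * clamped_amps
    return into_connector, into_connector * vrm_efficiency

connector_w, cpu_w = eps_power(18.5)  # example: 18.5 A clamped on the EPS cable
print(f"{connector_w:.0f} W into the connector, ~{cpu_w:.0f} W at the CPU "
      f"(assuming 90% VRM efficiency)")
```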
 
There has been some discussion about power testing methodologies with Alder Lake.

Thoughts, requests, ideas, suggestions how to change this for the future?

Obviously I cannot spend multiple hours on testing power consumption alone.

A trick, if you are willing to build something simple:

Drill two separate holes (for thermocouples) in a copper cylinder that goes between the CPU and the heatsink, then use the thermal conductivity equation to get the power transfer.
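A rough sketch of the math behind it, with made-up dimensions for the cylinder (Fourier's law for one-dimensional conduction):

```python
import math

K_COPPER = 400.0  # W/(m*K), approximate thermal conductivity of copper

def conducted_power(diameter_m, hole_spacing_m, delta_t_k):
    """Fourier's law for 1-D conduction: P = k * A * dT / L."""
    area = math.pi * (diameter_m / 2) ** 2
    return K_COPPER * area * delta_t_k / hole_spacing_m

# Example: 30 mm cylinder, thermocouples 10 mm apart, 5 K between them
print(f"{conducted_power(0.030, 0.010, 5.0):.0f} W")  # ~141 W
```

One caveat: this only counts heat that actually flows through the cylinder, so anything dumped into the socket and board is missed and the figure will read somewhat low.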
 

Attachments: power.jpg
Last edited:
I'll just say this is one of the best charts that gives an idea of power / performance scaling. This has made its rounds with this release.

"Power/Performance Scaling" would probably be useful on all platforms. This would also be useful for all CPUs for someone who wants to for example run on an ITX platform.

So beyond just power usage here, since any OEM or any motherboard maker can set their default power limits to whatever they can handle, or for example tune them with automatic tuners like Asus "AI Tuner", an actual user won't know what they're getting unless they look.

This is OFC true of AMD as well.


[Attached chart: power/performance scaling]
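If anyone wants to build that kind of chart from their own runs, it's just score-per-watt at each power limit; a quick sketch with invented numbers:

```python
# Performance-per-watt at a few power limits; the numbers are invented
# purely to show the arithmetic behind a "power/performance scaling" chart.
results = {125: 24000, 150: 26000, 190: 27500, 241: 28400}  # power limit (W) -> score

base_limit = min(results)
base_score = results[base_limit]

for limit in sorted(results):
    score = results[limit]
    print(f"{limit:>3} W: {score / limit:6.1f} pts/W, "
          f"{100 * (score / base_score - 1):+5.1f}% vs {base_limit} W limit")
```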
 
A trick, if you are willing to build something simple:

Drill two separate holes (for thermocouples) in a copper cylinder that goes between the CPU and the heatsink, then use the thermal conductivity equation to get the power transfer.

I'm intrigued by this idea.

Wouldn't that make the cooling of the test-rig worse though?
 
Yes and no; one could easily use some larger air cooling to compensate, and it would not affect the functioning of the cylinder.

Now there is the intriguing idea that transistors are actually more efficient at higher temperatures, and sometimes cooling is left off for this reason; I have in mind power transistors and diodes, not CPUs - i.e. one might not want to cool the diode bridge in a power supply.

There's only one good way to measure power, and that's with a high-accuracy resistor. In particular: you put an accurate, but low-valued, resistor inline with the circuit (say 0.001 ohms), and then measure the voltage drop (with a very accurate amplification circuit). I have severe doubts that the current-sensing resistors on a CPU or GPU have any degree of real accuracy. Lithography can make good capacitors and/or transistors, but it's just not set up well to make accurate resistors.

Not sure one needs much precision as one can just calibrate to compensate; or laser cut if one really wants precision.
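A minimal sketch of what "calibrate to compensate" could look like: log the self-reported sensor against a trusted external meter at a few steady loads and fit a scale/offset (all numbers below are invented):

```python
# Least-squares fit of reference power (external meter) against the
# chip's self-reported power: reference ~= a * reported + b.

def fit_linear(xs, ys):
    """Least-squares fit: y ~= a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return a, b

sensor_w    = [35, 60, 95, 140, 180]   # self-reported package power (invented)
reference_w = [40, 68, 104, 151, 192]  # external shunt/clamp reading at the same loads (invented)

a, b = fit_linear(sensor_w, reference_w)
print(f"corrected power = {a:.3f} * reported + {b:.1f} W")
```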
 
Last edited:
Not sure one needs much precision as one can just calibrate to compensate; or laser cut if one really wants precision.

You're right, but I have severe doubts that anyone is calibrating these power-sensors in CPUs. (And even if they're calibrating them today, there seems to be no reason for companies to calibrate them in the future. A future CPU could be uncalibrated to save a few cents-per-die).

Laser-cut resistors cost fractions of a penny and don't need calibration due to their precision. But laser-cut thin-film is a completely different process from CPU design (which is semiconductors: silicon, p-doping, n-doping and so on). Semiconductors are terrible at making accurate resistances. When you laser-cut a resistor for precision ohms, you're using a thin film or other materials where the resistance is tightly controlled. An electronics engineer who adds a laser-cut resistor to a motherboard can measure power cheaply and accurately... but that's clearly not how these cores are designed.

---------

The power-sensors on CPUs / GPUs are good enough for developers to benchmark their own programs and determine which programs use more or less power. But they probably aren't accurate in terms of actual wattage.
 
This thread is about how we test CPU power consumption. As long as we control the other variables, wall power is good at finding the difference in power draw between CPUs, but we don't get an exact number. Total system usage on a normalized system is what matters most in my opinion, but if you prefer CPU power alone, I respect that. Finding an exact consumption through software is not so interesting, I think, since different load scenarios can increase motherboard consumption etc.
The problem is
1. You can't control every variable across decades of testing. You've got to have a new PSU at some point, you're gonna change the GPU, upgrade storage, etc. Then all your previous testing is irrelevant.
2. The data you get can't be compared to readers' own systems, therefore it can only be used to compare different CPUs in the same test.

Whole system power consumption in relation to CPU testing is a comparative value (it's irrelevant outside of the test), whereas I'd prefer to see an absolute one that can be compared all across the board.
 
Last edited:
I don't think anyone uses H.264 software decoding in 2021
True, but it was a good example of something to test that's multi-threaded and easily replicated

We aren't here to test GPU decoding of YouTube; we need something CPU-related - the best example I had was a locally stored offline website with flashing animations and looped video
 
There has been some discussion about power testing methodologies with Alder Lake.

Thoughts, requests, ideas, suggestions how to change this for the future?

Obviously I cannot spend multiple hours on testing power consumption alone.

I'd very much continue measuring full-system wall power, as I have a great data collection pipeline already set up for it. Data is collected digitally over Ethernet, processed, and it spits out TPU-style charts for me with minimal user interaction.
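Roughly speaking, the collection side is not much more than this kind of loop (a simplified sketch; the host, port, and line protocol here are placeholders, not my actual setup):

```python
import socket
import statistics

# Placeholder host/port; assumes a meter that streams one ASCII watt value per line.
METER_HOST, METER_PORT = "192.168.1.50", 5025

def collect_samples(count):
    """Read `count` wall-power samples from the meter."""
    samples = []
    with socket.create_connection((METER_HOST, METER_PORT), timeout=5) as sock:
        stream = sock.makefile("r")
        for _ in range(count):
            samples.append(float(stream.readline()))
    return samples

if __name__ == "__main__":
    watts = collect_samples(600)  # e.g. ten minutes at one sample per second
    print(f"average {statistics.mean(watts):.1f} W, peak {max(watts):.1f} W")
```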

I do have definite plans to add "Gaming Power Consumption", using Cyberpunk 2077 at highest settings, V-Sync off.
Well, you can't test a CPU directly without a motherboard, RAM, heatsink/fan, video output, storage/OS, input devices...
 
This here is a great example of the kind of information that helps people decide on hardware.

Now, they didn't exactly test a wide range of CPUs; they compared top-tier models from the current gen and one gen ago. But even so, a 60W difference is definitely the difference in which PSU you'd buy to go with the system. 60W more? I'm gonna need a bigger CPU cooler, or more case fans, etc.



[Attached chart: power consumption comparison]



If a 5600X were in that list, weighing its performance against its power consumption is 100% something a user could do to keep within the limits of an existing PSU they already own.
Hell, I have a 750W unit, and measured at my UPS I don't even use that much power... and I've got two 32" monitors and a 2.1 speaker system running off that UPS, included in that reading.
[Attached screenshot: UPS power reading]
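As a rough back-of-the-envelope (every figure below is a guess for illustration, not a measurement):

```python
# Rough PSU headroom check from a UPS/wall reading.
ups_reading_w  = 420     # total at the UPS, monitors and speakers included (guess)
monitors_w     = 2 * 40  # two 32" monitors (guess)
speakers_w     = 20      # 2.1 speaker set (guess)
psu_efficiency = 0.90    # assumed PSU efficiency at this load

dc_load_w = (ups_reading_w - monitors_w - speakers_w) * psu_efficiency
print(f"~{dc_load_w:.0f} W DC load on a 750 W unit "
      f"({100 * dc_load_w / 750:.0f}% of its rating)")
```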
 
1. You can't control every variable across decades of testing. You've got to have a new PSU at some point, you're gonna change the GPU, upgrade storage, etc. Then all your previous testing is irrelevant.
Yup, and then you'll just have to retest everything. Same with Windows 11, new application versions, etc. Welcome to the life of a reviewer.
 
Last edited:
This here is a great example of the kind of information that helps people decide on hardware.

Now, they didn't exactly test a wide range of CPUs; they compared top-tier models from the current gen and one gen ago. But even so, a 60W difference is definitely the difference in which PSU you'd buy to go with the system. 60W more? I'm gonna need a bigger CPU cooler, or more case fans, etc.



View attachment 224612
That's still only an example of various CPUs tested with a 6900XT and with various other components. It doesn't in any way compare to the system that you have. Everybody has different storage, PSU (efficiency), PCI-e and USB devices, etc., therefore the data presented here is only relevant when you compare different elements within the same data set. It can't be used to compare it to your own system or needs.

It's also a gaming test - the GPU might be under different load levels with different CPUs, so you can't extrapolate CPU power consumption from this. If you test with something like a 1050 Ti, fair enough, as it's always going to be 100% loaded with any modern CPU, but that makes newer CPUs sit idle more and naturally use less power, so... :confused:

Edit: I mean, if you look at the 5950X-based system using more power than the 3950X-based one, that's probably due to the GPU being under heavier load with the CPU that has the better IPC. One might falsely think that the 5950X needs more power under gaming, when this same chart might turn upside down with a 1050 Ti.

Edit 2: Also, nvidia's drivers have higher CPU overhead, so I would expect any CPU to use more power with an nvidia GPU of the same TDP. Let's not even go into more detail, like AMD using chip power draw while nvidia uses board power draw when presenting TDP numbers, which also makes system power data virtually incomparable.

Yup, and then you'll just have to retest everything. Same with Windows 11, new application versions, etc. Welcome to the life of a reviewer.
Wouldn't testing for individual component power consumption solve this issue? (along with giving readers some data that they can compare to their own needs)
 
Last edited:
Wouldn't testing for individual component power consumption solve this issue? (along with giving readers some data that they can compare to their own needs)
Not if you replace that component at some point. Also, you can't really fix CPU voltage and LLC to the 100% exact same value between motherboards.
 
Also, you can't really fix CPU voltage and LLC to the 100% exact same value between motherboards.
That's true. Though I still think that it would give more accurate and usable data than total system power consumption with a million variables that naturally change over time and across different systems.
 
That's true. Though I still think that it would give more accurate and usable data than total system power consumption with a million variables that naturally change over time and across different systems.
Taking things off auto is a problem, because users use default settings.
With LLC, that can even vary between BIOS versions of a board, let alone revisions... it's often best to just note the board and BIOS, and leave as much as possible on auto.
 
Taking things off auto is a problem, because users use default settings.
With LLC, that can even vary between BIOS versions of a board, let alone revisions... it's often best to just note the board and BIOS, and leave as much as possible on auto.
I agree. Auto works 99% of the time.

Anyway, things like LLC are variables that you can't eliminate when testing for any sort of power consumption. When you're testing for total system power, however, you're basically adding a million other variables that make the information you're looking for more obscure. It's almost like testing for total household power while listing the number of light bulbs, your fridge, etc. as part of the system tested.
 
One suggestion - it may become more relevant when (and if) you get your hands on any of the more modest Z690 or H670* boards: make some idle power measurements with the IGP only, so we can see what the minimum practical power draw of the Alder Lake platform is. Single monitor, long idle, both on and off - that would basically suffice.

*I see that TPU has never reviewed an H570 or H470 board, but now that overclocking makes less sense than ever, and given the sky-high prices of Z690, I think H-series boards will become more popular with enthusiasts than they are now.
 
I think everyone has been programmed to expect only what they are given.

Even when it is not relevant to how much power they actually use.

I've had HWiNFO64 running since early this morning, normal work day, about 4 hours in, on an overclocked 10850K. This is what I got.

38W minimum, 108W peak, 44.6W average - just CPU package power. In other words, during work my CPU consumes about 6W average over idle, and again it is overclocked.

View attachment 224400
Max power usage is only good to know for sizing the PSU; the rest is just noise unless you run rendering on the CPU 24/7, etc.
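Side note for anyone who wants to pull the same min/peak/average numbers out of their own logging run: a minimal sketch, assuming the log was exported to CSV with a "CPU Package Power [W]" column (the exact column name varies by tool and version):

```python
import csv

COLUMN = "CPU Package Power [W]"  # adjust to whatever your export calls it

def summarise(path):
    """Return (min, peak, average) package power from a sensor-log CSV."""
    values = []
    with open(path, newline="", encoding="utf-8", errors="ignore") as f:
        for row in csv.DictReader(f):
            try:
                values.append(float(row[COLUMN]))
            except (KeyError, ValueError, TypeError):
                continue  # skip footer/summary rows
    return min(values), max(values), sum(values) / len(values)

lo, hi, avg = summarise("sensor_log.csv")
print(f"min {lo:.1f} W, peak {hi:.1f} W, average {avg:.1f} W")
```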
 
Max power usage is only good to know for sizing the PSU; the rest is just noise unless you run rendering on the CPU 24/7, etc.
FPS is only relevant for gamers, so no need to include that in future GPU reviews :D. The vast majority that read these kinds of GPU reviews can't be gamers. @seth1911 I think you're missing the point of a review.
 
FPS is only relevant for gamers, so no need to include that in future GPU reviews :D. The vast majority that read these kinds of GPU reviews can't be gamers. @seth1911 I think you're missing the point of a review.

Yes, but tests on power-unlocked, max-turbo systems with artificial AVX workloads are constantly used to create a mythos that Intel chips use gobs of power as soon as you open a web browser. It's essentially a lie.
 
For myself, I just use wall power and some simple math. You could clamp the EPS 12V as well as using system power at the wall. To me it doesn't really matter much. If it runs hot with good cooling, then you know she really doesn't like polar bears and is hitting the juice a little hard. My kinda girl :)

YouTube says new GPUs will do 500W at stock, so figure maybe 600-750W if you clock it... probably going to need a new PSU anyway :D My 750W struggles with my system when I start leaning on it. At least that's what the meter says :D

Anywho... I am just rambling.
 