
Be careful when recommending B560 motherboards to novice builders (HWUB)

Joined
Jan 14, 2019
Messages
9,949 (5.13/day)
Location
Midlands, UK
System Name Nebulon-B Mk. 4
Processor AMD Ryzen 7 7800X3D
Motherboard MSi PRO B650M-A WiFi
Cooling be quiet! Dark Rock 4
Memory 2x 24 GB Corsair Vengeance EXPO DDR5-6000
Video Card(s) Sapphire Pulse Radeon RX 7800 XT
Storage 2 TB Corsair MP600 GS, 2 TB Corsair MP600 R2, 4 + 8 TB Seagate Barracuda 3.5"
Display(s) Dell S3422DWG, 7" Waveshare touchscreen
Case Kolink Citadel Mesh black
Power Supply Seasonic Prime GX-750
Mouse Logitech MX Master 2S
Keyboard Logitech G413 SE
Software Windows 10 Pro
Benchmark Scores Cinebench R23 single-core: 1,800, multi-core: 18,000. Superposition 1080p Extreme: 9,900.
Don't disagree. The problem I have with these cheap LGA1200 boards is that they limit boost performance of even lower-midrange chips. It's bad for uninformed consumers too, who will look at reviews of an 11400F in a high-end Z590 board, where it is likely running power-unlocked at 200 W, then go and buy the cheapest B560 or H510 board and find the performance is two-thirds of what the review showed, because these boards have PL2 limits below 100 W...

I think this is a much more realistic scenario than running a 5900X/5950X (or the Zen 2 parts) in a B450M-A Pro Max or a similar $50 AM4 board...

3900X: $400, 5900X: $600+. Bottom-of-the-barrel AM4 board: $50-60.
11400F: $170. Bottom-of-the-barrel H510/B560 board: $70-90.
That is true, though I don't expect those uninformed consumers to buy any high-end cooling solution with their $70 motherboard, so they probably can't even handle 200+ W anyway. Limiting power to what their cheap-ass motherboard and worthless Intel box cooler can handle is the safe option.

One party says Intel is lying about TDP numbers, because with proper cooling and a proper motherboard, their chips clearly eat more power than stated on the box. The other party says Intel is lying about turbo frequencies, because cheap motherboards don't unlock the power targets that would allow the CPUs to run at the designated speeds. Who's right? I think neither, as even Intel's TDP numbers are only valid with locked power targets (and Intel gives motherboard manufacturers a free hand here), and all turbo speeds are "up to" values nowadays. You may or may not be able to reach them depending on your system.

Hardware Unboxed tried to make big news out of this, but my rule of thumb has always been the same: look at the VRM. If it has no heatsink on it, then either stay away, or never put anything more powerful than a Pentium or Ryzen 3 in it. It's good that they came out with the video to help uninformed people be a little less uninformed (though I'm not sure how many non-enthusiasts watch their videos before buying), but the existence of cheap **** motherboards is not B560's fault.

Edit: Another must-watch from Hardware Unboxed. It's a bit long, but explains the situation very well. When Intel advertises a CPU as a "65 W, 4.4 GHz all-core turbo" part, it no longer means 65 W and 4.4 GHz all-core. It means 65 W or 4.4 GHz all-core, depending on motherboard VRM, cooling capacity and BIOS settings. Every CPU is designed to run at least at its base frequency. How close you get to max. turbo is up to you. It doesn't look as messy to me as it's made out to be; if anything, I find AMD's turbo and TDP situation much more confusing. My Ryzen 9 5950X never kept to its 105 W TDP when nearing max. turbo (it was more like 130-135 W), but my Ryzen 3 3100 doesn't even come close to its TDP, maxing out at about 50 W. Here's the video:

 
Joined
May 2, 2017
Messages
7,762 (3.03/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
No, when the 3000 series was coming out, they advertised PBO and gave performance numbers showing what it was capable of.
Well, we can agree to disagree then, because I very much remember them advertising PBO and showing the performance difference it makes. Those weren't the only performance numbers they gave, but they did include PBO performance data in their advertising.
Because with PBO enabled (a feature AMD built into their processors and advertises as such), there are B550 boards that do the same thing.

Plus, none of the B560 boards limited the processors to their base clocks; they were all able to boost when run at stock configurations within Intel's specs.
There's a difference between advertising an optional feature (disabled at stock) that can increase performance, while explicitly pointing out that it must be enabled, vs. advertising a level of stock performance that is contingent on uncommunicated factors that can't easily be identified.

Also, what does it matter if it's advertised if it's not stock? Intel advertises the overclocking capabilities of their CPUs, so to be consistent, you'd need to include that as well.

In this case, the situation is as follows:
With Intel, there is a 30-40% performance variance with the same CPU between nominally similar motherboards, even on midrange CPUs.
With AMD, there is the expected +/- 5% variance that comes with motherboard design and BIOS tuning, plus an optional mode that can be enabled where lower-end boards will fail to keep up with higher-end ones.

Are you really saying you don't see the difference?
 
Joined
Jan 14, 2019
There's a difference between advertising an optional feature (disabled at stock) that can increase performance, while explicitly pointing out that it must be enabled, vs. advertising a level of stock performance that is contingent on uncommunicated factors that can't easily be identified.
That's a matter of interpretation, imo. Stock performance is the base clock. All turbo speeds are "up to" levels, which means that there is a chance for you to reach those levels, or stay anywhere between base and max. turbo with a good enough motherboard and cooling. It's not guaranteed, though. That's why 125 W K-SKUs have much higher base clocks, but pretty much the same turbo clocks as 65 W locked parts.

Again, an educated decision is needed before buying anything. You can't assume that the 11700 reaches 4.9 GHz within its 65 W power target while the 11700K only goes 100 MHz higher with almost double the TDP. It's just not logical. ;) Even if one knows nothing about computers, they should at least ask someone who does.

Edit: Speaking of educated decisions, what should I upgrade to? Core i7-11700 locked to 65 W (or whatever I can cool in my SFF box), or wait for the Ryzen 7 5700GE and hope that its TDP means something? :D
 
Joined
Sep 3, 2019
Messages
2,993 (1.75/day)
Location
Thessaloniki, Greece
System Name PC on since Aug 2019, 1st CPU R5 3600 + ASUS ROG RX580 8GB >> MSI Gaming X RX5700XT (Jan 2020)
Processor Ryzen 9 5900X (July 2022), 150W PPT limit, 79C temp limit, CO -9~14
Motherboard Gigabyte X570 Aorus Pro (Rev1.0), BIOS F37h, AGESA V2 1.2.0.B
Cooling Arctic Liquid Freezer II 420mm Rev7 with off center mount for Ryzen, TIM: Kryonaut
Memory 2x16GB G.Skill Trident Z Neo GTZN (July 2022) 3600MHz 1.42V CL16-16-16-16-32-48 1T, tRFC:288, B-die
Video Card(s) Sapphire Nitro+ RX 7900XTX (Dec 2023) 314~465W (390W current) PowerLimit, 1060mV, Adrenalin v24.4.1
Storage Samsung NVMe: 980Pro 1TB(OS 2022), 970Pro 512GB(2019) / SATA-III: 850Pro 1TB(2015) 860Evo 1TB(2020)
Display(s) Dell Alienware AW3423DW 34" QD-OLED curved (1800R), 3440x1440 144Hz (max 175Hz) HDR1000, VRR on
Case None... naked on desk
Audio Device(s) Astro A50 headset
Power Supply Corsair HX750i, 80+ Platinum, 93% (250~700W), modular, single/dual rail (switch)
Mouse Logitech MX Master (Gen1)
Keyboard Logitech G15 (Gen2) w/ LCDSirReal applet
Software Windows 11 Home 64bit (v23H2, OSB 22631.3155)
My Ryzen 9 5950X never kept its 105 W TDP when nearing max. turbo (it was more like 130-135 W), but my Ryzen 3 3100 doesn't even come close to TDP, maxing out at about 50 W.

It's been said more than a few times on TPU, but I guess it's really frustrating (not to me). Complicated, maybe? I'm not arguing with that...

AMD's TDP designation is not about max power consumption. AMD never said it was, at least not for the latest generations; I believe this includes the FX series too.
It specifies the minimum cooling solution, under specific temperatures (CPU die surface and cooler ambient), needed to get the advertised stock performance. And by stock they mean "regular" boosting (not PBO).
In other words, they are talking about heat dissipation towards the cooler under specific circumstances, not about total consumption.

This is the equation they use:

TDP (Watts) = (tCase°C - tAmbient°C)/(HSF θca)

tCase°C = CPU die surface temp
tAmbient°C = cooler ambient temp (inside the case, or the room if it's caseless)
HSF θca = thermal resistance of the heatsink/fan
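In code, the formula is just a thermal-resistance calculation. A quick sketch (the input values below are illustrative, roughly matching AMD's published example for 105 W parts, and are assumptions, not quoted from the attachment):

```python
def amd_tdp(t_case_c, t_ambient_c, theta_ca):
    """AMD's TDP definition: the heat (in W) that a cooler with thermal
    resistance theta_ca (degC per W) can move while holding the die
    surface at t_case_c with air at t_ambient_c at the cooler."""
    return (t_case_c - t_ambient_c) / theta_ca

# Illustrative values: 61.8 degC die surface limit, 42 degC cooler
# ambient, 0.189 degC/W cooler -> roughly the 105 W class
print(round(amd_tdp(61.8, 42.0, 0.189)))  # 105
```

Note that power consumption appears nowhere in the inputs, which is the point being made above: the result is a cooling requirement, not a consumption figure.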

[Attachment: AMD slide showing the TDP equation]

And here is an example

[Attachment: AMD's worked TDP example]
 
Joined
Jan 14, 2019
It's been said more than a few times on TPU, but I guess it's really frustrating (not to me). Complicated, maybe? I'm not arguing with that...

AMD's TDP designation is not about max power consumption. AMD never said it was, at least not for the latest generations; I believe this includes the FX series too.
It specifies the minimum cooling solution, under specific temperatures (CPU die surface and cooler ambient), needed to get the advertised stock performance. And by stock they mean "regular" boosting (not PBO).
In other words, they are talking about heat dissipation towards the cooler under specific circumstances, not about total consumption.

This is the equation they use:

TDP (Watts) = (tCase°C - tAmbient°C)/(HSF θca)

tCase°C = CPU die surface temp
tAmbient°C = cooler ambient temp (inside the case, or the room if it's caseless)
HSF θca = thermal resistance of the heatsink/fan

View attachment 201455

And here is an example

View attachment 201456
I know this very well. It's still complicated, and makes me wish this wasn't the case.

Like I mentioned, I tried to upgrade my 3100 to a 3600 under the assumption that the same 65 W TDP meant similar heat to deal with - which was backed up by people telling me how easy it is to cool a 3600. I was wrong. The 3600 might be easy to cool with a tower cooler, but I went through hell with it in my slim case.

It's funny that Intel is called out for lying about power requirements for max turbo, but nobody calls out AMD for making up a BS formula for TDP that gives a final result in W even though the formula has nothing to do with power. AMD does a better job at power efficiency and motherboard VRM requirements, but...

Intel at least means Watts by "W", making it easier to think about cooling. You just don't know what kind of performance you get with enforced power limits... which makes me extremely conflicted about my upgrade path.
 
Last edited:
Joined
Mar 31, 2014
Messages
1,533 (0.42/day)
Location
Grunn
System Name Indis the Fair (cursed edition)
Processor 11900k 5.1/4.9 undervolted.
Motherboard MSI Z590 Unify-X
Cooling Heatkiller VI Pro, VPP755 V.3, XSPC TX360 slim radiator, 3xA12x25, 4x Arctic P14 case fans
Memory G.Skill Ripjaws V 2x16GB 4000 16-19-19 (b-die@3600 14-14-14 1.45v)
Video Card(s) EVGA 2080 Super Hybrid (T30-120 fan)
Storage 970EVO 1TB, 660p 1TB, WD Blue 3D 1TB, Sandisk Ultra 3D 2TB
Display(s) BenQ XL2546K, Dell P2417H
Case FD Define 7
Audio Device(s) DT770 Pro, Topping A50, Focusrite Scarlett 2i2, Røde VXLR+, Modmic 5
Power Supply Seasonic 860w Platinum
Mouse Razer Viper Mini, Odin Infinity mousepad
Keyboard GMMK Fullsize v2 (Boba U4Ts)
Software Win10 x64/Win7 x64/Ubuntu
Hardware Unboxed tried to make big news out of this, but my rule of thumb has always been the same: look at the VRM. If it has no heatsink on it, then either stay away, or never put anything more powerful than a Pentium or Ryzen 3 in it. It's good that they came out with the video to help uninformed people be a little less uninformed (though I'm not sure how many non-enthusiasts watch their videos before buying), but the existence of cheap **** motherboards is not B560's fault.
I think one of the reasons it's more relevant now than before is that B560 has memory overclocking/XMP support, which previous non-Z series boards lacked. This makes the interest in boards with this chipset considerably greater.

I think the TDP figures in general are pretty bullshit. Intel should mandate stricter adherence to PL2 and not allow a PL2 that is on the order of 4x the PL1/TDP rating. They should separate low-power SKUs from high-power SKUs more clearly, and motherboards should be forced to list the power they support. Same goes for AMD: I really don't see why they need this bullshit formula for their TDPs when they clearly have a power limit number that is actually being used (or they should list a power limit figure and keep the "TDP" figure for cooling purposes as an engineering figure, not a marketing figure).

Something I think the community and buyers need to understand better is that current draw varies a lot between workloads: at 4 GHz, Prime95 may suck down 150 W on a RKL 6-core, while a game at 4 GHz (even a multi-core one) might only use 50 W. If you want deterministic performance, you always have to go by the worst case, so a RKL 6-core would only be able to do 4 GHz in a 150 W power envelope; if you want the best performance in all scenarios, you have to accept that the clock speed will be unpredictable and vary with the workload. Either way, a stricter power limit system and SKUs separated by power would make it easier for reviewers to do their job... But then again, it's not exactly in Intel's or AMD's interest to make things easy for reviewers, is it?
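The worst-case framing can be made concrete with a trivial sketch (the 150 W Prime95 and 50 W game figures are from the post above; "blender" and its number are invented middle data points for illustration):

```python
# Hypothetical per-workload package power at a fixed 4 GHz on a 6-core.
# 150 W (Prime95) and 50 W (a game) come from the discussion above;
# "blender" is a made-up middle data point.
power_at_4ghz_w = {"prime95": 150, "blender": 110, "game": 50}

# Deterministic 4 GHz operation must budget for the hungriest workload:
deterministic_envelope = max(power_at_4ghz_w.values())
print(deterministic_envelope)  # 150
```

So a fixed-frequency spec wastes 100 W of headroom in the game, while a fixed-power spec makes the game clock higher than Prime95; you can pin one of the two, never both.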
 
Joined
Jan 14, 2019
I think one of the reasons it's more relevant now than before is that B560 has memory overclocking/XMP support, which previous non-Z series boards lacked. This makes the interest in boards with this chipset considerably greater.

I think the TDP figures in general are pretty bullshit. Intel should mandate stricter adherence to PL2 and not allow a PL2 that is on the order of 4x the PL1/TDP rating. They should separate low-power SKUs from high-power SKUs more clearly, and motherboards should be forced to list the power they support. Same goes for AMD: I really don't see why they need this bullshit formula for their TDPs when they clearly have a power limit number that is actually being used (or they should list a power limit figure and keep the "TDP" figure for cooling purposes as an engineering figure, not a marketing figure).

Something I think the community and buyers need to understand better is that current draw varies a lot between workloads: at 4 GHz, Prime95 may suck down 150 W on a RKL 6-core, while a game at 4 GHz (even a multi-core one) might only use 50 W. If you want deterministic performance, you always have to go by the worst case, so a RKL 6-core would only be able to do 4 GHz in a 150 W power envelope; if you want the best performance in all scenarios, you have to accept that the clock speed will be unpredictable and vary with the workload. Either way, a stricter power limit system and SKUs separated by power would make it easier for reviewers to do their job... But then again, it's not exactly in Intel's or AMD's interest to make things easy for reviewers, is it?
To be honest, imo Watts meaning Watts should be mandated either by the industry or by law (or both). An R3 3100 that eats 50 W under full load and reaches 70 °C with the crappy boxed cooler at low revs in a slim case surely can't fall into the same TDP category as an R5 3600 that maxes out its 88 W PPT in the blink of an eye and can't be cooled without a tower cooler and lots of airflow. That's just a straight-out lie from AMD.
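For what it's worth, the 88 W figure isn't arbitrary: AM4's stock package power limit (PPT) is commonly cited as 1.35x the TDP number. A quick sketch (assuming that ratio holds, as it does for the 65 W and 105 W desktop parts):

```python
def am4_stock_ppt(tdp_watts):
    # Commonly cited AM4 stock relationship: PPT = 1.35 x TDP.
    # (Assumption: applies to Ryzen 3000/5000 desktop parts at stock.)
    return round(tdp_watts * 1.35)

print(am4_stock_ppt(65))   # 88  (the R5 3600's PPT)
print(am4_stock_ppt(105))  # 142 (the 105 W parts' PPT)
```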

As for Intel, I would benchmark everything both with and without enforced power limits. A PL2 that's hundreds of Watts above PL1 makes TDP just as much of a lie as AMD's figures. Same with the stupid Thermal Velocity Boost that gives you an extra 100 MHz if... and if... and if... and if... but that's a different story.

Edit: Fun fact: GPUs do the same with their clocks. You get slight variations in speed depending on VRM, cooling capacity and the type of workload. The only reason nobody complains about them is that they generally boost higher than their advertised boost clocks while staying within TDP - at least Nvidia cards do, while AMD measures chip-only power consumption, which is just as shady as their formula for CPU TDP.

Let's just agree that this whole boosting business makes TDP a lot more complicated than it needs to be - unless only the TDP is advertised and boost varies by circumstances, or vice versa, which of course wouldn't look as nice on paper as a 5+ GHz, 8-core, 65 W chip.
 
Joined
Mar 31, 2014
As for Intel, I would benchmark everything both with and without enforced power limits. A PL2 that's hundreds of Watts above PL1 makes TDP just as much of a lie as AMD's figures. Same with the stupid Thermal Velocity Boost that gives you an extra 100 MHz if... and if... and if... and if... but that's a different story.
The problem with Intel's PL figures is that they are basically completely open-ended: a motherboard manufacturer or SI is allowed to set any PL2 limit and duration and still be within spec by Intel's numbers (I believe HUB touched on this in their follow-up video).

I wouldn't have a problem with a processor being specced at a 250 W PL2 and a 65 W PL1 as long as a) it was transparent that the 250 W PL2 limit existed for a specific duration, and b) motherboard vendors were forced to comply with that limit (and thereby build motherboards capable of delivering it). But we live in a world where neither is the case...
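To illustrate what PL1/PL2/tau are supposed to do together, here's a toy model (all numbers hypothetical; real silicon tracks an exponentially weighted moving average of power rather than a simple countdown, but the overall shape is similar):

```python
def clamp_power(demand, pl1, pl2, tau, dt=1.0):
    """Toy PL1/PL2/tau model: demand above PL1 is allowed up to PL2
    while a turbo budget of tau seconds lasts; light load refills it."""
    out, budget = [], tau
    for d in demand:
        limit = pl2 if budget > 0 else pl1
        p = min(d, limit)
        if p > pl1:
            budget = max(0.0, budget - dt)   # draining the turbo budget
        elif d <= pl1:
            budget = min(tau, budget + dt)   # light load refills it
        out.append(p)
    return out

# Hypothetical 65 W part: PL1=65, PL2=224, tau=56 s, 250 W sustained demand
trace = clamp_power([250] * 120, pl1=65, pl2=224, tau=56)
# First 56 seconds run at PL2 (224 W), then it falls back to PL1 (65 W)
```

A board that honours tau produces exactly this trace; a board that sets tau effectively infinite just returns 224 W forever, which is the motherboard-dependent behaviour being complained about.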
 
Joined
Jan 14, 2019
The problem with Intel's PL figures is that they are basically completely open-ended: a motherboard manufacturer or SI is allowed to set any PL2 limit and duration and still be within spec by Intel's numbers (I believe HUB touched on this in their follow-up video).

I wouldn't have a problem with a processor being specced at a 250 W PL2 and a 65 W PL1 as long as a) it was transparent that the 250 W PL2 limit existed for a specific duration, and b) motherboard vendors were forced to comply with that limit (and thereby build motherboards capable of delivering it). But we live in a world where neither is the case...
Very true.

As an SFF maniac, I would much rather have no PL2 at all. I believe most, if not all, motherboards let you customise these things, so you can set a PL2 that's the same as your PL1, but then who knows how much performance you're giving up. I'd love to see benchmarks that cover this so I could decide what to upgrade to. Knowing how much faster CPU X is than CPU Y at full power is of no use to me.

Edit: typo
 
Joined
Mar 31, 2014
Personally at least, I think that with SFF you really should put in the effort to tune things properly around your cooling (and other constraints)... At least basically all retail motherboards let you fiddle with the power limits (even if some of them are wholly inadequate for sustained boost under high-current workloads). On Haswell, for example, some people (myself included) used power limits to emulate an AVX offset.

Definitely it would be interesting to see some testing of desktop CPUs at different power limits, I know some laptop reviewers already do so.
 
Joined
May 2, 2017
I think the TDP figures in general are pretty bullshit. Intel should mandate stricter adherence to PL2 and not allow a PL2 that is on the order of 4x the PL1/TDP rating. They should separate low-power SKUs from high-power SKUs more clearly, and motherboards should be forced to list the power they support. Same goes for AMD: I really don't see why they need this bullshit formula for their TDPs when they clearly have a power limit number that is actually being used.
AMD's TDP formula was explicitly created as a reverse-engineering of Intel's formula, so that cooler manufacturers and SIs could treat the two equivalently in their design processes. Of course, this has been undermined by Intel, which has since been on a path of making TDP an ever less meaningful metric, but still - that's not AMD's fault.
To be honest, imo Watts meaning Watts should be mandated either by the industry or by law (or both). An R3 3100 that eats 50 W under full load and reaches 70 °C with the crappy boxed cooler at low revs in a slim case surely can't fall into the same TDP category as an R5 3600 that maxes out its 88 W PPT in the blink of an eye and can't be cooled without a tower cooler and lots of airflow. That's just a straight-out lie from AMD.
This goes to both you and @GorbazTheDragon: you're approaching this from the wrong angle, which either stems from a fundamental misunderstanding or from wanting something that doesn't exist. The issue: TDP is not a consumer-facing specification denoting power draw. It never has been. Historically it has been roughly equivalent to this, but this is more coincidental than intentional. TDP is a specification for SIs and cooler OEMs to design their cooling solutions and system designs around. If TDP was meant to denote power draw directly, it would for example be a guide for motherboard makers in designing their VRM setups - but it's not, and there are specific specifications (dealing with the relevant metrics, volts and amps) for that. You can disagree with how TDPs are used in marketing with regards to this - I definitely do! - but you can't just transfer it into being something it isn't.

TDPs serve as broad categories for SIs and cooler makers to design for, and are explicitly directed at large OEMs. This is where the 2/3-tier (historically ~95/65/35 W) TDP systems come from - they're guidelines for designing cooling systems for three tiers of CPUs. There has always been variance within these tiers - just as with laptops, where a 15 W i7 always needs more power than a 15 W i5. Treating TDP as an absolute number for power draw has always been wrong; it's just happened to be roughly accurate at times. But it's also typically been far too high - like the R3 3100 you mention, or the i5-2400 in my modded Optiplex 990 SFF (nominally 95 W, yet I've never gotten it past ~78 W).

This is where the current issues stem from - TDP used to be reasonably close to normal power draw, with non-high-end CPUs often coming in noticeably below that number. As technology has progressed, competition has tightened, and Intel has been stuck on 14nm(+++++++++++++) while needing to increase core counts, Turbo - which has always been explicitly temporary, variable and potentially above TDP in power draw - has become more important and has started pushing the silicon closer to its limits. Turbo clocks have diverged much further from base clocks than ever before (the aforementioned i5-2400 has a 300 MHz Turbo on top of its 3.1 GHz base clock), while the definition of TDP has stayed the same, and the categories have also stayed the same - largely because Intel can't change them without upsetting their OEM partners (if they changed the 65 W class TDP to something more realistic like 105 W, every OEM out there would have to completely redesign their SFF business systems to maintain base performance).

(Of course, we also need to take into account that stock power draws (including stock boosting behaviour) are much higher today than 5-10 years ago. An i7-7700K stuck pretty tightly to its 91 W TDP in terms of idle-load delta power draw, and only scaled to ~120 W when OC'd. These days AMD's 105 W CPUs boost to 138/144 W, and Intel's 125 W CPUs boost to 170-250 W.)

The reason for these issues is that Intel is using an OEM-facing design class denomination in consumer products without changing it or otherwise informing users what it means. This of course leads to a lot of confusion. But it also makes no sense for them to change those classes in the OEM world - which is easily 10x the size of CPU retail. A more sensible solution would be a consumer-facing "power class" or some such to denominate something closer to power draw. But that would ultimately look like they're suddenly saying their CPUs use far more power, which means that such a move would never be sanctioned by corporate and PR.

Of course this is only tangentially related to the issue at hand here - it's one root cause of it, but indirectly. The gap between base clocks (and power at those clocks) and turbo clocks (and the power at those clocks) is now large enough that due to Intel not enforcing their PL1, PL2 and tau specs with motherboard manufacturers, we now have a situation where the same CPU can perform very, very differently depending on the motherboard you put it in, which is not how things are expected to work. Intel could easily do this - but it would also make their CPUs look worse in reviews, so again, corporate and PR would never accept that. So instead, we get this quasi-sanctioned motherboard-dependent not-quite-auto-OC situation where the ultimate performance of a system is far more variable than ever before. Which of course sucks for end users and DIYers. But Intel (and AMD, though potentially a tad less) ultimately doesn't care about us - they care far more about the OEMs and laptop makers that represent the majority of their sales.
Very true.

As an SFF maniac, I would much rather have no PL2 at all. I believe most, if not all, motherboards let you customise these things, so you can set a PL2 that's the same as your PL1, but then who knows how much performance you're giving up. I'd love to see benchmarks that cover this so I could decide what to upgrade to. Knowing how much faster CPU X is than CPU Y at full power is of no use to me.
That would be great, and I'd completely agree if reviews covered this. At least TPU tests at both Intel's official spec and with unlocked power limits. But depending on just how SFF you go, there'll always be tuning (and the related stability testing) needed, which would drastically increase reviewers' workload. And of course binning/silicon lottery outcomes dramatically affect this. So it's not very likely to happen.

But giving up boost isn't happening. CPU boosting represents massive performance gains in everyday tasks such as web browsing and office work - even in very thermally limited systems. Which is why most OEMs let their CPUs constantly bounce off the thermal throttle point of the CPU - it doesn't harm the CPU or system in any way, but allows for far better responsiveness and performance. We as enthusiast DIYers tend not to accept this, nor the performance loss inherent to it (with the commonly accepted wisdom being that if you're bouncing off the throttle point in a DIY system, replace your cooler and gain performance!). This is doubly true if we also want silence - another factor most OEMs don't care about.

So as SFF enthusiasts - which is a niche and extreme approach to DIY PC builds, after all - we need to accept that a) we might not get peak performance, b) we'll need to tune our systems more, and c) there are no official specs denoting the information we need. That's life. And it's not going to change. Luckily SFF case designs are progressing at a rapid pace, allowing for much, much better cooling in smaller volumes than ever before (the number of <15l cases fitting 240 or even 280mm radiators today compared to 3-4-5 years ago speaks to this), so the tuning and compromises are shrinking, or at worst keeping pace with how power draw is deviating from the expectation created by TDP classes. But we need to accept that our use case is non-standard, and account for that in our builds. (On a related note, are you on the smallformfactor.net forums?)

Btw, did you test your 3600 when you had it at its 45W cTDP/Eco Mode setting? That might have been a better fit for your case/cooling.
 
Personally, at least, I think that with SFF you really should put the effort into tuning things properly around your cooling (and other constraints)... Basically all retail motherboards will let you fiddle with the power limits (even if some of them are wholly inadequate for sustained boost under high-current workloads). On Haswell, for example, some people (myself included) used to set power limits to emulate an AVX offset.
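For context on what those limits actually do: Intel's turbo budget behaves roughly like a running average power limit - the chip may draw up to PL2 until an exponentially weighted average of recent package power reaches PL1, with tau setting the averaging window. Here's a toy sketch of that idea (all wattages hypothetical, and this is a simplification of the real algorithm, not Intel's exact implementation):

```python
def seconds_at_pl2(pl1, pl2, tau, dt=1.0, horizon=600.0):
    """Toy model of a running-average power limit: the package may draw
    PL2 watts until an exponentially weighted moving average of package
    power reaches PL1; tau sets the averaging window in seconds.
    Illustrative only - not Intel's exact algorithm."""
    ewma = 0.0     # average package power, starting from idle
    boosted = 0.0  # seconds spent at PL2
    t = 0.0
    while t < horizon:
        power = pl2 if ewma < pl1 else pl1   # budget left -> run at PL2
        ewma += (dt / tau) * (power - ewma)  # update the moving average
        if power == pl2:
            boosted += dt
        t += dt
    return boosted

# Hypothetical 65 W-class part with a 128 W PL2 (made-up numbers):
short_tau = seconds_at_pl2(pl1=65, pl2=128, tau=28)
long_tau = seconds_at_pl2(pl1=65, pl2=128, tau=56)
print(short_tau, long_tau)  # a longer tau sustains PL2 for longer
```

Setting PL2 equal to PL1, as discussed above, simply collapses that boost window to zero.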

It would definitely be interesting to see some testing of desktop CPUs at different power limits; I know some laptop reviewers already do this.
I just found one from GN, though it's with an 11700K at a 125 W power limit vs. unlocked. It would be nice to see the same with the non-K at 65 W.


AMD's TDP formula was explicitly created as a reverse-engineering of Intel's formula so that cooler manufacturers and SIs could treat them equivalently in their design processes. Of course this has been undermined by Intel being on a path of making TDP an ever less meaningful metric, but still - that's not AMD's fault.

This goes to both you and @GorbazTheDragon: you're approaching this from the wrong angle, which either stems from a fundamental misunderstanding or from wanting something that doesn't exist. The issue: TDP is not a consumer-facing specification denoting power draw. It never has been. Historically it has been roughly equivalent to this, but this is more coincidental than intentional. TDP is a specification for SIs and cooler OEMs to design their cooling solutions and system designs around. If TDP was meant to denote power draw directly, it would for example be a guide for motherboard makers in designing their VRM setups - but it's not, and there are specific specifications (dealing with the relevant metrics, volts and amps) for that. You can disagree with how TDPs are used in marketing with regards to this - I definitely do! - but you can't just transfer it into being something it isn't.

TDPs serve as broad categories for SIs and cooler makers to design for, and are explicitly directed at large OEMs. This is where the 2/3-tier (historically ~95/65/35W) TDP systems come from - they're guidelines for designing cooling systems for three tiers of CPUs. There has always been variance within these tiers - just as there is with laptops, where a 15W i7 always needs more power than a 15W i5. Treating TDP as an absolute number for power draw has always been wrong. It's just happened to be roughly accurate at times. But it's also typically been far too high - like the R3 3100 you mention, or the i5-2400 in my modded Optiplex 990 SFF (nominally 95W, yet I've never gotten it past ~78W).

This is where the current issues stem from - TDP used to be reasonably close to normal power draws, with non-high-end CPUs often coming in noticeably below that number. As technology has progressed, competition has tightened, and Intel has been stuck on 14nm(+++++++++++++) yet has needed to increase core counts, Turbo - which has always been explicitly temporary, variable and potentially above TDP in power draw - has become more important, and has started pushing the silicon closer to its limits. Turbo clocks have diverged much further from base clocks than ever before (that aforementioned i5-2400 has a 300MHz Turbo on top of its 3.1GHz base clock), while the definition of TDP has stayed the same, and the categories have also stayed the same - largely because Intel can't change them without upending their OEM partners (if they changed the 65W class TDP to something more realistic like 105W, every OEM out there would have to completely redesign their SFF business systems to maintain base performance).

(Of course, we also need to take into account that stock (including stock boosting behaviour) power draws for CPUs are much higher today than 5-10 years ago. An i7-7700K stuck pretty tightly to its 91W TDP in terms of idle-load delta power draw, and only scaled to ~120W when OC'd. These days AMD's 105W CPUs boost to 138/144W, and Intel's 125W CPUs boost to 170-250W.)
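Putting rough numbers on that parenthetical (the wattages are the observed figures quoted above, not official specs; the 125W-class entry uses the top of the quoted 170-250W range):

```python
# Observed peak draw vs. rated TDP, using the rough figures quoted above.
chips = {
    "i7-7700K, OC":          (91, 120),   # (TDP watts, observed peak watts)
    "AMD 105W part, stock":  (105, 144),
    "Intel 125W part, PL2":  (125, 250),
}
for name, (tdp, peak) in chips.items():
    print(f"{name}: {peak / tdp:.2f}x rated TDP")
```

The ratio has roughly gone from ~1.3x to 2x in a few generations, which is the divergence being described.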

The reason for these issues is that Intel is using an OEM-facing design class denomination in consumer products without changing it or otherwise informing users what it means. This of course leads to a lot of confusion. But it also makes no sense for them to change those classes in the OEM world - which is easily 10x the size of CPU retail. A more sensible solution would be a consumer-facing "power class" or some such to denominate something closer to power draw. But that would ultimately look like they're suddenly saying their CPUs use far more power, which means that such a move would never be sanctioned by corporate and PR.

Of course this is only tangentially related to the issue at hand here - it's one root cause of it, but indirectly. The gap between base clocks (and power at those clocks) and turbo clocks (and the power at those clocks) is now large enough that due to Intel not enforcing their PL1, PL2 and tau specs with motherboard manufacturers, we now have a situation where the same CPU can perform very, very differently depending on the motherboard you put it in, which is not how things are expected to work. Intel could easily do this - but it would also make their CPUs look worse in reviews, so again, corporate and PR would never accept that. So instead, we get this quasi-sanctioned motherboard-dependent not-quite-auto-OC situation where the ultimate performance of a system is far more variable than ever before. Which of course sucks for end users and DIYers. But Intel (and AMD, though potentially a tad less) ultimately doesn't care about us - they care far more about the OEMs and laptop makers that represent the majority of their sales.
The only thing I don't understand is... well, let's take three 65 W TDP CPUs that I've had as examples:
  • The Core i7-7700 (non-K): At stock settings, the crappy box cooler managed to keep it from thermal throttling, though it was so loud that I swapped it for a 120 mm AIO (I had a Cooler Master Elite 130 case back then), and never had an issue since. It consumed roughly 60-65 W, and really, the only reason I had to stop using the box cooler was the unbearable noise at high RPMs.
  • The Ryzen 3 3100: At stock settings, the box cooler (Wraith Stealth) is more than enough to keep it cool. 70 °C max with low RPM in a case with limited airflow (1x 8 cm fan on the bottom as intake, and 1x 8 cm on top as exhaust). Package power is at 50 W under full load.
  • The Ryzen 5 3600: At stock settings, the box cooler (same Wraith Stealth) failed to keep it within acceptable temps even at high RPM. Even the be quiet! Shadow Rock LP couldn't keep it below 80 °C on low RPM settings (probably because of the limited airflow in the case). Package power is just short of 90 W.
If TDP has more to do with heat and cooling specifications for OEMs (as stated by both Intel and AMD), then how can these three totally different (from a thermal perspective) CPUs fall into the same category? :confused:
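For what it's worth, here's how those three chips stack up against their shared 65 W label, using the rough package-power figures I quoted above:

```python
TDP_LABEL = 65  # watts - the label shared by all three SKUs

# Rough full-load package power, as quoted above (my own observations).
measured = {
    "Core i7-7700": 63,  # "roughly 60-65 W"
    "Ryzen 3 3100": 50,
    "Ryzen 5 3600": 88,  # "just short of 90 W"
}
for cpu, watts in measured.items():
    print(f"{cpu}: {watts} W = {watts / TDP_LABEL:.0%} of the 65 W label")
```

One chip sits under the label, one right at it, and one well above it - all sold as "65 W".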

That would be great! I completely agree that reviews should cover this. At least TPU tests at both the official Intel spec and with unlocked power limits. But depending on just how SFF you go, there'll always be tuning (and the related stability testing) needed, which would drastically increase the reviewers' workload. And of course binning/silicon lottery outcomes dramatically affect this. So it's not very likely to happen.
That's true, and I really appreciate it, though again, I could only find the 11700KF, but not the non-K. It seems like the higher core count non-K SKUs are generally forgotten by reviewers for some reason.

But giving up boost isn't happening. CPU boosting represents massive performance gains in everyday tasks such as web browsing and office work - even in very thermally limited systems. Which is why most OEMs let their CPUs constantly bounce off the thermal throttle point of the CPU - it doesn't harm the CPU or system in any way, but allows for far better responsiveness and performance. We as enthusiast DIYers tend not to accept this, nor the performance loss inherent to it (with the commonly accepted wisdom being that if you're bouncing off the throttle point in a DIY system, replace your cooler and gain performance!). This is doubly true if we also want silence - another factor most OEMs don't care about.

So as SFF enthusiasts - which is a niche and extreme approach to DIY PC builds, after all - we need to accept that a) we might not get peak performance, b) we'll need to tune our systems more, and c) there are no official specs denoting the information we need. That's life. And it's not going to change. Luckily SFF case designs are progressing at a rapid pace, allowing for much, much better cooling in smaller volumes than ever before (the number of <15l cases fitting 240 or even 280mm radiators today compared to 3-4-5 years ago speaks to this), so the tuning and compromises are shrinking, or at worst keeping pace with how power draw is deviating from the expectation created by TDP classes. But we need to accept that our use case is non-standard, and account for that in our builds. (On a related note, are you on the smallformfactor.net forums?)

Btw, did you test your 3600 when you had it at its 45W cTDP/Eco Mode setting? That might have been a better fit for your case/cooling.
Very true again. The bad thing about it is that there's no way of knowing how a CPU performs with the tweaking/settings we need before buying one. Right now, I'm torn between building an Intel system with the Core i7-11700 non-K, and waiting for the Ryzen 7 5700GE to come to the DIY market.

smallformfactor.net? I didn't know it existed, thanks for the info. I'll definitely check it out. :)

I also didn't know the 3600 had cTDP! :eek: Where is the setting? I remember working on a laptop with an Intel CPU with cTDP - the setting was in the Windows power plan settings - but there was nothing like it with the 3600.
 

Mussels

Freshwater Moderator
Staff member
Joined
Oct 6, 2004
Messages
58,413 (8.17/day)
Location
Oystralia
System Name Rainbow Sparkles (Power efficient, <350W gaming load)
Processor Ryzen R7 5800x3D (Undervolted, 4.45GHz all core)
Motherboard Asus x570-F (BIOS Modded)
Cooling Alphacool Apex UV - Alphacool Eisblock XPX Aurora + EK Quantum ARGB 3090 w/ active backplate
Memory 2x32GB DDR4 3600 Corsair Vengeance RGB @3866 C18-22-22-22-42 TRFC704 (1.4V Hynix MJR - SoC 1.15V)
Video Card(s) Galax RTX 3090 SG 24GB: Underclocked to 1700Mhz 0.750v (375W down to 250W))
Storage 2TB WD SN850 NVME + 1TB Sasmsung 970 Pro NVME + 1TB Intel 6000P NVME USB 3.2
Display(s) Phillips 32 32M1N5800A (4k144), LG 32" (4K60) | Gigabyte G32QC (2k165) | Phillips 328m6fjrmb (2K144)
Case Fractal Design R6
Audio Device(s) Logitech G560 | Corsair Void pro RGB |Blue Yeti mic
Power Supply Fractal Ion+ 2 860W (Platinum) (This thing is God-tier. Silent and TINY)
Mouse Logitech G Pro wireless + Steelseries Prisma XL
Keyboard Razer Huntsman TE ( Sexy white keycaps)
VR HMD Oculus Rift S + Quest 2
Software Windows 11 pro x64 (Yes, it's genuinely a good OS) OpenRGB - ditch the branded bloatware!
Benchmark Scores Nyooom.
how can these three totally different (from a thermal perspective) CPUs fall into the same category? :confused:
Because they make their own metrics, and their own testing - so they can throw them out however they want

they come up with a category (15W/45W/65W/105W) for board makers and OEMs to tune cooling and power for, and then slap products into those existing categories later

3600 was too much for the wraith stealth, but then if they re-labelled it as a 95W chip they'd have to throw in a wraith prism, OEMs would need to include better coolers to meet their specs, and blah blah blah... intel does the same shit (only worse, with PL1/PL2)
 
Joined
May 2, 2017
Messages
7,762 (3.03/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
Because they make their own metrics, and their own testing - so they can throw them out however they want

they come up with a category (15W/45W/65W/105W) for board makers and OEMs to tune cooling and power for, and then slap products into those existing categories later

3600 was too much for the wraith stealth, but then if they re-labelled it as a 95W chip they'd have to throw in a wraith prism, OEMs would need to include better coolers to meet their specs, and blah blah blah... intel does the same shit (only worse, with PL1/PL2)
More or less, yes. Though that's the glass-half-empty view. The glass-half-full view is that TDP is still only promising base clock performance, with anything above that being temporary and/or optional. This is of course not what's advertised at retail, but with retail chips you also need to supply your own cooler (for most CPUs), and even with a crappy stock cooler you'll see the boost clocks in responsiveness-driving bursts. Is this honest advertising? Both yes and no. It's mainly overcomplicated, and that overcomplication is only the fault of the CPU makers. The biggest issue is that the problem is getting worse, proliferating down the product stack (and thus reaching a far wider audience), while CPU makers are doing nothing to alleviate it.
The only thing I don't understand is... well, let's take three 65 W TDP CPUs that I've had as examples:
  • The Core i7-7700 (non-K): At stock settings, the crappy box cooler managed to keep it from thermal throttling, though it was so loud that I swapped it for a 120 mm AIO (I had a Cooler Master Elite 130 case back then), and never had an issue since. It consumed roughly 60-65 W, and really, the only reason I had to stop using the box cooler was the unbearable noise at high RPMs.
  • The Ryzen 3 3100: At stock settings, the box cooler (Wraith Stealth) is more than enough to keep it cool. 70 °C max with low RPM in a case with limited airflow (1x 8 cm fan on the bottom as intake, and 1x 8 cm on top as exhaust). Package power is at 50 W under full load.
  • The Ryzen 5 3600: At stock settings, the box cooler (same Wraith Stealth) failed to keep it within acceptable temps even at high RPM. Even the be quiet! Shadow Rock LP couldn't keep it below 80 °C on low RPM settings (probably because of the limited airflow in the case). Package power is just short of 90 W.
If TDP has more to do with heat and cooling specifications for OEMs (as stated by both Intel and AMD), then how can these three totally different (from a thermal perspective) CPUs fall into the same category?
The answer here is pretty simple: as mentioned before, OEMs don't really care whatsoever about thermals as long as they stay within spec. 80C is within spec. 95C and not boosting as high is within spec, as long as it's not going below base clock, and device skin temperatures aren't excessive. You see this in pretty much every laptop out there - put a load on the CPU, and it stays bouncing off the throttle point, even when you know the fans could spin up higher and reduce thermals notably. Intel of course has a history of supplying stock coolers that can't actually keep up with the TDP of the chip they're paired with, but that's unrelated to the TDP of the chip and simply due to them using shitty, under-specced coolers.

OEMs rely on slightly overbuilding their coolers so that they can soak up short-term boost heat outputs, but also tune their systems accordingly (hence the massive variability in PL2 and tau in laptops especially, as those are the most restricted in terms of cooling). Enthusiast DIYers are often massively overbuilding their coolers - but also have the rather utopian expectation of being able to run near or at peak boost constantly without a loud cooler or high temperatures. It's pretty obvious that depending on the setup, one or more of those factors will have to give.

But the key here is this: it's not throttling unless it's below base clock. If it's at or above base clock, it's just not boosting (as high). That's the specification, that's what's actually promised, although the marketing (with the ever-present but always quite invisible "up to") does a lot of work to make it seem otherwise. Marketing for current-gen CPUs is designed around seeming to promise a lot, while actually promising only base clocks - pure CYA, "you can't sue me for this", "we didn't actually mislead you, you just didn't pay attention to the right things (the ones we tried pretty hard to hide from you)".

Boost is after all opportunistic and contextual. So all the CPUs you mention are no doubt capable of maintaining their base clock within the TDP-level power draw without overheating as long as they are paired with a built-to-spec cooler. Some (most?) of them might even boost above base within those confines - like my i5-2400 that sticks at its boost clock 100% of the time and never comes close to 95W reported power draw. This used to be the norm back when Intel didn't have any real competition and could comfortably leave plenty of unused headroom in their silicon (i.e. up to Skylake or Kaby Lake). But with boost algorithms becoming ever more aggressive, sophisticated, and opportunistic, and CPU makers working to push their silicon as far as is safely possible to gain a competitive advantage in the ever-important short-term, bursty loads that largely determine the feeling of system responsiveness, the delta between base (actually promised) and boost (seemingly promised) speeds is growing dramatically, especially as core counts rise.

To alleviate this, the only feasible solution is to introduce some sort of two-tier power denomination for each chip - i.e. a clearly marked base power/boost power denomination. But any way you do that, it would arguably be just as big of a mess as the current mess. Does boost power mean 1, 2, 3, 4, n core boost? Is boost power a constant number? Can it be exceeded for short term loads? Must all hardware be able to maintain this number? Especially the latter question has huge ramifications, as (assuming boost power is constant and peak all-core) that would essentially require every socket LGA 1200 motherboard to be able to feed ~290W to an 11900K indefinitely, for example. That would drive up B560 and H570 board prices through the roof. And if boost power isn't a platform requirement, what's really the point of the metric? Would motherboards (and OEM systems) need to start advertising which boost power level they are built for? That would be a complete mess, for sure. "Hi, I want a B560 motherboard." "Okay, do you want a 65W, 95W, 125W, 150W, 175W, 225W or 290W B560 motherboard?" Yeah, that's not going to work. And if all chips still work in all motherboards, just at different sustained performance levels, then nothing has actually changed from today save some modicum of transparency that will be entirely overshadowed by the sheer confusion it would bring with it.

Of course, MCE and similar "features" in DIY motherboards have established this problem long ago. But it used to be limited to high end SKUs, the i7s and i9s of the world. There was some commonly accepted wisdom that you'd more than likely sacrifice some performance if you paired a top-end CPU with a cheap motherboard. But now that thinking also applies to a relatively low-end (though still midrange in terms of pricing), non-K i5. And that's where this goes from a niche problem mostly limited to enthusiasts who presumably know of it and are willing and able to deal with it (or those with more money than sense), to a mass-market problem.
 
More or less, yes. Though that's the glass-half-empty view. The glass-half-full view is that TDP is still only promising base clock performance, with anything above that being temporary and/or optional. This is of course not what's advertised at retail, but with retail chips you also need to supply your own cooler (for most CPUs), and even with a crappy stock cooler you'll see the boost clocks in responsiveness-driving bursts. Is this honest advertising? Both yes and no. It's mainly overcomplicated, and that overcomplication is only the fault of the CPU makers. The biggest issue is that the problem is getting worse, proliferating down the product stack (and thus reaching a far wider audience), while CPU makers are doing nothing to alleviate it.

The answer here is pretty simple: as mentioned before, OEMs don't really care whatsoever about thermals as long as they stay within spec. 80C is within spec. 95C and not boosting as high is within spec, as long as it's not going below base clock, and device skin temperatures aren't excessive. You see this in pretty much every laptop out there - put a load on the CPU, and it stays bouncing off the throttle point, even when you know the fans could spin up higher and reduce thermals notably. Intel of course has a history of supplying stock coolers that can't actually keep up with the TDP of the chip they're paired with, but that's unrelated to the TDP of the chip and simply due to them using shitty, under-specced coolers.

OEMs rely on slightly overbuilding their coolers so that they can soak up short-term boost heat outputs, but also tune their systems accordingly (hence the massive variability in PL2 and tau in laptops especially, as those are the most restricted in terms of cooling). Enthusiast DIYers are often massively overbuilding their coolers - but also have the rather utopian expectation of being able to run near or at peak boost constantly without a loud cooler or high temperatures. It's pretty obvious that depending on the setup, one or more of those factors will have to give.

But the key here is this: it's not throttling unless it's below base clock. If it's at or above base clock, it's just not boosting (as high). That's the specification, that's what's actually promised, although the marketing (with the ever-present but always quite invisible "up to") does a lot of work to make it seem otherwise. Marketing for current-gen CPUs is designed around seeming to promise a lot, while actually promising only base clocks - pure CYA, "you can't sue me for this", "we didn't actually mislead you, you just didn't pay attention to the right things (the ones we tried pretty hard to hide from you)".

Boost is after all opportunistic and contextual. So all the CPUs you mention are no doubt capable of maintaining their base clock within the TDP-level power draw without overheating as long as they are paired with a built-to-spec cooler. Some (most?) of them might even boost above base within those confines - like my i5-2400 that sticks at its boost clock 100% of the time and never comes close to 95W reported power draw. This used to be the norm back when Intel didn't have any real competition and could comfortably leave plenty of unused headroom in their silicon (i.e. up to Skylake or Kaby Lake). But with boost algorithms becoming ever more aggressive, sophisticated, and opportunistic, and CPU makers working to push their silicon as far as is safely possible to gain a competitive advantage in the ever-important short-term, bursty loads that largely determine the feeling of system responsiveness, the delta between base (actually promised) and boost (seemingly promised) speeds is growing dramatically, especially as core counts rise.

To alleviate this, the only feasible solution is to introduce some sort of two-tier power denomination for each chip - i.e. a clearly marked base power/boost power denomination. But any way you do that, it would arguably be just as big of a mess as the current mess. Does boost power mean 1, 2, 3, 4, n core boost? Is boost power a constant number? Can it be exceeded for short term loads? Must all hardware be able to maintain this number? Especially the latter question has huge ramifications, as (assuming boost power is constant and peak all-core) that would essentially require every socket LGA 1200 motherboard to be able to feed ~290W to an 11900K indefinitely, for example. That would drive up B560 and H570 board prices through the roof. And if boost power isn't a platform requirement, what's really the point of the metric? Would motherboards (and OEM systems) need to start advertising which boost power level they are built for? That would be a complete mess, for sure. "Hi, I want a B560 motherboard." "Okay, do you want a 65W, 95W, 125W, 150W, 175W, 225W or 290W B560 motherboard?" Yeah, that's not going to work. And if all chips still work in all motherboards, just at different sustained performance levels, then nothing has actually changed from today save some modicum of transparency that will be entirely overshadowed by the sheer confusion it would bring with it.

Of course, MCE and similar "features" in DIY motherboards have established this problem long ago. But it used to be limited to high end SKUs, the i7s and i9s of the world. There was some commonly accepted wisdom that you'd more than likely sacrifice some performance if you paired a top-end CPU with a cheap motherboard. But now that thinking also applies to a relatively low-end (though still midrange in terms of pricing), non-K i5. And that's where this goes from a niche problem mostly limited to enthusiasts who presumably know of it and are willing and able to deal with it (or those with more money than sense), to a mass-market problem.
Let's be honest, isn't this something that GPUs (especially Nvidia's) have been doing for the last 6-8 years? You've got an advertised base clock that you never see in real life, a boost clock which you probably also don't see if your card's cooler is any decent, and then the card boosts up to thermal, voltage, power, usage, etc. limits, leaving absolutely no headroom for overclocking. The difference between base and max boost is huge, and anywhere in between is within spec. The only difference is that Nvidia strictly keeps to TDP limits, something that CPUs could do if CPU TDP calculations weren't overcomplicated. On the other hand, you only get GPU chip power draw on AMD cards, which is just as shady a practice as their CPU TDP formula is.
 
Joined
Mar 3, 2020
Messages
111 (0.07/day)
Location
Australia
System Name wasted talent
Processor i5-11400F
Motherboard Gigabyte B560M Aorussy Pro
Cooling Silverstone AR12
Memory Patriot Viper Steel 2X8 4400 @ 3600 C14,14,12,28
Video Card(s) Sapphire RX 6700 Pulse, Galax 1650 Super EX
Storage Kingston A2000 500GB
Display(s) Gigabyte M27Q
Case open mATX: zwzdiy.cc/M/Product/209574419.html
Audio Device(s) HiFiMan HE400SE
Power Supply Strix Gold 650W
Mouse Skoll Mini, G502 LightSpeed
Keyboard Akko 3084S
Software 1809 LTSC
Benchmark Scores 3968/540 CB R20 MT/ST
Let's be honest, isn't this something that GPUs (especially Nvidia's) have been doing for the last 6-8 years? You've got an advertised base clock that you never see in real life, a boost clock which you probably also don't see if your card's cooler is any decent, and then the card boosts up to thermal, voltage, power, usage, etc. limits, leaving absolutely no headroom for overclocking. The difference between base and max boost is huge, and anywhere in between is within spec. The only difference is that Nvidia strictly keeps to TDP limits, something that CPUs could do if CPU TDP calculations weren't overcomplicated. On the other hand, you only get GPU chip power draw on AMD cards, which is just as shady a practice as their CPU TDP formula is.
I think we should just ignore GPUs... Intel and AMD CPUs at least don't trip PSU protection when transients hit. In gaming the CPU is fine because it's not actually at 100% load, while the GPU goes crazy as frame output fluctuates.
Also, I don't think we'll ever get a decent, working power rating. It makes more money to keep them confusing.
 
Joined
Sep 3, 2019
Messages
2,993 (1.75/day)
Location
Thessaloniki, Greece
System Name PC on since Aug 2019, 1st CPU R5 3600 + ASUS ROG RX580 8GB >> MSI Gaming X RX5700XT (Jan 2020)
Processor Ryzen 9 5900X (July 2022), 150W PPT limit, 79C temp limit, CO -9~14
Motherboard Gigabyte X570 Aorus Pro (Rev1.0), BIOS F37h, AGESA V2 1.2.0.B
Cooling Arctic Liquid Freezer II 420mm Rev7 with off center mount for Ryzen, TIM: Kryonaut
Memory 2x16GB G.Skill Trident Z Neo GTZN (July 2022) 3600MHz 1.42V CL16-16-16-16-32-48 1T, tRFC:288, B-die
Video Card(s) Sapphire Nitro+ RX 7900XTX (Dec 2023) 314~465W (390W current) PowerLimit, 1060mV, Adrenalin v24.4.1
Storage Samsung NVMe: 980Pro 1TB(OS 2022), 970Pro 512GB(2019) / SATA-III: 850Pro 1TB(2015) 860Evo 1TB(2020)
Display(s) Dell Alienware AW3423DW 34" QD-OLED curved (1800R), 3440x1440 144Hz (max 175Hz) HDR1000, VRR on
Case None... naked on desk
Audio Device(s) Astro A50 headset
Power Supply Corsair HX750i, 80+ Platinum, 93% (250~700W), modular, single/dual rail (switch)
Mouse Logitech MX Master (Gen1)
Keyboard Logitech G15 (Gen2) w/ LCDSirReal applet
Software Windows 11 Home 64bit (v23H2, OSB 22631.3155)
Because they make their own metrics, and their own testing - so they can throw them out however they want

they come up with a category (15W/45W/65W/105W) for board makers and OEMs to tune cooling and power for, and then slap products into those existing categories later

The 3600 was too much for the Wraith Stealth, but if they re-labelled it as a 95W chip, they'd have to throw in a Wraith Prism, OEMs would need to include better coolers to meet the spec, and blah blah blah... Intel does the same shit (only worse, with PL1/PL2)
Actually, the 3600 can't be labeled as 95W, as it only draws ~88W max at full stock limits.
AMD's TDP labeling is about cooling requirements (heat in watts towards the cooler) during that max (88W) power consumption.
95W is the heat in watts towards the cooler from a CPU with a 125W max power draw.

It's the 3100 that should be labeled as a 40~45W TDP part, or even less, as its max power draw is around 60W according to info on the web.

One can ask: why this discrepancy between max power consumption and the "expected" heat to the cooler?
The answer is simple: not all the "produced" heat ends up in the cooler. Some of it escapes through the CPU substrate into the board. Coolers don't suck out all the produced heat; they just take the larger portion of it through conduction via the heat spreader.

It has nothing to do with Intel's labeling method, which is indeed related to PL1 only, whether that is base-clock consumption or not.

--------------------------------------------------------------

Let's take the example from GamersNexus for the 105W TDP (~142W PPT) 3900X, alter just the cooler's ambient temp, and see what happens.
Remember that AMD's testing methodology uses fixed temperatures, and they're trying to find the cooler capacity needed to achieve them.

TDP (Watts) = (tCase°C - tAmbient°C)/(HSF θca)

[attachment: GamersNexus TDP calculation breakdown]

61.8°C = tCase°C = CPU case temp (optimal temp for CPU lid)
42°C = tAmbient°C = cooler's ambient temp (the inside of a case or the room ambient if there is no case)
0.189 = Cooler's thermal resistance

Let's say the cooler's thermal resistance is a constant 0.189 (constant mass, surface, material, fan RPM and TIM applied).
We all know that if we improve (decrease) the ambient temp of the room/case, the CPU temp will also decrease (if its power consumption is constant), but it won't decrease by the same amount.

So, we decrease the cooler's ambient temp by 2°C from 42°C to 40°C, and the CPU tCase°C decreases by 0.6°C from 61.8°C to 61.2°C (sounds right to you?)

(tCase°C - tAmbient°C)/(HSF θca) = TDP (Watts)
(61.2 - 40) / (0.189) = 112.17

So, now with the new ambient/CPU temps, the heat removal through the cooler is no longer 105W but 112W, even though power consumption is the same (142W).

To take it even further, if we swap in a better cooler (most 240~280mm AIOs have a thermal resistance lower than 0.1), the heat removal will be much greater than 112W at the same power consumption (142W).
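The worked example above can be reproduced in a few lines (the numbers are the ones quoted from GamersNexus; the function name is just for illustration):

```python
# Sketch of AMD's TDP formula from the worked example above:
# TDP (W) = (tCase - tAmbient) / theta_ca, where theta_ca is the
# cooler's thermal resistance in degC per watt.

def tdp_watts(t_case_c: float, t_ambient_c: float, theta_ca: float) -> float:
    """Heat the cooler must remove to hold t_case_c at t_ambient_c."""
    return (t_case_c - t_ambient_c) / theta_ca

# Stock 3900X figures: 61.8 degC lid target, 42 degC cooler ambient,
# 0.189 degC/W cooler -> the advertised 105 W TDP.
print(round(tdp_watts(61.8, 42.0, 0.189), 1))   # 104.8

# Drop ambient by 2 degC; the lid only drops 0.6 degC, so the same
# cooler now moves ~112 W even though package power stays at 142 W.
print(round(tdp_watts(61.2, 40.0, 0.189), 2))   # 112.17
```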

---------------------------------------------------------

TDP (Thermal Design Power)
Definitely not a power consumption metric...
 

Mussels

Freshwater Moderator
Staff member
Joined
Oct 6, 2004
Messages
58,413 (8.17/day)
Location
Oystralia
System Name Rainbow Sparkles (Power efficient, <350W gaming load)
Processor Ryzen R7 5800x3D (Undervolted, 4.45GHz all core)
Motherboard Asus x570-F (BIOS Modded)
Cooling Alphacool Apex UV - Alphacool Eisblock XPX Aurora + EK Quantum ARGB 3090 w/ active backplate
Memory 2x32GB DDR4 3600 Corsair Vengeance RGB @3866 C18-22-22-22-42 TRFC704 (1.4V Hynix MJR - SoC 1.15V)
Video Card(s) Galax RTX 3090 SG 24GB: Underclocked to 1700Mhz 0.750v (375W down to 250W))
Storage 2TB WD SN850 NVME + 1TB Sasmsung 970 Pro NVME + 1TB Intel 6000P NVME USB 3.2
Display(s) Phillips 32 32M1N5800A (4k144), LG 32" (4K60) | Gigabyte G32QC (2k165) | Phillips 328m6fjrmb (2K144)
Case Fractal Design R6
Audio Device(s) Logitech G560 | Corsair Void pro RGB |Blue Yeti mic
Power Supply Fractal Ion+ 2 860W (Platinum) (This thing is God-tier. Silent and TINY)
Mouse Logitech G Pro wireless + Steelseries Prisma XL
Keyboard Razer Huntsman TE ( Sexy white keycaps)
VR HMD Oculus Rift S + Quest 2
Software Windows 11 pro x64 (Yes, it's genuinely a good OS) OpenRGB - ditch the branded bloatware!
Benchmark Scores Nyooom.
Actually 3600 can't be labeled as 95W as it only draws ~88W max on full stock limits.

i used simplified examples about how they're lumped into brackets, rather than each model having specific details
 
Joined
Jan 14, 2019
Messages
9,949 (5.13/day)
Location
Midlands, UK
System Name Nebulon-B Mk. 4
Processor AMD Ryzen 7 7800X3D
Motherboard MSi PRO B650M-A WiFi
Cooling be quiet! Dark Rock 4
Memory 2x 24 GB Corsair Vengeance EXPO DDR5-6000
Video Card(s) Sapphire Pulse Radeon RX 7800 XT
Storage 2 TB Corsair MP600 GS, 2 TB Corsair MP600 R2, 4 + 8 TB Seagate Barracuda 3.5"
Display(s) Dell S3422DWG, 7" Waveshare touchscreen
Case Kolink Citadel Mesh black
Power Supply Seasonic Prime GX-750
Mouse Logitech MX Master 2S
Keyboard Logitech G413 SE
Software Windows 10 Pro
Benchmark Scores Cinebench R23 single-core: 1,800, multi-core: 18,000. Superposition 1080p Extreme: 9,900.
Actually 3600 can't be labeled as 95W as it only draws ~88W max on full stock limits.
I understand why AMD made up their formula to refer to heat rather than power consumption, but it doesn't take a degree in engineering to know that subtracting two hugely variable and fairly independent values in °C and dividing the result by a constant that isn't really a constant will never give you a meaningful result in W. The watt is the unit of power, which is the amount of work done in a certain time (joules per second). I understand that AMD's version is guidance towards cooling capacity, but it's still BS.

Let's make up another formula:
168 h = average time I work per month,
320 h = average time it feels like I work per month,
2 = my average stress level on a scale of 5,
168 h * 320 h * 2 = £ 107,520 per year. It looks like I'm severely underpaid.
 
Last edited:
Joined
May 2, 2017
Messages
7,762 (3.03/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
Let's be honest, isn't this something that GPUs (especially nvidia) have been doing in the last 6-8 years? You've got an advertised base clock that you never see in real life, a boost clock which you probably also don't see if your card's cooler is any decent, and then the card boosts up to thermal, voltage, power, usage, etc. limits, leaving absolutely no headroom for overclocking. The difference between base and max. boost is huge, and anywhere in between is within spec. The only difference is that nvidia strictly keeps to TDP limits, something that CPUs could do if CPU TDP calculations weren't overcomplicated. On the other hand, you only have GPU chip power draw on AMD cards which is just as shady a practice as their CPU TDP formula is.
That is pretty much exactly what this is. And essentially it means that unlike a decade ago, when the 2700K had >50% OC potential just left in it, we now get a large portion of that performance included at stock - as long as the cooler and power delivery can keep up. I also agree that it would be simpler if CPU makers followed the GPU line on TDP, though the issue there is that GPU loads tend to be all or nothing, while CPU loads are hugely variable, so unlike GPUs you'd see a lot of cases where the TDP seems wildly overblown. Of course this would also be a nightmare for OEMs and SIs as they would need various cTDP-down modes for their PCs.

(Edit: one major difference though: GPUs don't power throttle, they crash. Given that they have purpose-built VRMs, they work from the assumption of always having plentiful power at hand, so when power is limited, they just crash outright. CPUs don't have that level of control and thus have to be a lot more flexible in responding to power delivery limitations.)

Btw, I forgot to respond to this:
As I also didn't know the 3600 had cTDP! :eek: Where is it? I remember working on a laptop with an Intel CPU with cTDP. The setting was in the Windows power plan settings, but there was nothing like it with the 3600.
In my ASRock BIOS it's labelled Eco Mode or something like that. It allows stepping 105W CPUs down to 65W, and 65W CPUs down to 45W. AFAIK all it does is set PPT/EDC/TDC/ETC/FTW/WTF (yes, I hate all these generic abbreviations) to preset lower levels, but it's really useful. AMD actually had this before Ryzen too - the A8-7600 that I just retired from my NAS had a 45W mode in its BIOS.
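To illustrate, Eco Mode's stepping appears to amount to swapping in the next TDP class's package power limit. A sketch using the community-reported PPT ≈ 1.35 × TDP presets for AM4 (these values and the helper name are assumptions for illustration, not an official AMD interface):

```python
# Rough sketch of what Eco Mode appears to do on AM4: apply the
# power-limit preset of the next TDP class down. The PPT values below
# follow the widely reported PPT ~= 1.35 x TDP relation; treat them
# as community-reported figures, not an AMD spec.

AM4_PRESETS = {
    # TDP class (W): package power tracking limit PPT (W)
    105: 142,
    65: 88,
    45: 61,
}

def eco_mode_ppt(tdp_w: int) -> int:
    """Return the PPT of the next TDP class down, as Eco Mode does."""
    lower_tiers = [t for t in sorted(AM4_PRESETS) if t < tdp_w]
    if not lower_tiers:
        raise ValueError("no lower TDP class to step down to")
    return AM4_PRESETS[lower_tiers[-1]]

print(eco_mode_ppt(105))  # 88  (105 W part limited to the 65 W class)
print(eco_mode_ppt(65))   # 61  (65 W part limited to the 45 W class)
```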

For a build like yours with limited cooling I would probably look at an APU instead though. At least my experiences with cooling a 4650G is that it's so damn easy. I've got one in my HTPC, which lives in a small Lazer3D HT5 case, uses a modified Arctic Accelero S1 GPU cooler as a CPU cooler (it's bent and mangled to fit and I had a mounting bracket laser cut from aluminium), and while there is a 140mm fan on it, that fan is off >95% of the time. Yes, there is a vent directly above the cooler, but it's still a 6c12t CPU running passively in a tiny case. It even keeps switching off the fan while gaming - I'd estimate the fan is off and on ~50% of the time while playing Rocket League using the iGPU (with the iGPU OC'd to 2100MHz, RAM at 3800MT/s, CPU stock). Oh, and the CPU routinely boosts 100-200MHz above spec in desktop workloads. In comparison, my 5800X (stock) runs pretty warm even under water.
I understand why AMD made up their formula to refer to heat rather than power consumption, but it doesn't take a degree in engineering to know that subtracting two hugely variable and fairly independent values in °C and dividing the result with a constant that is not really a constant will never give you a result in W. Watt is the unit of power which is the amount of work done in a certain time (Joules per second). I understand that AMD's version is a guidance towards cooling capacity, but it's still BS.
It's not BS, it's a useful formula for calculating the cooling needs of a PC. Pick your worst-case scenario ambient temp (most PCs tend to be rated for operation at 40-45°C ambient, though case ambient can easily be 10°C above room ambient), your desired maximum tCase, and you can then either plug in a known TDP and have the formula tell you the needed thermal resistance of your cooler to maintain that temperature, or you can plug in a known thermal resistance from a cooler you have and have the formula tell you which TDP tier it's suitable for. To reiterate what I started my minor wall of text above with:
TDP is not a consumer-facing specification denoting power draw. It never has been. Historically it has been roughly equivalent to this, but this is more coincidental than intentional. TDP is a specification for SIs and cooler OEMs to design their cooling solutions and system designs around.
From this perspective, the formula seems eminently useful, as do the categorizations/tiers for CPUs. You just can't expect those to be equivalent to power draw.

And in a DIY PC, not only does nobody ever do a calculation like this, but you're dealing with retail coolers that use various (and often dubious) other formulas for their TDP claims (and never publish thermal resistance numbers); airflow depends on the case, fans, and heaps of other choices which influence the cooling efficiency of the system; and last but not least, our expectations are much higher - we want peak performance all the time, while running cool and quiet. Even if cooler manufacturers did publish thermal resistance numbers (problematic in itself, as thermal resistance depends on the heat input as well as the cooler - a cooler that's great at 200W might be average at 65W, and a great 65W cooler might cause thermal throttling at 150W), those would almost by necessity be measured at fixed full fan speeds, which nobody generally wants to run their coolers at.

So there's no easy way out of this past the obvious: view TDP numbers as vague categorizations that indicate something about power draw but don't promise that power draw will never exceed the stated number, and rely on reviews of relevant components for more accurate data.
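The two uses described above are just the same formula rearranged. A minimal sketch, reusing the 61.8 °C / 42 °C figures from the AMD example as assumed worst-case targets (the function names are made up for illustration):

```python
# The formula rearranged both ways: given a TDP tier, find the cooler
# thermal resistance you need; given a cooler's theta, find which tier
# it can handle. Temperatures are example worst-case assumptions, not
# vendor numbers.

T_CASE = 61.8     # target lid temperature, degC
T_AMBIENT = 42.0  # worst-case cooler intake temperature, degC

def required_theta(tdp_w: float) -> float:
    """Max cooler thermal resistance (degC/W) that still holds T_CASE."""
    return (T_CASE - T_AMBIENT) / tdp_w

def supported_tdp(theta_ca: float) -> float:
    """Largest TDP (W) a cooler with this thermal resistance can handle."""
    return (T_CASE - T_AMBIENT) / theta_ca

print(round(required_theta(105), 3))    # 0.189 degC/W, the stock-class cooler
print(round(supported_tdp(0.1), 1))     # 198.0 W for a good 240 mm AIO
```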

The issue here, which started this thread, is that Intel's current over-aggressive boosting and non-enforcement of specifications has thrown another (semi-uncontrollable) variable into this mix. Where it used to be "pick your CPU, then pick a cooler that can handle it", it now is "pick your CPU, a cooler that can handle it, and a motherboard capable of sustaining its above-stock boost if you want review-like performance". And that's a big change.
 
Last edited:
Joined
May 8, 2021
Messages
1,978 (1.80/day)
Location
Lithuania
System Name Shizuka
Processor Intel Core i5 10400F
Motherboard Gigabyte B460M Aorus Pro
Cooling Scythe Choten
Memory 2x8GB G.Skill Aegis 2666 MHz
Video Card(s) PowerColor Red Dragon V2 RX 580 8GB ~100 watts in Wattman
Storage 512GB WD Blue + 256GB WD Green + 4TH Toshiba X300
Display(s) BenQ BL2420PT
Case Cooler Master Silencio S400
Audio Device(s) Topping D10 + AIWA NSX-V70
Power Supply Chieftec A90 550W (GDP-550C)
Mouse Steel Series Rival 100
Keyboard Hama SL 570
Software Windows 10 Enterprise
and last but not least, our expectations are much higher - we want peak performance all the time, while running cool and quiet.
But are we actually the ones to critique there? Tell me which YouTuber enforces the Intel spec, or better yet, tests at base clock speed? There aren't many, and since everything shown in benchmarks is results with maxed-out power limits, it creates the expectation that this is how those chips are supposed to perform and anything less is unacceptable. It's simply psychologically unacceptable to enforce the 'Intel spec' and be fine with the performance losses, even more so when seemingly nobody else does that. The situation is made worse by the fact that Intel chips need unlocked TDP to actually be competitive with Ryzen; otherwise they would be behind Ryzen in every bench. At least now Intel chips are either worse or the same. Not a great situation, but much better than tanking in all benches. AMD did the same with AM3+ and FX chips. They also lied a lot about TDP, to the point of motherboards frying and that unmitigated FX 9590 disaster, which only worked on several very expensive boards. Intel should just stop making sub-3 GHz non-K chips, be honest and raise TDPs. The i5 11400 would look much nicer with a 3.6-3.8 GHz base clock, 4.4 GHz boost and an 80-watt TDP. Intel should also publish their own official PL2 and all-core maximum boost clocks. I think they should just get rid of the PL2, PL3 and PL4 stuff; it adds complexity and has no real benefit.

Frankly, Intel has too much corporate bullshit and they need to kill it.
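The PL1/PL2/tau mechanism being complained about can be sketched as a toy model. This is a simplification of Intel's actual algorithm (which uses an exponentially weighted moving average of package power against PL1), and the PL2 = 154 W / tau = 28 s figures are commonly reported i5-11400(F) defaults, not official guarantees:

```python
# Toy model of Intel's PL1/PL2/tau turbo budget, showing why a "65 W"
# chip can legitimately draw far more for a while: the chip may draw up
# to PL2 as long as a moving average of package power (time constant
# tau) stays under PL1, after which sustained draw is clamped to PL1.

PL1, PL2, TAU, DT = 65.0, 154.0, 28.0, 1.0  # watts, watts, seconds, seconds

def simulate(load_w: float, seconds: int) -> list:
    """Per-second package power under a sustained heavy load."""
    avg = 0.0
    trace = []
    alpha = DT / TAU  # EWMA weight for one time step
    for _ in range(seconds):
        # Boost up to PL2 while the power average is under PL1,
        # otherwise clamp sustained draw to PL1.
        draw = min(load_w, PL2) if avg < PL1 else min(load_w, PL1)
        avg = (1 - alpha) * avg + alpha * draw
        trace.append(draw)
    return trace

trace = simulate(load_w=150.0, seconds=60)
print(trace[0], trace[-1])  # 150.0 65.0 - full boost first, pinned at PL1 later
```

Run against a sustained all-core load, the model boosts for roughly the first 16 seconds and then drops to the 65 W sustained limit, which is exactly the "lost boost" behaviour cheap boards enforce and high-end boards lift.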
 
Joined
Jan 14, 2019
Messages
9,949 (5.13/day)
Location
Midlands, UK
System Name Nebulon-B Mk. 4
Processor AMD Ryzen 7 7800X3D
Motherboard MSi PRO B650M-A WiFi
Cooling be quiet! Dark Rock 4
Memory 2x 24 GB Corsair Vengeance EXPO DDR5-6000
Video Card(s) Sapphire Pulse Radeon RX 7800 XT
Storage 2 TB Corsair MP600 GS, 2 TB Corsair MP600 R2, 4 + 8 TB Seagate Barracuda 3.5"
Display(s) Dell S3422DWG, 7" Waveshare touchscreen
Case Kolink Citadel Mesh black
Power Supply Seasonic Prime GX-750
Mouse Logitech MX Master 2S
Keyboard Logitech G413 SE
Software Windows 10 Pro
Benchmark Scores Cinebench R23 single-core: 1,800, multi-core: 18,000. Superposition 1080p Extreme: 9,900.
But are we actually the ones to critique there? Tell me which YouTuber enforces the Intel spec, or better yet, tests at base clock speed? There aren't many, and since everything shown in benchmarks is results with maxed-out power limits, it creates the expectation that this is how those chips are supposed to perform and anything less is unacceptable. It's simply psychologically unacceptable to enforce the 'Intel spec' and be fine with the performance losses, even more so when seemingly nobody else does that. The situation is made worse by the fact that Intel chips need unlocked TDP to actually be competitive with Ryzen; otherwise they would be behind Ryzen in every bench.
The issue here, which started this thread, is that Intel's current over-aggressive boosting and non-enforcement of specifications has thrown another (semi-uncontrollable) variable into this mix. Where it used to be "pick your CPU, then pick a cooler that can handle it", it now is "pick your CPU, a cooler that can handle it, and a motherboard capable of sustaining its above-stock boost if you want review-like performance". And that's a big change.
That's exactly my problem with reviews these days. TechPowerUp! is still doing fine, but if you look at any review on YouTube, they all enforce unlocked TDPs and expect CPUs to run like that in every motherboard. They only ever care about peak performance with beefy coolers, and that extra 1% that nobody can ever see in real life but puts CPU X just ahead of the competition. That's what caused the stir at Hardware Unboxed. Do you drive your car with the engine at redline all the time? I don't think so. Heck, even AMD CPUs don't maintain their max turbo clocks all the time. In fact, AMD never even publishes their boosting tables, only a vague max boost clock (that you probably never see in real life, just like with any Intel CPU), and nobody complains about it.

Edit: Speaking of AMD max boost clocks, the "Asus Optimizer" setting in my motherboard BIOS pushes the power limits out into space so that the CPU can maintain a higher boost clock even at a 100% workload. With this enabled, my 5950X chewed through around 180 watts in Cinebench and came close to throttling temps even with a 240 mm AIO. Sure, it maintained 4.2-4.4 GHz all-core instead of the normal 3.6-3.8, but still... why does nobody complain about this? Because it's an Asus feature, not AMD spec. Unlocked power limits by default should not be allowed.

Intel should just stop making sub 3 GHz non K chips, be honest and raise TDPs. i5 11400 would look much nicer with 3.6-3.8GHz base clock, 4.4GHz boost and 80 watt TDP. Also Intel should post their own official PL2 and all core maximum boost clock. I think that they should just get rid of PL2, PL3 and PL4 stuff. It adds complexity and has no real benefit.
I agree, but that would be bad marketing, wouldn't it?

That is pretty much exactly what this is. And essentially it means that unlike a decade ago, when the 2700K had >50% OC potential just left in it, we now get a large portion of that performance included at stock - as long as the cooler and power delivery can keep up. I also agree that it would be simpler if CPU makers followed the GPU line on TDP, though the issue there is that GPU loads tend to be all or nothing, while CPU loads are hugely variable, so unlike GPUs you'd see a lot of cases where the TDP seems wildly overblown. Of course this would also be a nightmare for OEMs and SIs as they would need various cTDP-down modes for their PCs.

(Edit: one major difference though: GPUs don't power throttle, they crash. Given that they have purpose-built VRMs, they work from the assumption of always having plentiful power at hand, so when power is limited, they just crash outright. CPUs don't have that level of control and thus have to be a lot more flexible in responding to power delivery limitations.)
They don't throttle, but they do adjust their boost bins. My 1650 runs at different clock speeds in different workloads - Superposition 720p or 1080p Medium lets it run at 1920-1950 MHz, it does around 1900 MHz in 1080p Ultra, and 1860 MHz in Cyberpunk 2077.

In my ASRock BIOS it's labelled Eco Mode or something like that. It allows for stepping 105W CPUs down to 65W, and 65W CPUs down to 45W. AFAIK all it does is set PPT/EDC/TDC/ETC/FTW/WTF(yes, I hate all these generic abbreviations) to preset lower levels, but it's really useful. AMD actually used to have this before Ryzen too - the a8-7600 that I just retired from my NAS had a 45W mode in BIOS.
I don't remember seeing a similar setting in my BIOS when I still had the 3600. It would have been nice to play with it. Too late, I guess. :ohwell:

For a build like yours with limited cooling I would probably look at an APU instead though. At least my experiences with cooling a 4650G is that it's so damn easy. I've got one in my HTPC, which lives in a small Lazer3D HT5 case, uses a modified Arctic Accelero S1 GPU cooler as a CPU cooler (it's bent and mangled to fit and I had a mounting bracket laser cut from aluminium), and while there is a 140mm fan on it, that fan is off >95% of the time. Yes, there is a vent directly above the cooler, but it's still a 6c12t CPU running passively in a tiny case. It even keeps switching off the fan while gaming - I'd estimate the fan is off and on ~50% of the time while playing Rocket League using the iGPU (with the iGPU OC'd to 2100MHz, RAM at 3800MT/s, CPU stock). Oh, and the CPU routinely boosts 100-200MHz above spec in desktop workloads. In comparison, my 5800X (stock) runs pretty warm even under water.
That would be a solid plan if there was an APU available. The Ryzen 4000-series APUs are expensive and very hard to find (they're also kind of a downgrade for gaming), and the 5000-series ones aren't out through DIY channels yet.

Not to worry, my impulse-bought Asus TUF B560M-Plus Wifi and Core i7-11700 have just arrived. Tests coming soon. :D

It's not BS, it's a useful formula for calculating the cooling needs of a PC. Pick your worst-case scenario ambient temp (most PCs tend to be rated for operation at 40-45°C ambient, though case ambient can easily be 10°C above room ambient), your desired maximum tCase, and you can then either plug in a known TDP and have the formula tell you the needed thermal resistance of your cooler to maintain that temperature, or you can plug in a known thermal resistance from a cooler you have and have the formula tell you which TDP tier it's suitable for. To reiterate what I started my minor wall of text above with:
If it wasn't BS, it wouldn't have tricked me into swapping my 65 W TDP processor for another 65 W part and expecting it to work just fine. Maybe it works for OEMs whose only goal is to make their systems 'just work', even if at the edge of throttling, but DIYers need to know what to expect and how to build their systems before buying.
 
Last edited:
Joined
May 8, 2021
Messages
1,978 (1.80/day)
Location
Lithuania
System Name Shizuka
Processor Intel Core i5 10400F
Motherboard Gigabyte B460M Aorus Pro
Cooling Scythe Choten
Memory 2x8GB G.Skill Aegis 2666 MHz
Video Card(s) PowerColor Red Dragon V2 RX 580 8GB ~100 watts in Wattman
Storage 512GB WD Blue + 256GB WD Green + 4TH Toshiba X300
Display(s) BenQ BL2420PT
Case Cooler Master Silencio S400
Audio Device(s) Topping D10 + AIWA NSX-V70
Power Supply Chieftec A90 550W (GDP-550C)
Mouse Steel Series Rival 100
Keyboard Hama SL 570
Software Windows 10 Enterprise
That's exactly my problem with reviews these days. TechPowerUp! is still doing fine, but if you look at any review on youtube, they all enforce unlocked TDPs and expect CPUs to run like that in every motherboard. They only ever care about peak performance with beefy coolers, and that extra 1% that nobody can ever see in real life, but puts X CPU just ahead of the competition.
I don't think it was only an extra 1%. In some games I really do notice the performance improvement from having PL values maxed out. To me, the "stock" values just feel like choking the CPU for no good reason instead of truly enjoying it. And yet, at the same time, doing stuff like that probably isn't good for motherboard longevity. BTW, that game is Wreckfest. It seemed that I got 55 fps instead of more at the lows, and it bothered me. I also seem to benefit from more performance in Genshin Impact. However, in "productivity" loads I couldn't care less about the performance loss; it's not a load where work output has to be seen in real time - you click and let the computer do its stuff.


That's what caused the stir at Hardware Unboxed. Do you drive your car with the engine rpm at redline all the time? I don't think so.
And yet power curves are important for daily driving and for spirited driving, but who actually talks about them? No one. And god forbid you didn't buy the souped-up version of some car and got the more basic version - then there's no way to get such data at all, unless you go to a dyno and measure it yourself. Car manufacturers are no better than Intel with their spec sheets.

Heck, even AMD CPUs don't maintain their max turbo clocks all the time. In fact, AMD never even publishes their boosting tables, only a vague max boost clock (that you probably never see in real life, just like with any Intel CPU) and nobody complains about it.
Actually, you do see the maximum boost clock quite often on Intel chips. I often see 4.3 GHz on my i5 10400F, and I often see the all-core maximum of 4 GHz at pretty much any load. I've heard that Ryzen chips simply don't have such tables: they keep increasing clock speed as long as cooling permits, up to the maximum turbo speed specified by AMD, and AMD does that in 25 MHz increments while Intel does it in 100 MHz increments. I don't remember many details, but the AMD and Intel boosting algorithms are substantially different.

Edit: Speaking of AMD max boost clocks, the "Asus Optimizer" setting in my motherboard BIOS pushes power limits out in space so that the CPU can maintain a higher boost clock even at a 100% workload. With this enabled, my 5950 chewed through around 180 Watts in Cinebench and came close to throttling temps even with a 240 mm AIO. Sure, it maintained 4.2-4.4 GHz all-core instead of the normal 3.6-3.8, but still... Why does nobody complain about this? Because it's an Asus feature, not AMD spec. Unlocked power limits by default should not be allowed.
Oh dear, those Ryzens are bad at dissipating heat. I remember a stock FX 6300 consuming over 200 watts in a stress test, despite being marketed as a 95-watt chip. It didn't have a wattage limiter, so turbo worked as long as there was thermal and VRM headroom (aka forever, in most cases). It was rather easy to cool and didn't really need anything more than a Hyper 103 cooler. That cooler was fine for a 4.4-4.6 GHz all-core overclock, and it had to keep temps under 62°C, because that was the thermal limit at first (later updated to 72°C). The Ryzen 5950X should, in theory, be much easier to cool than FX. However, if it's all that impossible to cool well, then obsessing over boost clocks is a waste of time. I personally think that PL and PPT values should be abandoned, as nobody really cares about them when cooling a CPU; instead there could be a temperature limiter, which would reduce boost speed at a certain set temperature. It would be much easier to set up than a vague TDP, which means almost nothing to the end user.


I agree, but that would be bad marketing, wouldn't it?
I doubt it. Intel has successfully sold many chips with higher TDPs, and people don't really care too much about TDP anyway. Many Intel i5 and i7 chips had TDPs in the 80s or 90s, and let's not forget the current K chips, which are specced at a 125 watt TDP. Bad marketing is letting OEMs mess with TDP too much and ending up with the current TDP bullshit. That's the last thing Intel needed after losing a lot of reputation. The TDP spec mostly matters to prebuilt computer OEMs, which want to engineer a cooling solution for exactly the stated wattage and not a bit better.

You can watch this video:

Bitching about lost boost starts at 10:30

This situation just isn't good. It feels like a lot of performance is being lost by sticking to a too-low TDP spec, or by designing a cooler for 65 watts. If an enthusiast buys an Intel chip today and invests in a better-than-stock cooler, they can easily gain a lot of performance. The question is whether that's gaining performance or just unchoking the chip from a stupid Intel spec.

In times when an i5 11400F has a base speed of 2.6 GHz and a maximum boost clock of 4.4 GHz, I would say that if you actually stick to the 65 watt TDP (turbo boost off, as Intel specifies TDP at base clock speed), you would be losing a bit less than half of the CPU's performance. In real-life loads you will still get closer to 3.4 GHz even at a 65 watt TDP with turbo on, but it only takes one heavy task to pin the CPU at base speed to fit into that tiny 65 watt power budget. BOINC might be a heavy enough load to never see more than 2.6 GHz on that chip, when at a higher power budget it could be running at 4 GHz on all cores. That's a lot of performance lost on chips which, apart from the stupid TDP spec, can perform much better, granted that you use an aftermarket tower cooler. Even a 92 mm tower cooler would likely be enough for an i5 with unlocked PL values.
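To put numbers on that "one heavy task" point: Intel's turbo budget is commonly described as a moving average of package power, where the chip may draw up to PL2 until the average reaches PL1, after which it gets clamped to PL1. A toy simulation of that behaviour (not Intel's actual firmware algorithm; the 65/154 W and tau = 28 s figures are the oft-quoted defaults for a 65 W part, assumed here):

```python
def simulate_power_limits(pl1, pl2, tau, load_w, dt=0.1, duration=120.0):
    """Toy model of PL1/PL2 turbo budgeting (illustrative, not Intel's
    real firmware algorithm).

    Package power may reach PL2 while an exponentially-weighted moving
    average of power stays below PL1; once the average hits PL1, power
    is clamped to PL1 for the rest of the sustained load.
    """
    avg = 0.0            # moving average of package power, starting from idle
    alpha = dt / tau     # smoothing factor per time step
    trace = []
    for _ in range(int(duration / dt)):
        allowed = pl2 if avg < pl1 else pl1
        power = min(load_w, allowed)   # a heavy all-core load pegs the limit
        avg += alpha * (power - avg)
        trace.append(power)
    return trace

# 65 W PL1, 154 W PL2, 28 s tau: full PL2 burst at first,
# then a permanent clamp to PL1 under a sustained 200 W-capable load
trace = simulate_power_limits(65, 154, 28, load_w=200)
```

This is why review benchmarks that finish inside the tau window (or run on boards with PL1 raised to match PL2) look so much better than a long BOINC-style load on a board that enforces the 65 W spec.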

And people seem to overlook another Intel CPU line: the T-series chips, which are rated at 35 watts. The i9 10900T was rated at 35 watts, and to achieve that it has a base speed of 1.9 GHz and a maximum boost clock of 4.6 GHz. In this case you won't ever see it running at base speed or at TDP; for Intel chips, the maximum all-core boost clock is essentially the new base clock. And those T chips were really bad at their job, as they hardly saved any power compared to the non-T versions, thanks to stupidly high PL2 values. Bullshit like that destroyed any value of a separate T SKU. What's the point of getting the T version when you can get the non-T version and then set the TDP to whatever you like? And for that matter, what's the point of getting a K SKU when you can just ramp up the PL values on a lower-end chip and it will be almost as good as the K version? A non-K chip wouldn't even lose its warranty from having its PL values modified, since Intel let motherboard OEMs go wild.
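On Linux you can actually see which PL values the board programmed, through the powercap sysfs interface (this sketch assumes the standard intel-rapl layout, where the "long_term" constraint is PL1 and "short_term" is PL2; it returns an empty dict on systems without RAPL). Writing a lower value back into constraint_0_power_limit_uw as root is exactly the "make your own T SKU" trick:

```python
from pathlib import Path

# Package-level RAPL domain on a typical single-socket Intel system
RAPL = Path("/sys/class/powercap/intel-rapl/intel-rapl:0")

def read_power_limits():
    """Return {constraint_name: watts} as programmed by the firmware/board,
    e.g. {'long_term': 65.0, 'short_term': 154.0} for PL1/PL2.

    Returns an empty dict if the intel-rapl powercap interface is absent
    (non-Intel hardware, VMs, or kernels without the driver).
    """
    limits = {}
    for c in (0, 1):
        name_f = RAPL / f"constraint_{c}_name"
        limit_f = RAPL / f"constraint_{c}_power_limit_uw"
        if name_f.exists() and limit_f.exists():
            watts = int(limit_f.read_text()) / 1e6  # sysfs reports microwatts
            limits[name_f.read_text().strip()] = watts
    return limits

if __name__ == "__main__":
    limits = read_power_limits()
    if limits:
        for name, watts in limits.items():
            print(f"{name}: {watts:.0f} W")
    else:
        print("No Intel RAPL powercap interface found")
```

Comparing what this reports against the box TDP is a quick way to catch a board that has quietly unlocked (or choked) your chip.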

It's such a shitstorm that I don't even know which is the least painful way to resolve it anymore. Enforce a strict TDP? Raise the TDP? Get rid of PL2? Keep the performance, or accept the losses? All this nonsense just makes me want to go back to the era of a single clock speed for everyone and be done with all this TDP bullshit. Let TDP be whatever is needed for the rated clock speed and be fine with the results; but computer OEMs wouldn't be having any of that.

Not to worry, my impulse-bought Asus TUF B560M-Plus Wifi and Core i7-11700 have just arrived. Tests coming soon. :D
Cheers, but be ready for an existential crisis over whether to unlock the PL values or not.
 
Joined
May 8, 2021
Messages
1,978 (1.80/day)
Location
Lithuania
System Name Shizuka
Processor Intel Core i5 10400F
Motherboard Gigabyte B460M Aorus Pro
Cooling Scythe Choten
Memory 2x8GB G.Skill Aegis 2666 MHz
Video Card(s) PowerColor Red Dragon V2 RX 580 8GB ~100 watts in Wattman
Storage 512GB WD Blue + 256GB WD Green + 4TB Toshiba X300
Display(s) BenQ BL2420PT
Case Cooler Master Silencio S400
Audio Device(s) Topping D10 + AIWA NSX-V70
Power Supply Chieftec A90 550W (GDP-550C)
Mouse Steel Series Rival 100
Keyboard Hama SL 570
Software Windows 10 Enterprise
Inspired by this thread:

Intel B560 chipset boards fail again, hard. This time they can't even sustain "Intel spec" settings for 125 watt chips (Intel suggests a PL1 of 125 watts and a PL2 of 251 watts for the 11900K, and a PL1 of 125 watts and a PL2 of 224 watts for the 11600K). The boards failed to sustain even around 100 watts, making them a complete no-go with any K chip and in fact quite toasty with non-K chips. The ASRock and Gigabyte low-end boards failed the VRM test: both overheated their VRMs and failed to sustain the base clock speed of the CPUs, which makes K-series chips RMA-able in such a case (if a chip can't sustain its base clock speed, Intel offers an RMA for it). The motherboards, however, "work as expected", and by that I mean no refund and no RMA if you are unhappy with them.


As always, pay attention to VRM quality when buying a motherboard, especially on the Rocket Lake platform.
 