Isn't it far better to fix a power level (say 200 watts for 16 cores) and try to maximize performance (through lithography and architecture) within that power limit?
I run my 10500H with a 100 millivolt undervolt offset, underclocked to 3200 MHz, and I'm pretty happy. I get decent performance at 35 watts. Fast enough and cool enough to actually be a laptop.
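For what it's worth, a setup like this can be sketched on Linux with the third-party intel-undervolt tool plus cpupower. This is illustrative only: the exact config syntax varies by version (check the tool's README), and undervolting is locked out by firmware on many newer platforms.

```
# Cap the max core clock at 3.2 GHz (cpupower, part of the kernel tools):
#   sudo cpupower frequency-set -u 3.2GHz

# /etc/intel-undervolt.conf excerpt (illustrative):
# apply a -100 mV offset to the core and cache voltage planes
undervolt 0 'CPU' -100
undervolt 2 'CPU Cache' -100
```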
Sure it is, but then you can't fool the idiots with big numbers.
Intel and AMD have both abandoned the spec-sheet TDP as a number you can rely on. All the marketing is built around it, while in reality peak draw far exceeds it and is then managed on different metrics: thermals and power consumed over a time window. They push the burden onto the quality of the cooling to hide the lack of improvement in chip efficiency at high frequencies. So you're left spending 3-4x the cash on cooling for an extra hundred or two MHz, which is why we undervolt.
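To make the "power consumed over a time window" part concrete: Intel exposes a sustained limit (PL1, roughly the advertised TDP), a short-term boost limit (PL2), and a time constant (Tau) over which average package power is tracked. A minimal sketch of that behavior under a sustained all-core load, with made-up numbers rather than any real chip's values:

```python
# Hedged sketch of Intel-style PL1/PL2/Tau power limiting.
# All numbers are illustrative, not any specific CPU's spec.

PL1 = 65.0   # sustained limit in watts -- the "TDP"-like marketing number
PL2 = 150.0  # short-term boost limit in watts
TAU = 28.0   # time window in seconds for the running power average

def simulate(duration_s, dt=0.1):
    """Return (time, power) samples for a sustained all-core load."""
    avg = 0.0  # running (exponentially weighted) average of package power
    samples = []
    t = 0.0
    while t < duration_s:
        # Boost to PL2 while the running average is under PL1,
        # otherwise clamp to the sustained limit.
        power = PL2 if avg < PL1 else PL1
        # EWMA update approximating the Tau averaging window
        avg += (dt / TAU) * (power - avg)
        samples.append((t, power))
        t += dt
    return samples

samples = simulate(120.0)
# Early in the run the chip draws PL2; once the running average
# catches up, it settles around PL1 for the rest of the load.
```

The point of the sketch: the big number on the box (PL2) only lasts until the Tau-windowed average catches up, and how long that takes in practice depends heavily on your cooling.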
Intel has been the worst offender, though, what with all their murky turbo limits these days. Turbo used to have some headroom; now you're lucky if you ever see it fully. It's the result of generations of baby steps to hide the lack of progress since Skylake.
I get that overclocking and tweaking is fun and many people enjoy it for many different reasons, so I certainly wouldn't want to take that away from anyone (I used to be one of those people long ago). But to be honest, the older I get, the more I just want the stuff to work to 98% of its potential out of the box with no fuss. I do like that Intel/AMD are attempting to maximize their silicon and not leaving much on the table.
Sure, but being a bit more up front about what's really happening would be good too. We've all been discovering the hard way how these CPUs actually behave.