Despite that, it's still a custom loop, and it's still likely on par with bigger air coolers.
Oh, sure. It's perfectly capable of dissipating the full ~400W heat load of my CPU+GPU at reasonable noise levels. It just happens to have a relatively poor CPU block, which means that steady-state CPU temperatures are probably 10+ degrees higher than with a better block. An apt illustration of this is that adding my 275W GPU into the mix doesn't affect CPU thermals much, so the limitation is clearly in the CPU block and not the rest of the system.
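If it helps, here's a rough back-of-the-envelope sketch of why the block dominates (Python; every figure in it is an assumption for illustration, not a measurement of my loop): the coolant temperature rides on the total loop heat through the radiator, while the block's ΔT scales only with CPU power and the block's own thermal resistance.

```python
# Rough steady-state estimate: CPU die temp ~ coolant temp + P_cpu * R_block.
# All figures below are assumptions for illustration, not measurements.

ambient      = 25.0   # °C room temperature (assumed)
r_radiator   = 0.02   # °C/W coolant rise over ambient per watt of total loop heat (assumed)
r_block_poor = 0.12   # °C/W for a poor CPU block (assumed)
r_block_good = 0.05   # °C/W for a good CPU block (assumed)

p_cpu, p_gpu = 125.0, 275.0  # W

def cpu_temp(r_block, total_heat):
    coolant = ambient + r_radiator * total_heat   # coolant rides on *total* loop heat
    return coolant + r_block * p_cpu              # block delta-T depends only on CPU power

for label, heat in [("CPU only", p_cpu), ("CPU + GPU", p_cpu + p_gpu)]:
    print(f"{label}: poor block {cpu_temp(r_block_poor, heat):.0f} °C, "
          f"good block {cpu_temp(r_block_good, heat):.0f} °C")
```

With numbers in that ballpark, the GPU only nudges the coolant temperature by a few degrees, while the block alone accounts for roughly a 10°C swing.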
And tell me, how long do those laptops last? I doubt that they will be alive after a decade. And it's not like this is unknown; we all remember the nVidia fiasco with the 8000-series GPUs cooking themselves to death. Many GTX 480s are dead. Many R9 290Xs are dead. And with any AMD monstrosity like the Fury X, ignoring water cooler failures, the core itself is cooked to death on most cards.
My old Thinkpad X201 lasted a decade before I sold it on, and routinely ran the CPU very hot (despite being repasted twice through its lifetime). It's true that many laptops die early, and many do die due to insufficient cooling, but it's
very rarely the CPU itself that fails in these cases. It might be that the PCB itself takes damage from repeated heating/cooling cycles, or the solder joints below the CPU, RAM, or anything else, or peripheral components (charging circuitry is common, as are VRM failures and internal display circuitry failures). I don't think I've ever come across a laptop with a verifiably dead CPU - though of course it is a bit difficult to tell. But CPUs are
extremely robust, and are closely monitored for thermals. Bad laptop designs tend to cook everything other than the CPU by not ensuring sufficient internal airflow and exhaust of hot air, which kills other components, but not the CPU itself.
Well, I have read about some dude (at OCN) trying to observe electromigration on a chip - a Sandy Bridge i7. He ramped the voltage up to 1.7V and kept the CPU cool, but after only 15 minutes it needed more voltage to be stable. And at more sane voltages, he needed a few hours to make it need more voltage. Now translate that to 8 years of computer usage. You would want a CPU to be functional for at least 15 years, and most people want it working for 8 years or so; any accelerated electromigration at such rates isn't acceptable. And if that Ryzen needed only that much to electromigrate, think about Ryzens running stock with stock coolers. They usually sit at 85C under load and still get voltages in the 1.2-1.45 volt range. That's very close to Buildzoid's test, and only 60 hours to damage it like that is really not good, knowing that Ryzen chips likely can't test their own stability and ask for more voltage than what AMD set them to have from the factory.
Electromigration and clock degradation vary
massively between process nodes and architectures, so those aren't comparable. Also, you clearly didn't read what I said: Buildzoid ran his chip way above stock thermal limits, at fixed voltages and currents, all of which were far above stock behaviour.
Here's the video if you want more detail, btw. But in short, he ran the CPU at 105-112°C (depending on the time of day and how hot the room was) - he also tried running it at 1.52V, but it shut down hard after hitting 115°C, which is apparently the hardcoded silicon thermal shutdown limit. According to him, AMD tests its chips at slightly less idiotic settings than this for hundreds of hours to ensure they don't degrade under stock conditions. And the difference in electromigration between his ~110°C, 133A, 1.444V (as read; 1.5V set) and stock behaviour (throttling at 95°C IIRC, voltages reading similarly high but actually being bucked lower by the CPU) is very significant. He goes into this himself as well. His results, while of course a sample size of one, indicate that these CPUs, if run at stock, even with terrible cooling, will
never degrade.
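To put rough numbers on why those conditions aren't comparable to stock: electromigration wear is usually modelled with Black's equation, where lifetime scales with exp(Ea/kT) and inversely with current density to some power. A quick sketch - the activation energy, the exponent, and the stock current figure are assumptions; only the 133A and the temperatures come from the discussion above:

```python
from math import exp

# Black's equation sketch: MTTF ~ J**-n * exp(Ea / (k*T)).
# Acceleration factor between "abusive" and "stock-ish" conditions.
# Ea, n and the stock current are assumed/illustrative, not measured values.

k  = 8.617e-5          # Boltzmann constant, eV/K
Ea = 0.85              # activation energy, eV (assumed; ~0.7-0.9 is typical for Cu)
n  = 2.0               # current-density exponent (assumed; typically 1-2)

T_stock, T_abuse = 95 + 273.15, 110 + 273.15   # die temps, K
I_stock, I_abuse = 80.0, 133.0                 # package current, A (stock figure assumed)

af_temp    = exp((Ea / k) * (1 / T_stock - 1 / T_abuse))
af_current = (I_abuse / I_stock) ** n
print(f"temperature alone: ~{af_temp:.1f}x faster wear")
print(f"temperature + current: ~{af_temp * af_current:.1f}x faster wear")
```

Even with fairly tame assumptions, that works out to getting on for an order of magnitude faster wear at his settings than at a stock chip throttling at 95°C - before accounting for the higher effective voltage.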
It is how Intel defines TDP: "Thermal Design Power (TDP) represents the average power, in watts, the processor dissipates when operating at Base Frequency with all cores active under an Intel-defined, high-complexity workload." In other words, the long-term power limit, which is PL1, and it is always set to match the advertised TDP.
Close, but not quite.
PL1 is recommended to be set equal to TDP, and you seem to be missing that "power [...] the processor
dissipates" is something other than "power the processor
consumes". The difference is small, but it's nonetheless meaningful. TDP has never been directly related to power draw. It's been closely aligned, but that relation has always been variable and somewhat incidental.
Sure, but prebuilts had no problem dealing with 95 watt TDPs in the past anyway. Let's not forget the i7 2600K, Core 2 Quads, or any AMD Phenom. A 65 watt TDP is just choking the chips for no good reason.
They dealt with them, sure, but OEMs have had a clear desire to build smaller, more affordable and space-efficient business desktops - as that's their bread and butter - and have thus pushed for lower TDPs. Also, K-SKUs like the 2600K have almost never been used in OEM systems, outside of a few gaming models. One of the major developments when Intel moved to the Core architecture was the lowering of mainstream TDPs, which in turn allowed for the proliferation of SFF and uSFF business desktops, AIOs, and the like. Most of these use 65W no-letter CPUs, while the smallest use 35W T-SKU CPUs. 95W isn't seen in these spaces.
I don't think that they bin them, and I haven't heard of that at all. It would be rather stupid of them to separately release these as faster models with lower voltages, as it would mean more silicon unable to match Intel's spec for the SKU.
They do. You know how many SKUs Intel makes for each generation of chips, right? Binning is how they differentiate between these. And T-SKUs are always taken from bins that perform well at low voltages. K-SKUs are taken from bins that clock high at higher voltages. Sometimes these bins are similar, if not interchangeable. Sometimes they aren't.
I have, so stop saying that nonsense. I may not agree with it, but that doesn't mean I don't read it. Anyway, TDP was once a decent metric, no need to shit on it. How hard could it possibly be for a chip maker to calculate amps*volts for each chip at the chip's maximum theoretical load? It's not hard at all, but for us it is, as we aren't usually informed about a chip's official voltage or how many amps it can pull. TDP only becomes a load of crock if companies start to obfuscate what it actually is and feed the public bullshit. The Pentium 3 never had a problem of an incorrect TDP being specified; the 1.4GHz model was rated at 32.2 watts. The Pentium 4 2.8GHz was rated at 68.4 watts. That was what you could measure while the CPU was loaded, if you measured the CPU power rail. Just like they could back then, they can still do the same with all the power limits of modern chips.
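Just to spell out the "amps * volts" framing with the two TDP figures above (the core voltages here are rough assumptions, not published per-chip figures - which is exactly the information an end user usually doesn't get):

```python
# Illustration only: back out rough full-load current from the rated wattage,
# using assumed core voltages for the two chips mentioned above.

rated_watts   = {"Pentium III 1.4GHz": 32.2, "Pentium 4 2.8GHz": 68.4}
assumed_vcore = {"Pentium III 1.4GHz": 1.45, "Pentium 4 2.8GHz": 1.525}  # volts, assumed

for chip, watts in rated_watts.items():
    amps = watts / assumed_vcore[chip]
    print(f"{chip}: {watts} W / {assumed_vcore[chip]} V -> roughly {amps:.0f} A at full load")
```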
Well, if you did read the thread, then you're just adamant in maintaining a belief in a reality that has never existed. I would really recommend you take a step back and try to consider the larger context. Nobody has 'shit on' TDP as a metric; we are simply discussing how it's quite problematic as boost becomes more aggressive and Intel fails to enforce their specifications in the DIY market, leading to extremely wide performance deltas for seemingly identical products.
Nobody has said it would be difficult to calculate a specific TDP for each chip, but I've been trying to say for ... what, three pages of posts now, that
this is not the purpose or function of TDP. TDP is a) a thermal dissipation specification, divided into classes, for which OEMs and cooler makers design cooling systems, and b) a marketing tier system vaguely related to power draw. You're arguing for TDP to not actually be about cooling and thermals, but rather about power draw. Which ... why would we then call it TDP?
Thermal design power? Unless that power (in watts) specifies what a cooling system must be able to dissipate, that name becomes nonsense.
As for why we can't go back to the Pentium 2/3/4 era ... well, those were fixed-clock CPUs; they had no clock scaling whatsoever, no power savings at idle to speak of, and they all had very low power consumption. The difference in cooling needs between a 35.2W and a 64.1W CPU is tiny compared to the difference between contemporarily relevant power draws like 65W vs. 225W. So again, if you want to go back to that, you also need to accept going back to the other drawbacks of the times - such as limited motherboard compatibility (no more just picking a suitable motherboard with the correct socket; you now need to explicitly check that the CPU is listed as supported!), no boost clocks (= significant drops in system responsiveness), etc. Oh, and that completely ignores the fact that it would piss off OEMs to no end and pretty much kill Intel's business relations. Which means they would never, ever do that.
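To put rough numbers on that gap (assumed ambient and temperature limit, purely for illustration): what matters for a cooler is the required thermal resistance, i.e. the allowed temperature rise divided by the heat it has to move.

```python
# Required cooler performance (°C/W over ambient) for a given thermal budget.
# Assumes ~25 °C ambient and ~90 °C allowable CPU temperature, i.e. a 65 °C budget.
# Illustrative numbers, not a spec.

budget_c = 90 - 25   # allowable temperature rise, °C

for watts in (35.2, 64.1, 65, 225):
    print(f"{watts:>6} W -> cooler must do better than {budget_c / watts:.2f} °C/W")
```

Anything around 1 °C/W is trivial for even a cheap tower cooler; ~0.3 °C/W is big-heatsink or liquid territory. That's the spread a single TDP class would now have to cover.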
For me it would be 4GHz at 105 watts, which is exactly what the i5 10400F pulls. And I don't care about OEMs, as in my country they are legitimately rare and practically don't exist. OEMs are an America-only concept, which doesn't apply to the rest of this planet.
Okay, so the 10400F would be rated at that. But then the 10600 (non-K) would either be specced the same (as they are the same bin, most likely), or would need to have its own TDP tier. And when each CPU has its own TDP, the metric becomes meaningless.
To be clear: what you're asking for is clearly defined
power draw metrics. This is not
thermal design power. I agree that accurate power draw metrics would be great to have on the spec sheet, but please stop mixing up your terms.
Also, saying "OEMs are an American concept" is ludicrous. Dell, HP and Lenovo sell the
vast majority of desktop PCs in the world, and they sell them to businesses, governments and educational institutions across the world. Two of these three might be American companies, but that is utterly irrelevant - they operate globally, and in sum likely sell far more outside of the US than in the US - the US is just ~330M people, after all. Are you actually saying that major companies in your country buy their computers from small local manufacturers, or build them themselves? That is very hard to believe, as small manufacturers are quite unlikely to have the support systems major companies require. And major companies
definitely don't build DIY systems.
I would be if I expected it to be used with an aluminium sunflower cooler and if all of us spoke legalese every day, but I'm not. If the chip can safely achieve that and do no harm to the board, why on Earth wouldn't I want that "boost"? For nearly a decade, boost has been almost identical to base speed, as the CPU either works at idle speed or maximum speed, which is boost. They rarely work at the base clock, and most users don't see it unless they disable boost in the BIOS.
Yes, that's how DIY PCs work. They also often ignore PL1 by setting PL2 as infinite, or set a higher PL1 than stock. But remember, you're also asking for strict adherence to TDP, and you want TDP to be equal to PL1. Something has to give here. Please make up your mind - all of these cannot logically be true at the same time.
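To spell out what's colliding here, a simplified sketch of the commonly described PL1/PL2/Tau model (not Intel's actual firmware logic; the 65/120/28 values are just illustrative): out of the box, the chip may draw up to PL2 until a moving average of package power reaches PL1, after which it settles at PL1. Boards that raise PL1 or set both limits sky-high simply never let that clamp kick in.

```python
# Simplified sketch of the commonly described PL1/PL2/Tau behaviour
# (not Intel's actual firmware algorithm; all numbers are illustrative).

PL1, PL2, TAU = 65.0, 120.0, 28.0   # sustained limit (W), short-term limit (W), time constant (s)
DT = 1.0                            # simulation step (s)

avg_power = 0.0
for t in range(0, 121):
    demand = PL2                                 # an all-core load that would draw PL2 forever
    allowed = PL2 if avg_power < PL1 else PL1    # sustained limit kicks in at the average
    drawn = min(demand, allowed)
    avg_power += (DT / TAU) * (drawn - avg_power)  # exponentially weighted moving average
    if t % 20 == 0:
        print(f"t={t:3d}s  draw={drawn:5.1f} W  moving avg={avg_power:5.1f} W")
```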
Why not look at my system in my profile, then? My cooler is clearly a Scythe Choten with the stock fan, and the case is a Silencio S400. It has 3 fans in it - one top exhaust, two front intakes - usually running at 600-800 rpm. And sure, I am biased; of course "under manufacturer spec" is the bare minimum spec for cooling. I said that it is acceptable only when the CPU is running Prime95 and the GPU is running FurMark at the same time. You got those 80C at nowhere near such a high load, and not even close to a worst-case scenario.
Sorry, but my 80°C was while running Prime95 - as a response to your example. Which is also why those temperatures don't worry me whatsoever. Heck, 80°C in real world use wouldn't really be worrying either - it's well below any throttle point, and nowhere near harmful to anything. I would like it to be cooler, but I prefer silence. As for the rest of your setup, that wasn't relevant, the point was: you're setting arbitrary standards, presenting them in an oversimplified way, and using that as an argument. That is a really, really bad way of arguing.
You clearly said that it is very relevant for them.
Relevant to perhaps a couple hundred users worldwide? Sure. That is not a reason to use it as a generally valid benchmark - quite the opposite. You might as well argue that the needs of rally drivers are the best way to set safety standards and equipment levels for cars. Specialist needs are specialist needs, even if they use (derivatives of) general-purpose equipment.
As if there were stuff like that. HEDT is high-end desktop, not a workstation. And why would I not use my plebeian chips for such loads? They are perfectly capable of it and are designed to be general purpose. General purpose means that if I want, I use it only for playing mp3s, and if I want, I use it to assemble molecules. I see nothing stupid or unreasonable about that. It might not be the fastest, but that doesn't mean it can be unstable or catch on fire.
... Xeon-W is for workstations, as is Ryzen Pro and Threadripper Pro. These are chips tested and validated for such workloads. Sure, you
can use any chip for such a workload, but you then also need to be cognizant that this is not a use that it's tested and validated for. And this is fine! It's likely to work perfectly. But again, you can't throw together any combination of retail consumer parts, subject them to a professional workload, and expect it to perform above spec. Which is essentially what you're arguing here.
I don't run it long, only to get an idea of what my thermals are.
.... if you're not reaching steady-state thermals, what's the point? Also, how are you getting "an idea what your thermals are" from running a power virus that generates more heat than literally any common GPU workload out there? That would give a very
unrepresentative view of your thermals. If you're into overblown cooling for its own sake, and pushing thermals as low as you can within your chosen parameters, then that's what you like, but stop acting like that's suitable as a generally applicable standard for anything. And again, FurMark has been demonstrated to kill GPUs at stock due to its extreme heat load and how it intentionally aims to break thermal limits. Recommending it is reckless at best.
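If you actually want steady-state numbers rather than a snapshot, it's easy enough to log until the readings stop moving. A rough Linux-only sketch (assumes psutil and a 'coretemp' sensor - the sensor name, window and tolerance are just placeholders, adjust for your platform and load):

```python
import time
import psutil  # sensor support is Linux-only; run your sustained load separately

# Quick-and-dirty steady-state check: sample the first package/core temperature
# once a second and call it settled when the last few minutes barely move.

WINDOW = 180          # seconds of history to compare (assumed)
TOLERANCE = 1.0       # °C of drift considered "settled" (assumed)
history = []

while True:
    temps = psutil.sensors_temperatures().get("coretemp", [])
    if temps:
        history.append(temps[0].current)
    if len(history) > WINDOW:
        recent = history[-WINDOW:]
        if max(recent) - min(recent) <= TOLERANCE:
            print(f"steady state around {recent[-1]:.0f} °C after {len(history)} s")
            break
    time.sleep(1)
```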
Actually boost is also technically running above manufacturer spec and is never accounted for in TDP calculations.
... I know. I have said so quite a few times. However, there are always safety margins built into the specification - any Intel chip, when limited to TDP in power draw, will boost to some extent (unless you've gotten the absolute worst possible chip in that bin). Thus, disabling boost will inevitably drop voltages and power draw. Disabling boost does not mean strictly adhering to TDP, as that would require individual "TDPs" (in your meaning of "power draw specs") not for each SKU, but for each physical chip, as they inevitably differ from each other.
The 870K was launched in 2015, which is what the system was assembled around, and technically it had a refreshed architecture, so it wasn't the same as the older part; therefore it's 2015 tech.
... the chip you were initially talking about still launched in October 2012.
And you think that Intel doesn't use a "power virus" at its factories to determine heat output? The last time I read about it, Intel used their in-house tools for that, plus specific heat simulators or at least specialized software loads to simulate it. They do exactly what Prime95 does, but better and even more taxing on the chip, and in the final settings they add some safety margin to account for less-than-perfect VRMs, vdroop, hot climates, etc. If you are saying that's bullshit, then Intel should fire all those people who ensure the stability and predictable heat output of chips, as they are apparently useless.
How manufacturers torture test their components and how end users use their components are not the same, nor should they be. Manufacturers need to test unrealistic worst-case scenarios. That doesn't make unrealistic worst-case scenarios good tests for end users, as
what you are testing for is not the same. And no, Intel doesn't use power viruses to set TDP. Many Intel CPUs throttle under power virus loads if set to stock behaviour.
In my situation I only had the choice of buying either an i3 4130 or an FX 6300; the FX was overall better and lasted longer. The FX chips were great value. And the FX 8320 was selling for slightly less than the i5 4440, so the FX was maybe the better value deal too. FX chips didn't perform terribly; they just weren't as fast as Intel in single-threaded loads. That doesn't mean they weren't decently fast. I can tell that you never had an FX and have no idea what they were actually like.
Decently fast, sure, for their time and disregarding power draw. They did decently well in MT loads (though by no means close to their nominal core count advantage), consumed dramatically more power even at the same TDP when compared to Intel (which just goes to show how TDP has never been a metric for power draw), lagged behind significantly in ST workloads, and
kind-of-sort-of caught up when overclocked, but at fully 3x the power consumption. They were fine for their time, if you didn't mind buying hefty cooling. But they aged very poorly, and
even an i5-6600 at 65W trounces the FX-8320E OC'd to 4.8GHz in the vast majority of tests. They might have seen an uptick in relative performance as more applications have become more multithreaded, but by that time (i.e. 2018+) they were already so far behind affordable current-generation offerings there was no real point. Of course a CPU you already own is infinitely cheaper than buying a new one, so if it performed adequately that is obviously great - I'm a big fan of making hardware last as long as possible (hence my current soon-to-be 6-year-old GPU, and me keeping my Core2Quad system from 2009 to 2017). But those old FX CPUs never aged well.