
Be careful when recommending B560 motherboards to novice builders (HWUB)

Joined
May 8, 2021
Messages
1,978 (1.82/day)
Location
Lithuania
System Name Shizuka
Processor Intel Core i5 10400F
Motherboard Gigabyte B460M Aorus Pro
Cooling Scythe Choten
Memory 2x8GB G.Skill Aegis 2666 MHz
Video Card(s) PowerColor Red Dragon V2 RX 580 8GB ~100 watts in Wattman
Storage 512GB WD Blue + 256GB WD Green + 4TB Toshiba X300
Display(s) BenQ BL2420PT
Case Cooler Master Silencio S400
Audio Device(s) Topping D10 + AIWA NSX-V70
Power Supply Chieftec A90 550W (GDP-550C)
Mouse Steel Series Rival 100
Keyboard Hama SL 570
Software Windows 10 Enterprise
A custom loop, yes, but with a single 240mm rad for both CPU and GPU, and a quasi-AIO CPU DDC pump-block combo that isn't particularly good thermally. Also, the loop is configured for silence and not thermals, with fans ramping slowly and based on water temperatures rather than component temperatures.
Despite that, it's still a custom loop, and it's still likely on par with bigger air coolers.


Sorry, but that's nonsense. Silicon is perfectly fine running at 90-100°C for extended periods of time. As I've said before here, look at laptops - most laptops idle in the 60s-70s and hit tJmax at any kind of load as they prioritize keeping quiet + accept that running hot doesn't do any harm.
And tell me how long those laptops last. I doubt they'll still be alive after a decade. And it's not like this is unknown; we all remember the nVidia fiasco with the 8000-series GPUs cooking themselves to death. Many GTX 480s are dead. Many R9 290Xs are dead. And take any AMD monstrosity like the Fury X: even ignoring water cooler failures, the core itself is cooked to death on most cards.


It won't be if you also ramp voltages high while loading the CPU heavily, but advanced self-regulating CPUs like Ryzens don't allow those in combination unless you explicitly disable protections and override regulatory mechanisms. Heck, Buildzoid once tried to intentionally degrade his 3700X, and after something like a continuous 60 hours at >110°C (thermal limits bypassed) and 1.45V under 100% load he lost ... 25MHz of clock stability. So under any kind of regular workload degradation is never, ever happening, as that combination of thermals, voltage and load over time is utterly absurd for real-world workloads. Sure, his sample might be very resistant to electromigration, but even accounting for that there's no reason to worry at all.
Well, I have read about some dude (at OCN) trying to induce electromigration on a Sandy Bridge i7. He ramped the voltage up to 1.7V and kept the CPU cool, and after only 15 minutes it needed more voltage to be stable. At more sane voltages, he needed a few hours to make it need more voltage. Now translate that to 8 years of computer usage. You would want a CPU to be functional for at least 15 years, and most people want it working for 8 years or so; electromigration accelerated at such rates isn't acceptable. And if a Ryzen needed only that much to electromigrate, think about Ryzens running stock with stock coolers. They usually sit at 85°C under load and still get voltages in the 1.2-1.45 volt range. That's very close to Buildzoid's test, and only 60 hours to damage it like that is really not good, considering a Ryzen chip likely doesn't test its own stability and ask for more volts from the factory than what AMD set it to have.
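To put some rough numbers on the temperature side of this, here's a minimal sketch using Black's equation, the standard electromigration lifetime model. The activation energy below is a generic textbook value, not a figure for any specific CPU, and the sketch deliberately ignores the current/voltage term - which was also far above stock in Buildzoid's test - so treat it purely as an illustration of how strongly temperature alone accelerates wear.

```python
import math

# Black's equation: MTTF = A * J**(-n) * exp(Ea / (k * T)).
# Only the temperature term is compared here; the current-density term J**(-n)
# is ignored. Ea = 0.9 eV is a generic textbook value for copper interconnects.

K_BOLTZMANN_EV = 8.617e-5  # eV per kelvin

def thermal_acceleration(t_stress_c: float, t_use_c: float, ea_ev: float = 0.9) -> float:
    """How many times faster electromigration proceeds at t_stress_c vs t_use_c."""
    t_stress = t_stress_c + 273.15
    t_use = t_use_c + 273.15
    return math.exp(ea_ev / K_BOLTZMANN_EV * (1 / t_use - 1 / t_stress))

if __name__ == "__main__":
    af = thermal_acceleration(t_stress_c=110, t_use_c=85)
    print(f"~{af:.1f}x faster wear at 110 C than at 85 C (temperature term only)")
    # So 60 hours at ~110 C corresponds to roughly 60 * af hours at 85 C,
    # before even accounting for the lower stock voltage and current.
    print(f"60 h at 110 C ~ {60 * af:.0f} h at 85 C, ignoring voltage/current")
```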

PL1 is absolutely not how Intel defines TDP. PL1 is defined from TDP, TDP is defined as a thermal output class of CPUs towards which CPUs are tuned in terms of base clock and other characteristics. Power draw is only tangentially related to TDP.
It is how Intel defines TDP: "Thermal Design Power (TDP) represents the average power, in watts, the processor dissipates when operating at Base Frequency with all cores active under an Intel-defined, high-complexity workload." In other words, the long-term power limit, which is PL1, and it is always set to match the advertised TDP.

It's not going to change. The 65W TDP tier is utterly dominant in the OEM space, which outsells DIY by at least an order of magnitude. 65W TDPs for midrange and lower end chips aren't changing. If you want more for DIY, they have a K SKU to sell you to cover that desire - for a price, of course. You, and us DIYers overall, are not first in line for things being adjusted to our desires, and never will be.
Sure, but prebuilts had no problem dealing with 95-watt TDPs in the past. Let's not forget the i7 2600K, the Core 2 Quads, or any AMD Phenom. A 65-watt TDP is just choking the chips for no good reason.

They are supposed to be better binned - whether they are in real life is always a gamble, as there's a lot of overlap between different bins, and some are interchangeable depending on the application.
I don't think that they bin them; I haven't heard of that at all. It would be rather stupid of them to separately release these as faster models with lower voltages, as it would mean more silicon unable to match the Intel spec for the SKU.

Again: it seems like you haven't read the rest of this thread at all. I'll just point you to this post. Though especially this part:

Saying "DIY market was just fine without TDP shenanigans" is such an absurd reversal of reality that it makes it utterly impossible to actually discuss the issues at hand. TDPs have never been directly related to power draw, nor has it ever been intended for the DIY market beyond a product class delineation.
I have, so stop saying that nonsense. I may not agree with it, but that doesn't mean I don't read it. Anyway, TDP was once a decent metric, no need to shit on it. How hard could it possibly be for a chip maker to calculate amps × volts for each chip at its maximum theoretical load? It's not hard at all for them, but it is for us, as we usually aren't told a chip's official voltage or how many amps it can pull. TDP only becomes a load of crock when companies start to obfuscate what it actually is and feed the public bullshit. The Pentium 3 never had a problem with an incorrect TDP being specified: the 1.4GHz model was rated at 32.2 watts. The Pentium 4 2.8GHz was rated at 68.4 watts. That was what you could measure with the CPU loaded if you measured the CPU power rail. Just like they could back then, they could still do the same with all the power limits of modern chips.
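And for illustration, the amps × volts calculation really is this simple - a trivial sketch with made-up voltage and current figures (none of these are official Intel numbers):

```python
# Back-of-the-envelope "amps * volts" worst-case power estimate.
# Voltage/current pairs below are illustrative assumptions, not official specs.

def package_power(vcore_v: float, icc_max_a: float) -> float:
    """Worst-case package power in watts from core voltage and maximum current."""
    return vcore_v * icc_max_a

examples = {
    "hypothetical old fixed-clock chip": (1.45, 22.2),        # ~32 W class
    "hypothetical modern chip at full boost": (1.30, 175.0),  # ~228 W class
}

for name, (volts, amps) in examples.items():
    print(f"{name}: {package_power(volts, amps):.1f} W")
```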

As for abandoning boost: well, if you'd be happy with ~2.5GHz CPUs, then by all means. Because that's what we'd get if there wasn't boost - we'd get base clock at sustained TDP-like power draws. The 65W TDP tier isn't going anywhere, again, as OEMs buy millions of those CPUs, and changing it would be extremely expensive for them.
For me it would be 4GHz at 105 watts, which is exactly what the i5 10400F pulls. And I don't care about OEMs, as in my country they are genuinely rare and practically don't exist. OEMs are an American-only concept that doesn't apply to the rest of the planet.

Yes. But that's not throttling. That's part of tuning a DIY system. Nobody has ever promised 100% boost clock 24/7 under 100% all-core load, or even 1-core load. You really need to be more nuanced in your approach to this.
I would be if I expected it to be used with an aluminum sunflower cooler and if all of us spoke legalese every day, but I'm not. If the chip can safely achieve that and do no harm to the board, why on Earth wouldn't I want that "boost"? For nearly a decade, boost has effectively taken the place of base speed, as a CPU either runs at idle speed or at maximum speed, which is boost. CPUs rarely run at base clock, and most users never see it unless they disable boost in the BIOS.


"At below maximum manufacturer specified temperature" ... okay ... so, anything below 100°C-ish? Because above you seemed to say 80°C was unacceptable. Yet that's quite a bit below maximum. Also, 1200rpm ... of which model of fan, how many fans, which case, which cooler? And obviously the TR system would have failed, it had a clogged AIO cooler. My point was: you're making generalizing claims without defining even close to a sufficient amount of variables. Your criteria still make it sound like my cooling setup is well within your wants, yet you're saying above that it's unacceptable, so ... there's something more there, clearly.
Why not look at the system in my profile, then? My cooler is clearly a Scythe Choten with the stock fan, and the case is a Silencio S400. It has 3 fans in it: one top exhaust and two front intakes, usually running at 600-800 rpm. And sure, I am biased; of course "under manufacturer spec" is the bare minimum spec for cooling. I said that it's acceptable only when the CPU is running Prime95 and the GPU is running FurMark at the same time. You got those 80C at nowhere near such a high load, not even close to a worst-case scenario.


Prime95 not "realistic". Yes, some people calculate primes for weeks. Some people calculate the changes in molecular or cell structures of complex organisms when subjected to various chemicals. That doesn't make either a relevant end-user workload.
You clearly said that it is very relevant for them.

If you're doing workstation things, get a workstation, or accept that consumer-grade products aren't designed for that and you need to overbuild to match.
As if such a thing existed. HEDT is high-end desktop, not a workstation. And why would I not use my plebeian chips for such loads? They are perfectly capable of it and are designed to be general purpose. General purpose means that if I want, I use it only for playing MP3s, and if I want, I use it to assemble molecules. I see nothing stupid or unreasonable about that. It might not be the fastest, but that doesn't mean it gets to be unstable or catch fire.


As for FurMark, whether a GPU can "handle" it is irrelevant. It is a workload explicitly created for maximum heat output, which is dangerous to run. It doesn't matter what thermals your GPU reads (heck, the very fact that you're saying "it can handle it with BIOS mods!" says enough by itself!), the issue is that it creates extreme hotspots away from the thermal sensors on your GPU. Most GPUs - all of them pre RDNA - have their thermal sensors along the edge of the die. Under normal loads there's easily a 10-20°C difference in thermals between the edge and centre of the die under full load. Furmark exaggerates that - so if your edge thermal sensor is reading 70-80, the hotspot temperature might be 110 or higher. If your hardware doesn't die, that's good for you, but please stop subjecting it to unnecessary and unrealistic workloads just for "stress testing".
I don't run it long, only to get an idea of what my thermals are.

And yes, of course you get voltage reductions if you disable boost. That's ... rather obvious, no? Go below stock behaviour, and you'll get lower voltages and power draws. Not quite surprising.
Actually boost is also technically running above manufacturer spec and is never accounted for in TDP calculations.

No. Old stock = old, unsold products that have been sitting on shelves for a long time. That CPU was launched in October 2012, and while production of course ran for several years after that, it definitely wasn't recently manufactured when you bought it. And even if it was, it was still ancient tech at that point. Which is fine, but please don't try to say that it wasn't old.
The 870K was launched in 2015, which is what the system was assembled for, and it technically had a refreshed architecture, so it wasn't the same as the older part; therefore it's 2015 tech.

That's a commonly held enthusiast belief, but it's a rather irrational one. Power viruses and unrealistic heat loads can be beneficial if you're really pushing things and still want 24/7 stability, but for anything else they're both rather useless, potentially misleading, and possibly harmful to your components. What is the value of keeping CPU temps while running Prime95 under a given level if the CPU is never going to see a workload similar to that? Etc.
And you think Intel doesn't use a "power virus" at the factory to determine heat output? The last time I read about it, Intel used in-house tools and specific thermal simulators, or at least specialized software loads, to do exactly that. They do exactly what Prime95 does, but better and even more taxing on the chip, and in the final settings they add a safety margin to account for less-than-perfect VRMs, vDroop, hot climates and so on. If what you're saying were true, then Intel should fire all the people who ensure the stability and predictable heat output of their chips, as they would apparently be useless.


Value is relative. You clearly value overclocking for its own sake. Which is of course fine if that's what you like to spend your time doing! But your conception of value handily overlooks the fact that FX (and Bulldozer derivatives in general) performed rather terribly. They were fun from a technical and OC perspective, and they were cheap, but they were routinely outperformed by affordable i5s (and even i3s towards the end) with half the cores or less. Ryzen gen 1 and 2 delivered massive value in terms of performance/$, but as you said, they never really OC'd at all. I prefer the latter, you prefer the former - to each their own, but your desire is by far the more niche and less generally relevant one.
In my situation I only had the choice of buying either an i3 4130 or an FX 6300, and the FX was better overall and lasted longer. The FX chips were great value. And the FX 8320 was selling for slightly less than the i5 4440, so the FX was maybe the better value deal too. FX didn't perform terribly, they just weren't as fast as Intel in single-threaded loads. That doesn't mean they weren't decently fast. I can tell that you never had an FX and have no idea what they were actually like.
 
Joined
Jan 14, 2019
Messages
9,881 (5.12/day)
Location
Midlands, UK
System Name Nebulon-B Mk. 4
Processor AMD Ryzen 7 7800X3D
Motherboard MSi PRO B650M-A WiFi
Cooling be quiet! Dark Rock 4
Memory 2x 24 GB Corsair Vengeance EXPO DDR5-6000
Video Card(s) Sapphire Pulse Radeon RX 7800 XT
Storage 2 TB Corsair MP600 GS, 2 TB Corsair MP600 R2, 4 + 8 TB Seagate Barracuda 3.5"
Display(s) Dell S3422DWG, 7" Waveshare touchscreen
Case Kolink Citadel Mesh black
Power Supply Seasonic Prime GX-750
Mouse Logitech MX Master 2S
Keyboard Logitech G413 SE
Software Windows 10 Pro
Benchmark Scores Cinebench R23 single-core: 1,800, multi-core: 18,000. Superposition 1080p Extreme: 9,900.
Until a wild Nissan Micra with an RB20DE swap overtakes you. Or for that matter, maybe some chap lucks out and finds a Mitsubishi Colt with a 4G63 engine and invests a bit in turbo and handling mods... Oh, the possibilities are endless.
They can overtake me all they want. I'm out to enjoy the drive, not to race. ;)

Sounds interesting! Let me know if you make a build log?
Will do, I just have to find the proper channel first (maybe a new forum thread). :rolleyes:

So far, I've been bold enough to test with the stock settings (65 W PL1, 28 s Tau, 225 W PL2). The CPU hits 90 °C with power consumption around 180-190 W in all-core workloads, but for some reason, the real-life Tau seems to last only 3-4 seconds, not 28 - maybe because of thermals, though no throttling is reported. After that, when PL1 kicks in, temps quickly settle around 60-65 °C and the clock speed drops to 2.7-2.8 GHz. In a 10-minute sustained Cinebench run, the 11700 scores similarly to a Ryzen 1700X this way.

In single-threaded runs, the CPU "only" eats around 50 W and maintains its highest boost clock of 4.8-4.9 GHz all the time, which puts it on par with Ryzen 5000 chips in the score. Temps are around 72-75 °C.

It appears that the stock 225 W PL2 is too aggressive for my setup, but the 65 W PL1 is too mild. There is definitely more tweaking needed. :)

Edit: It's also interesting to note that single-threaded runs use less power, but result in higher temperatures.

In my situation I only had the choice of buying either an i3 4130 or an FX 6300, and the FX was better overall and lasted longer. The FX chips were great value. And the FX 8320 was selling for slightly less than the i5 4440, so the FX was maybe the better value deal too. FX didn't perform terribly, they just weren't as fast as Intel in single-threaded loads. That doesn't mean they weren't decently fast. I can tell that you never had an FX and have no idea what they were actually like.
I had an FX-8150 and a Core i3-4160 as well. While from a completely subjective standpoint I loved both, the i3 was the better chip for gaming at that time.
 
Joined
May 8, 2021
Messages
1,978 (1.82/day)
Location
Lithuania
System Name Shizuka
Processor Intel Core i5 10400F
Motherboard Gigabyte B460M Aorus Pro
Cooling Scythe Choten
Memory 2x8GB G.Skill Aegis 2666 MHz
Video Card(s) PowerColor Red Dragon V2 RX 580 8GB ~100 watts in Wattman
Storage 512GB WD Blue + 256GB WD Green + 4TB Toshiba X300
Display(s) BenQ BL2420PT
Case Cooler Master Silencio S400
Audio Device(s) Topping D10 + AIWA NSX-V70
Power Supply Chieftec A90 550W (GDP-550C)
Mouse Steel Series Rival 100
Keyboard Hama SL 570
Software Windows 10 Enterprise
So far, I've been bold enough to test with the stock settings (65 W PL1, 28 s Tau, 225 W PL2). The CPU hits 90 °C with power consumption around 180-190 W in all-core workloads, but for some reason, the real-life Tau seems to last only 3-4 seconds, not 28 - maybe because of thermals, though no throttling is reported.
It doesn't report throttling, because technically losing some boost isn't throttling. Your Tau is likely working correctly, it's just that your cooling can't cope with stock PL2. Overall, it seems that you need PL1 of 80 watts and PL2 of 110-120 watts.
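For what it's worth, here's a rough sketch of why the usable PL2 window is usually much shorter than the nominal Tau, assuming the commonly documented model where PL1 is enforced against a running (exponentially weighted) average of package power. It won't reproduce the exact 3-4 seconds - cooling, starting conditions and firmware details all matter - and every number below is an illustrative assumption.

```python
# Rough simulation of the PL1/PL2/Tau interaction, assuming PL1 is enforced
# against an exponentially weighted moving average (EWMA) of package power
# with time constant Tau. Firmware behaviour varies by board; illustration only.

def boost_window(pl1=65.0, pl2_draw=190.0, tau=28.0, idle_power=10.0, dt=0.05):
    """Seconds of PL2-level draw before the power average crosses PL1."""
    ewma = idle_power                            # average starts near idle power
    t = 0.0
    while ewma < pl1:
        ewma += (dt / tau) * (pl2_draw - ewma)   # first-order EWMA update
        t += dt
    return t

if __name__ == "__main__":
    # Stock 65 W PL1 with a ~190 W all-core draw: the average hits PL1 long
    # before the nominal 28 s window is up.
    print(f"65 W PL1, 190 W draw: ~{boost_window():.1f} s of boost")
    # An 80 W PL1 stretches the window, but the steady state is still whatever
    # clocks fit inside PL1.
    print(f"80 W PL1, 190 W draw: ~{boost_window(pl1=80.0):.1f} s of boost")
```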



I had an FX-8150 and a Core i3-4160 as well. While from a completely subjective standpoint I loved both, the i3 was the better chip for gaming at that time.
That's great, but the i3 quickly became rather obsolete. Everyone knew that getting a dual-core chip in 2014 wouldn't end well, and it only took some better-threaded games to arrive to make the FX chips clearly better than the i3. Also, you could overclock FX a lot if you desired. I'm not sure about the Zambezi chips, but Vishera was truly adequate. The crazy thing is that the FX 6300 had a low price of just 130 euros, and to this day there isn't a 6-core chip selling that cheap. The Ryzen 1600 was 160 euros, the i5 10400F was 155 euros. However, the real advantage of the Intel platform is that you could swap the CPU for an i7 later, and to this day it would be good. On AM3+ there wasn't any upgradability, only higher overclocking headroom.
 
Joined
Jan 14, 2019
Messages
9,881 (5.12/day)
Location
Midlands, UK
System Name Nebulon-B Mk. 4
Processor AMD Ryzen 7 7800X3D
Motherboard MSi PRO B650M-A WiFi
Cooling be quiet! Dark Rock 4
Memory 2x 24 GB Corsair Vengeance EXPO DDR5-6000
Video Card(s) Sapphire Pulse Radeon RX 7800 XT
Storage 2 TB Corsair MP600 GS, 2 TB Corsair MP600 R2, 4 + 8 TB Seagate Barracuda 3.5"
Display(s) Dell S3422DWG, 7" Waveshare touchscreen
Case Kolink Citadel Mesh black
Power Supply Seasonic Prime GX-750
Mouse Logitech MX Master 2S
Keyboard Logitech G413 SE
Software Windows 10 Pro
Benchmark Scores Cinebench R23 single-core: 1,800, multi-core: 18,000. Superposition 1080p Extreme: 9,900.
It doesn't report throttling, because technically losing some boost isn't throttling. Your Tau is likely working correctly, it's just that your cooling can't cope with stock PL2. Overall, it seems that you need PL1 of 80 watts and PL2 of 110-120 watts.
Funnily enough, I just tested that before work last night. :) The 120 W PL2 is a bit too steep (it still clocks down after a few seconds even with a 40 s Tau), but with the 80 W PL1, it sits comfortably in the low-to-mid 80s while holding a stable 3 GHz all-core. It isn't much (about Ryzen 2700X levels of performance), but 1. it's awesome to know that running an 11700 above stock PL1 is possible even in an SFF system, 2. I'm not going to run anything that needs 16 threads at 100% usage all the time, so I guess I'm fine, and lastly 3. the cooler gets reasonably hot during these tests. With the Ryzen 3600 and its stock 88 W PPT, the CPU got very hot, but the cooler stayed cold to the touch. That confirms my previous assumption: Ryzens have terrible heat dissipation, despite the efficiency of 7 nm chips.

Edit: All tests with the 11700 were done with the "Silent" BIOS fan preset. I'm not only a SFF freak, but a silence freak too. :D

That's great, but the i3 quickly became rather obsolete. Everyone knew that getting a dual-core chip in 2014 wouldn't end well, and it only took some better-threaded games to arrive to make the FX chips clearly better than the i3. Also, you could overclock FX a lot if you desired. I'm not sure about the Zambezi chips, but Vishera was truly adequate. The crazy thing is that the FX 6300 had a low price of just 130 euros, and to this day there isn't a 6-core chip selling that cheap. The Ryzen 1600 was 160 euros, the i5 10400F was 155 euros. However, the real advantage of the Intel platform is that you could swap the CPU for an i7 later, and to this day it would be good. On AM3+ there wasn't any upgradability, only higher overclocking headroom.
Yes, the i3 became obsolete as games started using more threads, but that's why I said: it was the better gamer at that time. ;) I remember seeing around 20% usage on both the 8150 and the HD 7970 I had it paired with in Assassin's Creed 3, and the game barely ran at 30 FPS. It was an extreme case, but still: games didn't need 8 cores back then, and the single-core performance on FX was just plain terrible.
 

Keullo-e

S.T.A.R.S.
Joined
Dec 16, 2012
Messages
11,041 (2.66/day)
Location
Finland
System Name 4K-gaming
Processor AMD Ryzen 7 5800X up to 5.05GHz
Motherboard Gigabyte B550M Aorus Elite
Cooling Custom loop (CPU+GPU, 240 & 120 rads)
Memory 32GB Kingston HyperX Fury @ DDR4-3466
Video Card(s) PowerColor RX 6700 XT Fighter OC/UV
Storage ~4TB SSD + 6TB HDD
Display(s) Acer XV273K + Lenovo L32p-30
Case Corsair 4000D Airflow White
Audio Device(s) Asus TUF H3 Wireless
Power Supply EVGA Supernova G2 750W
Mouse Logitech MX518
Keyboard Roccat Vulcan 121 AIMO
VR HMD Oculus Rift CV1
Software Windows 11 Pro
Benchmark Scores It runs Crysis remastered at 4K
Everyone knew that getting a dual-core chip in 2014 wouldn't end well
In fact, an overclocked Pentium G3258 was still a great budget chip, though it did become obsolete pretty quickly, so you're pretty much right there.
 
Joined
May 2, 2017
Messages
7,762 (3.04/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
Funnily enough, I just tested that before work last night. :) The 120 W PL2 is a bit too steep (it still clocks down after a few seconds even with a 40 s Tau), but with the 80 W PL1, it sits comfortably in the low-to-mid 80s while holding a stable 3 GHz all-core. It isn't much (about Ryzen 2700X levels of performance), but 1. it's awesome to know that running an 11700 above stock PL1 is possible even in an SFF system, 2. I'm not going to run anything that needs 16 threads at 100% usage all the time, so I guess I'm fine, and lastly 3. the cooler gets reasonably hot during these tests. With the Ryzen 3600 and its stock 88 W PPT, the CPU got very hot, but the cooler stayed cold to the touch. That confirms my previous assumption: Ryzens have terrible heat dissipation, despite the efficiency of 7 nm chips.

Edit: All tests with the 11700 were done with the "Silent" BIOS fan preset. I'm not only a SFF freak, but a silence freak too. :D
Is it possible to be an SFF freak without also being a silence freak? I know there are people out there who use FlexATX PSUs and don't mind the noise, but to me, those people belong in the kookoo bin. Some say 'size, silence, performance - pick two'; I say 'screw that, I want all three' and the fun is making that happen :D

Current Ryzens definitely have worse heat dissipation than recent Intels - it would be shocking if they didn't, given the far greater heat density combined with the off-centre positioning of the cores. The area of the cores is at most half, and the CCD is off in a corner rather than centred - very different from Intel, for sure. Not all coolers handle that equally well, and it sounds like your Shadow Rock might be particularly poor (though it might also have been a bad mount? Ryzens need pretty even pressure across the IHS). What were your clocks at those temperatures? When I was testing my build I ran my 5800X with an old Hyper 212 Evo (open on my desk), and it kept it very nicely cool (at least for that class of cooler) and boosting above spec. I think I saw slightly higher all-core boost with that compared to my current water loop actually - goes to show how a reverse-flow block isn't ideal, but there aren't many DIY DDC pump+block combos out there! Then again, they are engineered around running rather hot, with the dynamic boost system scaling very well around "high" thermals. There's a dedicated monitoring circuit in these CPUs that controls currents, voltages, clock speeds and more in order to ensure the CPU never reaches potentially harmful combinations of these, and the only way of overriding this is through fixed-clock OC. But I definitely understand the concern if your cooler didn't seem to be doing the job.
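To put rough numbers on the heat-density point, here's a back-of-the-envelope sketch. The die areas are approximate public figures and the wattages are round illustrative assumptions, so treat the output as order-of-magnitude only - and remember this is average flux; the local flux over the core region is more skewed still.

```python
# Ballpark heat-density comparison behind the "Ryzen is harder to cool" point.
# Die areas are approximate public figures (Zen 3 CCD ~81 mm^2, 10-core Comet
# Lake die ~206 mm^2); wattages are round illustrative numbers, not measurements.

def heat_flux(power_w: float, area_mm2: float) -> float:
    """Average heat flux through the die area, in W/mm^2."""
    return power_w / area_mm2

chips = {
    "Zen 3 CCD (~110 W of a ~140 W package through ~81 mm^2)": (110.0, 81.0),
    "Monolithic 10-core Comet Lake (~200 W over ~206 mm^2)": (200.0, 206.0),
}

for name, (power, area) in chips.items():
    print(f"{name}: ~{heat_flux(power, area):.2f} W/mm^2")
```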

As far as I can remember Intel's boosting system is rather 'dumb', in that the CPU will try to boost to its set boost clock within PL2 as long as tau hasn't expired, but will drop down to PL1 (and whatever boost can be maintained within that) completely if it reaches thermal limits within that span. I don't think there are dynamic limits at all. So it's not as opportunistic a system as on recent Ryzens, which just go as fast as they can until limits are hit, then step down gradually from that until an equilibrium is found. That likely explains why you're better off setting a higher PL1 and kind of ignoring PL2 - though keeping PL2 high is no doubt beneficial for responsiveness in desktop uses, as boosting very high there (for very short spans of time) will make for a smoother experience.
 
Joined
May 8, 2021
Messages
1,978 (1.82/day)
Location
Lithuania
System Name Shizuka
Processor Intel Core i5 10400F
Motherboard Gigabyte B460M Aorus Pro
Cooling Scythe Choten
Memory 2x8GB G.Skill Aegis 2666 MHz
Video Card(s) PowerColor Red Dragon V2 RX 580 8GB ~100 watts in Wattman
Storage 512GB WD Blue + 256GB WD Green + 4TB Toshiba X300
Display(s) BenQ BL2420PT
Case Cooler Master Silencio S400
Audio Device(s) Topping D10 + AIWA NSX-V70
Power Supply Chieftec A90 550W (GDP-550C)
Mouse Steel Series Rival 100
Keyboard Hama SL 570
Software Windows 10 Enterprise
Funnily enough, I just tested that before work last night. :) The 120 W PL2 is a bit too steep (it still clocks down after a few seconds even with a 40 s Tau), but with the 80 W PL1, it sits comfortably in the low-to-mid 80s while holding a stable 3 GHz all-core. It isn't much (about Ryzen 2700X levels of performance), but 1. it's awesome to know that running an 11700 above stock PL1 is possible even in an SFF system, 2. I'm not going to run anything that needs 16 threads at 100% usage all the time, so I guess I'm fine, and lastly 3. the cooler gets reasonably hot during these tests. With the Ryzen 3600 and its stock 88 W PPT, the CPU got very hot, but the cooler stayed cold to the touch. That confirms my previous assumption: Ryzens have terrible heat dissipation, despite the efficiency of 7 nm chips.
If that's the case, 80 watts is pretty much all you can reasonably achieve. You may want PL2 set to 85 watts and Tau to something like 8 seconds.
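If you ever want to play with those limits from software instead of the BIOS: on Linux, assuming the intel_rapl powercap driver is loaded, the limits are exposed under sysfs (a minimal sketch below - run as root, paths can vary by distro, and limits programmed by the board firmware may still clamp whatever you write; on Windows you'd use the BIOS or Intel XTU instead).

```python
# Sketch: set PL1/PL2/Tau through the Linux powercap (intel_rapl) sysfs interface.
# Assumes /sys/class/powercap/intel-rapl:0 exists (package 0 power domain).
from pathlib import Path

RAPL = Path("/sys/class/powercap/intel-rapl:0")

def set_limits(pl1_w: float, pl2_w: float, tau_s: float) -> None:
    # constraint_0 is the long-term limit (PL1), constraint_1 the short-term (PL2);
    # values are written in microwatts / microseconds.
    (RAPL / "constraint_0_power_limit_uw").write_text(str(int(pl1_w * 1_000_000)))
    (RAPL / "constraint_0_time_window_us").write_text(str(int(tau_s * 1_000_000)))
    (RAPL / "constraint_1_power_limit_uw").write_text(str(int(pl2_w * 1_000_000)))

if __name__ == "__main__":
    set_limits(pl1_w=80, pl2_w=85, tau_s=8)  # the values suggested above
    for name in ("constraint_0_power_limit_uw",
                 "constraint_0_time_window_us",
                 "constraint_1_power_limit_uw"):
        print(name, (RAPL / name).read_text().strip())
```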


Yes, the i3 became obsolete as games started using more threads, but that's why I said: it was the better gamer at that time. ;) I remember seeing around 20% usage on both the 8150 and the HD 7970 I had it paired with in Assassin's Creed 3, and the game barely ran at 30 FPS. It was an extreme case, but still: games didn't need 8 cores back then, and the single-core performance on FX was just plain terrible.
Oh, I get it, but when you don't have much cash, you want your stuff to last long. And that was a strong point of FX. Sure, it may have always been a 45 fps chip, but if it can keep doing that for 6 years instead of 3, that's a win. I never played Assassin's Creed 3, but the FX was usually capable of 40-50 fps for me. The game that really killed the FX for me was Far Cry 5. The CPU clearly became a limiting factor there, and that's mostly because Ubisoft can't write a game properly. Anyway, I thought it was time to upgrade, and once I did I never played Far Cry 5 again. From what I have observed, Far Cry 5 has a terrible engine that even really good hardware struggles with. The main problem is that the game itself is very boring, so I completed the story of Far Cry 1 again. And nowadays Assassin's Creed is again a complete shitshow in terms of CPU optimization. But at this point I just don't understand why people even care about that franchise. AC1 and AC2 were perhaps cool, but over time the AC series just lost the plot and became a game about anything but Ezio.

A surprising thing is that the FX 6300 struggled in Doom Eternal. Many people say it's a wonderfully optimized game, and yet it was really heavy on the CPU. Doom 4 was much easier to run CPU-wise, while Doom Eternal on the FX meant 30-40 fps. And the funny thing is that Vulkan was supposed to make weak hardware run Doom better, but for me it usually meant a loss of around 7 fps on average.

The FX also struggled in Victoria II. By the year 1920, it would inevitably be at 5-10 fps. Too bad that even the upgrade to the i5 10400F meant almost nothing, as that game still doesn't run well. A fun thing is that this particular game needs only modest GPU power - probably even an ATi X600 would run it maxed out at 4K - but on the CPU it's absolutely brutal.
 
Joined
May 2, 2017
Messages
7,762 (3.04/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
Despite that, it's still a custom loop, and it's still likely on par with bigger air coolers.
Oh, sure. It's perfectly capable of dissipating the full ~400W heat load of my CPU+GPU at reasonable noise levels. It just happens to have a relatively poor CPU block, which means that steady-state CPU temperatures are probably 10+ degrees higher than with a better block. An apt illustration of this is that adding my 275W GPU into the mix doesn't affect CPU thermals much, so the limitation is clearly in the CPU block and not the rest of the system.
And tell me how long those laptops last. I doubt they'll still be alive after a decade. And it's not like this is unknown; we all remember the nVidia fiasco with the 8000-series GPUs cooking themselves to death. Many GTX 480s are dead. Many R9 290Xs are dead. And take any AMD monstrosity like the Fury X: even ignoring water cooler failures, the core itself is cooked to death on most cards.
My old Thinkpad X201 lasted a decade before I sold it on, and routinely ran the CPU very hot (despite being repasted twice through its lifetime). It's true that many laptops die early, and many do die due to insufficient cooling, but it's very rarely the CPU itself that fails in these cases. It might be the PCB itself takes damage from repeated heating/cooling cycles, or the solder joints below the CPU, RAM, or anything else, or peripheral components (charging circuitry is common, as are VRM failures and internal display circuitry failures). I don't think I've ever come across a laptop with a verifiable dead CPU - though of course it is a bit difficult to tell. But CPUs are extremely robust, and are closely monitored for thermals. Bad laptop designs tend to cook everything else than the CPU by not ensuring sufficient internal airflow and exhaust of hot air, which kills other things, but not the CPU itself.
Well, I have read about some dude (at OCN) trying to induce electromigration on a Sandy Bridge i7. He ramped the voltage up to 1.7V and kept the CPU cool, and after only 15 minutes it needed more voltage to be stable. At more sane voltages, he needed a few hours to make it need more voltage. Now translate that to 8 years of computer usage. You would want a CPU to be functional for at least 15 years, and most people want it working for 8 years or so; electromigration accelerated at such rates isn't acceptable. And if a Ryzen needed only that much to electromigrate, think about Ryzens running stock with stock coolers. They usually sit at 85°C under load and still get voltages in the 1.2-1.45 volt range. That's very close to Buildzoid's test, and only 60 hours to damage it like that is really not good, considering a Ryzen chip likely doesn't test its own stability and ask for more volts from the factory than what AMD set it to have.
Electromigration and clock degradation varies massively between process nodes and architectures, so those aren't comparable. Also, you clearly didn't read what I said: Buildzoid ran his chip way above stock thermal limits, at fixed voltages and currents, all of which were far above stock behaviour. Here's the video if you want more detail btw. But in short, he ran the CPU at 105-112°C (depending on the time of day and how hot the room was) (also, he tried running it at 1.52V, but it shut down hard due to hitting 115°C, which is apparently the hardcoded silicon thermal shutdown limit). According to him, AMD tests its chips at slightly less idiotic settings than this for hundreds of hours to ensure they don't degrade under stock conditions. And the difference in electromigration at his ~110°C 133A 1.444V (get, 1.5V set) and stock behaviour (throttling at 95°C IIRC, voltages reading similarly high but actually being bucked lower by the CPU) is very significant. He goes into this himself as well. His results, while of course a sample size of one, indicate that these CPUs if run at stock, even with terrible cooling, will never degrade.
It is how Intel defines TDP: "Thermal Design Power (TDP) represents the average power, in watts, the processor dissipates when operating at Base Frequency with all cores active under an Intel-defined, high-complexity workload." In other words, the long-term power limit, which is PL1, and it is always set to match the advertised TDP.
Close, but not quite. PL1 is recommended to be set equal to TDP, and you seem to be missing that "power [...] the processor dissipates" is something else than "power the processor consumes". The difference is small, but it's nonetheless meaningful. TDP has never been directly related to power draw. It's been closely aligned, but that relation has always been variable and somewhat incidental.
Sure, but prebuilts had no problem dealing with 95-watt TDPs in the past. Let's not forget the i7 2600K, the Core 2 Quads, or any AMD Phenom. A 65-watt TDP is just choking the chips for no good reason.
They dealt with them, sure, but OEMs have had a clear desire to build smaller, more affordable and space-efficient business desktops - as that's their bread and butter - and have thus pushed for lower TDPs. Also, K-SKUs like the 2600K have almost never been used in OEM systems, outside of a few gaming models. One of the major developments when Intel moved to the Core architecture was the lowering of mainstream TDPs, which in turn allowed for the proliferation of SFF and uSFF business desktops, AIOs, and the like. Most of these use 65W no-letter CPUs, while the smallest use T-SKU 35W CPUs. 95W isn't seen in these spaces.
I don't think that they bin them; I haven't heard of that at all. It would be rather stupid of them to separately release these as faster models with lower voltages, as it would mean more silicon unable to match the Intel spec for the SKU.
They do. You know how many SKUs Intel makes for each generation of chips, right? Binning is how they differentiate between these. And T SKUs are always taken from bins that perform well at low voltages. K SKUs are taken from bins that clock high at higher voltages. Sometimes these bins are similar, if not interchangeable. Sometimes they aren't.
I have, so stop saying that nonsense. I may not agree with it, but that doesn't mean I don't read it. Anyway, TDP was once a decent metric, no need to shit on it. How hard could it possibly be for a chip maker to calculate amps × volts for each chip at its maximum theoretical load? It's not hard at all for them, but it is for us, as we usually aren't told a chip's official voltage or how many amps it can pull. TDP only becomes a load of crock when companies start to obfuscate what it actually is and feed the public bullshit. The Pentium 3 never had a problem with an incorrect TDP being specified: the 1.4GHz model was rated at 32.2 watts. The Pentium 4 2.8GHz was rated at 68.4 watts. That was what you could measure with the CPU loaded if you measured the CPU power rail. Just like they could back then, they could still do the same with all the power limits of modern chips.
Well, if you did read the thread, then you're just adamant in maintaining a belief in a reality that has never existed. I would really recommend you take a step back and try to consider the larger context. Nobody has 'shit on' TDP as a metric, we are simply discussing how it's quite problematic as boost becomes more aggressive and Intel fails to enforce their specifications in the DIY market, leading to extremely wide performance deltas for seemingly identical products.

Nobody has said it would be difficult to calculate a specific TDP for each chip, but I've been trying to say for ... what, three pages of posts now, that this is not the purpose or function of TDP. TDP is a) a thermal dissipation specification, divided into classes, for which OEMs and cooler makers design cooling systems, and b) a marketing tier system vaguely related to power draw. You're arguing for TDP to not actually be about cooling and thermals, but rather about power draw. Which ... why would we then call it TDP? Thermal design power? Unless that power (in watts) specifies what a cooling system must be able to dissipate, that name becomes nonsense.

As for why we can't go back to the Pentium 2/3/4 era ... well, those were fixed-clock CPUs, they had no clock scaling whatsoever, no power savings at idle to speak of, and they all had very low power consumptions. The difference in cooling needs between a 35.2W and a 64.1W CPU are tiny compared to the difference between contemporarily relevant power draws like 65W vs. 225W. So again, if you want to go back to that, you also need to accept going back to the other drawbacks of the times - such as limited motherboard compatibility (no more just picking a suitable motherboard with the correct socket, you now need to explicitly check that the CPU is listed as supported!), no boost clocks (= significant drops in system responsiveness), etc. Oh, and that completely ignores the fact that it would piss off OEMs to no end and pretty much kill Intel's business relations. Which means they would never, ever do that.
For me it would be 4GHz at 105 watts, which is exactly what the i5 10400F pulls. And I don't care about OEMs, as in my country they are genuinely rare and practically don't exist. OEMs are an American-only concept that doesn't apply to the rest of the planet.
Okay, so the 10400F would be rated at that. But then the 10600 (non-K) would either be specced the same (as they are the same bin, most likely), or would need to have its own TDP tier. And when each CPU has its own TDP, the metric becomes meaningless.

To be clear: what you're asking for is clearly defined power draw metrics. This is not thermal design power. I agree that accurate power draw metrics would be great to have on the spec sheet, but please stop mixing up your terms.

Also, sayin "OEMs are an American concept" is ludicrous. Dell, HP and Lenovo sell the vast majority of desktop PCs in the world, and they sell them to businesses, governments and educational institutions across the world. Two of these three might be American companies, but that is utterly irrelevant - they operate globally, and in sum likely sell far more outside of the US than in the US - the US is just ~330M people, after all. Are you actually saying that major companies in your country buy their computers from small local manufacturers, or build them themselves? That is very hard to believe, as small manufacturers are quite unlikely to have the support systems major companies require. And major companies definitely don't build DIY systems.
I would be if I expected it to be used with an aluminum sunflower cooler and if all of us spoke legalese every day, but I'm not. If the chip can safely achieve that and do no harm to the board, why on Earth wouldn't I want that "boost"? For nearly a decade, boost has effectively taken the place of base speed, as a CPU either runs at idle speed or at maximum speed, which is boost. CPUs rarely run at base clock, and most users never see it unless they disable boost in the BIOS.
Yes, that's how DIY PCs work. They also often ignore PL1 by setting PL2 as infinite, or set a higher PL1 than stock. But remember, you're also asking for strict adherence to TDP, and you want TDP to be equal to PL1. Something has to give here. Please make up your mind - all of these cannot logically be true at the same time.
Why not look at the system in my profile, then? My cooler is clearly a Scythe Choten with the stock fan, and the case is a Silencio S400. It has 3 fans in it: one top exhaust and two front intakes, usually running at 600-800 rpm. And sure, I am biased; of course "under manufacturer spec" is the bare minimum spec for cooling. I said that it's acceptable only when the CPU is running Prime95 and the GPU is running FurMark at the same time. You got those 80C at nowhere near such a high load, not even close to a worst-case scenario.
Sorry, but my 80°C was while running Prime95 - as a response to your example. Which is also why those temperatures don't worry me whatsoever. Heck, 80°C in real world use wouldn't really be worrying either - it's well below any throttle point, and nowhere near harmful to anything. I would like it to be cooler, but I prefer silence. As for the rest of your setup, that wasn't relevant, the point was: you're setting arbitrary standards, presenting them in an oversimplified way, and using that as an argument. That is a really, really bad way of arguing.
You clearly said that it is very relevant for them.
Relevant to perhaps a couple hundred users worldwide? Sure. That is not reason to use that as a generally valid benchmark - quite the opposite. You might as well argue that the needs of rally drivers are the best way to set safety standards and equipment levels for cars. Specialist needs are specialist needs, even if they use (derivatives of) general purpose equipment.
As if such a thing existed. HEDT is high-end desktop, not a workstation. And why would I not use my plebeian chips for such loads? They are perfectly capable of it and are designed to be general purpose. General purpose means that if I want, I use it only for playing MP3s, and if I want, I use it to assemble molecules. I see nothing stupid or unreasonable about that. It might not be the fastest, but that doesn't mean it gets to be unstable or catch fire.
... Xeon-W is for workstations, as are Ryzen Pro and Threadripper Pro. These are chips tested and validated for such workloads. Sure, you can use any chip for such a workload, but you then also need to be cognizant that this is not a use that it's tested and validated for. And this is fine! It's likely to work perfectly. But again, you can't throw together any combination of retail consumer parts, subject them to a professional workload, and expect it to perform above spec. Which is essentially what you're arguing here.
I don't run it long, only to get an idea of what my thermals are.
... if you're not reaching steady-state thermals, what's the point? Also, how are you getting "an idea of what your thermals are" from running a power virus that generates more heat than literally any common GPU workload out there? That would give a very unrepresentative view of your thermals. If you're into overblown cooling for its own sake, and pushing thermals as low as you can within your chosen parameters, then that's what you like, but stop acting like that's suitable as a generally applicable standard for anything. And again, Furmark has been demonstrated to kill GPUs at stock due to its extreme heat load and how it intentionally aims to break thermal limits. Recommending it is reckless at best.
Actually boost is also technically running above manufacturer spec and is never accounted for in TDP calculations.
... I know. I have said so quite a few times. However, there are always safety margins built into the specification - any Intel chip when limited to TDP in power draw will boost to some extent (unless you've gotten the absolutely worst possible chip in that bin). Thus, disabling boost will inevitably drop voltages and power draws. Disabling boost does not mean strictly adhering to TDP (as that would require individual "TDPs" - in your meaning of "power draw specs" - not for each SKU, but for each physical chip, as they inevitably differ from each other).
The 870K was launched in 2015, which is what the system was assembled for, and it technically had a refreshed architecture, so it wasn't the same as the older part; therefore it's 2015 tech.
... the chip you were initially talking about still launched in October 2012.
And you think Intel doesn't use a "power virus" at the factory to determine heat output? The last time I read about it, Intel used in-house tools and specific thermal simulators, or at least specialized software loads, to do exactly that. They do exactly what Prime95 does, but better and even more taxing on the chip, and in the final settings they add a safety margin to account for less-than-perfect VRMs, vDroop, hot climates and so on. If what you're saying were true, then Intel should fire all the people who ensure the stability and predictable heat output of their chips, as they would apparently be useless.
How manufacturers torture test their components and how end users use their components are not the same, nor should they be. Manufacturers need to test unrealistic worst-case scenarios. That doesn't make unrealistic worst-case scenarios good tests for end users, as what you are testing for is not the same. And no, Intel doesn't use power viruses to set TDP. Many Intel CPUs throttle under power virus loads if set to stock behaviour.
In my situation I only had the choice of buying either an i3 4130 or an FX 6300, and the FX was better overall and lasted longer. The FX chips were great value. And the FX 8320 was selling for slightly less than the i5 4440, so the FX was maybe the better value deal too. FX didn't perform terribly, they just weren't as fast as Intel in single-threaded loads. That doesn't mean they weren't decently fast. I can tell that you never had an FX and have no idea what they were actually like.
Decently fast, sure, for their time and disregarding power draw. They did decently well in MT loads (though by no means close to their nominal core count advantage), consumed dramatically more power even at the same TDP when compared to Intel (which just goes to show how TDP has never been a metric for power draw), lagged behind significantly in ST workloads, and kind-of-sort-of caught up when overclocked, but at fully 3x the power consumption. They were fine for their time, if you didn't mind buying hefty cooling. But they aged very poorly, and even an i5-6600 at 65W trounces the FX-8320E OC'd to 4.8GHz in the vast majority of tests. They might have seen an uptick in relative performance as more applications have become more multithreaded, but by that time (i.e. 2018+) they were already so far behind affordable current-generation offerings there was no real point. Of course a CPU you already own is infinitely cheaper than buying a new one, so if it performed adequately that is obviously great - I'm a big fan of making hardware last as long as possible (hence my current soon-to-be 6-year-old GPU, and me keeping my Core2Quad system from 2009 to 2017). But those old FX CPUs never aged well.
 
Joined
May 8, 2021
Messages
1,978 (1.82/day)
Location
Lithuania
System Name Shizuka
Processor Intel Core i5 10400F
Motherboard Gigabyte B460M Aorus Pro
Cooling Scythe Choten
Memory 2x8GB G.Skill Aegis 2666 MHz
Video Card(s) PowerColor Red Dragon V2 RX 580 8GB ~100 watts in Wattman
Storage 512GB WD Blue + 256GB WD Green + 4TB Toshiba X300
Display(s) BenQ BL2420PT
Case Cooler Master Silencio S400
Audio Device(s) Topping D10 + AIWA NSX-V70
Power Supply Chieftec A90 550W (GDP-550C)
Mouse Steel Series Rival 100
Keyboard Hama SL 570
Software Windows 10 Enterprise
In fact, an overclocked Pentium G3258 was still a great budget chip, though it did become obsolete pretty quickly, so you're pretty much right there.
Because people buying it only wanted to overclock it. Nobody really thought it was going to last long. The notable thing about it is that you could reach 5 GHz+ on it with normal cooling, and that's why it sold so well. If Intel completely lost their marbles and released a Comet Lake Celeron with a base clock of 5 GHz that could be overclocked to 6.5 GHz on an air cooler, would you buy it? It would likely sell quite well.

My old Thinkpad X201 lasted a decade before I sold it on, and routinely ran the CPU very hot (despite being repasted twice through its lifetime). It's true that many laptops die early, and many do die due to insufficient cooling, but it's very rarely the CPU itself that fails in these cases. It might be the PCB itself takes damage from repeated heating/cooling cycles, or the solder joints below the CPU, RAM, or anything else, or peripheral components (charging circuitry is common, as are VRM failures and internal display circuitry failures). I don't think I've ever come across a laptop with a verifiable dead CPU - though of course it is a bit difficult to tell. But CPUs are extremely robust, and are closely monitored for thermals. Bad laptop designs tend to cook everything else than the CPU by not ensuring sufficient internal airflow and exhaust of hot air, which kills other things, but not the CPU itself.
That's almost as bad as the CPU itself dying. And let's be honest, in 2021 it's nearly impossible to buy a properly engineered laptop that won't end up being a waste of money 4 years later. They have become disposable ovens; anyone who can should avoid them and just get a desktop.

And there's a difference between running hot while consuming 35 watts and running hot while consuming 200+ watts. The desktop chip will suffer far more and is much more likely to experience a failure.


Electromigration and clock degradation varies massively between process nodes and architectures, so those aren't comparable. Also, you clearly didn't read what I said: Buildzoid ran his chip way above stock thermal limits, at fixed voltages and currents, all of which were far above stock behaviour. Here's the video if you want more detail btw. But in short, he ran the CPU at 105-112°C (depending on the time of day and how hot the room was) (also, he tried running it at 1.52V, but it shut down hard due to hitting 115°C, which is apparently the hardcoded silicon thermal shutdown limit). According to him, AMD tests its chips at slightly less idiotic settings than this for hundreds of hours to ensure they don't degrade under stock conditions. And the difference in electromigration at his ~110°C 133A 1.444V (get, 1.5V set) and stock behaviour (throttling at 95°C IIRC, voltages reading similarly high but actually being bucked lower by the CPU) is very significant. He goes into this himself as well. His results, while of course a sample size of one, indicate that these CPUs if run at stock, even with terrible cooling, will never degrade.
There's no such thing as "will never degrade". It's just a question of how long it takes to degrade to the point of instability. Northwood chips only needed a few weeks. Overclocked Sandy Bridge chips also degrade fast. If you want Skylake through Comet Lake to last long, you shouldn't use more than 1.4 volts and shouldn't exceed 80C under any load.

Close, but not quite. PL1 is recommended to be set equal to TDP, and you seem to be missing that "power [...] the processor dissipates" is something else than "power the processor consumes". The difference is small, but it's nonetheless meaningful. TDP has never been directly related to power draw. It's been closely aligned, but that relation has always been variable and somewhat incidental.
A CPU converts almost all electrical power into heat, so power consumption is pretty much the TDP.


They dealt with them, sure, but OEMs have had a clear desire to build smaller, more affordable and space-efficient business desktops - as that's their bread and butter - and have thus pushed for lower TDPs. Also, K-SKUs like the 2600K have almost never been used in OEM systems, outside of a few gaming models. One of the major developments when Intel moved to the Core architecture was the lowering of mainstream TDPs, which in turn allowed for the proliferation of SFF and uSFF business desktops, AIOs, and the like. Most of these use 65W no-letter CPUs, while the smallest use T-SKU 35W CPUs. 95W isn't seen in these spaces.
They may as well use laptop chips then. And no, the i7 2600K was used in quite a few "boring" desktops. My dad's work computer is literally a decade-old i7 2700K machine with a Radeon 7770 - a prebuilt desktop that wasn't obnoxiously expensive. The catch is that it's a local prebuilt, not a "legit" prebuilt like a Dell; as I mentioned, those legit prebuilts are rare here because they make zero sense to buy - unless you buy one used.


They do. You know how many SKUs Intel makes for each generation of chips, right? Binning is how they differentiate between these. And T SKUs are always taken from bins that perform well at low voltages. K SKUs are taken from bins that clock high at higher voltages. Sometimes these bins are similar, if not interchangeable. Sometimes they aren't.
I'm pretty sure that T chips are just non-T chips that can't boost as high and so remain within Intel's preferred voltage target. The more efficient chips actually end up in laptops.


Nobody has said it would be difficult to calculate a specific TDP for each chip, but I've been trying to say for ... what, three pages of posts now, that this is not the purpose or function of TDP. TDP is a) a thermal dissipation specification, divided into classes, for which OEMs and cooler makers design cooling systems, and b) a marketing tier system vaguely related to power draw. You're arguing for TDP to not actually be about cooling and thermals, but rather about power draw. Which ... why would we then call it TDP? Thermal design power? Unless that power (in watts) specifies what a cooling system must be able to dissipate, that name becomes nonsense.
Because all electrical power is converted to heat, with only a tiny fraction going into anything else. And if you claim it's hard to calculate a specific TDP, it's not. Intel should then provide several TDPs: an average TDP, an all-out TDP, and an all-out single-core TDP.

As for why we can't go back to the Pentium 2/3/4 era ... well, those were fixed-clock CPUs, they had no clock scaling whatsoever, no power savings at idle to speak of, and they all had very low power consumptions. The difference in cooling needs between a 35.2W and a 64.1W CPU are tiny compared to the difference between contemporarily relevant power draws like 65W vs. 225W. So again, if you want to go back to that, you also need to accept going back to the other drawbacks of the times - such as limited motherboard compatibility (no more just picking a suitable motherboard with the correct socket, you now need to explicitly check that the CPU is listed as supported!), no boost clocks (= significant drops in system responsiveness), etc. Oh, and that completely ignores the fact that it would piss off OEMs to no end and pretty much kill Intel's business relations. Which means they would never, ever do that.
You don't get it. A chip at maximum load more or less becomes a fixed-clock chip anyway, and that's where Intel could measure TDP. It's really easy. Idle is irrelevant as it doesn't affect TDP, and boost only lasts so long before Tau expires. If they got rid of Tau and PL2, left turbo in, and measured TDP at the standard PL1, there wouldn't be any confusion. It's nowhere near as dramatic as you make it out to be.
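For reference, this is roughly how I understand the PL1/PL2/Tau budget to work - a simplified sketch of the moving-average power budget, not Intel's actual firmware, and the PL2/Tau numbers below are just illustrative:

[CODE=python]
# Simplified sketch of the PL1/PL2/Tau turbo budget (my understanding, not
# Intel's implementation). A running average of package power is tracked;
# the CPU may draw up to PL2 while that average is below PL1, and is
# clamped to PL1 once the average catches up. Tau sets how fast it catches up.

def allowed_power(requested, pl1=65.0, pl2=134.0, tau=28.0, dt=1.0):
    """requested: package power (W) the load would like, one value per dt seconds."""
    avg = 0.0
    alpha = dt / tau                          # moving-average weight per step
    drawn = []
    for want in requested:
        limit = pl2 if avg < pl1 else pl1     # budget left -> PL2, otherwise PL1
        got = min(want, limit)
        avg += alpha * (got - avg)            # update the running average
        drawn.append(got)
    return drawn

# A load that wants 134 W forever gets PL2 for a short burst, then settles at PL1.
print(allowed_power([134.0] * 60))
[/CODE]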


Okay, so the 10400F would be rated at that. But then the 10600 (non-K) would either be specced the same (as they are the same bin, most likely), or would need to have its own TDP tier. And when each CPU has its own TDP, the metric becomes meaningless.
No it doesn't, because each CPU consumes a different amount of power, just like they always did. You can't keep the same TDP for an i5 and an i7 and expect the same clock speeds. Cooling would then be a choice you make while facing reality, instead of trying to warp it through TDP.


To be clear: what you're asking for is clearly defined power draw metrics. This is not thermal design power. I agree that accurate power draw metrics would be great to have on the spec sheet, but please stop mixing up your terms.
It's the same anyway.


Also, saying "OEMs are an American concept" is ludicrous. Dell, HP and Lenovo sell the vast majority of desktop PCs in the world, and they sell them to businesses, governments and educational institutions across the world. Two of these three might be American companies, but that is utterly irrelevant - they operate globally, and in sum likely sell far more outside of the US than in the US - the US is just ~330M people, after all. Are you actually saying that major companies in your country buy their computers from small local manufacturers, or build them themselves? That is very hard to believe, as small manufacturers are quite unlikely to have the support systems major companies require. And major companies definitely don't build DIY systems.
It is exactly what happens. My whole university is full of local Phenom prebuilts, and every school I have been to used some kind of local prebuilt. Nobody gives a damn about the support you're speaking of - even where it exists it's nearly useless, or it isn't offered in the local language, which in most cases it isn't. Dells, Lenovos and whatever else also carry a huge price premium and worse configs. They are simply irrelevant. That's my country, which is still considered a first world country - now imagine what happens in the third world. They would laugh at your support argument. I'm telling you, outside of a few rich countries with actual support, almost nobody cares about OEM prebuilts, as they simply make no sense, and before that you may not even be able to get the model you want.



Yes, that's how DIY PCs work. They also often ignore PL1 by setting PL2 as infinite, or set a higher PL1 than stock. But remember, you're also asking for strict adherence to TDP, and you want TDP to be equal to PL1. Something has to give here. Please make up your mind - all of these cannot logically be true at the same time.
Raise TDP, kill PL2 setting, raise PL1 default setting to 95 watts. Done.


Sorry, but my 80°C was while running Prime95 - as a response to your example. Which is also why those temperatures don't worry me whatsoever. Heck, 80°C in real world use wouldn't really be worrying either - it's well below any throttle point, and nowhere near harmful to anything. I would like it to be cooler, but I prefer silence. As for the rest of your setup, that wasn't relevant, the point was: you're setting arbitrary standards, presenting them in an oversimplified way, and using that as an argument. That is a really, really bad way of arguing.
Now load the GPU as well and see if your system can actually deal with the heat.


Relevant to perhaps a couple hundred users worldwide? Sure.
That would be just one bigger lab. Dude, face reality: many people run their CPUs fully loaded. Tell me how many people transcode video, run BOINC, run Folding or do something else demanding. That would be in the millions, not in the hundreds.

... Xeon-W is for workstations, as is Ryzen Pro and Threadripper Pro. These are chips tested and validated for such workloads. Sure, you can use any chip for such a workload, but you then also need to be cognizant that this is not a use that it's tested and validated for. And this is fine! It's likely to work perfectly. But again, you can't throw together any combination of retail consumer parts, subject them to a professional workload, and expect it to perform above spec. Which is essentially what you're arguing here.
Do you honestly think that you need those chips for work? Ever heard of being ripped off?


.... if you're not reaching steady-state thermals, what's the point? Also, how are you getting "an idea what your thermals are" from running a power virus that generates more heat than literally any common GPU workload out there?
Because at 15-20 minutes it reaches the highest temperature that it will ever achieve and any further testing is pointless.


... I know. I have said so quite a few times. However, there are always safety margins built into the specification - any Intel chip when limited to TDP in power draw will boost to some extent (unless you've gotten the absolute worst possible chip in that bin). Thus, disabling boost will inevitably drop voltages and power draws. Disabling boost does not mean strictly adhering to TDP, as that would require individual "TDPs" (in your meaning of "power draw specs") not for each SKU, but for each physical chip, as they inevitably differ from each other.
Disabling boost means exactly running at TDP or below. Intel:
"Thermal Design Power (TDP) represents the average power, in watts, the processor dissipates when operating at Base Frequency with all cores active under an Intel-defined, high-complexity workload"

... the chip you were initially talking about still launched in October 2012.
You can't read - the 870K launched in 2015. The 760K is a replacement for it due to unexpected technicalities.


How manufacturers torture test their components and how end users use their components are not the same, nor should they be. Manufacturers need to test unrealistic worst-case scenarios. That doesn't make unrealistic worst-case scenarios good tests for end users, as what you are testing for is not the same. And no, Intel doesn't use power viruses to set TDP. Many Intel CPUs throttle under power virus loads if set to stock behaviour.
You'd better show me that "throttling". It's impossible. I tested my own i5 10400F under Prime95 without turbo and it was consuming around 40 watts. That's one of the heaviest loads imaginable and it's nowhere close to the 65 watt TDP. Only the i9 10900 would be closer to throttling, but I don't think it would ever actually reach that point.
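If anyone wants to check their own chip the same way, here's a minimal sketch that averages package power via the Linux intel_rapl powercap interface (assuming that driver is loaded and the file is readable; on Windows, monitoring tools report the same package power counter):

[CODE=python]
# Minimal sketch: average CPU package power over an interval, read from the
# Linux powercap interface. Assumes /sys/class/powercap/intel-rapl:0 exists
# (intel_rapl driver loaded) and the script may read it (often needs root).
import time

RAPL = "/sys/class/powercap/intel-rapl:0"

def read_uj():
    with open(RAPL + "/energy_uj") as f:
        return int(f.read())

def avg_package_watts(seconds=30):
    with open(RAPL + "/max_energy_range_uj") as f:
        wrap = int(f.read())                  # counter wraps around at this value
    start = read_uj()
    time.sleep(seconds)
    delta = (read_uj() - start) % wrap        # handle a wrap during the interval
    return delta / 1e6 / seconds              # microjoules -> watts

if __name__ == "__main__":
    print(f"Average package power: {avg_package_watts(30):.1f} W")
[/CODE]

Run a Prime95-style load in another terminal while this samples, and you get the same kind of number I quoted above.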


Decently fast, sure, for their time and disregarding power draw. They did decently well in MT loads (though by no means close to their nominal core count advantage), consumed dramatically more power even at the same TDP when compared to Intel (which just goes to show how TDP has never been a metric for power draw)
Lol no, it also thermally dissipated all that heat. You missed out on it, but the "95 watt" FX 6300 came with an aluminum cooler with a 70mm fan spinning at up to 6500 rpm. Even then the CPU could rather easily overheat. A Hyper 103 was enough for the FX 6300 at stock settings, meanwhile the FX 8320 likely needed a 212 Evo for stock settings, which is designed to dissipate close to 200 watts anyway.

It's only 2 times, if you actually read what you posted.

They were fine for their time, if you didn't mind buying hefty cooling. But they aged very poorly, and even an i5-6600 at 65W trounces the FX-8320E OC'd to 4.8GHz in the vast majority of tests.
Hardly - they are a closer match, with slightly better overall results for the i5 - and the i5 6600 wasn't launched in 2014. Also, the i5 cost nearly twice what the FX 8320 did, so yeah, it's a totally fair comparison.

They might have seen an uptick in relative performance as more applications have become more multithreaded, but by that time (i.e. 2018+) they were already so far behind affordable current-generation offerings there was no real point. Of course a CPU you already own is infinitely cheaper than buying a new one, so if it performed adequately that is obviously great - I'm a big fan of making hardware last as long as possible (hence my current soon-to-be 6-year-old GPU, and me keeping my Core2Quad system from 2009 to 2017). But those old FX CPUs never aged well.
You are just shitting on them way more than they actually sucked. It's your comment that didn't age particularly well.

And now compare FX chips with their price equivalents and era equivalents:
FX 8320 with i3 3250 or i5 3350P
FX 6300 with i3 3225
FX 4300 with i3 3210

And that's the closest video I found to what I wanted to see, and it still uses an Intel chip that was more expensive than the FX 8320 (169 dollars vs 184 dollars):

Did FX actually suck? Not really. And the wattage of FX is similar to what older chips consumed, so FX wasn't exceptionally bad in that aspect either. So the 8-core FX is closer to a 4-core i5, and the 6-core FX is clearly better than an i3. And 9 years later the FX 8320 is still delivering a playable experience in games:

And you can overclock it, so it performs better still. You can easily go from the stock 3.5GHz to 4.4GHz - about 26% more clock - and get around 20% more performance out of it. Now tell me, how exactly did FX suck as a long-term budget chip, and how was the i5 3350P actually better?
 

Keullo-e

S.T.A.R.S.
Joined
Dec 16, 2012
Messages
11,041 (2.66/day)
Location
Finland
System Name 4K-gaming
Processor AMD Ryzen 7 5800X up to 5.05GHz
Motherboard Gigabyte B550M Aorus Elite
Cooling Custom loop (CPU+GPU, 240 & 120 rads)
Memory 32GB Kingston HyperX Fury @ DDR4-3466
Video Card(s) PowerColor RX 6700 XT Fighter OC/UV
Storage ~4TB SSD + 6TB HDD
Display(s) Acer XV273K + Lenovo L32p-30
Case Corsair 4000D Airflow White
Audio Device(s) Asus TUF H3 Wireless
Power Supply EVGA Supernova G2 750W
Mouse Logitech MX518
Keyboard Roccat Vulcan 121 AIMO
VR HMD Oculus Rift CV1
Software Windows 11 Pro
Benchmark Scores It runs Crysis remastered at 4K
Because people buying it only wanted to overclock it. Nobody really thought it was going to last long. The notable thing about it is that you could reach 5 GHz+ on it with normal cooling, and that's why it sold so well. If Intel completely lost their marbles and released a Comet Lake Celeron with a base clock of 5 GHz that could be overclocked to 6.5 GHz on an air cooler, would you buy it? It would likely sell quite well.
Nah, they didn't hit 5GHz as it's a rare sight to see even a 4790K hit that frequency. But I get your point.

Weird that Intel still sells Celerons as 2c/2t parts; even for desktop usage that's just insufficient.
 
Joined
May 8, 2021
Messages
1,978 (1.82/day)
Location
Lithuania
System Name Shizuka
Processor Intel Core i5 10400F
Motherboard Gigabyte B460M Aorus Pro
Cooling Scythe Choten
Memory 2x8GB G.Skill Aegis 2666 MHz
Video Card(s) PowerColor Red Dragon V2 RX 580 8GB ~100 watts in Wattman
Storage 512GB WD Blue + 256GB WD Green + 4TH Toshiba X300
Display(s) BenQ BL2420PT
Case Cooler Master Silencio S400
Audio Device(s) Topping D10 + AIWA NSX-V70
Power Supply Chieftec A90 550W (GDP-550C)
Mouse Steel Series Rival 100
Keyboard Hama SL 570
Software Windows 10 Enterprise
Nah, they didn't hit 5GHz as it's a rare sight to see even a 4790K hit that frequency. But I get your point.

I don't see why not. If Linus achieved a near-5GHz overclock and temps were in check, there's no reason not to clock it further. Unless there are some architecture peculiarities I'm not aware of, it seems it could do more than 5GHz easily.

Weird that Intel still sells Celerons as 2c/2t parts; even for desktop usage that's just insufficient.
It's fine for web browsing and office tasks.
 
Joined
May 8, 2021
Messages
1,978 (1.82/day)
Location
Lithuania
System Name Shizuka
Processor Intel Core i5 10400F
Motherboard Gigabyte B460M Aorus Pro
Cooling Scythe Choten
Memory 2x8GB G.Skill Aegis 2666 MHz
Video Card(s) PowerColor Red Dragon V2 RX 580 8GB ~100 watts in Wattman
Storage 512GB WD Blue + 256GB WD Green + 4TH Toshiba X300
Display(s) BenQ BL2420PT
Case Cooler Master Silencio S400
Audio Device(s) Topping D10 + AIWA NSX-V70
Power Supply Chieftec A90 550W (GDP-550C)
Mouse Steel Series Rival 100
Keyboard Hama SL 570
Software Windows 10 Enterprise
@Valantar
I have calmed down a bit, thought it over and forced myself to think about TDP and variable performance. I may not have liked it and somewhat ignored it, but it's actually genius. It lets you get more performance out of the same cooler and, very importantly, if you don't like the stock cooler, you can keep the same chip, change absolutely nothing in the BIOS, upgrade the cooler and get a very cool and quiet CPU, without it filling up the extra cooling capacity with a higher PL1. And many benefits of such an approach can still be enjoyed with very simple coolers. Eh, maybe it isn't that bad.

However, if Intel wants to be truly successful with such an approach, they absolutely have to step up their communication, because people think that running outside of Intel's spec is acceptable and normal, and so we get videos like this:

Linus is a dipshit reviewer who doesn't read or comprehend spec sheets, but the problem is that he communicates effectively with the tech crowd and has a lot of influence on potential buyers. If Linus and other techtubers keep doing this shit any longer, Intel will be forced to raise TDP and reduce PL2. Another problem is that people are much more likely to watch a YT video than to read Intel's spec sheet. I wonder, if some tech Karen actually got pissed off about performance, sued Intel or some other brand for "not getting the full performance" and actually won in court, what kind of aftereffects would it have? You know people often sue for ridiculous reasons in 'Murica, and sometimes they win.

Anyway, right now pretty much every techtuber expects to get nearly full turbo speed all the time, and if they get any less than that - which is still perfectly within spec - it's not going to end well. After all, techtubers have an enormous influence and can do a lot of harm or good to certain brands and their engineering decisions. If Intel does nothing about that, well, they are going to be fucked.
 
Joined
Jan 14, 2019
Messages
9,881 (5.12/day)
Location
Midlands, UK
System Name Nebulon-B Mk. 4
Processor AMD Ryzen 7 7800X3D
Motherboard MSi PRO B650M-A WiFi
Cooling be quiet! Dark Rock 4
Memory 2x 24 GB Corsair Vengeance EXPO DDR5-6000
Video Card(s) Sapphire Pulse Radeon RX 7800 XT
Storage 2 TB Corsair MP600 GS, 2 TB Corsair MP600 R2, 4 + 8 TB Seagate Barracuda 3.5"
Display(s) Dell S3422DWG, 7" Waveshare touchscreen
Case Kolink Citadel Mesh black
Power Supply Seasonic Prime GX-750
Mouse Logitech MX Master 2S
Keyboard Logitech G413 SE
Software Windows 10 Pro
Benchmark Scores Cinebench R23 single-core: 1,800, multi-core: 18,000. Superposition 1080p Extreme: 9,900.
@Valantar
I have calmed down a bit, thought it over and forced myself to think about TDP and variable performance. I may not have liked it and somewhat ignored it, but it's actually genius. It lets you get more performance out of the same cooler and, very importantly, if you don't like the stock cooler, you can keep the same chip, change absolutely nothing in the BIOS, upgrade the cooler and get a very cool and quiet CPU, without it filling up the extra cooling capacity with a higher PL1. And many benefits of such an approach can still be enjoyed with very simple coolers. Eh, maybe it isn't that bad.

However, if Intel wants to be truly successful with such an approach, they absolutely have to step up their communication, because people think that running outside of Intel's spec is acceptable and normal, and so we get videos like this:

Linus is a dipshit reviewer who doesn't read or comprehend spec sheets, but the problem is that he communicates effectively with the tech crowd and has a lot of influence on potential buyers. If Linus and other techtubers keep doing this shit any longer, Intel will be forced to raise TDP and reduce PL2. Another problem is that people are much more likely to watch a YT video than to read Intel's spec sheet. I wonder, if some tech Karen actually got pissed off about performance, sued Intel or some other brand for "not getting the full performance" and actually won in court, what kind of aftereffects would it have? You know people often sue for ridiculous reasons in 'Murica, and sometimes they win.

Anyway, right now pretty much every techtuber expects to get nearly full turbo speed all the time, and if they get any less than that - which is still perfectly within spec - it's not going to end well. After all, techtubers have an enormous influence and can do a lot of harm or good to certain brands and their engineering decisions. If Intel does nothing about that, well, they are going to be fucked.
That brings us back to one of my very first posts in this forum thread: the problem here (I think) isn't the loosely defined Intel spec. It also isn't motherboard manufacturers making different tiers of motherboards that fulfil the spec in different ways. The problem is 1. motherboard manufacturers not communicating their VRM specifications towards the public, and 2. reviewers expecting every single motherboard to be able to deliver 150+ Watts of power to the CPU, stay cool and maintain maximum boost frequencies at the same time, and then giving manufacturers sh** if they fail to do so with certain models. They of all people should acknowledge that sticking to a 65 W power limit is just as much within spec as running max boost clocks. They should also realise that nobody is going to buy the cheapest motherboard on the market without any background information and expect it to run full boost on an 11900K. Well, some people might, but we generally refer to them as retards. Hardware Unboxed made a big deal out of nothing imo (except for that one ASRock motherboard that truly failed in one of their later videos). As for Linus, he used to be good, but he's been all for the show lately. I prefer watching his weird experiment videos to be fair.

As for the configurable TDP/PL values, the more I'm playing with my new 11700, the more I'm starting to like it. I recently changed the memory controller setting from Auto to Gear 2, and package/core temps magically dropped by 10 °C with an extra 100 points in Cinebench R23. I might be able to increase PL1 even further. :)
 
Joined
May 8, 2021
Messages
1,978 (1.82/day)
Location
Lithuania
System Name Shizuka
Processor Intel Core i5 10400F
Motherboard Gigabyte B460M Aorus Pro
Cooling Scythe Choten
Memory 2x8GB G.Skill Aegis 2666 MHz
Video Card(s) PowerColor Red Dragon V2 RX 580 8GB ~100 watts in Wattman
Storage 512GB WD Blue + 256GB WD Green + 4TH Toshiba X300
Display(s) BenQ BL2420PT
Case Cooler Master Silencio S400
Audio Device(s) Topping D10 + AIWA NSX-V70
Power Supply Chieftec A90 550W (GDP-550C)
Mouse Steel Series Rival 100
Keyboard Hama SL 570
Software Windows 10 Enterprise
That brings us back to one of my very first posts in this forum thread: the problem here (I think) isn't the loosely defined Intel spec. It also isn't motherboard manufacturers making different tiers of motherboards that fulfil the spec in different ways. The problem is 1. motherboard manufacturers not communicating their VRM specifications towards the public, and 2. reviewers expecting every single motherboard to be able to deliver 150+ Watts of power to the CPU, stay cool and maintain maximum boost frequencies at the same time, and then giving manufacturers sh** if they fail to do so with certain models. They of all people should acknowledge that sticking to a 65 W power limit is just as much within spec as running max boost clocks. They should also realise that nobody is going to buy the cheapest motherboard on the market without any background information and expect it to run full boost on an 11900K. Well, some people might, but we generally refer to them as retards. Hardware Unboxed made a big deal out of nothing imo (except for that one ASRock motherboard that truly failed in one of their later videos). As for Linus, he used to be good, but he's been all for the show lately. I prefer watching his weird experiment videos to be fair.
Honestly, some people buying prebuilts will end up with an 11900K on a cheap board, and that board may even fail to deliver the expected PL1 power, the 125 watts. Some people buying a low-end board rationalize that if the chip is on the compatibility list on the motherboard manufacturer's site, then it must work correctly. I personally think that if a motherboard claims to support certain chips, then it absolutely has to be able to deliver all the power needed and not overheat even if there's almost no air blowing directly on the VRMs (i.e. when using a tower cooler). AM3+ was ruined by very shady manufacturer tactics, and some board VRMs actually caught fire (some MSI boards). Many makers claimed that their boards supported 8-core chips, yet they had no VRM cooling and sometimes only 3+1 phases. That's unacceptable, and the minimum specification should be higher for any new platform. All this shady shit going on with LGA 1200 isn't acceptable for one simple reason: if a motherboard is made to only just barely fulfill the spec, then chances are such a board won't last very long before malfunctioning. Then there are other factors like hotter climates, dust build-up and so on. IMO any LGA 1200 board should be made to handle the supported CPU's PL2 plus 20% higher power demands. If a board can only sustain 150 watts, then it should be limited to 65 watt TDP parts only (the i5 11400F's PL1 is 65 watts and its PL2 is 154 watts). Otherwise, the lesser products will soon become e-waste. And to get the expected performance, a power limit ratio of at least 1:1.5 between PL1 and PL2 should be maintained by every motherboard vendor. For the i5 11400F that would be PL1 at 65 watts and PL2 at 97.5 watts. So the complete minimum-spec board for an i5 11400F would be one that can sustain at least 117 watts continuously (for 1 or 2 hours), with passively cooled VRMs staying below 70°C in a 20°C room in some benchmark case.
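Put as a simple rule (just my own proposal above in code form, with the 11400F numbers as the example - nothing official from Intel or any board vendor):

[CODE=python]
# Sketch of the proposed minimum board spec (my proposal, not an official
# rule). PL2 is capped at 1.5x PL1, and the board has to sustain that capped
# PL2 plus a 20% margin, continuously.

def min_board_spec(pl1_watts, pl_ratio=1.5, margin=0.2):
    pl2_capped = pl1_watts * pl_ratio        # 65 W -> 97.5 W
    sustained = pl2_capped * (1 + margin)    # 97.5 W -> 117 W
    return pl2_capped, sustained

print(min_board_spec(65))   # (97.5, 117.0) for an i5 11400F-class part
[/CODE]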


As for the configurable TDP/PL values, the more I'm playing with my new 11700, the more I'm starting to like it. I recently changed the memory controller setting from Auto to Gear 2, and package/core temps magically dropped by 10 °C with an extra 100 points in Cinebench R23. I might be able to increase PL1 even further. :)
To me it's only PL1 that matters; PL2 looks pointless. Anyway, where will you post your achievement log? It seems the 11700 was good enough for you. From what you write I can conclude that it's a much more power hungry chip than the i5 10400F, as the i5 within its 65 watt limit does achieve maximum all-core boost frequencies under almost any heavy load. I ran a 30-minute Cinebench R23 loop yesterday for fun at Intel "spec" settings and it kept going at 3.8-4GHz. So as long as it's not thermally limited, the i5 at 65 watts can actually sustain all-core turbo in almost any workload. That's really nice. Your 11700 seems nowhere close to all-core turbo in a long workload. It makes me wonder if the 11700 wouldn't be slower at Intel "spec" settings than the 10400F in something like encoding.
 
Joined
Jan 14, 2019
Messages
9,881 (5.12/day)
Location
Midlands, UK
System Name Nebulon-B Mk. 4
Processor AMD Ryzen 7 7800X3D
Motherboard MSi PRO B650M-A WiFi
Cooling be quiet! Dark Rock 4
Memory 2x 24 GB Corsair Vengeance EXPO DDR5-6000
Video Card(s) Sapphire Pulse Radeon RX 7800 XT
Storage 2 TB Corsair MP600 GS, 2 TB Corsair MP600 R2, 4 + 8 TB Seagate Barracuda 3.5"
Display(s) Dell S3422DWG, 7" Waveshare touchscreen
Case Kolink Citadel Mesh black
Power Supply Seasonic Prime GX-750
Mouse Logitech MX Master 2S
Keyboard Logitech G413 SE
Software Windows 10 Pro
Benchmark Scores Cinebench R23 single-core: 1,800, multi-core: 18,000. Superposition 1080p Extreme: 9,900.
Honestly, some people buying prebuilts will end up with an 11900K on a cheap board, and that board may even fail to deliver the expected PL1 power, the 125 watts. Some people buying a low-end board rationalize that if the chip is on the compatibility list on the motherboard manufacturer's site, then it must work correctly. I personally think that if a motherboard claims to support certain chips, then it absolutely has to be able to deliver all the power needed and not overheat even if there's almost no air blowing directly on the VRMs (i.e. when using a tower cooler). AM3+ was ruined by very shady manufacturer tactics, and some board VRMs actually caught fire (some MSI boards). Many makers claimed that their boards supported 8-core chips, yet they had no VRM cooling and sometimes only 3+1 phases. That's unacceptable, and the minimum specification should be higher for any new platform. All this shady shit going on with LGA 1200 isn't acceptable for one simple reason: if a motherboard is made to only just barely fulfill the spec, then chances are such a board won't last very long before malfunctioning. Then there are other factors like hotter climates, dust build-up and so on.
Maybe the solution should be not allowing manufacturers to add CPUs to their support list if the VRM can't supply enough electricity for PL2 without overheating.

IMO any LGA 1200 board should be made to handle the supported CPU's PL2 plus 20% higher power demands. If a board can only sustain 150 watts, then it should be limited to 65 watt TDP parts only (the i5 11400F's PL1 is 65 watts and its PL2 is 154 watts). Otherwise, the lesser products will soon become e-waste. And to get the expected performance, a power limit ratio of at least 1:1.5 between PL1 and PL2 should be maintained by every motherboard vendor. For the i5 11400F that would be PL1 at 65 watts and PL2 at 97.5 watts. So the complete minimum-spec board for an i5 11400F would be one that can sustain at least 117 watts continuously (for 1 or 2 hours), with passively cooled VRMs staying below 70°C in a 20°C room in some benchmark case.
Well, the default PL2 of 8-core 65 W Rocket Lake chips is 225 Watts - which I think is way too much power for any CPU. But it's a number from Intel, so I guess they should mandate motherboard makers to be able to deliver it, or maybe come up with a lower PL2 instead.

To me it's only PL1 that matters; PL2 looks pointless. Anyway, where will you post your achievement log? It seems the 11700 was good enough for you. From what you write I can conclude that it's a much more power hungry chip than the i5 10400F, as the i5 within its 65 watt limit does achieve maximum all-core boost frequencies under almost any heavy load. I ran a 30-minute Cinebench R23 loop yesterday for fun at Intel "spec" settings and it kept going at 3.8-4GHz. So as long as it's not thermally limited, the i5 at 65 watts can actually sustain all-core turbo in almost any workload. That's really nice. Your 11700 seems nowhere close to all-core turbo in a long workload. It makes me wonder if the 11700 wouldn't be slower at Intel "spec" settings than the 10400F in something like encoding.
I think I will open a forum thread here on TPU once I get myself to it (maybe over the weekend). :)

Intel reached the limits of the traditional TDP designation long ago, but just applied power limits instead of changing the formula like AMD did. I remember my 7700 (non-K) always maintained its max boost while consuming anywhere between 50-60 Watts under full load. It seems your 10400F is close too, but you just can't do the same with 8 cores. The 11700 eats about 50 Watts and maintains 4.8-4.9 GHz in single-threaded loads, which is nice, but drops to 2.8 GHz in Cinebench R23 multi with default power limits. You can't expect to load 8 times the cores with only 30% more power, I guess.
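A crude way to see why: per-core power scales roughly with frequency times voltage squared, and voltage has to rise with frequency, so per-core power grows much faster than linearly with clock speed. A back-of-envelope sketch - all the input numbers are guesses loosely based on the figures above, not measurements:

[CODE=python]
# Back-of-envelope sketch of how far all-core clocks must drop to fit a
# 65 W package budget. Assumes per-core power scales roughly with f^3
# (P ~ C*V^2*f with V rising roughly linearly with f). All inputs are
# illustrative guesses, not measurements.

def all_core_ghz(budget_w, cores, uncore_w, ref_core_w, ref_ghz):
    per_core_budget = (budget_w - uncore_w) / cores
    return ref_ghz * (per_core_budget / ref_core_w) ** (1 / 3)

# Guess: one core at 4.9 GHz draws ~35 W, uncore ~15 W (together roughly the
# ~50 W single-thread package power mentioned above), 65 W budget, 8 cores.
print(f"{all_core_ghz(65, 8, 15, 35, 4.9):.1f} GHz")   # about 2.8 GHz
[/CODE]

Which lands surprisingly close to the 2.8 GHz you're actually seeing, for whatever a napkin model is worth.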
 
Joined
May 8, 2021
Messages
1,978 (1.82/day)
Location
Lithuania
System Name Shizuka
Processor Intel Core i5 10400F
Motherboard Gigabyte B460M Aorus Pro
Cooling Scythe Choten
Memory 2x8GB G.Skill Aegis 2666 MHz
Video Card(s) PowerColor Red Dragon V2 RX 580 8GB ~100 watts in Wattman
Storage 512GB WD Blue + 256GB WD Green + 4TH Toshiba X300
Display(s) BenQ BL2420PT
Case Cooler Master Silencio S400
Audio Device(s) Topping D10 + AIWA NSX-V70
Power Supply Chieftec A90 550W (GDP-550C)
Mouse Steel Series Rival 100
Keyboard Hama SL 570
Software Windows 10 Enterprise
Maybe the solution should be not allowing manufacturers to add CPUs to their support list if the VRM can't supply enough electricity for PL2 without overheating.
That's what AMD did in the AM3+ days. Boards were in 3 tiers: 95W, 125W and 225W. Too bad even then it was a complete shitshow, as all AM3+ chips consumed way more power than AMD said. If Intel wanted to, they could pull it off more elegantly.

Well, the default PL2 of 8-core 65 W Rocket Lake chips is 225 Watts - which I think is way too much power for any CPU. But it's a number from Intel, so I guess they should mandate motherboard makers to be able to deliver it, or maybe come up with a lower PL2 instead.
I would get rid of PL2 and make PL1 95 watts. That almost looks like a good solution, knowing that LGA 1200 boards should already be able to handle a PL1 of 125 watts - too bad Intel's stock cooler and the OEMs wouldn't agree with that. PL2 needs to be lower anyway; even a 1:2 ratio is crazy. Ideally, imo, the PL ratio should be 1:1.5.


Intel reached the limits of the traditional TDP designation long ago, but just applied power limits instead of changing the formula like AMD did. I remember my 7700 (non-K) always maintained its max boost while consuming anywhere between 50-60 Watts under full load. It seems your 10400F is close too, but you just can't do the same with 8 cores. The 11700 eats about 50 Watts and maintains 4.8-4.9 GHz in single-threaded loads, which is nice, but drops to 2.8 GHz in Cinebench R23 multi with default power limits. You can't expect to load 8 times the cores with only 30% more power, I guess.
That's unfortunate, because that's almost half of what the chip is capable of. It seems to me that Intel was just lazy and forced the same PL1 onto all locked chips. It's like car manufacturers reusing the same engine in a number of cars, except in Intel's case it would be like sticking a 1.9 TDI into a dump truck. It will move, but it will be infuriatingly slow. Meanwhile PL2 is like a W12 in that same dump truck. It seems Intel just didn't really think through the consequences of the power limits on different SKUs, and perhaps only tested one CPU and then applied those findings to every chip. Seems like something braindead management would do.
 
Joined
Jan 14, 2019
Messages
9,881 (5.12/day)
Location
Midlands, UK
System Name Nebulon-B Mk. 4
Processor AMD Ryzen 7 7800X3D
Motherboard MSi PRO B650M-A WiFi
Cooling be quiet! Dark Rock 4
Memory 2x 24 GB Corsair Vengeance EXPO DDR5-6000
Video Card(s) Sapphire Pulse Radeon RX 7800 XT
Storage 2 TB Corsair MP600 GS, 2 TB Corsair MP600 R2, 4 + 8 TB Seagate Barracuda 3.5"
Display(s) Dell S3422DWG, 7" Waveshare touchscreen
Case Kolink Citadel Mesh black
Power Supply Seasonic Prime GX-750
Mouse Logitech MX Master 2S
Keyboard Logitech G413 SE
Software Windows 10 Pro
Benchmark Scores Cinebench R23 single-core: 1,800, multi-core: 18,000. Superposition 1080p Extreme: 9,900.
That's unfortunate, because that's almost half of what the chip is capable of. It seems to me that Intel was just lazy and forced the same PL1 onto all locked chips. It's like car manufacturers reusing the same engine in a number of cars, except in Intel's case it would be like sticking a 1.9 TDI into a dump truck. It will move, but it will be infuriatingly slow. Meanwhile PL2 is like a W12 in that same dump truck. It seems Intel just didn't really think through the consequences of the power limits on different SKUs, and perhaps only tested one CPU and then applied those findings to every chip. Seems like something braindead management would do.
That pretty much sums it up. Though I don't think it's a total failure, as even with lower clocks, you still have a 6/8 core CPU. AMD FX failed as a gaming platform because of its low single-core performance, but didn't age too badly because games started using more threads. With these 65 W 11th gen Core CPUs, you have the high lightly-threaded clock speeds you need in the present, and the core count you may need in the future. And if you want to combine the two, you can slap a bigger cooler on it and increase/disable its power limits. Heck, I'm even tempted to get my AeroCool Aero One Mini case and Corsair H100i out of the wardrobe and see how far the little i7 goes, even though this was never my original plan. :rolleyes:
 
Joined
Jan 14, 2019
Messages
9,881 (5.12/day)
Location
Midlands, UK
System Name Nebulon-B Mk. 4
Processor AMD Ryzen 7 7800X3D
Motherboard MSi PRO B650M-A WiFi
Cooling be quiet! Dark Rock 4
Memory 2x 24 GB Corsair Vengeance EXPO DDR5-6000
Video Card(s) Sapphire Pulse Radeon RX 7800 XT
Storage 2 TB Corsair MP600 GS, 2 TB Corsair MP600 R2, 4 + 8 TB Seagate Barracuda 3.5"
Display(s) Dell S3422DWG, 7" Waveshare touchscreen
Case Kolink Citadel Mesh black
Power Supply Seasonic Prime GX-750
Mouse Logitech MX Master 2S
Keyboard Logitech G413 SE
Software Windows 10 Pro
Benchmark Scores Cinebench R23 single-core: 1,800, multi-core: 18,000. Superposition 1080p Extreme: 9,900.
Sounds interesting! Let me know if you make a build log?
Anyway, where will you post your achievement log?
It's aliiive! :)

 