
Be careful when recommending B560 motherboards to novice builders (HWUB)

Joined
Oct 1, 2014
Messages
1,858 (0.53/day)
Location
Calabash, NC
System Name The Captain (2.0)
Processor Ryzen 7 7700X
Motherboard Gigabyte X670E AORUS Master
Cooling 280mm Arctic Liquid Freezer II, 4x Be Quiet! 140mm Silent Wings 4 (1x exhaust 3x intake)
Memory 32GB (2x16) G.Skill Trident Z5 Neo (6000Mhz)
Video Card(s) MSI GeForce RTX 3070 SUPRIM X
Storage 1x Crucial MX500 500GB SSD; 1x Crucial MX500 500GB M.2 SSD; 1x WD Blue HDD, 1x Crucial P5 Plus
Display(s) Aorus CV27F 27" 1080p 165Hz
Case Phanteks Evolv X (Anthracite Gray)
Power Supply Corsair RMx (2021) 1000W 80-Plus Gold
Mouse Varies based on mood/task; is currently Razer Basilisk V3 Pro or Razer Cobra Pro
Keyboard Varies based on mood; currently Razer Blackwidow V4 75% and Hyper X Alloy 65
For those that don't feel like watching the video, in short, ASRock had one job and massively blew it. Again. :roll:

Kudos to Steve for calling them out on their bullshit yet again. Make no mistake though, none of the boards in the video were exactly stellar.

Also, it sounded like Steve was suggesting the VRMs on ASRock's H510 boards are even worse, yet the product pages still claim support for the 11900K. :fear:
 
Joined
May 8, 2021
Messages
1,978 (1.82/day)
Location
Lithuania
System Name Shizuka
Processor Intel Core i5 10400F
Motherboard Gigabyte B460M Aorus Pro
Cooling Scythe Choten
Memory 2x8GB G.Skill Aegis 2666 MHz
Video Card(s) PowerColor Red Dragon V2 RX 580 8GB ~100 watts in Wattman
Storage 512GB WD Blue + 256GB WD Green + 4TB Toshiba X300
Display(s) BenQ BL2420PT
Case Cooler Master Silencio S400
Audio Device(s) Topping D10 + AIWA NSX-V70
Power Supply Chieftec A90 550W (GDP-550C)
Mouse Steel Series Rival 100
Keyboard Hama SL 570
Software Windows 10 Enterprise
For those that don't feel like watching the video, in short, ASRock had one job and massively blew it. Again. :roll:
Gigabyte failed too.
 
Joined
Mar 3, 2020
Messages
111 (0.07/day)
Location
Australia
System Name wasted talent
Processor i5-11400F
Motherboard Gigabyte B560M Aorussy Pro
Cooling Silverstone AR12
Memory Patriot Viper Steel 2X8 4400 @ 3600 C14,14,12,28
Video Card(s) Sapphire RX 6700 Pulse, Galax 1650 Super EX
Storage Kingston A2000 500GB
Display(s) Gigabyte M27Q
Case open mATX: zwzdiy.cc/M/Product/209574419.html
Audio Device(s) HiFiMan HE400SE
Power Supply Strix Gold 650W
Mouse Skoll Mini, G502 LightSpeed
Keyboard Akko 3084S
Software 1809 LTSC
Benchmark Scores 3968/540 CB R20 MT/ST
Well, ASRock does have a hard limit on PL2, and it's lower than the 125 W PL1 of a 125 W CPU like the 11600K. I like to mess with my cheap toys, adding heatsinks myself and blasting them with air to make them do things they were never designed to do, so it's a dick move on ASRock's part, since their boards have a better VRM than MSI's PRO-E (thanks MSI for reminding me Niko Semiconductor exists). ASRock hard-locks their VRM to ~100 W and consequently 80-90 °C. Gigabyte had a slightly better VRM than the HDV I think, and it at least let an 11600K hold its base clock without throttling.

Hopefully either 10 nm brings efficiency improvements, or Intel gets stricter about boards at least not failing to hold base clock on an i5... Meanwhile (yes, AMD CPUs are more efficient, but that has no bearing on a board that violates Intel's minimum spec), a 3600X can be run on an A320 with no problems and no modifications, software or hardware.
 

Mussels

Freshwater Moderator
Staff member
Joined
Oct 6, 2004
Messages
58,413 (8.18/day)
Location
Oystralia
System Name Rainbow Sparkles (Power efficient, <350W gaming load)
Processor Ryzen R7 5800x3D (Undervolted, 4.45GHz all core)
Motherboard Asus x570-F (BIOS Modded)
Cooling Alphacool Apex UV - Alphacool Eisblock XPX Aurora + EK Quantum ARGB 3090 w/ active backplate
Memory 2x32GB DDR4 3600 Corsair Vengeance RGB @3866 C18-22-22-22-42 TRFC704 (1.4V Hynix MJR - SoC 1.15V)
Video Card(s) Galax RTX 3090 SG 24GB: Underclocked to 1700Mhz 0.750v (375W down to 250W)
Storage 2TB WD SN850 NVME + 1TB Samsung 970 Pro NVME + 1TB Intel 6000P NVME USB 3.2
Display(s) Phillips 32 32M1N5800A (4k144), LG 32" (4K60) | Gigabyte G32QC (2k165) | Phillips 328m6fjrmb (2K144)
Case Fractal Design R6
Audio Device(s) Logitech G560 | Corsair Void pro RGB |Blue Yeti mic
Power Supply Fractal Ion+ 2 860W (Platinum) (This thing is God-tier. Silent and TINY)
Mouse Logitech G Pro wireless + Steelseries Prisma XL
Keyboard Razer Huntsman TE ( Sexy white keycaps)
VR HMD Oculus Rift S + Quest 2
Software Windows 11 pro x64 (Yes, it's genuinely a good OS) OpenRGB - ditch the branded bloatware!
Benchmark Scores Nyooom.
Inspired by this thread:

Intel B560 chipset boards fail again, hard. This time they can't even sustain "Intel spec" settings for 125-watt chips (Intel suggests a PL1 of 125 W and a PL2 of 251 W for the 11900K, and a PL1 of 125 W and a PL2 of 224 W for the 11600K). The boards failed to sustain even around 100 watts, making them a complete no-go with any K chip and in fact quite toasty with non-K chips. ASRock's and Gigabyte's low-end boards failed the VRM test: both overheated their VRMs and failed to sustain the CPUs' base clock speed, which makes a K-series chip RMA-able in such a case (if a chip can't sustain its base clock, Intel offers an RMA for it). The motherboards, however, "work as expected", and by that I mean no refund and no RMA if you are unhappy with them.


As always, if you buy a motherboard, pay attention to VRM quality, especially on the Rocket Lake platform.



'Cause this isn't a misleading absolute clusterfark at all, when these results are all from the same CPU...

This is ASRock's particular fail, where the 125 W parts all got the 65 W limit treatment.
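If anyone is curious how those PL1/PL2/tau numbers are meant to interact, here's a rough Python sketch of the commonly described rolling-average power budget. It's my own simplification with the 11600K's published limits plugged in, not Intel's actual firmware algorithm, so treat it as illustrative only:

```python
# Rough sketch of the PL1/PL2/tau turbo budget as it's commonly described:
# the CPU may draw up to PL2 while an exponentially weighted average of
# package power stays below PL1. Simplified - real firmware differs.
import math

PL1, PL2, TAU = 125.0, 224.0, 28.0   # watts, watts, seconds (11600K-ish numbers)
DT = 1.0                             # simulation time step, seconds

def allowed_power(requested, avg):
    """Clamp requested package power for one step based on the rolling average."""
    return min(requested, PL2 if avg < PL1 else PL1)

def simulate(requested=224.0, seconds=60):
    avg = 0.0
    alpha = 1.0 - math.exp(-DT / TAU)    # EWMA smoothing factor for this step size
    for t in range(seconds):
        power = allowed_power(requested, avg)
        avg += alpha * (power - avg)
        if t % 5 == 0:
            print(f"t={t:2d}s  power={power:6.1f} W  rolling avg={avg:6.1f} W")

if __name__ == "__main__":
    simulate()
```

With those numbers the simulated chip holds ~224 W for a bit over 20 seconds before the average catches up and it falls back to ~125 W - which is the behaviour these B560 boards apparently can't even manage.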
 
Joined
Sep 3, 2019
Messages
2,981 (1.76/day)
Location
Thessaloniki, Greece
System Name PC on since Aug 2019, 1st CPU R5 3600 + ASUS ROG RX580 8GB >> MSI Gaming X RX5700XT (Jan 2020)
Processor Ryzen 9 5900X (July 2022), 150W PPT limit, 79C temp limit, CO -9~14
Motherboard Gigabyte X570 Aorus Pro (Rev1.0), BIOS F37h, AGESA V2 1.2.0.B
Cooling Arctic Liquid Freezer II 420mm Rev7 with off center mount for Ryzen, TIM: Kryonaut
Memory 2x16GB G.Skill Trident Z Neo GTZN (July 2022) 3600MHz 1.42V CL16-16-16-16-32-48 1T, tRFC:288, B-die
Video Card(s) Sapphire Nitro+ RX 7900XTX (Dec 2023) 314~465W (387W current) PowerLimit, 1060mV, Adrenalin v24.3.1
Storage Samsung NVMe: 980Pro 1TB(OS 2022), 970Pro 512GB(2019) / SATA-III: 850Pro 1TB(2015) 860Evo 1TB(2020)
Display(s) Dell Alienware AW3423DW 34" QD-OLED curved (1800R), 3440x1440 144Hz (max 175Hz) HDR1000, VRR on
Case None... naked on desk
Audio Device(s) Astro A50 headset
Power Supply Corsair HX750i, 80+ Platinum, 93% (250~700W), modular, single/dual rail (switch)
Mouse Logitech MX Master (Gen1)
Keyboard Logitech G15 (Gen2) w/ LCDSirReal applet
Software Windows 11 Home 64bit (v23H2, OSB 22631.3155)
Inspired by this thread:

Intel B560 chipset boards fail again, hard. This time they can't even sustain "Intel spec" settings for 125-watt chips (Intel suggests a PL1 of 125 W and a PL2 of 251 W for the 11900K, and a PL1 of 125 W and a PL2 of 224 W for the 11600K). The boards failed to sustain even around 100 watts, making them a complete no-go with any K chip and in fact quite toasty with non-K chips. ASRock's and Gigabyte's low-end boards failed the VRM test: both overheated their VRMs and failed to sustain the CPUs' base clock speed, which makes a K-series chip RMA-able in such a case (if a chip can't sustain its base clock, Intel offers an RMA for it). The motherboards, however, "work as expected", and by that I mean no refund and no RMA if you are unhappy with them.


As always, if you buy a motherboard, pay attention to VRM quality, especially on the Rocket Lake platform.
This is a cheap board without even a basic chunk of metal on those VRMs.
Most of these boards should've been labeled as i3-capable. They don't even run i5s properly, let alone i7s/i9s.

What are vendors and Intel thinking... :rolleyes: (?)
 
Joined
Jan 14, 2019
Messages
9,886 (5.12/day)
Location
Midlands, UK
System Name Nebulon-B Mk. 4
Processor AMD Ryzen 7 7800X3D
Motherboard MSi PRO B650M-A WiFi
Cooling be quiet! Dark Rock 4
Memory 2x 24 GB Corsair Vengeance EXPO DDR5-6000
Video Card(s) Sapphire Pulse Radeon RX 7800 XT
Storage 2 TB Corsair MP600 GS, 2 TB Corsair MP600 R2, 4 + 8 TB Seagate Barracuda 3.5"
Display(s) Dell S3422DWG, 7" Waveshare touchscreen
Case Kolink Citadel Mesh black
Power Supply Seasonic Prime GX-750
Mouse Logitech MX Master 2S
Keyboard Logitech G413 SE
Software Windows 10 Pro
Benchmark Scores Cinebench R23 single-core: 1,800, multi-core: 18,000. Superposition 1080p Extreme: 9,900.
Actually, you do see the maximum boost clock quite often on Intel chips. I often see 4.3 GHz on my i5 10400F, and an all-core maximum of 4 GHz at pretty much any load. I've heard that Ryzen chips simply don't have such tables: if they can, they will keep increasing clock speed for as long as cooling permits, up to the maximum turbo speed specified by AMD. AMD also does that in 25 MHz increments, while Intel does it in 100 MHz increments. I don't remember many details, but AMD's and Intel's boosting algorithms are substantially different.
That's exactly what I mean. ;) AMD CPUs don't have boost tables like Intel chips do. That's why nobody cares if you see 3.6 or 3.8 or even 4 GHz in all-core workloads. No one ever said what clocks you should see, so you're not expecting anything.

Oh dear, those Ryzens are bad at dissipating heat.
That's another thing nobody seems to talk about. My Ryzen 3 3100 runs at 72 °C in Cinebench with the stock cooler (50 W max power). The 3600 came very close to throttling even with a be quiet! Shadow Rock LP (that was cold to the touch) with a maxed out 88 W PPT. The 5950X behaved similarly to the 3100 when paired with a 240 mm AIO with default BIOS settings (~130 W power consumption), but enabling the Asus optimizer and bringing power consumption towards 180 W made it jump way above 80 °C even after a few seconds of Cinebench with the same AIO. My theory is that coldplate designs are traditionally optimised to work best around the middle of their surface area. As chiplets are offset and manufactured on a smaller 7 nm node, they just can't transfer their heat to the cooler as effectively as a larger central die can. This needs more testing too. Maybe this is why AMD is so reluctant to bring their APUs to the DIY market. :wtf:

I remember the FX times too. My 8150 was the most brilliant terrible processor I've ever had. :D

And people seem to overlook another Intel CPU line, the T-series chips, which are rated at 35 watts. The i9 10900T was rated at 35 watts, and to achieve that it has a base speed of 1.9 GHz and a maximum boost clock of 4.6 GHz. In practice you won't ever see it running at base speed or at TDP; for Intel chips, the maximum all-core boost clock is essentially the new base clock. And those T chips were really bad at their job, as they hardly saved any power compared to the non-T versions, thanks to stupidly high PL2 values. Bullshit like that destroyed any value of a separate T SKU. What's the point of getting the T version when you can get the non-T version and then set the TDP to whatever you like? And for that matter, what's the point of getting a K SKU when you can just ramp up the PL values on a lower-end chip and have it end up almost as good as the K version? A non-K chip wouldn't even lose its warranty from having its PL values modified, since Intel let motherboard OEMs go wild.
If you think about it, spending a lot less money for a "T" CPU and unlocking its power limits is a much better deal than buying a "K" version and locking it to suit your available cooling capacity. ;) It's a shame T series are not available for DIY with 11th gen.

It's such a shitstorm that I don't even know what the least painful way to resolve it is anymore. Enforce a strict TDP? Raise the TDP? Get rid of PL2? Keep performance or accept losses? All this nonsense just makes me want to go back to the era of a single clock speed for everyone and be done with all this TDP bullshit. Let TDP be whatever is needed for the rated clock speed and be fine with the results, but computer OEMs wouldn't have any of that.
To be honest, I've disagreed with boosting right from the start. Why would you not want peak performance with pre-designed power consumption all the time? The whole concept is flawed.

Cheers, but be ready for the existential crisis of whether to unlock PL values or not.
Well honestly, I'm happy with my Ryzen 3 3100, as I don't really need more with my GTX 1650 at the moment. The i7 is only meant to be a bit more future-proof, but more importantly to satisfy my curiosity about how heat dissipation differs between 7 nm chiplets and a 14 nm central die. If it ends up fine, it will be money well spent. If not, I can sell it at any time, being a modern and newly bought part. :ohwell:

As for the existential crisis, I'm planning to do some in-depth testing of different PL values and cooling possibilities in a tiny case with restricted airflow. If there is interest, I might as well publish the results (maybe open a new forum thread) for future SFF builders.
 
Joined
May 8, 2021
Messages
1,978 (1.82/day)
Location
Lithuania
System Name Shizuka
Processor Intel Core i5 10400F
Motherboard Gigabyte B460M Aorus Pro
Cooling Scythe Choten
Memory 2x8GB G.Skill Aegis 2666 MHz
Video Card(s) PowerColor Red Dragon V2 RX 580 8GB ~100 watts in Wattman
Storage 512GB WD Blue + 256GB WD Green + 4TB Toshiba X300
Display(s) BenQ BL2420PT
Case Cooler Master Silencio S400
Audio Device(s) Topping D10 + AIWA NSX-V70
Power Supply Chieftec A90 550W (GDP-550C)
Mouse Steel Series Rival 100
Keyboard Hama SL 570
Software Windows 10 Enterprise
This is a cheap board without even a basic chunk of metal on those VRMs.
Most of these boards should've been labeled as i3-capable. They don't even run i5s properly, let alone i7s/i9s.

What are vendors and Intel thinking... :rolleyes: (?)
I dunno. But I have one of the cheapest FM2+ boards from Gigabyte, I think it's an A68H-DS2 something. It handles an Athlon 760K at stock perfectly fine, and judging by the VRM temperatures it could handle it overclocked. The 760K is an FX-derived chip and even at stock it uses an obnoxious amount of power with all cores loaded (140-160 watts). All that board has is 4+1 bare phases, no heatsink. I also have some A68H-HD+ ASRock board and it handles an Athlon 845 perfectly fine too, again with bare 4+1 phases. So it's not that OEMs can't make a functional board with bare VRMs that can supply that wattage; I think they just downspecced these ones too much and therefore they overheat, thus turning an i5 11600K into an 11600.

That's exactly what I mean. ;) AMD CPUs don't have boost tables like Intel chips do. That's why nobody cares if you see 3.6 or 3.8 or even 4 GHz in all-core workloads. No one ever said what clocks you should see, so you're not expecting anything.
At least Intel lets you fry your stuff if you're a complete idiot.

That's another thing nobody seems to talk about. My Ryzen 3 3100 runs at 72 °C in Cinebench with the stock cooler. The 3600 came very close to throttling even with a be quiet! Shadow Rock LP (that was cold to the touch). The 5950X behaved similarly to the 3100 when paired with a 240 mm AIO with default BIOS settings (~130 W power consumption), but enabling the Asus optimizer and bringing power consumption towards 180 W made it jump way above 80 °C even after a few seconds of Cinebench with the same AIO. My theory is that coldplate designs are traditionally optimised to work best around the middle of their surface area. As chiplets are offset and manufactured on a smaller 7 nm node, they just can't transfer their heat to the cooler as effectively as a larger central die can. This needs more testing too. Maybe this is why AMD is so reluctant to bring their APUs to the DIY market. :wtf:
72C in Cinebench? That means it will almost throttle in prime95. That's not good. I don't consider that my stuff has proper cooling if it can't run prime95 indefinitely, and in some cases prime95 and Furmark at the same time. I couldn't touch Ryzens with such problems. I'm not sure about your theory, but I know that I would try lapping those Ryzens. Maybe they are just uneven.

I remember the FX times too. My 8150 was the most brilliant terrible processor I've ever had. :D
Oh, I so agree here, I loved my FX 6300. It was the first chip I pushed to 5.288 GHz, and the first chip to turn the VRM area of a motherboard brown in the process. I gotta say I didn't really care if it destroyed motherboards, as long as it was fun to overclock. It also undervolted great and could run passively cooled with a Scythe Mugen 4 heatsink. I kept using it until 2019, at which point its performance just wasn't good enough anymore. Apparently FX lasted a long time and was still surprisingly not bad even in 2019.


If you think about it, spending a lot less money for a "T" CPU and unlocking its power limits is a much better deal than buying a "K" version and locking it to suit your available cooling capacity. ;) It's a shame they're not available for DIY with 11th gen.
The problem is that they usually aren't sold for a lot less than the non-T version.

To be honest, I disagreed with boosting right from the start. Why would you not want peak performance with pre-designed power consumption all the time? The whole concept is flawed.
Perhaps. For me, coming from FX, it was obvious that only the base clock is an important spec, because boost is opportunistic and never a guaranteed speed. That's how I thought until I discovered the PL and Tau stuff. FX and Phenom II chips had an awful boost that wasn't worth bothering with; I just disabled it, since it pushed way too many volts for small performance gains, and boosting could ruin the other cores' clock speeds. Oh, and you also had to enable the C6 state and AMD APM, and I'm not a fan of either. But on Intel you can disable any of that shit and just set PL high.


Well honestly, I'm happy with my Ryzen 3 3100 as I don't really need more for my GTX 1650 at the moment. The i7 is only meant to be a bit more future-proof, but more importantly to satisfy my curiosity about how heat dissipation differs between 7 nm chiplets and a 14 nm central die. If it ends up fine, it will be money well spent. If not, I can sell it at any time, being a modern and newly bought part. :ohwell:

As for the existential crisis, I'm planning to do some in-depth testing of different PL values and cooling possibilities in a tiny case with restricted airflow. If there is interest, I might as well publish the results (maybe open a new forum thread) for future SFF builders.
Oh, then I guess it's fine. One tip: I tested my 10400F at 45 watts and it was still surprisingly decent. Also, at 3.6 GHz it uses dramatically less power than at 4 GHz; the impact was so great that wattage came down from 111 watts to just 74 watts, if the CPU is reporting it correctly. From my wall readings, I have no reason to suspect that it isn't.
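For what it's worth, that 111 W to 74 W drop lines up with the usual rule of thumb that dynamic power scales roughly with frequency times voltage squared. A quick back-of-the-envelope sketch in Python, with guessed voltages since I didn't log them:

```python
# Back-of-the-envelope dynamic power scaling: P ~ f * V^2 (leakage ignored).
# The voltages below are guesses for illustration, not measured values.

def scaled_power(p0, f0, v0, f1, v1):
    """Estimate package power at (f1, v1) from a known point (p0, f0, v0)."""
    return p0 * (f1 / f0) * (v1 / v0) ** 2

# ~111 W measured at 4.0 GHz; assume ~1.20 V there and ~1.05 V at 3.6 GHz.
estimate = scaled_power(111, 4.0, 1.20, 3.6, 1.05)
print(f"Estimated power at 3.6 GHz: {estimate:.0f} W")   # ~76 W vs ~74 W reported
```

With those assumed voltages the estimate lands around 76 W, close to what the CPU reported, which is why dropping a few hundred MHz (and the voltage that goes with it) is such a cheap efficiency win.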
 
Joined
Oct 2, 2020
Messages
658 (0.51/day)
System Name ASUS TUF F15
Processor Intel Core i5-10300H
Motherboard ASUS FX506LHB
Cooling Laptop built-in cooling lol
Memory 24GB @ 2933 Dual Channel
Video Card(s) Intel UHD & Nvidia GTX 1650 Mobile
Storage WD Black SN770 NVMe 1TB PCIe 4.0
Display(s) Dell 27 4K Monitor S2721QS; Samsung Odyssey G55 Curved 2K 144 Hz LC27G55TQWRXEN
Audio Device(s) LOGITECH 2.1-channel
Power Supply ASUS 180W PSU (from more powerful ASUS TUF DASH F15 lol)
Mouse Logitech G604
Keyboard SteelSeries Apex 7 TKL
Software Windows 11 Pro
Well, the moral is simple, guys: don't put an 8-core max-clocker into the cheapest mobo you could get for cashback lol. Balance is needed in PC building. It's not a good idea to pair an RTX 3090 with some i3-10100 or Ryzen 3 3100, nor is it a good idea to build a Ryzen 9 on an A520, or a 10700/11700 on an H410, H510, or a crappy B-chipset equivalent that exists just so the manufacturer can charge $20 more lol.....
 
Joined
Sep 3, 2019
Messages
2,981 (1.76/day)
Location
Thessaloniki, Greece
System Name PC on since Aug 2019, 1st CPU R5 3600 + ASUS ROG RX580 8GB >> MSI Gaming X RX5700XT (Jan 2020)
Processor Ryzen 9 5900X (July 2022), 150W PPT limit, 79C temp limit, CO -9~14
Motherboard Gigabyte X570 Aorus Pro (Rev1.0), BIOS F37h, AGESA V2 1.2.0.B
Cooling Arctic Liquid Freezer II 420mm Rev7 with off center mount for Ryzen, TIM: Kryonaut
Memory 2x16GB G.Skill Trident Z Neo GTZN (July 2022) 3600MHz 1.42V CL16-16-16-16-32-48 1T, tRFC:288, B-die
Video Card(s) Sapphire Nitro+ RX 7900XTX (Dec 2023) 314~465W (387W current) PowerLimit, 1060mV, Adrenalin v24.3.1
Storage Samsung NVMe: 980Pro 1TB(OS 2022), 970Pro 512GB(2019) / SATA-III: 850Pro 1TB(2015) 860Evo 1TB(2020)
Display(s) Dell Alienware AW3423DW 34" QD-OLED curved (1800R), 3440x1440 144Hz (max 175Hz) HDR1000, VRR on
Case None... naked on desk
Audio Device(s) Astro A50 headset
Power Supply Corsair HX750i, 80+ Platinum, 93% (250~700W), modular, single/dual rail (switch)
Mouse Logitech MX Master (Gen1)
Keyboard Logitech G15 (Gen2) w/ LCDSirReal applet
Software Windows 11 Home 64bit (v23H2, OSB 22631.3155)
I dunno. But I have one of the cheapest FM2+ boards from Gigabyte, I think it's an A68H-DS2 something. It handles an Athlon 760K at stock perfectly fine, and judging by the VRM temperatures it could handle it overclocked. The 760K is an FX-derived chip and even at stock it uses an obnoxious amount of power with all cores loaded (140-160 watts). All that board has is 4+1 bare phases, no heatsink. I also have some A68H-HD+ ASRock board and it handles an Athlon 845 perfectly fine too, again with bare 4+1 phases. So it's not that OEMs can't make a functional board with bare VRMs that can supply that wattage; I think they just downspecced these ones too much and therefore they overheat, thus turning an i5 11600K into an 11600.
References to 10-year-old systems provide little to zero insight into the current situation. Today a board without a VRM heatsink is an entry-level one, and it shouldn't be used with anything above an i3 for Intel or an R5 for AMD.
That's another thing nobody seems to talk about. My Ryzen 3 3100 runs at 72 °C in Cinebench with the stock cooler (50 W max power). The 3600 came very close to throttling even with a be quiet! Shadow Rock LP (that was cold to the touch) with a maxed out 88 W PPT. The 5950X behaved similarly to the 3100 when paired with a 240 mm AIO with default BIOS settings (~130 W power consumption), but enabling the Asus optimizer and bringing power consumption towards 180 W made it jump way above 80 °C even after a few seconds of Cinebench with the same AIO. My theory is that coldplate designs are traditionally optimised to work best around the middle of their surface area. As chiplets are offset and manufactured on a smaller 7 nm node, they just can't transfer their heat to the cooler as effectively as a larger central die can. This needs more testing too. Maybe this is why AMD is so reluctant to bring their APUs to the DIY market. :wtf:
This is exactly the reason I switched thermal paste from a normal one (AS5) to liquid metal: very small die surface and an off-center position.
With AS5, the R5 3600 (88 W) hit the same max temp as my old FX 8370 (150+ W) with the same 280 mm AIO.
LM helped decrease temps by about 6~7 °C.

The downside with LM is the cost, the material handling and the replacements. It also reacts with some other materials like copper and aluminium.
I'm OK with that, but it's certainly not for the mainstream user.
 
Joined
May 8, 2021
Messages
1,978 (1.82/day)
Location
Lithuania
System Name Shizuka
Processor Intel Core i5 10400F
Motherboard Gigabyte B460M Aorus Pro
Cooling Scythe Choten
Memory 2x8GB G.Skill Aegis 2666 MHz
Video Card(s) PowerColor Red Dragon V2 RX 580 8GB ~100 watts in Wattman
Storage 512GB WD Blue + 256GB WD Green + 4TB Toshiba X300
Display(s) BenQ BL2420PT
Case Cooler Master Silencio S400
Audio Device(s) Topping D10 + AIWA NSX-V70
Power Supply Chieftec A90 550W (GDP-550C)
Mouse Steel Series Rival 100
Keyboard Hama SL 570
Software Windows 10 Enterprise
References to 10-year-old systems provide little to zero insight into the current situation. Today a board without a VRM heatsink is an entry-level one, and it shouldn't be used with anything above an i3 for Intel or an R5 for AMD.
I bought that board with the CPU in 2018, and it's the FM2+ platform, not FM2. It's not 10 years old. Zen+ was already out at that time.

BTW, Arctic Silver 5 is one of the worst-performing thermal pastes nowadays; it's no surprise that even some basic paste would beat it. It's literally dead last in tests.
 
Joined
May 2, 2017
Messages
7,762 (3.04/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
Inspired by this thread:

Intel B560 chipset boards fail again, hard. This time they can't even sustain "Intel spec" settings for 125-watt chips (Intel suggests a PL1 of 125 W and a PL2 of 251 W for the 11900K, and a PL1 of 125 W and a PL2 of 224 W for the 11600K). The boards failed to sustain even around 100 watts, making them a complete no-go with any K chip and in fact quite toasty with non-K chips. ASRock's and Gigabyte's low-end boards failed the VRM test: both overheated their VRMs and failed to sustain the CPUs' base clock speed, which makes a K-series chip RMA-able in such a case (if a chip can't sustain its base clock, Intel offers an RMA for it). The motherboards, however, "work as expected", and by that I mean no refund and no RMA if you are unhappy with them.


As always, if you buy a motherboard, pay attention to VRM quality, especially on the Rocket Lake platform.
Inspired by (and linking to) the same thread that you're posting in? :laugh:

Still, it's pretty atrocious that ASRock makes motherboards that don't even meet base spec, and that Intel doesn't enforce platform standards.
That's exactly my problem with reviews these days. TechPowerUp! is still doing fine, but if you look at any review on youtube, they all enforce unlocked TDPs and expect CPUs to run like that in every motherboard. They only ever care about peak performance with beefy coolers, and that extra 1% that nobody can ever see in real life, but puts X CPU just ahead of the competition. That's what caused the stir at Hardware Unboxed. Do you drive your car with the engine rpm at redline all the time? I don't think so. Heck, even AMD CPUs don't maintain their max turbo clocks all the time. In fact, AMD never even publishes their boosting tables, only a vague max boost clock (that you probably never see in real life, just like with any Intel CPU) and nobody complains about it.
The problem for reviewers is they essentially have to choose - either Intel spec or motherboard default, whatever that is - and depending on their resources, they are partially at the mercy of motherboard makers, as not everyone can afford to buy an expensive motherboard for a test platform. Most serious reviewers are pretty transparent about this as well as their reasoning behind whatever choice they make. But YouTubers generally don't count as 'serious reviewers'. GN is the only real exception (though perhaps Level1Techs should be included?). Other than that there's AnandTech, TPU, and a few other sites doing in-depth written reviews of high quality, including discussions of methodologies and the rationales behind them.
Edit: Speaking of AMD max boost clocks, the "Asus Optimizer" setting in my motherboard BIOS pushes power limits out in space so that the CPU can maintain a higher boost clock even at a 100% workload. With this enabled, my 5950 chewed through around 180 Watts in Cinebench and came close to throttling temps even with a 240 mm AIO. Sure, it maintained 4.2-4.4 GHz all-core instead of the normal 3.6-3.8, but still... Why does nobody complain about this? Because it's an Asus feature, not AMD spec. Unlocked power limits by default should not be allowed.
Yep, platform holders need to enforce their standards. Optional features from third parties are never a problem as long as they are optional.
They don't throttle, but they adjust their boost bins. My 1650 runs at different clock speeds during different workloads - Superposition 720p or 1080p Medium lets it run at 1920-1950 MHz, it does around ~1900 in 1080p Ultra, and 1860 in Cyberpunk 2077.
Not the same. They adjust their boost bins to keep within power and thermal limits, and mainly the latter. Different workloads draw different amounts of power, so 1950MHz and 1860MHz might both draw the same amount of power depending on what work is being done.
I don't remember seeing a similar setting in my BIOS when I still had the 3600. It would be nice to play with it. Too late I guess. :ohwell:
It's not the most widely advertised nor obviously visible setting, but it should be there on any current-gen motherboard.
That would be a solid plan if there was an APU available. The Ryzen 4000 series are expensive and very difficult to find (they're also kind of a downgrade in gaming) and the 5000 series aren't out on DIY channels, yet.
I agree that they are difficult to find, but they're not that expensive. A bit of a premium, sure. I got my 4650G from a reputable Ebay store, and have had zero issues.
Not to worry, my impulse-bought Asus TUF B560M-Plus Wifi and Core i7-11700 have just arrived. Tests coming soon. :D
Good luck dealing with its stock 224W PL2, I guess? :p Should be decent enough as long as you're willing to tune things manually (or are willing to deal with short-term (28s tau) heat loads well above TDP).
If it wasn't BS, it wouldn't have tricked me into swapping my 65 W TDP processor with another 65 W part and expecting it to work just fine. Maybe it works with OEMs whose only goal is to make their systems 'just work', even if at the edge of throttling, but DIYers need to know what to expect and how to build their systems before buying.
Again: you need to change what you think TDP means, because you're talking and acting as if it is a consumer-facing denotation of power consumption. It isn't. It never has been - though, as discussed above, it used to be pretty similar. That is no longer the case. It is a number denoting a design spec for OEMs, and a marketing class for end users. Period. If you went into this with the expectation that "my system can cool ~65W reasonably, so I'll be able to sustain base clock and maybe boost some, but I'll never see a high all-core boost", then you wouldn't be disappointed, and your expectations would align with actual specifications. If you expect to get much higher than base boost clocks, while simultaneously steering well clear of tJmax, within a TDP-like thermal envelope, then you're basing your expectations on something that isn't reality, and that nobody has promised.
Actually, you do see the maximum boost clock quite often on Intel chips. I often see 4.3 GHz on my i5 10400F, and an all-core maximum of 4 GHz at pretty much any load. I've heard that Ryzen chips simply don't have such tables: if they can, they will keep increasing clock speed for as long as cooling permits, up to the maximum turbo speed specified by AMD. AMD also does that in 25 MHz increments, while Intel does it in 100 MHz increments. I don't remember many details, but AMD's and Intel's boosting algorithms are substantially different.
My 5800X easily exceeds its 4.7GHz spec in desktop usage - two cores boost to a reported 4.841 (likely 4.85 minus some base clock measurement inaccuracy, as the 100MHz base clock is reported as 99.8), with the remaining six all hitting 4.791. That's at entirely stock settings, though of course these clocks are with extremely low % and light loads. In Prime95 (blend, so not the hottest/heaviest workload) it fluctuates between 4.475 and 4.55GHz all-core at a PPT of 122-127W (well below its 138W limit). It also runs up to ~85°C under that load - but then my water loop is configured for silence, bases pump and fan rpms only off water temperatures, and ramps very slowly.

(Testing single core speeds is much more difficult as the Windows scheduler will shift heavy processes around from core to core rapidly in order to alleviate heat build-up - so running Blend with a single thread just results in a reported ~20% load across several cores, with clock speeds and which cores are under load fluctuating rapidly.)
Oh dear, those Ryzens are bad at dissipating heat. I remember a stock FX 6300 consuming over 200 watts in a stress test, despite being marketed as a 95-watt chip. It didn't have a wattage limiter, so turbo worked as long as there was thermal and VRM headroom (i.e. forever in most cases). It was rather easy to cool and didn't really need anything more than a Hyper 103 cooler. That cooler was fine for a 4.4-4.6 GHz all-core overclock, and it had to keep temps under 62 °C, because that was the thermal limit at first (later updated to 72 °C). A Ryzen 5950X should, in theory, be much easier to cool than FX.
Well ... thermal density is massively increased. The 5800X is the most difficult to cool (105W TDP/138W PPT across a single CCD, compared to 105W TDP/144W PPT across two CCDs for the 5900X and 5950X), and it's still fine. No, you're not likely to see temps in the 60s under an all-core load at high boosts, given that the heat is distributed across just 83.7mm² (and not even evenly - a lot of that area is L3 and interconnects, with the cores being ~half of the CCD), compared to 315mm² for Bulldozer. It stands to reason a) that Ryzen gets hotter overall as the heat sources are smaller and more concentrated, and b) that it's more difficult to dissipate this heat out through the IHS and into the cooler thanks to the concentration of heat. Given the density of these chips, what they're able to do with them is highly impressive.
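To put rough numbers on that (using the PPT and die-area figures above, and ~200 W as a ballpark for a heavily loaded FX, so order-of-magnitude only):

```python
# Rough average heat flux comparison using the figures above.
# Wattages and areas are approximate; the point is the ratio, not the decimals.

def heat_flux(watts, area_mm2):
    """Average heat flux in W/mm^2 across the given die area."""
    return watts / area_mm2

zen3_ccd = heat_flux(138, 83.7)   # 5800X: 138 W PPT into one ~83.7 mm^2 CCD
bulldozer = heat_flux(200, 315)   # heavily loaded FX (~200 W guess) on a ~315 mm^2 die

print(f"Zen 3 CCD: {zen3_ccd:.2f} W/mm^2")   # ~1.65
print(f"Bulldozer: {bulldozer:.2f} W/mm^2")  # ~0.63
print(f"Ratio:     {zen3_ccd / bulldozer:.1f}x")
```

That's roughly two and a half times the average heat flux, before even accounting for the cores being packed into about half the CCD.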
However, if it really is that hard to cool well, then obsessing over boost clocks is a waste of time. I personally think PL and PPT values should be abandoned, as nobody really cares about those when cooling a CPU; instead there could be a temperature limiter which reduces boost speed at a certain set temperature. That would be much easier to set up than a vague TDP, which means almost nothing to the end user.
... there is. CPUs throttle if they get too hot. Boost is dependent on thermals as well as power. But the thermal throttle points are pretty high, typically around 95-100°C - thankfully, as those are entirely safe temperatures for the CPU, and anything lower would just be giving up performance for no reason. If you're asking for CPUs to throttle at temperatures lower than this, I suggest you take a step back and re-think a bit.

This situation just isn't good. It feels like a lot of performance is being lost by sticking to a too-low TDP spec or by designing around a 65-watt cooler. If an enthusiast buys an Intel chip today and invests in a better-than-stock cooler, they can easily gain a lot of performance. The question is whether that's gaining performance or just unchoking the chip from the stupid Intel spec. At a time when the i5 11400F has a base speed of 2.6 GHz and a maximum boost clock of 4.4 GHz, I would say that if you actually stick to the 65-watt TDP (turbo boost off, as Intel specifies that TDP applies at base clock speed), you'd be losing a bit less than half of the CPU's performance to hit that 65 watts. In real-life loads you would still get closer to 3.4 GHz even at a 65-watt limit with turbo on, but it only takes one heavy task to keep the CPU at base speed to fit into that tiny 65-watt power budget. BOINC might be a heavy enough load that you never see more than 2.6 GHz on that chip, whereas at a higher power budget it could be running at 4 GHz on all cores. That's a lot of performance loss on chips which, other than the stupid TDP spec, can perform much better, provided you use an aftermarket tower cooler. Even a 92 mm tower cooler would likely be enough for an i5 with unlocked PL values.
Again: please stop treating TDP as if it is a consumer-facing denomination of power draw. Did you at all read the rest of this thread? I agree that it would make more sense for a consumer-oriented 10400F-like SKU to have a higher base clock and a 95W TDP, but that would just add another SKU to an already complex lineup and make inventory-keeping and binning more challenging for Intel, but more importantly, for distributors and retailers. There are already 19 SKUs for Rocket Lake - and that doesn't even include i3s or Pentiums! Adding higher base clock SKUs for consumers to replace the 65W SKUs would just make a mess of things (especially if you start wanting both F and non-F versions of those as well). And OEMs wouldn't use those, as they wouldn't be able to fit them into their 65W thermal designs, meaning they couldn't just cut the 65W SKUs either.

The issue here isn't low TDPs or low base clocks, it's the lack of predictability of performance.
And people seem to overlook another Intel CPU line, the T-series chips, which are rated at 35 watts. The i9 10900T was rated at 35 watts, and to achieve that it has a base speed of 1.9 GHz and a maximum boost clock of 4.6 GHz. In practice you won't ever see it running at base speed or at TDP; for Intel chips, the maximum all-core boost clock is essentially the new base clock. And those T chips were really bad at their job, as they hardly saved any power compared to the non-T versions, thanks to stupidly high PL2 values. Bullshit like that destroyed any value of a separate T SKU. What's the point of getting the T version when you can get the non-T version and then set the TDP to whatever you like?
T SKUs are binned (slightly) better than higher TDP SKUs, allowing them to run at slightly higher base clocks at 35W. Sure, you could get lucky and get a great K or non-K chip and match it, but there's no guarantee. But T chips are rarely sold at retail at all (a few shops carry them intermittently at best), and are only targeted towards uSFF OEM solutions like ThinkStation Tinys and Optiplex uSFFs. They might consume more than 35W even there - if thermals and power delivery allows them to - but the entire point is that they constitute a discrete class of CPU - ones designed for very low power, small form factor applications - that OEMs require for their large-scale business, education and government customers. You could of course run a 65W CPU in the same chassis, but it would constantly be thermal throttling, which isn't ideal, and the fan would be running fast even at idle. Instead, the 35W SKU lets them design tiny cases while the PL2 lets these CPUs stay as responsive as their higher power counterparts in desktop workloads, where very high clocks matter the most.

Again, you're treating an OEM-oriented designation (TDP) as well as now, an OEM-oriented product series as if they were consumer-directed. They aren't. It stands to reason things will be misunderstood and misinterpreted when things meant for one group are fit into the frameworks of understanding of the other.
And for that matter, what's the point of getting a K SKU when you can just ramp up the PL values on a lower-end chip and have it end up almost as good as the K version? A non-K chip wouldn't even lose its warranty from having its PL values modified, since Intel let motherboard OEMs go wild.
That's indeed part of the issue here, with Intel's non-enforcement of power limits - that locked-down SKUs with high boost clocks can now essentially act as unlocked SKUs (there's no real OC headroom on top of stock boost anyhow) with unlocked power limits. That's essentially what this thread is about.
It's such a shitstorm that I don't even know what the least painful way to resolve it is anymore. Enforce a strict TDP? Raise the TDP? Get rid of PL2? Keep performance or accept losses? All this nonsense just makes me want to go back to the era of a single clock speed for everyone and be done with all this TDP bullshit. Let TDP be whatever is needed for the rated clock speed and be fine with the results, but computer OEMs wouldn't have any of that.
Accepting some performance variability is necessary in DIY - there will always be variables that manufacturers can't account for - but Intel still needs to define their specs more clearly and enforce them better. The current free-for-all among motherboard manufacturers is the core problem here, not TDPs or PL2 settings.
That's another thing nobody seems to talk about. My Ryzen 3 3100 runs at 72 °C in Cinebench with the stock cooler (50 W max power). The 3600 came very close to throttling even with a be quiet! Shadow Rock LP (that was cold to the touch) with a maxed out 88 W PPT.
Hm, I've seen quite a few discussions of various Zen2 and Zen3 Ryzens "running hot". My impression is that it's established knowledge in enthusiast circles at this point that higher thermal density Zen2 and Zen3 Ryzens are a bit difficult to cool.
If you think about it, spending a lot less money for a "T" CPU and unlocking its power limits is a much better deal than buying a "K" version and locking it to suit your available cooling capacity. ;) It's a shame T series are not available for DIY with 11th gen.
Aren't T-series SKUs - the few times you can find them at retail - typically the same price as K SKUs, if not more expensive?
To be honest, I've disagreed with boosting right from the start. Why would you not want peak performance with pre-designed power consumption all the time? The whole concept is flawed.
As with everything it's a compromise. But would you honestly want a CPU that either sacrificed a massive amount of responsiveness (seriously, the difference in desktop usage feel between, say, 3GHz and 4.5GHz is massive) for a reasonable max power draw, or one that demanded crazy cooling to work at all? That's just dumb to me. CPUs today are smart, as in they adapt to their workloads and environments. This allows for a far, far better balance of responsiveness, overall performance, and cooling needs than any previous solution. Is it perfect? Of course not. But it's flexible enough to adapt to a lot of shitty configurations while delivering the best possible OOB experience across the board. Why would you not want that?
72C in Cinebench? That means it will almost throttle in prime95. That's not good. I don't consider that my stuff has proper cooling if it can't run prime95 indefinitely, and in some cases prime95 and Furmark at the same time. I couldn't touch Ryzens with such problems. I'm not sure about your theory, but I know that I would try lapping those Ryzens. Maybe they are just uneven.
Throttling means running below base clock; not boosting quite as high is not the same thing.
I don't consider that my stuff has proper cooling if it can't run prime95 indefinitely, and in some cases prime95 and Furmark at the same time.
What does that mean? At what thermals? At what noise levels? At what power draws? Heck, the Threadripper 1920X in my partner's workstation ran Prime95 indefinitely with a clogged water cooler - at 600MHz and ~90°C, but it ran for as long as I wanted it to. Also, running power virus loads to check thermals is kind of useless, seeing how no real-world workload will create as much heat as P95 small FFT or FurMark. Heck, FurMark has a tendency to kill perfectly good GPUs due to overheating parts of the die. I really don't see the benefit of that.
I dunno. But I have one of the cheapest FM2+ boards from Gigabyte, I think it's an A68H-DS2 something. It handles an Athlon 760K at stock perfectly fine, and judging by the VRM temperatures it could handle it overclocked. The 760K is an FX-derived chip and even at stock it uses an obnoxious amount of power with all cores loaded (140-160 watts). All that board has is 4+1 bare phases, no heatsink. I also have some A68H-HD+ ASRock board and it handles an Athlon 845 perfectly fine too, again with bare 4+1 phases. So it's not that OEMs can't make a functional board with bare VRMs that can supply that wattage; I think they just downspecced these ones too much and therefore they overheat, thus turning an i5 11600K into an 11600.
Not really a valid comparison. VRMs care about amps, not watts, and older CPUs ran at much higher voltages. Those Athlon X4s ran at 1.5V or higher, which makes a dramatic difference compared to contemporary CPUs often running at 1-1.2V: 160W at 1.5V is 106.7A, while 160W at 1V is 160A and at 1.2V it's 133A. Both of those are significant increases for, say, a 4x50A power delivery setup.
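If anyone wants to sanity-check that themselves, the arithmetic is just I = P / V and then divide by the phase count. A tiny sketch (it ignores VRM conversion losses and assumes perfect current sharing, so real per-phase figures will be a bit higher):

```python
# Current seen by the VRM for a given package power and core voltage.
# Simplified: ignores conversion losses and assumes perfect phase current sharing.

def vrm_current(package_watts, vcore, phases=4):
    total_amps = package_watts / vcore
    return total_amps, total_amps / phases

for vcore in (1.5, 1.2, 1.0):
    total, per_phase = vrm_current(160, vcore, phases=4)
    print(f"160 W at {vcore:.1f} V -> {total:5.1f} A total, {per_phase:4.1f} A per phase")
```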
 

HTC

Joined
Apr 1, 2008
Messages
4,604 (0.78/day)
Location
Portugal
System Name HTC's System
Processor Ryzen 5 2600X
Motherboard Asrock Taichi X370
Cooling NH-C14, with the AM4 mounting kit
Memory G.Skill Kit 16GB DDR4 F4 - 3200 C16D - 16 GTZB
Video Card(s) Sapphire Nitro+ Radeon RX 480 OC 4 GB
Storage 1 Samsung NVMe 960 EVO 250 GB + 1 3.5" Seagate IronWolf Pro 6TB 7200RPM 256MB SATA III
Display(s) LG 27UD58
Case Fractal Design Define R6 USB-C
Audio Device(s) Onboard
Power Supply Corsair TX 850M 80+ Gold
Mouse Razer Deathadder Elite
Software Ubuntu 19.04 LTS
This is a cheap board without even a basic chunk of metal on those VRMs.
Most of these boards should've been labeled as i3-capable. They don't even run i5s properly, let alone i7s/i9s.

What are vendors and Intel thinking... :rolleyes: (?)

Let me see if I get this straight:


They were thinking????
 
Joined
May 2, 2017
Messages
7,762 (3.04/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
I bought that board with the CPU in 2018, and it's the FM2+ platform, not FM2. It's not 10 years old. Zen+ was already out at that time.

BTW, Arctic Silver 5 is one of the worst-performing thermal pastes nowadays; it's no surprise that even some basic paste would beat it. It's literally dead last in tests.
When you bought it is irrelevant - old stock is still sold until companies write it off and bin it. The Athlon X4 760K is an FM2 (not +, though it's also compatible with + motherboards) CPU from October 2012.
 
Joined
Sep 3, 2019
Messages
2,981 (1.76/day)
Location
Thessaloniki, Greece
System Name PC on since Aug 2019, 1st CPU R5 3600 + ASUS ROG RX580 8GB >> MSI Gaming X RX5700XT (Jan 2020)
Processor Ryzen 9 5900X (July 2022), 150W PPT limit, 79C temp limit, CO -9~14
Motherboard Gigabyte X570 Aorus Pro (Rev1.0), BIOS F37h, AGESA V2 1.2.0.B
Cooling Arctic Liquid Freezer II 420mm Rev7 with off center mount for Ryzen, TIM: Kryonaut
Memory 2x16GB G.Skill Trident Z Neo GTZN (July 2022) 3600MHz 1.42V CL16-16-16-16-32-48 1T, tRFC:288, B-die
Video Card(s) Sapphire Nitro+ RX 7900XTX (Dec 2023) 314~465W (387W current) PowerLimit, 1060mV, Adrenalin v24.3.1
Storage Samsung NVMe: 980Pro 1TB(OS 2022), 970Pro 512GB(2019) / SATA-III: 850Pro 1TB(2015) 860Evo 1TB(2020)
Display(s) Dell Alienware AW3423DW 34" QD-OLED curved (1800R), 3440x1440 144Hz (max 175Hz) HDR1000, VRR on
Case None... naked on desk
Audio Device(s) Astro A50 headset
Power Supply Corsair HX750i, 80+ Platinum, 93% (250~700W), modular, single/dual rail (switch)
Mouse Logitech MX Master (Gen1)
Keyboard Logitech G15 (Gen2) w/ LCDSirReal applet
Software Windows 11 Home 64bit (v23H2, OSB 22631.3155)
Not really a valid comparison. VRMs care about amps, not watts, and older CPUs ran at much higher voltages. Those Athlon X4s ran at 1.5V or higher, which makes a dramatic difference compared to contemporary CPUs often running at 1-1.2V: 160W at 1.5V is 106.7A, while 160W at 1V is 160A and at 1.2V it's 133A. Both of those are significant increases for, say, a 4x50A power delivery setup.
When you bought it is irrelevant - old stock is still sold until companies write it off and bin it. The Athlon X4 760K is an FM2 (not +, though it's also compatible with + motherboards) CPU from October 2012.
That was exactly my point. The tech is 10 years old.
 
Joined
Sep 20, 2018
Messages
1,444 (0.71/day)
The funny thing about my PC case is that even though it's a slim one that only accepts low profile graphics cards and CPU coolers, micro-ATX motherboards aren't an issue. I'm using an Asus B550M TUF Wifi at the moment, and I would be a bit sad to swap it for something else (unless it's of the same quality as this one).

If I go Intel again, I want to be looking at something similar - the Asus B560M TUF Wifi, or the Asus Z590M Prime are the ones with similar-looking quality and affordability available in my area. As for CPU, I was thinking about a Core i7-11700 non-K and locking its PL1 to 65 W, and PL2 to whatever I can cool. Hopefully, the Asus boards I looked at (or something else) would let me do that, even if it's not their default setting.
The Asus TUF boards don't have VRMs as good as Gigabyte's or MSI's upper-mid-tier boards; those are equipped with 12 50-amp stages, whereas the TUF only has 8 50-amp stages.
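To put those stage counts in perspective, here's the nominal headroom math (very rough: stage ratings are peak silicon ratings, and sustained capability is set by thermals, so treat these as upper bounds rather than what the boards can actually hold):

```python
# Nominal headroom from stage count x per-stage rating, and the package power
# that would correspond to at a typical load voltage. Real sustained limits
# are thermal, so these are upper bounds, not what a board can hold forever.

def nominal_capacity(stages, amps_per_stage=50, vcore=1.25):
    amps = stages * amps_per_stage
    return amps, amps * vcore   # total rated amps, equivalent watts at vcore

for name, stages in (("8-stage TUF", 8), ("12-stage upper mid-tier", 12)):
    amps, watts = nominal_capacity(stages)
    print(f"{name}: {amps} A rated, ~{watts:.0f} W at 1.25 V")
```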
 
Joined
May 8, 2021
Messages
1,978 (1.82/day)
Location
Lithuania
System Name Shizuka
Processor Intel Core i5 10400F
Motherboard Gigabyte B460M Aorus Pro
Cooling Scythe Choten
Memory 2x8GB G.Skill Aegis 2666 MHz
Video Card(s) PowerColor Red Dragon V2 RX 580 8GB ~100 watts in Wattman
Storage 512GB WD Blue + 256GB WD Green + 4TB Toshiba X300
Display(s) BenQ BL2420PT
Case Cooler Master Silencio S400
Audio Device(s) Topping D10 + AIWA NSX-V70
Power Supply Chieftec A90 550W (GDP-550C)
Mouse Steel Series Rival 100
Keyboard Hama SL 570
Software Windows 10 Enterprise
Inspired by (and linking to) the same thread that you're posting in? :laugh:
It was originally a separate thread, but for some reason it was merged with this one.

My 5800X easily exceeds its 4.7GHz spec in desktop usage - two cores boost to a reported 4.841 (likely 4.85 minus some base clock measurement inaccuracy, as the 100MHz base clock is reported as 99.8), with the remaining six all hitting 4.791. That's at entirely stock settings, though of course these clocks are with extremely low % and light loads. In Prime95 (blend, so not the hottest/heaviest workload) it fluctuates between 4.475 and 4.55GHz all-core at a PPT of 122-127W (well below its 138W limit). It also runs up to ~85°C under that load - but then my water loop is configured for silence, bases pump and fan rpms only off water temperatures, and ramps very slowly.

(Testing single core speeds is much more difficult as the Windows scheduler will shift heavy processes around from core to core rapidly in order to alleviate heat build-up - so running Blend with a single thread just results in a reported ~20% load across several cores, with clock speeds and which cores are under load fluctuating rapidly.)
That's with a custom loop? Oh god.


Well ... thermal density is massively increased. The 5800X is the most difficult to cool (105W TDP/138W PPT across a single CCD, compared to 105W TDP/144W PPT across two CCDs for the 5900X and 5950X), and it's still fine. No, you're not likely to see temps in the 60s under an all-core load at high boosts, given that the heat is distributed across just 83.7mm² (and not even evenly - a lot of that area is L3 and interconnects, with the cores being ~half of the CCD), compared to 315mm² for Bulldozer. It stands to reason a) that Ryzen gets hotter overall as the heat sources are smaller and more concentrated, and b) that it's more difficult to dissipate this heat out through the IHS and into the cooler thanks to the concentration of heat. Given the density of these chips, what they're able to do with them is highly impressive.
Well, I'm not really impressed by the thermals of Ryzen chips. You could cool FX chips at 5 GHz and keep them under 62 °C with just a big air cooler. Stock 95-watt FX chips could be passively cooled with the same air cooler with its fans removed. And now you need a big water cooler just to keep a Ryzen working at stock clocks. That's a fail to me. The last time AMD needed a water cooler was the FX 9590, and that was just a 120 mm AIO.


... there is. CPUs throttle if they get too hot. Boost is dependent on thermals as well as power. But the thermal throttle points are pretty high, typically around 95-100°C - thankfully, as those are entirely safe temperatures for the CPU, and anything lower would just be giving up performance for no reason. If you're asking for CPUs to throttle at temperatures lower than this, I suggest you take a step back and re-think a bit.
Keeping a CPU at 90 or 100 °C isn't acceptable. The fact that it can survive such temperatures just means there won't be any lasting damage if it reaches them occasionally. I remember some Intel thermal engineer posting that their 14 nm chips could survive 1.4 volts at up to 80 °C long term, but violate that voltage or cooling and electromigration will be bad.


Again: please stop treating TDP as if it is a consumer-facing denomination of power draw.
Never. Intel's PL1 is how they define TDP; for once they finally got their shit together in this one aspect.


Did you at all read the rest of this thread? I agree that it would make more sense for a consumer-oriented 10400F-like SKU to have a higher base clock and a 95W TDP, but that would just add another SKU to an already complex lineup and make inventory-keeping and binning more challenging for Intel, but more importantly, for distributors and retailers. There are already 19 SKUs for Rocket Lake - and that doesn't even include i3s or Pentiums! Adding higher base clock SKUs for consumers to replace the 65W SKUs would just make a mess of things (especially if you start wanting both F and non-F versions of those as well). And OEMs wouldn't use those, as they wouldn't be able to fit them into their 65W thermal designs, meaning they couldn't just cut the 65W SKUs either.
Well, that's obvious, but what matters now is what they will do with Alder Lake.


T SKUs are binned (slightly) better than higher TDP SKUs, allowing them to run at slightly higher base clocks at 35W. Sure, you could get lucky and get a great K or non-K chip and match it, but there's no guarantee. But T chips are rarely sold at retail at all (a few shops carry them intermittently at best), and are only targeted towards uSFF OEM solutions like ThinkStation Tinys and Optiplex uSFFs. They might consume more than 35W even there - if thermals and power delivery allows them to - but the entire point is that they constitute a discrete class of CPU - ones designed for very low power, small form factor applications - that OEMs require for their large-scale business, education and government customers. You could of course run a 65W CPU in the same chassis, but it would constantly be thermal throttling, which isn't ideal, and the fan would be running fast even at idle. Instead, the 35W SKU lets them design tiny cases while the PL2 lets these CPUs stay as responsive as their higher power counterparts in desktop workloads, where very high clocks matter the most.

Again, you're treating an OEM-oriented designation (TDP) as well as now, an OEM-oriented product series as if they were consumer-directed. They aren't. It stands to reason things will be misunderstood and misinterpreted when things meant for one group are fit into the frameworks of understanding of the other.
First, I highly doubt that T chips are actually better bins of non-T chips, and BIOSes often allow you to set your own PL values anyway.


Accepting some performance variability is necessary in DIY - there will always be variables that manufacturers can't account for - but Intel still needs to define their specs more clearly and enforce them better. The current free-for-all among motherboard manufacturers is the core problem here, not TDPs or PL2 settings.
DIY market was just fine without TDP shenanigans. Even chips with one clock speed were decently acceptable and didn't have problems. I'm not a fan of turbo and other power tweaking. One static clock with downclocking for power savings seems to be the best design so far.

Throttle means run below base clock. It might not boost quite as high. That is not the same.
I know full well that it's not exactly a throttle in legal terms, but realistically you lose performance, because your cooler can't keep up. You sacrifice performance to not damage the chip.

What does that mean? At what thermals? At what noise levels? At what power draws? Heck, the Threadripper 1920X in my partner's workstation ran Prime95 indefinitely with a clogged water cooler - at 600MHz and ~90°C, but it ran for as long as I wanted it to.
Obviously at below the maximum manufacturer-specified temperature, at maximum clock speed, and at whatever my ears tell me is an acceptable noise level, which tends to be somewhere up to 1200 rpm most of the time, and preferably no more than 1000 rpm. Power draw depends on the chip and is generally not a concern, unless it's very high. Your partner's TR system would have failed this test spectacularly.

Also, running power virus loads to check thermals is kind of useless, seeing how no real-world workload will create as much heat as P95 small FFT or FurMark. Heck, FurMark has a tendency to kill perfectly good GPUs due to overheating parts of the die. I really don't see the benefit of that.
prime95 is a perfectly realistic workload; some people calculate primes for weeks. And let's not get into the Furmark shit again. I will be very clear: if a card can't handle some type of workload, then it's either badly tuned or has an inadequate cooling solution. I don't care that it kills some badly engineered cards, as no properly made card should die in Furmark. Also, judging by power figures, running Furmark is not much different from mining or running MilkyWay@Home. My RX 580 can handle Furmark just fine with vBIOS mods. It now can't reach the 80s and barely breaks into the 70s in Furmark. The RX 560 that I have in the other machine fails to reach the 70s.

Not really a valid comparison. VRMs care about amps, not watts, and older CPUs ran at much higher voltages. Those Athlon X4s ran at 1.5V or higher, which has pretty dramatic effects compared to contemporary CPUs often running at 1-1.2V. 160W at 1.5V is 106.7A. 160W at 1V is 160A, and at 1.2V it's 133A. Both of those are significant increases for, say, a 4x50A power delivery setup.
And watts are amps*volts, therefore VRMs care about watts. And no, those Athlons didn't run at more than 1.5 volts. The Athlon X4 870K and Athlon X4 845 are both limited to 1.5V or 1.485V. No Athlon came out with more than that. Also, most of that voltage is needed for turbo to work, so if you disable turbo, you can get massive voltage reductions.
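(For anyone who wants to sanity-check the arithmetic the two posts above are arguing over, here's a quick sketch. The 160 W figure and the 4x50 A phase example come from the post above; everything else is just I = P / V, ignoring VRM conversion losses and uneven current sharing, so treat it as back-of-the-envelope.)

```python
# Quick sanity check of the amps-vs-watts numbers above.
# Current drawn from the VRM is roughly package power divided by core voltage
# (I = P / V), and an n-phase VRM carries about I / n per phase if sharing is even.

def vrm_current(power_w, vcore_v, phases=4):
    total_amps = power_w / vcore_v
    return total_amps, total_amps / phases

for vcore in (1.5, 1.2, 1.0):
    total, per_phase = vrm_current(160, vcore)
    print(f"160 W at {vcore:.1f} V -> {total:.1f} A total, "
          f"~{per_phase:.1f} A per phase on a 4x50 A design")
```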

When you bought it is irrelevant - old stock is still sold until companies write it off and bin it. The Athlon X4 760K is an FM2 (not +, though it's also compatible with + motherboards) CPU from October 2012.
Nah, it's new stock. I have loads of chips for FM2+ boards. Athlon 760K is just one of them. I bought it for unique reasons:

Previously that computer had an 870K, which was made in 2015, and the Athlon 845 was made in 2016. Both are nowhere near being 10 years old. Several motherboards had extended manufacturing runs for some reason, and thus you could buy them even in 2018 and probably in 2019. The Athlon 845 is a unicorn chip, somewhat rare as it was released at the end of the FM2+ platform's lifespan, and it had a Carrizo core, the last architectural improvement on the AM3+ and FM2+ platforms. The Athlon 870K is also a late production model, but is a better-binned 860K. Its availability was poor and it mostly sold after FM2+ became obsolete. There were a bunch of other rare CPUs released in 2016 for the FM2+ platform, like the A6 7470K or A10 7890K.
 
Joined
Jan 14, 2019
Messages
9,886 (5.12/day)
Location
Midlands, UK
System Name Nebulon-B Mk. 4
Processor AMD Ryzen 7 7800X3D
Motherboard MSi PRO B650M-A WiFi
Cooling be quiet! Dark Rock 4
Memory 2x 24 GB Corsair Vengeance EXPO DDR5-6000
Video Card(s) Sapphire Pulse Radeon RX 7800 XT
Storage 2 TB Corsair MP600 GS, 2 TB Corsair MP600 R2, 4 + 8 TB Seagate Barracuda 3.5"
Display(s) Dell S3422DWG, 7" Waveshare touchscreen
Case Kolink Citadel Mesh black
Power Supply Seasonic Prime GX-750
Mouse Logitech MX Master 2S
Keyboard Logitech G413 SE
Software Windows 10 Pro
Benchmark Scores Cinebench R23 single-core: 1,800, multi-core: 18,000. Superposition 1080p Extreme: 9,900.
The problem for reviewers is they essentially have to choose - either Intel spec or motherboard default, whatever that is - and depending on their resources, they are partially at the mercy of motherboard makers, as not everyone can afford to buy an expensive motherboard for a test platform. Most serious reviewers are pretty transparent about this as well as their reasoning behind whatever choice they make. But YouTubers generally don't count as 'serious reviewers'. GN is the only real exception (though perhaps Level1Techs should be included?). Other than that there's AnandTech, TPU, and a few other sites doing in-depth written reviews of high quality, including discussions of methodologies and the rationales behind them.
If I had to write a review, I'd try to do it both ways - like the guys here at TPU do. When reviewing, you need to consider that not everyone who reads your review will want the same out of their system.

Not the same. They adjust their boost bins to keep within power and thermal limits, and mainly the latter. Different workloads draw different amounts of power, so 1950MHz and 1860MHz might both draw the same amount of power depending on what work is being done.
Exactly. Throttling means dropping below base clock, which (coming back to the original topic) only that one ASRock motherboard does in HU's latter video. All the rest are within spec, however vague that spec is.

I agree that they are difficult to find, but they're not that expensive. A bit of a premium, sure. I got my 4650G from a reputable Ebay store, and have had zero issues.
I saw a 4750G on ebay a couple weeks ago for about £450. As an OEM CPU, it comes with no box and no warranty. I got the Asus B560M TUF motherboard and the i7-11700 for the same price brand new. We'll see what happens when the 5000G/GE series come out for DIY. I might buy one just to test it, and sell the Core i7 if it's any good. :p

Good luck dealing with its stock 224W PL2, I guess? :p Should be decent enough as long as you're willing to tune things manually (or are willing to deal with short-term (28s tau) heat loads well above TDP).
Oh no, I'm definitely not gonna run a 224 W PL2. :laugh: I intend to do as much tweaking as necessary to make it work in my thin SFF case. I want to find the perfect balance. :)
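(If it helps to picture what a 224 W PL2 with a 28 s tau actually does, below is a deliberately simplified model of Intel's turbo budget: the chip may pull up to PL2 while an exponentially weighted moving average of package power is still under PL1, then falls back to PL1. The 224 W and 28 s figures are the stock values mentioned above; the 65 W PL1 is the i7-11700's TDP; the control loop itself is a rough approximation, not Intel's actual firmware behaviour.)

```python
# Deliberately simplified model of Intel's PL1/PL2/tau turbo budget
# (exponentially weighted moving average of package power) - a sketch,
# not the real firmware algorithm.
PL1, PL2, TAU = 65.0, 224.0, 28.0   # watts, watts, seconds (stock i7-11700 values)
DT = 1.0                             # control-loop step, seconds

ewma = 0.0                           # assume the package was idle beforehand
for t in range(0, 121):
    # Heavy all-core load: the CPU asks for as much power as it is allowed.
    drawn = PL2 if ewma < PL1 else PL1
    ewma += (drawn - ewma) * (DT / TAU)
    if t % 10 == 0:
        print(f"t={t:3d}s  drawing {drawn:5.1f} W  (EWMA {ewma:5.1f} W)")
```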

My 5800X easily exceeds its 4.7GHz spec in desktop usage - two cores boost to a reported 4.841 (likely 4.85 minus some base clock measurement inaccuracy, as the 100MHz base clock is reported as 99.8), with the remaining six all hitting 4.791. That's at entirely stock settings, though of course these clocks are with extremely low % and light loads. In Prime95 (blend, so not the hottest/heaviest workload) it fluctuates between 4.475 and 4.55GHz all-core at a PPT of 122-127W (well below its 138W limit). It also runs up to ~85°C under that load - but then my water loop is configured for silence, bases pump and fan rpms only off water temperatures, and ramps very slowly.

(...)

Well ... thermal density is massively increased. The 5800X is the most difficult to cool (105W TDP/138W PPT across a single CCD, compared to 105W TDP/144W PPT across two CCDs for the 5900X and 5950X), and it's still fine. No, you're not likely to see temps in the 60s under an all-core load at high boosts. Given that that heat is distributed across just 83.7mm² (and not even evenly - a lot of that area is L3 and interconnects, with the cores being ~half of the CCD), compared to 315mm² for Bulldozer, it stands to reason that a) Ryzen gets hotter overall as the heat sources are smaller and more concentrated, and b) it's more difficult to dissipate this heat out through the IHS and into the cooler thanks to the concentration of heat. Given the density of these chips, what they're able to do with them is highly impressive.
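(To put rough numbers on the density argument: heat flux is just power over area. The Ryzen figures are the PPT and CCD-size numbers quoted above; the 125 W Bulldozer reference is the FX-8150's TDP, which I'm adding here as a point of comparison; and the whole thing is back-of-the-envelope, since neither die is heated evenly and some package power goes through the IO die.)

```python
# Back-of-the-envelope heat flux (W per mm^2) from the figures above.
chips = {
    "Ryzen 7 5800X (138 W PPT, one 83.7 mm^2 CCD)": (138, 83.7),
    "Ryzen 9 5950X (144 W PPT, two CCDs)":          (144, 2 * 83.7),
    "FX-8150 (125 W TDP, 315 mm^2 die)":            (125, 315),
}
for name, (watts, area_mm2) in chips.items():
    print(f"{name}: ~{watts / area_mm2:.2f} W/mm^2")
```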

(...)

Hm, I've seen quite a few discussions of various Zen2 and Zen3 Ryzens "running hot". My impression is that it's established knowledge in enthusiast circles at this point that higher thermal density Zen2 and Zen3 Ryzens are a bit difficult to cool.
Having gone through the 3100, 5950X, 3600 and then back to the 3100, that's my conclusion too. I guess you won't damage these modern chips with high temperatures as much as you would for example an FX CPU. I remember those having maximum recommended temperatures of 61 °C, while Tjmax is usually around 100 °C these days. I also remember when Navi came out and everybody freaked out of the newly reported junction temperature reaching 100 °C. AMD had to make a statement that junction temp is totally fine up to 110 °C. Still, modern Ryzens get hot even with low power consumption, and are more difficult to cool than chips of yesteryear.

As with everything it's a compromise. But would you honestly want a CPU that either sacrificed a massive amount of responsiveness (seriously, the difference in desktop usage feel between, say, 3GHz and 4.5GHz is massive) for a reasonable max power draw, or one that demanded crazy cooling to work at all? That's just dumb to me. CPUs today are smart, as in they adapt to their workloads and environments. This allows for a far, far better balance of responsiveness, overall performance, and cooling needs than any previous solution. Is it perfect? Of course not. But it's flexible enough to adapt to a lot of shitty configurations while delivering the best possible OOB experience across the board. Why would you not want that?
I'm not quite sure that's the case. My Ryzen 3 3100 basically runs at 3.85-3.9 GHz all the time, independent of workload, as it never maxes out its power limit. Hungrier chips with more cores could do the same with cTDP. If you want full power, set cTDP to the highest, and enjoy maximum clock speed all the time. You want low thermals? Just turn your cTDP down to have your clocks and voltages decrease too. You don't even need different SKUs with different TDP ratings for this.
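(For what it's worth, the "smart" behaviour being argued about here boils down to a feedback loop: every control interval the CPU nudges its clock up or down so it stays under whichever power and thermal limits are in force, which is also why lowering cTDP pulls clocks and voltage down automatically. Below is a toy sketch of that idea; the load line, thermal model and limit values are invented for illustration and have nothing to do with any real firmware.)

```python
# Toy boost governor: raise the clock while under the power and thermal limits,
# back off otherwise. The load line and thermal model are invented numbers;
# real CPUs do this in firmware, far faster and with far more inputs.
def governor(power_limit_w, temp_limit_c, steps=100):
    clock = 3000                       # MHz, arbitrary starting point
    for _ in range(steps):
        power = 20 + 0.03 * clock      # pretend power rises linearly with clock
        temp = 35 + 0.45 * power       # pretend temperature rises linearly with power
        if power < power_limit_w and temp < temp_limit_c:
            clock += 25                # headroom left: boost one bin
        else:
            clock -= 25                # over a limit: back off one bin
    return round(clock), round(power), round(temp)

print(governor(power_limit_w=88, temp_limit_c=95))    # tighter, cTDP-down-like limit
print(governor(power_limit_w=142, temp_limit_c=95))   # looser limit, ends up thermally bound
```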

72C in Cinebench? That means it will almost throttle in prime95. That's not good. I don't consider that my stuff has proper cooling if it can't run prime95 indefinitely, and in some cases prime95 and Furmark at the same time. I couldn't touch Ryzens with such problems. I'm not sure about your theory, but I know that I would try lapping those Ryzens. Maybe they are just uneven.
I don't test with prime95. I use my PC for gaming, so I don't need such a heavy workload to test for CPU thermals. Cinebench is just fine.
Same with GPUs: I stay clear of Furmark, and use a Superposition loop, or 3DMark stability test instead.

Oh I so agree here, I loved my FX 6300. It was my first chip that I pushed to 5.288 GHz and the first chip to turn a motherboard's VRM area brown in the process. I gotta say that I didn't really care if it destroyed motherboards, as long as it was fun to overclock. It also undervolted great and could run passively cooled with a Scythe Mugen 4 heatsink. I kept using it until 2019, at which point its performance just wasn't good enough anymore. Apparently, FX chips lasted a long time and were still surprisingly not bad even in 2019:
To be honest, I thought my 8150 was a difficult chip to cool, though slapping a Hyper 212 on it was just fine. Now I reconsider my opinion with modern Ryzens. :laugh: FX was also my first platform where I burned my fingers just by touching the VRM heatsink.
As for performance, I wasn't really happy with it gaming-wise. Though I think these old FXes might be doing a little better with the passing of time and games needing more cores/threads to run well nowadays.
 
Joined
May 8, 2021
Messages
1,978 (1.82/day)
Location
Lithuania
System Name Shizuka
Processor Intel Core i5 10400F
Motherboard Gigabyte B460M Aorus Pro
Cooling Scythe Choten
Memory 2x8GB G.Skill Aegis 2666 MHz
Video Card(s) PowerColor Red Dragon V2 RX 580 8GB ~100 watts in Wattman
Storage 512GB WD Blue + 256GB WD Green + 4TH Toshiba X300
Display(s) BenQ BL2420PT
Case Cooler Master Silencio S400
Audio Device(s) Topping D10 + AIWA NSX-V70
Power Supply Chieftec A90 550W (GDP-550C)
Mouse Steel Series Rival 100
Keyboard Hama SL 570
Software Windows 10 Enterprise
I don't test with prime95. I use my PC for gaming, so I don't need such a heavy workload to test for CPU thermals. Cinebench is just fine.
Same with GPUs: I stay clear of Furmark, and use a Superposition loop, or 3DMark stability test instead.
On Comet Lake prime95 was the best stability test. Different platforms have slightly different best stability testing tools. Generally higher power consumption at wall tells a lot about which tool tests more of the chip better.

For GPU I test stability in heaven or tropics, and then separately test thermal in Furmark. For me, any chip should be absolutely stable and have thermals in check. Anything less is never acceptable. Then again, I actually tweak GPU permanently if I attempt to actually tweak it. My RX 580 is BIOS modded with my own custom tune to reduce wattage and noise. I managed to achieve small undervolt too.
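(Since BIOS-level undervolting came up: a common first-order estimate is that dynamic power scales with frequency times voltage squared, P ≈ C·f·V². The voltages, clock and 100 W baseline below are illustrative guesses, not my measured values, just to show the shape of the estimate.)

```python
# First-order estimate of dynamic-power savings from an undervolt: P ~ f * V^2.
# Voltages, clock and the 100 W baseline are illustrative, not measured values.
def scaled_power(p_base, v_base, v_new, f_base, f_new):
    return p_base * (f_new / f_base) * (v_new / v_base) ** 2

baseline_w = 100.0   # rough board-power target, in the spirit of the Wattman tune above
estimate = scaled_power(baseline_w, v_base=1.15, v_new=1.05, f_base=1340, f_new=1340)
print(f"Same clock, 1.15 V -> 1.05 V: roughly {estimate:.0f} W")   # ~83 W
```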

The general rule for stability testing is to get an idea whether system is stable at maximum imaginable load, it doesn't matter if it's realistic or not, because one day you might need a similar load to work perfectly. And once stability testing is done and thermals are in check, it's still advisable to increase voltage a bit to leave some room for any unexpected voltage fluctuation or just aging of chip.


To be honest, I thought my 8150 was a difficult chip to cool, though slapping a Hyper 212 on it was just fine. Now I reconsider my opinion with modern Ryzens. :laugh: FX was also my first platform where I burned my fingers just by touching the VRM heatsink.
As for performance, I wasn't really happy with it gaming-wise. Though I think these old FXes might be doing a little better with the passing of time and games needing more cores/threads to run well nowadays.
I swear to god, those FX chips were a blast to overclock. Over 5GHz on air was super intoxicating. People outside of overclocking circles were always super impressed by that, probably not so much nowadays when stock CPUs do that. Anyway, my last board for FX had a very hot northbridge for some reason and it could fry fingers. My first board for FX was way beyond finger-burning hot. I recorded 159C at the VRMs, and that's the same board that I said got a brown stain. I achieved 5.288GHz with an Asrock 970 Pro3 R2.0 board, which didn't have any VRM cooling. I certainly wouldn't want to touch those bare VRMs. Anyway, not trying to reach 5GHz on an FX is a crime and impossible to resist. Even if it's a suicide run, it's so much fun. I achieved that highest overclock with just a Cooler Master Hyper 103 cooler, which is worse than a 212 Evo. Cooling didn't really matter as I was limited by the motherboard's VRM capabilities. I set the voltage to 1.72V for the FX 6300 with the other 2 modules disabled and the highest LLC that board had. It didn't throttle, but it lost any efficiency and scaling at that point. Surprisingly, the VRMs weren't in the 160s. Once I got validation, I just used the FX at stock settings for another 2-3 years until it died. With any decent board I probably could have achieved 5.5-5.7 GHz on air with the same cooler at the expense of disabling the chip's thermal protection. With an actually adequate cooler, 6GHz could have been possible on air for a suicide run.

All I can say is that FX chips have totally spoiled me and made my overclocking expectations really high, if I ever attempt that on some other platform. To date, there's no better overclocking chip than the FX series. And on top of that, FX chips were dirt cheap, so the financial loss in case of disaster wouldn't be big. Ryzens just can't match FX in terms of their prices. I still remember 130 Euros for 6 cores and 180-200 Euros for 8 cores, Ryzen never had value close to FX and they don't really overclock, unless you deal with lame turbo.
 
Joined
Jan 14, 2019
Messages
9,886 (5.12/day)
Location
Midlands, UK
System Name Nebulon-B Mk. 4
Processor AMD Ryzen 7 7800X3D
Motherboard MSi PRO B650M-A WiFi
Cooling be quiet! Dark Rock 4
Memory 2x 24 GB Corsair Vengeance EXPO DDR5-6000
Video Card(s) Sapphire Pulse Radeon RX 7800 XT
Storage 2 TB Corsair MP600 GS, 2 TB Corsair MP600 R2, 4 + 8 TB Seagate Barracuda 3.5"
Display(s) Dell S3422DWG, 7" Waveshare touchscreen
Case Kolink Citadel Mesh black
Power Supply Seasonic Prime GX-750
Mouse Logitech MX Master 2S
Keyboard Logitech G413 SE
Software Windows 10 Pro
Benchmark Scores Cinebench R23 single-core: 1,800, multi-core: 18,000. Superposition 1080p Extreme: 9,900.
On Comet Lake prime95 was the best stability test. Different platforms have slightly different best stability testing tools. Generally higher power consumption at wall tells a lot about which tool tests more of the chip better.

For GPU I test stability in heaven or tropics, and then separately test thermal in Furmark. For me, any chip should be absolutely stable and have thermals in check. Anything less is never acceptable. Then again, I actually tweak GPU permanently if I attempt to actually tweak it. My RX 580 is BIOS modded with my own custom tune to reduce wattage and noise. I managed to achieve small undervolt too.

The general rule for stability testing is to get an idea whether system is stable at maximum imaginable load, it doesn't matter if it's realistic or not, because one day you might need a similar load to work perfectly. And once stability testing is done and thermals are in check, it's still advisable to increase voltage a bit to leave some room for any unexpected voltage fluctuation or just aging of chip.
Maximum imaginable load is one thing, but what you're going to use the PC for is another. There is no game on the planet that's going to stress your GPU as much as Furmark does, that's why I think such programs are a bit pointless. I always aim for stability under real life conditions, so Superposition for GPU and Cinebench for CPU are the best imo.

I swear to god, those FX chips were a blast to overclock. Over 5GHz on air was super intoxicating. People outside of overclocking circles were always super impressed by that, probably not so much nowadays when stock CPUs do that. Anyway, my last board for FX had a very hot northbridge for some reason and it could fry fingers. My first board for FX was way beyond finger-burning hot. I recorded 159C at the VRMs, and that's the same board that I said got a brown stain. I achieved 5.288GHz with an Asrock 970 Pro3 R2.0 board, which didn't have any VRM cooling. I certainly wouldn't want to touch those bare VRMs. Anyway, not trying to reach 5GHz on an FX is a crime and impossible to resist. Even if it's a suicide run, it's so much fun. I achieved that highest overclock with just a Cooler Master Hyper 103 cooler, which is worse than a 212 Evo. Cooling didn't really matter as I was limited by the motherboard's VRM capabilities. I set the voltage to 1.72V for the FX 6300 with the other 2 modules disabled and the highest LLC that board had. It didn't throttle, but it lost any efficiency and scaling at that point. Surprisingly, the VRMs weren't in the 160s. Once I got validation, I just used the FX at stock settings for another 2-3 years until it died. With any decent board I probably could have achieved 5.5-5.7 GHz on air with the same cooler at the expense of disabling the chip's thermal protection. With an actually adequate cooler, 6GHz could have been possible on air for a suicide run.

All I can say is that FX chips have totally spoiled me and made my overclocking expectations really high, if I ever attempt that on some other platform. To date, there's no better overclocking chip than the FX series. And on top of that, FX chips were dirt cheap, so the financial loss in case of disaster wouldn't be big. Ryzens just can't match FX in terms of their prices. I still remember 130 Euros for 6 cores and 180-200 Euros for 8 cores, Ryzen never had value close to FX and they don't really overclock, unless you deal with lame turbo.
To be honest, I've always thought overclocking was pointless, and I still do. Whatever extra you get out of your PC in benchmarks doesn't matter; the perceivable difference in real-life experience is always going to be minimal at best, at the cost of exponentially increased heat and power consumption, and decreased longevity of your parts.
 
Joined
May 8, 2021
Messages
1,978 (1.82/day)
Location
Lithuania
System Name Shizuka
Processor Intel Core i5 10400F
Motherboard Gigabyte B460M Aorus Pro
Cooling Scythe Choten
Memory 2x8GB G.Skill Aegis 2666 MHz
Video Card(s) PowerColor Red Dragon V2 RX 580 8GB ~100 watts in Wattman
Storage 512GB WD Blue + 256GB WD Green + 4TH Toshiba X300
Display(s) BenQ BL2420PT
Case Cooler Master Silencio S400
Audio Device(s) Topping D10 + AIWA NSX-V70
Power Supply Chieftec A90 550W (GDP-550C)
Mouse Steel Series Rival 100
Keyboard Hama SL 570
Software Windows 10 Enterprise
Maximum imaginable load is one thing, but what you're going to use the PC for is another. There is no game on the planet that's going to stress your GPU as much as Furmark does, that's why I think such programs are a bit pointless. I always aim for stability under real life conditions, so Superposition for GPU and Cinebench for CPU are the best imo.
When you have to ensure that cooler is truly capable of dealing with heat, then Furmark is a great tool at ensuring that. I don't want cooler to be good enough 95% of time, I want it to be good enough even in the worst case scenario, so that I know that it can cope with everyday loads and with occasionally higher loads.


To be honest, I've always thought overclocking was pointless, and I still do. Whatever extra you get out of your PC in benchmarks doesn't matter; the perceivable difference in real-life experience is always going to be minimal at best, at the cost of exponentially increased heat and power consumption, and decreased longevity of your parts.
I guess you could make such point. I don't necessarily disagree with it, but there's more to it. I mean people do this stuff to computers, others do it to their cars. Sometimes gains are small, sometimes they are substantial. Often gains come with other costs (lower longevity). I think that we as hobbyists can enjoy overclocking just as much as actual gains as we just appreciate the achievement itself. It's like making 1000 bhp car. It's useless and dangerous, likely will break down soon, but it's so much fun to actually achieve such number, even if you can't really utilize more than 600 bhp well and 100 bhp legally. FX line of CPUs was the last where overclocking could yield big gains in performance. Lowest clocked FX had 3.3GHz base speed and you could clock it to almost 5GHz. I get that it means investing a lot more into cooling, motherboard, but let's be honest it's a bit silly, yet fun. And just a bit before FX, there were Core 2 Duos, which came with base speed of 1.86GHz and could be rather safely overclocked to 3GHz+ with same motherboard and stock cooler. Of course, there were even more legendary overclocking chips like Celeron 300A, which could be overclocked from 300MHz stock speed to 450MHz. Overclocking is basically hot-rodding but for computers. Obviously it takes time, effort and money to do, not everyone appreciates that, but those that do they enjoy it for what it is, often when they know how impractical that actually is.
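(Purely for scale, here are the headline overclocks mentioned above expressed as relative gains, using the clock speeds as given in the post:)

```python
# Relative clock gains for the overclocks mentioned above (stock -> overclocked, MHz).
examples = {
    "FX, 3.3 GHz base -> ~5.0 GHz":      (3300, 5000),
    "Core 2 Duo, 1.86 GHz -> ~3.0 GHz":  (1860, 3000),
    "Celeron 300A, 300 MHz -> 450 MHz":  (300, 450),
}
for name, (stock, oc) in examples.items():
    print(f"{name}: +{(oc / stock - 1) * 100:.0f}%")
```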
 
Joined
Jan 14, 2019
Messages
9,886 (5.12/day)
Location
Midlands, UK
System Name Nebulon-B Mk. 4
Processor AMD Ryzen 7 7800X3D
Motherboard MSi PRO B650M-A WiFi
Cooling be quiet! Dark Rock 4
Memory 2x 24 GB Corsair Vengeance EXPO DDR5-6000
Video Card(s) Sapphire Pulse Radeon RX 7800 XT
Storage 2 TB Corsair MP600 GS, 2 TB Corsair MP600 R2, 4 + 8 TB Seagate Barracuda 3.5"
Display(s) Dell S3422DWG, 7" Waveshare touchscreen
Case Kolink Citadel Mesh black
Power Supply Seasonic Prime GX-750
Mouse Logitech MX Master 2S
Keyboard Logitech G413 SE
Software Windows 10 Pro
Benchmark Scores Cinebench R23 single-core: 1,800, multi-core: 18,000. Superposition 1080p Extreme: 9,900.
When you have to ensure that cooler is truly capable of dealing with heat, then Furmark is a great tool at ensuring that. I don't want cooler to be good enough 95% of time, I want it to be good enough even in the worst case scenario, so that I know that it can cope with everyday loads and with occasionally higher loads.
I get that, though there is no "occasionally higher load" ever. The way Furmark stresses your GPU is unrealistic, and it's guaranteed that you'll never encounter a similar scenario while gaming.

I guess you could make such point. I don't necessarily disagree with it, but there's more to it. I mean people do this stuff to computers, others do it to their cars. Sometimes gains are small, sometimes they are substantial. Often gains come with other costs (lower longevity). I think that we as hobbyists can enjoy overclocking just as much as actual gains as we just appreciate the achievement itself. It's like making 1000 bhp car. It's useless and dangerous, likely will break down soon, but it's so much fun to actually achieve such number, even if you can't really utilize more than 600 bhp well and 100 bhp legally. FX line of CPUs was the last where overclocking could yield big gains in performance. Lowest clocked FX had 3.3GHz base speed and you could clock it to almost 5GHz. I get that it means investing a lot more into cooling, motherboard, but let's be honest it's a bit silly, yet fun. And just a bit before FX, there were Core 2 Duos, which came with base speed of 1.86GHz and could be rather safely overclocked to 3GHz+ with same motherboard and stock cooler. Of course, there were even more legendary overclocking chips like Celeron 300A, which could be overclocked from 300MHz stock speed to 450MHz. Overclocking is basically hot-rodding but for computers. Obviously it takes time, effort and money to do, not everyone appreciates that, but those that do they enjoy it for what it is, often when they know how impractical that actually is.
I guess I understand that too. I know a few people who like tuning basic cars to their limits. It's just that the time they spend in the garage making sure their "upgrades" don't end up being complete sh**, I spend out on the road enjoying the drive. :p
 
Joined
May 8, 2021
Messages
1,978 (1.82/day)
Location
Lithuania
System Name Shizuka
Processor Intel Core i5 10400F
Motherboard Gigabyte B460M Aorus Pro
Cooling Scythe Choten
Memory 2x8GB G.Skill Aegis 2666 MHz
Video Card(s) PowerColor Red Dragon V2 RX 580 8GB ~100 watts in Wattman
Storage 512GB WD Blue + 256GB WD Green + 4TH Toshiba X300
Display(s) BenQ BL2420PT
Case Cooler Master Silencio S400
Audio Device(s) Topping D10 + AIWA NSX-V70
Power Supply Chieftec A90 550W (GDP-550C)
Mouse Steel Series Rival 100
Keyboard Hama SL 570
Software Windows 10 Enterprise
I get that, though there is no "occasionally higher load" ever. The way Furmark stresses your GPU is unrealistic, and it's guaranteed that you'll never encounter a similar scenario while gaming.
As I said mining and BOINC loads are quite similar to Furmark in terms of power usage and heat output.


I guess I understand that too. I know a few people who like tuning basic cars to their limits. It's just that the time they spend in the garage making sure their "upgrades" don't end up being complete sh**, I spend out on the road enjoying the drive. :p
Until a wild Nissan Micra with RB20DE swap overtakes you. Or for that matter maybe some chap lucks out and finds Mitsu colt with 4g63 engine and invests a bit in turbo and handling mods... Oh, the possibilities are endless.
 
Last edited:
Joined
Feb 1, 2019
Messages
2,582 (1.35/day)
Location
UK, Leicester
System Name Main PC
Processor 13700k
Motherboard Asrock Z690 Steel Legend D4 - Bios 13.02
Cooling Noctua NH-D15S
Memory 32 Gig 3200CL14
Video Card(s) 3080 RTX FE 10G
Storage 1TB 980 PRO (OS, games), 2TB SN850X (games), 2TB DC P4600 (work), 2x 3TB WD Red, 2x 4TB WD Red
Display(s) LG 27GL850
Case Fractal Define R4
Audio Device(s) Asus Xonar D2X
Power Supply Antec HCG 750 Gold
Software Windows 10 21H2 LTSC
I blame intel more than the board vendors; I would only blame a board vendor if they're not delivering what they're advertising.

Ultimately this is happening because intel CPUs have become very power hungry, but intel refuses to reflect this in the official specs, and doesn't force board partners to adhere to those limits either. They don't do this, of course, because they want the marketing benefit of what their chips can do whilst running with unlimited power, whilst also advertising a low TDP.

--edit--

Just seen the video on the asrock board; opinion changed, I cannot excuse asrock for that board. I think it's ok to release boards that cannot properly handle higher power limits and overclocks, but not ok to advertise a board as supporting chips whose specification it cannot meet. Of course intel still shares the blame as well.
 
Last edited:
Joined
May 2, 2017
Messages
7,762 (3.04/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
That's with custom loop? Oh god.
A custom loop, yes, but with a single 240mm rad for both CPU and GPU, and a quasi-AIO CPU DDC pump-block combo that isn't particularly good thermally. Also, the loop is configured for silence and not thermals, with fans ramping slowly and based on water temperatures rather than component temperatures.
Well, I'm not really impressed by the thermals of Ryzen chips. You could cool FX chips at 5GHz and keep them under 62C with just a big air cooler. Stock 95 watt FX chips could be passively cooled with the same air cooler with the fans removed. And now you need a big water cooler just to keep Ryzen working at stock clocks. That's a fail to me. The last time AMD needed a water cooler was with the FX 9590, and that was just a 120mm AIO.
Apparently you didn't read what I wrote whatsoever. Oh well.
Keeping a CPU at 90C or 100C isn't acceptable for the chip. That it can survive such temperatures only means there won't be any lasting effect if it reaches them occasionally. I remember some Intel thermal engineer posting that their 14nm chips could survive 1.4 volts at up to 80C long term, but violate that voltage or cooling limit and electromigration gets bad.
Sorry, but that's nonsense. Silicon is perfectly fine running at 90-100°C for extended periods of time. As I've said before here, look at laptops - most laptops idle in the 60s-70s and hit tJmax at any kind of load as they prioritize keeping quiet + accept that running hot doesn't do any harm. It won't be if you also ramp voltages high while loading the CPU heavily, but advanced self-regulating CPUs like Ryzens don't allow those in combination unless you explicitly disable protections and override regulatory mechanisms. Heck, Buildzoid once tried to intentionally degrade his 3700X, and after something like a continuous 60 hours at >110°C (thermal limits bypassed) and 1.45V under 100% load he lost ... 25MHz of clock stability. So under any kind of regular workload degradation is never, ever happening, as that combination of thermals, voltage and load over time is utterly absurd for real-world workloads. Sure, his sample might be very resistant to electromigration, but even accounting for that there's no reason to worry at all.
Never, Intel's PL1 is how they define TDP. For the first time they finally got their shit together in this one aspect.
PL1 is absolutely not how Intel defines TDP. PL1 is defined from TDP, TDP is defined as a thermal output class of CPUs towards which CPUs are tuned in terms of base clock and other characteristics. Power draw is only tangentially related to TDP.
Well that's obvious, but what matters now is what they will do with Alder Lake.
It's not going to change. The 65W TDP tier is utterly dominant in the OEM space, which outsells DIY by at least an order of magnitude. 65W TDPs for midrange and lower end chips aren't changing. If you want more for DIY, they have a K SKU to sell you to cover that desire - for a price, of course. You, and us DIYers overall, are not first in line for things being adjusted to our desires, and never will be.
First, I highly doubt that T chips are actually better bins of non-T chips, and BIOSes often allow you to set your own PL values.
They are supposed to be better binned - whether they are in real life is always a gamble, as there's a lot of overlap between different bins, and some are interchangeable depending on the application.
DIY market was just fine without TDP shenanigans. Even chips with one clock speed were decently acceptable and didn't have problems. I'm not a fan of turbo and other power tweaking. One static clock with downclocking for power savings seems to be the best design so far.
Again: it seems like you haven't read the rest of this thread at all. I'll just point you to this post. Though especially this part:
you're approaching this from the wrong angle, which either stems from a fundamental misunderstanding or from wanting something that doesn't exist. The issue: TDP is not a consumer-facing specification denoting power draw. It never has been. Historically it has been roughly equivalent to this, but this is more coincidental than intentional. TDP is a specification for SIs and cooler OEMs to design their cooling solutions and system designs around. If TDP was meant to denote power draw directly, it would for example be a guide for motherboard makers in designing their VRM setups - but it's not, and there are specific specifications (dealing with the relevant metrics, volts and amps) for that. You can disagree with how TDPs are used in marketing with regards to this - I definitely do! - but you can't just transfer it into being something it isn't.
Saying "DIY market was just fine without TDP shenanigans" is such an absurd reversal of reality that it makes it utterly impossible to actually discuss the issues at hand. TDPs have never been directly related to power draw, nor has it ever been intended for the DIY market beyond a product class delineation.

As for abandoning boost: well, if you'd be happy with ~2.5GHz CPUs, then by all means. Because that's what we'd get if there wasn't boost - we'd get base clock at sustained TDP-like power draws. The 65W TDP tier isn't going anywhere, again, as OEMs buy millions of those CPUs, and changing it would be extremely expensive for them.
I know full well that it's not exactly a throttle in legal terms, but realistically you lose performance, because your cooler can't keep up. You sacrifice performance to not damage the chip.
Yes. But that's not throttling. That's part of tuning a DIY system. Nobody has ever promised 100% boost clock 24/7 under 100% all-core load, or even 1-core load. You really need to be more nuanced in your approach to this.
Obviously at below the maximum manufacturer-specified temperature, at maximum clock speed, and at whatever my ears tell me is an acceptable noise level, which tends to be somewhere up to 1200 rpm most of the time, and preferably no more than 1000 rpm. Power draw depends on the chip and is generally not a concern, unless it's very high. Your partner's TR system would have failed this test spectacularly.
"At below maximum manufacturer specified temperature" ... okay ... so, anything below 100°C-ish? Because above you seemed to say 80°C was unacceptable. Yet that's quite a bit below maximum. Also, 1200rpm ... of which model of fan, how many fans, which case, which cooler? And obviously the TR system would have failed, it had a clogged AIO cooler. My point was: you're making generalizing claims without defining even close to a sufficient amount of variables. Your criteria still make it sound like my cooling setup is well within your wants, yet you're saying above that it's unacceptable, so ... there's something more there, clearly.
prime95 is a perfectly realistic workload; some people calculate primes for weeks. And let's not get into the Furmark shit again. I will be very clear: if a card can't handle some type of workload, then it's either badly tuned or has an inadequate cooling solution. I don't care that it kills some badly engineered cards, as no properly made card should die in Furmark. Also, judging by power figures, running Furmark is not much different from mining or running MilkyWay@Home. My RX 580 can handle Furmark just fine with vBIOS mods. It now can't reach the 80s and barely breaks into the 70s in Furmark. The RX 560 that I have in the other machine fails to reach the 70s.
Prime95 is not "realistic". Yes, some people calculate primes for weeks. Some people calculate the changes in molecular or cell structures of complex organisms when subjected to various chemicals. That doesn't make either a relevant end-user workload. If you're doing workstation things, get a workstation, or accept that consumer-grade products aren't designed for that and you need to overbuild to match. As for FurMark, whether a GPU can "handle" it is irrelevant. It is a workload explicitly created for maximum heat output, which is dangerous to run. It doesn't matter what thermals your GPU reads (heck, the very fact that you're saying "it can handle it with BIOS mods!" says enough by itself!) - the issue is that it creates extreme hotspots away from the thermal sensors on your GPU. Most GPUs - all of them pre-RDNA - have their thermal sensors along the edge of the die. Under normal full loads there's easily a 10-20°C difference between the edge and the centre of the die. Furmark exaggerates that - so if your edge thermal sensor is reading 70-80, the hotspot temperature might be 110 or higher. If your hardware doesn't die that's good for you, but please stop subjecting it to unnecessary and unrealistic workloads just for "stress testing".
And watts are amps*volts, therefore VRMs care about watts. And no, those Athlons didn't run at more than 1.5 volts. The Athlon X4 870K and Athlon X4 845 are both limited to 1.5V or 1.485V. No Athlon came out with more than that. Also, most of that voltage is needed for turbo to work, so if you disable turbo, you can get massive voltage reductions.
Jesus christ, man, come on. No. VRMs care about watts only as expressed in amps. That was the entire point of what I said. And while it's true I cited the voltage of the highest running Athlons, those voltages are still much higher than what current CPUs run. (Current-gen Ryzens report very high core voltages in software, but from what AMD's engineering team has said, those voltages are read before stepping down to what the core actually demands, so it's not actually running 1.4V or higher during boost despite what software might say.)

And yes, of course you get voltage reductions if you disable boost. That's ... rather obvious, no? Go below stock behaviour, and you'll get lower voltages and power draws. Not quite surprising.
Nah, it's new stock. I have loads of chips for FM2+ boards. Athlon 760K is just one of them. I bought it for unique reasons:
No. Old stock = old, unsold products that have been sitting on shelves for a long time. That CPU was launched in October 2012, and while production of course ran for several years after that, it definitely wasn't recently manufactured when you bought it. And even if it was, it was still ancient tech at that point. Which is fine, but please don't try to say that it wasn't old.
Previously that computer had an 870K, which was made in 2015, and the Athlon 845 was made in 2016. Both are nowhere near being 10 years old. Several motherboards had extended manufacturing runs for some reason, and thus you could buy them even in 2018 and probably in 2019. The Athlon 845 is a unicorn chip, somewhat rare as it was released at the end of the FM2+ platform's lifespan, and it had a Carrizo core, the last architectural improvement on the AM3+ and FM2+ platforms. The Athlon 870K is also a late production model, but is a better-binned 860K. Its availability was poor and it mostly sold after FM2+ became obsolete. There were a bunch of other rare CPUs released in 2016 for the FM2+ platform, like the A6 7470K or A10 7890K.
Not that those CPUs aren't interesting, but they're still old tech. My A8-7600 that I just retired from my NAS was just as old. Sure, AMD iterated upon its 'large machinery' cores for quite a few years, and even launched Carrizo very close to Ryzen, but the actual changes generation-on-generation were pretty tiny. And that a five or six-year-old CPU is less old than a 10-year-old CPU is ... not that interesting?
If I had to write a review, I'd try to do it both ways - like the guys here at TPU do. When reviewing, you need to consider that not everyone who reads your review will want the same out of their system.
Absolutely. Though that's a lot of work - more than most reviewers probably have time (or get paid) for. IMO, reviewers ought to have at least two test systems per generation, one high end and one midrange, and compare the two at spec and stock settings. That would be near ideal.
Exactly. Throttling means dropping below base clock, which (coming back to the original topic) only that one ASRock motherboard does in HU's latter video. All the rest are within spec, however vague that spec is.
Yeah, that's pretty atrocious. This is why this discussion is getting so muddled though - people mix up annoyance at Intel for being vague AF and not enforcing their specs with OEMs partially making use of that to effectively OC their parts, and partly just making cheap shit and selling it as if it was good enough. Both sides need addressing, and need addressing specifically for what they're messing up. But that's tricky.
I saw a 4750G on ebay a couple weeks ago for about £450. As an OEM CPU, it comes with no box and no warranty. I got the Asus B560M TUF motherboard and the i7-11700 for the same price brand new. We'll see what happens when the 5000G/GE series come out for DIY. I might buy one just to test it, and sell the Core i7 if it's any good. :p
Whoa! I paid €225 for my 4650G. I don't care much about the warranty - I've never had a CPU fail, and stories of that are rare enough that I can't imagine needing it.
Oh no, I'm definitely not gonna run a 224 W PL2. :laugh: I intend to do as much tweaking as necessary to make it work in my thin SFF case. I want to find the perfect balance. :)
Sounds interesting! Let me know if you make a build log?
I'm not quite sure that's the case. My Ryzen 3 3100 basically runs at 3.85-3.9 GHz all the time, independent of workload, as it never maxes out its power limit. Hungrier chips with more cores could do the same with cTDP. If you want full power, set cTDP to the highest, and enjoy maximum clock speed all the time. You want low thermals? Just turn your cTDP down to have your clocks and voltages decrease too. You don't even need different SKUs with different TDP ratings for this.
Well, the 3100 is a "low end of its TDP tier" SKU, i.e. it's likely overspecced in terms of TDP. They could probably make its base and boost clocks match if they wanted to, but probably leave some room between them to add leeway for utilizing garbage-tier bins of chips if they want to. (You often see the same on older i5s and i3s too.) Each tier must include a range of products after all. But without modern boosting systems, we'd either need SKU-specific TDPs or we'd get a much smaller range of chips to choose from as the power draw would limit differentiation.

The general rule for stability testing is to get an idea whether system is stable at maximum imaginable load, it doesn't matter if it's realistic or not, because one day you might need a similar load to work perfectly. And once stability testing is done and thermals are in check, it's still advisable to increase voltage a bit to leave some room for any unexpected voltage fluctuation or just aging of chip.
That's a commonly held enthusiast belief, but it's a rather irrational one. Power viruses and unrealistic heat loads can be beneficial if you're really pushing things and still want 24/7 stability, but for anything else they're rather useless, potentially misleading, and possibly harmful to your components. What is the value of keeping CPU temps under a given level while running Prime95 if the CPU is never going to see a similar workload? Etc.
Ryzens just can't match FX in terms of their prices. I still remember 130 Euros for 6 cores and 180-200 Euros for 8 cores, Ryzen never had value close to FX and they don't really overclock, unless you deal with lame turbo.
Value is relative. You clearly value overclocking for its own sake. Which is of course fine if that's what you like to spend your time doing! But your conception of value handily overlooks the fact that FX (and Bulldozer derivatives in general) performed rather terribly. They were fun from a technical and OC perspective, and they were cheap, but they were routinely outperformed by affordable i5s (and even i3s towards the end) with half the cores or less. Ryzen gen 1 and 2 delivered massive value in terms of performance/$, but as you said, they never really OC'd at all. I prefer the latter, you prefer the former - to each their own, but your desire is by far the more niche and less generally relevant one.
 