
Does GPU memory need cooling?

Joined
Oct 2, 2019
Messages
79 (0.04/day)
Hello.

After some googling I found conflicting information on this.
I know the VRM definitely needs cooling, but I'm not so sure about the memory chips.

I'm trying to repurpose an old AIO cooler that had no brackets.
I got it mounted with ease using some spring screws, but I'm still debating what to do about the memory.
When I took the heatsink off, the thermal pads that were on top of the chips were barely anything (thinner than a piece of paper).

Maybe with enough airflow it would work passively, or perhaps I could toss a piece of aluminum on top with thermal paste.
There is going to be plenty of airflow heading to the rear of the case.

The GPU itself is an R9 270X, so not the most powerful.
 

Attachments

  • 67018A20-F834-4F6E-BF08-A04B90E42BEA.jpeg (2.9 MB)
  • 5A207904-C20A-4DB4-891D-DC10AC55451B.jpeg (3 MB)
Joined
Mar 23, 2016
Messages
4,914 (1.48/day)
Processor Intel Core i7-13700 PL2 150W
Motherboard MSI Z790 Gaming Plus WiFi
Cooling Cooler Master RGB Tower cooler
Memory Crucial Pro DDR5-5600 32GB Kit OC 6600
Video Card(s) Gigabyte Radeon RX 9070 GAMING OC 16G
Storage 970 EVO NVMe 500GB, WD850N 2TB
Display(s) Samsung 28” 4K monitor
Case Corsair iCUE 4000D RGB AIRFLOW
Audio Device(s) EVGA NU Audio, Edifier Bookshelf Speakers R1280
Power Supply TT TOUGHPOWER GF A3 Gold 1050W
Mouse Logitech G502 Hero
Keyboard Logitech G G413 Silver
Software Windows 11 Professional v24H2
Heatsinks on system RAM are superfluous; on a graphics card, though, they can be useful because of the close proximity of the GPU heat-soaking the PCB.

On lower-end cards, cooling for the GDDR is omitted to cut costs, but that doesn't mean you can get away without cooling the chips.
 
Last edited:
Joined
Oct 2, 2019
Messages
79 (0.04/day)
Heatsinks on system RAM are superfluous; on a graphics card, though, they can be useful because of the close proximity of the GPU heat-soaking the PCB.

On lower-end cards, cooling for the GDDR is omitted to cut costs, but that doesn't mean you can get away without cooling the chips.
Would putting a fan here be a good idea then? It won't obscure the PCIe slots, as it's a Mac Pro case and the PSU is in the top slot.

And where can I monitor the temperatures for the VRAM? I can't remember ever seeing temps for VRAM on a GPU before.
 

Attachments

  • EC68F7FF-4E42-4B6F-8AFF-2B38740A2AEC.jpeg (1.6 MB)
Joined
May 2, 2017
Messages
7,762 (2.66/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
Older GPUs aren't necessarily equipped with VRAM temperature sensors, but if it has them a proper monitoring application like HWinfo should show it. On my RX 570 there's a line item simply called "GPU Memory Temperature" in HWinfo. As @biffzinker said above you definitely want some cooling on GPU RAM simply due to the proximity to the very hot GPU, though with water cooling that ought to be slightly less of an issue. You can get little RAM heatsinks pretty much anywhere for quite cheap, though most of those come with rather terrible thermal adhesive pads to stick them on - they both transfer heat poorly and are prone to falling off. But they work in a pinch. On a card like that some active airflow across the chips is probably sufficient though, and mounting a fan like in your pic ought to do that job more than good enough.
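On Linux, the amdgpu driver exposes any VRAM sensor through hwmon in sysfs, typically as a `temp*` channel whose label file reads `mem`. As a rough sketch (the channel layout varies by card and driver version, and older GPUs like the R9 270X may expose no memory sensor at all), something like this can look for it:

```python
from pathlib import Path

def read_vram_temp_c(hwmon_root="/sys/class/hwmon"):
    """Look for an amdgpu temperature channel labelled 'mem' and return
    its reading in degrees C, or None if the card exposes no VRAM
    sensor (common on older GPUs)."""
    for hwmon in Path(hwmon_root).glob("hwmon*"):
        name = hwmon / "name"
        if not name.is_file() or name.read_text().strip() != "amdgpu":
            continue
        for label in hwmon.glob("temp*_label"):
            if label.read_text().strip() == "mem":
                # sysfs reports temperatures in millidegrees Celsius
                raw = label.with_name(label.name.replace("_label", "_input"))
                return int(raw.read_text()) / 1000.0
    return None
```

On Windows, HWinfo as mentioned above is the easier route; this is just the DIY equivalent for cards and drivers that expose the sensor.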
 
Joined
Oct 2, 2019
Messages
79 (0.04/day)
Older GPUs aren't necessarily equipped with VRAM temperature sensors, but if it has them a proper monitoring application like HWinfo should show it. On my RX 570 there's a line item simply called "GPU Memory Temperature" in HWinfo. As @biffzinker said above you definitely want some cooling on GPU RAM simply due to the proximity to the very hot GPU, though with water cooling that ought to be slightly less of an issue. You can get little RAM heatsinks pretty much anywhere for quite cheap, though most of those come with rather terrible thermal adhesive pads to stick them on - they both transfer heat poorly and are prone to falling off. But they work in a pinch. On a card like that some active airflow across the chips is probably sufficient though, and mounting a fan like in your pic ought to do that job more than good enough.
If the memory overheats and the card has no temperature sensor for the memory, will it downclock the memory?
I will try to get some little heatsinks (Aquatuning sells them).
Although I could also cut a piece of aluminum to put on them
and stick it on with thermal paste (the die is facing up in this case).
 
Joined
Jan 31, 2012
Messages
2,774 (0.57/day)
Location
East Europe
System Name PLAHI
Processor I5-13500
Motherboard ASROCK B760M PRO RS/D4
Cooling 120 AIO IWONGOU
Memory 1x32GB Kingston BEAST DDR4 @ 3200Mhz
Video Card(s) RX 6800XT
Storage Kingston Renegade GEN4 nVME 512GB
Display(s) Philips 288E2A 28" 4K + 22" LG 1080p
Case TT URBAN R31
Audio Device(s) Creative Soundblaster Z
Power Supply Fractal Design IntegraM 650W
Mouse Logitech Triathlon
Keyboard REDRAGON MITRA
Software Windows 11 Home x 64
I had the same exercise, but I did put small copper heatsinks on the RAM. The bigger issue was the VRM, though: it was at the front of the card and the NZXT bracket has its fan at the back, so it was completely useless in my case.
 
Joined
May 2, 2017
Messages
7,762 (2.66/day)
If the memory overheats and the card has no temperature sensor for the memory, will it downclock the memory?
I will try to get some little heatsinks (Aquatuning sells them).
Although I could also cut a piece of aluminum to put on them
and stick it on with thermal paste (the die is facing up in this case).
No, memory doesn't thermal throttle (it barely does dynamic clocks at all). All that happens when memory overheats is that it (eventually) stops working properly, likely beginning with artefacting and other glitches before becoming unstable or ceasing to work altogether. AFAIK GDDR5 is rated for something like 105C, so it takes some doing to overheat it with a low-power GPU.
 
Joined
Oct 2, 2019
Messages
79 (0.04/day)
Okay, I'll put my finger on it every so often during testing to see if it reaches "this is uncomfy" levels.

With this mount in the case I might be able to easily point a fan towards it so the airflow grazes the mem chips.
 

Attachments

  • photo6001377809114706771.jpg (187.1 KB)

eidairaman1

The Exiled Airman
Joined
Jul 2, 2007
Messages
44,288 (6.80/day)
Location
Republic of Texas (True Patriot)
System Name PCGOD
Processor AMD FX 8350@ 5.0GHz
Motherboard Asus TUF 990FX Sabertooth R2 2901 Bios
Cooling Scythe Ashura, 2×BitFenix 230mm Spectre Pro LED (Blue,Green), 2x BitFenix 140mm Spectre Pro LED
Memory 16 GB Gskill Ripjaws X 2133 (2400 OC, 10-10-12-20-20, 1T, 1.65V)
Video Card(s) AMD Radeon 290 Sapphire Vapor-X
Storage Samsung 840 Pro 256GB, WD Velociraptor 1TB
Display(s) NEC Multisync LCD 1700V (Display Port Adapter)
Case AeroCool Xpredator Evil Blue Edition
Audio Device(s) Creative Labs Sound Blaster ZxR
Power Supply Seasonic 1250 XM2 Series (XP3)
Mouse Roccat Kone XTD
Keyboard Roccat Ryos MK Pro
Software Windows 7 Pro 64
I would cool them.

Semi-permanent heatsinks: mix AS5 with AS Epoxy, 50/50. Apply just enough on each chip that it doesn't cover the chip's exact dimensions (to allow for squeeze-out of excess), apply a heatsink to each chip, hold for 30 seconds each, then let the card sit for 2-4 hours.
 
Joined
Oct 2, 2019
Messages
79 (0.04/day)
I would cool them.

Semi-permanent heatsinks: mix AS5 with AS Epoxy, 50/50. Apply just enough on each chip that it doesn't cover the chip's exact dimensions (to allow for squeeze-out of excess), apply a heatsink to each chip, hold for 30 seconds each, then let the card sit for 2-4 hours.
The only problem is that some mem chips are slightly covered by the AIO mounting mechanism/tubing, and a heatsink won't fit on those.
Would you reckon that if I cut these pieces of metal a bit bigger, then with the top fan they would cool the memory chips if I attach them with some thermal adhesive? (I won't do anything permanent.)
These fins are from a massive heatsink out of a Pentium 4 Dell.
 

Attachments

  • photo6001462557409390825.jpg (210.1 KB)
  • photo6001462557409390826.jpg (237.4 KB)

eidairaman1

The Exiled Airman
Joined
Jul 2, 2007
Messages
44,288 (6.80/day)
The only problem is that some mem chips are slightly covered by the AIO mounting mechanism/tubing, and a heatsink won't fit on those.
Would you reckon that if I cut these pieces of metal a bit bigger, then with the top fan they would cool the memory chips if I attach them with some thermal adhesive? (I won't do anything permanent.)
These fins are from a massive heatsink out of a Pentium 4 Dell.

I used TweakMonster ramsinks. Another solution is to do that plate, then braze on additional fins to make an offset heatsink; you might use the thermal epoxy straight there, even for the offset fins.

Pure aluminum has a low melting point; I'm unsure whether heatsinks are pure aluminum or an alloy.
 
Joined
Jul 19, 2015
Messages
1,029 (0.29/day)
Location
Nova Scotia, Canada
Processor Ryzen 5 5600 @ 4.65GHz CO -30
Motherboard AsRock X370 Taichi
Cooling Cooler Master Hyper 212 Plus
Memory 32GB 4x8 G.SKILL Trident Z 3200 CL14 1.35V
Video Card(s) PCWINMAX RTX 3060 6GB Laptop GPU (80W)
Storage 1TB Kingston NV2
Display(s) LG 25UM57-P @ 75Hz OC
Case Fractal Design Arc XL
Audio Device(s) ATH-M20x
Power Supply Evga SuperNova 1300 G2
Mouse Evga Torq X3
Keyboard Thermaltake Challenger
Software Win 11 Pro 64-Bit
No, they don't need any kind of heatsink at all; the PCB will dissipate all the heat they generate. The memory will run cooler bare than it did with the stock cooler.
 
Joined
Mar 18, 2015
Messages
2,970 (0.80/day)
Location
Long Island
Kinda like asking whether one needs an umbrella: the answer depends on the weather conditions. Here it depends on how much power the card uses, and also on the installation. I had one box on the test bench where the top card in an SLI build ran 10C higher than the bottom card because the heat from the bottom card was heating the top card. A 120mm fan installed on the back of the HD cage cured the problem. The same effect often occurs when the AIO block interferes with airflow around the card.

If you take the original GFX card cover off and you see pads or TIM, then it's apparent that the manufacturer thought it was necessary. If you download GPU waterblock instructions and they include pad/TIM application to the memory, then obviously they thought so too.

I never quite understood the AIO approach which calls for a 2 x 120mm rad on a 125 watt OC'd CPU but says a 1 x 120mm AIO is just fine on a 300 watt GFX card. The fact is, if you are going to use an auxiliary cooling fan for the VRMs, it's going to affect the memory also.
 
Joined
May 2, 2017
Messages
7,762 (2.66/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
Kinda like asking whether one needs an umbrella: the answer depends on the weather conditions. Here it depends on how much power the card uses, and also on the installation. I had one box on the test bench where the top card in an SLI build ran 10C higher than the bottom card because the heat from the bottom card was heating the top card. A 120mm fan installed on the back of the HD cage cured the problem. The same effect often occurs when the AIO block interferes with airflow around the card.

If you take the original GFX card cover off and you see pads or TIM, then it's apparent that the manufacturer thought it was necessary. If you download GPU waterblock instructions and they include pad/TIM application to the memory, then obviously they thought so too.

I never quite understood the AIO approach which calls for a 2 x 120mm rad on a 125 watt OC'd CPU but says a 1 x 120mm AIO is just fine on a 300 watt GFX card. The fact is, if you are going to use an auxiliary cooling fan for the VRMs, it's going to affect the memory also.
The difference between CPU and GPU AIO needs is actually quite simple, down to a couple of things:
-CPUs have extreme heat density as virtually all heat is generated in the cores, which are quite small (both in an absolute sense and in relation to the die itself). This inherently makes it difficult to stop these spots getting very hot, even if the total heat output is "low". Spreading the heat out enough that coolers can dissipate it efficiently is a major challenge with CPUs. GPUs have their heat generation spread very evenly across the die (which is also normally much bigger than a CPU die) with the vast majority of the die being actual heat-generating components. This means the "cores" individually don't get as hot (as they also consume a fraction of the power, seeing how there's 500-4000 of them rather than 2-16), and it's easier to transfer the heat away to a cooler.
-GPUs have direct die contact, minimizing interfaces between the die and cooler. CPUs use an IHS, which helps spread heat, but also impedes cooling. This is why a 275W Fury X could get away with just a medium thickness 120mm AIO and a relatively quiet fan and never exceed 60C, while achieving the same on a 275W CPU would require a 360 or more. Die-TIM-cold plate-water is much more efficient than die-(s)TIM-IHS-TIM-cold plate-water, especially when the latter also has a heat density disadvantage.
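The interface argument above can be sketched as a series thermal-resistance stack: die temperature is ambient plus power times the sum of the interface resistances, so each extra layer (sTIM, IHS, TIM) raises the die temperature at the same wattage. The resistance values below are made-up placeholders for illustration, not measured figures.

```python
def die_temp_c(power_w, ambient_c, resistances_c_per_w):
    """Steady-state die temperature for a series stack of thermal
    resistances between the die and the coolant at ambient_c."""
    return ambient_c + power_w * sum(resistances_c_per_w)

# Hypothetical interface resistances in C/W (illustrative only):
GPU_STACK = [0.02, 0.05]              # die-TIM, cold plate-to-water
CPU_STACK = [0.03, 0.04, 0.03, 0.05]  # die-sTIM, IHS, TIM, cold plate-to-water

gpu_die = die_temp_c(275, 30, GPU_STACK)  # fewer interfaces, lower die temp
cpu_die = die_temp_c(275, 30, CPU_STACK)  # same 275 W, hotter die
```

With these placeholder numbers the GPU die lands well below the CPU die at identical power, which is the shape of the Fury X comparison; the real gap also includes the heat-density effect described above.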
 
Joined
Feb 18, 2005
Messages
6,396 (0.87/day)
Location
Ikenai borderline!
System Name Firelance.
Processor Threadripper 3960X
Motherboard ROG Strix TRX40-E Gaming
Cooling IceGem 360 + 6x Arctic Cooling P12
Memory 8x 16GB Patriot Viper DDR4-3200 CL16
Video Card(s) MSI GeForce RTX 4060 Ti Ventus 2X OC
Storage 2TB WD SN850X (boot), 4TB Crucial P3 (data)
Display(s) Dell S3221QS(A) (32" 38x21 60Hz) + 2x AOC Q32E2N (32" 25x14 75Hz)
Case Enthoo Pro II Server Edition (Closed Panel) + 6 fans
Power Supply Fractal Design Ion+ 2 Platinum 760W
Mouse Logitech G604
Keyboard Razer Pro Type Ultra
Software Windows 10 Professional x64
As long as there is active airflow over the chips, you're fine.
 
Last edited:
Joined
Mar 18, 2015
Messages
2,970 (0.80/day)
Location
Long Island
The difference between CPU and GPU AIO needs is actually quite simple, down to a couple of things:
-CPUs have extreme heat density as virtually all heat is generated in the cores, which are quite small (both in an absolute sense and in relation to the die itself). This inherently makes it difficult to stop these spots getting very hot, even if the total heat output is "low". Spreading the heat out enough that coolers can dissipate it efficiently is a major challenge with CPUs. GPUs have their heat generation spread very evenly across the die (which is also normally much bigger than a CPU die) with the vast majority of the die being actual heat-generating components. This means the "cores" individually don't get as hot (as they also consume a fraction of the power, seeing how there's 500-4000 of them rather than 2-16), and it's easier to transfer the heat away to a cooler.
-GPUs have direct die contact, minimizing interfaces between the die and cooler. CPUs use an IHS, which helps spread heat, but also impedes cooling. This is why a 275W Fury X could get away with just a medium thickness 120mm AIO and a relatively quiet fan and never exceed 60C, while achieving the same on a 275W CPU would require a 360 or more. Die-TIM-cold plate-water is much more efficient than die-(s)TIM-IHS-TIM-cold plate-water, especially when the latter also has a heat density disadvantage.


Yes it is simple, and I explained why... but you're barking up the wrong tree. Yes, heat transfer has to take place between the GPU / CPU and the coolant, but that same amount of wattage THEN has to be transferred from the coolant to the air. To quote Zappa, this is the "crux of the apostrophe". How the heat gets from the chip to the coolant differs between applications; how it gets from the rad to the air does not, and that is where the issue is.

This is a closed system: to get the CPU's 120 watts into the radiator and out of the case, that radiator has to dissipate 120 watts.
For the GPU, the radiator may not have to dissipate the full 300 watts, but it's a helluva lot more than 120 watts. Now, you're already stating that the memory / VRMs do not need cooling, so you are obviously convinced that memory / VRM heat loads are small. Let's call it 5 watts for the memory (equivalent to desktop RAM) and 45 watts for the VRMs, leaving us 250 watts for the GPU.

Let's look at the lab data for radiators:


We see the following (ST30):

@ 600 rpm, the 3 x 120 mm dissipated 96 watts of heat (32 watts per 120mm of rad)
@ 1000 rpm, the 3 x 120 mm dissipated 150 watts of heat (50 watts per 120mm of rad)
@ 1400 rpm, the 3 x 120 mm dissipated 204 watts of heat (68 watts per 120mm of rad)
@ 1800 rpm, the 3 x 120 mm dissipated 242 watts of heat (81 watts per 120mm of rad)
@ 2200 rpm, the 3 x 120 mm dissipated 281 watts of heat (93 watts per 120mm of rad)
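Taking those five measured points at face value, a quick linear interpolation between them is enough to sanity-check rpm figures like the ~1200 rpm for ~60 W per 120mm used in this argument (a back-of-envelope check, not how the lab derives its numbers):

```python
# Per-120mm dissipation (W at a 10C delta T) from the ST30 numbers above
POINTS = [(600, 32), (1000, 50), (1400, 68), (1800, 81), (2200, 93)]

def watts_per_120mm(rpm):
    """Linearly interpolate between the measured (rpm, watts) points,
    clamping outside the measured range."""
    if rpm <= POINTS[0][0]:
        return float(POINTS[0][1])
    if rpm >= POINTS[-1][0]:
        return float(POINTS[-1][1])
    for (r0, w0), (r1, w1) in zip(POINTS, POINTS[1:]):
        if r0 <= rpm <= r1:
            return w0 + (w1 - w0) * (rpm - r0) / (r1 - r0)

# watts_per_120mm(1200) -> 59.0, i.e. roughly 60 W per 120mm of rad
```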

So if we say that 2 x 120mm is the way to go for the 120 watt CPU (60 watts per 120mm), all we need is about 1200 rpm to do that.

Now let's try that with the 250 watt GPU and 1 x 120mm. We learned above that the "way to go" is 60 watts per 120mm, yet that GPU radiator is expected to handle more than 4 times that?

Or, with the radiator handling its "way to go" 60 watts, is the claim that the PCB will radiate the remaining 190 watts? That means:

a) We only need a 1 x 120mm radiator to handle the 60 watts from the GPU
b) But the 190 watts radiating out into the box is fine?

So either 60 watts per 120mm is just fine or it isn't; it can't be both ways. You can't say you need one 120mm to handle a 250-300 watt GPU and at the same time say you need 2 x 120mm to handle a 120 watt CPU. And yet we have air coolers that still beat any 120mm and most 2 x 120mm CLCs.
 
Joined
May 2, 2017
Messages
7,762 (2.66/day)
Location
Back in Norway
Sorry if the explanation was unnecessary, but you did literally say you "never quite understood the AIO approach which calls for a 2 x 120mm rad on a 125 watt OC'd CPU but a 1 x 120mm AIO is just fine on a 300 watt GFX card", which certainly didn't make it seem like you had a detailed understanding of this. The reason, put simply, is after all that this works well on GPUs but CPUs need more cooling despite their lower thermal output. The PCBs certainly aren't dissipating any noticeable amount of heat (at least less than with air cooled GPUs, as the GPU temp is lower on water and thus less heat is transferred into the PCB).

Also, you can't simplify radiator thermal transfer down to "Watts per size". Thermodynamics doesn't work that way. Thermal energy transfer accelerates as the thermal energy difference between two media increases, i.e. the hotter your water (and thus your radiator) is compared to the air, the faster heat will transfer between them. That's why people with massive custom loops see very small gains over more sensible setups, as they are lowering liquid temps enough that thermal transfer is very, very slow. How does this relate to the difference between CPUs and GPUs? The (vastly) more efficient transfer from the GPU die to the water will cause more thermal energy to transfer into the water - in other words heat it up more - which will in turn allow the radiator to dissipate more heat (given the same fan speed and ambient temperature). So: GPUs have more evenly distributed heat which makes it much easier to dissipate into a cooler, have direct die cooling which inhibits dissipation minimally, thus they transfer more thermal energy into the coolant, which again helps the radiator dissipate as much heat as possible as the delta in thermal energy between it and the surrounding air increases.
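The closed-loop behaviour described above can be sketched with a lumped model: treat the radiator as shedding heat in proportion to the coolant-to-air delta (a constant conductance k here, which is an assumption; real radiators aren't perfectly linear). The loop then settles where dissipation equals input power, and the amount of water only changes how long that takes:

```python
WATER_C_J_PER_KG_C = 4186.0  # specific heat of water

def steady_state_coolant_c(power_w, ambient_c, k_w_per_c):
    """Equilibrium coolant temperature: dissipation k*(T - ambient)
    balances the input power."""
    return ambient_c + power_w / k_w_per_c

def simulate_coolant_c(power_w, ambient_c, k_w_per_c, water_kg,
                       dt_s=1.0, steps=50000):
    """Forward-Euler integration of the loop's energy balance:
    net heat in each step warms the water mass."""
    t = ambient_c
    for _ in range(steps):
        net_w = power_w - k_w_per_c * (t - ambient_c)
        t += net_w * dt_s / (water_kg * WATER_C_J_PER_KG_C)
    return t
```

With a hypothetical 250 W load and k = 10 W/C at 23C ambient, the simulation settles around 48C whether the loop holds 0.3 kg or 1 kg of water; only the warm-up time differs.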

If you're looking for universal numbers you can't test with variable output components like PC components (at least not unless you go to great lengths to control them), but need a reliable and repeatable heat source with a variable output, controlled temperatures, fan speeds, amounts of water, etc., and testing at various heat loads, which will then give you a delta T number for how much said radiator can lower temperatures under those conditions (which will necessarily not include inefficiencies like CPU IHSes). Saying "X size radiator can dissipate Y watts of heat" comes with so many caveats and asterisks attached to it it's at best applicable in an extremely limited number of implementations - and certainly not across heat sources as different as CPUs and GPUs.
 
Joined
Mar 18, 2015
Messages
2,970 (0.80/day)
Location
Long Island
I'm saying that I never understood the thought process that leads folks to conclude things that defy science and logic. There are people in this day and age who believe the earth is flat, and I can't understand that either. The science in both cases is simple and irrefutable. The arguments you're making are self-defeating.

Yes, thermodynamics certainly does work that way... at least it did when I last taught it (and fluids) to college students. I hope that, in an age of alternative facts, it remains so. In all experimentation, when one variable is being examined, all other factors must be held equal; there is no "magic thermodynamics".

The "hotter the water" argument is specious because a) it's self-defeating and b) it fails to recognize that this is a closed system. When you increase the delta T at the rad to get more cooling, you decrease the effectiveness of the block. A custom loop is usually designed for a delta T of 10C: at 23C ambient, the coolant will be at 33C if the wattage loads are accurate. When I sit down and design a system, I have yet to see a result that varies by more than 0.5C.

Your "make the water hotter" solution looks like a simple explanation, but you have to remember that we are in a closed system. Let's do the math. If we raise the water from 10C above ambient to a 20C delta T, I agree, you've essentially doubled the performance of the radiator... quick and easy, no problem. Really? But wait: with a water temp of 43C, how is that going to affect the heat transfer between your GPU and the coolant? Haven't you just dropped the thermal transfer of your water block by a huge margin? Let's assume a 55C water block temperature. With a 10C delta T for your radiator and 33C coolant, you had a delta T of (55-33) or 22C at the block. Making the water 10C hotter to double radiator efficiency means you have 43C water and a delta T of only 12C (55-43) at the block.

The temperature of the water is not the determining factor. Delta T is the determining factor... at both ends... and improving one hurts the other.

Look at the radiator testing on Martin's Liquid Lab site. Radiator sizing, for any design delta T, depends on:

a) Watts
b) Fan speed
c) Pump speed / flow rate... beyond a certain flow rate (1.0 - 1.5 gpm) the effect is minimal. Radiator testing is typically done at 1.5 gpm; CLCs are typically 0.11 gpm
d) Thickness, which has minimal effect except at high fan speeds

A radiator is in no way influenced by the source of the heat. In testing, they measure the precise amount of heat required to produce a temperature differential of 10C. 300 watts is 300 watts, measured at the wall, and it's finely controlled.

You also can't use dual logic... you can't say that the VRMs, memory and PCB throw off most of the heat and then turn around and say there's no point in making sure those very same items are cooled when switching to a CLC. Either it's a lot of heat or it isn't.

If you're looking for universal numbers you can't test with variable output components like PC components (at least not unless you go to great lengths to control them), but need a reliable and repeatable heat source with a variable output, controlled temperatures, fan speeds, amounts of water, etc., and testing at various heat loads, which will then give you a delta T number for how much said radiator can lower temperatures under those conditions (which will necessarily not include inefficiencies like CPU IHSes). Saying "X size radiator can dissipate Y watts of heat" comes with so many caveats and asterisks attached to it it's at best applicable in an extremely limited number of implementations - and certainly not across heat sources as different as CPUs and GPUs.

Which is exactly what has been done! ... read the post and the links.

The radiators are not tested in a PC, which negates everything you're saying. Martin tests every rad in exactly the same manner, with all the conditions you've described. How do you explain that, when sizing rads based on Martin's data, we get a predicted delta T, and then when the systems are built the resultant delta T is right on target? The Radiator Size Estimator / Calculator created from that data has never been "off" by more than 0.5C.

Let's go one by one in your list and compare to test setup

  • reliable and repeatable heat source with a variable output - check
  • controlled temperatures - check
  • amounts of water - check... and while the test setup keeps this constant, in an installed system the volume of water does not affect steady-state conditions anyway. Thermal mass affects how long it takes to heat up to steady state and how long it takes to cool down, but once steady-state conditions are reached, volume is irrelevant. Heat gets into the system through the block and out through the rads... the end. You could argue that, theoretically, a larger reservoir has more surface area and therefore radiates more heat; try measuring it, try calculating it. It's below the instruments' margin of error.
  • testing at various heat loads, which will then give you a delta T number - check
Obviously different mounts, different TIM, etc. will result in a range of individual test results. But which of these are you saying eliminates the 400% disparity we have here? You have more than twice the heat load and half the rad... and your argument is that the up-to-2% variation from block mounting erases a 400% difference in unit loading?

Let's look at the math ... one variable at a time.

1. Is the GPU load really 250? And how does it matter if it isn't? No matter what it is, the heat load per unit of rad area is still several times higher. Suffice to say, no matter what reasonable number you put in there, it doesn't change the argument that the 1 x 120mm rad on the GPU has no shot at cooling at a level anywhere near the 2 x 120mm on the CPU. Add up all your "variables" (TIM, plate transfer efficiency) and tell us what combination gets you a heat dissipation per unit area close to that of the 2 x 120mm rad on the CPU. If the GPU is putting in more than 60 watts, you're not going to match it.

| | CPU (2 x 120mm) | GPU (1 x 120mm) | GPU (1 x 120mm) | GPU (1 x 120mm) | GPU (1 x 120mm) | GPU (1 x 120mm) |
| --- | --- | --- | --- | --- | --- | --- |
| Heat Load (watts) | 120 | 250 | 225 | 200 | 175 | 60 |
| Radiator Area (mm^2) | 28800 | 14400 | 14400 | 14400 | 14400 | 14400 |
| Heat Load / 1000 mm^2 | 4.17 | 17.36 | 15.63 | 13.89 | 12.15 | 4.17 |
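The heat-load-per-area figures above are straightforward to reproduce (each 120mm section is counted as 120 x 120 = 14400 mm^2 of face area, as in the post):

```python
# Heat load per 1000 mm^2 of radiator face area.

def load_per_1000mm2(watts, num_120mm_sections):
    area_mm2 = 14400 * num_120mm_sections
    return watts / area_mm2 * 1000

cpu = load_per_1000mm2(120, 2)  # ~4.17 W per 1000 mm^2
gpu = [load_per_1000mm2(w, 1) for w in (250, 225, 200, 175, 60)]
# ~17.36, 15.63, 13.89, 12.15, 4.17 W per 1000 mm^2
```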


2. Is the actual heat dissipation of the radiator significant? What if it's not 50 per 120mm? It seems rather obvious that a 2 x 120mm rad will be about twice as effective as a single 1 x 120mm rad. Can you explain how that changes if the capacity of each 120mm section is 50, 40, or 60? When we look at 100 / 50 ... 80 / 40 ... 120 / 60 ... isn't the first number in every pairing still twice as big as the second? Is there any heat dissipation number you can pick that supports your position mathematically? Your GPU cooler would have to have 400% of the heat dissipation capacity per unit area of the CPU cooler to get close to the 2 x 120's performance. Please give us the numbers, in %, for each factor beyond our control that can eat up that 400%.

| | CPU (2 x 120mm) | GPU (1 x 120mm) | GPU (1 x 120mm) | GPU (1 x 120mm) | GPU (1 x 120mm) | GPU (1 x 120mm) |
| --- | --- | --- | --- | --- | --- | --- |
| Heat Load (watts) | 120 | 250 | 250 | 250 | 200 | 200 |
| Watts Dissipated @ 10C | 100 | 50 | 40 | 60 | 40 | 60 |
| Delta T Required | 12.00 | 50.00 | 62.50 | 41.67 | 50.00 | 33.33 |
| Coolant Temperature Req'd | 35.00 | 73.00 | 85.50 | 64.67 | 73.00 | 56.33 |
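The Delta T and coolant-temperature rows follow from simple linear scaling of the "watts at 10C" rating, with 23C ambient assumed as in the post. A quick sketch:

```python
# If a radiator sheds D watts at a 10C water-to-air delta, shedding W watts
# needs a delta of 10 * W / D.  Ambient is assumed to be 23C.

AMBIENT_C = 23.0

def required_delta_t(load_watts, watts_at_10c):
    return 10.0 * load_watts / watts_at_10c

def coolant_temp(load_watts, watts_at_10c):
    return AMBIENT_C + required_delta_t(load_watts, watts_at_10c)

cpu = coolant_temp(120, 100)       # 12.0C delta -> 35.0C coolant
gpu_worst = coolant_temp(250, 40)  # 62.5C delta -> 85.5C coolant
gpu_best = coolant_temp(200, 60)   # ~33.3C delta -> ~56.3C coolant
```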

What do we see ? ... Let's look at the last column

We took 50 watts off what we can expect the GPU's heat output to be.
We added 20% better radiator performance for factors you believe may favor the GPU cooler.
Letting all of that slide for a moment ... how effective will a block be with a water temperature that close to its operating temperature?

And again ... we have verified Martin's test results on our own test bench.

- 3 x 140mm and 2 x 240mm
- Dual Pump Speed control from 0 - 4500 rpm
- Fan Speed control from 325 - 1250 rpm
- Radiator Inlet and Outlet temperature probes (0.1C accuracy)
- Ambient and case temperature probes (0.1C accuracy)
- Reeven 6 channel digital display (0.1C accuracy)

We can create steady state conditions on our own test bed, and we can calculate results, accurately and repeatedly ...

Relative amount of heat added by the CPU, by measuring the temperature rise from Radiator Out => CPU Block => Radiator In
Relative amount of heat added by the GPUs, by measuring the temperature rise from Radiator Out => GPU Block => Radiator In
Relative amount of heat removed by each 140mm of radiator
Delta T at varying pump speeds
Delta T at varying fan speeds

There is no mathematical model you can produce which supports the claim that:

"Saying "X size radiator can dissipate Y watts of heat" comes with so many caveats and asterisks attached to it it's at best applicable in an extremely limited number of implementations"

Look at Martin's testing procedures and equipment and name one of those caveats.

Then explain the repeatability of the results. How can there be dozens and dozens of builds in operation where folks have built systems using the Radiator Size Estimator or Martin's data and obtained results so closely aligned with the data? Looking at watercooling sites, can you find one person who says the data is inaccurate? Can you create a mathematical model including all the supposed variations which, added up, eliminates the inherent 400+% advantage?

Let's not introduce another specious argument here. We are talking about "what we can expect" within a small range. The 50 watts I used was based upon the Alphacool ST30 3 x 120mm @ 1,000 rpm (50 watts per 120mm), which is closest in thickness to most CLCs. Do other rads give different results? Yeah - double the thickness and it removes about 1 more watt. But we don't have different thicknesses here. We are talking identical blocks connected to otherwise identical radiators (120mm vs 240mm, 30mm thick, same materials), where the block has been "adapted" to fit a GPU, with identical fans at identical speeds. It should certainly be expected that the block's larger size "may" be capable of transferring more heat ... the question is, can the radiator dissipate it?

The only other thing we can be assured of is that the aluminum rads in CLCs will not do as well as the copper ones. I'd expect them to do about 45 watts at 1,000 rpm. I should add that CLCs tend to use extreme-speed fans to compensate for the cheap rads... so let's cover what happens at 2,200 rpm. Being only 30mm thick, we won't see the gains that thicker copper radiators get from increased thickness. Where the ST30 sits in 2nd place at 1,000 rpm, it drops to 11th at 2,200 rpm with just under 94 watts per 120mm ... so the differences will be greater.

Heat dissipation of a 1 x 120mm radiator @ 2,200 rpm ~ 94 watts
Heat dissipation of a 2 x 120mm radiator @ 2,200 rpm ~ 188 watts


| | CPU (2 x 120mm) | GPU (1 x 120mm) | GPU (1 x 120mm) | GPU (1 x 120mm) | GPU (1 x 120mm) | GPU (1 x 120mm) |
| --- | --- | --- | --- | --- | --- | --- |
| Heat Load (watts) | 120 | 250 | 250 | 250 | 200 | 200 |
| Watts Dissipated @ 10C | 188 | 94 | 85 | 103 | 85 | 103 |
| Delta T Required | 6.38 | 26.60 | 29.41 | 24.27 | 23.53 | 19.42 |
| Coolant Temperature Req'd | 29.38 | 49.60 | 52.41 | 47.27 | 46.53 | 42.42 |


When doing a custom loop, one generally looks to design around a Delta T of around 10C ... 15C - 20C is typically considered acceptable for CLCs ... so why design the CPU for 6.4C and the GPU for something in the mid-20s? That's the logic that is hard to understand. 1 x 120mm on the CPU would be 12-13C, and 2 x 120mm on the GPU would be about 11.75C (200 watts) or 13.3C (250 watts).
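The sizing logic described here - pick a target Delta T, then provide enough radiator that the load can be shed at that delta - can be sketched as follows (assuming, as the post does, that dissipation scales linearly with Delta T; the 50 W per 120mm figure is the post's ST30 @ 1,000 rpm number):

```python
import math

# Number of 120mm radiator sections needed for a given load and target Delta T,
# given each section's rated dissipation at a 10C water-to-air delta.

def sections_needed(load_watts, watts_per_section_at_10c, target_delta_t=10.0):
    capacity = watts_per_section_at_10c * target_delta_t / 10.0
    return math.ceil(load_watts / capacity)

cpu = sections_needed(120, 50)  # 3 x 120mm for a 10C target on a 120 W CPU
gpu = sections_needed(250, 50)  # 5 x 120mm for a 10C target on a 250 W GPU
clc = sections_needed(250, 50, target_delta_t=20.0)  # 3 sections if 20C is acceptable
```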
 
Joined
May 2, 2017
Messages
7,762 (2.66/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
I'm saying that I never understood the thought process that leads people to conclude things that defy science and logic. There are people in this day and age who believe the earth is flat, and I can't understand that either. The science in both cases is simple and irrefutable. The arguments you're making are self-defeating.

Yes, thermodynamics certainly does work that way ... at least it did when I last taught it (and fluids) to college students. I hope that in an age of alternative facts, it remains so. In all experimentation when a variable is being examined it requires that all other factors are equal ... there is no "magic thermodynamics".

The argument that "the hotter the water" is specious because a) it's self-defeating and b) it fails to recognize that this is a closed system. When you increase the Delta T at the rad to get more cooling, you decrease the effectiveness of the block. A custom loop is usually designed for a Delta T of 10C; at 23C ambient the coolant will sit at 33C if the wattage loads are accurate. When I sit down and design a system, I have yet to have a result that varies by more than 0.5C.

Your "make the water hotter" solution looks like a simple explanation, but you have to remember that we are in a closed system. Let's do the math. If we make the water hotter, as you suggested, going from 10C above ambient to a 20C Delta T, I agree, you've essentially doubled the performance of the radiator ... quick and easy, no problem. Really? But wait - with a water temperature of 43C, how is that going to affect the thermal transfer between your GPU and the coolant? Haven't you just dropped the thermal transfer of your water block by a huge margin? Let's assume a 55C block temperature: with a 10C Delta T at the radiator and 33C coolant, you had a Delta T of (55 - 33) = 22C at the block. Making the water 10C hotter to double radiator efficiency means you have 43C water and a Delta T of only (55 - 43) = 12C at the block.
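The two competing deltas in that example can be laid out explicitly (the numbers are the post's: 23C ambient, 55C block surface):

```python
# The closed-system trade-off: hotter coolant widens the radiator-side delta
# (more heat shed) but narrows the block-side delta (less heat picked up).

AMBIENT_C = 23.0
BLOCK_SURFACE_C = 55.0

def deltas(coolant_c):
    rad_delta = coolant_c - AMBIENT_C          # drives radiator dissipation
    block_delta = BLOCK_SURFACE_C - coolant_c  # drives heat pickup at the block
    return rad_delta, block_delta

cool = deltas(33.0)  # (10.0, 22.0)
hot = deltas(43.0)   # (20.0, 12.0): rad delta doubles, block delta nearly halves
```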

The temperature of the water is not a determining factor. Delta T is the determining factor.... at both ends ... improving one hurts the other.

Look at the radiator testing on Martin's Liquid Lab site. Radiator sizing, for any design Delta T, depends on:

a) Watts
b) Fan speed
c) Pump speed (rpm) ... beyond a certain flow rate (1.0 - 1.5 gpm) the effect is minimal. Radiator testing is typically done at 1.5 gpm; CLCs typically run about 0.11 gpm.
d) Thickness has minimal effect except at high fan speeds


A radiator is in no way influenced by the source of the heat. In testing, they measure the precise amount of heat required to produce a temperature differential of 10C. 300 watts is 300 watts, measured at the wall and it's finely controlled.

Ya also can't use dual logic ... to say that the VRMs, memory and PCB throw off most of the heat and then turn around and say, there's no point in making sure those very same items are cooled when switching to a CLC. Either it has a lot of heat or it doesn't.



[Quoted: the post above, in full]
Well that was a rant long enough that I thought I might have written it myself, though I guess I'm not the only one here prone to overlong explanations.

I'll try to simplify things a bit.

First off, you asked if I could find faults in the testing methodology of the site you linked. While it's kind of difficult to do that when the link to the detailed description of the test bed gives a 403 error, the description in the review you linked raises a major red flag on its own: the use of an aquarium heater as a heat source. While this is understandable in terms of wanting to eliminate variables from testing, it is so far from the intended use case (cooling computer components) that I would argue it invalidates the testing outside of like-for-like per-component comparisons (which are obviously very valuable, but not transferable beyond a relative ranking). Why? Because you are testing a device specifically designed to heat water, submerged in a tube of water (with water flowing past it on all sides, unless I misunderstand the written description - sadly there are no photos or diagrams). It thus has significant advantages compared to cooling a CPU or GPU under a water block. While a cold plate with a dense microfin stack might have more surface area than a heater like this, the heater is directly contacting the water and likely has far higher temperature tolerances than a CPU or GPU, meaning that keeping it below a certain temperature is a) easier due to direct water contact and b) less important due to no (real) risk of damage above certain temperatures, plus likely much higher absolute thermal limits. This sounds like a clear-cut case of abstraction for the sake of eliminating variables being taken too far, rendering the testing borderline meaningless, as the tested system is so far removed from real-world use cases that the thermal output of the heat source is fundamentally different from how computer components output their heat.

For relevant test data you would need a controllable CPU-analogue heat source like the ones AnandTech and GamersNexus use to verify their testing, as that allows you to simulate the heat coming from an actual CPU rather than just getting the water hot however is convenient.

A second issue is with your line of argumentation: water temperatures at a given level are not the goal of water cooling. Sure, it might be what one designs a system around (as unlike thermal input it's not variable as long as the heat is transferred into the system in the same way (which, if you didn't spot it, presupposes identical heat sources, which CPUs and GPUs aren't)), but ultimately the water delta T is meaningless if you don't also take into account the actual temperature of the component you're trying to cool. Which is where IHSes, TIM and cold plates come into play and dramatically alter the outcomes of testing, as component temperatures are the actual important part here, not maintaining a steady deltaT over ambient in your water. Heck, if that is the goal, you would be improving cooling by using a thick thermal pad rather than thermal paste on your CPU as you'd have less heat transferred into the water, thus lowering water temperatures. See how silly that gets? The point is: talking about water temperatures alone is entirely meaningless. Component temperatures are the end goal, and thus must be taken into consideration. The water loop may be a closed system, but it still has heat transferred into it on one side and out of it on the other, neither of which are intrinsic parts of said system.

Oh, btw, I just have to point this out:
Ya also can't use dual logic ... to say that the VRMs, memory and PCB throw off most of the heat and then turn around and say, there's no point in making sure those very same items are cooled when switching to a CLC. Either it has a lot of heat or it doesn't.
What the actual [bleep] are you talking about? I never said that - you pulled that inane statement out of your rear in a previous post based on ... well, I have to assume an inability to parse an argument that doesn't align with yours? Because I never said anything even remotely close to this. Please stop putting words in my mouth. In fact I specifically brought up an example of an entirely water cooled GPU (the Fury X, which cools both its VRAM and VRMs with water) to avoid further complicating the discussion (as, say, an RTX 2080 Ti with an AIO stuck on it will have some unknown and non-measurable portion of its ~275W power draw not cooled by the AIO).

Also, nice "I used to teach this stuff at University" flex. You're still wrong, and all you managed to do by saying that is make yourself look all the more condescending. Good job.

But let's get back into our (very OT, but still interesting) discussion here: how CPU and GPU cooling differ. And again, let's look at die sizes and heat density. This is necessary, as your line of argumentation entirely disregards the heat source, which is rather baffling since it both fundamentally affects the outcome of the cooling and is the actual thing we are trying to cool. I'll ignore monster-die GPUs like the RTX 2080 Ti, simply because they give too much of a cooling advantage. Let's instead (again) use the Fury X as an example. And for the sake of simplicity, let's say all the VRM and VRAM heat (around 20W for the HBM and ~17W for the VRM, assuming ~94% conversion efficiency at 275W card power draw) is output through the GPU die, even if in reality those parts have their own surface area and contact and thus would further ease thermal transfer. And let's use a 9900K for comparison, as it's one of the few CPUs capable of reaching these kinds of power draws without LN2 cooling while being priced within reach of mortals.

The Fury X then has a 596 mm2 die, the vast majority of which is CUs and thus heat-generating components. I haven't measured, but judging from die shots at least 80% of the area is actual compute components. So let's be conservative and call that 70%, or 418 mm2. Treating this as a monolithic block is reasonable, as there's just one central stripe of non-CU components across the middle of the die. Assuming 100% GPU utilization, that translates to a heat output of 275W / 418mm2, or 0.66 W/mm2.

The 9900K has a die size of ~174mm2, but only about ~60% of that is the actual cores, with the rest being the iGPU and uncore. Quick Photoshop measurements off Wikichip's annotated die shot give a per-core area of ~8.4mm2. Let's be generous and include the L3 cache, so call it 11mm2 per core (rounded up), with 8 cores grouped closely enough together to treat as a single heat source. So that's 275W / 88mm2, or 3.13 W/mm2 - 4.7x the heat density of the GPU. Even at a strictly limited 95W it would be 1.6x the heat density of the GPU.
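The heat-density arithmetic in the last two paragraphs, reproduced (die areas and the compute-fraction estimates are the poster's assumptions, not measured values):

```python
# Power density comparison: Fury X die vs 9900K cores.

fury_x_active_mm2 = 596 * 0.70             # ~417 mm^2 of CU area (70% of die)
fury_x_w_per_mm2 = 275 / fury_x_active_mm2  # ~0.66 W/mm^2

core_plus_l3_mm2 = 11 * 8                   # ~88 mm^2 for 8 cores + L3
i9_w_per_mm2_275w = 275 / core_plus_l3_mm2  # ~3.13 W/mm^2
i9_w_per_mm2_95w = 95 / core_plus_l3_mm2    # ~1.08 W/mm^2

ratio_275w = i9_w_per_mm2_275w / fury_x_w_per_mm2  # ~4.7x the GPU's density
ratio_95w = i9_w_per_mm2_95w / fury_x_w_per_mm2    # ~1.6x
```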

This obviously means that the CPU cores will get much hotter much faster (due to more thermal energy being generated in a smaller area) while also having much less surface area to dissipate the heat into the cooler. That is of course where the IHS comes into play, trying to spread the heat out to a larger area for more efficient transfer to the cooler - but also adding inefficiency to the system through multiple interfaces. Which means that not only is the CPU running hotter to begin with and has a disadvantage in thermal transfer due to less surface area, it adds to that disadvantage by slowing down the thermal transfer through additional interfaces. Thus the GPU will also better utilize the surface area given by the microfins on the cold plate. Which means that for any given thermal load a GPU will run at a lower temperature than a CPU due to more efficiently being able to dissipate its thermal energy into its cooler. The transfer inefficiency on the input side (and heat being spread and thus temperatures being lowered) also largely negates the advantage from the higher deltaT at the cold plate compared to the GPU.

Which, again, means that at the same thermal load and with the same water cooling system, any GPU die will run cooler than a CPU die with an IHS (unless you fundamentally change how either of these parts is made).


Still, we are getting to the crux of the matter here: you are talking about a system more or less arbitrarily limited to a 10C water temperature delta over ambient. I am not. The increased transfer efficiency of the GPU will of course mean that it heats the water more than the CPU (relative to die temperature, of course, but not necessarily linearly due to the CPU's inefficiencies), or conversely, that for the same amount of heat transferred into the water the CPU die will be at a much higher temperature. This is why a 275W GPU like the Fury X can run at a steady ~60C forever with a ~30mm 120mm rad and a ~2000-2500rpm fan in a ~25C ambient room, while the same cooler on a 275W CPU would in all likelihood see the CPU running at >100C and thermal throttling - as the former will be heating its thermal transfer medium (water) more efficiently, lowering input-side temperatures even if output-side temperatures are the same (until the CPU throttles, that is). Now, there are of course good reasons to limit water temperature - organic growth, galvanic corrosion, permeation, etc. - and letting water temperatures run wild is an obviously bad idea, but 10C seems like an arbitrarily low limit. With good biocides and corrosion inhibitors water temperatures can run much higher than that without issue, at least in regions where ambient temps are in the low 20s or thereabouts. That is another reason, of course, in that most AIOs are designed for global use, and thus specs must apply for the Nordics as well as the Middle East, despite the possible >20C difference in ambient temperatures. But it then stands to reason that cooler ambients would allow for higher water temps without issues, i.e. higher absolute cooling capacity than what the specs say. 
Then again, most people buying a $500 9900K won't be spending $60 on a cheap 120mm AIO anyhow, and likely don't want full-load temperatures in the 90s (which, as we've seen above, is an inherent CPU cooling trait compared to GPUs), so they'll buy overspecced AIOs to bring core temps further down. Which, again, points us back to why a 120mm AIO works fine for a 275W GPU with a ~60C temperature target but not for a 275W CPU with the same target, and why your line of reasoning, while not wrong, is myopic and limited in such a way as to render itself irrelevant to the question at hand.
 
Joined
Oct 2, 2019
Messages
79 (0.04/day)
k i uh..
i just put the stock cooler back on. i noticed some of the different brands of this r9 270x did not have heatsink contact with the memory chips

the aio was unable to handle the heat (probs because it was old)

(same layout and chips tho)
 
Joined
Sep 17, 2014
Messages
23,904 (6.16/day)
Location
The Washing Machine
System Name Tiny the White Yeti
Processor 7800X3D
Motherboard MSI MAG Mortar b650m wifi
Cooling CPU: Thermalright Peerless Assassin / Case: Phanteks T30-120 x3
Memory 32GB Corsair Vengeance 30CL6000
Video Card(s) ASRock RX7900XT Phantom Gaming
Storage Lexar NM790 4TB + Samsung 850 EVO 1TB + Samsung 980 1TB + Crucial BX100 250GB
Display(s) Gigabyte G34QWC (3440x1440)
Case Lian Li A3 mATX White
Audio Device(s) Harman Kardon AVR137 + 2.1
Power Supply EVGA Supernova G2 750W
Mouse Steelseries Aerox 5
Keyboard Lenovo Thinkpad Trackpoint II
VR HMD HD 420 - Green Edition ;)
Software W11 IoT Enterprise LTSC
Benchmark Scores Over 9000
No, memory doesn't thermal throttle (it barely does dynamic clocks at all). All that will happen when memory overheats is that it (eventually) stops working properly, likely beginning with artefacting and other glitches before becoming unstable or ceasing to work whatsoever. AFAIK GDDR5 is rated for something like 105C, so it takes some doing to overheat it with a low-power GPU.

The most notorious cases involve hot spots around the GPU die, and backplates don't help a whole lot there most of the time.

Airflow is always good to have around there, screw turbulence, you just want air to go through whatever way as fast as possible.
 
Joined
May 2, 2017
Messages
7,762 (2.66/day)
Location
Back in Norway
System Name Hotbox
Processor AMD Ryzen 7 5800X, 110/95/110, PBO +150Mhz, CO -7,-7,-20(x6),
Motherboard ASRock Phantom Gaming B550 ITX/ax
Cooling LOBO + Laing DDC 1T Plus PWM + Corsair XR5 280mm + 2x Arctic P14
Memory 32GB G.Skill FlareX 3200c14 @3800c15
Video Card(s) PowerColor Radeon 6900XT Liquid Devil Ultimate, UC@2250MHz max @~200W
Storage 2TB Adata SX8200 Pro
Display(s) Dell U2711 main, AOC 24P2C secondary
Case SSUPD Meshlicious
Audio Device(s) Optoma Nuforce μDAC 3
Power Supply Corsair SF750 Platinum
Mouse Logitech G603
Keyboard Keychron K3/Cooler Master MasterKeys Pro M w/DSA profile caps
Software Windows 10 Pro
Most notorious cases exist with hot spots around the GPU die and backplates certainly don't help a whole lot most of the time in creating those.

Airflow is always good to have around there, screw turbulence, you just want air to go through whatever way as fast as possible.
It'd be nice if things were read in context. I was responding to a very specific question there, whether VRAM can thermal throttle. To quote myself a bit earlier in the thread:
As @biffzinker said above you definitely want some cooling on GPU RAM simply due to the proximity to the very hot GPU, though with water cooling that ought to be slightly less of an issue. You can get little RAM heatsinks pretty much anywhere for quite cheap, though most of those come with rather terrible thermal adhesive pads to stick them on - they both transfer heat poorly and are prone to falling off. But they work in a pinch. On a card like that some active airflow across the chips is probably sufficient though, and mounting a fan like in your pic ought to do that job more than good enough.
 
Joined
Sep 17, 2014
Messages
23,904 (6.16/day)
Location
The Washing Machine
System Name Tiny the White Yeti
Processor 7800X3D
Motherboard MSI MAG Mortar b650m wifi
Cooling CPU: Thermalright Peerless Assassin / Case: Phanteks T30-120 x3
Memory 32GB Corsair Vengeance 30CL6000
Video Card(s) ASRock RX7900XT Phantom Gaming
Storage Lexar NM790 4TB + Samsung 850 EVO 1TB + Samsung 980 1TB + Crucial BX100 250GB
Display(s) Gigabyte G34QWC (3440x1440)
Case Lian Li A3 mATX White
Audio Device(s) Harman Kardon AVR137 + 2.1
Power Supply EVGA Supernova G2 750W
Mouse Steelseries Aerox 5
Keyboard Lenovo Thinkpad Trackpoint II
VR HMD HD 420 - Green Edition ;)
Software W11 IoT Enterprise LTSC
Benchmark Scores Over 9000
It'd be nice if things were read in context. I was responding to a very specific question there, whether VRAM can thermal throttle. To quote myself a bit earlier in the thread:

Meant no harm - I was mostly responding to your statement about temperature influence, even if your emphasis was different. It's not impossible to hit that 105C even if the GPU die temperature is much lower.

Also, I think this shouldn't be overcomplicated, and it looks as though the OP is thinking the same :) Good airflow, and yes, a heatsink will help a great deal in solving this problem on any GPU.
 