
Why don't cases separate CPU from GPU for thermal reasons?

The two main sources of heat are the CPU and GPU. Why are they located in the same compartment? Wouldn't it make more sense to separate them so that case fan curves for the different zones could reflect whatever component was actually in there? I heard that a riser cable for a GPU doesn't impact performance significantly, so this should be entirely possible.

I say this as a user who is considering an AIO because I want better airflow for my mobo and GPU, not because I think the CPU needs it.
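For the curious, here is a minimal sketch of what I mean by per-zone fan curves. Everything in it (zone names, breakpoints) is made up for illustration; real control would live in the BIOS or a tool like FanControl, not a script:

```python
# Hypothetical two-zone fan control: one curve keyed to CPU temperature,
# one to GPU temperature. All breakpoints are invented for illustration.

def fan_duty(temp_c: float, curve: list[tuple[float, float]]) -> float:
    """Linearly interpolate a PWM duty (%) from (temp, duty) breakpoints."""
    if temp_c <= curve[0][0]:
        return curve[0][1]
    for (t0, d0), (t1, d1) in zip(curve, curve[1:]):
        if temp_c <= t1:
            return d0 + (d1 - d0) * (temp_c - t0) / (t1 - t0)
    return curve[-1][1]

# Separate curves per compartment: the GPU zone can idle quieter and
# ramp later than the CPU zone, instead of one curve chasing both.
cpu_zone = [(40, 30), (70, 60), (85, 100)]
gpu_zone = [(45, 20), (75, 55), (90, 100)]

print(fan_duty(65, cpu_zone))  # 55.0 % duty for CPU-zone fans at 65 °C
print(fan_duty(65, gpu_zone))  # ~43.3 % duty for GPU-zone fans at 65 °C
```

The point being: with physically separate zones, the two curves could actually diverge like that, without one component's heat polluting the other zone's readings.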
eGPUs already do this, so nice that the future is now. ;)
 
BTW, what you're asking *used to be* standard. Every single pre-built desktop PC I've had hands on that was made before the 20-teens separated the CPU's airflow from the rest of the case in some manner.
IIRC, it was part of Intel's 'Thermally Advantaged Chassis Design' guidelines.
Ummm, I think you are confusing something with something else - though I'm not sure what with what. There never was such a "standard". There were several essentially futile attempts to bring cool air into the case and directly onto the CPU using side panel tubes with fans, but as I said, they proved to be futile - in part because more and more tower/side-firing coolers started to appear on the market.

Plus, we soon learned that side panel fans tended to disrupt the desired front-to-back flow of cool air.

Then the use of alternative cooling solutions (water blocks and radiators) put nails in that coffin, and the concept of Thermally Advantaged Chassis essentially died.

Pets4Ever said:
Why don't cases separate CPU from GPU for thermal reasons?
Many reasons.

It is not just up to case designers to make that decision. The ATX Form Factor standard is managed by a consortium of manufacturers from all over the computer industry, from makers of cases, motherboards, PSUs, expansion cards, and more.

The CPU is located on motherboards of different sizes in a specific, ATX Form Factor defined place. Why? So case makers and motherboard makers know where to put additional mounting points in the same places to ensure there is enough support for heavy CPU coolers, regardless of the case or motherboard.

Expansion card slots are positioned on motherboards and cases in specific, ATX defined places so inserted cards line up properly with cases, regardless of the motherboard or the case.

And of course, there is the whole rear panel I/O connection section, specifically located in a defined location on every ATX compliant motherboard so that it will align properly with the rear panel I/O area of the case.

The ATX Form Factor standard was created for us consumers (and so 3rd party HW makers could compete with IBM). It is what allows us to avoid proprietary solutions: we can choose to put an ASUS AMD motherboard in an FD case with a Gigabyte graphics card, Samsung RAM and drives, and a Seasonic PSU. Then tomorrow, swap all that out and put in an MSI Intel motherboard with EVGA graphics, WD drives, Kingston RAM, and a Be Quiet! PSU and have confidence they all will properly fit physically, with compatible connectors and voltages, and run just fine. And then the next day, swap everything into a Coolermaster case.

Even things like PSU mounting screws (size, thread count, etc.) are controlled by the ATX standard. And that is a very good thing for us consumers.

Without such a standard, the build-it-yourself PC industry would essentially be non-existent - just as the build-it-yourself laptop industry is. Everything would be proprietary, more expensive to buy, more expensive to repair, and upgrade options (if any) would be very limited and more expensive too.

Adherence to the ATX standard is getting looser and looser.
Kinda, sorta, but not really. There are exceptions, of course, but if you buy a mid-tower ATX compliant case, for example, you can still mount any ATX compliant motherboard, any ATX compliant power supply, and any ATX compliant drive in there. If you can't, then it is not an ATX compliant case and as such, will likely have several proprietary features - typically not good when it comes to future upgrade options.

Look at modular power supplies, for example. The ATX Form Factor standard ensures the voltages and the component ends of the power cables are standard. But there are no such standards for the PSU end. For that reason, we cannot, without careful research, mix and match modular cables from different supplies and be certain of compatibility - often not even from the same brand! :( Not good!

Manufacturers are able to design cases with isolating compartments for the PSU simply because PSUs connect to components by cables already. But there are major disadvantages to using cables to interconnect other major components.

There is a reason Intel and the other ATX members designed the standard to position the CPU, the graphics solution and RAM as close as possible to each other. That is to decrease the distance between them in order to reduce "transport latency" across the motherboard bus.

"Transport latency" is the time it takes for a request/response to be transmitted to/from processing components. Increasing the distance between the components increases transport latency. NOT GOOD!

Riser cables, regardless of how good, add distance between the CPU and GPU. They also add two additional connectors. No connector is 100% efficient (compared to a straight wire).
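For scale, here's a rough back-of-envelope estimate of the propagation delay a riser adds, assuming a signal velocity of roughly two-thirds the speed of light in copper (the length and velocity figures are illustrative, not taken from any spec):

```python
# Back-of-envelope: one-way propagation delay added by a riser cable.
# Illustrative numbers only, not from the PCIe or ATX specs.

C = 3.0e8              # speed of light in vacuum, m/s
V = 0.66 * C           # rough signal velocity in copper cabling (~2/3 c)

riser_length_m = 0.30  # a common 300 mm riser
added_delay_s = riser_length_m / V

print(f"~{added_delay_s * 1e9:.1f} ns added each way")  # ~1.5 ns
```

Whether a nanosecond or two matters in practice is debatable; arguably the two extra connectors are the bigger concern, since they degrade signal integrity rather than just adding distance.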

If you're not concerned with transport latency across the motherboard bus, then I would suggest sticking with integrated graphics (today's are quite good) and not worrying about a graphics card, its cooling, or riser cables.

I say, keep case design the way it is.

And I say, AMD and NVIDIA and the various graphics cards makers need to improve efficiency of their GPUs and cards to reduce heat generation. And they need better cooling solutions to better extract the heat out the back QUIETLY (I hate fan noise).
 
Dell used to have a greed duct over the rear fan to pull the heat from the CPU cooler directly out of the case.
 

Attachments

  • CPU Duct.jpeg
Or.. just hear me out here..

Turn up your case fans and GPU heat will be a non-issue :)
 
Turn up your case fans and GPU heat will be a non-issue :)
Then crank up your THX surround sound speakers and fan noise will be a non-issue too! :D
 
Then crank up your THX surround sound speakers and fan noise will be a non-issue too! :D
And soon you'll need hearing aids which will give you another level of audio customization! =D
 
Then crank up your THX surround sound speakers and fan noise will be a non-issue too! :D
How loud are regular fans? Like a whisper or a bit more :D
 
And soon you'll need hearing aids which will give you another level of audio customization!
Too many years around military aircraft, Led Zeppelin, Pink Floyd and AC/DC already made that almost inevitable.

How loud are regular fans? Like a whisper or a bit more
That, of course, depends on the quality of the fans, the size of the fans, and how fast you have them spinning. The trick then is to do your homework when buying your case to make sure it supports lots of large fans.

Two fans spinning slowly can move more air and still make less noise than one fan spinning at full speed.

And bigger fans can move more air while spinning more slowly, and more quietly than smaller fans spinning fast.
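To put rough numbers on that: the standard way to combine incoherent noise sources is L = 10·log10(Σ 10^(Li/10)), so a second identical fan only adds about 3 dB, while spinning one fan much faster raises its noise far more than that. A quick sketch (the dB(A) figures are made up for illustration):

```python
import math

def combined_spl(levels_db):
    """Combine incoherent sound sources: L = 10*log10(sum(10^(Li/10)))."""
    return 10 * math.log10(sum(10 ** (lv / 10) for lv in levels_db))

# One fan near full speed vs. two slower fans (illustrative dB(A) figures).
one_fast = combined_spl([36.0])        # single fan at high RPM
two_slow = combined_spl([24.0, 24.0])  # two quiet fans together: +3 dB

print(f"one fast fan : {one_fast:.1f} dB(A)")  # 36.0 dB(A)
print(f"two slow fans: {two_slow:.1f} dB(A)")  # 27.0 dB(A)
```

So two slow fans can move comparable (or more) air while landing well under the single screamer on the noise meter.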

It helps too to have the case sitting somewhere other than on your desk, next to your head.
 
That, of course, depends on the quality of the fans, the size of the fans, and how fast you have them spinning.
And also what kind of grille you have behind them:

 
@Beertintedgoggles
I think you meant green.
A "greed cover" sounds like an insurance policy you might need in the USA :D

@Pets4Ever
Because we basically started liquid-cooling the CPU/GPU, like much other stuff.
Reinventing the case is more costly, might not work with the next GPU two gens down the road (to simplify), and is very unlikely to be adopted worldwide.
E.g., how many people do you know who have the same case? And most don't choose performance as the priority.

Nowadays there are plenty of mid-sized cases with the option to mount 240/280 rads/fans to the "right" side panel, so you can mount a rad to dump the heat (CPU, loop, or AIO) straight outside the case, while front/bottom are intakes and rear/top exhaust for case flow, lowering temps by about a 30 °C delta.
So not only will you gain lower case temps for the GPU; all the other stuff like RAM/PWM/VRM/drives/PSU will have much lower temps as well.

No case "grilles" here; Dremel-like tools take care of them.
And even if you have pets/kids that don't learn, those aftermarket grilles with 2 or 3 "circles" of material will still be less restrictive than any case grille/cover I have seen in the past 25 years.
 
Heat builds up quicker in a smaller space, and it adds other challenges as well. I have had a compartmented case before, and in my opinion it was a design failure.
 
There is a reason Intel and the other ATX members designed the standard to position the CPU, the graphics solution and RAM as close as possible to each other. That is to decrease the distance between them in order to reduce "transport latency" across the motherboard bus.
Well yes, and I was referring to layout; neither of those distances, however, mandates airflow directions. Also, the distances to the PCIe slots will vary both physically and logically. The only noticeable optimization from those distances is likely RAM OC (like on ITX) with shorter traces.

I do agree 100% on the standardization; the layout would still benefit from change from a cooling and airflow POV. Change that in no way imposes any noteworthy latency.
 
@io-waiter
Except it:
1. is only really relevant if both CPU and GPU are air-cooled.
2. is something that at least 70%+ of users will not care about (read: the masses).
3. will cost case makers a lot, vs. not caring (and keeping on making traditional stuff).
4. becomes quickly irrelevant once both are LC, usually for the same or less than "optimizing" the case (interior).
Look at ITX/SFF and compare the options/offers to ATX or even mATX; companies won't see much return on any investment.

And for me, that doesn't even include the fact that no air cooling will get my rig as quiet as water, or I wouldn't be running a loop.
 
That, of course, depends on the quality of the fans, the size of the fans, and how fast you have them spinning. The trick then is to do your homework when buying your case to make sure it supports lots of large fans.

Two fans spinning slowly can move more air and still make less noise than one fan spinning at full speed.

And bigger fans can move more air while spinning more slowly, and more quietly than smaller fans spinning fast.

It helps too to have the case sitting somewhere other than on your desk, next to your head.
I choose high-airflow cases to get good performance: about as quiet as it gets at low speeds, and still quiet while moving a pretty good amount of air (for the noise).

You can have monstrous airflow too, but that comes with monstrous noise.. and depending on the fans, you could hear them through different levels of your house lol..

This case that I am using now comes with three good, quiet fans that move a lot of air. The 200x38s are quieter than Fractal's 180x38s while moving more air.. not bad.
 
I've been mounting all of my AIO radiators externally since the Corsair H80 came out (July 2011). I initially began doing it to gain space inside the case but also discovered a nice drop in CPU temps as well. Also, being able to use 38mm fans (and in push/pull if I wanted to) was another added bonus. Most cases require Dremel surgery, some major and others minor, but a few require no cutting at all. I've posted pictures of some of them previously.
 
Yes

imgur.com/a/lian-li-x-noctua-URqWPPU
With 12 fans, how are you failing to keep your CPU temp under load under control? That thing should be almost as cool as an AIO would cool it.
 
Well yes, and I was referring to layout; neither of those distances, however, mandates airflow directions.
Air flow directions? Huh? The direction does not matter. Nor should it! It does not matter if you have good front-to-back or back-to-front flow as long as you have lots of it. You can even go bottom-to-top or top-to-bottom AS LONG AS there are no obstacles in the way, like a power supply or a lack of case vents.

The most common is front-to-back for two reasons: (1) PSUs typically exhaust out the back, and (2) double- and triple-wide graphics cards typically exhaust out the back. Neither has anything to do with motherboard layout or the distances between the major components over the motherboard bus.
Also, the distances to the PCIe slots will vary both physically and logically.
No they won't. To suggest logically just does not make... well... logical sense. The logic to establish communication between the CPU and GPU would not vary just because a slot is further away. Both processors already know how to deal with wait states.

And as I explained above, the physical location of the CPU is standard on motherboards and in cases to ensure there is enough support (via case and motherboard mounting points mating with case standoffs) under the board (regardless of size: µATX, ATX, or EATX) for heavy CPU coolers - not to mention the heavy hand often needed to secure the cooler in place.

And the location of the PCIe slots is standardized too, for the same reason - so any ATX compliant motherboard will be compatible with any ATX compliant case.

Note, in terms of PCIe slots, the main difference between a µATX board and an EATX board is that there are more slots on the lower portion of the board. The first (top) PCIe slot is still located in the exact same place physically and is therefore the same distance from the CPU.

If this were not true, we could not put a µATX motherboard in a full tower case. But of course, we can.

With 12 fans, how are you failing to keep your CPU temp under load under control? That thing should be almost as cool as an AIO would cool it.
Did you forget the 3 graphics card fans and the fan(s) inside the [invisible?] PSU?

I assume (hope) the bottom fans are intakes blowing up and the top fans are exhaust. I also assume (hope) the CPU cooler fans are all blowing in the same direction (towards the rear of the case). But I wonder if the 3 side/front fans are not creating unwanted turbulence that actually disrupts, and is counterproductive to, the desired flow through the case? An improper application or broken bond of TIM can thwart the heat-transfer process too.

The 200x38s are quieter
I had a great Antec case that supported Antec "Big Boy" 200mm x 30mm fans. :) They were so quiet I forgot they were there until, one day, I stuck my hand in there to make sure a DVD drive cable was securely fastened and the fan chopped a hunk of flesh off my knuckle before breaking and rocketing the offending blade about the case interior and into my cheek, drawing even more blood. Of course I knew better than to check cable connections with the system running - but lazy stupidity, with a touch of cockiness, obscured my better judgement - again. :rolleyes:
 
With 12 fans, how are you failing to keep your CPU temp under load under control? That thing should be almost as cool as an AIO would cool it.
Maybe follow the actual previous messages so you know what I responded to.
 
No they won't. To suggest logically just does not make... well... logical sense. The logic to establish communication between the CPU and GPU would not vary just because a slot is further away. Both processors already know how to deal with wait states.
I don't necessarily disagree, but sometimes PCIe has logic between it and the CPU, and while I'm not 100% sure, I don't think trace length and physical location are the same (e.g., 2 vs 4 RAM slots, layers of PCB).

My attempt at a point was that the BTX layout is better, and the airflow that ATX tries to force on us is not optimal.
 
You are changing, or for the benefit of the doubt, clarifying your points now.
but sometimes PCIe has logic between it and the CPU
No "sometimes". All the time. The CPU and GPU must know how to communicate at all times. That requires logic, or computer programming based on established protocols.

That is not what you said earlier. Before, you replied to my comment about ATX boards claiming the distances between the CPU and PCIe slots vary. No they don't - at least not to the first PCIe slot, which is where graphics cards typically reside.

My attempt at a point was that the BTX layout is better
BTX is a better layout for cooling, but that turned out to be its only significant advantage. BTX just has (had) too many disadvantages to succeed, and more importantly, to supplant ATX. Had BTX supported more ATX standard devices (PSUs were a biggie), allowing users to carry over more of their components and save big money, BTX might have seen better success. But sadly, too many ATX components were not compatible.

The timing for BTX didn't help either, as about the same time it came about, AMD, Intel, and NVIDIA (RAM makers too) started producing lower-voltage, more efficient devices that generated less heat and thus required less cooling. And, again at the same time, cooler makers started making better coolers, advanced TIM came on the market, and case makers started making better-designed cases that supported more and larger (more CFM) fans.

In effect, BTX was a proprietary format. Never good. Plus, BTX cost more.

Today, ATX is too entrenched. Intel (the developers of both ATX and BTX) stopped BTX development way back in 2006.

And for sure, ATX case makers have made significant design improvements to help with case cooling, like moving the PSU to the bottom of the case or even putting it in its own chamber. Remember, way back when ATX came out in 1995, many if not most cases supported 1 x 80mm fan in back only. Some relied entirely on the PSU's fan. :( Today, almost all standard mid tower cases support multiple 140mm fans in front and back.

What REALLY needs to happen is AMD, Intel and NVIDIA need to make devices that aren't blast furnaces. And I see that happening - just not quick enough.
 
BTX is a better layout for cooling, but that turned out to be its only significant advantage.
I still have my Stacker STC-T01 (in BTX form); it's been sitting under my basement stairs for like 15 years lol.. even has wheels :D

Anywhoo.. even back then I was into high heat applications, and pushing the limits of what modern at the time hardware could really do.

For CPU/GPU thermals, it is great. You still need to cool the Northbridge though, and for that.. I use a 120x38 Panaflo sitting on my TRUE wedged with a USB thingy from the mobo box haha..

I should slap my Rampage Formula into it. But it is no fun with a quadcore.. the real fun is with an E8500 or E8600.

I forgot, I did run my Rampage III Formula in it with 2x GTX570 and X5690 ES, and it was my daily and folding rig till I bought a Define R4 when they were new on the shelf.

I really wish I had the squirrel cage blower for it.
 
As a watercooler I don't have that problem, but something like putting the GPU card on a riser and isolating it from the rest of the PC could be a working solution?
 
I don't have that problem
Sure you do, you just accept the higher temps on components you do not care about as much as what you are watercooling :D
 
OP would love the old Mac Pro towers. CPUs and RAM in their own compartment on a tray with their own cooling, and the GPUs in a big compartment above. Love or hate Apple, those things were a fantastic design and beautifully made.
 