
Need a 4U case recommendation

Given the cost for a good 4U case, should I just get a Rosewill or Sliger 4U case?

I've been looking at the Sliger ones like the CX4150a, CX4170a, and CX4150i, or the CX4150e, CX4170i, and CX4200i.

As well as the Rosewill RSV-R4000U, RSV-R4100U, RSV-R4200U, RSV-L4000U, and RSV-L4500U.

I do not want to spend $400 on a case, though. It will need rails, since it is going in a 4-post rack.

I will be putting a Gigabyte X570 Aorus Master ATX board in there with a Noctua NH-D12L ~147mm cooler. I don't need a bunch of drive bays since this will be an all-flash server, but having 1 or 2 5.25" bays would be handy for adding an Icydock or similar hotswap bay.
 
I don't think there's a way to avoid spending $400. Homebrew DIY build cases and server racks are a pretty tiny niche, and you're narrowing that niche further by restricting your choice to US-only brands. The only reason consumer ATX/E-ATX cases are cheap is economies of scale. A $200 rack-mountable E-ATX case is unlikely to be anywhere near the same quality as a $200 desktop tower. The first Sliger I looked at from your list is $239 bare; you'll need to buy rails and internal frames, probably bringing the total to over $400 including shipping.

I buy Silverstone or Supermicro on the rare occasions when I need to rackmount standard ATX consumer hardware, and I cannot comment on Rosewill or Sliger as they don't typically leave the US and are therefore irrelevant to the global market. The Silverstone and Supermicro options are typically £250/€300 for a pretty basic chassis that includes almost nothing, not even rack rails.
 
I don't need a rack. I already have a rack. It's a 4-post rack; all I need is the case. Supermicro 4U cases that are NOT limited to custom Supermicro form factor boards start at $800 (bare) or more.
I'm building a game server. I have all the parts minus the case.
I'm fine spending $400 on a case + rails. I'm not fine dropping $400 on a bare case.

Rails from Sliger are $100. Rails from Rosewill are ~$50.
 
When I said frames, I meant internal frames for the 4U cases to adapt to various card/motherboard layouts, not a rackmount frame (or cabinet, as I normally call them).

The Sliger stuff looks the most modern but costs add up quickly with brackets, rails and other things you're likely to need.
All of the affordable Rosewill stuff looks like 15-year-old traditional storage-server hardware with a focus on drive bays.

Look for a 4U mining case. I always used Veddha open frames, but the Inter-Tech 4W2 was always a popular, affordable mining rack - and they're better suited to a modern PC in terms of cooling and oversize GPU support, and are typically geared up for ATX PSUs.
 
As a 2U rack user from the early era I can confidently say these aren't going to be cheap at any size.
The identity of these boxes is "space is at a premium" so anything 4U and up may as well be desktop.
I initially went server tower because of a fully integrated water cooling solution that I knew was needed.
The 2U was a ~5yr later purchase and my half-assed attempt at retiring the system home to that tower.

My best option at the time was through NORCO. They basically had an answer for everything.


A lot has happened since then and they may or may not have been bought out and rebranded as Rosewill.
Either way, your best shot at getting something 3U or 4U at bottom tier prices is gonna be flea market.
Looking at Rosewill and Sliger, this is what you're working with online:
[screenshot: current Rosewill and Sliger listings]


It is not a good time to join the circus unless you know exactly what you're doing.
By that I mean knowing when (and how) you want to get out of this before getting in.
Assuming this is long term, needs/tastes can and will change a lot over time.
From a single 2U, my only issues have been a lack of HHHL brackets and management.

That makes me the odd one out. Over time you're going to be looking for stuff like:
  • Hot swap bays
  • Adjustable cooling rails
  • Thermal alarm probes
  • Multiple 1GbE NICs that play nice in VMware/Proxmox/etc.
  • Video out devices that don't immediately crash under a Linux desktop
  • Highly converged networking devices (10GbE+)
  • HBAs to handle tons of drive/RAID setups
I will be putting a Gigabyte X570 Aorus Master ATX board in there with a Noctua NH-D12L ~147mm cooler.
I don't need a bunch of drive bays since this will be an all-flash server
having 1 or 2 5.25" bays would be handy for adding an Icydock or similar hotswap bay.
What you describe would be the equivalent of uprooting my AMD 970-based FX rack and retiring my X570 and R5 3600 combo.
If you are seriously okay with something of this nature as the performance floor to host a game server or high bandwidth server...
First of all, good job, another brave soul. Next... the 147mm Noctua cooler is effectively 6" and change when accounting for space.
What I'm saying is it seems to be the only thing pushing you towards a 4U decision. I really want to suggest you find something else.

There's also the issue of the X570 Master being a full size ATX board, which...I get it. If I had to do this again I'd consider the RSV-R4000U.
Depending on front I/O and security requirements, at sub-$200 this one might actually be a steal in current year.
Consider your needs and then throw them all out the window because you will never be anywhere as needy as your customers.
[screenshot: Rosewill RSV-R4000U listing]


That's a deep boi.
 
I ended up getting the Sliger CX4150i.
I went that route because I have an Icydock 2x 2.5" SSD hotswap bay that I'm using for the OS drives. All game server files are going to live on an Intel DC P3700 2TB AIC, and I will have a couple of basic NVMe drives for any caching that is needed (Redis, etc.).
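For anyone wondering what I mean by Redis caching, here's a minimal sketch using redis-py. The key name, TTL, and load_player_stats() are made-up placeholders, not anything from the actual panel or plugins:
Code:
import json
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

def load_player_stats(player):
    # Hypothetical slow lookup that would normally hit disk or a database.
    return {"player": player, "playtime_hours": 0}

def get_player_stats(player):
    key = f"stats:{player}"               # hypothetical key naming scheme
    cached = r.get(key)
    if cached is not None:
        return json.loads(cached)         # cache hit
    stats = load_player_stats(player)     # cache miss: do the slow lookup
    r.setex(key, 300, json.dumps(stats))  # keep the result around for 5 minutes
    return stats

print(get_player_stats("Steve"))
Nothing fancy: check Redis first, fall back to the slow path, and stash the result with a TTL.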
The cooler is a Noctua NH-D12L that is specifically designed for 4U cases. It comes in at around 145mm.
I had originally purchased a RackOwl 4U case but that was a POS and DOA so I returned it.

This will be the 3rd game server hosting box I am standing up. I also have a Supermicro 6028U X10DRI-TR4T server, a Dell PowerEdge R720, and 2x R710 servers that are all for game hosting. I am replacing the R710 servers (old and crapstatic) with this server. The Supermicro has quad 10GbE ports on it, so I will be adding a 10GbE NIC to this server so I can direct-connect the two. This server will be dead silent compared to basically all the other servers in the rack.
My OS drives will be a pair of 480GB Intel DC S3610 SSDs since they have really high write endurance and power loss protection.
The OS will be either Debian 12 or Rocky Linux 9, headless, and all servers are managed by the Pterodactyl/Pelican panel.
IPMI/BMC duties are handled with a PiKVM with Tailscale.
I am ~850 mi away from the rack, so I need to be able to remotely manage these things.
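Since everything is remote, I do a dumb reachability check before assuming the worst. A minimal sketch, assuming the PiKVM answers on its Tailscale MagicDNS name over HTTPS; the hostnames and ports below are placeholders for my actual nodes:
Code:
import socket

# Hypothetical tailnet node names and ports; replace with the real ones.
HOSTS = [("pikvm", 443), ("gameserver1", 22)]

def is_up(host, port, timeout=3.0):
    # A plain TCP connect is enough to tell "node reachable" from "node down".
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for host, port in HOSTS:
    print(f"{host}:{port} {'UP' if is_up(host, port) else 'DOWN'}")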
The primary use for the SM and this server is Minecraft cluster and Veloren hosting.
I am mostly reusing existing hardware so I could save a buck.
128GB DDR4 3200 non-ECC RAM and the 5900X.

I have looked at used enterprise servers, and yes, they are cheap, but they're stupidly inefficient, loud, and generally slow AF. You generally need high-clockspeed, strong single-thread chips, and Xeons are generally not that, especially the Sandy Bridge/Ivy Bridge-era Xeons you can get for cheap.
That Supermicro server sounds like a 747 on takeoff 24/7 for no apparent reason, and it does not offer the ability to replace the 1kW PSUs with the SuperQuiet models, so I am stuck with the 40mm 25,000 RPM fans.
The R720 was bought by someone else and donated to the rack. I would personally put it through a shredder if I owned it. That server on its own is an Ark: Survival Evolved server host and is slow AF.
Whoever donated it installed Crucial BX500 SSDs for the OS RAID, and it shows. The game servers on that box are hosted on a RAID 5 array of 10k spinning rust. I timed the startup for just one Ark server on that box: ~30 min to start one of the 13 Ark servers.

I have built other servers that were not rack mounted and it was a walk in the park, but my friend who hosts the rack and is the "owner" of the gaming community demanded I do rack servers from now on because he has a rackmount fetish or something. If he didn't require me to do rackmount, I'd have slapped this into a Fractal case or something relatively cheap and called it a day.

The DC P3700 AIC, if you aren't aware, has a 17 DWPD rating with a write endurance rating of 60 PBW, a life expectancy of 230 years, and an MTBF of 2 million hours.
  • Sequential Read: Up to 2,800 MB/s
  • Sequential Write: Up to 2,000 MB/s
  • Random Read (4K, QD32): Up to 460,000 IOPS
  • Random Write (4K, QD32): Up to 175,000 IOPS
  • Latency (Read/Write): 20 µs / 20 µs

  • Endurance: Up to 17 Drive Writes Per Day (DWPD) over 5 years
  • Mean Time Between Failures (MTBF): 2 million hours
  • Uncorrectable Bit Error Rate (UBER): 1 sector per 10^17 bits read
  • Power Loss Protection: Enhanced power-loss data protection

  • Active Power Consumption: Up to 25W (write), 11W (read)
  • Idle Power Consumption: 4W
  • Operating Temperature: 0°C to 55°C (ambient) with specified airflow
  • Airflow Requirement: 300 LFM (Linear Feet per Minute)

  • Interface: PCIe 3.0 x4, NVMe 1.0
  • NAND Type: Intel 20nm High Endurance Technology (HET) Multi-Level Cell (MLC)
  • Capacity: 2TB
  • Controller: Intel-developed NVMe controller

  • DWPD = 17
  • Drive Capacity = 2 TB
  • Warranty = 5 years
PBW = 17 DWPD × 2 TB × 365 days × 5 years = 62,050 TB, or ~62 PBW (in line with the ~60 PBW spec)
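Same arithmetic as a tiny sanity-check script, using only the numbers above (decimal TB/PB, like the spec sheet):
Code:
def petabytes_written(dwpd, capacity_tb, warranty_years):
    # Total rated writes over the warranty period, converted from TB to PB.
    total_tb = dwpd * capacity_tb * 365 * warranty_years
    return total_tb / 1000

print(petabytes_written(17, 2, 5))  # -> 62.05, i.e. ~62 PBW for the 2TB P3700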

This drive should be more than enough for Minecraft and Veloren.
 
I ended up getting the Sliger CX4150i.
I went that route because I have an Icydock 2x 2.5" SSD hotswap bay that I'm using for the OS drives.
This seems to be a good idea for a production environment that needs frequent hotpatch testing or quick recovery options.
I would never do this, but I see the need for it. I typically image a working volume, then clone it to a disk TBD during an emergency.
I don't keep any hot or cold spares anymore.
All game server files are going to live on an Intel DC P3700 2TB AIC, and I will have a couple of basic NVMe drives for any caching that is needed (Redis, etc.).
These are some high $$$ PCI-E parts. I'll normally toss a $30 WarpDrive at such a problem and call it a day. Is the cache demand really that high?
The R720 was bought by someone else and donated to the rack. I would personally put it through a shredder if I owned it. That server on its own is an Ark: Survival Evolved server host and is slow AF.
Whoever donated it installed Crucial BX500 SSDs for the OS RAID, and it shows. The game servers on that box are hosted on a RAID 5 array of 10k spinning rust. I timed the startup for just one Ark server on that box: ~30 min to start one of the 13 Ark servers.
Okay, definitely needs high core clocks and an extremely LARGE cache. Good job.
I have built other servers that were not rack mounted and it was a walk in the park, but my friend who hosts the rack and is the "owner" of the gaming community demanded I do rack servers from now on because he has a rackmount fetish or something. If he didn't require me to do rackmount, I'd have slapped this into a Fractal case or something relatively cheap and called it a day.
If distance and management are the main pains of the setup, it's probably something to do with storage space, spare parts, and a LOT of other infra going on, since your friend is the one who has to live/operate in that environment. I get it: it's not just the game servers, it's his life. Anything easier on the ears is a huge +++.

The only issue I get from all this is that it sounds like you guys are frequently shuffling components around to improve this and that. I'm not an Ark player, but I've also never had a personal need to run more than 4 game servers at a time, let alone multiple instances of a game or anything as ridiculous as Ark, Rust, Palworld, V Rising, or whatever insanely modded Minecraft is popular at the moment. Have you run into any revolving-door CPU spikes on the X570? I know any Xeon system would crumble immediately.
 
The DC P3700 2TB cost me ~$110 on eBay for a drive with 99% life left.

I've never had issues with the X570. I ran Star Citizen on it, and that game is heavy on CPU usage.
It was not my decision to run that many Ark servers. It's a gaming community, and a bunch of the people wanted every map Ark has, so it's a server-instance-for-each-map sort of thing. Each instance takes ~12GB of RAM... which is nuts.

As for Minecraft, the server cluster is running Purpur (a customized PaperMC fork focused on performance) and ~15 Paper/Spigot plugins. There will be a 10GbE direct link between the SM and this server so the Velocity proxy can route traffic for specific servers to the different machines, i.e. multiple worlds spread between the two servers. The Supermicro will not have the P3700; it will instead have 10x S3610 480GB SSDs in a soft RAID or ZFS pool. That server has 2x E5-2690 v4 chips.
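I haven't settled on the pool layout for the 10x S3610s yet. A rough sketch of the usable-capacity trade-off; it's raw arithmetic only and ignores ZFS metadata overhead and free-space headroom, so treat the numbers as ballpark:
Code:
# Ballpark usable capacity for 10x 480GB SSDs under a few pool layouts.
DRIVES = 10
SIZE_GB = 480

layouts = {
    "striped mirrors (5x 2-way)": (DRIVES // 2) * SIZE_GB,  # half the raw space
    "RAIDZ1 (single parity)":     (DRIVES - 1) * SIZE_GB,   # one drive of parity
    "RAIDZ2 (double parity)":     (DRIVES - 2) * SIZE_GB,   # two drives of parity
}

for name, usable_gb in layouts.items():
    print(f"{name}: ~{usable_gb / 1000:.2f} TB usable")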
 