
Silent Plex/Backup Server

Joined
Mar 14, 2016
Messages
130 (0.04/day)
Location
Iowa
System Name Gateway
Processor AMD A4-5000
Motherboard Laptop
Cooling Laptop
Memory 8 GB DDR3
Video Card(s) Radeon HD Graphics
Storage 1 TB
Software Windows 10
I wouldn't run the 8TB on its own without a backup unless it's used as intended, as cold storage. If it's critical data, don't leave it only on the 8TB; keep an extra copy. You should be doing this REGARDLESS of which drive you're using for data, not just the 8TB.

I can report it works well formatted as NTFS and ZFS. It definitely will not work well under BTRFS if you opt for that, due to problems within the file system itself; I've tried it and it was horrible. But all in all they're decent drives for the price. In my case I mirror the critical data from my RAIDZ (RAID5-equivalent) array to it so I have an extra copy, and keep other non-critical stuff on it. In the end it's your data and this is just my 2c.
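The mirroring I do is nothing fancy; here's a rough sketch of the idea in Python (the paths are placeholders for my pool and the backup disk, and real tools like rsync or zfs send do this better):

Code:
# Minimal one-way mirror of a critical-data folder to a second drive.
# Paths are hypothetical placeholders -- point them at your own array and backup disk.
import shutil
from pathlib import Path

SOURCE = Path("/tank/critical")      # e.g. a dataset on the RAIDZ array
DEST = Path("/mnt/backup/critical")  # e.g. the standalone backup drive

def mirror(src: Path, dst: Path) -> None:
    """Copy files that are missing or newer on the source; never deletes on the destination."""
    for item in src.rglob("*"):
        target = dst / item.relative_to(src)
        if item.is_dir():
            target.mkdir(parents=True, exist_ok=True)
        elif not target.exists() or item.stat().st_mtime > target.stat().st_mtime:
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(item, target)

if __name__ == "__main__":
    mirror(SOURCE, DEST)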

I was planning on this all said and done:

PCPartPicker part list / Price breakdown by merchant

CPU: Intel Core i5-6500 3.2GHz Quad-Core Processor ($194.99 @ Newegg)
CPU Cooler: Cooler Master Hyper 212X 82.9 CFM CPU Cooler ($39.99 @ Newegg)
Motherboard: ASRock B150M Pro4S Micro ATX LGA1151 Motherboard ($78.99 @ Newegg)
Memory: Corsair Vengeance LPX 8GB (1 x 8GB) DDR4-2400 Memory ($27.99 @ Newegg)
Storage: Samsung 850 EVO-Series 250GB 2.5" Solid State Drive ($88.00 @ Amazon)
Storage: Western Digital Red 6TB 3.5" 5400RPM Internal Hard Drive ($234.37 @ Amazon)
Storage: Western Digital Red 6TB 3.5" 5400RPM Internal Hard Drive ($234.37 @ Amazon)
Storage: Western Digital Red 4TB 3.5" 5900RPM Internal Hard Drive ($149.99 @ Newegg)
Storage: Western Digital Red 4TB 3.5" 5900RPM Internal Hard Drive ($149.99 @ Newegg)
Storage: Western Digital Red 4TB 3.5" 5900RPM Internal Hard Drive ($149.99 @ Newegg)
Case: Fractal Design Node 804 MicroATX Mid Tower Case ($84.99 @ NCIX US)
Power Supply: EVGA SuperNOVA G2 550W 80+ Gold Certified Fully-Modular ATX Power Supply ($71.49 @ Newegg)
Keyboard: Logitech K400 Plus Wireless Mini Keyboard w/Touchpad ($29.99 @ Amazon)
Total: $1535.14
Prices include shipping, taxes, and discounts when available
Generated by PCPartPicker 2016-04-14 15:41 EDT-0400


Basically I have a couple of reasons for expanding my budget out so far...

1. RAID 1 (mirrored volume) with the 2x4TB Red drives, for mission-critical files.
2. RAID 1 (mirrored volume) with the 2x6TB Red drives, for media (music & movies) - Plex
3. Red 4TB drive for backups of my other computers.
4. SSD for Windows 7 Ultimate to run and transcode from.
5. Modular PSU; I've had non-modular in the past and they make a mess.
6. Thoughts? Recommendations? Options?
 
Last edited:

newtekie1

Semi-Retired Folder
Joined
Nov 22, 2005
Messages
28,472 (4.23/day)
Location
Indiana, USA
Processor Intel Core i7 10850K@5.2GHz
Motherboard AsRock Z470 Taichi
Cooling Corsair H115i Pro w/ Noctua NF-A14 Fans
Memory 32GB DDR4-3600
Video Card(s) RTX 2070 Super
Storage 500GB SX8200 Pro + 8TB with 1TB SSD Cache
Display(s) Acer Nitro VG280K 4K 28"
Case Fractal Design Define S
Audio Device(s) Onboard is good enough for me
Power Supply eVGA SuperNOVA 1000w G3
Software Windows 10 Pro x64
Do you know if FileZilla is supported on Android, or what a good app is that works with FileZilla?

I use AndFTP. It will let you access your FTP server. If you just want to play media, use Plex.
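If you'd rather script against the FTP server than use an app, any plain FTP client works; here's a minimal sketch with Python's built-in ftplib (the address, login, and file name are just placeholders):

Code:
# Minimal FTP client sketch using Python's standard ftplib.
# Host, credentials, and file name are placeholders for whatever FTP server you run on the box.
from ftplib import FTP

with FTP("192.168.1.50") as ftp:          # LAN address of the server (example)
    ftp.login(user="media", passwd="secret")
    ftp.cwd("/movies")
    print(ftp.nlst())                      # list files in the share
    with open("example.mkv", "wb") as fh:  # pull one file down
        ftp.retrbinary("RETR example.mkv", fh.write)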

There also isn't a real point to multiple RAID arrays like that. Just get 4x6TB in RAID 5. You'll have about the same amount of storage space and it will all be redundant.
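Rough math behind that suggestion, using the drives from your list:

Code:
# Usable capacity: the original multi-array plan vs a single 4x6TB RAID 5.
def raid1_usable(size_tb):            # two-drive mirror keeps one drive's worth
    return size_tb

def raid5_usable(n_drives, size_tb):  # RAID 5 loses one drive's worth to parity
    return (n_drives - 1) * size_tb

original_plan = raid1_usable(6) + raid1_usable(4) + 4   # 2x6TB mirror + 2x4TB mirror + lone 4TB
single_raid5  = raid5_usable(4, 6)                      # 4x6TB RAID 5

print(f"Original plan: {original_plan} TB usable, only part of it redundant")
print(f"4x6TB RAID 5:  {single_raid5} TB usable, all of it redundant")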
 
Last edited:
Joined
Oct 28, 2007
Messages
690 (0.11/day)
System Name Pegasus
Processor AMD Ryzen Threadripper 1950X @ 4GHz
Motherboard ASUS ROG Zenith Extreme
Cooling Custom 480mm EK Loop
Memory 4 x 8GB G.Skill TridentZ RGB 3000MHz @ 3000MHz
Video Card(s) ASUS ROG Nvidia GTX 1080 Ti
Storage Samsung 960 EVO M.2 500GB / Samsung 850 EVO 500GB / Samsung 840 EVO 250GB
Display(s) 2 x 25" Dell Ultrasharp U2515H / 1 x 15" ASUS MB169+
Case Corsair 900D
Audio Device(s) 2 x Tannoy Reveal 502 / Beyerdynamic DT 990 PRO 250 Ohm / Behringer Xenyx X1204 USB / MXL 770
Power Supply EVGA SuperNova G3 1000W
Mouse Logitech G903 Lightspeed
Keyboard HyperX Alloy FPS / Corsair K95 RGB / Anne Pro 2 / 2 x Elgato Stream Deck
Software Windows 10 Professional 64-Bit
With a large number of disks I'd go RAID 6 (or RAIDZ2 under ZFS on Linux/FreeBSD/FreeNAS), and I'd get 2 sticks of RAM, even if it's 2x4GB rather than 2x8GB, to enable dual channel.
 
Joined
Mar 14, 2016
Messages
130 (0.04/day)
Location
Iowa
System Name Gateway
Processor AMD A4-5000
Motherboard Laptop
Cooling Laptop
Memory 8 GB DDR3
Video Card(s) Radeon HD Graphics
Storage 1 TB
Software Windows 10
I use AndFTP. It will let you access your FTP server. If you just want to play media, use Plex.

There also isn't a real point to multiple RAID arrays like that. Just get 4x6TB in RAID 5. You'll have about the same amount of storage space and it will all be redundant.

With a large number of disks I'd go RAID 6 (or RAIDZ2 under ZFS on Linux/FreeBSD/FreeNAS), and I'd get 2 sticks of RAM, even if it's 2x4GB rather than 2x8GB, to enable dual channel.

Thanks guys, this is why I love this community; one of the best forums on the net if you ask me. That makes a lot of sense. From a software standpoint, doesn't Windows 7 Ultimate only do RAID 0, 1, 5, and 10? Or in Windows they call it mirrored, striped, and spanned volumes. How would you set up a complicated RAID array like this? Also, how does this RAID array work?
 
Joined
Oct 28, 2007
Messages
690 (0.11/day)
System Name Pegasus
Processor AMD Ryzen Threadripper 1950X @ 4GHz
Motherboard ASUS ROG Zenith Extreme
Cooling Custom 480mm EK Loop
Memory 4 x 8GB G.Skill TridentZ RGB 3000MHz @ 3000MHz
Video Card(s) ASUS ROG Nvidia GTX 1080 Ti
Storage Samsung 960 EVO M.2 500GB / Samsung 850 EVO 500GB / Samsung 840 EVO 250GB
Display(s) 2 x 25" Dell Ultrasharp U2515H / 1 x 15" ASUS MB169+
Case Corsair 900D
Audio Device(s) 2 x Tannoy Reveal 502 / Beyerdynamic DT 990 PRO 250 Ohm / Behringer Xenyx X1204 USB / MXL 770
Power Supply EVGA SuperNova G3 1000W
Mouse Logitech G903 Lightspeed
Keyboard HyperX Alloy FPS / Corsair K95 RGB / Anne Pro 2 / 2 x Elgato Stream Deck
Software Windows 10 Professional 64-Bit
You could do it within Windows or via the controller on your motherboard; it all depends. Doing it within the OS is what we call software RAID; when you use a dedicated controller, whether on the motherboard or as an expansion card, that's called hardware RAID.

Pros for hardware RAID:

  • Can be inexpensive (HighPoint) for high throughput
  • Dedicated controller
Cons for hardware RAID:

  • If the controller fails you need to replace it with a similar/identical controller
  • Some cards (LSI, Adaptec) can be very expensive
Pros for software RAID:

  • Better data integrity checks, depending on the feature set of the chosen OS/file system
  • Does not require a hardware RAID controller
  • Is fast given the right hardware
Cons for software RAID:

  • Some software-RAID file systems are nearly impossible or very expensive to recover
  • Requires higher-spec components for better throughput
  • If you use a hardware RAID card as an HBA instead of an actual HBA, you may run into the arrays "belonging" to the device, and if the card fails you'd need to replace it with an identical card. I've tested this and with my card it wasn't the case; the array worked fine regardless of how it was connected, but it seems to be something that can happen.

That's a very high-level view to give you an idea of what to expect. I have used both, and I moved from hardware RAID to software RAID using ZFS because of the data integrity checks it provides; I believe Microsoft has been working on something similar (Storage Spaces/ReFS). If you're not using Seagate Archive (SMR-type) HDDs you could also try Rockstor, which uses BTRFS; I've tried it, it's well supported with dedicated developers and quite easy to use. RAID arrays, in whichever configuration, are easy to use, and you can usually manage them through a GUI to keep it simple. If you need basic info on RAID levels, you should be able to find it for whichever file system you choose.
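To give a feel for what those integrity checks do: file systems like ZFS keep a checksum for every block and verify it whenever the block is read, so silent corruption is detected (and repaired from redundancy) instead of passed along. A toy sketch of just the detection part:

Code:
# Toy illustration of block-level checksumming as done conceptually by ZFS/BTRFS.
# Real file systems store checksums in metadata and can repair from redundancy; this only detects.
import hashlib

BLOCK = 128 * 1024  # assumed 128 KiB record size

def checksum_blocks(path):
    """Return a list of (offset, sha256) pairs for a file."""
    sums = []
    with open(path, "rb") as fh:
        offset = 0
        while chunk := fh.read(BLOCK):
            sums.append((offset, hashlib.sha256(chunk).hexdigest()))
            offset += len(chunk)
    return sums

def verify(path, expected):
    """Re-read the file and report any block whose checksum no longer matches (silent corruption)."""
    for (offset, digest), (_, current) in zip(expected, checksum_blocks(path)):
        if digest != current:
            print(f"block at offset {offset} is corrupt")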
 
Joined
Nov 3, 2007
Messages
1,700 (0.28/day)
For your requirements, you are the PERFECT fit to run unRAID (the same as I do, on 2 machines actually). At its core it is a media server with parity protection (soon to have dual parity), but it also supports VMs and Docker containers. They have a pre-built Plex Docker image; all you do is download it, do some basic config (there is a video you can follow), and off it goes.


You would need to buy the license. But I have another setup that is perfect for unRAID, it is:

Q6600
SuperMicro MB (http://www.newegg.com/Product/Product.aspx?Item=N82E16813182151)
4GB (2x2) DDR2
----------------------------
I would sell these for $125 shipped. Then the HDDs are on you to get; the board supports 6.

This is more than enough to be a file server, host your shares, run backups, and host your Plex app for streaming.
Lastly, here is a pic of my home server (soon to have dual parity disks):






It runs headless in the basement and I use IPMI to manage the hardware.
 
Last edited:
Joined
Sep 16, 2013
Messages
1,357 (0.35/day)
Location
Canada
System Name HTPC
Processor Intel Core 2 Duo E8400 - 3.00/6M/1333
Motherboard AsRock G31M-GS R2.0
Cooling CoolerMaster Vortex 752 - Black
Memory 4 Go (2x2) Kingston ValueRam DDR2-800
Video Card(s) Asus EN8600GT/HTDP/512M
Storage WD 3200AAKS
Display(s) 32" Irico E320GV-FHD
Case Aerocool Qx-2000
Audio Device(s) Onboard
Power Supply Enermax NoiseTaker 2 - 465w
Mouse Logitech Wave MK550 combo (M510)
Keyboard Logitech Wave MK550 combo (K350)
Software Win_7_Pro-French
Benchmark Scores Windows index : 6.5 / 6.5 / 5.6 / 6.3 / 5.9
FTP ... Fuck The Pussies servers?!
 

newtekie1

Semi-Retired Folder
Joined
Nov 22, 2005
Messages
28,472 (4.23/day)
Location
Indiana, USA
Processor Intel Core i7 10850K@5.2GHz
Motherboard AsRock Z470 Taichi
Cooling Corsair H115i Pro w/ Noctua NF-A14 Fans
Memory 32GB DDR4-3600
Video Card(s) RTX 2070 Super
Storage 500GB SX8200 Pro + 8TB with 1TB SSD Cache
Display(s) Acer Nitro VG280K 4K 28"
Case Fractal Design Define S
Audio Device(s) Onboard is good enough for me
Power Supply eVGA SuperNOVA 1000w G3
Software Windows 10 Pro x64
Thanks guys, this is why I love this community; one of the best forums on the net if you ask me. That makes a lot of sense. From a software standpoint, doesn't Windows 7 Ultimate only do RAID 0, 1, 5, and 10? Or in Windows they call it mirrored, striped, and spanned volumes. How would you set up a complicated RAID array like this? Also, how does this RAID array work?

You'll want to use the RAID built into the motherboard, or a dedicated card. Not the Windows RAID.

If the controller fails you need to replace it with a similar/identical controller

All of your information was pretty good except this. With the Intel controller, RAID arrays have been compatible with almost all other Intel controllers going back at least a generation. So if you create an array on a B150 motherboard and that motherboard fails, you can move the drives to pretty much any other halfway modern Intel motherboard and the array will be recognized and work. The same is true with HighPoint. In fact, they pride themselves on the fact that an array created with any of their add-on cards will work with any other of their add-on cards; HighPoint talks about this in their FAQ.

So, yeah, you have to replace the controller with one from the same brand, but it doesn't have to be identical.
 
Joined
Jul 21, 2015
Messages
501 (0.16/day)
Bermel72 - a few things you will want to consider..

It's a VERY bad idea to use RAID 5 for an array that large using large (>2TB) drives. You only have a single drive's worth of redundancy, and if a second drive fails while a rebuild is in progress - which is a very real possibility, because rebuilding stresses the remaining drives in the array, which are the same age and usually from the same batch as the one that failed - you lose the entire array.
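To put rough numbers on that risk: consumer drives are typically specced at one unrecoverable read error (URE) per 10^14 bits, and a rebuild has to read every surviving drive end to end. A quick back-of-the-envelope:

Code:
# Odds of hitting at least one unrecoverable read error (URE) during a RAID 5 rebuild.
# Assumes the common consumer-drive spec of 1 URE per 1e14 bits read.
URE_PER_BIT = 1e-14

def p_read_error(tb_read):
    bits = tb_read * 1e12 * 8
    return 1 - (1 - URE_PER_BIT) ** bits

# A rebuild reads every surviving drive end to end.
print(f"4x6TB RAID 5 rebuild (read 18 TB): {p_read_error(3 * 6):.0%} chance of a URE")
print(f"4x2TB RAID 5 rebuild (read 6 TB):  {p_read_error(3 * 2):.0%} chance of a URE")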

Software RAID - especially a parity flavor - is very CPU intensive. This is becoming less of a problem as CPUs get more powerful, but there are still potential pitfalls when implementing it on a transcoding media server. Even with really good hardware, the machine will get swamped if there are multiple high-intensity operations happening at the same time. For example, if someone is streaming a movie that Plex has to transcode (and with only 3Mbps upload it will have to transcode everything if you're mobile) and you start copying a rip to the array, the stream(s) might stutter.
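To put numbers on why remote streams have to be transcoded (the source bitrates here are rough assumptions, not measurements):

Code:
# Rough check of why remote streams must be transcoded on a 3 Mbps upload.
upload_mbps = 3
typical_sources_mbps = {"1080p Blu-ray remux": 25, "1080p web rip": 8, "SD rip": 2}

for name, rate in typical_sources_mbps.items():
    verdict = "direct play fits" if rate <= upload_mbps else "Plex must transcode it down"
    print(f"{name}: ~{rate} Mbps -> {verdict}")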

Your servers should ALWAYS have a UPS with unattended shutdown configured, but with software RAID you also run the risk of array problems if the server loses power during a write operation. Software RAID does not have a safe battery backed write cache, so if power is lost or the system crashes during a write, it will automatically start a verify operation - which not only chews up CPU and bottlenecks the array bandwidth but in a parity array that large it will likely take a day or more to complete.

Call me old school, but I still recommend a good hardware RAID card. You can get demoted enterprise hardware on eBay really cheap; for example, an 8-port LSI 9560 with BBU can be had for under $100. Old, but they work great, and if it shits the bed you can easily find another one (LSI 9000-series cards can also be swapped for other models within the 9000 series without issue). RAID cards also support staggered spin-up (powering the drives up one at a time so your power supply doesn't have a heart attack on power-up); software RAID can't do that. Just always make sure you get one with the BBU (battery backup unit). And HighPoint sucks, stay away from them.

Now going back to the actual array....

There's a couple of very good (and way too often dismissed/ignored) reasons to use more drives vs. bigger drives. One, 2TB is about the largest drive size that keeps the array in an "acceptable" danger zone of losing a second drive after any single drive fails. The larger the drives, the longer a rebuild takes and the bigger the beating the remaining drives take, not only from the rebuild operation itself but because all data from the missing drive must be calculated on the fly from parity if the array is still being accessed during the rebuild (and if you don't have a hot spare or an extra drive on hand, it stays degraded until you can get one).

Two is the parity cost. 2TB drives are cheap; 4 and 6TB drives are not. In a RAID 5 array you lose one full drive's worth of space to parity. In RAID 6 you lose two drives' worth, which means the larger the drives, the larger the parity cost (both in price AND in unusable space). So if you use 4x6TB drives in RAID 6 (sorry @N-Gen, only a fool would use 6TB drives in RAID 5), you're spending around $500 just on parity and will only have 12TB usable, whereas 8x2TB drives in RAID 6 will give you the same protection against two drive failures, roughly 3x faster recovery in case of a failure, and leave your data less vulnerable to total loss - PLUS it'll cost you about $330 less. With RAID it is cheaper to use MORE drives, not larger ones.
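Rough numbers behind that (I'm assuming ~$235 per 6TB from the part list above and ~$89 for a 2TB Red, so the exact dollar figures come out a little different, but the direction is the same):

Code:
# Parity cost comparison for the two RAID 6 layouts above.
# Drive prices are rough 2016 assumptions used only for illustration.
def raid6(n_drives, size_tb, price_each):
    usable = (n_drives - 2) * size_tb
    return usable, n_drives * price_each, 2 * price_each  # usable TB, total $, $ on parity

for label, cfg in {"4x6TB": (4, 6, 235), "8x2TB": (8, 2, 89)}.items():
    usable, total, parity = raid6(*cfg)
    print(f"{label} RAID 6: {usable} TB usable, ${total} total, ${parity} of that buys parity")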

Food for thought..
 
Last edited:
Joined
Mar 14, 2016
Messages
130 (0.04/day)
Location
Iowa
System Name Gateway
Processor AMD A4-5000
Motherboard Laptop
Cooling Laptop
Memory 8 GB DDR3
Video Card(s) Radeon HD Graphics
Storage 1 TB
Software Windows 10
Bermel72 - a few things you will want to consider..

It's a VERY bad idea to use RAID 5 for an array that large using large (>2TB) drives. You only have a single drive's worth of redundancy, and if a second drive fails while a rebuild is in progress - which is a very real possibility, because rebuilding stresses the remaining drives in the array, which are the same age and usually from the same batch as the one that failed - you lose the entire array.

Software RAID - especially a parity flavor - is very CPU intensive. This is becoming less of a problem as CPUs get more powerful, but there are still potential pitfalls when implementing it on a transcoding media server. Even with really good hardware, the machine will get swamped if there are multiple high intensity operations happening at the same time.. For example if someone is streaming a movie that Plex has to transcode (and with only 3Mbps upload it will have to transcode everything if you're mobile) and you start copying a rip to the array, the stream(s) might stutter.

Your servers should ALWAYS have a UPS with unattended shutdown configured, but with software RAID you also run the risk of array problems if the server loses power during a write operation. Software RAID does not have a safe battery backed write cache, so if power is lost or the system crashes during a write, it will automatically start a verify operation - which not only chews up CPU and bottlenecks the array bandwidth but in a parity array that large it will likely take a day or more to complete.

Call me old school, but I still recommend a good hardware RAID card. You can get demoted enterprise hardware on eBay really cheap; for example, an 8-port LSI 9560 with BBU can be had for under $100. Old, but they work great, and if it shits the bed you can easily find another one (LSI 9000-series cards can also be swapped for other models within the 9000 series without issue). RAID cards also support staggered spin-up (powering the drives up one at a time so your power supply doesn't have a heart attack on power-up); software RAID can't do that. Just always make sure you get one with the BBU (battery backup unit). And HighPoint sucks, stay away from them.

Now going back to the actual array....

There's a couple of very good (and way too often dismissed/ignored) reasons to use more drives vs. bigger drives. One, 2TB is about the largest drive size that keeps the array in an "acceptable" danger zone of losing a second drive after any single drive fails. The larger the drives, the longer a rebuild takes and the bigger the beating the remaining drives take, not only from the rebuild operation itself but because all data from the missing drive must be calculated on the fly from parity if the array is still being accessed during the rebuild (and if you don't have a hot spare or an extra drive on hand, it stays degraded until you can get one).

Two is the parity cost. 2TB drives are cheap; 4 and 6TB drives are not. In a RAID 5 array you lose one full drive's worth of space to parity. In RAID 6 you lose two drives' worth, which means the larger the drives, the larger the parity cost (both in price AND in unusable space). So if you use 4x6TB drives in RAID 6 (sorry @N-Gen, only a fool would use 6TB drives in RAID 5), you're spending around $500 just on parity and will only have 12TB usable, whereas 8x2TB drives in RAID 6 will give you the same protection against two drive failures, roughly 3x faster recovery in case of a failure, and leave your data less vulnerable to total loss - PLUS it'll cost you about $330 less. With RAID it is cheaper to use MORE drives, not larger ones.

Food for thought..


Haha, you sound like you know what you're talking about, so all in all hardware RAID is better? The Fractal Design 804 has 10 usable 3.5" bays, which equals about 20TB if I use a couple of SATA expansion cards and your approach of 2TB drives instead of 6TB drives; alright, fair enough, that's still more than enough space for me. As for RAID 6, would someone please explain to me what it is and what the difference between RAID 5 and 6 is? And I can't find a way to do RAID 6 with just software; does Windows only support hardware RAID 6? To get around the limitation of the motherboard only having 6 SATA III ports, you would just throw in a couple of expansion cards, correct, or would the hardware RAID take care of that for you?
 
Joined
Jul 21, 2015
Messages
501 (0.16/day)
Haha, you sound like you know what you're talking about, so all in all hardware RAID is better? The Fractal Design 804 has 10 usable 3.5" bays, which equals about 20TB if I use a couple of SATA expansion cards and your approach of 2TB drives instead of 6TB drives; alright, fair enough, that's still more than enough space for me. As for RAID 6, would someone please explain to me what it is and what the difference between RAID 5 and 6 is? And I can't find a way to do RAID 6 with just software; does Windows only support hardware RAID 6? To get around the limitation of the motherboard only having 6 SATA III ports, you would just throw in a couple of expansion cards, correct, or would the hardware RAID take care of that for you?

In my opinion, yes hardware RAID is the way to go (but again you'll always have those that unequivocally stand by software RAID). You'd plug all of your array drives into the RAID card, NOT the motherboard. You'd use the ports on the motherboard only for your boot/OS drive and your optical drive. Do not install Windows on the array.

If it has 10 drive bays, that would allow you 16TB worth using 2TB drives. As I said, with RAID6 you'll lose two drives' worth of capacity for parity (the redundancy data). RAID 5 and 6 work exactly the same except that RAID 5 keeps enough parity data on each drive to allow the array to survive the loss of one drive. RAID 6 keeps enough parity data on each drive to allow the array to survive the loss of TWO drives. This is the important difference when you get into larger arrays. The laws of probability (and that prick Murphy :D) dictate that once a single large drive in an array fails, there is a significant chance of a second drive failing soon after (again due to the increased stress of the rebuild, age of the remaining drives, etc). If the second drive fails before the first drive can be completely rebuilt, the array data can only survive if there is a second set of parity data available to finish rebuilding both drives. If this is a RAID 5 array, there is no additional parity available if a second drive fails, so the entire array is destroyed. This is where RAID 6 has the advantage.
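If it helps to see the parity idea concretely, here's a toy single-parity (RAID 5 style) example; RAID 6 adds a second, independently calculated parity block, which is what lets the same trick still work with two drives gone:

Code:
# Toy RAID 5 parity demo: the parity block is the XOR of the data blocks,
# so any ONE missing block can be recomputed from the survivors.
from functools import reduce

def xor_blocks(blocks):
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

drives = [b"AAAA", b"BBBB", b"CCCC"]     # pretend data blocks on three drives
parity = xor_blocks(drives)              # what RAID 5 stores as parity

lost = drives.pop(1)                     # one drive dies
rebuilt = xor_blocks(drives + [parity])  # XOR of survivors + parity recovers it
assert rebuilt == lost
print("recovered:", rebuilt)
# Lose a SECOND drive and this math no longer works; RAID 6's extra,
# differently-calculated parity is what covers that case.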

If you really think you're going to fill up 16TB within a short timeframe and you're hell-bent on that 10-bay case, then you can use 7x 4TB drives (remember you lose two drives' worth of space) in RAID 6 to give you 20TB. I still would not use the 6TB drives. 20TB is actually about the break-even point where the 4TB drives become cheaper to use (well, it's about $5 more expensive at 20TB, but any further expansion will be cheaper :)), and the 2 drives' worth of parity should still keep you protected. You would also want to use the 12 port card, so you still have room to expand.
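You can sanity-check that break-even yourself; a quick sketch (drive prices are assumptions: roughly $89 per 2TB Red and the $150 4TB from your part list):

Code:
# Cheapest way to hit a usable-capacity target under RAID 6 with 2TB vs 4TB drives.
# Prices are rough 2016 assumptions; plug in current prices before deciding.
import math

PRICES = {2: 89, 4: 150}  # $ per drive, keyed by size in TB

def raid6_cost(target_tb, size_tb):
    drives = math.ceil(target_tb / size_tb) + 2   # data drives + 2 parity drives
    return drives, drives * PRICES[size_tb]

for target in (12, 16, 20, 24):
    options = ((size,) + raid6_cost(target, size) for size in PRICES)
    print(f"{target} TB usable: " +
          ", ".join(f"{n}x{size}TB = ${cost}" for size, n, cost in options))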

Windows will have nothing to do with the RAID. It will all be handled by the card. To Windows it will simply look like a single 16 or 20TB drive.

And if you use that case, make sure you install four fans in the top to keep the drives cool.

And also as far as your desire for headless operation, you can use Remote Desktop + FTP as has already been suggested (I would also install a VPN server, which would allow you to access your entire network, including drive shares), or you can install TeamViewer (free for personal use). It can do file transfers (albeit a little slow), and they even have an Android app.
 
Last edited:
Joined
Mar 14, 2016
Messages
130 (0.04/day)
Location
Iowa
System Name Gateway
Processor AMD A4-5000
Motherboard Laptop
Cooling Laptop
Memory 8 GB DDR3
Video Card(s) Radeon HD Graphics
Storage 1 TB
Software Windows 10
In my opinion, yes hardware RAID is the way to go (but again you'll always have those that unequivocally stand by software RAID). You'd plug all of your array drives into the RAID card, NOT the motherboard. You'd use the ports on the motherboard only for your boot/OS drive and your optical drive. Do not install Windows on the array.

If it has 10 drive bays, that would allow you 16TB worth using 2TB drives. As I said, with RAID6 you'll lose two drives' worth of capacity for parity (the redundancy data). RAID 5 and 6 work exactly the same except that RAID 5 keeps enough parity data on each drive to allow the array to survive the loss of one drive. RAID 6 keeps enough parity data on each drive to allow the array to survive the loss of TWO drives. This is the important difference when you get into larger arrays. The laws of probability dictate that once a single large drive in an array fails, there is a significant chance of a second drive failing soon after (again due to the increased stress of the rebuild, age of the remaining drives, etc). If the second drive fails before the first drive can be completely rebuilt, the array data can only survive if there is a second set of parity data available to rebuild both drives. If this is a RAID 5 array, there is no additional parity available if a second drive fails, so the entire array is destroyed. This is where RAID 6 has the advantage.

If you really think you're going to fill up 16TB within a short timeframe and you're hell-bent on that 10-bay case, then you can use 7x 4TB drives (remember you lose two drives' worth of space) in RAID 6 to give you 20TB. I still would not use the 6TB drives. 20TB is actually about the break-even point where the 4TB drives become cheaper to use (well, it's about $5 more expensive at 20TB, but any further expansion will be cheaper :)), and the 2 drives' worth of parity should still keep you protected. You would also want to use the 12 port card, so you still have room to expand.

Windows will have nothing to do with the RAID. It will all be handled by the card. To Windows it will simply look like a single 16 or 20TB drive.

And if you use that case, make sure you install four fans in the top to keep the drives cool.

And also as far as your desire for headless operation, you can use Remote Desktop + FTP as has already been suggested (I would also install a VPN server, which would allow you to access your entire network, including drive shares), or you can install TeamViewer (free for personal use). It can do file transfers (albeit a little slow), and they even have an Android app.

Lots of good information here, I love it. I think the hardware RAID would make it easier on me, and no, 16TB is plenty for me; I just had the mindset that once I build it I wouldn't be upgrading or rebuilding it for a while. I like the idea of the hardware array and Windows just seeing it as one massive drive. I'm definitely not set on that case and would consider a different one if some suggestions were to be made :)
 
Joined
Feb 13, 2014
Messages
487 (0.13/day)
Location
Cyprus
Processor 13700KF - 5.7GHZ
Motherboard Z690 UNIFY-X
Cooling ARCTIC Liquid Freezer III 360 (NF-A12x25)
Memory 2x16 G.SKILL M-DIE (7200-34-44-44-28)
Video Card(s) XFX MERC 7900XT
Storage 1TB KINGSTON KC3000
Display(s) FI32Q
Case LIAN LI O11 DYNAMIC EVO
Audio Device(s) HD599
Power Supply RMX1000
Mouse PULSAR XLITE V2 MINI (RETRO)
Keyboard KEYCHRON V3 (DUROCK T1 + MT3 GODSPEED R2)
Software Windows 11
Benchmark Scores Superposition 4k optimized - 20652
In my opinion, yes hardware RAID is the way to go (but again you'll always have those that unequivocally stand by software RAID). You'd plug all of your array drives into the RAID card, NOT the motherboard. You'd use the ports on the motherboard only for your boot/OS drive and your optical drive. Do not install Windows on the array.

If it has 10 drive bays, that would allow you 16TB worth using 2TB drives. As I said, with RAID6 you'll lose two drives' worth of capacity for parity (the redundancy data). RAID 5 and 6 work exactly the same except that RAID 5 keeps enough parity data on each drive to allow the array to survive the loss of one drive. RAID 6 keeps enough parity data on each drive to allow the array to survive the loss of TWO drives. This is the important difference when you get into larger arrays. The laws of probability (and that prick Murphy :D) dictate that once a single large drive in an array fails, there is a significant chance of a second drive failing soon after (again due to the increased stress of the rebuild, age of the remaining drives, etc). If the second drive fails before the first drive can be completely rebuilt, the array data can only survive if there is a second set of parity data available to finish rebuilding both drives. If this is a RAID 5 array, there is no additional parity available if a second drive fails, so the entire array is destroyed. This is where RAID 6 has the advantage.

If you really think you're going to fill up 16TB within a short timeframe and you're hell-bent on that 10-bay case, then you can use 7x 4TB drives (remember you lose two drives' worth of space) in RAID 6 to give you 20TB. I still would not use the 6TB drives. 20TB is actually about the break-even point where the 4TB drives become cheaper to use (well, it's about $5 more expensive at 20TB, but any further expansion will be cheaper :)), and the 2 drives' worth of parity should still keep you protected. You would also want to use the 12 port card, so you still have room to expand.

Windows will have nothing to do with the RAID. It will all be handled by the card. To Windows it will simply look like a single 16 or 20TB drive.

And if you use that case, make sure you install four fans in the top to keep the drives cool.

And also as far as your desire for headless operation, you can use Remote Desktop + FTP as has already been suggested (I would also install a VPN server, which would allow you to access your entire network, including drive shares), or you can install TeamViewer (free for personal use). It can do file transfers (albeit a little slow), and they even have an Android app.
I think that case is bad for this scenario; in the drive chamber you can only put fans in front and back, and the drives will be one big mass, maybe getting too hot.
I think a case with direct airflow over the drives will serve better.
 
Joined
Jul 21, 2015
Messages
501 (0.16/day)
I'll let someone else make case suggestions but honestly that one doesn't look bad. However if you need more than 10 bays I think that's probably going to shift you into full tower territory.

And I don't know what your video habits are like but I didn't fill mine up anywhere near as fast as I expected. I built it a couple years ago with 4x2TB drives (6TB available). I have about 400 movies (probably 350 in HD, the rest SD), a bunch of TV series, probably 5000-6000 songs, and other miscellaneous stuff on my server.. Right now I'm JUST to the point where I need to add some more drives to the array. Mine is RAID5, but when I have some spare cash I will be building a new array on a RAID6 card.



I think that case is bad for this scenario; in the drive chamber you can only put fans in front and back, and the drives will be one big mass, maybe getting too hot.
I think a case with direct airflow over the drives will serve better.

There are 4x120mm fan mounts directly above the drive hangers on the top of the box. Plenty of airflow. ;)
 
Last edited:
Joined
Mar 14, 2016
Messages
130 (0.04/day)
Location
Iowa
System Name Gateway
Processor AMD A4-5000
Motherboard Laptop
Cooling Laptop
Memory 8 GB DDR3
Video Card(s) Radeon HD Graphics
Storage 1 TB
Software Windows 10
I'll let someone else make case suggestions but honestly that one doesn't look bad. However if you need more than 10 bays I think that's probably going to shift you into full tower territory.

And I don't know what your video habits are like but I didn't fill mine up anywhere near as fast as I expected. I built it a couple years ago with 4x2TB drives (6TB available). I have about 400 movies (probably 350 in HD, the rest SD), a bunch of TV series, probably 5000-6000 songs, and other miscellaneous stuff on my server.. Right now I'm JUST to the point where I need to add some more drives to the array. Mine is RAID5, but when I have some spare cash I will be building a new array on a RAID6 card.





There are 4x120mm fan mounts directly above the drive hangers on the top of the box.. Plenty of airflow. ;)

So I went searching for a RAID card and I'm totally lost. Is there some sort of guide that can break this all down for me? I feel frustrated at this point because I have no idea what I'm even looking at on Newegg or Google.
 
Joined
Jul 21, 2015
Messages
501 (0.16/day)
New/retail they are very expensive (about $500). I'm talking about used ones.

This one is the best deal I see at the moment. The only thing is you'll need to buy the 4-to-1 SATA breakout cables for it. The cables that come with it are for connecting it to a 12-drive backplane (which isn't what you're using). The ones you need fan out to 4 SATA plugs from each of the card's ports.

http://www.ebay.com/itm/AMCC-9650SE...032890?hash=item28157a5e3a:g:Z44AAOSwgApXBSmN

These are the cables ($15 for 2; you only need 3, so you'll end up with a spare):

http://www.ebay.com/itm/2x-Mini-10G...095311?hash=item20e642ddcf:g:GloAAOxyx0JTgAoj

 
Last edited:
Joined
Feb 13, 2014
Messages
487 (0.13/day)
Location
Cyprus
Processor 13700KF - 5.7GHZ
Motherboard Z690 UNIFY-X
Cooling ARCTIC Liquid Freezer III 360 (NF-A12x25)
Memory 2x16 G.SKILL M-DIE (7200-34-44-44-28)
Video Card(s) XFX MERC 7900XT
Storage 1TB KINGSTON KC3000
Display(s) FI32Q
Case LIAN LI O11 DYNAMIC EVO
Audio Device(s) HD599
Power Supply RMX1000
Mouse PULSAR XLITE V2 MINI (RETRO)
Keyboard KEYCHRON V3 (DUROCK T1 + MT3 GODSPEED R2)
Software Windows 11
Benchmark Scores Superposition 4k optimized - 20652
I'll let someone else make case suggestions but honestly that one doesn't look bad. However if you need more than 10 bays I think that's probably going to shift you into full tower territory.

And I don't know what your video habits are like but I didn't fill mine up anywhere near as fast as I expected. I built it a couple years ago with 4x2TB drives (6TB available). I have about 400 movies (probably 350 in HD, the rest SD), a bunch of TV series, probably 5000-6000 songs, and other miscellaneous stuff on my server.. Right now I'm JUST to the point where I need to add some more drives to the array. Mine is RAID5, but when I have some spare cash I will be building a new array on a RAID6 card.





There are 4x120mm fan mounts directly above the drive hangers on the top of the box.. Plenty of airflow. ;)
You cannot mount the drive hangers and the fans at the same time; that's one problem with the case.
 
Joined
Jul 21, 2015
Messages
501 (0.16/day)
You cannot mount the drive hangers and the fans at the same time; that's one problem with the case.

I don't see anything anywhere to substantiate that claim. There is a disclaimer about large radiators interfering with the drive cages, but not bare fans. I hardly think that's something nobody would point out, given the number of reviews the case has had.
 
Joined
Oct 28, 2007
Messages
690 (0.11/day)
System Name Pegasus
Processor AMD Ryzen Threadripper 1950X @ 4GHz
Motherboard ASUS ROG Zenith Extreme
Cooling Custom 480mm EK Loop
Memory 4 x 8GB G.Skill TridentZ RGB 3000MHz @ 3000MHz
Video Card(s) ASUS ROG Nvidia GTX 1080 Ti
Storage Samsung 960 EVO M.2 500GB / Samsung 850 EVO 500GB / Samsung 840 EVO 250GB
Display(s) 2 x 25" Dell Ultrasharp U2515H / 1 x 15" ASUS MB169+
Case Corsair 900D
Audio Device(s) 2 x Tannoy Reveal 502 / Beyerdynamic DT 990 PRO 250 Ohm / Behringer Xenyx X1204 USB / MXL 770
Power Supply EVGA SuperNova G3 1000W
Mouse Logitech G903 Lightspeed
Keyboard HyperX Alloy FPS / Corsair K95 RGB / Anne Pro 2 / 2 x Elgato Stream Deck
Software Windows 10 Professional 64-Bit
You'll want to use the RAID built into the motherboard, or a dedicated card. Not the Windows RAID.



All of your information was pretty good except this. With the Intel controller, the RAID arrays have been compatible with almost all other Intel controllers for at least a generation before. So if you create an array on a B170 motherboard and that motherboard fails, you can move the drives to pretty much any other halfway modern Intel motherboard, and the array will be recognized and work. The same is true with Highpoint. In fact, they pride themselves on the fact that an array can be created with any of the add-on cards, and it will work with any other add-on card. Highpoint talks about this in their FAQ.

So, yeah, you have to replace the controller with one from the same brand, but it doesn't have to be identical.

You're right; that's why I put similar/identical rather than only similar, but I should have elaborated further and mentioned that they have to use the same algorithm.

For others mentioning the drive capacity of the Node 804, I know quite a number of people who have filled it to capacity without any issues whatsoever. Airflow in general is good and it's a well-built case. We're close to summer, so I'll be able to report how temps hold up during the coming inferno, but all in all so far, half full, it's a great case.

Regarding hardware vs. software RAID and which is best, it all comes down to personal opinion and requirements. Software RAID can easily end up being more expensive than hardware RAID if your requirements are high. One of the reasons I moved from hardware to software was the possibility of RAID card failure. The way the card is used now, as an HBA, if it fails I can replace it with any other RAID card/HBA, or connect the drives to the motherboard, and still be able to access the array. I could also move the array from machine to machine without any issues and just mount it.

Hardware RAID is easier to set up in my opinion, and for simple applications HighPoint offers great cards on the cheap. I've owned my 2720SGL for 3 years and it's always delivered what was promised. I even had a power cut during a RAID rebuild right after swapping out a faulty disk, and after all my worrying, when the power came back on it just continued where it left off; all my data is still intact. I just want the data integrity features given by ZFS; I would have been happy with BTRFS but it doesn't work well with SMR disks. Having a large amount of data (16TB+), foreseeing massive data growth, and most of it being critical, those features are a big deal to me.

But like I said, they both work and they're both used in production systems within large businesses, so with a bit of homework you can determine which method is best for your needs.
 
Last edited:
Joined
Oct 28, 2007
Messages
690 (0.11/day)
System Name Pegasus
Processor AMD Ryzen Threadripper 1950X @ 4GHz
Motherboard ASUS ROG Zenith Extreme
Cooling Custom 480mm EK Loop
Memory 4 x 8GB G.Skill TridentZ RGB 3000MHz @ 3000MHz
Video Card(s) ASUS ROG Nvidia GTX 1080 Ti
Storage Samsung 960 EVO M.2 500GB / Samsung 850 EVO 500GB / Samsung 840 EVO 250GB
Display(s) 2 x 25" Dell Ultrasharp U2515H / 1 x 15" ASUS MB169+
Case Corsair 900D
Audio Device(s) 2 x Tannoy Reveal 502 / Beyerdynamic DT 990 PRO 250 Ohm / Behringer Xenyx X1204 USB / MXL 770
Power Supply EVGA SuperNova G3 1000W
Mouse Logitech G903 Lightspeed
Keyboard HyperX Alloy FPS / Corsair K95 RGB / Anne Pro 2 / 2 x Elgato Stream Deck
Software Windows 10 Professional 64-Bit
I don't see anything anywhere to substantiate that claim. There is a disclaimer about large radiators interfering with the drive cages but not bare fans. I hardly think that's something that nobody would point out given the number of reviews it has had.

Adding to this, it doesn't seem possible to mount the top fans with the HDDs installed, because the fans mount on the inside of the case and there are only a few mm before you hit the HDD cage. However, the top is still vented, and the 8 disks that are hanging have enough space between them for air to flow well front to back, provided good fans are installed.

The only tricky bit with the case is the cage right above the PSU. Cables will be a bit tight there, so the best option would be angled connectors; otherwise there's going to be a lot of bending, and trust me, it's annoying.
 
Joined
Mar 14, 2016
Messages
130 (0.04/day)
Location
Iowa
System Name Gateway
Processor AMD A4-5000
Motherboard Laptop
Cooling Laptop
Memory 8 GB DDR3
Video Card(s) Radeon HD Graphics
Storage 1 TB
Software Windows 10
Adding to this, it doesn't seem possible to mount the top fans with the HDDs installed, because the fans mount on the inside of the case and there are only a few mm before you hit the HDD cage. However, the top is still vented, and the 8 disks that are hanging have enough space between them for air to flow well front to back, provided good fans are installed.

The only tricky bit with the case is the cage right above the PSU. Cables will be a bit tight there, so the best option would be angled connectors; otherwise there's going to be a lot of bending, and trust me, it's annoying.

Just get some hyperboreas with 90 CFM.
 
Joined
Nov 3, 2007
Messages
1,700 (0.28/day)
unRAID is weird. It's basically software-based RAID 3/4. And nobody uses RAID 3/4 because the write performance sucks.
Hmm, I guess my 100+ MB/s writes directly to the array (bypassing the cache for large writes of 500+ GB) would beg to differ, but oh well. :toast: (Not trying to stir the pot, just providing some real-world use cases.)

P.S. Perhaps at 200+ MB/s (with NIC teaming), or if I were to set up 10Gb LAN in the house, I would see "slower" (used loosely here) writes to the array, but as it stands now my home is wired for gigabit and I max it out, so I am happy.

In the future, if I start upgrading pieces to 10Gb, then I will write directly to the cache. (Though I would need to invest in some high-$ NICs, switches, and a router that are all 10Gb capable; I don't see that happening anytime soon for my home use.)

Also, unRAID keeps whole files per disk (data is not striped), so if I lost a disk, only the data on that disk would be lost, not the entire array (not counting the fact that I can still rebuild from parity, soon to be dual parity in unRAID 6.2, so errors in the array can be corrected exactly rather than "assumed" corrected from parity). As well, I am using XFS as my file system type.

You're right; that's why I put similar/identical rather than only similar, but I should have elaborated further and mentioned that they have to use the same algorithm.

For others mentioning the drive capacity of the Node 804, I know quite a number of people who have filled it to capacity without any issues whatsoever. Airflow in general is good and it's a well-built case. We're close to summer, so I'll be able to report how temps hold up during the coming inferno, but all in all so far, half full, it's a great case.

Regarding hardware vs. software RAID and which is best, it all comes down to personal opinion and requirements. Software RAID can easily end up being more expensive than hardware RAID if your requirements are high. One of the reasons I moved from hardware to software was the possibility of RAID card failure. The way the card is used now, as an HBA, if it fails I can replace it with any other RAID card/HBA, or connect the drives to the motherboard, and still be able to access the array. I could also move the array from machine to machine without any issues and just mount it.

Hardware RAID is easier to set up in my opinion, and for simple applications HighPoint offers great cards on the cheap. I've owned my 2720SGL and it's always delivered what was promised. I even had a power cut during a RAID rebuild right after swapping out a faulty disk, and after all my worrying, when the power came back on it just continued where it left off; all my data is still intact. I just want the data integrity features given by ZFS; I would have been happy with BTRFS but it doesn't work well with SMR disks. Having a large amount of data (16TB+), foreseeing massive data growth, and most of it being critical, those features are a big deal to me.

But like I said, they both work and they're both used in production systems within large businesses, so with a bit of homework you can determine which method is best for your needs.

This is good advice.
 
Last edited:

newtekie1

Semi-Retired Folder
Joined
Nov 22, 2005
Messages
28,472 (4.23/day)
Location
Indiana, USA
Processor Intel Core i7 10850K@5.2GHz
Motherboard AsRock Z470 Taichi
Cooling Corsair H115i Pro w/ Noctua NF-A14 Fans
Memory 32GB DDR4-3600
Video Card(s) RTX 2070 Super
Storage 500GB SX8200 Pro + 8TB with 1TB SSD Cache
Display(s) Acer Nitro VG280K 4K 28"
Case Fractal Design Define S
Audio Device(s) Onboard is good enough for me
Power Supply eVGA SuperNOVA 1000w G3
Software Windows 10 Pro x64
It's a VERY bad idea to use RAID 5 for an array that large using large (>2TB) drives. You only have a single drive's worth of redundancy, and if a second drive fails while a rebuild is in progress - which is a very real possibility, because rebuilding stresses the remaining drives in the array, which are the same age and usually from the same batch as the one that failed - you lose the entire array.

I believe this is a holdover from a long time ago. Rebuild times are not as long as they used to be. Heck, when 2TB drives first came out, I built a RAID 5 array with them, and the rebuild time was over a day. Now, I just built an array with 6TB drives, and the rebuild time was just under 24 hours. So it took about the same amount of time to rebuild an array with 6TB drives today as it did to rebuild with 2TB drives several years ago. The controllers are faster, the computer CPUs are faster (in the case where the controller uses the CPU for calculations), and the drives have gotten faster (particularly write speeds). This all changes with SMR, because write speed suffers greatly on those drives, so those are the only type of drive I would recommend avoiding in a RAID 5 array.
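A rough sanity check on why: a rebuild has to write the replacement drive end to end, so the floor on rebuild time is capacity divided by sustained write speed (the speeds below are ballpark assumptions for 5400 rpm drives of each era):

Code:
# Crude lower bound on rebuild time: the replacement drive must be written end to end.
# Sustained write speeds are rough assumptions, not measurements.
def rebuild_hours(size_tb, write_mb_s):
    return size_tb * 1e6 / write_mb_s / 3600

print(f"2TB drive @ ~80 MB/s:  {rebuild_hours(2, 80):.0f} h minimum")
print(f"6TB drive @ ~160 MB/s: {rebuild_hours(6, 160):.0f} h minimum")
# Real rebuilds run longer than this if the array is in use or the controller throttles.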

Plus, having more disks means way more points of failure. You are a lot more likely to have a drive fail if you have 20 drives than if you have 4.

Finally, I'm pretty sure it has been mentioned, but RAID is not a replacement for backups. Your RAID array should be backed up, even if it's backed up to something not redundant. The money you're talking about spending on a dedicated RAID card and extra drives would be better spent on a couple of extra 6TB drives to perform backups to.

This is my current server. F is my main RAID5 array, G is another non-redundant array that F gets backed up nightly to. Adding a backup is more important than RAID6.
http://tpuminecraft.servebeer.com/pictures/raid.png
 
Last edited:
Joined
Oct 28, 2007
Messages
690 (0.11/day)
System Name Pegasus
Processor AMD Ryzen Threadripper 1950X @ 4GHz
Motherboard ASUS ROG Zenith Extreme
Cooling Custom 480mm EK Loop
Memory 4 x 8GB G.Skill TridentZ RGB 3000MHz @ 3000MHz
Video Card(s) ASUS ROG Nvidia GTX 1080 Ti
Storage Samsung 960 EVO M.2 500GB / Samsung 850 EVO 500GB / Samsung 840 EVO 250GB
Display(s) 2 x 25" Dell Ultrasharp U2515H / 1 x 15" ASUS MB169+
Case Corsair 900D
Audio Device(s) 2 x Tannoy Reveal 502 / Beyerdynamic DT 990 PRO 250 Ohm / Behringer Xenyx X1204 USB / MXL 770
Power Supply EVGA SuperNova G3 1000W
Mouse Logitech G903 Lightspeed
Keyboard HyperX Alloy FPS / Corsair K95 RGB / Anne Pro 2 / 2 x Elgato Stream Deck
Software Windows 10 Professional 64-Bit
I believe this is a holdover from a long time ago. Rebuild times are not as long as they used to be. Heck, when 2TB drives first came out, I built a RAID 5 array with them, and the rebuild time was over a day. Now, I just built an array with 6TB drives, and the rebuild time was just under 24 hours. So it took about the same amount of time to rebuild an array with 6TB drives today as it did to rebuild with 2TB drives several years ago. The controllers are faster, the computer CPUs are faster (in the case where the controller uses the CPU for calculations), and the drives have gotten faster (particularly write speeds). This all changes with SMR, because write speed suffers greatly on those drives, so those are the only type of drive I would recommend avoiding in a RAID 5 array.

Plus, having more disks means way more points of failure. You are a lot more likely to have a drive fail if you have 20 drives than if you have 4.

Finally, I'm pretty sure it has been mentioned, but RAID is not a replacement for backups. Your RAID array should be backed up. Even if it is backed up to something not redundant. The money you're talking about spending on a dedicated RAID card, and extra drives would be better spent on a couple extra 6TB drives to perform back-ups to.

This is my current server. F is my main RAID5 array, G is another non-redundant array that F gets backed up nightly to. Adding a backup is more important than RAID6.
http://tpuminecraft.servebeer.com/pictures/raid.png

With my little HighPoint, a 4x3TB RAID 5 array full of data took around 36 hours to rebuild using the RAID card's controller. I haven't rebuilt under software yet.

And yet again, like everyone else, I will keep stressing backups. Don't learn the hard way like the rest of us.

Here's mine:

 