
Need specifics on RAID 10

newtekie1

Semi-Retired Folder
Joined
Nov 22, 2005
Messages
28,472 (4.25/day)
Location
Indiana, USA
Processor Intel Core i7 10850K@5.2GHz
Motherboard AsRock Z470 Taichi
Cooling Corsair H115i Pro w/ Noctua NF-A14 Fans
Memory 32GB DDR4-3600
Video Card(s) RTX 2070 Super
Storage 500GB SX8200 Pro + 8TB with 1TB SSD Cache
Display(s) Acer Nitro VG280K 4K 28"
Case Fractal Design Define S
Audio Device(s) Onboard is good enough for me
Power Supply eVGA SuperNOVA 1000w G3
Software Windows 10 Pro x64
So it's working fine for me to have JBOD, only reason to have RAID is because it would simplify things a bit by having it all one huge drive instead of several drives. Is it even worth it to do RAID for my situation?

Yes, RAID would be worth it for your situation. RAID-10 is only slightly more redundant than RAID-5. With RAID-5 you can have one drive fail and still keep all your data, but a second drive failure and all is lost. With RAID-10 you have RAID-1 mirrored pairs nested in a RAID-0. So it is possible to lose multiple drives and still be fine, but it is also possible that losing two drives will kill all your data. It comes down to which drives die.

IMO, for your setup a RAID-5 would be sufficient; there isn't much point in going RAID-10, as the space you lose isn't worth the minor improvement in redundancy. If anything, do a RAID-5 with a hot spare instead.
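To make the "it comes down to which drives die" point concrete, here is a small Python sketch (illustrative only; the drive numbering and pair layout are hypothetical) that enumerates every two-drive failure in a 4-drive RAID-10:

```python
from itertools import combinations

# Illustrative sketch: a 4-drive RAID-10 is two RAID-1 mirror pairs striped
# together. Data survives as long as no mirror pair loses both of its members.
MIRROR_PAIRS = [(0, 1), (2, 3)]

def raid10_survives(failed, pairs=MIRROR_PAIRS):
    """True if no mirror pair is wholly contained in the failed-drive set."""
    return all(not set(pair) <= set(failed) for pair in pairs)

two_drive_failures = list(combinations(range(4), 2))
survivable = [f for f in two_drive_failures if raid10_survives(f)]

print(f"{len(survivable)} of {len(two_drive_failures)} two-drive failures survivable")
# A RAID-5 of the same 4 drives survives none of them: any second failure is fatal.
```

Four of the six possible two-drive failures leave the RAID-10 intact, while any two-drive failure kills a RAID-5, which is the "minor improvement in redundancy" being weighed here.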
 

Aquinus

Resident Wat-man
Joined
Jan 28, 2012
Messages
13,147 (2.96/day)
Location
Concord, NH, USA
System Name Apollo
Processor Intel Core i9 9880H
Motherboard Some proprietary Apple thing.
Memory 64GB DDR4-2667
Video Card(s) AMD Radeon Pro 5600M, 8GB HBM2
Storage 1TB Apple NVMe, 4TB External
Display(s) Laptop @ 3072x1920 + 2x LG 5k Ultrafine TB3 displays
Case MacBook Pro (16", 2019)
Audio Device(s) AirPods Pro, Sennheiser HD 380s w/ FIIO Alpen 2, or Logitech 2.1 Speakers
Power Supply 96w Power Adapter
Mouse Logitech MX Master 3
Keyboard Logitech G915, GL Clicky
Software MacOS 12.1
I've got around $800 in one of my hardware RAID arrays. It's an LSI 3ware 9650SE 4-port (about $650) and I bought a BBU (around $110).

You can get an LSI 4-port card for 320 USD and an 8-port for a bit less than double that, not including the BBU. RAID cards aren't as expensive as they used to be. That's not to say they're cheap, but any RAID with 10 drives is going to be costly no matter how you do it, and with that many drives that big I'm very skeptical about software RAID.
If anything, do a RAID-5 with a hot spare instead.

Or RAID-6 if the device supports it; that way you would need to lose three drives to lose data, as opposed to two. (RAID-5 has one drive's worth of active fault tolerance and RAID-6 has two, for those who may not have known.)
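The space trade-off between the levels being discussed can be sketched with a little arithmetic (a rough sketch; the helper name is made up and formatting overhead is ignored):

```python
def usable_tb(level, n_drives, drive_tb):
    """Rough usable capacity for common RAID levels (ignores overhead)."""
    if level == "raid5":    # capacity of one drive goes to parity
        return (n_drives - 1) * drive_tb
    if level == "raid6":    # capacity of two drives goes to parity
        return (n_drives - 2) * drive_tb
    if level == "raid10":   # everything is mirrored once
        return n_drives * drive_tb // 2
    raise ValueError(f"unknown level: {level}")

# 10 x 2TB drives: RAID-5 -> 18TB, RAID-6 -> 16TB, RAID-10 -> 10TB usable.
for level in ("raid5", "raid6", "raid10"):
    print(level, usable_tb(level, 10, 2), "TB usable")
```

So on a 10-drive array, stepping from RAID-5 to RAID-6 costs one more drive of capacity for one more drive of fault tolerance, while RAID-10 gives up half the raw space.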
 
Joined
Mar 12, 2009
Messages
1,075 (0.20/day)
Location
SCOTLAND!
System Name Machine XV
Processor Dual Xeon E5 2670 V3 Turbo unlocked
Motherboard Kllisre X99 Dual
Cooling 120mm heatsink
Memory 64gb DDR4 ECC
Video Card(s) RX 480 4Gb
Storage 1Tb NVME SSD
Display(s) 19" + 23" + 17"
Case ATX
Audio Device(s) XFi xtreme USB
Power Supply 800W
Software Windows 10
Is the storage for manipulating large files or large data stores or is it for media storage and playback?

If you're using large files, like for virtual machines or a massive database, then go with RAID on a very expensive controller; this is what it was designed for: redundancy and performance.

If it's just for media storage and playback then use unraid or FlexRAID.

You can add drives without breaking the array

The array only powers up the drive you are wanting the data from, so you don't need 10 drives spinning just to give you a single file.

If you lose 2 drives, all your data on the rest of the drives is fine, but with most RAID it's totally unrecoverable.

Sure, the performance is not as fast as RAID on a very expensive controller, but if it's going over gigabit that's not an issue.

If you spend $$$ on an 8-port SATA card for RAID and then decide you want more than 8 drives, you'd better hope you can find an identical card to expand your array with. With unraid I have mixed controllers from SIL, JMB, VIA, AMD, and LSI.

I have run unraid with 10 drives for over a year, adding a drive every few months. It rebuilds the parity in around 24 hours and the array is accessible during this time.
 

Aquinus

Resident Wat-man
Joined
Jan 28, 2012
Messages
13,147 (2.96/day)
Location
Concord, NH, USA
System Name Apollo
Processor Intel Core i9 9880H
Motherboard Some proprietary Apple thing.
Memory 64GB DDR4-2667
Video Card(s) AMD Radeon Pro 5600M, 8GB HBM2
Storage 1TB Apple NVMe, 4TB External
Display(s) Laptop @ 3072x1920 + 2x LG 5k Ultrafine TB3 displays
Case MacBook Pro (16", 2019)
Audio Device(s) AirPods Pro, Sennheiser HD 380s w/ FIIO Alpen 2, or Logitech 2.1 Speakers
Power Supply 96w Power Adapter
Mouse Logitech MX Master 3
Keyboard Logitech G915, GL Clicky
Software MacOS 12.1
I just read up on Unraid and Flexraid and it's all file-level redundancy. It's by no means RAID, and I think even for pictures, video, and music I would be skeptical of running software that lets you schedule when the parity gets calculated. That alone is dangerous and is keeping me far from it. In all seriousness, if you're going to run a RAID, use something that at least does hardware-level striping rather than file-level redundancy, because your performance is going to be dismal and the integrity of your data cannot be guaranteed if the parity info isn't written at the same time.

I would say that if you're going to run Windows then use Windows software RAID, and if you're going to use some distro of Linux I would use Linux software RAID, if 10 disks is going to be your end-all. Anything not done at the driver level is concerning, and redundancy that doesn't get calculated when the data is written is scary as hell; I would trust none of my data to a system that allows that insanity.
 
Joined
Mar 12, 2009
Messages
1,075 (0.20/day)
Location
SCOTLAND!
System Name Machine XV
Processor Dual Xeon E5 2670 V3 Turbo unlocked
Motherboard Kllisre X99 Dual
Cooling 120mm heatsink
Memory 64gb DDR4 ECC
Video Card(s) RX 480 4Gb
Storage 1Tb NVME SSD
Display(s) 19" + 23" + 17"
Case ATX
Audio Device(s) XFi xtreme USB
Power Supply 800W
Software Windows 10
I just read up on Unraid and Flexraid and it's all file-level redundancy. It's by no means RAID, and I think even for pictures, video, and music I would be skeptical of running software that lets you schedule when the parity gets calculated. That alone is dangerous and is keeping me far from it. In all seriousness, if you're going to run a RAID, use something that at least does hardware-level striping rather than file-level redundancy, because your performance is going to be dismal and the integrity of your data cannot be guaranteed if the parity info isn't written at the same time.

I would say that if you're going to run Windows then use Windows software RAID, and if you're going to use some distro of Linux I would use Linux software RAID, if 10 disks is going to be your end-all. Anything not done at the driver level is concerning, and redundancy that doesn't get calculated when the data is written is scary as hell; I would trust none of my data to a system that allows that insanity.



It's not file-level redundancy. Unraid calculates the parity in real time just like a hardware RAID controller, but it does not stripe the data across the drives, so the read performance is the same as a single drive, like a parity-protected JBOD array. This means in a worst case where I lose 2 data disks, or a parity and a data disk, I can remove the rest of my drives, plug them into any Linux box, and just copy the files off.
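The single-parity scheme described above can be illustrated in a few lines of Python (a toy sketch, not unraid's actual implementation): the parity "drive" holds the XOR of the data drives, so any one failed drive can be rebuilt from the survivors.

```python
from functools import reduce

def xor_blocks(*blocks):
    """XOR equal-length byte blocks together, byte by byte."""
    return bytes(reduce(lambda a, b: a ^ b, group) for group in zip(*blocks))

# Three data "drives" (equal-length blocks for simplicity) plus one parity drive.
drive1, drive2, drive3 = b"movie", b"music", b"photo"
parity = xor_blocks(drive1, drive2, drive3)

# If drive2 dies, XORing the surviving drives with the parity rebuilds it.
rebuilt = xor_blocks(drive1, drive3, parity)
print(rebuilt)  # b'music'
```

Because each file lives whole on one drive, a drive that survives can simply be read directly; the XOR math is only needed to reconstruct the one that failed.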

I know I'm limited to 40MB/s writing directly to the array, but that's why I have the cache drive to dump files to. And yes, I know it's not protected while it's on that cache drive, but that's why I write my backups direct to the array.

FlexRAID works the same, but you can have it with network drives, USB, hard drives, or any data store you want.

It has two options: real-time parity, just like normal RAID, or scheduled parity, so you can restore to the last parity run.

You're missing my point that RAID is not for long-term storage; it was designed for performance and uptime, not redundancy and capacity. Do you need to read and write your movies at 200MB/s when you are limited to the roughly 100MB/s of gigabit Ethernet?

Think of it more like NAS than a server
 

Aquinus

Resident Wat-man
Joined
Jan 28, 2012
Messages
13,147 (2.96/day)
Location
Concord, NH, USA
System Name Apollo
Processor Intel Core i9 9880H
Motherboard Some proprietary Apple thing.
Memory 64GB DDR4-2667
Video Card(s) AMD Radeon Pro 5600M, 8GB HBM2
Storage 1TB Apple NVMe, 4TB External
Display(s) Laptop @ 3072x1920 + 2x LG 5k Ultrafine TB3 displays
Case MacBook Pro (16", 2019)
Audio Device(s) AirPods Pro, Sennheiser HD 380s w/ FIIO Alpen 2, or Logitech 2.1 Speakers
Power Supply 96w Power Adapter
Mouse Logitech MX Master 3
Keyboard Logitech G915, GL Clicky
Software MacOS 12.1
You're missing my point that RAID is not for long-term storage; it was designed for performance and uptime, not redundancy and capacity. Do you need to read and write your movies at 200MB/s when you are limited to the roughly 100MB/s of gigabit Ethernet?

Actually, RAID was designed for long-term reliable storage. There is no point in running a RAID if it's not going to last, because downtime is going to cost you, and uptime is usually the reason you're running RAID in the first place. Not to mention uptime is directly related to the reliability of your storage devices.

Even running software RAID offers a lot of the protection of hardware RAID, and is usually just as reliable with a performance penalty, but not one nearly as steep as 40MB/s. I don't know about you, but when I go to back up my RAID (even unraid isn't a replacement for offsite backup), I get pretty annoyed at how slow my USB 2.0 external drive is; running it on a USB 3.0 port in Turbo mode, I get about 42MB/s and it sucks. I don't care what kind of data I have, I don't want those kinds of speeds locally.

Just when you thought a WD Caviar Green on SATA was slow. :p

So yeah, for strictly storage I'm sure it's fast enough for regular usage, like watching a video, but if you're going to copy an HD video to it or do any transcoding you're going to wish you had a real RAID that writes faster. So if one person is using it, and it's strictly for storage and occasionally a single person watching an HD video, it might be enough, but I wouldn't try doing anything with it while you're backing up, syncing, or moving any data to or from it. 40MB/s is fairly easy to saturate. Now, I understand you have a caching drive for writes, but not all data can be handled that way.
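To put numbers on the saturation argument, a back-of-the-envelope sketch (the rates are the ones mentioned in the thread; the helper name is made up):

```python
def transfer_hours(gigabytes, mb_per_s):
    """Hours needed to move `gigabytes` GB at a sustained rate of `mb_per_s` MB/s."""
    return gigabytes * 1000 / mb_per_s / 3600

# Moving 1TB: ~6.9h at a 40MB/s direct write, ~2.5h at roughly gigabit
# line rate, ~1.4h at the 200MB/s a striped array can manage.
for rate in (40, 110, 200):
    print(f"{rate} MB/s -> {transfer_hours(1000, rate):.1f} h")
```

The gap matters most for bulk operations like the backup and sync scenarios above, where the whole dataset has to move at the sustained write rate.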
 
Joined
Mar 12, 2009
Messages
1,075 (0.20/day)
Location
SCOTLAND!
System Name Machine XV
Processor Dual Xeon E5 2670 V3 Turbo unlocked
Motherboard Kllisre X99 Dual
Cooling 120mm heatsink
Memory 64gb DDR4 ECC
Video Card(s) RX 480 4Gb
Storage 1Tb NVME SSD
Display(s) 19" + 23" + 17"
Case ATX
Audio Device(s) XFi xtreme USB
Power Supply 800W
Software Windows 10
Actually, RAID was designed for long-term reliable storage. There is no point in running a RAID if it's not going to last, because downtime is going to cost you, and uptime is usually the reason you're running RAID in the first place. Not to mention uptime is directly related to the reliability of your storage devices.

Even running software RAID offers a lot of the protection of hardware RAID, and is usually just as reliable with a performance penalty, but not one nearly as steep as 40MB/s. I don't know about you, but when I go to back up my RAID (even unraid isn't a replacement for offsite backup), I get pretty annoyed at how slow my USB 2.0 external drive is; running it on a USB 3.0 port in Turbo mode, I get about 42MB/s and it sucks. I don't care what kind of data I have, I don't want those kinds of speeds locally.

Just when you thought a WD Caviar Green on SATA was slow. :p

So yeah, for strictly storage I'm sure it's fast enough for regular usage, like watching a video, but if you're going to copy an HD video to it or do any transcoding you're going to wish you had a real RAID that writes faster. So if one person is using it, and it's strictly for storage and occasionally a single person watching an HD video, it might be enough, but I wouldn't try doing anything with it while you're backing up, syncing, or moving any data to or from it. 40MB/s is fairly easy to saturate. Now, I understand you have a caching drive for writes, but not all data can be handled that way.

Yeah, 40MB/s is not the fastest, but the 1TB cache is more than adequate for dumping large amounts of data to the server. It's set up so that it's transparent: you just use the array as normal and you don't notice whether it's on the cache or the array. I have it set up to write out the cache once a day, so it's never unprotected for long.

The only time I write direct to the array is for backing up my pc and laptop but I know this is no substitute for offsite backup.

The reasons I went with unraid over hardware raid or windows Home server array

1. Totally hardware independent: I just move my flash drive and hard drives to any PC and it boots and works.

2. Drives power up independently when data is accessed, so you don't have to power up 10 drives for a single file. It saves about 50W when it's idle.

3. I can add drives without having to back up and rebuild the array.

If you're just a home user wanting a large storage pool with enough redundancy to survive a single drive failure, then it's ideal.
 
Last edited:
Joined
May 20, 2004
Messages
10,487 (1.45/day)
Why is RAID 10 the most suitable? I'd go for RAID 6 instead. Similar redundancy and more space. Performance shouldn't be a huge issue with the right hardware.

Also, apart from the controller and drives, you need to think about housing. Putting 10 drives in a normal case without decent drive bays means you get messy cabling, and replacing a drive isn't as straightforward.

I personally use an Axus YI-16SAEU4; you should be able to find similar hardware around the price of a decent hardware RAID controller. You'll have a reliable device with all the features you could wish for, though.
A quick look on eBay gives me this, for example.
 

Aquinus

Resident Wat-man
Joined
Jan 28, 2012
Messages
13,147 (2.96/day)
Location
Concord, NH, USA
System Name Apollo
Processor Intel Core i9 9880H
Motherboard Some proprietary Apple thing.
Memory 64GB DDR4-2667
Video Card(s) AMD Radeon Pro 5600M, 8GB HBM2
Storage 1TB Apple NVMe, 4TB External
Display(s) Laptop @ 3072x1920 + 2x LG 5k Ultrafine TB3 displays
Case MacBook Pro (16", 2019)
Audio Device(s) AirPods Pro, Sennheiser HD 380s w/ FIIO Alpen 2, or Logitech 2.1 Speakers
Power Supply 96w Power Adapter
Mouse Logitech MX Master 3
Keyboard Logitech G915, GL Clicky
Software MacOS 12.1
1. Totally hardware independent: I just move my flash drive and hard drives to any PC and it boots and works.

So is any other software RAID solution, such as Windows or Linux software RAID.
2. Drives power up independently when data is accessed, so you don't have to power up 10 drives for a single file. It saves about 50W when it's idle.
It's also why your disk access speeds are incredibly slow. Also spinning up and spinning down hard drives on a regular basis puts more stress on the motor than keeping it running. Granted I don't know how often your drives wake up and go to sleep.

3. I can add drives without having to back up and rebuild the array.
Got me there, but with RAID you don't need to back up to add a drive (even though you should). In defense of this, my RAID can be degraded and rebuilding and it will still perform better than unraid; that's the point. :)
If you're just a home user wanting a large storage pool with enough redundancy to survive a single drive failure, then it's ideal.
I agree, but software RAID (real striping RAID) can offer better performance and equal, if not better, reliability.
 
Joined
Mar 12, 2009
Messages
1,075 (0.20/day)
Location
SCOTLAND!
System Name Machine XV
Processor Dual Xeon E5 2670 V3 Turbo unlocked
Motherboard Kllisre X99 Dual
Cooling 120mm heatsink
Memory 64gb DDR4 ECC
Video Card(s) RX 480 4Gb
Storage 1Tb NVME SSD
Display(s) 19" + 23" + 17"
Case ATX
Audio Device(s) XFi xtreme USB
Power Supply 800W
Software Windows 10
So is any other software RAID solution, such as Windows or Linux software RAID.

Let me clarify: I don't have to install unraid. It runs live from the flash drive, and all it ever writes is the config file when you set it up. So I don't have to set it up when I migrate hardware; I just plug and play.

It's also why your disk access speeds are incredibly slow. Also spinning up and spinning down hard drives on a regular basis puts more stress on the motor than keeping it running. Granted I don't know how often your drives wake up and go to sleep.

They are set to power down after 1 hour, so for most of the day they are off. I have the filenames buffered so it doesn't have to power on drives when I'm browsing for something; only if I open a file will it power on the drive. So most of my drives are off apart from an hour or 2 a day when a few might come on. The cache drive also keeps drives powered down by buffering all writes to the array and moving everything at once.

Got me there, but RAID you don't need to backup to add a drive (even though you should,) but in defense of this, my RAID can be degraded and rebuilding and it will still perform better than unraid, that's the point. :)

That's only as long as your controller supports online capacity expansion; no software RAID does.

I agree, but software RAID (real striping RAID) can offer better performance and equal, if not better, reliability.

Here is my problem with striping your storage array: if you lose too many drives, you lose everything, with zero chance of getting anything from the remaining drives.

I can take any drive from my array and plug it into any Linux computer and copy the files from it, even the cache. So I can recover my data far more easily than with any form of striped RAID setup.
 
Last edited:
Joined
May 20, 2004
Messages
10,487 (1.45/day)
That's only as long as your controller supports online capacity expansion; no software RAID does.

mdadm does, as did the raidcore stack. As for the latter, mentioning it made me wonder what happened to it; it seems it's offered now under the name assuredVRA. I should play with it.
 

Aquinus

Resident Wat-man
Joined
Jan 28, 2012
Messages
13,147 (2.96/day)
Location
Concord, NH, USA
System Name Apollo
Processor Intel Core i9 9880H
Motherboard Some proprietary Apple thing.
Memory 64GB DDR4-2667
Video Card(s) AMD Radeon Pro 5600M, 8GB HBM2
Storage 1TB Apple NVMe, 4TB External
Display(s) Laptop @ 3072x1920 + 2x LG 5k Ultrafine TB3 displays
Case MacBook Pro (16", 2019)
Audio Device(s) AirPods Pro, Sennheiser HD 380s w/ FIIO Alpen 2, or Logitech 2.1 Speakers
Power Supply 96w Power Adapter
Mouse Logitech MX Master 3
Keyboard Logitech G915, GL Clicky
Software MacOS 12.1
mdadm does, as did the raidcore stack. As for the latter, mentioning it made me wonder what happened to it; it seems it's offered now under the name assuredVRA. I should play with it.

mdadm also stores RAID information on the RAID drives rather than in a config, so if you move the drives to another *nix machine you can migrate your RAID. It's also pretty quick for software RAID.

Let me clarify: I don't have to install unraid. It runs live from the flash drive, and all it ever writes is the config file when you set it up. So I don't have to set it up when I migrate hardware; I just plug and play.

I hope you don't migrate 10 drives to different hardware often. :twitch:
Software is like hardware, though: like works with like. If you're running a RAID off a particular LSI card, any LSI card like it will take it in. Same with Intel's RST, mdadm, you name it. It's when you change the controller or software you're using that you have trouble. So if you didn't want to use unraid anymore, or if the project died and stopped getting updated, it wouldn't be easy to migrate to another system. Just like any other method of redundancy.
 
Joined
Mar 12, 2009
Messages
1,075 (0.20/day)
Location
SCOTLAND!
System Name Machine XV
Processor Dual Xeon E5 2670 V3 Turbo unlocked
Motherboard Kllisre X99 Dual
Cooling 120mm heatsink
Memory 64gb DDR4 ECC
Video Card(s) RX 480 4Gb
Storage 1Tb NVME SSD
Display(s) 19" + 23" + 17"
Case ATX
Audio Device(s) XFi xtreme USB
Power Supply 800W
Software Windows 10
I hope you don't migrate 10 drives to different hardware often. :twitch:
Software is like hardware, though: like works with like. If you're running a RAID off a particular LSI card, any LSI card like it will take it in. Same with Intel's RST, mdadm, you name it. It's when you change the controller or software you're using that you have trouble. So if you didn't want to use unraid anymore, or if the project died and stopped getting updated, it wouldn't be easy to migrate to another system. Just like any other method of redundancy.

You're missing the point I am trying to make: I can take any single drive from my array and copy everything off it, just by copying and pasting, while it's plugged into any Linux computer.

I don't need the unraid OS to recover files, as they are each stored in one piece on one drive, just spread over the drives. So if I pulled one of my hard drives and looked at the folder structure, it would look just like the array on one disk, but the folders would only contain the files designated to that particular drive.

So when I rebuild the array after adding a disk, I am not rebuilding the array but the parity disk, so even if I pushed the reset button halfway through a rebuild it would just reboot and start rebuilding the parity again with no loss.

I don't know of any RAID controller that can survive a total power loss during the online capacity expansion process.
 

Easy Rhino

Linux Advocate
Staff member
Joined
Nov 13, 2006
Messages
15,436 (2.43/day)
Location
Mid-Atlantic
System Name Desktop
Processor i5 13600KF
Motherboard AsRock B760M Steel Legend Wifi
Cooling Noctua NH-U9S
Memory 4x 16 Gb Gskill S5 DDR5 @6000
Video Card(s) Gigabyte Gaming OC 6750 XT 12GB
Storage WD_BLACK 4TB SN850x
Display(s) Gigabye M32U
Case Corsair Carbide 400C
Audio Device(s) On Board
Power Supply EVGA Supernova 650 P2
Mouse MX Master 3s
Keyboard Logitech G915 Wireless Clicky
Software The Matrix
so even if I pushed the reset button halfway through a rebuild it would just reboot and start rebuilding the parity again with no loss.

I don't know of any RAID controller that can survive a total power loss during the online capacity expansion process.

Why would you hit the reset button? If you are hitting the reset button on an enterprise server running RAID, then you should be fired.
 
Joined
Mar 12, 2009
Messages
1,075 (0.20/day)
Location
SCOTLAND!
System Name Machine XV
Processor Dual Xeon E5 2670 V3 Turbo unlocked
Motherboard Kllisre X99 Dual
Cooling 120mm heatsink
Memory 64gb DDR4 ECC
Video Card(s) RX 480 4Gb
Storage 1Tb NVME SSD
Display(s) 19" + 23" + 17"
Case ATX
Audio Device(s) XFi xtreme USB
Power Supply 800W
Software Windows 10
Why would you hit the reset button? If you are hitting the reset button on an enterprise server running RAID, then you should be fired.

Obviously I wouldn't do that; my point is my data would be fine if something happened during the process, like a power cut.
 

Easy Rhino

Linux Advocate
Staff member
Joined
Nov 13, 2006
Messages
15,436 (2.43/day)
Location
Mid-Atlantic
System Name Desktop
Processor i5 13600KF
Motherboard AsRock B760M Steel Legend Wifi
Cooling Noctua NH-U9S
Memory 4x 16 Gb Gskill S5 DDR5 @6000
Video Card(s) Gigabyte Gaming OC 6750 XT 12GB
Storage WD_BLACK 4TB SN850x
Display(s) Gigabye M32U
Case Corsair Carbide 400C
Audio Device(s) On Board
Power Supply EVGA Supernova 650 P2
Mouse MX Master 3s
Keyboard Logitech G915 Wireless Clicky
Software The Matrix
Obviously I wouldn't do that; my point is my data would be fine if something happened during the process, like a power cut.

OK. But that seems like a pointless thing to say.
 
Joined
Nov 11, 2012
Messages
619 (0.15/day)
Is the storage for manipulating large files or large data stores or is it for media storage and playback?

If you're using large files, like for virtual machines or a massive database, then go with RAID on a very expensive controller; this is what it was designed for: redundancy and performance.

If it's just for media storage and playback then use unraid or FlexRAID.

You can add drives without breaking the array

The array only powers up the drive you are wanting the data from, so you don't need 10 drives spinning just to give you a single file.

If you lose 2 drives, all your data on the rest of the drives is fine, but with most RAID it's totally unrecoverable.

Sure, the performance is not as fast as RAID on a very expensive controller, but if it's going over gigabit that's not an issue.

If you spend $$$ on an 8-port SATA card for RAID and then decide you want more than 8 drives, you'd better hope you can find an identical card to expand your array with. With unraid I have mixed controllers from SIL, JMB, VIA, AMD, and LSI.

I have run unraid with 10 drives for over a year, adding a drive every few months. It rebuilds the parity in around 24 hours and the array is accessible during this time.

The drives are primarily for large data stores. Two of the drives play media frequently (like video) but most are mainly for storage and accessed relatively infrequently (maybe a few times per day but not continuously like media files).
 
Joined
Nov 10, 2006
Messages
4,665 (0.73/day)
Location
Washington, US
System Name Rainbow
Processor Intel Core i7 8700k
Motherboard MSI MPG Z390M GAMING EDGE AC
Cooling Corsair H115i, 2x Noctua NF-A14 industrialPPC-3000 PWM
Memory G. Skill TridentZ RGB 4x8GB (F4-3600C16Q-32GTZR)
Video Card(s) ZOTAC GeForce RTX 3090 Trinity
Storage 2x Samsung 950 Pro 256GB | 2xHGST Deskstar 4TB 7.2K
Display(s) Samsung C27HG70
Case Xigmatek Aquila
Power Supply Seasonic 760W SS-760XP
Mouse Razer Deathadder 2013
Keyboard Corsair Vengeance K95
Software Windows 10 Pro
Benchmark Scores 4 trillion points in GmailMark, over 144 FPS 2K Facebook Scrolling (Extreme Quality preset)
Going to try to steer this back on topic.

While it is a lot of data, it's not quite irreplaceable data. I think RAID 5 is more than enough redundancy.

Is there any reason why no one has suggested a NAS? Pair that with four inexpensive 4TB drives and it comes out to 16TB at $0.06 USD per gigabyte, or 12TB of redundant storage at $0.08 USD per gigabyte, including the NAS. The total cost comes out to $999.95 (free shipping) on the Egg, and probably cheaper elsewhere. To save a bit, 3TB drives can be used for 12TB, or 9TB of redundant storage.
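The per-gigabyte figures above work out as follows; a simple arithmetic check using the totals from the post (the helper name is made up, and 1TB is treated as 1000GB):

```python
def dollars_per_gb(total_usd, terabytes):
    """Cost per gigabyte, treating 1TB as 1000GB."""
    return total_usd / (terabytes * 1000)

total = 999.95  # NAS plus four 4TB drives, per the post
print(round(dollars_per_gb(total, 16), 3))  # raw 16TB -> ~0.062 USD/GB
print(round(dollars_per_gb(total, 12), 3))  # 12TB usable after RAID 5 -> ~0.083 USD/GB
```

The redundant figure simply divides the same total by the 12TB left after one drive's worth of parity is subtracted.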
 
Joined
Mar 12, 2009
Messages
1,075 (0.20/day)
Location
SCOTLAND!
System Name Machine XV
Processor Dual Xeon E5 2670 V3 Turbo unlocked
Motherboard Kllisre X99 Dual
Cooling 120mm heatsink
Memory 64gb DDR4 ECC
Video Card(s) RX 480 4Gb
Storage 1Tb NVME SSD
Display(s) 19" + 23" + 17"
Case ATX
Audio Device(s) XFi xtreme USB
Power Supply 800W
Software Windows 10
The drives are primarily for large data stores. Two of the drives play media frequently (like video) but most are mainly for storage and accessed relatively infrequently (maybe a few times per day but not continuously like media files).

If you have big data stores then go for a hardware RAID card running RAID 0+1 if you only have a few disks, but if you have more than 4 I would go with RAID 5, or 6 if you can, as you won't lose as much space, since parity is used rather than 1:1 replication.
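The parity-versus-mirroring space argument is easy to quantify (a sketch; the helper is hypothetical): mirroring always costs half the raw capacity, while parity overhead shrinks as the array grows.

```python
def usable_fraction(n_drives, scheme):
    """Fraction of raw capacity left for data under each scheme."""
    return {
        "mirror": 0.5,                       # 1:1 replication, regardless of size
        "raid5": (n_drives - 1) / n_drives,  # one drive's worth of parity
        "raid6": (n_drives - 2) / n_drives,  # two drives' worth of parity
    }[scheme]

for n in (4, 6, 10):
    print(n, "drives:",
          f"mirror {usable_fraction(n, 'mirror'):.0%},",
          f"RAID 5 {usable_fraction(n, 'raid5'):.0%},",
          f"RAID 6 {usable_fraction(n, 'raid6'):.0%}")
```

At 4 drives the schemes are close, but by 10 drives RAID 5 keeps 90% of the raw space versus mirroring's fixed 50%, which is why parity wins once you pass a handful of disks.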
 

newtekie1

Semi-Retired Folder
Joined
Nov 22, 2005
Messages
28,472 (4.25/day)
Location
Indiana, USA
Processor Intel Core i7 10850K@5.2GHz
Motherboard AsRock Z470 Taichi
Cooling Corsair H115i Pro w/ Noctua NF-A14 Fans
Memory 32GB DDR4-3600
Video Card(s) RTX 2070 Super
Storage 500GB SX8200 Pro + 8TB with 1TB SSD Cache
Display(s) Acer Nitro VG280K 4K 28"
Case Fractal Design Define S
Audio Device(s) Onboard is good enough for me
Power Supply eVGA SuperNOVA 1000w G3
Software Windows 10 Pro x64
You're missing my point that RAID is not for long-term storage; it was designed for performance and uptime, not redundancy and capacity.

Redundant Array of Independent Disks.

You sure RAID wasn't designed for redundancy? Because it's right in the name.

RAID was not designed with performance in mind. It was designed for redundancy originally. And it wasn't designed for uptime either. The only version of RAID that puts performance ahead of redundancy is RAID-0, but RAID-0 wasn't one of the original RAID levels, it was added later. And originally, when a drive failed the array had to be taken out of service to replace the drive, and was out of service the entire time during the rebuild. Uptime wasn't a concern either initially, it was all about redundancy. Uptime, capacity, and speed all came later, pretty much in that order.

If you have big data stores then go for a hardware RAID card running RAID 0+1 if you only have a few disks, but if you have more than 4 I would go with RAID 5, or 6 if you can, as you won't lose as much space, since parity is used rather than 1:1 replication.

Personally, if you have more than 2 drives, any level of mirroring is pointless. So I'd say with 3 or more drives, go with RAID-5 at least.

Going to try to steer this back on topic.

While it is a lot of data, it's not quite irreplaceable data. I think RAID 5 is more than enough redundancy.

Is there any reason why no one has suggested a NAS? Pair that with four inexpensive 4TB drives and it comes out to 16TB at $0.06 USD per gigabyte, or 12TB of redundant storage at $0.08 USD per gigabyte, including the NAS. The total cost comes out to $999.95 (free shipping) on the Egg, and probably cheaper elsewhere. To save a bit, 3TB drives can be used for 12TB, or 9TB of redundant storage.

I'd grab one of these before going with a NAS. Though I think the reason most didn't suggest a NAS is because he said he wants to use up to 10 drives, and 10 drive NAS devices are expensive.

But with the external RAID box, you can have up to 10 drives connected to one card, with two enclosures. And all 10 drives can be set up in one big RAID5 or 6 array.
 
Last edited:
Joined
Mar 12, 2009
Messages
1,075 (0.20/day)
Location
SCOTLAND!
System Name Machine XV
Processor Dual Xeon E5 2670 V3 Turbo unlocked
Motherboard Kllisre X99 Dual
Cooling 120mm heatsink
Memory 64gb DDR4 ECC
Video Card(s) RX 480 4Gb
Storage 1Tb NVME SSD
Display(s) 19" + 23" + 17"
Case ATX
Audio Device(s) XFi xtreme USB
Power Supply 800W
Software Windows 10
Redundant Array of Independent Disks.

You sure RAID wasn't designed for redundancy? Because it's right in the name.

RAID was not designed with performance in mind. It was designed for redundancy originally. And it wasn't designed for uptime either. The only version of RAID that puts performance ahead of redundancy is RAID-0, but RAID-0 wasn't one of the original RAID levels, it was added later. And originally, when a drive failed the array had to be taken out of service to replace the drive, and was out of service the entire time during the rebuild. Uptime wasn't a concern either initially, it was all about redundancy. Uptime, capacity, and speed all came later, pretty much in that order.

Sorry, it probably was not the best choice of words on my part. All I mean is that with no version of RAID (apart from RAID 1) can you recover data from an individual drive. If you lose 2 drives, you cannot recover data from the rest of the drives either, as everything is striped across them.


Maximum chance of recovery and lowest power consumption were what I was looking for with my setup; it suits my use of dumping media to it and playing it back. Each 1TB drive can saturate my gigabit home network individually, so I did not need the extra performance that RAID 5 provides. So it's protected with a parity drive, and I can use the 1TB cache as a hot spare if I lose a drive: I just flush it, unmount the array, change it from cache to data, and hit rebuild.

If you're doing a business setup and have the cash for 12-port SAS controllers and a proper server, I would recommend doing that instead.
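The striping point above can be illustrated with RAID-5-style XOR parity: any one missing stripe member can be rebuilt from the survivors, but two missing members cannot. A minimal sketch with made-up data (not any controller's actual on-disk format):

```python
from functools import reduce

def parity(blocks):
    """XOR equal-length blocks together, byte by byte."""
    return bytes(reduce(lambda a, b: a ^ b, chunk) for chunk in zip(*blocks))

# Three "drives" worth of data plus one parity block (RAID-5 style).
d0, d1, d2 = b"AAAA", b"BBBB", b"CCCC"
p = parity([d0, d1, d2])

# Lose one drive (d1): rebuild its contents from the survivors plus parity.
rebuilt = parity([d0, d2, p])
assert rebuilt == d1

# Lose two drives: XOR-ing the remaining block with parity only yields the
# XOR of the two missing blocks, so neither one can be recovered alone.
```

The same XOR property is why a single surviving drive from a striped array holds no usable files on its own: every stripe is incomplete without its siblings.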
 
Last edited:
Joined
Nov 11, 2012
Messages
619 (0.15/day)
Going to try to steer this back on topic.

While it is a lot of data, it's not quite irreplaceable data. I think RAID 5 is more than enough redundancy.

Is there any reason why no one has suggested a NAS? Pair one with four inexpensive 4TB drives and it comes out to 16TB at $0.06 USD per gigabyte, or 12TB of redundant storage at $0.08 USD per gigabyte, including the NAS. The total cost comes out to $999.95 (free shipping) on the Egg, and probably cheaper elsewhere. To save a bit, 3TB drives can be used instead, for 12TB, or 9TB of redundant storage.
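A quick sanity check of that per-gigabyte arithmetic, taking the quoted $999.95 total as given and using decimal gigabytes (a rough sketch, not exact pricing):

```python
# Hedged sketch of the cost-per-gigabyte figures quoted above.
total_cost = 999.95        # NAS plus four 4TB drives, as quoted
raw_gb = 4 * 4000          # four 4TB drives, 1TB taken as 1000GB
redundant_gb = 3 * 4000    # RAID 5 leaves one drive's worth for parity

print(round(total_cost / raw_gb, 3))        # roughly $0.06 per GB raw
print(round(total_cost / redundant_gb, 3))  # roughly $0.08 per GB redundant
```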

Well, actually, those 4TB drives might be relatively inexpensive, but you can get a 2TB for only $75, which is 4TB for $150, so multiple 2TB drives are still the cheapest option.

However, if 4TB drives eventually drop to the price of 2 x 2TB, then it will certainly be better to get 4TB drives.

Personally, I see no reason to buy more HDDs now, when I still have a couple of TB left and HDD prices continue to fall. I figure that by the time I need more space, which might be a couple to a few months from now, prices will have dropped even lower.

I don't need to buy a dedicated NAS, because housing more HDD space is one of the main reasons I built my desktop PC in the first place.

Redundant Array of Independent Disks.

You sure RAID wasn't designed for redundancy? Because it's right in the name.

RAID was not designed with performance in mind; it was designed for redundancy. The only RAID level that puts performance ahead of redundancy is RAID-0, and RAID-0 wasn't one of the original RAID levels, it was added later. Nor was uptime an original goal: when a drive failed, the array had to be taken out of service to replace it, and it stayed out of service for the entire rebuild. Initially it was all about redundancy; uptime, capacity, and speed all came later, pretty much in that order.



Personally, I think any level of mirroring is pointless once you have more than 2 drives. So with 3 or more drives, go with at least RAID-5.



I'd grab one of these before going with a NAS. Though I think the reason most didn't suggest a NAS is because he said he wants to use up to 10 drives, and 10-drive NAS devices are expensive.

But with the external RAID box, you can have up to 10 drives connected to one card, using two enclosures, and all 10 drives can be set up in one big RAID 5 or RAID 6 array.

Hmm, I'd be open to considering an external dedicated RAID box...
 
Last edited:

newtekie1

Semi-Retired Folder
Joined
Nov 22, 2005
Messages
28,472 (4.25/day)
Location
Indiana, USA
Processor Intel Core i7 10850K@5.2GHz
Motherboard AsRock Z470 Taichi
Cooling Corsair H115i Pro w/ Noctua NF-A14 Fans
Memory 32GB DDR4-3600
Video Card(s) RTX 2070 Super
Storage 500GB SX8200 Pro + 8TB with 1TB SSD Cache
Display(s) Acer Nitro VG280K 4K 28"
Case Fractal Design Define S
Audio Device(s) Onboard is good enough for me
Power Supply eVGA SuperNOVA 1000w G3
Software Windows 10 Pro x64
Hmm, I'd be open to considering an external dedicated RAID box...

That is what I would do, and in this case the RAID box comes with everything you need, including the card. And if you buy two, you get a spare card; you can either keep it in case something happens to the first one, or sell it on eBay or something.
 

Geekoid

New Member
Joined
Mar 27, 2013
Messages
77 (0.02/day)
Location
UK
Yes - a RAID box is the way to go. Personally, I use a dual-channel fibre connection and RAID 6 with hot spares. It would take 4 simultaneous disk failures before I'm in trouble. A rebuild takes less than 12 hours, so the other 3 failures would all have to happen within that window, and even a second disk failure has never yet happened during a rebuild. Full backups only happen each weekend, as it takes quite a while to copy the whole 16TB :) As you can tell, I'm all for keeping my data over disk performance.
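The odds of a second failure landing inside that 12-hour rebuild window can be roughed out from an assumed annualized failure rate (AFR). The 5% AFR and the count of 7 surviving drives below are assumptions for illustration, not measured figures, and the model treats failures as independent:

```python
# Back-of-envelope: chance that at least one of the remaining drives fails
# during a rebuild window, assuming independent failures at a given AFR.
def p_failure_during_rebuild(n_remaining, afr, rebuild_hours):
    hours_per_year = 365 * 24
    p_one = afr * rebuild_hours / hours_per_year  # per-drive, small-p approx.
    return 1 - (1 - p_one) ** n_remaining

# e.g. 7 surviving drives, 5% AFR, 12-hour rebuild:
print(p_failure_during_rebuild(7, 0.05, 12))
```

Under those assumptions the probability works out to a small fraction of a percent per rebuild, which is consistent with never having seen a second failure during a rebuild, though real drive failures are often correlated (same batch, same heat, same rebuild stress), so the true risk is somewhat higher than an independence model suggests.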
 
Joined
Mar 12, 2009
Messages
1,075 (0.20/day)
Location
SCOTLAND!
System Name Machine XV
Processor Dual Xeon E5 2670 V3 Turbo unlocked
Motherboard Kllisre X99 Dual
Cooling 120mm heatsink
Memory 64gb DDR4 ECC
Video Card(s) RX 480 4Gb
Storage 1Tb NVME SSD
Display(s) 19" + 23" + 17"
Case ATX
Audio Device(s) XFi xtreme USB
Power Supply 800W
Software Windows 10
I would love to have a little ITX box with an external hard-drive rack, but it was not going to be cheap to build.
 