
Need specifics on RAID 10

newtekie1

Semi-Retired Folder
Joined
Nov 22, 2005
Messages
24,274 (5.51/day)
Likes
10,361
Location
Indiana, USA
Processor Intel Core i7 4790K@4.6GHz
Motherboard AsRock Z97 Extreme6
Cooling Corsair H100i
Memory 32GB Corsair DDR3-1866 9-10-9-27
Video Card(s) ASUS GTX960 STRIX @ 1500/1900
Storage 480GB Crucial MX200 + 2TB Seagate Solid State Hybrid Drive with 128GB OCZ Synapse SSD Cache
Display(s) QNIX QX2710 1440p@120Hz
Case Corsair 650D Black
Audio Device(s) Onboard is good enough for me
Power Supply Corsair HX850
Software Windows 10 Pro x64
#51
So it's working fine for me to have JBOD, only reason to have RAID is because it would simplify things a bit by having it all one huge drive instead of several drives. Is it even worth it to do RAID for my situation?
Yes, RAID would be worth it for your situation. RAID-10 is only slightly more redundant than RAID-5. With RAID-5 you can have one drive fail and still keep all your data, but a second drive failure and all is lost. With RAID-10 you have two RAID-1 arrays nested in a RAID-0. So it is possible to lose multiple drives and still be fine, but it is also possible that losing two drives will kill all your data. It comes down to which drives die.

IMO, for your setup a RAID-5 would be sufficient; there isn't much point in going RAID-10, since the space you lose isn't worth the minor improvement in redundancy. If anything, do a RAID-5 with a hot spare instead.
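To make the "it comes down to which drives die" point concrete, here's a quick sketch (assuming a minimal 4-drive RAID-10 built from two mirror pairs, a layout I'm using purely for illustration) that counts which two-drive failures each level survives:

```python
from itertools import combinations

# Hypothetical 4-drive RAID-10: drives 0/1 mirror each other, drives 2/3 do too.
# The array survives as long as no mirror pair loses both of its members.
mirror_pairs = [(0, 1), (2, 3)]

def raid10_survives(failed):
    return not any(set(pair) <= set(failed) for pair in mirror_pairs)

def raid5_survives(failed):
    # RAID-5 tolerates at most one failed drive.
    return len(failed) <= 1

two_drive_failures = list(combinations(range(4), 2))
r10_ok = sum(raid10_survives(f) for f in two_drive_failures)
r5_ok = sum(raid5_survives(f) for f in two_drive_failures)

print(f"RAID-10 survives {r10_ok} of {len(two_drive_failures)} two-drive failures")  # 4 of 6
print(f"RAID-5 survives {r5_ok} of {len(two_drive_failures)} two-drive failures")    # 0 of 6
```

So on 4 drives, RAID-10 rides out 4 of the 6 possible two-drive failures, while RAID-5 survives none of them.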
 

Aquinus

Resident Wat-man
Joined
Jan 28, 2012
Messages
10,398 (4.85/day)
Likes
5,477
Location
Concord, NH
System Name Kratos
Processor Intel Core i7 3930k @ 4.2Ghz
Motherboard ASUS P9X79 Deluxe
Cooling Zalman CPNS9900MAX 130mm
Memory G.Skill DDR3-2133, 16gb (4x4gb) @ 9-11-10-28-108-1T 1.65v
Video Card(s) MSI AMD Radeon R9 390 GAMING 8GB @ PCI-E 3.0
Storage 2x120Gb SATA3 Corsair Force GT Raid-0, 4x1Tb RAID-5, 1x500GB
Display(s) 1x LG 27UD69P (4k), 2x Dell S2340M (1080p)
Case Antec 1200
Audio Device(s) Onboard Realtek® ALC898 8-Channel High Definition Audio
Power Supply Seasonic 1000-watt 80 PLUS Platinum
Mouse Logitech G602
Keyboard Rosewill RK-9100
Software Ubuntu 17.10
Benchmark Scores Benchmarks aren't everything.
#52
I've got around $800 bucks in one of my hardware RAID Arrays. It's a LSI 3ware 9650SE 4-port (about $650) and I bought a BBU (around $110)
You can get an LSI 4-port card for 320 USD and an 8-port for a bit less than double that, not including the BBU. RAID cards aren't as expensive as they used to be. Not to say they're cheap, but any RAID with 10 drives is going to be costly no matter how you do it, and with drives that big and that many of them I'm very skeptical about software RAID.
If anything do a RAID-5 with a hot spare instead.
Or RAID-6 if the device supports it; that way you would need to lose 3 drives to lose data, as opposed to 2. (RAID-5 has one drive's worth of active fault tolerance and RAID-6 has two, for those who may not have known.)
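The capacity trade-off being weighed here can be summed up in a few lines (a rough sketch; `usable_tb` is just a name I made up, and it ignores controller overhead and mixed drive sizes):

```python
# Usable space for n equal-size drives under common layouts:
# RAID-5 spends one drive's worth of space on parity, RAID-6 spends two,
# and RAID-10 mirrors every drive so only half the raw space is usable.
def usable_tb(level, n, size_tb):
    if level == "RAID-5":
        return (n - 1) * size_tb
    if level == "RAID-6":
        return (n - 2) * size_tb
    if level == "RAID-10":
        return (n // 2) * size_tb
    raise ValueError(f"unknown level: {level}")

for level in ("RAID-5", "RAID-6", "RAID-10"):
    print(f"{level}: {usable_tb(level, 10, 2)} TB usable from 10 x 2TB drives")
```

With ten 2TB drives that works out to 18TB for RAID-5, 16TB for RAID-6, and only 10TB for RAID-10, which is why the parity levels look attractive for bulk storage.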
 
Joined
Mar 12, 2009
Messages
1,042 (0.33/day)
Likes
153
Location
SCOTLAND!
System Name Machine XII
Processor Phenom II 1155T
Motherboard Asus M4A88TD-M EVO/USB3
Cooling Custom Water Cooling
Memory 8Gb ddr3 1600mhz XMS 3
Video Card(s) HD5970 2Gb
Storage 128Gb Sandisk SATAIII 6G/s
Display(s) 19" + 23" + 17"
Case micro atx case
Audio Device(s) XFi xtreme OEM
Power Supply 750W
Software Windows 7 x64 Ultimate
#53
Is the storage for manipulating large files or large data stores or is it for media storage and playback?

If you're using large files, like for virtual machines or a massive database, then go RAID on a very expensive controller; this is what it was designed for: redundancy and performance.

If it's just for media storage and playback, then use unraid or flexraid.

You can add drives without breaking the array

The array only powers up the drive holding the data you want, so you don't need 10 drives spinning just to serve a single file

If you lose 2 drives, all your data on the rest of the drives is fine, but with most RAID it's totally unrecoverable.

Sure, the performance is not as fast as RAID on a very expensive controller, but if it's going over gigabit that's not an issue.

If you spend $$$ on an 8-port SATA card for RAID, then decide you want more than 8 drives, you'd better hope you can find an identical card to expand your array with. With unraid I have mixed controllers from SIL, JMB, VIA, AMD, and LSI.

I have run unraid with 10 drives for over a year, adding a drive every few months. It rebuilds the parity in around 24 hours and the array is accessible during this time.
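For anyone curious how a single parity drive can rebuild any one failed data drive (the scheme both RAID-5 and unraid's parity disk rely on), the core trick is just XOR; here's a toy sketch with made-up 8-byte "blocks":

```python
# Parity is the XOR of all data blocks. XOR-ing the parity with the
# surviving blocks cancels them out and leaves the missing block.
data_blocks = [b"alpha---", b"bravo---", b"charlie-"]

def xor_blocks(blocks):
    out = bytes(len(blocks[0]))
    for b in blocks:
        out = bytes(x ^ y for x, y in zip(out, b))
    return out

parity = xor_blocks(data_blocks)

# Simulate losing the second drive and rebuilding it from the rest + parity.
survivors = [data_blocks[0], data_blocks[2]]
rebuilt = xor_blocks(survivors + [parity])
print(rebuilt)  # b'bravo---'
```

Lose two data drives, though, and there's nothing left to cancel against, which is exactly the single-parity limit being discussed.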
 

Aquinus

#54
I just read up on Unraid and Flexraid and it's all file-level redundancy. It's by no means RAID, and I think even for pictures, video, and music I would be skeptical of running software that lets you schedule when the parity gets calculated. That alone is dangerous and is keeping me far from it. In all seriousness, if you're going to run a RAID, use something that at least does hardware-level striping rather than file-level redundancy, because your performance is going to be dismal and the integrity of your data cannot be guaranteed if the parity info isn't written at the same time as the data.

I would say that if you're going to run Windows then use Windows software RAID, and if you're going to use some distro of Linux, use Linux software RAID, if 10 disks is going to be your end state. Anything not done at the driver level is concerning, and redundancy that doesn't get calculated when the data is written is scary as hell; I would trust none of my data to a system that allows that insanity.
 
Joined
Mar 12, 2009
#55
I just read up on Unraid and Flexraid and it's all file-level redundancy. It's by no means RAID, and I think even for pictures, video, and music I would be skeptical of running software that lets you schedule when the parity gets calculated. That alone is dangerous and is keeping me far from it. In all seriousness, if you're going to run a RAID, use something that at least does hardware-level striping rather than file-level redundancy, because your performance is going to be dismal and the integrity of your data cannot be guaranteed if the parity info isn't written at the same time as the data.

I would say that if you're going to run Windows then use Windows software RAID, and if you're going to use some distro of Linux, use Linux software RAID, if 10 disks is going to be your end state. Anything not done at the driver level is concerning, and redundancy that doesn't get calculated when the data is written is scary as hell; I would trust none of my data to a system that allows that insanity.


It's not file-level redundancy with unraid; it calculates the parity in real time just like a hardware RAID controller, but it does not stripe the data across the drives, so the read performance is the same as a single drive, like a parity-protected JBOD array. This means in a worst case where I lose 2 data disks, or a parity and a data disk, I can remove the rest of my drives, plug them into any Linux box, and just copy the files off.

I know I'm limited to 40 MB/s writing directly to the array, but that's why I have the cache drive to dump files to. And yes, I know it's not protected while it's on that cache drive, but that's why I write my backups direct to the array.

Flexraid works the same, but you can build it from network drives, USB, hard drives, or any data store you want.

It has 2 options: real-time parity just like normal RAID, or scheduled parity, so you can restore to the last parity run.

You're missing my point that RAID is not for long-term storage; it was designed for performance and uptime, not redundancy and capacity. Do you need to read and write your movies at 200 MB/s when you are limited to the ~100 MB/s of gigabit Ethernet?

Think of it more like a NAS than a server.
 

Aquinus

#56
You're missing my point that RAID is not for long-term storage; it was designed for performance and uptime, not redundancy and capacity. Do you need to read and write your movies at 200 MB/s when you are limited to the ~100 MB/s of gigabit Ethernet?
Actually, RAID was designed for long-term reliable storage. There is no point in running a RAID if it's not going to last, because downtime is going to cost you, and uptime is usually the reason you're running RAID in the first place. Not to mention uptime is directly related to the reliability of your storage devices.

Even software RAID offers a lot of the protection of hardware RAID and is usually just as reliable, with a performance penalty, but one nowhere near as steep as dropping to 40 MB/s. I don't know about you, but when I go to back up my RAID (even unraid isn't a replacement for offsite backup) I get pretty annoyed at how slow my USB 2.0 external drive is; running it on a USB 3.0 port in Turbo mode I get about 42 MB/s, and it sucks. I don't care what kind of data I have, I don't want those kinds of speeds locally.

Just when you thought a WD Caviar Green on SATA was slow. :p

So yeah, for strictly storage I'm sure it's fast enough for regular usage, like watching a video, but if you're going to copy an HD video to it or do any transcoding you're going to wish you had a real RAID that writes faster. So if one person is using it, and it's strictly for storage and occasionally a single person watching an HD video, it might be enough, but I wouldn't try doing anything with it while you're backing up, syncing, or moving any data to or from it. 40 MB/s is fairly easy to saturate. I understand you have a caching drive for writes, but not all data can be handled that way.
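Some back-of-the-envelope numbers on that 40 MB/s figure (a quick sketch; the ~118 MB/s is my rough estimate of usable gigabit Ethernet throughput, not a measured value):

```python
# Hours needed to move a given amount of data at a sustained speed,
# treating 1 GB as 1000 MB for simplicity.
def hours_to_copy(size_gb, speed_mb_s):
    return size_gb * 1000 / speed_mb_s / 3600

for speed_mb_s in (40, 118):
    print(f"1 TB at {speed_mb_s} MB/s: {hours_to_copy(1000, speed_mb_s):.1f} hours")
```

A 1TB dump takes roughly 7 hours at 40 MB/s versus under 2.5 hours at wire speed, so the gap is very noticeable on big copies.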
 
Joined
Mar 12, 2009
#57
Actually, RAID was designed for long-term reliable storage. There is no point in running a RAID if it's not going to last, because downtime is going to cost you, and uptime is usually the reason you're running RAID in the first place. Not to mention uptime is directly related to the reliability of your storage devices.

Even software RAID offers a lot of the protection of hardware RAID and is usually just as reliable, with a performance penalty, but one nowhere near as steep as dropping to 40 MB/s. I don't know about you, but when I go to back up my RAID (even unraid isn't a replacement for offsite backup) I get pretty annoyed at how slow my USB 2.0 external drive is; running it on a USB 3.0 port in Turbo mode I get about 42 MB/s, and it sucks. I don't care what kind of data I have, I don't want those kinds of speeds locally.

Just when you thought a WD Caviar Green on SATA was slow. :p

So yeah, for strictly storage I'm sure it's fast enough for regular usage, like watching a video, but if you're going to copy an HD video to it or do any transcoding you're going to wish you had a real RAID that writes faster. So if one person is using it, and it's strictly for storage and occasionally a single person watching an HD video, it might be enough, but I wouldn't try doing anything with it while you're backing up, syncing, or moving any data to or from it. 40 MB/s is fairly easy to saturate. I understand you have a caching drive for writes, but not all data can be handled that way.
Yeah, 40 MB/s is not the fastest, but the 1TB cache is more than adequate for dumping large amounts of data to the server. It's set up so that it's transparent: you just use the array as normal and don't notice whether the data is on the cache or the array. I have it set up to write out the cache once a day, so it's never unprotected for long.

The only time I write direct to the array is for backing up my PC and laptop, but I know this is no substitute for offsite backup.

The reasons I went with unraid over hardware RAID or a Windows Home Server array:

1. Totally hardware independent. I just move my flash drive and hard drives to any PC and it boots and works.

2. Drives power up independently when data is accessed, so you don't have to power up 10 drives for a single file. It saves about 50W when idle.

3. I can add drives without having to back up and rebuild the array.

If you're just a home user wanting a large storage pool with enough redundancy to survive a single drive failure, then it's ideal.
 
Last edited:

DanTheBanjoman

Señor Moderator
Joined
May 20, 2004
Messages
10,488 (2.12/day)
Likes
1,331
#58
Why is RAID 10 the most suitable? I'd go for RAID 6 instead. Similar redundancy and more space. Performance shouldn't be a huge issue with the right hardware.

Also, apart from the controller and drives, you need to think about housing. Putting 10 drives in a normal case without decent drive bays means messy cabling, and replacing a drive isn't as straightforward.

I personally use an Axus YI-16SAEU4; you should be able to find similar hardware around the price of a decent hardware RAID controller, and you'll have a reliable device with all the features you could wish for.
A quick look on eBay gives me this, for example.
 

Aquinus

#59
1. Totally hardware independent I just move my flash drive and hard drives to any PC and it boots and works
So is any other software RAID solution, such as Windows or Linux software RAID.
2. Drives power up independently when data is accessed so you don't have to power up 10 drives for a single file So it saves about 50w when it's idle
It's also why your disk access speeds are incredibly slow. Also, spinning hard drives up and down on a regular basis puts more stress on the motor than keeping them running. Granted, I don't know how often your drives wake up and go to sleep.

3. I can add drives without having to backup and rebuild the array.
Got me there, though with RAID you don't need to back up to add a drive either (even though you should). But in defense of this, my RAID can be degraded and rebuilding and it will still perform better than unraid; that's the point. :)
If your just a home user wanting to have a large storage pool with enough redundancy to survive a single drive failure then it's ideal.
I agree, but software RAID (real striping RAID) can offer better performance and equal if not better reliability.
 
Joined
Mar 12, 2009
#60
So is any other software RAID solution, such as Windows or Linux software RAID.
Let me clarify: I don't have to install unraid; it runs live from the flash drive, and all it ever writes is the config file when you set it up. So I don't have to set anything up when I migrate hardware; I just plug and play.

It's also why your disk access speeds are incredibly slow. Also, spinning hard drives up and down on a regular basis puts more stress on the motor than keeping them running. Granted, I don't know how often your drives wake up and go to sleep.
They are set to power down after 1 hour, so for most of the day they are off. I have the filenames buffered, so it doesn't have to power on drives when I'm browsing for something; only opening a file powers on a drive. So most of my drives are off apart from an hour or two a day when a few might come on. The cache drive also keeps drives powered down by buffering all writes to the array and moving everything at once.

Got me there, though with RAID you don't need to back up to add a drive either (even though you should). But in defense of this, my RAID can be degraded and rebuilding and it will still perform better than unraid; that's the point. :)
That's only as long as your controller supports online capacity expansion, which no software RAID does.

I agree, but software RAID (real striping RAID) can offer better performance and equal if not better reliability.
Here is my problem with striping your storage array: if you lose too many drives you lose everything, with zero chance of getting anything from the remaining drives.

I can take any drive from my array, plug it into any Linux computer, and copy the files from it, even the cache. So I can recover my data far more easily than with any form of striped RAID setup.
 
Last edited:

DanTheBanjoman

#61
That's only as long as your controller supports online capacity expansion, which no software RAID does.
mdadm does, as did the RAIDCore stack. As for the latter, mentioning it made me wonder what happened to it; it seems it's offered now under the name assuredVRA. I should play with it.
 

Aquinus

#62
mdadm does, as did the RAIDCore stack. As for the latter, mentioning it made me wonder what happened to it; it seems it's offered now under the name assuredVRA. I should play with it.
mdadm also stores the RAID information on the drives themselves rather than in a config file. So if you move the drives to another *nix machine, you can migrate your RAID. It's also pretty quick for software RAID.

Yet me clarify I don't have to install unraid it runs live from the flash drive and all it ever writes is to the config file when you set it up. So i don't have to set it up when I migrate hardware I just plug and play.
I hope you don't migrate 10 drives to different hardware often. :twitch:
Software is like hardware though: like works with like. If you're running a RAID off a particular LSI card, any similar LSI card will take it in. Same with Intel's RST, mdadm, you name it. It breaks when you change the controller or software you're using. So if you didn't want to use unraid anymore, or if the project died and stopped getting updates, it wouldn't be easy to migrate to another system. Just like any other method of redundancy.
 
Joined
Mar 12, 2009
#63
I hope you don't migrate 10 drives to different hardware often. :twitch:
Software is like hardware though: like works with like. If you're running a RAID off a particular LSI card, any similar LSI card will take it in. Same with Intel's RST, mdadm, you name it. It breaks when you change the controller or software you're using. So if you didn't want to use unraid anymore, or if the project died and stopped getting updates, it wouldn't be easy to migrate to another system. Just like any other method of redundancy.
You're missing the point I am trying to make: I can take any single drive from my array and copy everything off it, just by copying and pasting, while it's plugged into any Linux computer.

I don't need the unraid OS to recover files, as each file is in one piece on one drive; the files are just spread across the drives. So if I pulled one of my hard drives and looked at the folder structure, it would look just like the array on one disk, but the folders would only contain the files assigned to that particular drive.

So when I rebuild the array after adding a disk, I am not rebuilding the array but the parity disk. Even if I pushed the reset button halfway through a rebuild, it would just reboot and start rebuilding the parity again with no loss.

I don't know of any RAID controller that can survive a total power loss during an online capacity expansion.
 

Easy Rhino

Linux Advocate
Joined
Nov 13, 2006
Messages
14,405 (3.56/day)
Likes
4,256
System Name VHOST01 | Desktop
Processor i7 980x | i5 7500 Kaby Lake
Motherboard Gigabyte x58 Extreme | AsRock MicroATX Z170M Exteme4
Cooling Prolimatech Megahelams | Stock
Memory 6x4 GB @ 1333 | 2x 8G Gskill Aegis DDR4 2400
Video Card(s) Nvidia GT 210 | Nvidia GTX 970 FTW+
Storage 4x2 TB Enterprise RAID5 |Corsair mForce nvme 250G
Display(s) N/A | Dell 27" 1440p 8bit GSYNC
Case Lian Li ATX Mid Tower | Corsair Carbide 400C
Audio Device(s) NA | On Board
Power Supply SeaSonic 500W Gold | Seasonic SSR-650GD Flagship Prime Series 650W Gold
Mouse N/A | Logitech G900 Chaos Spectrum
Keyboard N/A | Posiden Z RGB Cherry MX Brown
Software Centos 7 | Windows 10
#64
so even if I pushed the reset button halfway through a rebuild, it would just reboot and start rebuilding the parity again with no loss.

I don't know of any RAID controller that can survive a total power loss during an online capacity expansion.
Why would you hit the reset button? If you are hitting the reset button on an enterprise server running RAID, then you should be fired.
 
Joined
Mar 12, 2009
#65
Why would you hit the reset button? If you are hitting the reset button on an enterprise server running RAID, then you should be fired.
Obviously I wouldn't do that; my point is my data would be fine if something happened during the process, like a power cut.
 

Easy Rhino

#66
Obviously I wouldn't do that; my point is my data would be fine if something happened during the process, like a power cut.
OK. But that seems like a pointless thing to say.
 
Joined
Nov 11, 2012
Messages
619 (0.33/day)
Likes
34
#67
Is the storage for manipulating large files or large data stores or is it for media storage and playback?

If you're using large files, like for virtual machines or a massive database, then go RAID on a very expensive controller; this is what it was designed for: redundancy and performance.

If it's just for media storage and playback, then use unraid or flexraid.

You can add drives without breaking the array

The array only powers up the drive holding the data you want, so you don't need 10 drives spinning just to serve a single file

If you lose 2 drives, all your data on the rest of the drives is fine, but with most RAID it's totally unrecoverable.

Sure, the performance is not as fast as RAID on a very expensive controller, but if it's going over gigabit that's not an issue.

If you spend $$$ on an 8-port SATA card for RAID, then decide you want more than 8 drives, you'd better hope you can find an identical card to expand your array with. With unraid I have mixed controllers from SIL, JMB, VIA, AMD, and LSI.

I have run unraid with 10 drives for over a year, adding a drive every few months. It rebuilds the parity in around 24 hours and the array is accessible during this time.
The drives are primarily for large data stores. Two of the drives play media frequently (like video), but most are mainly for storage and are accessed relatively infrequently (maybe a few times per day, but not continuously like media files).
 
Joined
Nov 10, 2006
Messages
4,527 (1.12/day)
Likes
6,075
Location
Washington, US
System Name Lappy
Processor i7 6700k
Motherboard Sager NP9870
Cooling A lot smaller than I'd like
Memory Samsung 4x8GB DDR4-2133 SO-DIMM
Video Card(s) GTX 980 (MXM)
Storage 2xSamsung 950 Pro 256GB | 2xHGST 1TB 7.2K
Display(s) 17.3" IPS 1080p G-SYNC
Audio Device(s) Sound Blaster X-FI MB 5, Foster 2.1 channel integrated
Mouse Razer Deathadder
Keyboard 3-zone RGB integrated
Software Windows 10 Pro
Benchmark Scores Not slow
#68
Going to try to steer this back on topic.

While it is a lot of data, it's not quite irreplaceable data. I think RAID 5 is more than enough redundancy.

Is there any reason why no one has suggested a NAS? Pair one with four inexpensive 4TB drives and it comes out to 16TB at $0.06 USD per gigabyte, or 12TB of redundant storage at $0.08 USD per gigabyte, including the NAS. The total cost comes out to $999.95 (free shipping) on the Egg, and probably cheaper elsewhere. To save a bit, 3TB drives can be used for 12TB, or 9TB of redundant storage.
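The cost-per-gigabyte figures work out like this (a quick sketch using the $999.95 total from the post; the drive and NAS prices themselves are the post's, not something I've re-checked):

```python
# Cost per GB for the 4 x 4TB NAS build: raw capacity vs RAID-5 usable capacity.
total_usd = 999.95

def usd_per_gb(total, capacity_tb):
    return total / (capacity_tb * 1000)  # treating 1 TB as 1000 GB, as drive makers do

print(f"16 TB raw:          ${usd_per_gb(total_usd, 16):.2f}/GB")
print(f"12 TB after RAID-5: ${usd_per_gb(total_usd, 12):.2f}/GB")
```

Dropping one drive's worth of capacity to parity bumps the effective price from about six to about eight cents per gigabyte.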
 
Joined
Mar 12, 2009
#69
The drives are primarily for large data stores. Two of the drives play media frequently (like video), but most are mainly for storage and are accessed relatively infrequently (maybe a few times per day, but not continuously like media files).
If you have big data stores, then go for a hardware RAID card running RAID 0+1 if you only have a few disks, but if you have more than 4 I would go with RAID 5, or 6 if you can, as you won't lose as much space, thanks to using parity rather than 1:1 replication.
 

newtekie1

#70
You're missing my point that RAID is not for long-term storage; it was designed for performance and uptime, not redundancy and capacity.
Redundant Array of Independent Disks.

You sure RAID wasn't designed for redundancy? Because it's right in the name.

RAID was not designed with performance in mind. It was designed for redundancy originally, and it wasn't designed for uptime either. The only RAID level that puts performance ahead of redundancy is RAID-0, but RAID-0 wasn't one of the original RAID levels; it was added later. Originally, when a drive failed the array had to be taken out of service to replace the drive, and it was out of service the entire time during the rebuild. Uptime wasn't a concern initially; it was all about redundancy. Uptime, capacity, and speed all came later, pretty much in that order.

If you have big data stores, then go for a hardware RAID card running RAID 0+1 if you only have a few disks, but if you have more than 4 I would go with RAID 5, or 6 if you can, as you won't lose as much space, thanks to using parity rather than 1:1 replication.
Personally, with more than 2 drives any level of mirroring is pointless. So with 3 or more drives I'd go with at least RAID-5.

Going to try to steer this back on topic.

While it is a lot of data, it's not quite irreplaceable data. I think RAID 5 is more than enough redundancy.

Is there any reason why no one has suggested a NAS? Pair one with four inexpensive 4TB drives and it comes out to 16TB at $0.06 USD per gigabyte, or 12TB of redundant storage at $0.08 USD per gigabyte, including the NAS. The total cost comes out to $999.95 (free shipping) on the Egg, and probably cheaper elsewhere. To save a bit, 3TB drives can be used for 12TB, or 9TB of redundant storage.
I'd grab one of these before going with a NAS. Though I think the reason most didn't suggest a NAS is that he said he wants to use up to 10 drives, and 10-drive NAS devices are expensive.

But with the external RAID box, you can have up to 10 drives connected to one card with two enclosures, and all 10 drives can be set up in one big RAID 5 or 6 array.
 
Last edited:
Joined
Mar 12, 2009
Messages
1,042 (0.33/day)
Likes
153
Location
SCOTLAND!
System Name Machine XII
Processor Phenom II 1155T
Motherboard Asus M4A88TD-M EVO/USB3
Cooling Custom Water Cooling
Memory 8Gb ddr3 1600mhz XMS 3
Video Card(s) HD5970 2Gb
Storage 128Gb Sandisk SATAIII 6G/s
Display(s) 19" + 23" + 17"
Case micro atx case
Audio Device(s) XFi xtreme OEM
Power Supply 750W
Software Windows 7 x64 Ultimate
#71
Sorry, it probably was not the best choice of words on my part. All I mean is that with no version of RAID (apart from RAID 1) can you recover data from an individual drive. If you lose 2 drives, you cannot recover data from the rest of the drives either, as everything is striped between them.
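A toy example of why that is: RAID-5-style parity is just an XOR across the stripes, so one missing drive can be rebuilt from the survivors, but two missing drives leave one equation with two unknowns. The byte-string "drives" here are purely illustrative:

```python
from functools import reduce

def xor_blocks(blocks):
    """XOR equal-length byte strings together, column by column."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

data = [b"AAAA", b"BBBB", b"CCCC"]   # three data "drives"
parity = xor_blocks(data)            # the parity "drive"

# Lose drive 1: XOR the remaining data drives with parity to rebuild it.
rebuilt = xor_blocks([data[0], data[2], parity])
print(rebuilt == data[1])  # True: a single failure is recoverable

# Lose drives 1 AND 2: xor_blocks([data[0], parity]) only yields
# data[1] ^ data[2] -- the two lost drives mixed together, unrecoverable.
```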


Maximum chance of recovery and lowest power consumption were what I was looking for with my setup; it suits my use of dumping media to it and playing it back. Each 1TB drive can saturate my gigabit home network individually, so I did not need the extra performance that RAID 5 would have provided. So it's protected with a parity drive, and I can use the 1TB cache as a hot spare if I lose a drive: I just flush it, unmount the array, change it from cache to data, and hit rebuild.

If you're doing a business setup and have the cash for 12-port SAS controllers and a proper server, I would recommend doing that instead.
 
Last edited:
Joined
Nov 11, 2012
Messages
619 (0.33/day)
Likes
34
#72
Well, actually, those 4TB drives might be relatively inexpensive, but you can get 2TB for only $75, which is 4TB for $150, so multiple 2TB drives are still the cheapest option.

However, if 4TB drives eventually drop to the price of 2 x 2TB, then it will certainly be better to get 4TB drives.
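Using the prices mentioned here ($75 per 2TB drive, so $150 is the break-even point for a 4TB drive), the per-gigabyte arithmetic is simple:

```python
# Cost-per-GB comparison using the prices quoted in this thread
# (they will date quickly).

def dollars_per_gb(price_usd, capacity_tb):
    return price_usd / (capacity_tb * 1000)   # 1 TB = 1000 GB, decimal

print(f"2TB @ $75  -> ${dollars_per_gb(75, 2):.4f}/GB")    # $0.0375/GB
print(f"4TB @ $150 -> ${dollars_per_gb(150, 4):.4f}/GB")   # same: break-even
```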

Personally, I see no reason to buy more HDDs now, when I still have a couple TB left and HDD prices continue to fall regularly. I figure that by the time I need more HDD space, which might be in a couple to a few months, prices will have dropped even lower.

I don't need to buy a dedicated NAS, because housing more HDD space was one of the main reasons for building my desktop PC.

Hmm, I'd be open to considering an external dedicated RAID box...
 
Last edited:

newtekie1

Semi-Retired Folder
Joined
Nov 22, 2005
Messages
24,274 (5.51/day)
Likes
10,361
Location
Indiana, USA
Processor Intel Core i7 4790K@4.6GHz
Motherboard AsRock Z97 Extreme6
Cooling Corsair H100i
Memory 32GB Corsair DDR3-1866 9-10-9-27
Video Card(s) ASUS GTX960 STRIX @ 1500/1900
Storage 480GB Crucial MX200 + 2TB Seagate Solid State Hybrid Drive with 128GB OCZ Synapse SSD Cache
Display(s) QNIX QX2710 1440p@120Hz
Case Corsair 650D Black
Audio Device(s) Onboard is good enough for me
Power Supply Corsair HX850
Software Windows 10 Pro x64
#73
Hmm, I'd be open to considering an external dedicated RAID box...
That is what I would do, and in this case the RAID box comes with everything you need, including the card. If you buy two, you get a spare card; you can either keep it in case something happens to the first one, or sell it on eBay or something.
 

Geekoid

New Member
Joined
Mar 27, 2013
Messages
77 (0.04/day)
Likes
24
Location
UK
#74
Yes - a RAID box is the way to go. Personally, I use a dual-channel fibre connection and RAID 6 with hot spares. It takes 4 simultaneous disk failures before I'm in trouble. A rebuild takes less than 12 hours, so the other 3 failures would have to happen within that window, and even a second disk failure has never happened during a rebuild yet. Full backups only happen each weekend, as it takes quite a while to copy the whole 16TB. :) As you can tell, I'm all for keeping my data over disk performance.
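For anyone curious how small that rebuild-window risk actually is, here's a back-of-the-envelope sketch. The 3% annual failure rate and the independence of failures are assumptions, not measured figures; correlated failures in real arrays make the true risk higher:

```python
HOURS_PER_YEAR = 8766  # average year, including leap years

def p_any_failure(surviving_drives, annual_failure_rate, window_hours):
    """Chance that at least one surviving drive fails within the window,
    assuming independent failures at a constant annual rate (an assumption)."""
    p_one = 1 - (1 - annual_failure_rate) ** (window_hours / HOURS_PER_YEAR)
    return 1 - (1 - p_one) ** surviving_drives

# e.g. 7 surviving drives during a 12-hour rebuild, 3% AFR (assumed):
print(f"{p_any_failure(7, 0.03, 12):.4%}")   # well under 0.1%
```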
 
Joined
Mar 12, 2009
Messages
1,042 (0.33/day)
Likes
153
Location
SCOTLAND!
System Name Machine XII
Processor Phenom II 1155T
Motherboard Asus M4A88TD-M EVO/USB3
Cooling Custom Water Cooling
Memory 8Gb ddr3 1600mhz XMS 3
Video Card(s) HD5970 2Gb
Storage 128Gb Sandisk SATAIII 6G/s
Display(s) 19" + 23" + 17"
Case micro atx case
Audio Device(s) XFi xtreme OEM
Power Supply 750W
Software Windows 7 x64 Ultimate
#75
I would love to have a little ITX box with an external hard drive rack, but it was not going to be cheap to build.