
Need specifics on RAID 10

Discussion in 'Storage' started by vawrvawerawe, Mar 11, 2013.

  1. newtekie1

    newtekie1 Semi-Retired Folder

    Joined:
    Nov 22, 2005
    Messages:
    20,237 (6.10/day)
    Thanks Received:
    6,275
    Yes, RAID would be worth it for your situation. RAID-10 is only slightly more redundant than RAID-5. With RAID-5 you can have one drive fail and still keep all your data, but a second drive failure and all is lost. With RAID-10 you have two RAID-1 arrays nested in a RAID-0. So it is possible to lose multiple drives and still be fine, but it is also possible that losing two drives will kill all your data. It comes down to which drive dies.

    IMO, for your setup a RAID-5 would be sufficient; there isn't much point in going RAID-10, since the space you lose isn't worth the minor improvement in redundancy. If anything, do a RAID-5 with a hot spare instead.
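    To put numbers on the "which drive dies" point: a quick sketch (my illustration, assuming ten drives arranged as five mirrored pairs, which is the nested RAID-1-in-RAID-0 layout described above) that enumerates every possible two-drive failure and counts how often the array survives.

```python
from itertools import combinations

# 10 drives in RAID-10 = 5 mirrored pairs striped together.
# Drives 0+1 mirror each other, 2+3, and so on.
drives = range(10)
pairs = [(d, d + 1) for d in range(0, 10, 2)]

def survives(failed):
    # The array dies only if BOTH halves of some mirror pair fail.
    return not any(a in failed and b in failed for a, b in pairs)

outcomes = list(combinations(drives, 2))
ok = sum(survives(set(f)) for f in outcomes)
print(f"{ok}/{len(outcomes)} two-drive failures survive")  # 40/45, about 89%
```

    For comparison, any two-drive failure in RAID-5 is fatal, so RAID-10 survives roughly 89% of random two-drive failures here while RAID-5 survives none of them; that is the "slightly more redundant" trade-off in concrete terms.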
     
    vawrvawerawe says thanks.
    Crunching for Team TPU 50 Million points folded for TPU
  2. Aquinus

    Aquinus Resident Wat-man

    Joined:
    Jan 28, 2012
    Messages:
    6,877 (6.48/day)
    Thanks Received:
    2,470
    Location:
    Concord, NH
    You can get an LSI 4-port card for 320 USD and an 8-port for a bit less than double that, not including the BBU. RAID cards aren't as expensive as they used to be. That's not to say they're cheap, but any RAID with 10 drives is going to be costly no matter how you do it, and with that many drives that big I'm super skeptical about software RAID.
    Or RAID-6 if the device supports it; that way you would need to lose three drives to lose data, as opposed to two. (RAID-5 has one drive's worth of active fault tolerance and RAID-6 has two, for those who may not have known.)
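    The space side of that trade-off is easy to sketch (my illustrative helper, assuming equal-sized drives and the textbook definitions of each level):

```python
def usable_tb(level, n_drives, drive_tb):
    """Rough usable capacity for common RAID levels with equal-size drives."""
    if level == "raid5":    # one drive's worth of parity
        return (n_drives - 1) * drive_tb
    if level == "raid6":    # two drives' worth of parity
        return (n_drives - 2) * drive_tb
    if level == "raid10":   # everything mirrored once
        return n_drives // 2 * drive_tb
    raise ValueError(f"unknown level: {level}")

for level in ("raid5", "raid6", "raid10"):
    print(level, usable_tb(level, 10, 2), "TB usable from 10 x 2TB")
# raid5: 18, raid6: 16, raid10: 10
```

    So with ten 2TB drives, RAID-6 only costs you one more drive of capacity than RAID-5, while RAID-10 costs you four more.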
     
    vawrvawerawe says thanks.
  3. Geofrancis

    Geofrancis

    Joined:
    Mar 12, 2009
    Messages:
    1,042 (0.49/day)
    Thanks Received:
    153
    Location:
    SCOTLAND!
    Is the storage for manipulating large files or large data stores or is it for media storage and playback?

    If you're using large files, like for virtual machines or a massive database, then go RAID on a very expensive controller; this is what it was designed for: redundancy and performance.

    If it's just for media storage and playback, then use unRAID or FlexRAID.

    You can add drives without breaking the array

    The array only powers up the drive you want the data from, so you don't need 10 drives spinning just to serve a single file.

    If you lose 2 drives, all your data on the rest of the drives is fine, but with most RAID it's totally unrecoverable.

    Sure, the performance is not as fast as RAID on a very expensive controller, but if it's going over gigabit that's not an issue.

    If you spend $$$ on an 8-port SATA card for RAID and then decide you want more than 8 drives, you'd better hope you can find an identical card to expand your array with. With unRAID I have mixed controllers from SIL, JMB, VIA, AMD, and LSI.

    I have run unRAID with 10 drives for over a year, adding a drive every few months. It rebuilds the parity in around 24 hours and the array is accessible during this time.
     
    vawrvawerawe says thanks.
  4. Aquinus

    Aquinus Resident Wat-man

    Joined:
    Jan 28, 2012
    Messages:
    6,877 (6.48/day)
    Thanks Received:
    2,470
    Location:
    Concord, NH
    I just read up on unRAID and FlexRAID and it's all file-level redundancy. It's by no means RAID, and I think even for pictures, video, and music I would be skeptical of running software that lets you schedule when the parity gets calculated. That alone is dangerous and is keeping me far from it. In all seriousness, if you're going to run a RAID, use something that at least does hardware-level striping rather than file-level redundancy, because your performance is going to be dismal and the integrity of your data upon being written cannot be guaranteed if parity info isn't written at the same time.

    I would say that if you're going to run Windows then use Windows software RAID, and if you're going to use some distro of Linux I would use Linux software RAID, assuming 10 disks is going to be your end-all. Anything not done at the driver level is concerning, and redundancy that doesn't need to be calculated when the data is written is scary as hell; I would trust none of my data to a system that allows that insanity.
     
  5. Geofrancis

    Geofrancis

    Joined:
    Mar 12, 2009
    Messages:
    1,042 (0.49/day)
    Thanks Received:
    153
    Location:
    SCOTLAND!


    It's not file-level redundancy; unRAID calculates the parity in real time just like a hardware RAID controller, but it does not stripe the data across the drives, so the read performance is the same as a single drive, like a parity-protected JBOD array. This means in a worst case where I lose 2 data disks, or a parity and a data disk, I can remove the rest of my drives, plug them into any Linux box, and just copy the files from them.

    I know I'm limited to 40MB/s writing directly to the array, but that's why I have the cache drive to dump files to. And yes, I know it's not protected while it's on that cache drive, but that's why I write my backups direct to the array.

    FlexRAID works the same, but you can have it with network drives or USB or hard drives or any data store you want.

    It has 2 options: real-time parity just like normal RAID, or scheduled parity, so you can restore to the last parity run.

    You're missing my point that RAID is not for long-term storage; it was designed for performance and uptime, not redundancy and capacity. Do you need to read and write your movies at 200MB/s when you are limited to the ~100MB/s of gigabit Ethernet?

    Think of it more like a NAS than a server.
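    The single-parity scheme that both unRAID and RAID-5 rely on is plain XOR across the drives. A toy sketch (my illustration, with three tiny "disks" of one block each) of losing a data disk and rebuilding it from the survivors plus parity:

```python
from functools import reduce

# Three "data disks", each holding one 4-byte block.
disks = [b"AAAA", b"BBBB", b"CCCC"]

def xor_blocks(blocks):
    # XOR corresponding bytes of every block together.
    return bytes(reduce(lambda x, y: x ^ y, col) for col in zip(*blocks))

parity = xor_blocks(disks)

# Disk 1 dies; XOR the remaining data disks with the parity to rebuild it.
survivors = [disks[0], disks[2], parity]
rebuilt = xor_blocks(survivors)
assert rebuilt == b"BBBB"
print("rebuilt disk 1:", rebuilt)
```

    This is why either scheme tolerates exactly one failed drive: one equation, one unknown. The difference debated above is only in how the data itself is laid out (whole files per drive vs. striped) and when the parity is computed.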
     
  6. Aquinus

    Aquinus Resident Wat-man

    Joined:
    Jan 28, 2012
    Messages:
    6,877 (6.48/day)
    Thanks Received:
    2,470
    Location:
    Concord, NH
    Actually, RAID was designed for long-term reliable storage. There is no point in running a RAID if it's not going to last, because downtime is going to cost you, and uptime is most likely the reason you're running RAID in the first place. Not to mention uptime is directly related to the reliability of your storage devices.

    Even running software RAID offers a lot of the protection of hardware RAID, and is usually just as reliable with a performance penalty, though nowhere near as steep as 40MB/s. I don't know about you, but when I go to back up my RAID (even unRAID isn't a replacement for offsite backup), I get pretty annoyed at how slow my USB 2.0 external drive is; running it on a USB 3.0 port in Turbo mode, I get about 42MB/s and it sucks. I don't care what kind of data I have, I don't want those kinds of speeds locally.

    Just when you thought a WD Caviar Green on SATA was slow. :p

    So yeah, for strictly storage I'm sure it's fast enough for regular usage, like watching a video, but if you're going to copy an HD video to it or do any transcoding, you're going to wish you had a real RAID that writes faster. So if one person is using it strictly for storage, and occasionally watching an HD video, it might be enough, but I wouldn't try doing anything with it while you're backing up, syncing, or moving any data to or from it. 40MB/s is fairly easy to saturate; I understand you have a caching drive for writes, but not all data can be handled that way.
     
  7. Geofrancis

    Geofrancis

    Joined:
    Mar 12, 2009
    Messages:
    1,042 (0.49/day)
    Thanks Received:
    153
    Location:
    SCOTLAND!
    Yeah, 40MB/s is not the fastest, but the 1TB cache is more than adequate for dumping large amounts of data to the server. It's set up to be transparent, so you just use the array as normal and don't notice whether data is on the cache or the array. I have it set up to write out the cache once a day, so it's never unprotected for long.

    The only time I write direct to the array is for backing up my pc and laptop but I know this is no substitute for offsite backup.

    The reasons I went with unRAID over hardware RAID or a Windows Home Server array:

    1. Totally hardware independent: I just move my flash drive and hard drives to any PC and it boots and works.

    2. Drives power up independently when data is accessed, so you don't have to power up 10 drives for a single file. It saves about 50W when idle.

    3. I can add drives without having to back up and rebuild the array.

    If you're just a home user wanting a large storage pool with enough redundancy to survive a single drive failure, then it's ideal.
     
    Last edited: Mar 18, 2013
  8. DanTheBanjoman Señor Moderator

    Joined:
    May 20, 2004
    Messages:
    10,553 (2.73/day)
    Thanks Received:
    1,383
    Why is RAID 10 the most suitable? I'd go for RAID 6 instead. Similar redundancy and more space. Performance shouldn't be a huge issue with the right hardware.

    Also, apart from the controller and drives, you need to think about housing. Putting 10 drives in a normal case without decent drive bays means you get messy cabling, and replacing a drive isn't as straightforward.

    I personally use an Axus YI-16SAEU4; you should be able to find similar hardware around the price of a decent hardware RAID controller. You'll have a reliable device with all the features you could wish for, though.
    A quick look on eBay gives me this, for example.
     
  9. Aquinus

    Aquinus Resident Wat-man

    Joined:
    Jan 28, 2012
    Messages:
    6,877 (6.48/day)
    Thanks Received:
    2,470
    Location:
    Concord, NH
    So is any other software RAID solution, such as Windows and Linux software RAID.
    It's also why your disk access speeds are incredibly slow. Also, spinning up and spinning down hard drives on a regular basis puts more stress on the motor than keeping it running. Granted, I don't know how often your drives wake up and go to sleep.

    Got me there, but with RAID you don't need to back up to add a drive (even though you should). In defense of this, my RAID can be degraded and rebuilding and it will still perform better than unRAID; that's the point. :)
    I agree, but software RAID (real striping RAID) can offer better performance and equal if not better reliability.
     
  10. Geofrancis

    Geofrancis

    Joined:
    Mar 12, 2009
    Messages:
    1,042 (0.49/day)
    Thanks Received:
    153
    Location:
    SCOTLAND!
    Let me clarify: I don't have to install unRAID; it runs live from the flash drive, and all it ever writes is the config file when you set it up. So I don't have to set anything up when I migrate hardware; I just plug and play.

    They are set to power down after 1 hour, so for most of the day they are off. I have the filenames buffered so it doesn't have to power on drives when I'm browsing for something; only if I open a file will it power on the drive. So most of my drives are off apart from an hour or 2 a day when a few might come on. The cache drive also keeps drives powered down by buffering all writes to the array and moving everything at once.

    That only works as long as your controller supports online capacity expansion, and no software RAID does.

    Here is my problem with striping your storage array: if you lose too many drives, you lose everything, with zero chance of getting anything from the remaining drives.

    I can take any drive from my array and plug it into any Linux computer and copy the files from it, even the cache. So I can recover my data far more easily than with any form of striped RAID setup.
     
    Last edited: Mar 17, 2013
  11. DanTheBanjoman Señor Moderator

    Joined:
    May 20, 2004
    Messages:
    10,553 (2.73/day)
    Thanks Received:
    1,383
    mdadm does, as did the RAIDCore stack. As for the latter, mentioning it made me wonder what happened to it; it seems it's offered now under the name assuredVRA. I should play with it.
     
    Aquinus says thanks.
  12. Aquinus

    Aquinus Resident Wat-man

    Joined:
    Jan 28, 2012
    Messages:
    6,877 (6.48/day)
    Thanks Received:
    2,470
    Location:
    Concord, NH
    mdadm also stores RAID information on the RAID drives rather than in a config. So if you move the drives to another *nix machine you can migrate your RAID. It's also pretty quick for software RAID too.

    I hope you don't migrate 10 drives to different hardware often. :twitch:
    Software is like hardware, though: like works with like. If you're running a RAID off a particular LSI card, any LSI card like it will take it in. Same with Intel's RST, mdadm, you name it. It's when you change the controller or software you're using that you run into trouble. So if you didn't want to use unRAID anymore, or if the project died and stopped getting updates, it wouldn't be easy to migrate to another system. Just like any other method of redundancy.
     
  13. Geofrancis

    Geofrancis

    Joined:
    Mar 12, 2009
    Messages:
    1,042 (0.49/day)
    Thanks Received:
    153
    Location:
    SCOTLAND!
    You're missing the point I am trying to make: I can take any single drive from my array and copy everything off it just by copying and pasting while it's plugged into any Linux computer.

    I don't need the unRAID OS to recover files, as each file is in one piece on one drive; they're just spread over the drives. So if I pulled one of my hard drives and looked at the folder structure, it would look just like the array on one disk, but the folders would only contain the files designated to that particular drive.

    So when I rebuild the array after adding a disk, I am not really rebuilding the array but the parity disk, so even if I pushed the reset button halfway through a rebuild, it would just reboot and start rebuilding the parity again with no loss.

    I don't know of any RAID controller that can survive a total power loss during the online capacity expansion process.
     
  14. Easy Rhino

    Easy Rhino Linux Advocate

    Joined:
    Nov 13, 2006
    Messages:
    13,541 (4.57/day)
    Thanks Received:
    3,381
    Why would you hit the reset button? If you are hitting the reset button on an enterprise server running RAID, then you should be fired.
     
    MxPhenom 216, Aquinus and brandonwh64 say thanks.
    Crunching for Team TPU
  15. Geofrancis

    Geofrancis

    Joined:
    Mar 12, 2009
    Messages:
    1,042 (0.49/day)
    Thanks Received:
    153
    Location:
    SCOTLAND!
    Obviously I wouldn't do that; my point is that my data would be fine if something happened during the process, like a power cut.
     
  16. Easy Rhino

    Easy Rhino Linux Advocate

    Joined:
    Nov 13, 2006
    Messages:
    13,541 (4.57/day)
    Thanks Received:
    3,381
    OK. But that seems like a pointless thing to say.
     
    Aquinus says thanks.
    Crunching for Team TPU
  17. vawrvawerawe

    Joined:
    Nov 11, 2012
    Messages:
    581 (0.75/day)
    Thanks Received:
    33
    The drives are primarily for large data stores. Two of the drives play media frequently (like video) but most are mainly for storage and accessed relatively infrequently (maybe a few times per day but not continuously like media files).
     
  18. xvi

    xvi

    Joined:
    Nov 10, 2006
    Messages:
    2,252 (0.76/day)
    Thanks Received:
    1,580
    Location:
    Washington, US
    Going to try to steer this back on topic.

    While it is a lot of data, it's not quite irreplaceable data. I think RAID 5 is more than enough redundancy.

    Is there any reason why no one has suggested a NAS? Pair one with four inexpensive 4TB drives and it comes out to 16TB at $0.06 USD per gigabyte, or 12TB of redundant storage at $0.08 USD per gigabyte, including the NAS. The total cost comes out to $999.95 (free shipping) on the Egg, and probably cheaper elsewhere. To save a bit, 3TB drives can be used for 12TB, or 9TB of redundant storage.
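    Those per-gigabyte figures check out against the thread's own numbers (the $999.95 total and the four-drive RAID-5 layout are taken from the post above):

```python
total_usd = 999.95        # NAS plus four 4TB drives, per the post above
raw_gb = 4 * 4000         # four 4TB drives, raw
redundant_gb = 3 * 4000   # RAID-5 leaves three drives' worth usable

print(f"raw:       ${total_usd / raw_gb:.2f}/GB")        # about $0.06/GB
print(f"redundant: ${total_usd / redundant_gb:.2f}/GB")  # about $0.08/GB
```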
     
    Crunching for Team TPU
  19. Geofrancis

    Geofrancis

    Joined:
    Mar 12, 2009
    Messages:
    1,042 (0.49/day)
    Thanks Received:
    153
    Location:
    SCOTLAND!
    If you have big data stores, then go for a hardware RAID card running RAID 0+1 if you only have a few disks; but if you have more than 4, I would go with RAID-5, or 6 if you can, as you won't lose as much space, due to using parity rather than 1:1 replication.
     
  20. newtekie1

    newtekie1 Semi-Retired Folder

    Joined:
    Nov 22, 2005
    Messages:
    20,237 (6.10/day)
    Thanks Received:
    6,275
    Redundant Array of Independent Disks.

    You sure RAID wasn't designed for redundancy? Because it's right in the name.

    RAID was not designed with performance in mind. It was designed for redundancy originally. And it wasn't designed for uptime either. The only version of RAID that puts performance ahead of redundancy is RAID-0, but RAID-0 wasn't one of the original RAID levels, it was added later. And originally, when a drive failed the array had to be taken out of service to replace the drive, and was out of service the entire time during the rebuild. Uptime wasn't a concern either initially, it was all about redundancy. Uptime, capacity, and speed all came later, pretty much in that order.

    Personally, I think if you have more than 2 drives, any level of mirroring is pointless. So I'd say with 3 or more drives, go with RAID-5 at least.

    I'd grab one of these before going with a NAS. Though I think the reason most didn't suggest a NAS is because he said he wants to use up to 10 drives, and 10 drive NAS devices are expensive.

    But with the external RAID box, you can have up to 10 drives connected to one card, with two enclosures. And all 10 drives can be set up in one big RAID5 or 6 array.
     
    Last edited: Mar 18, 2013
    vawrvawerawe and Geofrancis say thanks.
    Crunching for Team TPU 50 Million points folded for TPU
  21. Geofrancis

    Geofrancis

    Joined:
    Mar 12, 2009
    Messages:
    1,042 (0.49/day)
    Thanks Received:
    153
    Location:
    SCOTLAND!
    Sorry, it probably was not the best choice of words on my part. All I mean is that with no version of RAID (apart from RAID-1) can you recover data from an individual drive. If you lose 2 drives, you cannot recover data from the rest of the drives either, as everything is striped between them.


    Maximum chance of recovery and lowest power consumption were what I was looking for with my setup; it suits my use of dumping media to it and playing it back. Each 1TB drive can saturate my gigabit home network individually, so I did not need the extra performance that RAID-5 provided. So it's protected with a parity drive, and I can use the 1TB cache as a hot spare if I lose a drive: I just flush it, unmount the array, change it from cache to data, and hit rebuild.

    If you're doing a business setup and have the cash for 12-port SAS controllers and a proper server, I would recommend doing that instead.
     
    Last edited: Mar 18, 2013
  22. vawrvawerawe

    Joined:
    Nov 11, 2012
    Messages:
    581 (0.75/day)
    Thanks Received:
    33
    Well, actually, those 4TB drives might be relatively inexpensive, but you can get 2TB for only $75, which is 4TB for $150, so multiple 2TB drives are still the cheapest option.

    However, if 4TB eventually lower to the price of 2 x 2TB then certainly it will be better to get 4TB drives.

    Personally I see no reason to buy more HDD now, when I still have a couple TB left and HDD prices continue to lower regularly. I figure, by the time I need more HDD space, which might be a couple to a few months, HDD prices will have gone even lower by then.

    I don't need to buy a dedicated NAS because that's one of the main reasons for building my desktop PC - to house more HDD space.

    Hmm, I'd be open to considering an external dedicated RAID box...
     
    Last edited: Mar 29, 2013
  23. newtekie1

    newtekie1 Semi-Retired Folder

    Joined:
    Nov 22, 2005
    Messages:
    20,237 (6.10/day)
    Thanks Received:
    6,275
    That is what I would do, and in this case the RAID box comes with everything you need, including the card. And if you buy two you get a spare card, you can either keep it just to have a spare in case something happens to the first one, or sell it on ebay or something.
     
    Crunching for Team TPU 50 Million points folded for TPU
  24. Geekoid New Member

    Joined:
    Mar 27, 2013
    Messages:
    77 (0.12/day)
    Thanks Received:
    24
    Location:
    UK
    Yes, a RAID box is the way to go. Personally, I use a dual-channel fibre connection and run RAID-6 with hot spares. It takes 4 simultaneous disk failures before I'm in trouble. A rebuild takes less than 12 hours, so I'd have to have the other 3 failures within that time, and even a second disk failure has never yet happened during the rebuild window. Full backups only happen each weekend, as it takes quite a while to copy the whole 16TB :) As you can tell, I'm all for keeping my data over disk performance.
     
  25. Geofrancis

    Geofrancis

    Joined:
    Mar 12, 2009
    Messages:
    1,042 (0.49/day)
    Thanks Received:
    153
    Location:
    SCOTLAND!
    I would love to have a little itx box with an external hard drive rack but it was not going to be cheap to build.
     
