
A raidy question.... What would you use??

To Raid or not to Raid that is the question... But which one??


Well you definitely don't want raid5. With drives over a certain size you run a serious risk of having a URE, which is... not good. Raid6 can at least survive one URE and keep chugging.
 
It kinda sounds like my single RAID 1s, however inefficient they might be, are the better way to go: unless both drives in a pair die, I'll still have a working array with one drive down. If something mad happens and two drives go, I'd have to be massively unlucky for them to be part of the same volume..

That said, in fairness, like most of the setups we've mentioned, any RAID can only protect against so much. RAID 6 might help against 2 drives failing, but for that I'd possibly prefer to have 6 working drives and then 2 sat in a spare capacity just in case. Thankfully, with the WD Reds I've had for going on 4 years, I don't believe any of them have hit 20,000 hours and there are little to no signs of any performance degradation. If I could, I'd build another Synology system and have everything copied over to it, then make it my main NAS and turn the original one I have now into a purely backup box, throwing all the drives in for storage so I could maximise the capacity, or failing that, just buy enough drives to have another RAID 5 array in it. Either way, it's going to be expensive just to keep my data safe, but it's what I personally feel I need to do.

I know @blindfitter has a very similar setup and I believe he uses RAID 5 with his, just with smaller drives.
Having said that, RAID 5 might be fine as they'd be brand new drives and hopefully the chances of them failing would be very minimal... I'd like to hope and think!!

RAID 1+0 or RAID 6 seem like the best options. We always set our SANs up with RAID 10 with hot spares, but the SAN has like 48 disks in it and it's tuned for maximum IOPS. Not sure if it would be economical with 6 drives, but the performance is great. And if a drive fails, there is still likely a working mirror drive in the striped set to rebuild the hot spare from.
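If it helps picture the economics with 6 drives, the usable-space math works out like this (a quick sketch; six 12TB drives is just an example figure from this thread):

```python
# Usable capacity by RAID level for n identical drives of size s (TB).
# Example figures only: six 12TB drives, as discussed in this thread.
n, s = 6, 12

layouts = {
    "RAID 0":  n * s,         # striping only, no redundancy
    "RAID 1":  (n // 2) * s,  # three independent mirrored pairs
    "RAID 10": (n // 2) * s,  # striped mirrors, same usable space
    "RAID 5":  (n - 1) * s,   # one drive's worth of parity
    "RAID 6":  (n - 2) * s,   # two drives' worth of parity
}

for name, usable in layouts.items():
    print(f"{name:>7}: {usable} TB usable of {n * s} TB raw")
```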

We have something similar at work as well, but it's covered by two SANs. Massively overkill, but it's worked fine for 5/6 years straight :) I think since I've started in IT, there have been one or two of these drives fail, but for being constantly on and never turned off, I can't say that's bad going, especially with all the work they are doing as well :) My NAS box isn't nearly as busy lol Not even close!! :)
 
You could avoid RAID altogether and use each drive individually and have your backup software write to multiple drives.
 
Having said that, RAID 5 might be fine as they'd be brand new drives and hopefully the chances of them failing would be very minimal... I'd like to hope and think!!

RAID is first and foremost useful as a "not a backup" redundancy tool. If you go put very large drives in a RAID 5 just to hope one doesn't fail (because if one does, chances are high it's going to be bad) you are better off not using RAID at all. I recommend you use RAID 5 as much as I recommend you take swift kicks to the nether regions for $1 each. A URE during a rebuild, which is more and more likely to happen the larger your drives are and the more of them you have, destroys the entire array. And the whole point of RAID is to be able to save your data should a drive fail. You're better off using no RAID at all than using RAID 5, at this point.

RAID 6 can handle two failures. A likely scenario would be losing a disc because it just plain failed, and encountering a URE during the rebuild process, and your data is still okay. However, if you get two UREs, good night sweet prince.

In fact, when dealing with 12TB drives, I'm not sure I'd feel good about RAID 6, either. How much would it suck to get two or more UREs during a rebuild, and lose all that data? With drives this large, it's really best to just stick with good old RAID 1, or 10, if you got that many drives.
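For what it's worth, the back-of-the-envelope maths behind these worries looks like the sketch below. It assumes the 1-per-10^14-bits URE spec quoted for many consumer drives and treats every bit as an independent read, which is the same naive model the article quoted later in the thread criticises, so read the percentages as a worst case rather than a prediction:

```python
# Naive odds of hitting at least one URE while rebuilding an array,
# i.e. reading every remaining bit once. Assumes the commonly quoted
# consumer spec of 1 unrecoverable read error per 1e14 bits, with
# errors independent and uniform -- a naive, worst-case model.
URE_RATE = 1e-14      # errors per bit read (typical consumer drive spec)
BITS_PER_TB = 8e12    # 1 TB = 8e12 bits (decimal TB)

def p_ure(tb_read: float) -> float:
    """Probability of >= 1 URE while reading tb_read terabytes."""
    bits = tb_read * BITS_PER_TB
    return 1 - (1 - URE_RATE) ** bits

# RAID 5 rebuild with four 12TB drives: read the 3 surviving drives.
print(f"RAID 5, 3x12TB to read: {p_ure(3 * 12):.0%} chance of a URE")
# Same rebuild with 4TB drives:
print(f"RAID 5, 3x4TB to read:  {p_ure(3 * 4):.0%} chance of a URE")
```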

RAID 1 doesn't care about UREs. If you lose a disc and replace it and remirror, a URE can happen, and it might make the specific file that sector was on unusable (maybe), but at least your entire array won't be rekt.
 
As fun as getting kicks in the nether regions for $1/£1 sounds, I think as @qubit mentioned before, whilst RAID 1 might not be the most efficient or highest-performance option, I believe, as you've pointed out, it might be the safest.

It would just be my luck that I'd set up RAID 5 and then something would happen, or I'd get a dodgy batch of drives and boom, like you say, it's all dead, Dave....

I'm unsure if Synology supports RAID 10, but I will do some digging on it :) Right now, I'm just doing an extended SMART scan on each drive, just to make sure everything is alright with the drives I currently have; the last thing I want is to lose all of that data before I even get a chance to back it up. The films and such I'm not worried about, but all my daughter's photos, house pictures/documents and so on are very important to me, so I want to make sure it's all completely and fully backed up :)
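(If anyone wants to script those checks instead of clicking through DSM, something like this works on a Linux box with smartmontools installed; the device paths are placeholders for your own drives:)

```python
# Kick off a long (extended) SMART self-test on each drive and report
# overall health. Requires smartmontools ('smartctl') and root.
# Device paths below are placeholders -- list your own drives.
import subprocess

DRIVES = ["/dev/sda", "/dev/sdb", "/dev/sdc", "/dev/sdd"]

for dev in DRIVES:
    # Start the extended self-test; it runs on the drive itself in the
    # background and can take several hours on a large disk.
    subprocess.run(["smartctl", "-t", "long", dev], check=True)

# Later, once the tests have had time to finish:
for dev in DRIVES:
    result = subprocess.run(["smartctl", "-H", dev],
                            capture_output=True, text=True)
    # Last line of 'smartctl -H' output is the PASSED/FAILED verdict.
    print(dev, "->", result.stdout.strip().splitlines()[-1])
```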

I do need to do another backup very soon, so I think when my daughter has to go back to her Mum's, I'll be making sure everything is backed up, and then with some luck I'll be able to buy this KillDisk program and make sure all of the spare/server drives I have are also OK and properly wiped before I put them into a server setup :)
 
If you can't do RAID 10, RAID 1 is a fair alternative. You'll get a boost on read speed, but no write speed boost. More importantly, your data is mirrored either way.
 
RAID is first and foremost useful as a "not a backup" redundancy tool. [...] You're better off using no RAID at all than using RAID 5, at this point.

I don't agree with this one bit. A URE is bad for sure, but I've had them on RAID5 rebuilds, and it definitely doesn't result in a "lose all your data" scenario. Modern controllers, even the cheap HighPoint ones I tend to use, continue to rebuild after a URE. The data related to that URE is just corrupt, the same as it would be if the URE happened on a RAID1 or on a single disk that can't read a sector. Most controllers even have an option to enable or disable continuing the rebuild after an error.

Also, the array should be verified at least every 6 months, and ideally every 3, to catch drives with bad sectors early, so there is less chance of a surprise when you go to do a rebuild.
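On plain Linux md RAID (which, as I understand it, is also what sits underneath Synology's DSM), that verify pass can be kicked off through sysfs. A rough sketch, assuming the array is md0 and you're running as root:

```python
# Trigger a verify ("check") pass on a Linux md array and report the
# mismatch count when it finishes. The array name is an assumption.
import time
from pathlib import Path

ARRAY = "md0"  # placeholder: your md array
SYS = Path(f"/sys/block/{ARRAY}/md")

# Writing "check" makes md read and verify every stripe in the background.
(SYS / "sync_action").write_text("check")

# Wait for the background check to return to "idle".
while (SYS / "sync_action").read_text().strip() != "idle":
    time.sleep(60)

# mismatch_cnt is the number of inconsistent sectors found; 0 is the goal.
print(f"{ARRAY} mismatches:", (SYS / "mismatch_cnt").read_text().strip())
```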

The corrupt data, and possibility of complete array loss, is specifically why we have backups. Any data you put anywhere should be recoverable from another source if the original source either completely fails, or the data becomes partially corrupt.

RAID5 is a hell of a lot better than no RAID at all, to suggest otherwise just doesn't make any sense. At least with RAID5, a single drive failure results in no data loss. The alternative would always result in data being lost if a drive fails.
 
RAID is first and foremost useful as a "not a backup" redundancy tool.
I would like to reiterate this. I've lost my RAID-5 before due to two disk failures, one shortly after the other, during the rebuild for the first failure. Whatever the OP decides to do, I'd make damn sure that there is a DR strategy in place. There is a reason why I have 4TB and 8TB external drives in addition to my RAID.
 
I've lost my RAID-5 before due to two disk failures, one shortly after the other, during the rebuild for the first failure. [...]

Yes, it can't be said enough that RAID is not a backup. It's why every RAID array that I have is backed up nightly to an identically sized backup. Besides the chance of the entire array failing, there is also just human error. Like Shift+Deleting your entire media folder containing movies, TV series, and music by accident... yep, I did that once.
 
I don't agree with this one bit. A URE is bad for sure, but I've had them on RAID5 rebuilds, and it definitely doesn't result in a "lose all your data" scenario. [...]
Your experience shows differently than what I was reading, then:

http://raidtips.com/raid5-ure.aspx

So I guess it's down to the RAID controller? RAID 5 is certainly attractive from a cost and reliability standpoint then, if you don't have to worry about a URE destroying everything. If I were OP, I might just consider RAID 5 after making sure the controller can handle a URE without exploding.
 
Your experience shows differently than what I was reading, then:

http://raidtips.com/raid5-ure.aspx

[...]

It talks about the speculation that, chances are, you can't even read the entire array without a URE right after you put the data on it. But that's just totally bogus. UREs are extremely rare, as the article points out, and to believe that you can't read the data right after writing it would be to assume that these hard drives are so unreliable they can't even do what they are designed to do, which is store data.

Even then, the article itself says it isn't true that a single URE kills the whole array. That may have been the case on some very old controllers, but it certainly isn't the case on any modern controller I've experienced.

These calculations are based on somewhat naive assumptions, making the problem look worse than it actually is. The silent assumptions behind these calculations are that:
  • read errors are distributed uniformly over hard drives and over time,
  • the single read error during the rebuild kills the entire array.
Both of these are not true, making the result useless.
 
Hmm, I specifically remember seeing that somewhere back when I was thinking about raid myself... maybe I was reading old information or parroted misinformation.
 
Still running FreeNAS with five 6TB drives in RAID6. It's been going for a year now with no issues, 24/7. For me, if you are going to RAID, do not use a motherboard controller. Use a controller card or a decent software solution like FreeNAS or another ZFS file system, or even unRAID, which uses a parity-protected array but isn't RAID.
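(If you go the ZFS route, the RAID6 equivalent is RAID-Z2. A minimal sketch of creating one; the pool name and disk paths are placeholders:)

```python
# Create a ZFS pool with double parity (RAID-Z2, the RAID6 equivalent).
# Pool and device names are placeholders; requires ZFS tools and root.
import subprocess

subprocess.run([
    "zpool", "create", "tank",   # "tank" is a placeholder pool name
    "raidz2",                    # two disks' worth of parity, like RAID6
    "/dev/sda", "/dev/sdb", "/dev/sdc", "/dev/sdd", "/dev/sde",
], check=True)

# Confirm the pool layout and health.
subprocess.run(["zpool", "status", "tank"], check=True)
```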
Still running FreeNAS with five 6TB drives in RAID6. [...]

We gotta get you a longer board so we can fit a 10Gb network card in it. But this comment's off topic, so don't pay attention. (For real though, do it)
 
I had been looking and wanted a board with ECC as well, but it works so well I just forgot.
 
For me, if you are going to RAID, do not use a motherboard controller. Use a controller card or a decent software solution like FreeNAS or another ZFS file system.
I've used RSTe on my X79 board since I bought it almost 7 years ago and I've had no issues with it. I respect your tooling, but a lot of on-board RAID devices work alright. With that said, I've used AMD, nVidia, and Intel chipset RAIDs and Intel is definitely the best. nVidia actually wasn't too bad which is what I was using prior to my current Intel board, but AMD was brutal (I ended up just using mdadm.)

So with that said, it depends on the onboard controller. I think RSTe is probably a better one since that's what they use on server boards. I don't have any experience with plain ol' RST, so I can't speak to that.
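(Since mdadm came up: building the kind of mirror being discussed is basically a one-liner. A sketch with placeholder device names; mdadm --create wipes the disks you hand it, so double-check them:)

```python
# Build a two-disk mdadm RAID 1 mirror (Linux software RAID).
# Device names are placeholders and this is destructive; run as root.
import subprocess

subprocess.run([
    "mdadm", "--create", "/dev/md0",
    "--level=1",           # RAID 1 (mirror)
    "--raid-devices=2",
    "/dev/sdb", "/dev/sdc",
], check=True)

# Watch the initial resync progress.
print(open("/proc/mdstat").read())
```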
I had been looking and wanted a board with ECC as well, but it works so well I just forgot.
If you use ZFS, you'll really be wanting ECC memory too.
 
Well, I'm only using a cheaper Z97 ASRock board; it works and has been fine since it went in, coming up on 2 years ago now :) The WD Reds I have are 4 years old now, so whilst I'm not so worried about them as such, I would like to put something newer in there..

Here's the setup at the moment, just something that was set up and literally forgotten, same as @Jetster's :)

[Screenshots: NAS disk stats and Synology HD setup]


I do have some server PERC cards, so I might have a look into using them if it's going to be worth it. Firstly I want to get the data off the drives, and then do a load of testing with the new drives I eventually get; it's going to be a little while, as at £400 a drive they aren't cheap, and getting 4 or more will cost a pretty penny. I have two Ryzen setups I need to get done before I buy drives! :)

Here's a pic of the Synology setup...

[Photos of the Synology build]

Currently running DSM 6.1.7, which has been amazingly stable :) I couldn't be happier with it. So on we go with, I think, either a newer build or just replacing the drives.
For the moment throughput isn't a massive concern because of the 1Gb network around my home; until I move and upgrade to a 10Gb network, I won't worry too much about drive performance :)
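The rough arithmetic behind that (typical figures, not measurements):

```python
# Why drive speed barely matters on a 1Gb LAN: the wire is the bottleneck.
# All numbers are rough, typical figures.
link_gbps = 1.0                    # gigabit Ethernet
wire_mbs = link_gbps * 1000 / 8    # 125 MB/s raw
usable_mbs = wire_mbs * 0.9        # minus roughly 10% protocol overhead
hdd_seq_mbs = 180                  # typical modern 3.5" NAS drive, sequential

print(f"1GbE usable: ~{usable_mbs:.0f} MB/s")
print(f"Single HDD:  ~{hdd_seq_mbs} MB/s sequential, so the network is the limit")
```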

Thanks for everyone's replies, I didn't think it would be such a hot topic :D
 
If you are staying with the DSM, I would actually recommend against using a dedicated controller card, and instead plug all the drives into the motherboard. DSM, from what I've read, has a hard time seeing drives plugged into anything other than the Intel SATA ports. And if you are setting up the drives in RAID under the DSM software, then you'll be using software RAID anyway, so there really isn't a point in a hardware RAID controller.
 
Did this turn into an Xpenology thread?!

+1 for RAID 1. I'm all for more capacity, but long RAID 5/6 rebuild times on large-capacity drives are not a good thing; you are tempting fate, I think. If the hard drives are of a similar age/batch, it's not unheard of for the rebuild to kill another drive, because it puts a massive strain on the array, and then you've lost everything (if you have no backup!). I would say keep it simple: mirror, and add new hard drives when you need them, like you are doing now. Just my two cents (and yes, I have more than 1 old RAID 5 array :rolleyes:).
 
If you are staying with the DSM, I would actually recommend against using a dedicated controller card, and instead plug all the drives into the motherboard. [...]

I've had no need to even think of changing it, really. I suppose the only other thing I could try would be a very basic Windows Server, or even better a Linux server, as that would give me much greater hardware support (you'd like to think so with Windows anyway) and make the choice of RAID cards etc. massive :) I've not tried to make my NAS/home server a complicated thing; I thought the simpler it is, the better it would be, to be honest?

Software RAID seems to be perfectly fine for this setup for the moment; does anyone else here use it?

Did this turn into an Xpenology thread?!

+1 for RAID 1. [...]

It always was an Xpenology thread :laugh: :D

Do you have a similar setup @Owen1982 ? :)
 
(if you have no backup!)
The takeaway isn't to avoid RAID 5 and 6, it's to always have a backup. A second drive in a 2-disk RAID 1 can fail too, and you're hitting the drives just as hard, because you still have to copy the entire contents of the drive, just as you do with RAID 5 or 6. Calculating parity isn't extra wear on the drives; it's extra work for the CPU or RAID controller.
 
I suppose the only other thing I could try would be a very basic Windows Server, or even better a Linux server [...] Software RAID seems to be perfectly fine for this setup for the moment; does anyone else here use it?

Stick with DSM, it's good software. Synology has run software RAID on all their NAS products and it works very well.
 
Stick with DSM, it's good software. Synology has run software RAID on all their NAS products and it works very well.
Software RAID these days really isn't bad. Modern CPUs have more than enough capability to calculate parity without a dedicated controller. At least on Linux, you can get creative and use dm-cache to do read or write-back caching to other logical volumes, so you could accelerate a RAID array with an NVMe drive or one (or many) SATA SSDs.
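A minimal sketch of that, using LVM's lvmcache front-end to dm-cache; the volume group, LV, and device names are all hypothetical:

```python
# Attach an NVMe cache volume to an existing LVM logical volume using
# dm-cache (via lvmcache). All volume/device names are hypothetical;
# run as root on a box with LVM2 installed.
import subprocess

def run(*cmd):
    subprocess.run(cmd, check=True)

# Add the NVMe drive to the volume group holding the RAID-backed data LV.
run("vgextend", "vg0", "/dev/nvme0n1")

# Carve out a cache volume on the NVMe drive.
run("lvcreate", "-n", "fastcache", "-L", "200G", "vg0", "/dev/nvme0n1")

# Attach it to the data LV as a writeback cache (writethrough is the
# safer default if the cache device isn't power-loss protected).
run("lvconvert", "--type", "cache", "--cachevol", "fastcache",
    "--cachemode", "writeback", "vg0/data")
```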
 
Software RAID these days really isn't bad. [...]

And the DSM software has SSD caching built in. I use an SSD cache on my Windows-based file server to accelerate writes to my primary RAID5 array.
 
I think I'm in good hands. I was considering a bit of a change in the hardware I use, maybe a Xeon instead of my G3258, but at the moment there's little point, as I mentioned, because of the limiting 1Gb network in my home :) Sadly no 10Gb switch here, yet :)

I think I'm going to have a good bit of fun when I get some new drives. I think if I can manage to order at least 4, I'll be on to a winner; that way I'll have either two new RAID 1 volumes and can retire a pair of 4TB drives, or I can do something else with them :) Either way, testing will be needed!! :)
 