
Raid failure - can I recover data?

Pandamonium

New Member
Joined
Nov 13, 2018
Messages
9 (0.00/day)
Hi All,

I can't believe this has happened, but despite my best efforts I've just suffered a RAID failure and I can't see how to recover from it. If someone could give me some help, I would be extremely grateful.

I have a server with a built-in LSI SAS8008 controller. I have six drives (2TB each) configured as a RAID 10 volume. Today I rebooted the server and the RAID controller shows the volume as failed. Entering the controller's setup showed one drive had failed and two others were degraded. I've tried adding another drive alongside; it shows up on the controller, but when I manage the volume, I get no recovery options at all.

I've since removed the failed drive and put it in another PC and it reports as fine, so I've put it back, and now the RAID controller shows this drive as degraded, so I'm even more confused. So currently I have six drives in the RAID, three now showing degraded.

I should say that I'm running all this through the controller's boot-up menu, as I can't get any software to work with this RAID controller in Windows.

Believe it or not, all this happened when I was trying to install a tape drive so I could make some off-site backups of the data.

Is there anything I can do please?

Many Thanks in advance

Paul
 
Joined
May 3, 2010
Messages
1,086 (0.21/day)
Location
Essex, England
Processor Ryzen 5900X OC 5150Mhz
Motherboard Asus ROG Crosshair VIII Formula
Cooling Custom EKWB for CPU, VRM's & GPU with 2x 480mm Rads
Memory Gskill TridentZ 3600 Mhz C17
Video Card(s) Powercolor RX 6900XT Liquid Devil Ultimate
Storage Samsung 970 Evo Plus 2TB x 2 Raid0
Display(s) MSI Optix MAG272CQR 27 1440p x2
Case Corsair 1000D
Audio Device(s) onboard 7.1 HD Audio
Power Supply Seasonic PRIME Ultra 1300w PSU
Mouse Logitech G300s
Keyboard Logitech G19s
Software Windows 10 64Bit
Benchmark Scores R20 : 9329 Timespy: 21455
Have you checked the SMART values to make sure the drives are degraded and bad? If the drives are good, a simple chkdsk may resolve your issues.
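
For anyone wanting to script that check: a minimal sketch, assuming smartmontools is installed (its smartctl tool is standard; the device path below is a placeholder you'd swap for your own drive).

Code:
import subprocess

def smart_health(device: str) -> str:
    # "-H" asks smartctl for the drive's overall SMART self-assessment
    # (PASSED/FAILED) rather than the full attribute table.
    result = subprocess.run(
        ["smartctl", "-H", device], capture_output=True, text=True
    )
    return result.stdout

# Placeholder device path; adjust for your system.
print(smart_health("/dev/sdb"))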
 

Pandamonium

New Member
Joined
Nov 13, 2018
Messages
9 (0.00/day)
Have you checked the SMART values to make sure the drives are degraded and bad? If the drives are good, a simple chkdsk may resolve your issues.

I removed the failed drive and used a separate PC to check the SMART values, and it came back as fine, so I put it back and now it shows as degraded instead of failed.

I can't run a chkdsk, as the RAID controller has the volume marked as failed and doesn't allow it to be seen in Windows.

Thank you for replying though.
 

Mindweaver

Moderato®™
Staff member
Joined
Apr 16, 2009
Messages
8,194 (1.49/day)
Location
Charleston, SC
System Name Tower of Power / Sechs
Processor i7 14700K / i7 5820k @ 4.5ghz
Motherboard ASUS ROG Strix Z690-A Gaming WiFi D4 / X99S GAMING 7
Cooling CM MasterLiquid ML360 Mirror ARGB Close-Loop AIO / CORSAIR Hydro Series H100i Extreme
Memory CORSAIR Vengeance LPX 32GB (2 x 16GB) DDR4 3600 / G.Skill DDR4 2800 16GB 4x4GB
Video Card(s) ASUS TUF Gaming GeForce RTX 4070 Ti / ASUS TUF Gaming GeForce RTX 3070 V2 OC Edition
Storage 4x Samsung 980 Pro 1TB M.2, 2x Crucial 1TB SSD / Samsung 870 PRO 500GB M.2
Display(s) Samsung 32" Odyssy G5 Gaming 144hz 1440p, ViewSonic 32" 72hz 1440p / 2x ViewSonic 32" 72hz 1440p
Case Phantek "400A" / Phanteks “Enthoo Pro series”
Audio Device(s) Realtek ALC4080 / Azalia Realtek ALC1150
Power Supply Corsair RM Series RM750 / Corsair CXM CX600M
Mouse Glorious Gaming Model D Wireless / Razer DeathAdder Chroma
Keyboard Glorious GMMK with box-white switches / Keychron K6 pro with blue switches
VR HMD Quest 3 (128gb) + Rift S + HTC Vive + DK1
Software Windows 11 Pro x64 / Windows 10 Pro x64
Benchmark Scores Yes
Because you are using RAID 10, all drives are mirrored. You should be able to change the drive from dynamic to basic and just read the drive if it hasn't failed. If you pulled the drive from the array, then you will need to rebuild the array for it to come out of the degraded state. Try putting a new drive in; your RAID controller should pick up the drive and start the rebuild. The one good thing is RAID 10 is very redundant.
 

Pandamonium

New Member
Joined
Nov 13, 2018
Messages
9 (0.00/day)
Because you are using RAID 10, all drives are mirrored. You should be able to change the drive from dynamic to basic and just read the drive if it hasn't failed. If you pulled the drive from the array, then you will need to rebuild the array for it to come out of the degraded state. Try putting a new drive in; your RAID controller should pick up the drive and start the rebuild. The one good thing is RAID 10 is very redundant.

I only pulled the drive while the server was off and put it back in the same slot before rebooting, so nothing should have changed as far as the RAID controller is concerned. However, its status did change from failed to degraded.

I can't change from dynamic to basic, as the RAID volume is showing as failed in the RAID controller and therefore doesn't show up at all under Windows. The only way I can see anything about this volume is through the controller's boot menu, and I have no recovery options available.

Thanks
 

Mindweaver

Moderato®™
Staff member
Joined
Apr 16, 2009
Messages
8,194 (1.49/day)
Location
Charleston, SC
I only pulled the drive while the server was off and put it back in the same slot before rebooting, so nothing should have changed as far as the RAID controller is concerned. However, its status did change from failed to degraded.

I can't change from dynamic to basic, as the RAID volume is showing as failed in the RAID controller and therefore doesn't show up at all under Windows. The only way I can see anything about this volume is through the controller's boot menu, and I have no recovery options available.

Thanks
Then it sounds like the disk has indeed failed. When you say under Windows, do you mean Disk Management? If not, then you need to look at Disk Management. Do you have a new drive to replace it? That is what the controller needs to rebuild the array. Also, even if the drive is still good, it will need the partition deleted before being added back to the array for rebuild.
 

Pandamonium

New Member
Joined
Nov 13, 2018
Messages
9 (0.00/day)
Then it sounds like the disk has indeed failed. When you say under Windows, do you mean Disk Management? If not, then you need to look at Disk Management. Do you have a new drive to replace it? That is what the controller needs to rebuild the array. Also, even if the drive is still good, it will need the partition deleted before being added back to the array for rebuild.

The volume does not show under Windows - not even in Disk Management.

I have added another drive, but the controller boot software only allows me to add it as a hot spare. I've tried that, but it didn't attempt to use it to rebuild or anything.

I'm beginning to think that the controller itself doesn't give me any options to rebuild. I'm trying to get the Windows software to work, but it doesn't find the controller. I suspect I'm missing a driver.
 

Mindweaver

Moderato®™
Staff member
Joined
Apr 16, 2009
Messages
8,194 (1.49/day)
Location
Charleston, SC
Have you tried using DISKPART to view the drive?
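
If it helps, DISKPART can also be driven non-interactively via its documented /s script switch; a minimal sketch (Windows only, paths are placeholders):

Code:
import os
import subprocess
import tempfile

# "list disk" and "list volume" are standard DISKPART commands.
script = "list disk\nlist volume\n"

with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write(script)
    path = f.name
try:
    # /s runs DISKPART against a script file instead of the interactive prompt.
    out = subprocess.run(["diskpart", "/s", path], capture_output=True, text=True)
    print(out.stdout)
finally:
    os.remove(path)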
 

newtekie1

Semi-Retired Folder
Joined
Nov 22, 2005
Messages
28,472 (4.23/day)
Location
Indiana, USA
Processor Intel Core i7 10850K@5.2GHz
Motherboard AsRock Z470 Taichi
Cooling Corsair H115i Pro w/ Noctua NF-A14 Fans
Memory 32GB DDR4-3600
Video Card(s) RTX 2070 Super
Storage 500GB SX8200 Pro + 8TB with 1TB SSD Cache
Display(s) Acer Nitro VG280K 4K 28"
Case Fractal Design Define S
Audio Device(s) Onboard is good enough for me
Power Supply eVGA SuperNOVA 1000w G3
Software Windows 10 Pro x64
The one good thing is RAID 10 is very redundant.

The problem with RAID10 is that it is still RAID0; it is just a RAID0 of a bunch of two-drive mirrors. That means if any one of those two-drive mirrors has both drives fail, all the data is lost. And that is what it sounds like happened in the OP's case. Recovering a RAID10 is not as simple as just popping one of the drives in and reading the data; the data is not on a single drive like it is on a RAID1, it is striped across all the RAID1 mirrors. In the OP's case he has 3 RAID1 mirror arrays. This is why I really do not recommend RAID10; it is only marginally more redundant than RAID1/5. Unless you really need the speed, RAID6 is a much better option for data protection.
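
To make the striping point concrete, here is a toy model (a sketch, not the controller's actual layout algorithm) of a six-drive RAID10 as a RAID0 stripe over three mirrors; lose both halves of any one mirror and every third stripe unit is gone:

Code:
# Three 2-drive mirrors, striped RAID0-style in round-robin order.
MIRRORS = [("d0", "d1"), ("d2", "d3"), ("d4", "d5")]

def readable(stripe_unit: int, failed: set) -> bool:
    # A stripe unit survives if at least one member of its mirror is alive.
    pair = MIRRORS[stripe_unit % len(MIRRORS)]
    return any(d not in failed for d in pair)

# Both drives of the middle mirror fail: units 1, 4, 7, ... are unrecoverable,
# so the volume as a whole is unreadable even with four healthy drives.
failed = {"d2", "d3"}
print([u for u in range(9) if not readable(u, failed)])  # [1, 4, 7]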

There are probably data recovery services that might be able to get the data back, but it ain't going to be cheap. And messing around with individual drives to get them to show up in Windows is not going to get you any data back; it is likely just making things worse and harder to recover.
 

Mindweaver

Moderato®™
Staff member
Joined
Apr 16, 2009
Messages
8,194 (1.49/day)
Location
Charleston, SC
Yep - doesn't show there either - sorry.
Good luck! Just to be clear, you can't boot the server and see your files? I thought you were only trying to fix the array. Any drive failure will degrade your array until you rebuild it.
Thanks for trying - I appreciate it. I'm not giving up on this data - it must still be there.
No problem. I may be confused after newtekie's reply, but what are you trying to achieve? Did you lose data, or do you just want to rebuild the array to get it out of its degraded state and back into a redundant array?
The problem with RAID10 is that it is still RAID0; it is just a RAID0 of a bunch of two-drive mirrors. That means if any one of those two-drive mirrors has both drives fail, all the data is lost. And that is what it sounds like happened in the OP's case. Recovering a RAID10 is not as simple as just popping one of the drives in and reading the data; the data is not on a single drive like it is on a RAID1, it is striped across all the RAID1 mirrors. In the OP's case he has 3 RAID1 mirror arrays.

There are probably data recovery services that might be able to get the data back, but it ain't going to be cheap.
True, and I understand RAID 10 is just RAID 0 and 1. I may have misunderstood the OP. I thought he only had one drive that failed and he was just trying to repair the array? RAID 10 should handle a single drive failure, and a six-drive RAID 10 should handle a two-drive failure, but it depends on which two drives failed.
 

Pandamonium

New Member
Joined
Nov 13, 2018
Messages
9 (0.00/day)
No problem. I may be confused after newtekie's reply, but what are you trying to achieve? Did you lose data, or do you just want to rebuild the array to get it out of its degraded state and back into a redundant array?

True, and I understand RAID 10 is just RAID 0 and 1. I may have misunderstood the OP. I thought he only had one drive that failed and he was just trying to repair the array? RAID 10 should handle a single drive failure, and a six-drive RAID 10 should handle a two-drive failure, but it depends on which two drives failed.

I am trying to recover the data - I have an old backup but would like all of it if I can.

Initially, my RAID was showing one failed and two degraded drives. Since pulling and replacing the failed drive, it now shows three degraded drives, so I'm unclear whether I should have some recovery options or not.

I've now managed to get the Windows software (MegaRAID) to work and, yet again, I have no options for recovery, so it's not looking great.
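
If the GUI offers nothing, the MegaRAID stack also has a command-line tool, MegaCLI, worth querying for raw state. A minimal sketch, assuming it is installed; the binary name and option casing vary by version, so treat these as placeholders:

Code:
import subprocess

def megacli(*args: str) -> str:
    # MegaCli64 is the common 64-bit binary name; adjust to your install.
    out = subprocess.run(["MegaCli64", *args], capture_output=True, text=True)
    return out.stdout

# -PDList lists physical drives with their firmware state (Online, Failed, ...);
# -LDInfo lists logical drives (arrays) and their Optimal/Degraded state.
print(megacli("-PDList", "-aALL"))
print(megacli("-LDInfo", "-Lall", "-aALL"))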

Thanks all
 
Joined
Feb 2, 2015
Messages
2,707 (0.80/day)
Location
On The Highway To Hell \m/
That means if any one of those two drive mirrors has both drives fail, all the data is lost. And that is what sounds like happened in the OP's case.
Well, not only is that not what it sounds like happened, it would be very unlikely to ever happen. What you're trying to say is that out of 6 drives, 2 drives failed at the exact same time, and that those 2 drives, out of 6, also happened to be a mirrored pair, out of 3. I'm no statistician, but common sense tells me that's not going to happen very often (like almost never).
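
For the record, the odds are easy to enumerate (a quick sketch assuming three fixed mirror pairs, as in the OP's six-drive array):

Code:
from itertools import combinations

# Drives 0..5 arranged as three mirrored pairs.
pairs = [{0, 1}, {2, 3}, {4, 5}]

# Of all C(6,2) = 15 ways two drives can fail together, only the 3 cases
# where both failures land in the same pair kill the whole RAID10.
combos = list(combinations(range(6), 2))
fatal = [c for c in combos if set(c) in pairs]
print(f"{len(fatal)} fatal of {len(combos)} two-drive failures")  # 3 of 15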

Not to mention the fact that this all started with no warning (no signs of imminent drive failure), as he was fiddling around with something else (trying to install a tape drive). Which leads me to believe he may have accidentally messed something up in that process.

Common sense FTW.
 

newtekie1

Semi-Retired Folder
Joined
Nov 22, 2005
Messages
28,472 (4.23/day)
Location
Indiana, USA
Well, not only is that not what it sounds like happened, it would be very unlikely to ever happen. What you're trying to say is that out of 6 drives, 2 drives failed at the exact same time, and that those 2 drives, out of 6, also happened to be a mirrored pair, out of 3. I'm no statistician, but common sense tells me that's not going to happen very often (like almost never).

Not to mention the fact that this all started with no warning (no signs of imminent drive failure), as he was fiddling around with something else (trying to install a tape drive). Which leads me to believe he may have accidentally messed something up in that process.

Common sense FTW.

When it comes to RAID, a drive doesn't actually have to fail to be considered failed by the RAID controller.

But the fact is that, yes, this is exactly what it sounds like happened in the OP's case. The controller has marked several drives failed. It could be for any number of reasons, but the result is the array is now useless and not going to be something easy to recover. Also, it sounds like the OP didn't have the Windows software for the controller installed and running, so we have no idea when the drives actually failed. There could have been one or two that were marked failed already, and messing around in the case caused the final one to be marked as failed. We don't know, but the fact is the controller has marked drives as failed, and enough of them have failed that the RAID0 array is now unreadable.

Also, I don't think the OP knows enough about what he is doing and isn't even describing things correctly. When he is saying drives are marked as failed and degraded, I think the LSI controller is actually reporting the RAID1 arrays as failed or degraded. LSI controllers don't mark drives as failed or degraded; those are statuses for RAID arrays. When a drive is marked as failed, it is listed as "missing" by the controller.

And while it sounds like the RAID1 arrays are now all back to being degraded instead of one failed, there probably won't be any option to rebuild the RAID0 array, because you don't rebuild RAID0 arrays. Once the RAID0 array is marked as failed, it probably isn't going to come back; it's just the nature of RAID0.
 
Joined
Jul 18, 2016
Messages
507 (0.18/day)
System Name Gaming PC / I7 XEON
Processor I7 4790K @stock / XEON W3680 @ stock
Motherboard Asus Z97 MAXIMUS VII FORMULA / GIGABYTE X58 UD7
Cooling X61 Kraken / X61 Kraken
Memory 32GB Vengeance 2133 MHz / 24GB Corsair XMS3 1600 MHz
Video Card(s) Gainward GLH 1080 / MSI Gaming X Radeon RX480 8 GB
Storage Samsung EVO 850 500gb ,3 tb seagate, 2 samsung 1tb in raid 0 / Kingdian 240 gb, megaraid SAS 9341-8
Display(s) 2 BENQ 27" GL2706PQ / Dell UP2716D LCD Monitor 27 "
Case Corsair Graphite Series 780T / Corsair Obsidian 750 D
Audio Device(s) ON BOARD / ON BOARD
Power Supply Sapphire Pure 950w / Corsair RMI 750w
Mouse Steelseries Sesnsei / Steelseries Sensei raw
Keyboard Razer BlackWidow Chroma / Razer BlackWidow Chroma
Software Windows 10 64bit PRO / Windows 10 64bit PRO
There is some software that claims it can recover data from RAID. Does the LSI SAS8008 not have software to rebuild the array?
 

Pandamonium

New Member
Joined
Nov 13, 2018
Messages
9 (0.00/day)
When it comes to RAID, a drive doesn't actually have to fail to be considered failed by the RAID controller.
Also, I don't think the OP knows enough about what he is doing and isn't even describing things correctly. When he is saying drives are marked as failed and degraded, I think the LSI controller is actually reporting the RAID1 arrays as failed or degraded. LSI controllers don't mark drives as failed or degraded; those are statuses for RAID arrays. When a drive is marked as failed, it is listed as "missing" by the controller.

Sorry, but you are wrong. See the screenshot. It clearly shows the drives marked as degraded and the volume marked as failed.

20181114_090300.jpg

Just to clarify, the server was working fine at the start of the day. I shut it down to put in an additional SAS card for the tape drive, and when I rebooted, the RAID was dead. I seriously don't believe the drives are faulty. I'm now convinced the RAID controller has become corrupted somehow. The drive it said had failed checks out fine in another PC.

I'm going to cut the controller out of this today and try this software...

http://www.freeraidrecovery.com/

Nothing to lose now.
 

Aquinus

Resident Wat-man
Joined
Jan 28, 2012
Messages
13,147 (2.94/day)
Location
Concord, NH, USA
System Name Apollo
Processor Intel Core i9 9880H
Motherboard Some proprietary Apple thing.
Memory 64GB DDR4-2667
Video Card(s) AMD Radeon Pro 5600M, 8GB HBM2
Storage 1TB Apple NVMe, 4TB External
Display(s) Laptop @ 3072x1920 + 2x LG 5k Ultrafine TB3 displays
Case MacBook Pro (16", 2019)
Audio Device(s) AirPods Pro, Sennheiser HD 380s w/ FIIO Alpen 2, or Logitech 2.1 Speakers
Power Supply 96w Power Adapter
Mouse Logitech MX Master 3
Keyboard Logitech G915, GL Clicky
Software MacOS 12.1
I'm now convinced the RAID controller has become corrupted somehow.
I'm not. That would mean that the firmware for the RAID controller is corrupted, and I would argue that the device wouldn't even initialize if that were the case (i.e., you wouldn't be able to get into the management interface). Unless you moved cables around and the cables themselves are bad, what you're seeing is a catastrophic failure of your RAID. With RAID 0+1 across 6 disks, using 2 disks for each level-1 mirror, the only way you could tolerate 3 disks failing would be if exactly one disk from each mirrored pair failed. If you have a single pair fail, you're already toast.

I honestly think your last resort is making sure the cables are good. You could easily have disturbed the HDD cables; maybe there is a loose connection somewhere. But I've had 2 drives in a 4-disk RAID-5 fail before. Believe it or not, it does happen.
 

Mindweaver

Moderato®™
Staff member
Joined
Apr 16, 2009
Messages
8,194 (1.49/day)
Location
Charleston, SC
Sorry, but you are wrong. See the screenshot. It clearly shows the drives marked as degraded and the volume marked as failed.

View attachment 110494
Just to clarify, the server was working fine at the start of the day. I shut it down to put in an additional SAS card for the tape drive, and when I rebooted, the RAID was dead. I seriously don't believe the drives are faulty. I'm now convinced the RAID controller has become corrupted somehow. The drive it said had failed checks out fine in another PC.

I'm going to cut the controller out of this today and try this software...

http://www.freeraidrecovery.com/

Nothing to lose now.
Yea, that's bad news. It's not going to hurt to try the software at this point, as long as it's not malware. In my 20+ years in IT I have lost more than one drive in a RAID 5 array as well. It sucks, but RAIDs are not 100%; that's why you still want to have a backup, and it sounds like you have an older backup, so it's not totally lost. After you have done all you can, make sure to buy a big enough USB3 drive to do backups. I saw you said this all started after trying to set up tape backups? Did you mean tapes? That's funny, because after switching to drives I still called it swapping out my tapes... lol. Oh, and of course, I still tell my wife to tape shows on TV.

Oh, and not to kick you while you are down, but next time lead with the screenshot; it's more telling to a lot of us. If the array was salvageable, the RAID volume/array would be degraded, but with it in a failed state there isn't much you can do, other than what Aquinus said: make sure the cables aren't loose and are in the correct order.
 

newtekie1

Semi-Retired Folder
Joined
Nov 22, 2005
Messages
28,472 (4.23/day)
Location
Indiana, USA
Sorry, but you are wrong. See the screenshot. It clearly shows the drives marked as degraded and the volume marked as failed.

Sorry, you're right. It has been a while since I've worked with LSI controllers; I forgot they did it that way. IIRC, now that the disks are all seen, you have to use the Windows utilities to assign them as hot spares again, and then the option to rebuild should show up; or actually, it should start rebuilding the arrays automatically.

Degraded status for a drive on an LSI controller means that the controller marked the drive as failed, but it is now back working properly again and needs to be rebuilt because it has lost sync with the array. This can be caused by something as simple as a loose cable.

The only problem I see now is that the last two drives are both degraded. I believe the way that controller handles RAID10 is that the first two drives are their own mirror array, the next two are their own mirror array, and the last two are their own mirror array. Then those 3 mirror arrays are striped in RAID0. So with the last two drives having been marked as failed, that entire mirror array has failed, and hence the RAID0 has failed. And because both drives in the last mirror array have been considered failed, it does not have a good drive to rebuild the array from.
 

bug

Joined
May 22, 2015
Messages
13,226 (4.06/day)
Processor Intel i5-12600k
Motherboard Asus H670 TUF
Cooling Arctic Freezer 34
Memory 2x16GB DDR4 3600 G.Skill Ripjaws V
Video Card(s) EVGA GTX 1060 SC
Storage 500GB Samsung 970 EVO, 500GB Samsung 850 EVO, 1TB Crucial MX300 and 2TB Crucial MX500
Display(s) Dell U3219Q + HP ZR24w
Case Raijintek Thetis
Audio Device(s) Audioquest Dragonfly Red :D
Power Supply Seasonic 620W M12
Mouse Logitech G502 Proteus Core
Keyboard G.Skill KM780R
Software Arch Linux + Win10
True, and I understand RAID 10 is just RAID 0 and 1. I may have misunderstood the OP. I thought he only had one drive that failed and he was just trying to repair the array? RAID 10 should handle a single drive failure, and a six-drive RAID 10 should handle a two-drive failure, but it depends on which two drives failed.

Whereas a RAID5 setup will be OK with one drive failure, and RAID6 is fine with any two failed drives. Just sayin'.
 

Pandamonium

New Member
Joined
Nov 13, 2018
Messages
9 (0.00/day)
OK, I've finally managed to make some progress using http://www.freeraidrecovery.com/. I attached my drives without using the RAID controller and it found the correct RAID parameters. I then used the ReclaiMe file recovery software and have recovered EVERYTHING!
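
For the curious, what such tools do once the parameters are known is conceptually simple: read one surviving member of each mirror and re-interleave the stripe units. A rough sketch (stripe size, file names, and member order here are placeholders, not values from this array):

Code:
STRIPE = 64 * 1024  # bytes per stripe unit; a common default, assumed here

def reassemble(members, out_path):
    # 'members' are raw images of one healthy drive per mirror, in stripe order.
    files = [open(m, "rb") for m in members]
    try:
        with open(out_path, "wb") as out:
            while True:
                chunks = [f.read(STRIPE) for f in files]
                if not any(chunks):   # all member images exhausted
                    break
                for c in chunks:      # round-robin: one stripe unit per member
                    out.write(c)
    finally:
        for f in files:
            f.close()

# reassemble(["mirror0.img", "mirror1.img", "mirror2.img"], "volume.img")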

This just confirms my suspicion that the drives were always fine. So either the controller has had an issue or, as has been suggested, the cable(s) have failed. I still suspect the RAID controller, as this whole episode started when I plugged in another controller to run the tape drive.

Now my dilemma is: do I rebuild the RAID as before, or just drop the idea of RAID altogether (it didn't really help me here at all!) and make use of the new tape drive to keep simple backups in case of failure? I'm leaning towards the second option, as I have no need for RAID in terms of speed; it was just a way of giving me some fault tolerance, and it totally failed to do that in the end.

Thanks for all the replies.
 

bug

Joined
May 22, 2015
Messages
13,226 (4.06/day)
OK, I've finally managed to make some progress using http://www.freeraidrecovery.com/. I attached my drives without using the RAID controller and it found the correct RAID parameters. I then used the ReclaiMe file recovery software and have recovered EVERYTHING!

This just confirms my suspicion that the drives were always fine. So either the controller has had an issue or, as has been suggested, the cable(s) have failed. I still suspect the RAID controller, as this whole episode started when I plugged in another controller to run the tape drive.

Now my dilemma is: do I rebuild the RAID as before, or just drop the idea of RAID altogether (it didn't really help me here at all!) and make use of the new tape drive to keep simple backups in case of failure? I'm leaning towards the second option, as I have no need for RAID in terms of speed; it was just a way of giving me some fault tolerance, and it totally failed to do that in the end.

Thanks for all the replies.
As I have pointed out above, you have enough drives to set up a RAID5 or RAID6 array. These are both more resilient than what you had before and net you more usable storage space. But it's up to you.
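
The capacity arithmetic for the OP's six 2TB drives bears this out (a quick sketch):

Code:
n, size_tb = 6, 2                          # six 2TB drives
print("RAID10:", n // 2 * size_tb, "TB")   # mirrors halve capacity -> 6 TB
print("RAID5: ", (n - 1) * size_tb, "TB")  # one drive's worth of parity -> 10 TB
print("RAID6: ", (n - 2) * size_tb, "TB")  # two drives' worth of parity -> 8 TB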
 

Mindweaver

Moderato®™
Staff member
Joined
Apr 16, 2009
Messages
8,194 (1.49/day)
Location
Charleston, SC
As I have pointed out above, you have enough drives to set up a RAID5 or RAID6 array. These are both more resilient than what you had before and net you more usable storage space. But it's up to you.
I agree with bug. Also, be sure to add a BBU (battery backup unit) if your card will allow it; it lets the controller safely enable write-back caching, which is where the extra performance comes from.
 

Pandamonium

New Member
Joined
Nov 13, 2018
Messages
9 (0.00/day)
As I have pointed out above, you have enough drives to set up a RAID5 or RAID6 array. These are both more resilient than what you had before and net you more usable storage space. But it's up to you.

Hmm, so my motherboard has the LSI 3008 controller, which doesn't support RAID 5 or 6. However, it looks like the SATA ports also have a RAID controller on them which DOES support RAID 5. Previously I couldn't use that, as I had no ports left for the boot drive, but as part of this recovery I've had to add extra ports, so now I can use this option. That also removes both the SAS cables and the controller, one of which I no longer trust.

Looks like a great way forward - Thank you.
 