
BIOS-level RAID 1 - How to know it is working on Windows?

System Name Dirt Sheep | Silent Sheep
Processor i5-2400 | 13900K (-0.02mV offset)
Motherboard Asus P8H67-M LE | Gigabyte AERO Z690-G, bios F29 Intel baseline
Cooling Scythe Katana Type 1 | Noctua NH-U12A chromax.black
Memory G-skill 2*8GB DDR3 | Corsair Vengeance 4*32GB DDR5 5200MHz C40 @ 4000MHz
Video Card(s) iGPU | NV 1080TI FE
Storage Micron 256GB SSD | 2*SN850 1TB, 230S 4TB, 840EVO 128GB, IronWolf 6TB, 2*HC550 18TB in RAID1
Display(s) LG 21" FHD W2261VP | Lenovo 27" 4K Qreator 27
Case Thermaltake V3 Black|Define 7 Solid: 2*TOUGHFAN 14Pro+2*Stock 14 inlet, NF-A14 PPC-3000+NF-A8 outlet
Audio Device(s) Beyerdynamic DT 990 (or the screen speakers when I'm too lazy)
Power Supply Enermax Pro82+ 525W | Corsair RM650x (2021)
Mouse Logitech Master 3
Keyboard Roccat Isku FX
VR HMD Nop.
Software WIN 10 | WIN 11
Benchmark Scores CB23 SC: i5-2400=641 | i9-13900k=2281 MC: i5-2400=i9 13900k SC | i9-13900k=35500
I successfully configured the two HDDs as a RAID 1 volume in the BIOS under VMD (NOT from Windows). It is NOT a system disk and is meant only for big data storage (Gigabyte AERO Z690 G, 2*WD HC550 18TB).
In Windows (11) I see only one 18TB drive, as expected.
But still, how can I be certain that both drives hold the same data?
Is there any software that can show the state of the RAID?
I thought of doing a physical test: copy some data to the 18TB drive (they are empty now), turn the computer off, disconnect one of the two 18TB drives (call it A) and see if the other (B) still holds the data. Then disconnect B, reconnect A and check again. Makes sense?
 
Hi,
I think you break the RAID if you remove one drive.

RAID is not backup.
It was a solution for small HDDs/SSDs, pretty much obsolete tech nowadays and not worth the effort.
Look for better, real backup options like FreeFileSync:

FreeFileSync download | SourceForge.net

Or of course system imaging is best, plus keeping the backup disconnected when not in use.
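
If you go the file-sync route, one nice thing is that you can spot-check that the copy really matches the source, for example by hashing both sides. A minimal Python sketch of that idea (the two paths are placeholders, not anything from this thread; FreeFileSync has its own compare modes, this is just the bare concept in script form):

Code:
import hashlib
from pathlib import Path

def file_hash(path: Path, chunk_size: int = 1 << 20) -> str:
    """Return the SHA-256 hex digest of a file, read in 1 MB chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest()

def compare_trees(source: Path, backup: Path) -> list[str]:
    """List files that are missing from the backup or whose contents differ from the source."""
    problems = []
    for src_file in source.rglob("*"):
        if not src_file.is_file():
            continue
        dst_file = backup / src_file.relative_to(source)
        if not dst_file.is_file():
            problems.append(f"missing: {dst_file}")
        elif file_hash(src_file) != file_hash(dst_file):
            problems.append(f"differs: {dst_file}")
    return problems

if __name__ == "__main__":
    # Hypothetical paths; point these at the real archive and its sync copy.
    issues = compare_trees(Path("D:/archive"), Path("E:/archive-backup"))
    print("\n".join(issues) if issues else "Backup matches source.")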
 
I know it's no backup, just redundancy.
But still, if one drive fails I can use the other as a normal, non-RAID drive until I rebuild the array with a new drive - can't I?
 
Hi,
Well, to me, not knowing if it works means it doesn't.
File sync is better, and it's easy to see whether it's working or not.
 
So there's no advantage to doing BIOS RAID 1?
Should I just use the built-in Win11 "Computer Management" tools for mirroring?
 
Hi,
That's another option, easier to verify than outdated RAID nonsense.
 
So there's no advantage to doing BIOS RAID 1?
Should I just use the built-in Win11 "Computer Management" tools for mirroring?
That's just "Windows RAID"; there's nothing wrong with your current array, and you can always view the physical disks in the BIOS RAID section you created it in if you're unsure. I just retired my RAID 1 array, as it was only doing Windows backups which I'd never use anyway; I just had the drives already installed and working, and already had another array I'm still using.
 
So, any suggestions as to which is better (and why, pros/cons) - Windows tools or 3rd-party software?
 
So, any suggestions as to which is better (and why, pros/cons) - Windows tools or 3rd-party software?

Oh jeez, that's way out of scope for this thread and a whole-ass discussion of its own (lots of misinformation and prejudice from gamers here).

Suffice to say, hardware RAID is still a far better solution than most software RAIDs. Secondly, the "hardware" RAID on most motherboards is not a dedicated controller but a logical one. Some people call this "fake RAID", since the BIOS rather than a dedicated backplane is controlling the disks; it's still a leg up (redundancy- and performance-wise) over software RAID solutions.

That is ALSO why you can rest easy that your data is being replicated. RAID doesn't work like you seem to allude to in the OP; without breaking the array you can't really "check" the files, you can only go by array health.

When an array is created, the volume "Disk" that Windows sees is just that: a pretend "hard drive" that the RAID controller presents to the OS. The actual duplication/splitting of data happens at a much lower level. Data cannot "miss" or "lag" getting written to the other drive in RAID 1.

Now, on the topic of array monitoring, I personally don't know, at least not in the case of your specific board. However, most Intel platform RAID controllers (BIOS) can be checked on and managed using Intel RST(e)/VROC. This would give you overall RAID health and may even give you control over some array configuration. This ability is baked into the chipset itself, but what the manufacturer (Gigabyte in this case) chooses to expose is anyone's guess.

The version/edition of RST(e)/VROC that you need depends on the chipset and the added features; best to see if the download section for your board has a version. Of course, given how fresh Z690 is, you can just as well grab it from Intel's site; it just might take a few tries to find one that is happy with your platform.

For the record, the "easy" way of setting up Windows software RAID in Disk Management defaults to settings that are:

1: Almost impossible to recover from
2: Slow R/W
3: Amplified IOPS (slow response)

I have EXTENSIVE experience in virtualization and data systems (PB-scale SAN and cluster storage), from hardware up to kernel I/O scheduling and file systems.

I would never recommend Windows soft RAID, and if it were needed, I would never recommend it on a production workload. If you had to use OS RAID, switch to Linux and just use a ZFS pool at that point, or mdraid.
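
For anyone who does go the mdraid route, checking mirror health is basically a one-liner against /proc/mdstat. A rough Python sketch of that check (Linux only, nothing to do with the Intel VMD array in this thread; array names are whatever mdadm assigned):

Code:
import re
from pathlib import Path

def mdraid_status() -> dict[str, bool]:
    """Map each md array in /proc/mdstat to True (healthy) or False (degraded)."""
    status = {}
    current = None
    for line in Path("/proc/mdstat").read_text().splitlines():
        m = re.match(r"^(md\d+)\s*:", line)
        if m:
            current = m.group(1)
        elif current and (flags := re.search(r"\[([U_]+)\]", line)):
            # "[UU]" means both mirror members are up; "_" marks a missing/failed member.
            status[current] = "_" not in flags.group(1)
            current = None
    return status

if __name__ == "__main__":
    for array, healthy in mdraid_status().items():
        print(f"{array}: {'healthy' if healthy else 'DEGRADED - see mdadm --detail'}")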
 
Hi,
Well, to me, not knowing if it works means it doesn't.
File sync is better, and it's easy to see whether it's working or not.
With a file sync you miss out on one big benefit of RAID 1: double the read speed (or close to it, minus overhead).
 
Thanks a bunch for that info!
It helps a lot, as I searched but didn't find good practical info that isn't just "pay us to recover your RAID".

I had a feeling that going RAID at the BIOS level is more robust than Windows software RAID, plus I can benefit from faster read speed (when copying archived data to the SSD that I do my day-to-day editing on).
So unless I totally misunderstood you - you strongly recommend staying with BIOS-level RAID 1, right?
Do I get any warning from the BIOS if one of the RAID 1 drives fails (or is having problems)?
Also, if one drive fails, can I just reconfigure the second one in the BIOS to operate as a normal single disk and keep using it (until rebuilding the array with a new disk)?
 
Hi. Long-time BIOS RAID 1 user here, for many, many years. I use it for data redundancy and the (some say minor) speed improvement. It usually works just fine. I should mention that I use BIOS RAID 1 setups only for non-mission-critical data storage. And, for any data that I wanna be really sure about, I do take external backups in addition to the RAID 1.

I had a feeling that going RAID at the BIOS level is more robust than Windows software RAID, plus I can benefit from faster read speed (when copying archived data to the SSD that I do my day-to-day editing on).
Yes, it is.

So unless I totally misunderstood you - you strongly recommend staying with BIOS-level RAID 1, right?
I do.

Do I get any warning from the BIOS if one of the RAID 1 drives fails (or is having problems)?
I have had RAID 1 failures before, and the BIOS gave warnings. Not 100% sure if your current BIOS will do it, but it most probably will.

Also, if one drive fails, can I just reconfigure the second one in the BIOS to operate as a normal single disk and keep using it (until rebuilding the array with a new disk)?
Usually YES. I have had that happen to me, and I was able to use the working drive until I could replace the array with a new one or repair it with a similar drive.

I think you will be fine with your setup, unless your data is extremely mission-critical. For mission-critical data I would recommend hardware RAID with more failure protection, like RAID 5 or, even better, RAID 10. I think some NAS setups also have these built into their design. You can always do some more research into alternatives if your data is that important.
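
For a rough sense of the trade-offs between those levels, usable capacity and worst-case failure tolerance work out something like this (a small sketch assuming identical drives; the 18TB size just mirrors the drives discussed above):

Code:
def raid_summary(level: str, n: int, size_tb: float) -> tuple[float, int]:
    """Rough usable capacity (TB) and guaranteed drive-failure tolerance for n equal drives."""
    if level == "RAID1":              # all drives are mirrors of each other
        return size_tb, n - 1
    if level == "RAID5":              # one drive's worth of capacity goes to parity
        return (n - 1) * size_tb, 1
    if level == "RAID10":             # striped mirror pairs; worst case one failure per pair
        return (n // 2) * size_tb, 1
    raise ValueError(level)

for level, n in [("RAID1", 2), ("RAID5", 4), ("RAID10", 4)]:
    usable, survives = raid_summary(level, n, 18.0)
    print(f"{level} with {n}x18TB: {usable:.0f} TB usable, survives at least {survives} drive failure(s)")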
 
I think you will be fine with your setup, unless your data is extremely mission-critical. For mission-critical data I would recommend hardware RAID with more failure protection, like RAID 5 or, even better, RAID 10. I think some NAS setups also have these built into their design. You can always do some more research into alternatives if your data is that important.
Thank you very much!
It is a RAW video + photo archive, and I need old material from time to time to use in new projects, but no one's life depends on it.
I already have an external offline backup, currently on smaller 2.5" 4-5TB drives, and it's enough atm.
I just need reliable 'online' redundancy until I back up (say once per month).
 
For your above-stated usage scenario, I think you will be absolutely fine with what you have done.

If you wanna make sure that the BIOS warns you about disk failure, one thing you can try is to disconnect the power plug from one of your RAID 1 drives, but do not change anything in the BIOS. Then, when you boot up, it should warn you that the RAID has failed. After getting the warning, you can just reconnect the power to the drive, and everything *should* work as well as before. If you have a spare system, you can try connecting the disconnected RAID drive and physically verifying it on the other system. I am not sure if it will work on the same system without breaking the RAID. I probably did all these experiments way back when I started to use RAID 1 for data redundancy, but I don't remember all the details exactly right now.

If you get the warning, the BIOS is letting you know about drive failure, so you can be sure you have a warning system in place.

Some motherboard vendors also have RAID software tools that report RAID drive health. Check your motherboard vendor's support page and see if there are any RAID tools to help you. They may not be very high quality, but all you need is something to alert you about possible issues before you have a failure.
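
If you want a quick scripted check of what Windows itself reports, a sketch like the one below could help (assumption-heavy: with BIOS/VMD RAID 1 the array normally shows up as a single virtual disk here, so this only reflects whatever the controller exposes, and Intel RST or the BIOS remains the authoritative view of array health):

Code:
import subprocess

# Ask Windows, via PowerShell's Get-PhysicalDisk, how the disks it can see are doing.
cmd = ("Get-PhysicalDisk | "
       "Select-Object FriendlyName, MediaType, HealthStatus, OperationalStatus | "
       "Format-Table -AutoSize")
out = subprocess.run(["powershell", "-NoProfile", "-Command", cmd],
                     capture_output=True, text=True).stdout
print(out)

# Crude alert: flag anything that is not reporting Healthy.
if "Unhealthy" in out or "Warning" in out:
    print(">>> A disk is not reporting Healthy - check Intel RST / the BIOS RAID menu.")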

Your BIOS RAID 1 should be working correctly once you have it set up. The above steps will just help you get some more peace of mind :) Enjoy!
 
Both Intel and AMD have a Windows application to monitor any RAID setup you have created. Without knowing which platform you are on: for AMD it is RAIDXpert, and for Intel it is Intel RST.
 
So I'm playing with some testing in order to understand how this RAID 1 array works.
Using Intel RST I can easily see the RAID state and reconfigure it just as in the BIOS, with extra useful info and options regarding cache behavior.

Now I'm doing/learning about rebuilding the array after (intentionally) 'damaging' it by unplugging the power to one drive, adding files to the second one, then reconnecting the first one.
Firstly, no problem using only one drive as usual.
Second, it seems like a very, very slow process, done automatically by the RST software in Windows. About 1% progress per 10 min...
It is good to see that both the BIOS, during POST, and RST in Windows inform me about the problem in the array.
 

Nice job man, yeah, RST is the way to go IMO for Intel RAID options, they make it easy for users.

Got a screenshot of RST for your victory?

Yup, it will be slow. It's a 1:1 mirror (RAID 1), and it will walk through the drive sector by sector, calculating and correcting differences, so that's normal. This is a little faster in other RAID types, where more disks can be polled to rebuild the data, but I digress, this isn't the place for that discussion.
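
To put a rough number on the rebuild time: because a RAID 1 rebuild copies the whole disk surface, the time is roughly capacity divided by sustained throughput, no matter how little data is actually stored. A quick back-of-the-envelope check (the ~200 MB/s figure is an assumed average for a large 7200 rpm HDD, not something measured in this thread):

Code:
capacity_tb = 18                       # size of each mirror member, not the data stored on it
avg_throughput_mb_s = 200              # assumed average sustained HDD speed across the whole platter

capacity_mb = capacity_tb * 1_000_000  # 1 TB = 1,000,000 MB (decimal, as drives are sold)
hours = capacity_mb / avg_throughput_mb_s / 3600
print(f"~{hours:.0f} hours")           # ~25 hours for one full pass over an 18TB member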

Nice work

done automatically by the RST software in Windows

EDIT: Just a slight correction. The software, in this case RST, only provides you "control" over features and functions that are exposed by the RAID controller. The software is not the thing doing your rebuild; the controller on the board is doing this automatically for you. While it may be possible to stop, pause, eject, or replace from software, you are just issuing commands to the controller. This is an important distinction and something you must understand fundamentally if you wish to dive deeper into RAID in the future. The software, be it for AMD, Intel, or LSI, is just a "window" into what the controller is doing; it is not doing anything FOR you.
 
So, some conclusions so far:
1 - Rebuilding works and took around 24 hours, on an 18TB array with only 1GB on it.
2 - Detaching one drive and moving it to another system (quite an old Sandy Bridge one) is as straightforward as it can be. Simply works. No need even to reconfigure anything in the BIOS of either system.
3 - The BIOS registered the array abnormality even though the system was off the whole time while I removed and returned the disk. When I turned it on after returning the drive, it did a quick check-up before Windows started, which is different from a rebuild.

4 - I tried CrystalDiskMark to see the read speed of the array, but it shows around 270 MB/s, just as with one disk. I would expect to see 500 MB/s+.
Any suggestions?
 
4 - I tried CrystalDiskMark to see the read speed of the array, but it shows around 270 MB/s, just as with one disk. I would expect to see 500 MB/s+.
Any suggestions on that matter?

That's expected behavior. RAID 1 is disks in a mirror; they are not splitting the load, they are direct copies, so no performance improvement, only redundancy.
 
RAID 1 is a direct copy of drive A to drive B.
There are no performance benefits, and no safety other than still having the other drive if drive A or B dies.

Speed boosts require a different RAID level, and more disks for most of them.
 
I just registered here to add some more info to this topic (even though it might be a bit off-topic).
I was also surprised to see no performance improvement for RAID 1 on Intel RST and started digging deeper, and it seems that Intel RST does not support read load balancing with RAID 1, but that doesn't mean this is always the case. Some Linux implementations take advantage of RAID 1's potential and show significantly improved read performance. Unfortunately, Windows doesn't.

Moreover, consumer Intel RST does not support TRIM in RAID 1, which makes SSD RAID performance diminish over time. You can read more about this in this Intel post:

Below is my benchmark:
No RAID, single WD Black SN850X SSD drive:
[CrystalDiskMark screenshot]

RAID 1, double WD Black SN850X SSD drive, Read-Only Cache:
[CrystalDiskMark screenshot]


As you can see, RAID 1 performance in Windows is worse all around than a single SSD drive. I'm abandoning RAID 1 because of that. For me, it's better to have two separate drives and a sync copy between them than to use RAID 1.

EDIT: Interestingly enough, I converted the array to RAID 0 (64k stripe size) just to see how much better it would look, and the reads are totally unimpressive. Not sure why that is. RAID 0 should have almost double the read and write performance, while I can only see improvements in sequential writes. Everything else is below the baseline, which is a single, non-RAIDed drive.

[CrystalDiskMark screenshot]
 
If you want help, make your own thread - and post your full system specs when you do
 
If you want help, make your own thread - and post your full system specs when you do
I don't want help. I just wanted to share my experience on the matter discussed here with others who might be asking themselves similar questions in the future. That is why I didn't want to create my own thread, as I'm not expecting any help. Just sharing with others.
 