
Raid 0 or Raid 1? I'm puzzled, need advice

mariuski

New Member
Joined
Sep 5, 2004
Messages
22 (0.00/day)
I wonder...:

- What is the upside/downside of using RAID instead of plain SATA?

- Should I use RAID 0 or RAID 1? And why?

- Am I off track if I think it's possible to install the OS on two RAID disks and increase transfer speed and general performance by "mirroring" the two disks?
If so... is it true that you (sort of) double performance with regard to the disks' RPM? (Example: one disk at 10K RPM and one at 7200 RPM together in RAID = 7200 x 2?)

My point is: will using RAID increase performance on a PC (for gaming & internet)?

In general... is there someone out there who could & would write the facts about this issue in plain English?

Thanks
 

cram

New Member
Joined
Aug 16, 2004
Messages
23 (0.00/day)
Location
Saskatoon, Canada
- What is the upside/downside of using RAID instead of plain SATA?

- Should I use RAID 0 or RAID 1? And why?
RAID 0 makes two drives appear as a single big drive; think of it like taking the actual platters from both drives and putting them together in a single device. This lets you use all the space on both drives, so 120 GB x 2 = 240 GB, and in theory it should double your sequential transfer speed (the platters still spin at the same RPM; it's the throughput that roughly doubles, not the rotation speed). In practice it will fall short of the theoretical doubling, but it's still impressive. The downside is that if one drive gets fried, you lose all your data.

RAID 1 makes the drives mirror each other, so each drive holds exactly the same data. This may still give you some speed improvement because you have increased the read bandwidth to your drives, but in most circumstances not much. Also, since the drives hold the same data, 120 GB x 2 = 120 GB. The upside is reliability: if one drive gets fried, the other one has all the data backed up.
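The difference can be shown with a toy Python sketch (illustrative only, not how a real controller is implemented) of where numbered data blocks end up under each mode:

```python
# Toy model: place numbered data blocks on two disks under
# RAID 0 (striping) vs RAID 1 (mirroring).

def raid0_place(blocks, n_disks=2):
    """Round-robin striping: block i goes to disk i % n_disks."""
    disks = [[] for _ in range(n_disks)]
    for i, b in enumerate(blocks):
        disks[i % n_disks].append(b)
    return disks

def raid1_place(blocks, n_disks=2):
    """Mirroring: every disk holds a full copy of every block."""
    return [list(blocks) for _ in range(n_disks)]

blocks = list(range(8))
print(raid0_place(blocks))  # [[0, 2, 4, 6], [1, 3, 5, 7]] -> double capacity, no redundancy
print(raid1_place(blocks))  # two identical copies -> single capacity, survives one drive failure
```

Losing one disk in the RAID 0 layout destroys half of every file's blocks; in the RAID 1 layout the surviving disk still has everything.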


- Am I off track if I think it's possible to install the OS on two RAID disks and increase transfer speed and general performance by "mirroring" the two disks?
If so... is it true that you (sort of) double performance with regard to the disks' RPM? (Example: one disk at 10K RPM and one at 7200 RPM together in RAID = 7200 x 2?)
So yes, you can install the OS on the array and more or less double the effective throughput (but this would be RAID 0 (striping), not RAID 1 (mirroring), and a mixed-speed array runs at the pace of the slower disk).


- My point is: will using RAID increase performance on a PC (for gaming & internet)?
If you're willing to accept the risk of doubling your chances of data loss, then yes, you will see some performance improvement.
 

wazzledoozle

New Member
Joined
Aug 30, 2004
Messages
5,358 (0.75/day)
Location
Seattle
Processor X2 3800+ @ 2.3 GHz
Motherboard DFI Lanparty SLI-DR
Cooling Zalman CNPS 9500 LED
Memory 2x1 Gb OCZ Plat. @ 3-3-2-8-1t 460 MHz
Video Card(s) HIS IceQ 4670 512Mb
Storage 640Gb & 160Gb western digital sata drives
Display(s) Hanns G 19" widescreen LCD w/ DVI 5ms
Case Thermaltake Soprano
Audio Device(s) Audigy 2 softmod@Audigy 4, Logitech X-530 5:1
Power Supply Coolermaster eXtreme Power Plus 500w
Software XP Pro
Would you just enable RAID 0 or 1 in the BIOS when your hard drives are new, or is some special software needed?
 

cram

New Member
Joined
Aug 16, 2004
Messages
23 (0.00/day)
Location
Saskatoon, Canada
Would you just enable RAID 0 or 1 in the BIOS when your hard drives are new, or is some special software needed?
If your mobo has built-in RAID support or you have bought a RAID card, then (I think) all you need to do is enable it in the BIOS and/or install the appropriate drivers. It is also possible to do pure software RAID, but this is probably a bad idea since it requires considerable overhead in other system resources, and it will make it very difficult to access your data if you want to change/upgrade OSes.
 

mariuski

New Member
Joined
Sep 5, 2004
Messages
22 (0.00/day)
OK, that was helpful!

Since I don't mind playing "unsafe" (calculated risks), I guess RAID 0 would be the option for me then.

With two WD Raptor 10K RPM drives it should be a little better than with only one 7200 RPM SATA disk?

The RAID configuration on my MB is another story, but I guess I just have to read the manual carefully.
 

cram

New Member
Joined
Aug 16, 2004
Messages
23 (0.00/day)
Location
Saskatoon, Canada
With two WD Raptor 10K RPM drives it should be a little better than with only one 7200 RPM SATA disk?
Definitely, but be warned that while your benchmark figures will kick ass, the real-world benefits may not be significant. I think AnandTech did a review of Raptors in RAID 0 a while back; you might want to check that out.
 

wazzledoozle

New Member
Joined
Aug 30, 2004
Messages
5,358 (0.75/day)
Location
Seattle
Processor X2 3800+ @ 2.3 GHz
Motherboard DFI Lanparty SLI-DR
Cooling Zalman CNPS 9500 LED
Memory 2x1 Gb OCZ Plat. @ 3-3-2-8-1t 460 MHz
Video Card(s) HIS IceQ 4670 512Mb
Storage 640Gb & 160Gb western digital sata drives
Display(s) Hanns G 19" widescreen LCD w/ DVI 5ms
Case Thermaltake Soprano
Audio Device(s) Audigy 2 softmod@Audigy 4, Logitech X-530 5:1
Power Supply Coolermaster eXtreme Power Plus 500w
Software XP Pro
Does a hard drive configuration exist where the computer would see 2 identical drives as one, and when the data was sent to the drive controller, the controller would write every other data bit to each drive? And so when it read from the drives, it would theoretically double read/write speed? This would probably require a much faster southbridge-to-northbridge link, though it seems like the PCI-X architecture could handle a much faster controller.
 

cram

New Member
Joined
Aug 16, 2004
Messages
23 (0.00/day)
Location
Saskatoon, Canada
Does a hard drive configuration exist where the computer would see 2 identical drives as one, and when the data was sent to the drive controller, the controller would write every other data bit to each drive? And so when it read from the drives, it would theoretically double read/write speed?
This is basically RAID 0 (though striping is done in blocks, not individual bits). A better explanation than mine can be found here, and a more complete discussion of various RAID configurations in this Ars Technica article.

This would probably require a much faster southbridge-to-northbridge link, though it seems like the PCI-X architecture could handle a much faster controller.
I don't know much about this, but I don't think it would be a problem, since disk transfer rates are so much slower than just about everything else in the system.
 

C&C Freak 2K

New Member
Joined
Oct 19, 2004
Messages
205 (0.03/day)
Location
Applegate, CA
There are a lot of different modes. Here they are (information from http://www.acnc.com/raid.html):

RAID 0 is a striped disk array without fault tolerance. For sections labelled A through P across 4 drives, A, E, I and M would be on disk 1; B, F, J and N on disk 2; and so on. It's not a "true" RAID array because it is NOT fault tolerant.
RAID 1 is a mirroring and duplexing array. Each drive is mirrored to another, so both hold identical data.
RAID 2 is Hamming-code ECC. Each bit of a data word is written to a data disk drive, and each data word has its Hamming-code ECC word recorded on the ECC disks. On reads, the ECC code verifies correct data or corrects single-disk errors.
RAID 3 is Parallel Transfer with Parity. Each data block is subdivided ("striped") and written across the data disks. Stripe parity is generated on writes, recorded on the dedicated parity disk and checked on reads.
RAID 4 is Independent Data Disks with a Shared Parity Disk. Each entire block is written onto a data disk. Parity for same-rank blocks is generated on writes, recorded on the parity disk and checked on reads.
RAID 5 is Independent Data Disks with Distributed Parity Blocks. Each entire data block is written on a data disk; parity for blocks in the same rank is generated on writes, recorded in a distributed location and checked on reads.
RAID 6 is Independent Data Disks with Two Independent Distributed Parity Schemes. Data is striped at the block level across a set of drives, just like in RAID 5, and a second set of parity is calculated and written across all the drives; RAID 6 provides extremely high data fault tolerance and can sustain multiple simultaneous drive failures.
RAID 10 is a combination of mirroring and striping: data is striped across multiple mirrored (RAID 1) pairs, so every disk has a mirror partner.
RAID 50 is implemented as a striped (RAID level 0) array whose segments are RAID 5 arrays.
RAID 0+1 is mirroring and striping, similar to but different from RAID 10: in this configuration, a whole array of striped disks is mirrored as a whole.
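The RAID 0 layout in the list above (sections A through P across 4 drives) is just a round-robin mapping, which a quick illustrative Python sketch can reproduce:

```python
# Reproduce the RAID 0 example from the list: 16 sections A..P
# striped round-robin across 4 disks.
sections = [chr(ord('A') + i) for i in range(16)]  # 'A' .. 'P'
n_disks = 4

# Section i lands on disk i % n_disks.
layout = {d: [s for i, s in enumerate(sections) if i % n_disks == d]
          for d in range(n_disks)}

for d in range(n_disks):
    print(f"disk {d + 1}: {layout[d]}")
# disk 1: ['A', 'E', 'I', 'M']
# disk 2: ['B', 'F', 'J', 'N']  ... and so on
```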

For more information, search Google or try this: http://www.acnc.com/raid.html

EDIT: Also, EIDE doesn't have as much bandwidth as hard disks are capable of in the first place, so the northbridge-southbridge bandwidth won't bottleneck disk access speed; the disks themselves will already be the slow part. However, you -might- saturate your PCI bus, but I'm not sure, because I don't know all the numbers for the components involved in trucking data to and from the IDE card.
 

mariuski

New Member
Joined
Sep 5, 2004
Messages
22 (0.00/day)
C&C Freak 2K said:
There's a lot of different modes. [...]

Maybe a little too much info without any breaks in between. Thank you, but I lost it somewhere in there... :)
 

mariuski

New Member
Joined
Sep 5, 2004
Messages
22 (0.00/day)
OK!

I may have lost it, but I want to try another question...

What if I got two 15,000 RPM SCSI drives and put them in RAID 0?

It's not about the benchmarks, but rather about how the OS & other programs appear.

Hmm, there may also be something about me being Norwegian & a total gadget freak.

:)
 
Joined
Jun 4, 2004
Messages
480 (0.07/day)
System Name Blackbird
Processor AMD Threadripper 3960X 24-core
Motherboard Gigabyte TRX40 Aorus Master
Cooling Full custom-loop water cooling, mostly Aqua Computer and EKWB stuff!
Memory 4x 16GB G.Skill Trident-Z RGB @3733-CL14
Video Card(s) Nvidia RTX 3090 FE
Storage Samsung 950PRO 512GB, Crusial P5 2TB, Samsung 850PRO 1TB
Display(s) LG 38GN950-B 38" IPS TFT, Dell U3011 30" IPS TFT
Case CaseLabs TH10A
Audio Device(s) Edifier S1000DB
Power Supply ASUS ROG Thor 1200W (SeaSonic)
Mouse Logitech MX Master
Keyboard SteelSeries Apex M800
Software MS Windows 10 Pro for Workstation
Benchmark Scores A lot.
Maybe I have to clear up a few things first:

If you put 2 (say, identical) drives into a RAID 0 array, your disks don't "double" their rotation speeds, not even theoretically...

All that happens is that the RAID controller splits the data stream on a block basis (say, 64 KB per block) into 2 streams (or, for more drives, into even more streams) and passes them to the individual drives. The result is a higher possible bandwidth from the array, but the access time remains the same as that of the slowest array member. A disk with double the rotation speed has definitely lower access times, which in most cases matters more than raw bandwidth.
You should always keep in mind that today's hard disk drives can deliver enough bandwidth that in most cases it does not make much sense to couple more than 2 disks in a RAID 0 configuration: a WD Raptor, for instance, delivers a peak of 70 MB/s or so, and if you put two of them together you can never get that doubled bandwidth out of the array if you drive it on a regular PCI controller (you would need a PCI-E or PCI64/PCI-X based card here to get the needed bandwidth, but those are only available on high-end workstation boards, and PCI-E cards haven't hit the market yet).

So things get 'worse' if you put two 15K SCSI drives together in RAID 0 (today's 15K SCSI drives can deliver a bandwidth of over 90 MB/s): besides the doubled size, I think there is nearly no gain in bandwidth on a standard PCI-bus controller. So if you don't need the doubled space, put them together as one RAID 1 array and you will never have to worry about data safety. If one drive fails in a RAID 0 array, the whole array is destroyed; in a RAID 1 array, all you have to do is replace the failed drive and you are happy.
Another goodie: if your RAID controller is clever enough, you can get faster access times on read requests in a RAID 1 configuration, because the controller can take the member of the array that delivers the data first and doesn't have to wait until all disks have found the requested data...

cheers
breit
 
Joined
Jun 4, 2004
Messages
480 (0.07/day)
System Name Blackbird
Processor AMD Threadripper 3960X 24-core
Motherboard Gigabyte TRX40 Aorus Master
Cooling Full custom-loop water cooling, mostly Aqua Computer and EKWB stuff!
Memory 4x 16GB G.Skill Trident-Z RGB @3733-CL14
Video Card(s) Nvidia RTX 3090 FE
Storage Samsung 950PRO 512GB, Crusial P5 2TB, Samsung 850PRO 1TB
Display(s) LG 38GN950-B 38" IPS TFT, Dell U3011 30" IPS TFT
Case CaseLabs TH10A
Audio Device(s) Edifier S1000DB
Power Supply ASUS ROG Thor 1200W (SeaSonic)
Mouse Logitech MX Master
Keyboard SteelSeries Apex M800
Software MS Windows 10 Pro for Workstation
Benchmark Scores A lot.
mariuski said:
[...] but rather about how OS & other programs appear.[...]

If I understand that right, you mean how the array, or rather the disks, appear to the OS...

That depends on how you create your array:

  • If you have a hardware SCSI RAID controller, then the OS (and of course all other applications too) sees the array as one SCSI disk with twice the size of the smaller of the two disks.
  • If you have a standard (non-RAID) SCSI controller and you use some kind of software RAID (the built-in RAID capability of Windows, for instance, or third-party software), then the OS will see both disks individually and only the applications see the array as one disk.

Note: if you create an array of disks using dynamic disks in Windows, there is another method of creating one logical disk, called JBOD, which means that only the space is concatenated; there is no gain in performance because the block-wise data stream split done in RAID 0 is not used here.
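To illustrate that note, here is a hypothetical Python sketch (function names made up for illustration) of how a logical block number could map to a physical disk under JBOD concatenation versus RAID 0 striping:

```python
# Toy mapping of a logical block address (lba) to (disk, block-on-disk)
# for two disks of `disk_blocks` blocks each.

def jbod_map(lba, disk_blocks):
    """Concatenation (JBOD): fill disk 0 completely, then move on to disk 1."""
    return divmod(lba, disk_blocks)  # (disk, offset)

def raid0_map(lba, n_disks=2):
    """Striping (RAID 0): alternate consecutive blocks between the disks."""
    return (lba % n_disks, lba // n_disks)  # (disk, offset)

print(jbod_map(5, disk_blocks=4))  # (1, 1): sequential blocks stay on one disk until it is full
print(raid0_map(5))                # (1, 2): neighbouring blocks hit different disks
```

This is why JBOD gives no speed gain: a sequential read hits only one disk at a time, while striping keeps both disks busy.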

hope this helps a little in understanding raid arrays...

;)

--breit
 

mariuski

New Member
Joined
Sep 5, 2004
Messages
22 (0.00/day)
Hey!

Thanks to all for bringing me more up to date on this issue.
I still don't know exactly what I'm gonna do, but there's got to be something that makes the PC go faster... :)
 
Joined
Jun 4, 2004
Messages
480 (0.07/day)
System Name Blackbird
Processor AMD Threadripper 3960X 24-core
Motherboard Gigabyte TRX40 Aorus Master
Cooling Full custom-loop water cooling, mostly Aqua Computer and EKWB stuff!
Memory 4x 16GB G.Skill Trident-Z RGB @3733-CL14
Video Card(s) Nvidia RTX 3090 FE
Storage Samsung 950PRO 512GB, Crusial P5 2TB, Samsung 850PRO 1TB
Display(s) LG 38GN950-B 38" IPS TFT, Dell U3011 30" IPS TFT
Case CaseLabs TH10A
Audio Device(s) Edifier S1000DB
Power Supply ASUS ROG Thor 1200W (SeaSonic)
Mouse Logitech MX Master
Keyboard SteelSeries Apex M800
Software MS Windows 10 Pro for Workstation
Benchmark Scores A lot.
Yeah, a 15K SCSI drive can do a lot for start-up times and overall performance, but is it worth the cost?

If you want a faster PC, you first have to define 'faster': is it faster graphics? Faster I/O performance? A smoother feel / more responsiveness?

For responsiveness and a smoother feel, maybe you have to go to a dual-processor system (I would prefer that anyway). This is the most expensive way to get more power under the hood, but if you ever experience such a system you will never want a uniprocessor system back... ;)
If you only want more graphics power, simply buy a faster card, or try to overclock your current one if you only want a small gain.
You can also buy a new processor or try to overclock your old one...

So what exactly do you want to go faster? ;)

--breit
 

wazzledoozle

New Member
Joined
Aug 30, 2004
Messages
5,358 (0.75/day)
Location
Seattle
Processor X2 3800+ @ 2.3 GHz
Motherboard DFI Lanparty SLI-DR
Cooling Zalman CNPS 9500 LED
Memory 2x1 Gb OCZ Plat. @ 3-3-2-8-1t 460 MHz
Video Card(s) HIS IceQ 4670 512Mb
Storage 640Gb & 160Gb western digital sata drives
Display(s) Hanns G 19" widescreen LCD w/ DVI 5ms
Case Thermaltake Soprano
Audio Device(s) Audigy 2 softmod@Audigy 4, Logitech X-530 5:1
Power Supply Coolermaster eXtreme Power Plus 500w
Software XP Pro
How is the performance of SATA raid configurations?
 
Joined
Jun 4, 2004
Messages
480 (0.07/day)
System Name Blackbird
Processor AMD Threadripper 3960X 24-core
Motherboard Gigabyte TRX40 Aorus Master
Cooling Full custom-loop water cooling, mostly Aqua Computer and EKWB stuff!
Memory 4x 16GB G.Skill Trident-Z RGB @3733-CL14
Video Card(s) Nvidia RTX 3090 FE
Storage Samsung 950PRO 512GB, Crusial P5 2TB, Samsung 850PRO 1TB
Display(s) LG 38GN950-B 38" IPS TFT, Dell U3011 30" IPS TFT
Case CaseLabs TH10A
Audio Device(s) Edifier S1000DB
Power Supply ASUS ROG Thor 1200W (SeaSonic)
Mouse Logitech MX Master
Keyboard SteelSeries Apex M800
Software MS Windows 10 Pro for Workstation
Benchmark Scores A lot.
wazzledoozle said:
How is the performance of SATA raid configurations?

If you give your parallel-ATA disks one channel each, then the performance of a parallel-ATA array should be nearly identical to that of a serial-ATA array using the same disk model. On the other hand, performance greatly depends on the controller being used, so you can't answer that in general.

Here are some ATTO screenshots of real RAID configurations. Since the controllers used are also available in serial-ATA versions, there should be no difference in performance compared to the serial-ATA version.
The controller used is a 3ware Escalade 7500-8 parallel-ATA RAID controller (8 separate channels, only 1 disk per channel, PCI 64-bit/33 MHz); all tests were run under Windows Server 2003 using NTFS on basic disks (write cache enabled):

RAID 0 with 6x Hitachi/IBM 120 GB 8 MB cache 7200 RPM disks, controller in 64-bit/33 MHz mode: [screenshot]

RAID 5 with 8x Hitachi/IBM 120 GB 8 MB cache 7200 RPM disks, controller in 64-bit/33 MHz PCI mode: [screenshot]

RAID 5 with 5x Hitachi/IBM 250 GB 8 MB cache 7200 RPM disks, controller in 32-bit/33 MHz PCI mode: [screenshot]
As you can see, the 5-disk RAID 5 array maxed out at 115-117 MB/s read speed, which is the limit of the 32-bit PCI bus. On the 64-bit PCI bus you see a limit of 180-184 MB/s for the RAID 0 array. Because the 120 GB disks are slower than the 250 GB model, you can't say what the limit is here; maybe it's the PCI bus, maybe the disks themselves. But aside from that, 184 MB/s is a really satisfying result, I might say! :D
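Those bus limits can be sanity-checked with simple arithmetic (theoretical peaks only; real PCI throughput is lower due to protocol overhead, which is consistent with the measured 115-117 MB/s on the 32-bit bus):

```python
# Back-of-the-envelope PCI bandwidth: peak = bus width in bytes * clock.
# Using a flat 33 MHz here; the exact clock is 33.33 MHz, giving ~133 MB/s.
def pci_peak_mb_s(width_bits, clock_mhz):
    return width_bits // 8 * clock_mhz  # MB/s

print(pci_peak_mb_s(32, 33))  # 132 -> 32-bit/33 MHz PCI tops out around 133 MB/s
print(pci_peak_mb_s(64, 33))  # 264 -> 64-bit/33 MHz PCI roughly doubles that
```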

If you wonder about the 'slow' write speeds of the RAID 5 arrays: this is because RAID 5 is an array with parity, and when you write something to it, the controller has to calculate the parity information for each block written and write that to the array as well. A small write thus turns into reads of the old data and old parity plus writes of the new data and new parity, and that is what you pay for safety. But believe me, it's worth it... ;)
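The parity mechanism itself is just XOR; a minimal Python sketch (illustrative, not real controller code) shows how a lost block is rebuilt:

```python
# RAID 5 style parity: the parity block is the byte-wise XOR of the data
# blocks, so any single lost block can be rebuilt from the remaining ones.
from functools import reduce

def xor_blocks(blocks):
    """Byte-wise XOR of equal-length blocks."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

data = [b"AAAA", b"BBBB", b"CCCC"]
parity = xor_blocks(data)

# Simulate losing the second data block: rebuild it from the rest + parity.
rebuilt = xor_blocks([data[0], data[2], parity])
print(rebuilt == data[1])  # True
```

The extra reads and writes needed to keep `parity` up to date on every small write are exactly where the RAID 5 write penalty comes from.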

If you want more information about RAID performance, maybe you should specify what kind of disks you are planning to use and what type of RAID controller you have in mind.

One last note: if you are using Windows XP, you should be aware of the 'SCSI bug': most RAID controllers are recognised as SCSI controllers, so the RAID arrays appear to the system as SCSI disks. This results in extremely low write performance, but it can be bypassed by converting your array into a dynamic disk. Problem is: you can only read dynamic disks from Windows, and you are unable to boot from them.

cheers
breit

PS: sorry for the long post, but I thought this could be helpful... :rolleyes:
 