
Best Raid Controller Card?

Discussion in 'General Hardware' started by fraya713, Jan 23, 2014.

  1. fraya713

    Joined:
    May 1, 2007
    Messages:
    320 (0.13/day)
    Thanks Received:
    4
    Looking to replace my motherboard RAID config with a standalone dedicated RAID controller card.

    Currently set up in a RAID 10 with 4 of these guys:
    http://www.amazon.com/dp/B00691WMJG/?tag=tec06d-20

    What are your thoughts? I definitely think I can get better performance.

    Thanks all!
  2. Steevo

    Steevo

    Joined:
    Nov 4, 2005
    Messages:
    7,989 (2.59/day)
    Thanks Received:
    1,084
    How much do you want to spend? There are cards out there that cost over a grand and some that cost $20. Why do you want to use those drives specifically? Do you have available PCIe slots?
    10 Million points folded for TPU
  3. fraya713

    Joined:
    May 1, 2007
    Messages:
    320 (0.13/day)
    Thanks Received:
    4
    I already bought the drives, so they're just what I'm using.
    I'm not looking to spend anything over the $500-ish range if I can help it.

    It's a personal PC that I use to game a lot, and I also do some work on it.
    Just looking to get the RAID speed without it impacting my processing performance, if I can help it.

    Thanks again!
  4. Mindweaver

    Mindweaver Moderato®™ Staff Member

    Joined:
    Apr 16, 2009
    Messages:
    5,055 (2.76/day)
    Thanks Received:
    2,606
    Location:
    Statesville, NC
    I would not use those drives in a RAID 10 array; I would use enterprise drives in that array. I think you'll only run into problems using your current drives.
    Crunching for Team TPU
  5. Easy Rhino

    Easy Rhino Linux Advocate

    Joined:
    Nov 13, 2006
    Messages:
    13,203 (4.86/day)
    Thanks Received:
    3,126
    If you care about data integrity, you won't use a RAID card with those hybrid drives. RAID works by writing data across several physical disks, while hybrid drives work by storing heavily used data in an internal cache. Your RAID card will not recognize the differing data held in each drive's cache, which could lead to corruption and possibly a file-system disaster.
    AthlonX2 says thanks.
  6. Mindweaver

    Mindweaver Moderato®™ Staff Member

    Joined:
    Apr 16, 2009
    Messages:
    5,055 (2.76/day)
    Thanks Received:
    2,606
    Location:
    Statesville, NC
    Well said! :toast:
    Crunching for Team TPU
  7. Steevo

    Steevo

    Joined:
    Nov 4, 2005
    Messages:
    7,989 (2.59/day)
    Thanks Received:
    1,084
    As long as things like NCQ are turned off on the controller, and you are aware that the "SSD" portion of the drives will wear out faster than intended due to the cache policy of the RAID card versus the cache policy of the disks.

    http://www.newegg.com/Product/Product.aspx?Item=N82E16816115059

    This, plus the correct battery backup for the card and a good UPS for the system with low-power shutdown, and there should be few issues. You might saturate the SATA II bus on the card, but with the cache and each drive's throughput you will only notice a difference when running benchmarks, and perhaps save a second on Windows boot or other disk-intensive pure read or write activity. But if you were looking for performance there, you should have gone a different direction with the disks/storage anyway.
    10 Million points folded for TPU
  8. Aquinus

    Aquinus Resident Wat-man

    Joined:
    Jan 28, 2012
    Messages:
    5,562 (6.83/day)
    Thanks Received:
    1,752
    Location:
    Concord, NH
    I agree with others here, don't go with hybrid drives in a RAID setup. That's only asking for trouble.

    Personally, I like LSI cards. We have the 4i variant of this one in one of our servers at work, and it's our best-performing RAID card (in RAID-6 with 4 drives, versus some of our 6-disk arrays with 3Ware/Adaptec cards).

    I always get at least WD Blacks for RAID. I would recommend WD RE series drives or Seagate Constellation ES drives if you're "getting serious" about SATA RAID.

    Edit: Are you still using the rig on your account or is that old? What are you putting this into?
    fraya713 says thanks.
  9. fraya713

    Joined:
    May 1, 2007
    Messages:
    320 (0.13/day)
    Thanks Received:
    4

    No, those specs are really old. I'm certainly not talking enterprise or corporate-level drive redundancy here, just a simple RAID setup that isn't over the top in price and where I'm not losing performance to my motherboard RAID config. I've confirmed that the SSHDs can be set up in RAID 0 and 1; however, 5, 6, and 10 haven't really been tested (that said, I've been running mine for a good 6-8 months without issue).


    My PC Specs
    Operating System: Microsoft Windows 7 64 bit Ultimate Edition
    Processor: Intel Core i7-920 Bloomfield 2.66GHz (overclocked to 4.0GHz) Quad-Core Processor
    Motherboard: ASUS P6T Deluxe V2 LGA 1366 Intel X58 ATX Intel Motherboard
    Cooling: Prolimatech Megahalems Rev.B CPU Cooler with Antec 120mm Blue LED Fan + Antec 120mm Blue LED Case Fan (x8) + EVERCOOL 50mm Case Fan (x3) + Antec 200mm top fan
    Memory: Kingston HyperX 12GB DDR3 SDRAM 1600 Desktop Memory
    Video Card(s): EVGA ACX Cooler GeForce GTX 780 3GB GDDR5
    Hard Disk(s): Seagate Momentus XT 750 GB 7200RPM SATA 6Gb/s 32 MB Cache 2.5 Inch Solid State Hybrid Drive (x4) in RAID 10 for 1.36 TB
    CD/DVD Drive: Pioneer Black SATA Blu-ray Disc/DVD/CD Writer
    LCD/CRT Monitor: BenQ High Performance Gaming 120hz 27-Inch Screen LED-Lit Monitor
    Case: Antec Twelve Hundred Black Steel ATX Full Tower Computer Case
    Sound Card: ASUS Xonar Essence STX Virtual 7.1 Headphone AMP Card
    Power Supply: EVGA SuperNOVA NEX1500 Classified 1500W 80 PLUS GOLD Certified Modular Power Supply
    Speakers / Headphones: SENNHEISER PC350 Circumaural Headset
    Card Reader: AFT PRO-35U All-in-one USB 2.0 Card Reader
    Keyboard: Logitech G19 USB Gaming Keyboard
    Mouse / Mousepad: RAZER DeathAdder Black 3500 dpi Mouse
    / RAZER eXactMat and eXactRest
    Other Hardware: Logitech QuickCam Orbit USB 2.0 WebCam
  10. Steevo

    Steevo

    Joined:
    Nov 4, 2005
    Messages:
    7,989 (2.59/day)
    Thanks Received:
    1,084
    Why RAID 10 and not RAID 5? Just as much redundancy and failure tolerance, but more storage and speed.
    10 Million points folded for TPU
  11. fraya713

    Joined:
    May 1, 2007
    Messages:
    320 (0.13/day)
    Thanks Received:
    4
    I decided on RAID 10 because I could potentially rebuild my array faster if a drive did fail. I also based this decision on never hitting the 1.36TB of my combined array, so more space would've just been simply that: more space. Also, I believe RAID 10 is more reliable and has more integrity when rebuilding against corrupted data, as it has two sources to compare from.

    Basically I made the decision based on my needs at the time and so far it hasn't been an issue.

    After doing a little reading, I may need to check whether Smart Response Technology is even turned on in my Intel RAID application :rolleyes: to use that cache space I've had.

    Ultimately, I've had no problems with my disks or my RAID configuration, but a co-worker and I got into a RAID conversation, and he brought up the fact that motherboard RAID configs have a processing-performance impact, so I wanted to see if it was worth venturing into a dedicated RAID controller for my PC.
    Last edited: Jan 24, 2014
  12. The Von Matrices

    The Von Matrices

    Joined:
    Dec 16, 2010
    Messages:
    1,036 (0.85/day)
    Thanks Received:
    308
    Replying directly to the OP, if you're doing RAID 10, you will see negligible performance benefit from using a dedicated controller. The SSHDs will not max out the uplink to the processor, and there is no computation of parity data that a RAID controller could accelerate. The only real advantage of a dedicated card would be that you could move the drive among platforms without having to reformat.

    RAID 5 won't be any faster reading, and it will be much slower writing due to the parity calculations. The only advantage of RAID 5 is that you would get 3/4 the max capacity versus 1/2 with RAID 10. Of course, RAID 5 has its own list of horror stories during recovery regarding corrupt parity data, which you do not have with RAID 10.
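    To put numbers on the capacity comparison above, here is a quick sketch using four 750GB drives like the OP's (the function and figures are illustrative, not from any vendor's tooling):

```python
# Usable capacity by RAID level, all drives equal size (sketch).
def usable_capacity(level, drives, size_gb):
    """Return usable capacity in GB for a few common RAID levels."""
    if level == 0:
        return drives * size_gb         # striping: no redundancy
    if level == 1:
        return size_gb                  # mirroring: one drive's worth
    if level == 5:
        return (drives - 1) * size_gb   # one drive's worth lost to parity
    if level == 10:
        return (drives // 2) * size_gb  # striped mirrors: half the raw space
    raise ValueError("unsupported RAID level")

# Four 750GB drives, as in the OP's array:
print(usable_capacity(10, 4, 750))  # 1500 GB (1/2 of raw)
print(usable_capacity(5, 4, 750))   # 2250 GB (3/4 of raw)
```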

    The cache on the SSHDs is non-volatile and is managed by the disk's controller. To the RAID controller, it's no different than a conventional hard drive; I see no reason why the SSHDs would have any less data integrity than conventional hard drives. The main issue would be with the lack of time limited error recovery causing drives to drop from the array, which is just as much of a problem in the Western Digital Black drives you recommend.
    fraya713 says thanks.
  13. Steevo

    Steevo

    Joined:
    Nov 4, 2005
    Messages:
    7,989 (2.59/day)
    Thanks Received:
    1,084
    I haven't used a controller card that couldn't rebuild on the fly since the Intel Pentium Pro days. RAID 5 performance degradation during a live rebuild on a 20-user system was, for me, about 25% for 12 hours. I could have even scheduled it for offline that night, but I didn't want to wait around and babysit it.

    Hardware RAID 5 is, with real-world data, close to as fast as two drives in RAID 0 for reads, and only slightly slower in writes. Older cards with slow CPUs, a slower SCSI bus, and multiple drives on the same bus were slower. I have personally tested four 3TB drives in multiple configurations, and RAID 5 was the safest, highest performance for the dollar.
    Last edited: Jan 24, 2014
    10 Million points folded for TPU
  14. fraya713

    Joined:
    May 1, 2007
    Messages:
    320 (0.13/day)
    Thanks Received:
    4
    Thanks for that. Ultimately, my main concern was the RAID leeching CPU performance. I game a lot, and though I understand there isn't much data reading while I'm actually in game (besides map loading, etc.), if anything is impacting my CPU performance I definitely want to nip it in the bud.

    The question was more academic, and I've learned a lot from this: it seems a RAID controller would really only be ideal and worthwhile in a dedicated SAN/NAS situation for direct redundant backup or file storage.

    Understood, and price comparison was definitely a concern, hence why I went with the SSHDs as the best bang for the buck when comparing size and performance.
    So as far as the drive choice and RAID choice go, it's pretty much explained.
  15. Steevo

    Steevo

    Joined:
    Nov 4, 2005
    Messages:
    7,989 (2.59/day)
    Thanks Received:
    1,084
  16. newtekie1

    newtekie1 Semi-Retired Folder

    Joined:
    Nov 22, 2005
    Messages:
    19,486 (6.35/day)
    Thanks Received:
    5,727
    You'd be surprised at how little CPU power is really used by software/firmware RAID controllers. Yeah, they use some, but it is next to nothing with today's CPUs. It used to be an issue worth worrying about back in the PIII days, when a software RAID controller might use 25% of a 1GHz processor. But that really only amounts to 250MHz. With today's more efficient processors and multiple cores, you're talking maybe 100MHz on one of four cores. You'll never notice that.
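    The MHz arithmetic above works out like this as a back-of-the-envelope sketch (the function name and the 3GHz clock are illustrative assumptions):

```python
# Fraction of total CPU consumed by a software RAID layer (rough estimate).
def raid_overhead_fraction(used_mhz, core_mhz, cores):
    """MHz spent on one core, as a fraction of all cores' total capacity."""
    return used_mhz / (core_mhz * cores)

# PIII era: ~250MHz of a single 1GHz core
print(raid_overhead_fraction(250, 1000, 1))   # 0.25 -> 25%

# Modern quad-core at ~3GHz, ~100MHz on one core
print(raid_overhead_fraction(100, 3000, 4))   # ~0.008 -> under 1%
```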

    At this point, you'd be better off spending that money on a decent-sized SSD for your OS and main programs/games, and using the hard drives as a storage area. If you haven't come close to filling the 1.36TB you have, then that $500 you were willing to spend on a RAID card would be far better spent on a 480GB SSD like this one: http://www.newegg.com/Product/Product.aspx?Item=N82E16820226255
    fraya713 says thanks.
    Crunching for Team TPU
  17. Easy Rhino

    Easy Rhino Linux Advocate

    Joined:
    Nov 13, 2006
    Messages:
    13,203 (4.86/day)
    Thanks Received:
    3,126
    I have no doubt RAID 0 works with SSHDs, but for how long, and how stable? With traditional hard drives in RAID, if you lose power the array remains intact since all information is available. If you lose power with SSHDs, you lose what is in the cache, which apparently is controlled by the drive's own controller, meaning it has to be stored. Since the cache won't be written in time, and your cache is part of the file system, you could lose your entire file system to corruption, since your drive controller now has to tell your RAID card what info it has in cache.

    I really don't see how this is a good option at all. If you want speed, buy two SSDs and use software RAID 0 to great effect. That would be cheaper than a $500 RAID card of which you will only be using 25% of the ability.
  18. yogurt_21

    Joined:
    Feb 18, 2006
    Messages:
    4,277 (1.43/day)
    Thanks Received:
    537
    This, to me, makes the most sense for your setup. If you're already using mobo RAID, you're not going to see anything close to a $500 value return from a RAID card. But a $300 480GB SSD? You'd get a value return out of that: $200 cheaper, likely faster, while also offering more storage. Also, you wouldn't have to worry about backing up data and re-creating the RAID. You could keep your RAID 10 as-is and simply install Windows on the SSD.
    fraya713 says thanks.
  19. Useful Idiot New Member

    Joined:
    Jul 19, 2013
    Messages:
    7 (0.03/day)
    Thanks Received:
    4
    Where exactly are you getting this information? A link, please? This is flat-out misleading and wrong. The SSHD handles LBA abstraction; it is transparent to the RAID controller.
    No, it was not.

    So much misinformation. You would not want to turn off NCQ; that would drop you to single-queue-depth performance. The drives internally manage the cache of the SSHD, and the cache of the RAID controller will only increase the longevity of the SSHD. The RAID controller cache takes random data and sequentializes it, which is friendlier to NAND.

    it will be fine :)

    You might wish to read a primer on RAID 5. RAID 5 produces faster reads (roughly x the number of drives) but typically only writes at the speed of one drive.

    Agreed. Unless you are pushing 250K+ IOPS, CPU utilization will be negligible. Storage Spaces would allow for concatenation of the two SSHDs, and a fast SSD boot disk is uber. The bad thing about SSHDs: at the end of the day they aren't fast, only 5,400 RPM. No getting around that. Buy an SSD.
    fraya713, Steevo and The Von Matrices say thanks.
  20. The Von Matrices

    The Von Matrices

    Joined:
    Dec 16, 2010
    Messages:
    1,036 (0.85/day)
    Thanks Received:
    308
    I do not understand why you think traditional hard drives are better. I interpret that you're worried about write-back caching, but the SSHD's NAND write-back cache is non-volatile and won't be lost if the disk loses power. I think what you're also missing is that the SSHDs he's talking about are completely self-contained. The RAID controller sees only one volume per SSHD; all the caching is done by the drive's controller and is completely transparent to the RAID controller.

    In SSHDs, the only data that can be lost in a power loss is the small amount of data in the controller's SDRAM. However, this is nothing exclusive to SSHDs; all storage media, including pure HDDs and SSDs (except a few SandForce models), have this SDRAM cache. Certain disks can be forced to operate in write-through mode, which greatly improves data security in the event of a power loss, but you likely won't find that feature in anything but enterprise-level RAID controllers and hard drives.

    Thank you for helping to clear up the misinformation.

    RAID 5 can theoretically write at the speed of an (N-1)-drive RAID 0; of course, most RAID 5 arrays are constrained by the speed of parity calculations and in effect write much slower than that. However, saying it only writes at the speed of one drive is a vast oversimplification and ignores the varying speeds at which RAID controllers and CPUs can calculate parity.

    Modern RAID controllers can compute RAID 5 parity at about 3/4 the speed of RAID 0.
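    For reference, the parity being discussed is just a byte-wise XOR across the data stripes, which is also why any single lost stripe can be rebuilt from the survivors. A minimal sketch (not any controller's actual firmware):

```python
# RAID 5 parity: XOR of all data stripes, byte by byte (sketch).
def xor_parity(stripes):
    """Compute the parity block for a list of equal-length data stripes."""
    parity = bytearray(len(stripes[0]))
    for stripe in stripes:
        for i, byte in enumerate(stripe):
            parity[i] ^= byte
    return bytes(parity)

data = [b"AAAA", b"BBBB", b"CCCC"]           # three data stripes
parity = xor_parity(data)

# Lose the middle stripe, then rebuild it from the rest plus parity:
rebuilt = xor_parity([data[0], data[2], parity])
assert rebuilt == data[1]
```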

    [Image: RAID 0 performance benchmark]
    [Image: RAID 5 performance benchmark]
    Last edited: Jan 24, 2014
  21. Useful Idiot New Member

    Joined:
    Jul 19, 2013
    Messages:
    7 (0.03/day)
    Thanks Received:
    4
    ...hence the inclusion of the word 'typically' :). In NAS usage and with some 'cheaper' RAID controllers and HBAs, this can be a reality due to parity overhead.
  22. Easy Rhino

    Easy Rhino Linux Advocate

    Joined:
    Nov 13, 2006
    Messages:
    13,203 (4.86/day)
    Thanks Received:
    3,126
    Well I learned something new today.
  23. newtekie1

    newtekie1 Semi-Retired Folder

    Joined:
    Nov 22, 2005
    Messages:
    19,486 (6.35/day)
    Thanks Received:
    5,727
    Yep, in fact my RAID 5 reads and writes at almost the same speed. It is a constant 120MB/s+ across the entire array, which maxes out my gigabit network, so I'm cool with that. I think the array is actually limited by the fact that it runs off a PCI-E x1 card; I'm kind of kicking myself for not getting the PCI-E x4 card.

    The OP's SSHD drives are actually 7,200RPM; it is only the latest generation that is 5,400RPM, and the 3.5" drives are also 7,200RPM.
    Crunching for Team TPU
  24. Mindweaver

    Mindweaver Moderato®™ Staff Member

    Joined:
    Apr 16, 2009
    Messages:
    5,055 (2.76/day)
    Thanks Received:
    2,606
    Location:
    Statesville, NC
    Really, I think not? I would not use those drives for any type of RAID array, because I care about my data. They are not built for that purpose. The money spent on those drives could have easily gotten him enterprise drives with error recovery that run at 10K RPM. I have, and have used, many RAID arrays in a production environment. Now, can he use those drives? Yes, but why? I'm just suggesting that his money could have been better spent. He bought the drives thinking they would be faster because of the hybrid SSD cache, but in reality he'd be better off getting regular SSDs. You are putting a lot of faith in a hybrid drive that is not built for RAID.

    Really? Maybe in a software RAID array, but with a hardware RAID array the write is much faster. Below is one of my RAID 5 arrays using 4x 150GB WD Raptor 10K drives and a 3ware 9650SE. Keep in mind it had about 15+ users reading and writing when I performed this test, but without anyone using it, it's closer to 350MB/s read and 350MB/s write.

    [Image: RAID 5 array benchmark]
    Crunching for Team TPU
  25. Steevo

    Steevo

    Joined:
    Nov 4, 2005
    Messages:
    7,989 (2.59/day)
    Thanks Received:
    1,084
    [Image: LSI 9265-8i MegaRAID SAS RAID controller review benchmark]

    See that little segment that says 8D R5 Fast Path? It's all of 73 slower, maximum, than the RAID 0 option.


    On a setup like this, with multiple controller layers between the queue and the physical medium, NCQ can increase the overhead and the queuing time in real-world reads and writes. It's the RAID card's job to handle the disks; use the FastPath setup on these LSI cards. They wouldn't have built it and shown such great numbers if their logic were somehow worse than the standard.


    Lastly, TRIM will not pass through to these disks with a RAID card, and other features Windows implements for SSDs will not pass through either. Since the RAID card will be blissfully unaware there is a cache on the hybrid drives, it will treat them as standard mechanical drives and perform standard mechanical drive operations, which will run through the cache and wear it out ever so slightly faster.
    10 Million points folded for TPU
