
Odd NAS Array Performance

Joined Sep 13, 2016
Messages 10 (0.00/day)
System Name Workstation
Processor 2x Intel Xeon E5-2697 V3
Motherboard Supermicro X10DRi-T
Cooling 2x Noctua NH-U12DX i4
Memory 4x 32GB Samsung ECC 2400MHz
Video Card(s) NVIDIA GeForce Titan X (Maxwell)
Storage Samsung 6.4TB PM1725a + Intel 800GB 750 series + 2x Micron 4TB 5100 ECO
Display(s) 3x BenQ 3200PT + Dell UltraSharp UP2720Q
Case NZXT S340 White
Audio Device(s) Grace Design m9XX
Power Supply Corsair HX850
Mouse Logitech MX Master 3
Keyboard CM NovaTouch TKL (Topre with MX-compatible stems!)
Software Windows 10
So I have just set up a Btrfs RAID6 array across twelve 4TB hard drives in my NAS and decided to benchmark it. I used CrystalDiskMark and noticed that, while the write speeds were what I was expecting, the sequential read speeds were much slower. File transfers exhibit the same behaviour.
I would have thought read and write speeds on spinning drives in Btrfs RAID6 would at least be comparable, if not skewed in favour of reads (since writes have to compute parity and reads do not). Does anyone have a similar experience, or able to explain why this is not the case?
[Attached screenshot of CrystalDiskMark results: DiskMark64_2016-09-15_00-22-42.png]
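For a rough baseline: a 12-drive RAID6 stripe carries 10 data chunks plus P and Q parity, so full-stripe sequential reads, which skip the parity, should scale at least as well as writes, which have to compute it. To rule out SMB and client-side caching on my end, here is a quick local sequential test I can run on the NAS itself (a rough sketch in Python; /mnt/array is a placeholder path, it needs root to drop the page cache, and fio would be the more rigorous tool):

```python
import os
import time

PATH = "/mnt/array/benchfile"   # placeholder: a file on the Btrfs array
SIZE = 4 * 1024**3              # 4 GiB test file
BLOCK = 1024 * 1024             # 1 MiB per write/read

buf = os.urandom(BLOCK)

# Sequential write, fsync'd so the timing is honest
start = time.monotonic()
with open(PATH, "wb") as f:
    for _ in range(SIZE // BLOCK):
        f.write(buf)
    f.flush()
    os.fsync(f.fileno())
print(f"write: {SIZE / (time.monotonic() - start) / 1e6:.0f} MB/s")

# Drop the page cache so the read hits the disks, not RAM (requires root)
with open("/proc/sys/vm/drop_caches", "w") as f:
    f.write("3")

# Sequential read
start = time.monotonic()
with open(PATH, "rb") as f:
    while f.read(BLOCK):
        pass
print(f"read: {SIZE / (time.monotonic() - start) / 1e6:.0f} MB/s")

os.remove(PATH)
```

If the local read numbers are fine, the problem is the network share rather than the array.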
 
https://btrfs.wiki.kernel.org/index.php/RAID56
The parity RAID code has multiple serious data-loss bugs in it. It should not be used for anything other than testing purposes.


Certainly odd, but with that recent recommendation I'd look into implementing redundancy by another means instead of troubleshooting.
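If you would rather move the pool off the parity profiles entirely, a balance with convert filters will rewrite it in place. A rough sketch, driving the btrfs CLI from Python (/mnt/array is a placeholder mount point, and note RAID10 halves usable capacity on 12 drives compared to RAID6):

```python
import subprocess

MOUNT = "/mnt/array"  # placeholder mount point for the Btrfs volume

# Show the current data/metadata profiles; look for "Data, RAID6:" lines
subprocess.run(["btrfs", "filesystem", "df", MOUNT], check=True)

# Convert data and metadata chunks to RAID10. This rewrites every chunk,
# so it can take many hours on 12x 4TB and should only be started with a
# current backup in place.
subprocess.run(
    ["btrfs", "balance", "start",
     "-dconvert=raid10", "-mconvert=raid10", MOUNT],
    check=True,
)
```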
 
Good thing I am only using it for testing purposes, then. It is still surprising that after more than a year that issue has not been fixed; I would have thought it would be a top priority, but who am I to say.
Edit: Ah, it turns out that issue, despite being present since kernel 3.19, was only discovered last month.
 
With that many drives, a hardware RAID controller would be good to have.
 