
Advice needed for new RAID system

Joined
Apr 6, 2015
Messages
246 (0.07/day)
Location
Japan
System Name ChronicleScienceWorkStation
Processor AMD Threadripper 1950X
Motherboard Asrock X399 Taichi
Cooling Noctua U14S-TR4
Memory G.Skill DDR4 3200 C14 16GB*4
Video Card(s) AMD Radeon VII
Storage Samsung 970 Pro*1, Kingston A2000 1TB*2 RAID 0, HGST 8TB*5 RAID 6
Case Lian Li PC-A75X
Power Supply Corsair AX1600i
Software Proxmox 6.2
Hello,

Background:
I am currently upgrading the storage of my workstation.
To give some background, this array will hold a second copy of some critical experimental data of mine; the first copy lives on the lab computers.
While losing the data on this workstation probably wouldn't be fatal, I'd prefer it to be as reliable as possible for peace of mind. That is why, over RAID 10 with 4 drives, I prefer RAID 6: it is larger and can survive two drive failures.

Size requirement:
When the experiment is running it can generate hundreds of GBs a day, which is why I want a relatively large array in terms of volume. I am buying 5x 8TB drives and hoping to set up a RAID 6 array.
This will give me 24 TB, which should be a good enough buffer (I will remove data as soon as it has been analysed and found not to be useful).
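The capacity figure checks out: RAID 6 keeps two drives' worth of parity, so usable space is (n - 2) x drive size. A quick Python sketch of the arithmetic:

```python
def raid6_usable_tb(num_drives: int, drive_tb: float) -> float:
    """RAID 6 stores two parity blocks per stripe, so two drives'
    worth of capacity is lost regardless of array size."""
    if num_drives < 4:
        raise ValueError("RAID 6 needs at least 4 drives")
    return (num_drives - 2) * drive_tb

print(raid6_usable_tb(5, 8))  # 5x 8TB drives -> 24.0 TB usable
```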

Speed requirement:
Actually it is quite low, as I push the data out over the internet after acquisition, and my connection is only 300 Mbps.
Still, I hope to get 200 MB/s+ throughput from the array.
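For context, the two numbers are in different units: 300 Mbps is megabits, so the uplink only needs about 37.5 MB/s and the 200 MB/s array target leaves plenty of headroom. A one-line conversion:

```python
def mbps_to_mbytes(mbps: float) -> float:
    """Convert megabits per second to megabytes per second (8 bits per byte)."""
    return mbps / 8

print(mbps_to_mbytes(300))  # 300 Mbps -> 37.5 MB/s
```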

My workstation's current spec:
CPU: Threadripper 1950X
RAM: 64 GB DDR4 quad-channel
MB: Asrock X399 with 8 SATA ports
OS: Ubuntu 16.04 LTS

Questions: Should I get a hardware RAID card like one from LSI?
I found this:
https://www.newegg.ca/Product/Product.aspx?Item=N82E16816118169
It seems to offer reasonably high performance, but I have to make sure 8 TB drives will work with it.
However, I also keep seeing people mention that modern CPUs are strong enough to do RAID without a hardware controller. Maybe that's true for Threadripper?
I won't mind a performance hit to other processes while heavy throughput is happening.
What concerns me more, however, is the time it takes to rebuild: if 1 or 2 drives stop working, will software RAID take much longer to rebuild than a hardware controller?
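For a rough feel of rebuild times: a rebuild has to rewrite every sector of the replacement drive, so the floor is roughly capacity divided by the sustained rebuild rate. A sketch with illustrative (assumed, not measured) rates:

```python
def rebuild_hours(drive_bytes: float, rate_mb_s: float) -> float:
    """Rough lower bound on rebuild time: every sector of the replacement
    drive must be rewritten, so time ~ capacity / sustained rebuild rate."""
    return drive_bytes / (rate_mb_s * 1e6) / 3600

# 8 TB drive (decimal), hypothetical sustained rates:
print(round(rebuild_hours(8e12, 150), 1))  # ~14.8 h at 150 MB/s
print(round(rebuild_hours(8e12, 30), 1))   # ~74.1 h if throttled to 30 MB/s
```

The spread between the two rates is the whole question: a controller (or an idle software array) that sustains full sequential speed rebuilds in under a day, while a heavily throttled rebuild can indeed stretch to several days.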

Thanks for reading.
 
Joined
Jan 31, 2010
Messages
5,377 (1.04/day)
Location
Gougeland (NZ)
System Name Cumquat 2021
Processor AMD RyZen R7 7800X3D
Motherboard Asus Strix X670E - E Gaming WIFI
Cooling Deep Cool LT720 + CM MasterGel Pro TP + Lian Li Uni Fan V2
Memory 32GB GSkill Trident Z5 Neo 6000
Video Card(s) Sapphire Nitro+ OC RX6800 16GB DDR6 2270Cclk / 2010Mclk
Storage 1x Adata SX8200PRO NVMe 1TB gen3 x4 1X Samsung 980 Pro NVMe Gen 4 x4 1TB, 12TB of HDD Storage
Display(s) AOC 24G2 IPS 144Hz FreeSync Premium 1920x1080p
Case Lian Li O11D XL ROG edition
Audio Device(s) RX6800 via HDMI + Pioneer VSX-531 amp Technics 100W 5.1 Speaker set
Power Supply EVGA 1000W G5 Gold
Mouse Logitech G502 Proteus Core Wired
Keyboard Logitech G915 Wireless
Software Windows 11 X64 PRO (build 23H2)
Benchmark Scores it sucks even more less now ;)
I run SSD RAID 0 on my FX8320 and it tops out the throughput of the SATA ports, so I'd imagine a Threadripper would be more than capable of doing the job you need. You're not going to get RAID 6 from the chipset, though; you can do RAID 10, which will give you both throughput and space.

Use an M.2 slot to boot from and use the SATA ports for your RAID array
 

Joined
Jan 17, 2010
Messages
12,280 (2.36/day)
Location
Oregon
System Name Juliette // HTPC
Processor Intel i7 9700K // AMD Ryzen 5 5600G
Motherboard ASUS Prime Z390X-A // ASRock B550 ITX-AC
Cooling Noctua NH-U12 Black // Stock
Memory Corsair DDR4 3600 32gb //G.SKILL Trident Z Royal Series 16GB (2 x 8GB) 3600
Video Card(s) ASUS RTX4070 OC// GTX 1650
Storage Samsung 970 EVO NVMe 1Tb, Intel 665p Series M.2 2280 1TB // Samsung 1Tb SSD
Display(s) ASUS VP348QGL 34" Quad HD 3440 x 1440 // 55" LG 4K SK8000 Series
Case Seasonic SYNCRO Q7// Silverstone Granada GD05
Audio Device(s) Focusrite Scarlett 4i4 // HDMI to Samsung HW-R650 sound bar
Power Supply Seasonic SYNCRO 750 W // CORSAIR Vengeance 650M
Mouse Cooler Master MM710 53G
Keyboard Logitech 920-009300 G512 SE
Software Windows 10 Pro // Windows 10 Pro
Tough questions. Yes, get an LSI card: if you're working your CPU hard, you don't want it trying to manage a RAID that big. Or you could build another PC as a NAS with FreeNAS; that would move the drives, workload, heat, and noise to another system. ZFS also has one small advantage over hardware RAID: the ability to compare the checksum of each block and correct it. The array can also be recovered if there is a hardware failure.
 
Thanks all for your ideas,
I run SSD RAID 0 on my FX8320 and it tops out the throughput of the SATA ports, so I'd imagine a Threadripper would be more than capable of doing the job you need. You're not going to get RAID 6 from the chipset, though; you can do RAID 10, which will give you both throughput and space.

Use an M.2 slot to boot from and use the SATA ports for your RAID array
Indeed, there would be no performance hit for RAID 0/1, but I am not sure the same applies to RAID levels with parity like 5/6.
And I have seen people take a week to rebuild a smaller array than the one I am trying to build, where a hardware controller does it in a day or less.
So if the card can help with that, I will be glad.

Get the RAID card for hardware RAID, then grab SFF-8087-to-SATA breakout cables; you will need two for 8 drives.

https://www.amazon.com/Cable-Matters-Internal-Mini-SAS-Breakout/dp/B012BPLYJC
Yes, I am planning to get those, and the card I am looking at has 2 connectors, enough for the 5 drives I need, if I understood correctly.

Tough questions. Yes, get an LSI card: if you're working your CPU hard, you don't want it trying to manage a RAID that big. Or you could build another PC as a NAS with FreeNAS; that would move the drives, workload, heat, and noise to another system. ZFS also has one small advantage over hardware RAID: the ability to compare the checksum of each block and correct it. The array can also be recovered if there is a hardware failure.

I would have done this 2-3 years ago, but now I am a (hopefully) finishing PhD student :D
I might have to move in a year, and I already have 2 desktops, 1 workstation, multiple laptops and tablets........
If possible I want to stuff everything into a single box.

After all, maybe I can try this cheaper LSI model to see how it works for me, and treat it as one more thing to learn.
 
Joined
Jul 21, 2015
Messages
501 (0.16/day)

Parity RAID (5/6) arrays will always have superior performance with a dedicated RPU (RAID Processing Unit) in all operations that require parity (Write/Rebuild/Read-from-degraded). No matter how fast CPUs get, they suck at XOR operations when compared to an RPU. My PERC 700 card rebuilds a 2TB SATA6 drive in about 5 hours. I would NOT use RAID10, as it is only tolerant of one drive failure on either side of the stripe. If you lose both mirrors of the same stripe before the rebuild is complete, the array is toast. RAID6 is tolerant of any two failures, and while a third failure could happen before one of the failed drives is rebuilt, it isn't likely if you use enterprise drives and ideally buy them from different sources or at least at different times from a high turnover supplier (cuts down the possibility of "batch failures").
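The XOR part of the parity math is easy to sketch: a lost block in a stripe is recovered by XOR-ing the parity with the surviving blocks (RAID 6 adds a second, Reed-Solomon-based parity on top of this, which is what lets it survive two failures). A minimal Python illustration:

```python
from functools import reduce

def xor_blocks(blocks):
    """XOR byte-wise across equal-length blocks (the RAID 5/6 'P' parity)."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

data = [b"AAAA", b"BBBB", b"CCCC"]  # three data blocks in one stripe
parity = xor_blocks(data)           # P parity, written to a fourth drive

# Drive holding data[1] fails: rebuild its block from parity + survivors
rebuilt = xor_blocks([parity, data[0], data[2]])
assert rebuilt == data[1]
```

A dedicated controller does exactly this, but in hardware and for every stripe on the disk, which is where the rebuild-speed difference comes from.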


Yes, I am planning to get those, and the card I am looking at has 2 connectors, enough for the 5 drives I need, if I understood correctly.
You got it right. That card can support 8 drives on the two ports. The SFF-8087 cables break out to 4 SATA/SAS connectors. So you will only use one SATA connector on the second port.



I would have done this 2-3 years ago, but now I am a (hopefully) finishing PhD student :D
I might have to move in a year, and I already have 2 desktops, 1 workstation, multiple laptops and tablets........
If possible I want to stuff everything into a single box.

Nothing wrong with that. :D


After all, maybe I can try this cheaper LSI model to see how it works for me, and treat it as one more thing to learn.

If you want to go even cheaper, look for used Dell PERC H730 server pulls. It's a better card (still made by LSI) and can be had for a couple hundred on eBay. Just make sure you look for the PCIe form factor, not the blade-socket cards.
 
Thanks for your extremely informative reply :D

I already ordered the hard drives and the card earlier last week, though they are still on their way.
I got the LSI 9266-8i I mentioned; after some reading I also bought a small fan to blow on it directly, as it seems these cards were not designed to be used in a tower case.
About the hard drive batch issue: that is a good point. Unfortunately I already ordered all 5 from a single source (Newegg.ca); let's hope I didn't get a batch of lemons.
 
So the guys are in the house:







Now I just have to give it two days....


Some tips on using LSI MegaRAID on Ubuntu:

It turns out the support isn't as native as I wished it would be.
First, you need to install the driver manually: go to LSI's (Broadcom's) website and download the driver for the card. You can find one for Ubuntu.
Follow the readme; it can be installed rather easily with just a few commands. A reboot is needed after this point.

Then, you probably want to install the MegaRAID Storage Manager from LSI; it will make life easier if you have access to a GUI. (I am running a headless server with a virtual VNC server.)
There is, however, no installer dedicated to Debian-based systems; it turns out you can download the .rpm packages and convert them into .deb.
This link has all the details, simply follow it step by step: https://djmobley.com/index.php/2017...orage-manager-on-ubuntu-16-04-64-bit-desktop/
If you haven't already, you DO need to unlock the root account to use the MegaRAID manager.

Now I can manage it from my Fedora desktop remotely. Be sure to check that firewall ports 3071 and 5571 are open, in case you cannot connect initially.
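A quick way to verify those ports are reachable from the managing desktop (the hostname here is a placeholder for your server's address; a closed or filtered port shows up as False):

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, timed out, or name resolution failed
        return False

# "my-server" is a placeholder for the headless workstation's address
for port in (3071, 5571):
    print(port, port_open("my-server", port))
```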
 
When you get it running let me know how loud it is. Mine with 3 TB drives is fairly loud, like a refrigerator running.
 
When you get it running let me know how loud it is. Mine with 3 TB drives is fairly loud, like a refrigerator running.
It is already up and running; however, I'm afraid I can't make a good judgement: 10 fans powering up at the same time, some at full speed, really prevent me from distinguishing the noise.
So far I can only say they run quieter than my WD Black, but with 5 drives together they do resonate and produce some hum. (All are already suspended on rubber.)
 
When you get it running let me know how loud it is. Mine with 3 TB drives is fairly loud, like a refrigerator running.
I have 13 drives in mine and I can't hear them at all, even without rubber mounts. The fans are pretty quiet when they're throttled down, but they're still louder than the drives.

 
lol love the case, that's awesome
 
If I may continue to bother you guys regarding the RAID system, here is my second question now that everything is installed.

I found that after the full initialization, the speed of the system isn't optimal: reads at ~3xx MB/s (this is OK) but writes at only ~60 MB/s.
Looking it up on the web, it seems I need a BBU or some caching unit for higher throughput. Then I came across these on Newegg.ca:
My question is: do I need *both* for the BBU to function? I couldn't find an accessible explanation of this stuff.

1.LSI LSI00297 CacheVault Accessory kit for 9271 and 9266 Series
https://www.newegg.ca/Product/Product.aspx?Item=N82E16816118171
2.LSI LSI00279 MegaRAID LSIiBBU09 Battery Backup Unit
https://www.newegg.ca/Product/Product.aspx?Item=9SIABDC57U8151

Update: I found this:
https://docs.broadcom.com/docs/12348580
It says the CacheVault replaces a BBU for power-loss protection,
so I think I will just go with it.
 
My FreeNAS does 100 MB/s in both directions. My Synology does 90 MB/s read and 40 MB/s write.

How are you getting 300 MB/s on your network?
 
If I may continue to bother you guys regarding the RAID system, here is my second question now that everything is installed.

I found that after the full initialization, the speed of the system isn't optimal: reads at ~3xx MB/s (this is OK) but writes at only ~60 MB/s.
Looking it up on the web, it seems I need a BBU or some caching unit for higher throughput. Then I came across these on Newegg.ca:
My question is: do I need *both* for the BBU to function? I couldn't find an accessible explanation of this stuff.

1.LSI LSI00297 CacheVault Accessory kit for 9271 and 9266 Series
https://www.newegg.ca/Product/Product.aspx?Item=N82E16816118171
2.LSI LSI00279 MegaRAID LSIiBBU09 Battery Backup Unit
https://www.newegg.ca/Product/Product.aspx?Item=9SIABDC57U8151
No you only need one. The first one and the second one do the same thing in operation - they have high speed RAM as the cache. The difference is the second one is a traditional BBU, where the battery is used to "hold" data in the RAM until power is restored. These can generally hold data for a couple days before the battery dies. The first one is called "FlashCache". It still has the same high speed RAM, but it also has flash NVRAM and supercapacitors (instead of a battery) onboard. When the power fails and the cache is dirty (contains unwritten information), the supercapacitors hold enough power to allow the data in the RAM to be dumped to the flash - where it can be held pretty much indefinitely. When the power is restored, the data is sent back into the RAM, and then flushed to the disks.

There's nothing wrong with the traditional BBU, aside from periodic battery replacement. Most people won't be without power for more than a couple hours at a time (and you should have it connected to a UPS with software to trigger a graceful shutdown anyway). The CacheVault is just a newer technology. And once your unit is installed, make sure you enable Write Back Caching.

My FreeNAS does 100 MB/s in both directions. My Synology does 90 MB/s read and 40 MB/s write.

How are you getting 300 MB/s on your network?
It's not a NAS, it's a RAID array in a workstation.
 
My FreeNAS does 100 MB/s in both directions. My Synology does 90 MB/s read and 40 MB/s write.

How are you getting 300 MB/s on your network?
These numbers were actually from a quick test using the built-in benchmark of the GNOME disk utility;
I have yet to test real-world throughput. I will post those numbers later.

I don't have any particular configuration; basically everything is default, and the 5 drives in RAID 6 were empty at the time of the test.
 
Thanks for the clarification. I think I'd better stick with something I don't have to replace once in a while.
While it adds extra cost to the setup, the backup in the event of power loss gives me peace of mind.

I did a quick test again, moving a large folder while an SFTP transfer was running in the background (finally I can back up the 8 TB of data :D).
While the throughput numbers look really great, as you can see there are "black-outs" every now and then.
My guess is that since I turned on write-back, which utilizes the cache, the cache fills up and the card then has to stop and flush data to the drives periodically.

To describe the performance more accurately, I will redo the test after the SFTP transfer is finished, time the transfer of that big folder, and just look at the average transfer rate.

Screenshot from 2018-02-12 22-10-09.png
 
Finished in around 2 hours. The 1 TB transfer test result turns out to be quite good after installing the CacheVault:
Screenshot from 2018-02-17 13-09-45.png
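For reference, 1 TB in around 2 hours works out to roughly 140 MB/s sustained, comfortably above the original 200 MB/s target for reads and the 60 MB/s pre-CacheVault writes. A quick check of that arithmetic:

```python
def avg_rate_mb_s(bytes_moved: float, seconds: float) -> float:
    """Average throughput in MB/s (decimal megabytes)."""
    return bytes_moved / seconds / 1e6

# 1 TB (decimal) in ~2 hours:
print(round(avg_rate_mb_s(1e12, 2 * 3600)))  # ~139 MB/s
```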
 