
ZFS vs. BTRFS: Safest Choice for Dummies?

cin5

New Member
Joined
May 21, 2022
Messages
19 (0.02/day)
After I finally decide which cloud storage service to subscribe to for my ~6 TB of precious data (any suggested vendors?), as a NAS newbie (dummy?), I'm torn between having a local IT guy build me a Btrfs or ZFS redundant RAID NAS.

As long as the NAS can play back a ripped DVD movie saved to an uncompressed MKV or ISO file without any latency "hiccups", read/write speed means nothing compared to data protection against bit rot. Therefore, most of what I've read (and can comprehend via my totally non-Linux newbie brain) says ZFS offers substantially better protection than Btrfs.
https://hydrogenaud.io/index.php/topic,124563.0.html




But what pushes me toward Btrfs is that Synology's market presence seems far greater than QNAP's. However, in any case, it's bizarre that ZFS did not become the standard for home and small-business NAS users who care about their data.

Furthermore, it looks like the only other ZFS brand alternative to QNAP is TrueNAS. But unless my IT guy would 1) agree to build my NAS with it and 2) configure and set the NAS up so that dummies like me can't accidentally "break" it, then what else ends up being the most practical and safest alternative?
 
As a choice for dummies I wouldn't pick either. I would pick hardware RAID with something like ext4 and never touch the Btrfs or ZFS settings on the NAS.

As for which, if you had to pick between the two: it looks like you have a "NAS guy", meaning you won't be managing any of this, and if that's the case I would pick ZFS. I would probably pick ZFS over Btrfs every time.

It's old, it's performant and, more importantly, it's well documented and well understood. Btrfs is newer and still has some teething issues.

While you can find boilerplate articles about the advancements and "goals" of Btrfs, you can also find a slew of first-hand experiences like this:



Given its constantly rolling nature (because it's still growing), you run into many discussions like this when something behind the curtain needs to be done.


just to name a few. We currently use ZFS for several PB of data and have looked into newer things like Btrfs and bcachefs (which has its own issues); there just isn't a compelling reason, and they add more complexity.
 
For how many years do you want to save it?
Given that, even though many claim bit rot is actually an infrequent occurrence, I regard most of my data as precious, and given that movies purchased on burned (not pressed) DVDs (e.g. Amazon, Warner Archive) have the most limited lifespans of all optical discs, I'd prefer that my data remain intact for at least the next few decades.
 
Given that, even though many claim bit rot is actually an infrequent occurrence, I regard most of my data as precious, and given that movies purchased on burned (not pressed) DVDs (e.g. Amazon, Warner Archive) have the most limited lifespans of all optical discs, I'd prefer that my data remain intact for at least the next few decades.

Bit rot is just one way you can lose data. Yes, it's important to think about how to solve bit rot (answer: ZFS with regular scrubs every 6 months). A "scrub" is when ZFS is commanded to read every block, and if any checksum error is found, to rebuild the affected data from redundancy (if possible). That way, if bit rot happens "a little bit", you heal and the data is renewed. It can take 10+ hours to scrub, but that's just the computer working in the background. As long as you scrub regularly, bit rot is basically solved.
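The detect-and-heal idea behind a scrub can be sketched in miniature. This is a hypothetical Python illustration of checksum-based repair, not how ZFS is implemented internally; the `store`/`scrub` names are made up for the example:

```python
import hashlib

def store(block: bytes) -> dict:
    # Pair the data with its checksum, as ZFS does per block (here: SHA-256).
    return {"data": bytearray(block), "checksum": hashlib.sha256(block).hexdigest()}

def scrub(stored: dict, redundant_copy: bytes) -> str:
    # Recompute the checksum; on mismatch, "heal" from a redundant copy,
    # roughly what a scrub does using mirror or parity data.
    if hashlib.sha256(bytes(stored["data"])).hexdigest() != stored["checksum"]:
        stored["data"] = bytearray(redundant_copy)
        return "repaired"
    return "ok"

block = store(b"precious movie bytes")
mirror = b"precious movie bytes"       # the redundant copy on another drive

print(scrub(block, mirror))            # -> ok (nothing wrong yet)
block["data"][3] ^= 0x01               # simulate one flipped bit of bit rot
print(scrub(block, mirror))            # -> repaired (mismatch caught and healed)
print(bytes(block["data"]) == mirror)  # -> True
```

The point is that without a checksum the flipped bit would be served back silently; with one, the bad read is detected and the good copy wins.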

But you can also lose data to fires, dropping the hard drives, viruses, etc. You can solve these with other mechanisms. A virus or ransomware commands your computer to delete or encrypt all data it can touch, but ZFS snapshots likely solve that problem. Physical damage can destroy one computer, but if you build a backup computer and keep it in sync regularly, you'll be better off. Offsite backup (or even a cloud-service backup) can protect your data even in case of fire.

A lot of this is overlapping. The truly paranoid will enable 3, 4, 5+ different methodologies to save their data, but everyone has a limit for what they find reasonable. Nothing is perfect either, so even if you have 5+ different protections (ZFS mirrored drives with scrubs and regular snapshots + a backup drive at a separate family member's house in case of fire...), you could have, say, a hurricane wipe out a large region (both your house and your family member's house) and your data is lost. There's almost always some crazy chain of events that could simultaneously destroy data in many locations at once, maybe even solar flares that we can't do anything about.

Or what if a virus infects your computer and secretly ransomwares everything right before a backup? When you then back up and copy the data over the old backup, it's now a ransomware'd file and you've lost everything.

-------------

So do what you can, and accept everything you do lose afterwards as beyond the scope of what you were willing to do / protect yourself with. Have a number of scenarios planned out but no one's plans are perfect.
 
As a choice for dummies I wouldn't pick either. I would pick hardware RAID with something like ext4 and never touch the Btrfs or ZFS settings on the NAS.

As for which, if you had to pick between the two: it looks like you have a "NAS guy", meaning you won't be managing any of this, and if that's the case I would pick ZFS. I would probably pick ZFS over Btrfs every time.

It's old, it's performant and, more importantly, it's well documented and well understood. Btrfs is newer and still has some teething issues.

While you can find boilerplate articles about the advancements and "goals" of Btrfs, you can also find a slew of first-hand experiences like this:



Given its constantly rolling nature (because it's still growing), you run into many discussions like this when something behind the curtain needs to be done.


just to name a few. We currently use ZFS for several PB of data and have looked into newer things like Btrfs and bcachefs (which has its own issues); there just isn't a compelling reason, and they add more complexity.
Trust me, the last thing I would knowingly do is ever fool with any Btrfs or ZFS settings. In fact, I will definitely be asking my IT guy if he can hide all settings, save for the most essential user-facing ones. The only such functionality which springs to mind is the view of the drive "tree" when the NAS is connected to my PC; that is, the PC's (Windows 10 Pro) system drive, my external USB drive (when I connect it to back up files, usually from it to the NAS) and the NAS's (mirrored redundant array?) storage drive.

And when I need to search for an MKV (movie) or FLAC (music) file, do I use the search box in Windows Explorer or one in the NAS's GUI?

But since I've consistently failed to find any local IT guy willing to build me the NAS from scratch, if I go with ZFS, which software/hardware brands (ECC-RAM-supporting motherboards and CPUs) should I choose for him to configure?

Any recommended HDD and/or SSD makes/models?


 
with ZFS, which software/hardware brands (ECC-RAM-supporting motherboards and CPUs) should I choose for him to configure?
None. ZFS is a file system and has nothing to do with hardware.
 
This is a very ambitious deadline to keep. Even if you succeed, maybe after so many years there won't be any hardware or software left to copy and reproduce them. But you might keep a current-generation computer in the garage for this purpose. :confused:
 
-------------

So do what you can, and accept everything you do lose afterwards as beyond the scope of what you were willing to do / protect yourself with. Have a number of scenarios planned out but no one's plans are perfect.
Got it. I would think that a well-designed ZFS NAS plus a reputable cloud storage service (any suggested brands?) would offer pretty robust protection. Plus, if it's true that (enterprise-grade?) SSDs can go without seeing voltage for ten years before losing data, then I would want to store my most valuable data on one or two of them.

None. ZFS is a file system and has nothing to do with hardware.
Yes, but as building a NAS from hardware scratch isn't an option for me, I need to choose the most reliable ZFS-based brand in the best hardware I can afford. AND the quietest, which is certainly not something like Dell or HP (Xeon) servers, with their notoriously high fan noise levels.

So who's left?


If those two are the only good ZFS choices for home use, which one has the most dummy-proof updates? Not that I'm keen on doing ANY OS and/or firmware updating on my own. But if the NAS ever prompts me to download an update and I then wait for my IT guy to carefully review it for bugs, which ZFS NAS brand's proprietary updates should I trust more?
 
which ZFS NAS brand's proprietary update should I trust more?
You keep mentioning "proprietary" but I don't know what you mean by it. You can go download the ZFS source code. There is nothing special about their implementations. I have used both and I prefer QNAP to Synology.
 
This is a very ambitious deadline to keep. Even if you succeed, maybe after so many years there won't be any hardware or software left to copy and reproduce them. But you might keep a current-generation computer in the garage for this purpose. :confused:
If you're talking about the impending death of the optical disc format (which Hollywood, the music industry, and of course the streaming industry are eager to celebrate), then, like many collectors, my plan is to use the (limited) lifespans of internal/external BD drives to rip select content to store on site and in a reputable cloud server. Surely the latter are paid to ensure that my data is periodically migrated, via several bit-protection schemes, to newer hardware. So while any disaster is still possible, at least my cloud storage is the most reliable storage scheme I could probably have.

You keep mentioning "proprietary" but I don't know what you mean by it. You can go download the ZFS source code. There is nothing special about their implementations. I have used both and I prefer QNAP to Synology.
But you're comparing apples and oranges. Synology uses Btrfs. Why not compare QNAP to TrueNAS, as they both use ZFS?

But if you've never used or evaluated TrueNAS, then about QNAP:

Is it easy for my IT guy to configure its GUI to hide settings which I won't, and should not, ever touch?

Is it easy for my IT guy to set QNAP up to do background backups to my cloud storage, but intermittently? That's because my internet connection is only via my iPhone's Wi-Fi hotspot, so I don't have a 24/7 connection.

Is it easy to use the QNAP GUI to search for my document, movie and music files?


Any recommended HDD and/or SSD makes/models?
 
But you're comparing apples and oranges.

No, I'm not. I'm comparing file systems. I guess in that way I am comparing apples and oranges.

But if you've never used or evaluated TrueNAS, then about QNAP:

I've used all this technology.

Is it easy for my IT guy to configure its GUI to hide settings which I won't, and should not, ever touch?

Is it easy for my IT guy to set QNAP up to do background backups to my cloud storage, but intermittently? That's because my internet connection is only via my iPhone's Wi-Fi hotspot, so I don't have a 24/7 connection.

Is it easy to use the QNAP GUI to search for my document, movie and music files?


Any recommended HDD and/or SSD makes/models?

Sounds like you should be asking him and not making these decisions yourself, if I'm completely honest, since time and time again you seem to be more focused on the GUI aspect. If you have an IT guy, you should be letting him make these decisions if you are going to expect him to maintain it.

If the GUI is important to you, this doesn't change based on the file system. You appear to be focused on the wrong thing. I would look at screenshots or setup videos and decide what is more comfortable for you to use, instead of dictating the technology, which has no bearing on your everyday use case.

In my opinion.
 
Bit rot is just one way you can lose data. Yes, it's important to think about how to solve bit rot (answer: ZFS with regular scrubs every 6 months). A "scrub" is when ZFS is commanded to read every block, and if any checksum error is found, to rebuild the affected data from redundancy (if possible). That way, if bit rot happens "a little bit", you heal and the data is renewed. It can take 10+ hours to scrub, but that's just the computer working in the background. As long as you scrub regularly, bit rot is basically solved.
This is great news. No doubt my IT guy can set up QNAP to do this. But if I accidentally or unknowingly powered off the NAS during a scrub, will it remember to resume the scrub when I power it back on? And if yes, will it have to begin that 10+ hour scrub all over again, or can it pick up from where it left off?

And when it's done, does it compare its data integrity with that of the same files stored in my cloud by doing checksums via its ECC RAM?

If yes, about how long will that take with ~4 TB of data?
 
This is great news. No doubt my IT guy can set up QNAP to do this. But if I accidentally or unknowingly powered off the NAS during a scrub, will it remember to resume the scrub when I power it back on? And if yes, will it have to begin that 10+ hour scrub all over again, or can it pick up from where it left off?

And when it's done, does it compare its data integrity with that of the same files stored in my cloud by doing checksums via its ECC RAM?

If yes, about how long will that take with ~4 TB of data?

A typical hard drive operates at 200 MB/s when performing large sequential bulk work (like a scrub). 4TB would be a bit under 6 hours.
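That estimate is simple arithmetic; as a sketch (assuming the ~200 MB/s sustained sequential figure above, and that the scrub reads the whole pool once):

```python
def scrub_hours(pool_tb: float, mb_per_s: float = 200.0) -> float:
    # Time to read the whole pool once at a sustained sequential rate.
    seconds = (pool_tb * 1e12) / (mb_per_s * 1e6)
    return seconds / 3600

print(round(scrub_hours(4), 1))   # -> 5.6 (a bit under 6 hours for 4 TB)
print(round(scrub_hours(6), 1))   # -> 8.3 (the same math for a 6 TB pool)
```

Real scrubs vary with fragmentation and concurrent load, so treat this as a lower bound.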

As for your other questions: those details depend on which software you use and other such details. XigmaNAS (the software I use) remembers the scrub location and continues upon bootup, but no guarantees about other software.
 
As a choice for dummies I wouldn't pick either. I would pick hardware RAID with something like ext4 and never touch the Btrfs or ZFS settings on the NAS.

As for which, if you had to pick between the two: it looks like you have a "NAS guy", meaning you won't be managing any of this, and if that's the case I would pick ZFS. I would probably pick ZFS over Btrfs every time.

It's old, it's performant and, more importantly, it's well documented and well understood. Btrfs is newer and still has some teething issues.

While you can find boilerplate articles about the advancements and "goals" of Btrfs, you can also find a slew of first-hand experiences like this:



Given its constantly rolling nature (because it's still growing), you run into many discussions like this when something behind the curtain needs to be done.


just to name a few. We currently use ZFS for several PB of data and have looked into newer things like Btrfs and bcachefs (which has its own issues); there just isn't a compelling reason, and they add more complexity.

I've used all this technology.

I was hoping I could pick your brain with a quick question related to the topic. Would you consider Synology SHR-2 + Btrfs a reliable combination? This year I was going to migrate data from my older Synology SHR-2 (2-drive fault tolerance) w/ ext4 setup to a newer Synology and set it up as SHR-2 w/ Btrfs, since ZFS wasn't available. I figured Btrfs was at least a bit of a step up in terms of maintaining data integrity, but after reading your links I'm a bit concerned; maybe I should just stick with SHR-2 + ext4.
 
I was hoping I could pick your brain with a quick question related to the topic. Would you consider Synology SHR-2 + Btrfs a reliable combination? This year I was going to migrate data from my older Synology SHR-2 (2-drive fault tolerance) w/ ext4 setup to a newer Synology and set it up as SHR-2 w/ Btrfs, since ZFS wasn't available. I figured Btrfs was at least a bit of a step up in terms of maintaining data integrity, but after reading your links I'm a bit concerned; maybe I should just stick with SHR-2 + ext4.

A few things I would investigate first: whether anyone has successfully done the migration. There are instances where the various migration wizards won't work because the operation isn't supported by the manufacturer, not necessarily blocked on a technological level.

For what it's worth, while I personally am not super comfy running Btrfs in production, for all file systems (even older ones), unless you are root via the CLI, they don't expose certain features and options in the GUI that would otherwise be available via command flags. So their particular implementation (allowance) might be stable. There are stories of releases from both sides, though, like anything else, of certain firmware (kernel) versions being unstable. For that reason alone, I think when going with a packaged product I would stick with something more mature (it's feasibly harder to fuck up).

All of that aside, as long as you're not someone that goes behind the curtain and it's a supported migration path, I wouldn't anticipate problems (as long as both units are fully up to date), but as with anything data-related, prepare beforehand in case things go bad.

That was all pretty ambiguous, but it really comes down to preference and your acceptable degree of risk. My data isn't my Steam library, so for me Btrfs is too much risk.
 
What kind of disk setup and "RAID level" do you want to run?

It isn't dummy-safe. For example, you want to practice changing disks in the "RAID".

Rebuild times are notoriously unpredictable in ZFS, and unlike regular RAID, they depend on how full the filesystem is.
 
A few things I would investigate first: whether anyone has successfully done the migration. There are instances where the various migration wizards won't work because the operation isn't supported by the manufacturer, not necessarily blocked on a technological level.

For what it's worth, while I personally am not super comfy running Btrfs in production, for all file systems (even older ones), unless you are root via the CLI, they don't expose certain features and options in the GUI that would otherwise be available via command flags. So their particular implementation (allowance) might be stable. There are stories of releases from both sides, though, like anything else, of certain firmware (kernel) versions being unstable. For that reason alone, I think when going with a packaged product I would stick with something more mature (it's feasibly harder to fuck up).

All of that aside, as long as you're not someone that goes behind the curtain and it's a supported migration path, I wouldn't anticipate problems (as long as both units are fully up to date), but as with anything data-related, prepare beforehand in case things go bad.

That was all pretty ambiguous, but it really comes down to preference and your acceptable degree of risk. My data isn't my Steam library, so for me Btrfs is too much risk.
In terms of migration, I was going to keep it simple and do a straightforward copy or restore from backup, plus a manual resync with the old system for any newer files.
Thanks for the feedback.

I have some more research to do, but I've found some interesting information so far.
 
Rebuild times are notoriously unpredictable in ZFS, and unlike regular RAID, they depend on how full the filesystem is.

We haven't even attacked the 1 GB of RAM per 1 TB of storage rule yet for parity sets. If you get 4 GB of RAM and run a 6 TB array, you're gonna have a bad time. You need working set+.
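The rule of thumb works out like this. A sketch only: the 1 GB per 1 TB figure and the 4 GB base allowance are community guidelines, not hard ZFS requirements, and the function name is made up for the example:

```python
def suggested_ram_gb(storage_tb: float, base_gb: float = 4.0) -> float:
    # Community rule of thumb: ~1 GB of RAM per 1 TB of pool storage,
    # on top of a base allowance for the OS and working set.
    return base_gb + storage_tb * 1.0

print(suggested_ram_gb(6))   # -> 10.0, so a 4 GB box running a 6 TB array falls well short
```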


In terms of migration, I was going to keep it simple and do a straightforward copy or restore from backup, plus a manual resync with the old system for any newer files.
Thanks for the feedback.

That is honestly what I would do.
 
We haven't even attacked the 1 GB of RAM per 1 TB of storage rule yet for parity sets. If you get 4 GB of RAM and run a 6 TB array, you're gonna have a bad time. You need working set+.




That is honestly what I would do.
A free tool called SyncToy would work :) Set up your backups as basic drives (no RAID for the backups) and away you go.

As for RAID for a bunch of disks: RAID 10 is a bit more of a favourite of mine because it speeds up drive throughput more, but you do lose a lot of drive space. RAID 5 or 6 I would say is more than enough for most things :) RAID 1 I'd say is more than enough for a basic setup, but it all depends on how much data you have to back up and what size drives you're going to use :)
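The space trade-offs between those levels follow the standard RAID capacity formulas. As a sketch (the six-drive, 4 TB figures are made-up examples; nominal sizes, no filesystem overhead):

```python
def usable_tb(level: str, drives: int, size_tb: float) -> float:
    # Usable capacity for common RAID levels (nominal sizes, no overhead).
    if level == "raid1":
        return size_tb                     # everything mirrored
    if level == "raid5":
        return (drives - 1) * size_tb      # one drive's worth of parity
    if level == "raid6":
        return (drives - 2) * size_tb      # two drives' worth of parity
    if level == "raid10":
        return drives * size_tb / 2        # striped mirrors lose half
    raise ValueError(f"unknown level: {level}")

# Six 4 TB drives under each level:
for lvl in ("raid5", "raid6", "raid10"):
    print(lvl, usable_tb(lvl, 6, 4.0), "TB")   # -> 20.0, 16.0, 12.0 TB
```

So RAID 10's throughput comes at the cost of the largest capacity loss, which is the trade-off described above.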

I have to look into something myself but that's another story :)
 
Keep in mind that a redundancy of 1 (RAID 5, or RAID 1 with 2 disks) is risky. When one drive fails you need to sync in another, and that creates a large amount of load on the previously surviving (but probably old) drive, which has an increased chance of failure during this time. If that second drive fails during the rebuild, you are screwed.

ZFS offers "RAID 7" (raidz3), which is what I use for my big array with 8 disks.
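The capacity cost of raidz parity is easy to work out. A sketch (hypothetical 4 TB drives, nominal sizes, ignoring filesystem overhead):

```python
def raidz_usable_tb(disks: int, parity: int, size_tb: float) -> float:
    # raidz1/2/3 keep 1/2/3 disks' worth of space for parity;
    # the pool survives that many simultaneous drive failures.
    return (disks - parity) * size_tb

# An 8-disk raidz3 ("RAID 7") pool of 4 TB drives:
print(raidz_usable_tb(8, 3, 4.0))   # -> 20.0 TB usable, any 3 drives can fail
```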
 
ZFS is a lot more mature, so I would go with that.

For reference, I run a TrueNAS Core VM under Proxmox, with two mirrored virtual disks (SSD-hosted) for its boot process, and I directly pass through a cheap ASMedia SATA card so the data pool is direct, not virtual.

There are many advantages of software RAID over hardware RAID; bit-rot protection is often talked about, but there are many others, such as the flexibility to do weird configurations.

Here is how I currently have things set up on my small NAS.

Two new 8 TB IronWolf Pros and two very old 3 TB WD Reds.

I didn't want to run the very old Reds in their own mirror, so I did this:

A raidz2 split across 3 TB partitions on all 4 drives.
On the remaining space on the IronWolf Pros, a mirrored pool.
So, two pools.
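For anyone following along, that mixed-drive layout works out roughly like this (a sketch using nominal drive sizes and the usual raidz2 two-parity rule; real usable space will be a bit lower):

```python
# Pool 1: raidz2 over 3 TB partitions on all 4 drives -> 2 data + 2 parity.
raidz2_usable_tb = (4 - 2) * 3.0       # 6.0 TB usable, any 2 drives can fail

# Pool 2: mirror of the space left on the two 8 TB IronWolf Pros.
mirror_usable_tb = 8.0 - 3.0           # 5.0 TB usable (mirrored pair)

print(raidz2_usable_tb + mirror_usable_tb)   # -> 11.0 TB usable across both pools
```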

I get the checksum integrity checks, compression on compressible files, dynamic cluster sizes, and an intelligent adaptive read cache (ARC; this is better than a typical FIFO cache, as it has a frequently-used algorithm).

I didn't reveal my hardware setup on the TrueNAS forum, as on there they all swear by expensive, complex hardware RAID cards running in a special mode for drive passthrough; mine is a far simpler setup over basic AHCI, so it will be more reliable (as well as cheaper). They also throw a fit if you're not using ECC RAM lol.

The advantage of a ZFS mirror is it's the cheapest to get going; 2 drives are enough. If you want e.g. an 8-drive raidz2, you need to spend on all 8 drives from the off. I personally wouldn't go below a 25% redundancy cost (so no more than 8 drives on raidz2; if more than 8, I would go raidz3).

I only access it over gigabit Ethernet; the speeds are more than enough for me, no latency issues at all, and I can barely tell the difference from accessing local spindles.

I do agree with the above comment regarding a one-drive failure on a mirror, but I built to a budget, and it was still an upgrade over previously storing data with no redundancy at all.

I still haven't gone into full committal mode, hence the low budget of the setup, but now that I have faster internet I might start automating remote backups to the cloud. I still also have over 6 TB of free space on the setup, so no need to expand it to more drives currently, although the Reds will probably be replaced within a year because of their age.

Also, checksum repair (bit-rot healing) happens automatically and live on the system whenever data is read, so a scrub isn't strictly needed, but like others I do run one scheduled every so often, just as a preemptive measure; it's also a warning that something is up if it ever has to repair anything on a scrub.
 