
Backblaze Data Shows SSDs May In Fact be More Reliable Than HDDs

Raevenlord

News Editor
Joined
Aug 12, 2016
Messages
3,755 (1.16/day)
Location
Portugal
System Name The Ryzening
Processor AMD Ryzen 9 5900X
Motherboard MSI X570 MAG TOMAHAWK
Cooling Lian Li Galahad 360mm AIO
Memory 32 GB G.Skill Trident Z F4-3733 (4x 8 GB)
Video Card(s) Gigabyte RTX 3070 Ti
Storage Boot: Transcend MTE220S 2TB, Kingston A2000 1TB, Seagate IronWolf Pro 14 TB
Display(s) Acer Nitro VG270UP (1440p 144 Hz IPS)
Case Lian Li O11DX Dynamic White
Audio Device(s) iFi Audio Zen DAC
Power Supply Seasonic Focus+ 750 W
Mouse Cooler Master Masterkeys Lite L
Keyboard Cooler Master Masterkeys Lite L
Software Windows 10 x64
Cloud storage provider Backblaze is one of the industry players that provide insightful reports into the health and reliability of the storage media they invest in to support their business. In its most recent report, the company shared data that may finally support the general perception (and one of SSDs' claims to fame upon their introduction): that they boast higher reliability and lower failure rates than HDDs.

The company's latest report shows that SSDs have entered their fifth operating year without an escalation in failure rates: something that seems to plague HDDs pretty heavily starting from year four. The idea is simple: SSDs should be more reliable because there are no moving parts (no platters and no read/write heads that can fail). However, SSDs do have other points of failure, such as the NAND itself (the reason TBW ratings exist) or the controller. Backblaze's data does, however, show that those concerns may be overrated. Of course, there's a chance that the SSDs employed by Backblaze will hit a "reliability wall" of the sort that HDDs seem to enter in year four of their operation, where failure rates increase immensely. More data over a larger span of time will be welcome, but for now, it does seem that SSDs are the best way for users to keep their data available.
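For context, Backblaze expresses reliability as an annualized failure rate (AFR): failures per drive-year of operation. A minimal sketch of that calculation in Python, using made-up drive-day and failure counts rather than Backblaze's actual figures:

```python
# Rough sketch of an annualized failure rate (AFR) calculation,
# the kind of metric Backblaze reports in its drive stats.
# The counts below are made-up illustration values, not real data.

def annualized_failure_rate(drive_days: int, failures: int) -> float:
    """AFR: failures per drive-year of operation, as a percentage."""
    drive_years = drive_days / 365
    return failures / drive_years * 100

# Example: a fleet that accumulated 1,000,000 drive-days and saw 25 failures.
print(f"AFR: {annualized_failure_rate(1_000_000, 25):.2f}%")  # ~0.91%
```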

In other news, a recent study called into question the environmental friendliness of SSDs compared to their HDD counterparts, claiming that SSDs actually impose a steeper environmental cost than HDDs. But not all may be exactly what it seems on that front.

View at TechPowerUp Main Site | Source
 
How long does a SSD hold storage when not given power though? 2 years? 5 years? 10 years?
 
How long does a SSD hold storage when not given power though? 2 years? 5 years? 10 years?

Supposedly:
  • >10 years with fresh NAND memory.
  • 1 year (consumer drives) or 3 months (enterprise drives) at 100% wear.
 
That's because when Linus from LTT drops an SSD, the media inside ain't as fragile as in an HDD



:roll:
 
Those SSDs made 8 years ago are not on the same tech node as current ones, and the early nodes were more robust in R/W cycle counts.
 
Those SSDs made 8 years ago are not on the same tech node as current ones, and the early nodes were more robust in R/W cycle counts.
True. Last year I had earlier SSDs, around 128-180 GB, even Intel 5100-series and Samsung SSDs in laptops, dying; they got replaced with either Gigabyte or Team Group SSDs that were even faster.
 
Those SSDs made 8 years ago are not on the same tech node as current ones, and the early nodes were more robust in R/W cycle counts.
The failures captured on that chart are clearly not for that reason.

On the other hand, issues like controller quirks were not quite as ironed out back then as they are now.

Lower individual write cycle counts are also compensated for by higher capacity, since the controller has more free space to perform wear-levelling on.
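A back-of-envelope way to see that point: a drive's total rated endurance scales with capacity as well as per-cell P/E cycles, and more spare area tends to lower write amplification. The numbers below are made-up illustrations, not vendor specs:

```python
# Back-of-envelope endurance estimate (made-up numbers, not vendor specs).
# TBW ~ capacity * P/E cycles / write amplification, so a larger drive on a
# weaker node can still absorb as many total writes as a small older-node drive.
# More free/overprovisioned space also tends to lower the write amplification factor.

def endurance_tbw(capacity_gb: float, pe_cycles: int, write_amplification: float) -> float:
    """Approximate total endurance in TB written."""
    return capacity_gb * pe_cycles / write_amplification / 1000

old_small_mlc = endurance_tbw(capacity_gb=256, pe_cycles=3000, write_amplification=1.5)
new_large_tlc = endurance_tbw(capacity_gb=2000, pe_cycles=1000, write_amplification=1.5)
print(f"256 GB, 3000 P/E cycles: ~{old_small_mlc:.0f} TBW")   # ~512 TBW
print(f"2 TB,   1000 P/E cycles: ~{new_large_tlc:.0f} TBW")   # ~1333 TBW
```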
 
The Tech Report did an SSD endurance test on consumer drives years ago. However, with TLC and QLC, the picture may actually be worse today. They did last a while, though.

How many "real" years that would be, who knows. Possibly 20, depending on how much you use the drive?
 
I want to see them recover something from a corrupted SSD.
According to BuildZoid, if you're using ZFS and you've got the file system spread across enough SSDs, you'll never lose a single byte of data to hardware failure or plain data corruption; ZFS will automatically detect any errors during reads and fix them for you on the fly.
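To illustrate what "detect during reads and fix on the fly" means, here's a conceptual Python sketch of checksum-verified reads on a mirrored block, in the spirit of ZFS's self-healing; it's illustrative only, not ZFS code or its on-disk layout:

```python
# Conceptual sketch of checksum-verified, self-healing reads on a mirror,
# in the spirit of what ZFS does. Illustrative Python only, not ZFS code.
import hashlib

def checksum(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

class MirroredBlock:
    """One logical block stored as two copies, each verified by a checksum."""
    def __init__(self, data: bytes):
        self.copies = [bytearray(data), bytearray(data)]
        self.checksums = [checksum(data), checksum(data)]

    def read(self) -> bytes:
        # Return the first copy whose checksum verifies, repairing any bad copy.
        for i, copy in enumerate(self.copies):
            if checksum(bytes(copy)) == self.checksums[i]:
                good = bytes(copy)
                for j in range(len(self.copies)):
                    if checksum(bytes(self.copies[j])) != self.checksums[j]:
                        self.copies[j] = bytearray(good)   # heal the corrupted copy
                        self.checksums[j] = checksum(good)
                return good
        raise IOError("all copies failed checksum verification")

block = MirroredBlock(b"important data")
block.copies[0][0] ^= 0xFF       # simulate silent corruption on one device
print(block.read())              # still returns b'important data' and repairs copy 0
```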
 
According to BuildZoid, if you're using ZFS and you've got the file system spread across enough SSDs, you'll never lose a single byte of data to hardware failure or plain data corruption; ZFS will automatically detect any errors during reads and fix them for you on the fly.
A few years ago, I lost all my files when one of my SATA SSDs in RAID 0 died. The SSD just died outright, not gradually failed like an HDD would. Probably doesn't apply to newer SSDs, though...

Correct me if I'm wrong, but current SSDs have many more parts susceptible to failure, such as ARM chips, controllers, RAM, etc.
 
Just stay away from quad-level cell (QLC) drives from questionable brand names, and an SSD will outlast its capacity becoming too small!

I prefer drive makers with their own flash, but you can still get decent lifetimes from larger rebranders like Seagate, Inland, and Kingston.
 
A few years ago, I lost all my files when one of my SATA SSDs in RAID 0 died. The SSD just died outright, not gradually failed like an HDD would. Probably doesn't apply to newer SSDs, though...

Correct me if I'm wrong, but current SSDs have many more parts susceptible to failure, such as ARM chips, controllers, RAM, etc.
True, but again... if you have enough drives all mirroring the data, the likelihood of every drive dying at the same time is tiny; with a ZFS-based setup, you're (probably, most likely) never going to lose your data.
 
How long does a SSD hold storage when not given power though? 2 years? 5 years? 10 years?

Theoretically, some last as little as one year without power before having issues.

Anecdotally, I have powered on an old 240GB Kingston SSD that was idle for 4 years and it had no issues with the data on it.

Edit: shfs37a240g if you care about the exact model I had
 
Well, my Crucial M4 turned 10 years old. I had two of them and both are still alive without SMART errors. That's an MLC drive, and it still gets powered on occasionally, as it holds the Linux boot drive for maintenance work on my dedicated NAS. So well done, Crucial.

I've had SSD failures myself, and I see them all day long as I work in the service business. Most of the deaths come from laptops, where a wild zoo of drives is used, but the cause of the damage is mechanical failure, as laptops are carried around, bent and abused, with shit thermals on top... as everyone wants thin lappies... thin, melting-hot lappies that throttle even on YouTube.

So, all things considered, we can agree that this chart is not comparing apples and oranges... but apples and shoelaces. There are vastly different variables at play: consumer drives vs. enterprise, i.e. how much space is reserved for reallocation and how well cooled they are; MLC/SLC versus new-gen multi-bit cells; and controller failures, which are a different topic altogether. They can't simply be compared to make the claim that SSDs may be more reliable. In my view they are the same: it just depends on how you use them, and you can manage to kill either of them.

In the end... the topic of RAID emerged. I'm not sure how well ZFS is actually tailored towards NAND, whether it supports TRIM and manages data rot; it is more tailored towards spinners. I haven't looked into it, but in general RAIDing SSDs is a bad idea for consumers, especially RAID 0 (leaving enterprise aside). You make them work worse, as you kill the access time and the 4K and 4K multithread performance. Linear writes do not matter, so quit the e-peen stuff about them.

A good idea is to do a hybrid RAID. BTRFS actually has native support for RAID 1 with an SSD and an HDD: it will do the job on the SSD and later sync up a mirror to the spinner. Doing snapshots is a decent and mature plan B without any RAID magic. But BTRFS RAID is still in a beta stage of sorts, so you have to read up and experiment.

I will say it again.

BACKUP IS YOUR ONLY FRIEND. Better yet, a backup of your backup is your next best friend. I have files on my workstation; I back them up quickly to the NAS SSDs, and they are later mirrored once more to my two spinners, also in RAID 1. Then I kinda feel safe about my data.
 
1. RAID 1 with 1 TB SSDs since 2015, zero issues.
2. Occasional clones to cold-storage drives that are only hot during the actual cloning.

There is no point to running an SSD in RAID 0 unless you have some very very specific justifiable reasons to do so.

There is no reason to not run your SSD in RAID 1 unless you really don't care about the data or just occasionally clone to an external drive.

By external I don't mean a "commercial external" drive; I mean an internal drive that you don't keep physically hooked up to anything, used as a backup drive.

I've been using Paragon Hard Disk Manager for years now. I clone both full disk-to-disk and individual partitions (e.g. 1TB C:\ and 1TB D:\) to an external 4TB. It's worth the money to buy a program like Paragon Hard Disk Manager; the built-in drive tools for Windows just suck. It takes days to set up my computer if I do it from scratch, but less than an hour via a clone. Even better, I can clone from a smaller to a larger drive and from a larger to a smaller drive. Obviously, when cloning a larger drive to a smaller one, the data must not exceed what the smaller drive can hold. The program intelligently readjusts the relative sizes of the partitions so that the data partition uses up the available space (by default), though you can adjust it before committing. I have zero complaints, and having good software has saved me countless hours of aggravation in addition to the multiple RAID 1s I run.
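To make that resize step concrete, here's a minimal Python sketch of the general idea (not Paragon's actual algorithm): every partition must stay at least as large as the data it holds, and the data partition absorbs whatever space is left on the target drive:

```python
# Minimal sketch of resizing partitions when cloning onto a different-sized disk.
# Not Paragon's actual algorithm; just the general idea described above.

def resize_for_clone(partitions, target_total_gb):
    """partitions: list of (size_gb, used_gb); the last entry is the data partition."""
    used = [u for _, u in partitions]
    if sum(used) > target_total_gb:
        raise ValueError("used data exceeds the target drive's capacity")
    # Keep every partition except the last at its original size if possible.
    fixed = [size for size, _ in partitions[:-1]]
    remaining = target_total_gb - sum(fixed)
    if remaining < used[-1]:
        # Not enough room: shrink the fixed partitions down to their used data.
        fixed = list(used[:-1])
        remaining = target_total_gb - sum(fixed)
    return fixed + [remaining]

# Cloning a 2 TB layout (1 TB C: with 400 GB used, 1 TB D: with 600 GB used)
# onto a 4 TB external: D: grows to fill the extra space.
print(resize_for_clone([(1000, 400), (1000, 600)], 4000))   # [1000, 3000]
# Onto a 1.5 TB drive: C: is shrunk toward its used data so everything still fits.
print(resize_for_clone([(1000, 400), (1000, 600)], 1500))   # [400, 1100]
```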
 