Friday, February 2nd 2018

Backblaze Releases Hard Drive Stats for 2017, HGST Most Reliable

Overview
At the end of 2017 we had 93,240 spinning hard drives. Of that number, there were 1,935 boot drives and 91,305 data drives. This post looks at the hard drive statistics of the data drives we monitor. We'll review the stats for Q4 2017, all of 2017, and the lifetime statistics for all of the drives Backblaze has used in our cloud storage data centers since we started keeping track.

Hard Drive Reliability Statistics for Q4 2017
At the end of Q4 2017 Backblaze was monitoring 91,305 hard drives used to store data. For our evaluation we remove from consideration those drives which were used for testing purposes and those drive models for which we did not have at least 45 drives (read why after the chart). This leaves us with 91,243 hard drives. The table below is for the period of Q4 2017.
A few things to remember when viewing this chart:
  • The failure rate listed is for just Q4 2017. If a drive model has a failure rate of 0%, it means there were no drive failures of that model during Q4 2017.
  • There were 62 drives (91,305 minus 91,243) that were not included in the list above because we did not have at least 45 of a given drive model. The most common reason we would have fewer than 45 drives of one model is that we needed to replace a failed drive and we had to purchase a different model as a replacement because the original model was no longer available. We use 45 drives of the same model as the minimum number to qualify for reporting quarterly, yearly, and lifetime drive statistics.
  • Quarterly failure rates can be volatile, especially for models that have a small number of drives and/or a small number of drive days. For example, the Seagate 4 TB drive, model ST4000DM005, has an annualized failure rate of 29.08%, but that is based on only 1,255 drive days and 1 (one) drive failure.
  • AFR stands for Annualized Failure Rate, which is the projected failure rate for a year based on the data from this quarter only; the short sketch after this list shows the calculation.
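For reference, here is a minimal Python sketch of the AFR calculation (the function name is ours; the formula and the ST4000DM005 figures come from the bullets above):

    def annualized_failure_rate(failures: int, drive_days: int) -> float:
        """Project failures observed over a period onto a full year, as a percentage."""
        return failures / drive_days * 365 * 100

    # The ST4000DM005 example above: 1 failure over 1,255 drive days.
    print(f"{annualized_failure_rate(1, 1255):.2f}%")  # prints 29.08%
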
Bulking Up and Adding On Storage
Looking back over 2017, we not only added new drives, we "bulked up" by swapping out functional but smaller 2, 3, and 4TB drives for larger 8, 10, and 12TB drives. The changes in drive quantity by quarter are shown in the chart below:
For 2017 we added 25,746 new drives, and lost 6,442 drives to retirement for a net of 19,304 drives. When you look at storage space, we added 230 petabytes and retired 19 petabytes, netting us an additional 211 petabytes of storage in our data center in 2017.
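As a quick sanity check on that arithmetic, a minimal Python sketch using only the figures in the paragraph above:

    # Net fleet growth in 2017, from the figures above.
    drives_added, drives_retired = 25_746, 6_442
    petabytes_added, petabytes_retired = 230, 19

    print(drives_added - drives_retired)        # 19304 net drives
    print(petabytes_added - petabytes_retired)  # 211 net petabytes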

2017 Hard Drive Failure Stats
Below are the lifetime hard drive failure statistics for the hard drive models that were operational at the end of Q4 2017. As with the quarterly results above, we have removed any non-production drives and any models that had fewer than 45 drives.
The chart above gives us the lifetime view of the various drive models in our data center. The Q4 2017 chart at the beginning of the post gives us a snapshot of the most recent quarter of the same models.

Let's take a look at the same models over time, in our case over the past 3 years (2015 through 2017), by looking at the annual failure rates for each of those years.
The failure rate for each year is calculated for just that year. Looking at the results, we can make the following observations:

  • The failure rates for both of the 6 TB models, Seagate and WDC, have decreased over the years while the number of drives has stayed fairly consistent from year to year.
  • While it looks like the failure rates for the 3 TB WDC drives have also decreased, note that we migrated out nearly 1,000 of these WDC drives in 2017. While the remaining 180 WDC 3 TB drives are performing very well, shrinking the data set that dramatically makes trend analysis suspect.
  • The Toshiba 5 TB model and the HGST 8 TB model had zero failures over the last year. That's impressive, but with only 45 drives in use for each model, it's not statistically useful.
  • The HGST/Hitachi 4 TB models delivered sub-1.0% failure rates for each of the three years. Amazing.

A Few More Numbers
To save you countless hours of looking, we've culled through the data to uncover the following tidbits regarding our ever-changing hard drive farm (a quick arithmetic check follows the list).
  • 116,833 - The number of hard drives for which we have data from April 2013 through the end of December 2017. Currently there are 91,305 data drives in operation. This means 25,528 drives have either failed or been removed from service for some other reason - typically migration.
  • 29,844 - The number of hard drives that were installed in 2017. This includes new drives, migrations, and failure replacements.
  • 81.76 - The average number of hard drives installed each day in 2017. This includes new drives, migrations, and failure replacements.
  • 95,638 - The number of drives installed since we started keeping records in April 2013 through the end of December 2017.
  • 55.41 - The average number of hard drives installed per day from April 2013 to the end of December 2017. The installations can be new drives, migration replacements, or failure replacements.
  • 1,508 - The number of hard drives that were replaced as failed in 2017.
  • 4.13 - The average number of hard drives that have failed each day in 2017.
  • 6,795 - The number of hard drives that have failed from April 2013 until the end of December 2017.
  • 3.94 - The average number of hard drives that have failed each day from April 2013 until the end of December 2017.
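As a quick check on the per-day averages, a minimal Python sketch; the 2017 figures divide by 365, and the lifetime figures imply a span of roughly 1,726 days from April 2013 through the end of December 2017 (our assumption, back-calculated from the averages above):

    SPAN_DAYS = 1_726  # assumed days from April 2013 through December 2017

    print(f"{29_844 / 365:.2f}")        # 81.76 installs per day in 2017
    print(f"{1_508 / 365:.2f}")         # 4.13 failures per day in 2017
    print(f"{95_638 / SPAN_DAYS:.2f}")  # 55.41 installs per day, lifetime
    print(f"{6_795 / SPAN_DAYS:.2f}")   # 3.94 failures per day, lifetime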
Source: Backblaze

68 Comments on Backblaze Releases Hard Drive Stats for 2017, HGST Most Reliable

#2
dozenfury
The article mentions a disclaimer about the volatility, but I think you can probably disregard any row with less than 100k drive hours (ST4000DM005 for example). Some of those sample sizes are way too small to give much credence to. But even with taking that into account it's pretty clear overall that HGST is way ahead on these reliability numbers. Years ago HGST drives had some pretty widespread reliability issues, so it's good to see that they've apparently reversed that trend and moved to the top in that area.
Posted on Reply
#3
AnarchoPrimitiv
HGST has CONTINUALLY been the most reliable for a while now; that's why I only use their He10 drives in my NAS
Posted on Reply
#4
newtekie1
Semi-Retired Folder
Once again, data that people think tells them something, but because of how the drives are used and what BackBlaze considers a "failure", actually means nothing to normal consumers...

Oh, and there is also the fact that their failure rate calculations make no f'n sense. Tell me how they have 60 Seagate ST4000DM005 drives, and 1 ST4000DM005 failure, and get a 29.08% failure rate for that drive. Yet they have 45 WD40EFRX drives, and 1 WD40EFRX failure, and that's only an 8.87% failure rate. What maths are they using here? Because by my math, that's a 1.6% and 2.2% failure rate, respectively.
Posted on Reply
#5
natr0n
Cue the Seagate defenders.
Posted on Reply
#6
Steevo
CrAsHnBuRnXp: Note to self, never buy a Seagate.
If that is all you took from this, I feel bad for you son. Normalization is needed, and from a quick look at the numbers, Seagate isn't bad, they just aren't great.


I am running HGST drives on my RAID card.
Posted on Reply
#7
CrAsHnBuRnXp
Steevo: If that is all you took from this, I feel bad for you son.
I got 99 problems but a hard drive ain't one.
Posted on Reply
#8
Steevo
CrAsHnBuRnXp: I got 99 problems but a hard drive ain't one.
Graphics card pricing
Dram pricing

Ain't it the shit.
Posted on Reply
#9
yotano211
I never had a Seagate drive; I only ever bought Samsung or HGST drives. This year will be my first year with a Seagate drive, after one of my Samsung 2TB laptop drives developed bad sectors. I had those Samsung 2TB drives for over 5 years.
Posted on Reply
#10
Readlight
Looks like my old OEM WD drive is starting to wear out after 10 years; sometimes it won't show up in Explorer.
Posted on Reply
#11
jagjitnatt
newtekie1: Once again, data that people think tells them something, but because of how the drives are used and what BackBlaze considers a "failure", actually means nothing to normal consumers...

Oh, and there is also the fact that their failure rate calculations make no f'n sense. Tell me how they have 60 Seagate ST4000DM005 drives, and 1 ST4000DM005 failure, and get a 29.08% failure rate for that drive. Yet they have 45 WD40EFRX drives, and 1 WD40EFRX failure, and that's only an 8.87% failure rate. What maths are they using here? Because by my math, that's a 1.6% and 2.2% failure rate, respectively.
Their calculations take into account the combined days the drives worked, so even if the same number of drives failed, the model whose drives lasted longer gets a lower rate.
It's not a raw failure rate they calculate, but an annualized failure rate.

So for Seagate that's:

1 drive * (365/1255 drive days) * 100 = 29.08 percent

Had the drives run for 10,000 days combined, the failure rate would have been 3.65%.
Posted on Reply
#12
newtekie1
Semi-Retired Folder
jagjitnatt: Their calculations take into account the combined days the drives worked, so even if the same number of drives failed, the model whose drives lasted longer gets a lower rate.
It's not a raw failure rate they calculate, but an annualized failure rate.

So for Seagate that's:

1 drive * (365/1255 drive days) * 100 = 29.08 percent

Had the drives run for 10,000 days combined, the failure rate would have been 3.65%.
Except that isn't how you calculate part failure rates. You take the number of failures divided by the total number of parts.
Posted on Reply
#13
jagjitnatt
newtekie1: Except that isn't how you calculate part failure rates. You take the number of failures divided by the total number of parts.
Not really, because everything eventually fails. It's how long it took to fail that matters.
As per your calculation, a hard drive that failed in a day is as good as one that failed after 10 years.
Posted on Reply
#14
newtekie1
Semi-Retired Folder
jagjitnatt: Not really, because everything eventually fails. It's how long it took to fail that matters.
As per your calculation, a hard drive that failed in a day is as good as one that failed after 10 years.
Yep, that is why you are supposed to have a suitably long sample window when doing studies like this. You select a start point, monitor parts put into service after that point, and pick a set "expected lifespan" for those parts. Then you take the number of parts that fail before they reach that expected life, and that is your failure rate.

In their method, a drive could fail and be replaced after 3 years in service, and it increases the failure rate the same as if it had died on the first day and been replaced.
Posted on Reply
#15
dir_d
I use Toshiba NAS drives. They haven't let me down in 4 years so far.
Posted on Reply
#16
cdawall
where the hell are my stars
These studies from Backblaze are always worthless.

Hey guys, I have 94,000 consumer-grade drives running in a commercial environment.
Posted on Reply
#17
Prima.Vera
Just came here to confirm that Seagate drives are uber crap.
Not disappointed.
Posted on Reply
#18
FordGT90Concept
"I go fast!1!11!1!"
newtekie1: Oh, and there is also the fact that their failure rate calculations make no f'n sense. Tell me how they have 60 Seagate ST4000DM005 drives, and 1 ST4000DM005 failure, and get a 29.08% failure rate for that drive. Yet they have 45 WD40EFRX drives, and 1 WD40EFRX failure, and that's only an 8.87% failure rate. What maths are they using here? Because by my math, that's a 1.6% and 2.2% failure rate, respectively.
Likely based on operational hours. The WDs were in service significantly longer, so the Seagate failures are statistically more significant.
Posted on Reply
#19
Ahhzz
newtekie1: Once again, data that people think tells them something, but because of how the drives are used and what BackBlaze considers a "failure", actually means nothing to normal consumers...

Oh, and there is also the fact that their failure rate calculations make no f'n sense. Tell me how they have 60 Seagate ST4000DM005 drives, and 1 ST4000DM005 failure, and get a 29.08% failure rate for that drive. Yet they have 45 WD40EFRX drives, and 1 WD40EFRX failure, and that's only an 8.87% failure rate. What maths are they using here? Because by my math, that's a 1.6% and 2.2% failure rate, respectively.
Once again, this is data I like to have. I leave both my server and my gaming PC on all day long, and the server (as a data server for the house) sees constant use, whether it's her watching a show or a movie, moving files, saving craft patterns, me moving files to or from the office, streaming, or tons of other traffic. My game box downloads updates, email, etc. all day, and sees heavy use at night and early in the morning. I want to see how consumer drives fare with heavy use. They're also confirming what most of us who use, purchase, and resell bare drives believe is accurate: Seagate drives don't last, WDs are good, and HGSTs have been solid for several years.
Posted on Reply
#20
newtekie1
Semi-Retired Folder
Ahhzz: Once again, this is data I like to have. I leave both my server and my gaming PC on all day long, and the server (as a data server for the house) sees constant use, whether it's her watching a show or a movie, moving files, saving craft patterns, me moving files to or from the office, streaming, or tons of other traffic. My game box downloads updates, email, etc. all day, and sees heavy use at night and early in the morning. I want to see how consumer drives fare with heavy use. They're also confirming what most of us who use, purchase, and resell bare drives believe is accurate: Seagate drives don't last, WDs are good, and HGSTs have been solid for several years.
I literally have a 12-bay 2U rackmount enclosure sitting in my home office, connected to my home server. This data is marginally useful even for me, because I'm not stupid enough to use desktop drives in my RAID arrays, but it is useless for normal consumers.

And at the end of the day, the drive model is more important than the manufacturer. Remember, BackBlaze has admitted they have completely thrown out data for WD models that had 100% failure rates. So, if you are gathering that WD drives are good from their data, you're drawing an inaccurate conclusion (thanks to inaccurate data).

I've dealt with a crap load of hard drives, and the model is what matters, not the manufacturer.
Posted on Reply
#21
dj-electric
newtekie1: I've dealt with a crap load of hard drives, and the model is what matters, not the manufacturer.
Listen to this guy, he knows what he is talking about.
Posted on Reply
#22
NdMk2o1o
Meh, I've hardly had any drives die on me over the years. And yes, I have a Seagate Barracuda 1TB (storage and programs) in my main rig (boot drive is an SSD) that's getting on for a few years old now with 0 problems. It's the luck of the draw most of the time when it comes to HDD failure; in all likelihood, if a drive doesn't die within the first few months of normal usage it will go on to lead a perfectly healthy lifespan.
Agree with newtekie1, as historically certain models/revisions have been plagued with higher failure rates; this can be said for most manufacturers, though, and isn't specific to one brand.
Posted on Reply
#23
chaosmassive
I have an HGST 500GB from 2012 that's still running strong today,
albeit with a few reallocated sectors
Posted on Reply
#24
Antykain
I've used a number of Seagate HDs over the years; the oldest Seagate HD I still have in use atm is an ST3500418AS 500GB, which has been in service since 2009, plus a 320GB ST3320620AS which has been in service since 2006. Anywho, I have a couple of 1TB and 4TB Seagate HDs, along with the other WD and HGST drives as well.. All of which now have their home in my media server, DC rigs, and/or are just sitting around waiting to be used again somewhere in the future. My main 'gaming' rig is using only SSDs.

</knockonwood> So far I've never had a Seagate failure over my years of using them. I'm not a Seagate fanboy by any means and actually like and prefer HGST drives.. But I can honestly say I've never had a Seagate drive failure. Luck of the draw some would say... I say, I'll take it.

Now watch.. now that I've said my piece on never having a Seagate fail on me, they will all take massive dumps on my chest.. figuratively speaking, of course. I mean come on now.. hard drives can't physically take dumps, per se. Let alone taking a dump on my chest! My wife is the only one.. wait, nevermind. right!
Posted on Reply
#25
eidairaman1
The Exiled Airman
Never had a Seagate go bust on me; Maxtor, IBM, and WD I've had failures from.

I had a 60GB Hitachi fail as an external unit for a laptop.

When Hitachi bought the Deskstar line from IBM, they turned it around, and the series was no longer called "Deathstar" but Deskstar.
Posted on Reply