Friday, February 2nd 2018

Backblaze Releases Hard Drive Stats for 2017, HGST Most Reliable

Overview
At the end of 2017 we had 93,240 spinning hard drives. Of that number, there were 1,935 boot drives and 91,305 data drives. This post looks at the hard drive statistics of the data drives we monitor. We'll review the stats for Q4 2017, all of 2017, and the lifetime statistics for all of the drives Backblaze has used in our cloud storage data centers since we started keeping track.

Hard Drive Reliability Statistics for Q4 2017
At the end of Q4 2017 Backblaze was monitoring 91,305 hard drives used to store data. For our evaluation we remove from consideration those drives which were used for testing purposes and those drive models for which we did not have at least 45 drives (read why after the chart). This leaves us with 91,243 hard drives. The table below is for the period of Q4 2017.
A few things to remember when viewing this chart:
  • The failure rate listed is for just Q4 2017. If a drive model has a failure rate of 0%, it means there were no drive failures of that model during Q4 2017.
  • There were 62 drives (91,305 minus 91,243) that were not included in the list above because we did not have at least 45 of a given drive model. The most common reason we would have fewer than 45 drives of one model is that we needed to replace a failed drive and we had to purchase a different model as a replacement because the original model was no longer available. We use 45 drives of the same model as the minimum number to qualify for reporting quarterly, yearly, and lifetime drive statistics.
  • Quarterly failure rates can be volatile, especially for models that have a small number of drives and/or a small number of drive days. For example, the Seagate 4 TB drive, model ST4000DM005, has an annualized failure rate of 29.08%, but that is based on only 1,255 drive days and 1 (one) drive failure.
  • AFR stands for Annualized Failure Rate, which is the projected failure rate for a year based on the data from this quarter only.
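As a quick illustration, here is that AFR calculation in a few lines of Python (a sketch based on the formula as described above; it reproduces the ST4000DM005 figure quoted in the list):

```python
def annualized_failure_rate(failures: int, drive_days: int) -> float:
    """Project failures over a period to a yearly rate, as a percentage."""
    drive_years = drive_days / 365
    return failures / drive_years * 100

# The ST4000DM005 example quoted above: 1 failure over 1,255 drive days.
print(f"{annualized_failure_rate(1, 1_255):.2f}%")  # -> 29.08%
```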
Bulking Up and Adding On Storage
Looking back over 2017, we not only added new drives, we "bulked up" by swapping out functional but smaller 2, 3, and 4 TB drives for larger 8, 10, and 12 TB drives. The changes in drive quantity by quarter are shown in the chart below:
For 2017 we added 25,746 new drives, and lost 6,442 drives to retirement for a net of 19,304 drives. When you look at storage space, we added 230 petabytes and retired 19 petabytes, netting us an additional 211 petabytes of storage in our data center in 2017.

2017 Hard Drive Failure Stats
Below are the lifetime hard drive failure statistics for the hard drive models that were operational at the end of Q4 2017. As with the quarterly results above, we have removed any non-production drives and any models that had fewer than 45 drives.
The chart above gives us the lifetime view of the various drive models in our data center. The Q4 2017 chart at the beginning of the post gives us a snapshot of the most recent quarter of the same models.

Let's take a look at the same models over time, in our case over the past 3 years (2015 through 2017), by looking at the annual failure rates for each of those years.
The failure rate for each year is calculated for just that year. Looking at the results, the following observations can be made:

  • The failure rates for both of the 6 TB models, Seagate and WDC, have decreased over the years while the number of drives has stayed fairly consistent from year to year.
  • While it looks like the failure rates for the 3 TB WDC drives have also decreased, you'll notice that we migrated out nearly 1,000 of these WDC drives in 2017. While the remaining 180 WDC 3 TB drives are performing very well, shrinking the data set that dramatically makes trend analysis suspect.
  • The Toshiba 5 TB model and the HGST 8 TB model had zero failures over the last year. That's impressive, but with only 45 drives in use for each model, it's not statistically useful.
  • The HGST/Hitachi 4 TB models delivered sub-1.0% failure rates for each of the three years. Amazing.

A Few More Numbers
To save you countless hours of looking, we've culled through the data to uncover the following tidbits regarding our ever-changing hard drive farm.
  • 116,833 - The number of hard drives for which we have data from April 2013 through the end of December 2017. Currently there are 91,305 data drives in operation. This means 25,528 drives have either failed or been removed from service for some other reason - typically migration.
  • 29,844 - The number of hard drives that were installed in 2017. This includes new drives, migrations, and failure replacements.
  • 81.76 - The average number of hard drives installed each day in 2017. This includes new drives, migrations, and failure replacements.
  • 95,638 - The number of drives installed since we started keeping records in April 2013 through the end of December 2017.
  • 55.41 - The average number of hard drives installed per day from April 2013 to the end of December 2017. The installations can be new drives, migration replacements, or failure replacements.
  • 1,508 - The number of hard drives that were replaced as failed in 2017.
  • 4.13 - The average number of hard drives that failed each day in 2017.
  • 6,795 - The number of hard drives that have failed from April 2013 until the end of December 2017.
  • 3.94 - The average number of hard drives that have failed each day from April 2013 until the end of December 2017.
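These averages check out against the totals; here is a quick sketch (Backblaze doesn't give the exact April 2013 start date, so the ~1,726-day window below is inferred from their own 55.41-per-day figure):

```python
days_2017 = 365
# "April 2013 through the end of December 2017": the exact start day isn't
# given, but 95,638 installs at 55.41 per day implies a window of ~1,726 days.
days_since_2013 = 1726

print(f"{29_844 / days_2017:.2f}")        # 81.76 installs per day in 2017
print(f"{1_508 / days_2017:.2f}")         # 4.13 failures per day in 2017
print(f"{95_638 / days_since_2013:.2f}")  # 55.41 installs per day since 2013
print(f"{6_795 / days_since_2013:.2f}")   # 3.94 failures per day since 2013
```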
Source: Backblaze

68 Comments on Backblaze Releases Hard Drive Stats for 2017, HGST Most Reliable

#51
Keullo-e
S.T.A.R.S.
Never Seagate for me. I used WD drives when I used HDDs.
Posted on Reply
#52
cdawall
where the hell are my stars
Chloe Price: Never Seagate for me. I used WD drives when I used HDDs.
Just a little side note on this: for drive failures I saw in person, nothing, and I mean nothing, beats the WD Blue 5400 RPM and WD Green drives. I really didn't see that many failed Seagate/HGST drives.
Posted on Reply
#53
newtekie1
Semi-Retired Folder
cdawall: A couple years back they said it was because HGST/WD couldn't keep up with supply; Seagate can.
Yeah, I remember them saying WD had supply issues because of the floods. But that is long over, and I don't believe WD has had any supply issues in the past couple of years. So the fact that they are using fewer than 1,000 WD drives points to some issue other than supply.
Posted on Reply
#54
Vya Domus
Consumer hard drives are all shit. They will all fail on you, no matter the brand/model, within margins so small that it doesn't even matter to the average consumer. Why the intense fanboyism?

Part of the reason why we should move faster towards all solid state drives.
Posted on Reply
#56
newtekie1
Semi-Retired Folder
Batou1986: Reminder: Backblaze bought a bunch of enterprise drives a few years back, and the enterprise drives had a higher failure rate, so any "MUH CONSUMER DRIVES IN A DATACENTER" cries are invalid.
www.backblaze.com/blog/enterprise-drive-reliability/
No, it only looks that way because they used a small sample size of enterprise drives and used their BS method to calculate failure rates. Any time you have a small sample size with low working time, any failures you have make the failure rate look much worse than it is. And that is the problem with their calculation method.
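To put rough numbers on that, here's a quick sketch (the 1,255-drive-day case is the ST4000DM005 example from the article; the 10,000-drive fleet is made up for comparison):

```python
def afr(failures, drive_days):
    # Annualized failure rate, as a percentage.
    return failures / (drive_days / 365) * 100

print(f"{afr(1, 1_255):.2f}%")        # 29.08% -- 1 failure, tiny fleet, few days
print(f"{afr(1, 10_000 * 90):.3f}%")  # 0.041% -- the same single failure across
                                      # 10,000 drives over a 90-day quarter
```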
Posted on Reply
#57
Vya Domus
Batou1986: Reminder: Backblaze bought a bunch of enterprise drives a few years back, and the enterprise drives had a higher failure rate, so any "MUH CONSUMER DRIVES IN A DATACENTER" cries are invalid.
www.backblaze.com/blog/enterprise-drive-reliability/
The more sample data, the more relevant the statistics. That being said, this doesn't even come close to telling the whole story.
Posted on Reply
#58
repman244
Batou1986: Reminder: Backblaze bought a bunch of enterprise drives a few years back, and the enterprise drives had a higher failure rate, so any "MUH CONSUMER DRIVES IN A DATACENTER" cries are invalid.
www.backblaze.com/blog/enterprise-drive-reliability/
But when you read that you get to this bit: "It turns out that the consumer drive failure rate does go up after three years, but all three of the first three years are pretty good. We have no data on enterprise drives older than two years, so we don’t know if they will also have an increase in failure rate. It could be that the vaunted reliability of enterprise drives kicks in after two years, but because we haven’t seen any of that reliability in the first two years, I’m skeptical."

And it's not just about reliability, but features like TLER, SAS options, etc.
Posted on Reply
#59
Fahim
HGST has always made the most reliable drives for me. Currently using about 20 x 8 TB He8 and 4 x 10 TB He10 drives. Will get some more soon.
Posted on Reply
#60
sutyi
silkstone: I just got a Seagate 5 TB slim drive for my media server. From the reviews, the failure rate seemed a little higher than the WD ones on sale, but then I didn't know if more Seagate units were sold or not. At the end of the day, any hard drive can go bad; I've had WD, SG, Toshiba, and Maxtor drives die before with seemingly no rhyme nor reason.
True.

Have an old 40 GB Maxtor Fireball 3, you know, the slim one (the hotplate series), still going in a friend's old PC.
The thing runs at like 60°C when operating and has been like that for the past 12 years...
Posted on Reply
#61
Ubersonic
cdawall: These studies from Backblaze are always worthless
Indeed, they take a bunch of consumer-grade HDDs and subject them to a 24/7 torture test until they die. It's not a real-world scenario, and the data is essentially worthless, as it only tells you what will happen if you buy a bunch of consumer-grade HDDs and subject them to a 24/7 torture test until they die lol.
Posted on Reply
#62
cdawall
where the hell are my stars
Ubersonic: Indeed, they take a bunch of consumer-grade HDDs and subject them to a 24/7 torture test until they die. It's not a real-world scenario, and the data is essentially worthless, as it only tells you what will happen if you buy a bunch of consumer-grade HDDs and subject them to a 24/7 torture test until they die lol.
It does show over and over again that WD can't hack it. How many years in a row have they had to note that their drives had a 100% failure rate and were not included? :roll:
Posted on Reply
#63
Keullo-e
S.T.A.R.S.
cdawall: Just a little side note on this: for drive failures I saw in person, nothing, and I mean nothing, beats the WD Blue 5400 RPM and WD Green drives. I really didn't see that many failed Seagate/HGST drives.
Still running a WD Green 500 GB SATA-II with 45,127 hours.
Posted on Reply
#64
newtekie1
Semi-Retired Folder
Chloe Price: Still running a WD Green 500 GB SATA-II with 45,127 hours.
And statistically, most people will have drives that last a long time. Remember, when Google released their hard drive study, it showed that even after 5 years of constant use their drive failure rate was not above 10% for any age of drive. That means the vast majority of drives were still running after 5 years. So you are way more likely to find a person who says "I have XYZ from years ago that still runs fine." While some people would have you believe that drives have crazy high failure rates and all fail within a year or so, that just isn't true. You're far more likely to have a hard drive that lasts a long time than one that fails before you replace the computer it is in.
Posted on Reply
#65
Keullo-e
S.T.A.R.S.
And people are more likely to report failed drives than working ones.
Posted on Reply
#66
cdawall
where the hell are my stars
newtekie1: And statistically, most people will have drives that last a long time. Remember, when Google released their hard drive study, it showed that even after 5 years of constant use their drive failure rate was not above 10% for any age of drive. That means the vast majority of drives were still running after 5 years. So you are way more likely to find a person who says "I have XYZ from years ago that still runs fine." While some people would have you believe that drives have crazy high failure rates and all fail within a year or so, that just isn't true. You're far more likely to have a hard drive that lasts a long time than one that fails before you replace the computer it is in.
WD and Seagate, the two biggest hard drive manufacturers, maintain the best RMA-cost-to-profit numbers in the entire tech industry; it's in the low 1% range. So, like you said, most drives aren't failing, period.
Posted on Reply
#67
John Naylor
Oh geez... Backblaze again... any discussion of Backblaze in relation to consumer drives is simply irrelevant. When a "source" takes consumer drives and puts them in a service contrary to the manufacturer's recommendations, the data is irrelevant. When a server farm is a series of PC cases on flimsy shelving with the drives held in place by rubber bands, the data is irrelevant. When you place consumer drives, which have features such as "head parking" that the manufacturer advises against using in a server environment, the data is irrelevant. How many people would bother to read an article titled "here's the latest scoop on the reliability of devices installed in direct conflict with the manufacturer's specifications"?

Many consumer drives include a feature called head parking. What this means is that when the HD is not in use, the head is moved "off platter" and "parked". The feature serves well in consumer and office environments, in instances where, for example, a colleague bumps ya desk while carrying a box of copy paper, or plops it on your desk while loading the machine, or when ya dog, napping under ya desk, jumps up when the doorbell rings. When the heads are parked, no damage will occur from the vibration. Consumer HDs are rated for between 250k and 500k parking cycles. When the HD is idle, or when writes are being held in RAM or the HD cache, the head moves to the parked position. A typical consumer drive might see as many as 25,000 - 50,000 parking cycles per year... maybe as many as 100k for an enthusiast box... in which case you hopefully didn't cheap out in your HD selection.
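Put rough numbers on it and the problem is clear (a sketch; the rating and the consumer rate are from the figures above, while the server-side rate is purely hypothetical, picked to illustrate near-constant access):

```python
rated_cycles = 300_000       # consumer drives: rated for 250k-500k load cycles
consumer_per_year = 50_000   # heavy consumer use, per the figures above
server_per_year = 1_500_000  # hypothetical: near-constant access in a server

print(rated_cycles / consumer_per_year)     # 6.0 -> years of heavy consumer use
print(12 * rated_cycles / server_per_year)  # 2.4 -> months in a busy server
```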

Now if ya take that same exact physical drive and use it in a server environment, it will have different firmware and it will not have the head parking "feature". This is because server drives get many times more data access requests; they can therefore use up those rated parking cycles in a matter of months. Because of economies of scale, that same drive might be sold as a consumer device for $70... as a server drive it is much more expensive. What Backblaze does, with no worries about data protection given their redundancies, is buy consumer drives instead of server drives because they are cheaper. And because the drives are replaced so often, they were secured in place only by rubber bands... though hopefully they've moved away from this silliness by now. Backblaze sells their service on price, so proper server room design just isn't there: no building designed with thick concrete floors, no racks firmly secured in place to prevent vibration.

So what happens is... the very feature which extends the life of a consumer drive is what's actually killing these drives when they're inappropriately placed in a server environment. Alternatively, we do have actual published RMA data readily available telling us what % of consumer drives are actually being RMA'd. The data is collected and published every 6 months, covering drives that failed during 6 to 12 months of operation. While this data doesn't tell us what % of drives might fail over their full warranty periods, it is statistically relevant, as all mechanical drives should follow the same failure/time curve. Besides, what value is lifetime data? By the time ya get it, it's irrelevant, as those drives are not on the market anymore. And it also eliminates DOAs, which can result from issues outside the manufacturer's control, such as user error and mishandling.

To avoid statistical anomalies, I always look at the data for the last two periods... and ya know what... there's not a lot of difference between manufacturers, but there are huge differences between models. If ya look at storagereview.com's historical database, you will see that Seagate has the honor of delivering both the most and the least reliable drives. Anyway, here's the combined data for the last 2 reporting periods (12 months):
  • HGST 0.975%
  • Seagate 0.825%
  • Toshiba 0.93%
  • Western Digital 1.15%
Not exactly a Secretariat-like win here... So it's not so much a matter of which brand but which model. Just avoid the duds and you're OK. Among the individual "winners" in the dud (> 2% failures) category are:
  • 10.00% Seagate Desktop HDD 6 TB
  • 6.78% Seagate Enterprise NAS HDD 6 TB
  • 5.08% WD Black 3 TB
  • 4.70% Toshiba DT01ACA300 3 TB
  • 3.48% Seagate Archive HDD 8 TB
  • 3.48% Hitachi Travelstar 5K1000 1 TB
  • 3.42% Toshiba X300 5 TB
  • 3.37% WD Red WD60EFRX 6 TB
  • 3.06% WD Red Pro WD4001FFSX 4 TB
  • 3.04% WD Black WD3003FZEX
  • 2.95% WD Red 4 TB SATA 6 Gb/s
  • 2.89% Toshiba DT01ACA300
  • 2.81% Seagate IronWolf 4 TB
  • 2.67% WD Green WD60EZRX
  • 2.49% WD Purple Video Surveillance 4 TB
  • 2.39% Toshiba DT01ACA200
  • 2.37% WD Purple WD40PURX
  • 2.29% Seagate Enterprise NAS HDD ST3000VN0001
  • 2.23% WD Red Pro WD3001FFSX
  • 2.18% WD Green WD30EZRX
  • 2.02% WD Red WD40EFRX
If you're counting, and counting each model once, that's 5 for Seagate, 10 for WD, 3 for Toshiba, and just 1 for Hitachi.

WD has about 40% market share and produced 10 duds, or 2.5 duds per 10% of market share.
SG has about 37% market share and produced 5 duds, or about 1.4 duds per 10% of market share.
TS has about 23% market share and produced 3 duds, or about 1.3 duds per 10% of market share.
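Here's that arithmetic as a quick sketch (market-share figures as quoted above; dud counts per the tally, counting each model once):

```python
duds_and_share = {"WD": (10, 40), "Seagate": (5, 37), "Toshiba": (3, 23)}
for brand, (duds, share_pct) in duds_and_share.items():
    # Duds per 10 percentage points of market share.
    print(f"{brand}: {duds / (share_pct / 10):.2f}")
# WD: 2.50, Seagate: 1.35, Toshiba: 1.30
```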

Does that have any significance? Well, if ya avoid the duds, then no. The fact is, if ya avoid the duds, your chances are just about 1 in 100 that you will experience a drive failure between 6 and 12 months. Over the last 8 reporting periods (4 years), manufacturers of consumer drives have broken the 1.00% failure-rate ceiling in only 17 out of 32 instances:

Seagate = 0 (0.60 - 0.95%)
HGST = 5 (0.60 - 1.13%)
Toshiba = 6 (0.80 - 1.54%)
WD = 6 (0.90 - 1.26%)

Now let's not look at this as a big win for Seagate; the ranges of numbers over those 8 periods are indicated in parentheses. So, yet again, with regard to consumer drives used in a consumer environment, there is no evidence justifying any vast claim of superiority of one HD brand over another. While an argument, though not a conclusive one, could be made that over the last 4 years Seagate has fared better overall, from best to worst over the last year we are talking 8 versus 11 failures per 1,000 drives per year, and that is not a big enough difference to lie outside the realm of normal statistical variation.
Posted on Reply
#68
TheGuruStud
Another day, another failed Seagate came to me. Got an i7 all-in-one from the chopping block. The drive doesn't even spin. Took it apart and found the prime suspect.
Posted on Reply