Friday, February 2nd 2018

Backblaze Releases Hard Drive Stats for 2017, HGST Most Reliable

Overview
At the end of 2017 we had 93,240 spinning hard drives. Of that number, there were 1,935 boot drives and 91,305 data drives. This post looks at the hard drive statistics of the data drives we monitor. We'll review the stats for Q4 2017, all of 2017, and the lifetime statistics for all of the drives Backblaze has used in our cloud storage data centers since we started keeping track.

Hard Drive Reliability Statistics for Q4 2017
At the end of Q4 2017 Backblaze was monitoring 91,305 hard drives used to store data. For our evaluation we remove from consideration those drives which were used for testing purposes and those drive models for which we did not have at least 45 drives (read why after the chart). This leaves us with 91,243 hard drives. The table below is for the period of Q4 2017.
A few things to remember when viewing this chart:
  • The failure rate listed is for just Q4 2017. If a drive model has a failure rate of 0%, it means there were no drive failures of that model during Q4 2017.
  • There were 62 drives (91,305 minus 91,243) that were not included in the list above because we did not have at least 45 of a given drive model. The most common reason we would have fewer than 45 drives of one model is that we needed to replace a failed drive and we had to purchase a different model as a replacement because the original model was no longer available. We use 45 drives of the same model as the minimum number to qualify for reporting quarterly, yearly, and lifetime drive statistics.
  • Quarterly failure rates can be volatile, especially for models that have a small number of drives and/or a small number of drive days. For example, the Seagate 4 TB drive, model ST4000DM005, has an annualized failure rate of 29.08%, but that is based on only 1,255 drive days and 1 (one) drive failure.
  • AFR stands for Annualized Failure Rate, which is the projected failure rate for a year based on the data from this quarter only.
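The AFR arithmetic is simple enough to sketch; here is a minimal example using the ST4000DM005 figures above (1 failure over 1,255 drive days):

```python
def annualized_failure_rate(failures: int, drive_days: int) -> float:
    """Project a quarter's failures onto a full year, as a percentage.

    AFR = (failures / drive_days) * 365 * 100
    """
    return failures / drive_days * 365 * 100

# Seagate ST4000DM005 in Q4 2017: 1 failure over 1,255 drive days
print(f"{annualized_failure_rate(1, 1255):.2f}%")  # → 29.08%
```

One failure over so few drive days annualizes to a dramatic-looking 29.08%, which is exactly the volatility the bullet above warns about.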
Bulking Up and Adding On Storage
Looking back over 2017, we not only added new drives, we "bulked up" by swapping out functional but smaller 2, 3, and 4 TB drives for larger 8, 10, and 12 TB drives. The changes in drive quantity by quarter are shown in the chart below:
For 2017 we added 25,746 new drives, and lost 6,442 drives to retirement for a net of 19,304 drives. When you look at storage space, we added 230 petabytes and retired 19 petabytes, netting us an additional 211 petabytes of storage in our data center in 2017.

2017 Hard Drive Failure Stats
Below are the lifetime hard drive failure statistics for the hard drive models that were operational at the end of Q4 2017. As with the quarterly results above, we have removed any non-production drives and any models that had fewer than 45 drives.
The chart above gives us the lifetime view of the various drive models in our data center. The Q4 2017 chart at the beginning of the post gives us a snapshot of the most recent quarter of the same models.

Let's take a look at the same models over time, in our case over the past 3 years (2015 through 2017), by looking at the annual failure rates for each of those years.
The failure rate for each year is calculated for just that year. Looking at the results, the following observations can be made:

  • The failure rates for both of the 6 TB models, Seagate and WDC, have decreased over the years while the number of drives has stayed fairly consistent from year to year.
  • While it looks like the failure rates for the 3 TB WDC drives have also decreased, you'll notice that we migrated out nearly 1,000 of these WDC drives in 2017. While the remaining 180 WDC 3 TB drives are performing very well, decreasing the data set that dramatically makes trend analysis suspect.
  • The Toshiba 5 TB model and the HGST 8 TB model had zero failures over the last year. That's impressive, but with only 45 drives in use for each model, not statistically useful.
  • The HGST/Hitachi 4 TB models delivered sub-1.0% failure rates for each of the three years. Amazing.

A Few More Numbers
To save you countless hours of looking, we've culled through the data to uncover the following tidbits regarding our ever-changing hard drive farm.
  • 116,833 - The number of hard drives for which we have data from April 2013 through the end of December 2017. Currently there are 91,305 drives (data drives) in operation. This means 25,528 drives have either failed or been removed from service for some other reason - typically migration.
  • 29,844 - The number of hard drives that were installed in 2017. This includes new drives, migrations, and failure replacements.
  • 81.76 - The average number of hard drives installed each day in 2017. This includes new drives, migrations, and failure replacements.
  • 95,638 - The number of drives installed since we started keeping records in April 2013 through the end of December 2017.
  • 55.41 - The average number of hard drives installed per day from April 2013 to the end of December 2017. The installations can be new drives, migration replacements, or failure replacements.
  • 1,508 - The number of hard drives that were replaced as failed in 2017.
  • 4.13 - The average number of hard drives that have failed each day in 2017.
  • 6,795 - The number of hard drives that have failed from April 2013 until the end of December 2017.
  • 3.94 - The average number of hard drives that have failed each day from April 2013 until the end of December 2017.
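The per-day figures above are straightforward ratios of the counts; a quick check of the 2017 numbers:

```python
# Counts from the list above
installed_2017 = 29_844   # drives installed in 2017
failed_2017 = 1_508       # drives replaced as failed in 2017

print(round(installed_2017 / 365, 2))  # → 81.76 installed per day
print(round(failed_2017 / 365, 2))     # → 4.13 failed per day

# Drives tracked since April 2013, minus drives currently in operation,
# gives the number that have failed or been migrated out
print(116_833 - 91_305)                # → 25528
```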
Source: Backblaze

68 Comments on Backblaze Releases Hard Drive Stats for 2017, HGST Most Reliable

#26
TheGuruStud
Jfc, Seagate is junk, just deal with it. I can make a 99% accurate guess before I open a computer which drive inside has failed. If it's not a Seagate, then it's a worn-out WD Green (likely killed by head parking). I'll take a Green alllll day over SG. At least it'll last outside warranty, which is useless. You'll never receive a usable drive back from those SG clowns (probably not WD, either).
#28
evernessince
"newtekie1 said:
Except that isn't how you calculate part failure rates. You take the number of failures divided by the total number of parts.
That doesn't work for hard drives, especially in this use case. They know the drives are going to fail; it's just a matter of when. The method you mentioned is a less accurate way of measuring reliability because it excludes how long the drive has been in service. Using your method, one could easily manipulate the numbers so that Seagate looks good. For example, take a 6-year-old array of WD drives and pit them against brand-new Seagate drives. Which drives do you think are going to start to fail first? That's not a fair comparison, yet it's the method you're advocating for.

"TheGuruStud said:
Jfc, Seagate is junk, just deal with it. I can make a 99% accurate guess before I open a computer which drive inside has failed. If it's not a Seagate, then it's a worn-out WD Green (likely killed by head parking). I'll take a Green alllll day over SG. At least it'll last outside warranty, which is useless. You'll never receive a usable drive back from those SG clowns (probably not WD, either).
I remember RMA'ing a drive to Seagate 14 years ago. They kept sending me replacements that would fail within a month. I eventually gave up and bought a WD Black. I now have multiple computers with WD Blacks and Toshiba X300s. I've never had a Black fail, even my first one, which is a 500GB model. I can also vouch for WD's refurbished drives; I bought 2 of them and they are just as good as new. Seagate is likely single-handedly responsible for half of WD Black sales; people were so fed up with their quality that they just decided to buy something that won't fail.

"NdMk2o1o said:
Meh, hardly had any drives die on me over the years, and yes, I have a Seagate Barracuda 1TB (storage and programs) in my main rig (boot drive is SSD) that's getting on for a few years old now with 0 problems. It's the luck of the draw most of the time when it comes to HDD failure; likelihood is if it doesn't die within the first few months of normal usage it will go on to lead a perfectly healthy lifespan.
Agree with Newtekie, as historically certain models/revisions have been plagued with higher failure rates; this can be said for most manufacturers though and isn't specific to one brand.
You don't even need to wait that first month. Simply fill the drive up with data, wipe it, and fill it up again. This will find any error on the disk's surface, test the performance, and stress the mechanics. If the drive is a good one, it won't have any issues.
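A minimal sketch of that fill-and-verify pass, in Python for illustration. It writes to an ordinary file; a real burn-in would target the raw device (e.g. with a destructive `badblocks -wsv` run) and would drop the OS cache between the write and read passes, otherwise you may be verifying RAM rather than the platters:

```python
import hashlib
import os

def fill_and_verify(path: str, total_bytes: int, chunk: int = 1 << 20) -> bool:
    """Fill `path` with a known pattern, then read it back and compare.

    A hash mismatch means the medium corrupted what was written.
    """
    pattern = bytes(range(256)) * (chunk // 256)  # repeating test pattern
    written = hashlib.sha256()
    with open(path, "wb") as f:
        remaining = total_bytes
        while remaining > 0:
            block = pattern[: min(chunk, remaining)]
            f.write(block)
            written.update(block)
            remaining -= len(block)
        f.flush()
        os.fsync(f.fileno())  # force the data out to the device

    read_back = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            read_back.update(block)
    return written.digest() == read_back.digest()

# e.g. fill_and_verify("/mnt/newdrive/burnin.bin", 10 * 1024**3)
```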
#29
TheLostSwede
"dir_d said:
I use Toshiba NAS drives. Haven't let me down in 4 years so far
Same here, knock on wood...

I think my last really bad experience with a hard drive was WD, as I had one fail, got a replacement that failed almost instantly and then got a replacement for that. That said, this was well over 10 years ago.

Back in the not-so-good old days, with Conner and Quantum/Maxtor (they merged at one point), it was much more likely you'd see drive failures, as they were bottom-of-the-barrel products, imho and in my experience. Sure, they were cheap, but oh so unreliable. Guess who bought both companies?
#30
lZKoce
I haven't had that many drives in my life. In the first PC I assembled, the drive was a Seagate. It failed after 3 months of watching movies on it. The warranty replacement, though, is now 6 years old with no problems. I had two WD Blue laptop drives fail, so I avoid the WD Blue series entirely now. I have WD Black series drives and they are alright, and also an external HGST drive that I am really happy with. I would say that regardless of brand, what matters more is to build backup habits and stick to them. Yesterday the SD card in my phone failed. I hadn't backed up in months, so I lost valuable photos. I am kinda pissed, actually. Next time I will buy a phone with 64GB of internal memory and not bother with SD at all. It's the second SD card less than 6 months old to fail on me: one Lexar and one Kingston so far. Terrible stuff.
#31
Mescalamba
Makes me wonder what went wrong with that one particular Seagate model... a 30% fail rate with so few pieces? Jeez.

On the other hand, it reminds me of the time I bought a cheapo consumer Seagate (1TB) and a WD. The Seagate was half-dead within 6 months and the WD died in 3. That's not to compare those two companies; it's just that regular consumer HDDs are shit, without pardon.

Might buy some HGST, looks interesting... funny enough, a long time ago they didn't exactly have the best reputation. :D If you remember the Hitachi "Deathstar"...
#32
newtekie1
Semi-Retired Folder
"evernessince said:
That doesn't work for hard drives, especially in this use case. They know the drives are going to fail; it's just a matter of when. The method you mentioned is a less accurate way of measuring reliability because it excludes how long the drive has been in service. Using your method, one could easily manipulate the numbers so that Seagate looks good. For example, take a 6-year-old array of WD drives and pit them against brand-new Seagate drives. Which drives do you think are going to start to fail first? That's not a fair comparison, yet it's the method you're advocating for.
Read it again. This is why you measure from the beginning of the part's life, from when it is first put in service to when it dies or when you decide to end your evaluation period. It doesn't matter if it is hard drives or a water pump on a car. Come on, this is standardized failure rate testing procedure here...

Your concern applies equally to their testing and stat reporting method. We don't know how long these drives have been in service. We only know that the pool of drives during the short three-month period these numbers cover provided a certain number of drive days of work. We don't know how old those drives actually are, or how long they had been in service before the stat period started. In their method, if they started using PartA a year before they started their stat recording period, and started using PartB only a month before, then I guarantee you PartB is going to look like it has a lower failure rate. It is for this exact reason that failure rate testing is not done this way. It is measured from the moment a part is put in service to the moment it dies or the moment it reaches the decided "EOL".

Useful information would be how many drives failed in the first, say, 3 years they are in use. That'd be a useful statistic and a proper failure rate number. Not this bullshit "we had 1 drive fail and we're going to call it a 30% failure rate".

Of course, even if they did provide the proper failure rate, BackBlaze's numbers would still be completely meaningless to the average consumer anyway...

"TheGuruStud said:
Jfc, Seagate is junk, just deal with it. I can make a 99% accurate guess before I open a computer which drive inside has failed. If it's not a Seagate, then it's a worn-out WD Green (likely killed by head parking). I'll take a Green alllll day over SG. At least it'll last outside warranty, which is useless. You'll never receive a usable drive back from those SG clowns (probably not WD, either).
I'd take a Seagate desktop/Barracuda drive over a WD Green/Blue any day. The WD Green/Blue drives are garbage. But I'd take a WD Black/Purple/Red/Gold over every Seagate except the ES drives, and I'd take a Seagate ES drive over pretty much any other drive on the market. Those drives are damn near bulletproof. It is about the model of drive, not the brand.

You have to ask yourself: if Seagate were junk, why does BackBlaze, a data storage company, use more Seagate drives than any other brand by a large margin? 74% of their drives are Seagate. That's three times as many as the next manufacturer, HGST, which only accounts for 25% of their drives. WD only accounts for a whole 1% of the drives they use, and they don't even have 1,000 WD drives in service, so it really isn't a good enough sample size to accurately judge how they perform.

"Mescalamba said:
Makes me wonder what went wrong with that one particular Seagate model... a 30% fail rate with so few pieces? Jeez.
It is because they had 60 drives and one failed, so they say that is a 30% failure rate... It's not, but they say it is.
#33
sutyi
"newtekie1 said:
Once again, data that people think tells them something, but because of how the drives are used and what BackBlaze considers a "failure", actually mean nothing to normal consumers...

Oh, and there is also the fact that their failure rate calculations make no f'n sense. Tell me how they have 60 Seagate ST4000DM005 drives, and 1 ST4000DM005 failure, and get a 29.08% failure rate for that drive. Yet they have 45 WD40EFRX drives, and 1 WD40EFRX failure, and that's only an 8.87% failure rate. What maths are they using here? Because by my math, that's a 1.6% and 2.2% failure rate respectively.
ST4000DM005 failure rate:
365 days per year / ( drive days / drives / failures )
365 / ( 1255 / 60 / 1 ) = 17.45 drives replaced annually.
( 17.45 / 60 ) * 100 = 29.083% of drives replaced per year.

HGST HMS5C4040BLE640:
365 / ( 1,369,721 / 14,797 / 17 ) = 67.032 drives replaced annually.
( 67.032 / 14,797 ) * 100 = 0.453% of drives replaced per year.

These are projected defect rates for a year based on how many of said drives they are currently running. Makes sense?
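sutyi's two-step form reduces algebraically to failures / drive days x 365 x 100 (the drive count cancels out), so both routes give the same AFR. A quick check with the figures above:

```python
def afr_two_step(drive_days: float, drives: int, failures: int) -> float:
    """The two-step form: project annual replacements, then divide by the fleet."""
    replaced_per_year = 365 / (drive_days / drives / failures)
    return replaced_per_year / drives * 100

def afr_direct(drive_days: float, failures: int) -> float:
    """The equivalent direct form: failures per drive day, annualized."""
    return failures / drive_days * 365 * 100

# ST4000DM005: 1,255 drive days, 60 drives, 1 failure
print(round(afr_two_step(1255, 60, 1), 2))         # → 29.08
print(round(afr_direct(1255, 1), 2))               # → 29.08

# HGST HMS5C4040BLE640: 1,369,721 drive days, 14,797 drives, 17 failures
print(round(afr_two_step(1369721, 14797, 17), 3))  # → 0.453
print(round(afr_direct(1369721, 17), 3))           # → 0.453
```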
#34
rtwjunkie
PC Gaming Enthusiast
"sutyi said:
These are projected defect rates for a year based on how many of said drives they are currently running. Makes sense?
No. It merely shows how many failed. How long was each of them in service? THAT would be more useful.

Were these failures during infancy (the most common failure mode for HDDs), or after being run for 6 years (also common)? Do we know anything else about these drives other than how many got replaced? No.

I've never understood why so many sheep put so much stock in BB using consumer drives in enterprise environments. Use the right tool for the job. They are using drives in ways they were not intended, and thus not providing any useful info to either consumers or enterprise users.
#35
newtekie1
Semi-Retired Folder
"sutyi said:
These are projected defect rates for a year based on how many of said drives they are currently running. Makes sense?
I understand perfectly, my statement is that they don't provide any useful information doing it that way.

We have no idea how long the drives were already in service before they started tracking these numbers. And guessing at future failure rates based on extremely small sample sizes and short time frames does not provide any sort of accurate data either.

This is specifically why failure rates are not measured this way.

"rtwjunkie said:
I've never understood why so many sheep put so much stock in BB using consumer drives in enterprise environments. Use the right tool for the job. They are using drives in ways they were not intended, and thus not providing any useful info to either consumers or enterprise users.
Also, this, this, and this again! You can try to justify their method of calculating failure rates all you want, but at the end of the day the data is still useless because of how they are using the drives as well as what they consider a failure. They will consider a drive failed if the RAID array marks it failed. But consumer drives in RAID arrays often get falsely marked as failed because consumer drives don't support TLER. They also pack consumer drives into huge multi-drive enclosures, exposing them to heat and vibration they were never designed to encounter.

Again, they have admitted to not including data from Western Digital models that had 100% failure rates. Let that sink in for a second. When has research data ever been accurate when they just threw out data they didn't like? Answer, it hasn't.

Now, before anyone says "OMG, you're just defending Seagate and trying to bash WD by pointing out WD has models with 100% failure rates" again, I'm going to go back to the fact that BackBlaze's findings mean absolutely nothing because they are using desktop drives in enterprise environments. The WD models did not fail because they were bad drives or because WD is a bad company. Those WD models failed because WD desktop drives simply do not like to run in RAID arrays. WD made a very large noise about people using desktop drives in RAID arrays, and were the first to remove TLER from their desktop drives. In fact, it has been speculated that they purposely increased the time it takes their desktop drives to recover from an error specifically to make them largely incompatible with RAID arrays. Again, this isn't a bash on them. These 100%-failure drives work great for their intended purpose, which is running as a single drive in a desktop computer.

If you want an interesting example of why TLER is important and how vibration affects large arrays of hard drives, just watch this video:
https://www.youtube.com/watch?v=tDacjrSCeq4

When a drive encounters vibration issues like this, TLER kicks in and tells the controller that the disk is having an issue completing the commands sent to it in a timely manner. It basically lets the RAID controller know the drive is still working, but that it is taking longer than normal to complete the commands for some reason. I guarantee you that if the drives in that video didn't support TLER, the RAID array would likely have marked the drives responsible for the long wait times as failed, because without TLER the drive just sits there working on the command and never reports back to the RAID controller. RAID controllers don't like this; if they issue a command and it takes too long without any response at all from the drive, they mark the drive failed.
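That interaction can be captured in a toy model (the 7-second TLER limit and 8-second controller timeout below are illustrative values, not any particular drive's or controller's specs):

```python
from typing import Optional

def drive_survives(recovery_seconds: float, tler_limit: Optional[float],
                   controller_timeout: float = 8.0) -> bool:
    """Toy model: does the RAID controller keep the drive or mark it failed?

    Without TLER the drive grinds through error recovery silently; if that
    outlasts the controller's timeout, a healthy drive gets dropped.
    With TLER the drive gives up at `tler_limit` and reports back in time.
    """
    if tler_limit is not None:
        response_time = min(recovery_seconds, tler_limit)  # TLER caps recovery
    else:
        response_time = recovery_seconds  # desktop drive: silent until done
    return response_time < controller_timeout

# A 30-second deep-recovery event, e.g. triggered by vibration:
print(drive_survives(30.0, tler_limit=7.0))   # → True: reports back at 7 s
print(drive_survives(30.0, tler_limit=None))  # → False: controller drops it
```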
#36
Mescalamba
Since we have a retailer that actually publishes failure rates, I can share real stats for those 'cuda drives.

ST4000DM005 - 2.10%

ST4000DM004 - 3.60%

A few more of the latter were sold, which is probably why it has a slightly higher failure percentage. Based on customer reports they're kinda crap, but that's understandable at that price. Still very far from 30%. :D

They don't share stats for models that sold fewer than 100 pieces. Based on the reviews, these actually sell a lot... (I guess that low price does its magic).

Yea, and those percentages are for drives that were accepted as RMA pieces, meaning they really failed.
#37
newtekie1
Semi-Retired Folder
"Mescalamba said:
they're kinda crap, but that's understandable at that price
And that is a very good bottom line: cheap drives from any manufacturer are cheap drives. You get what you pay for. If you want a reliable drive, pay a little more for it.
#38
oxidized
I've been fixing laptops for a while now, and most of the time when they have a bad or dying HDD it's an HGST, so I'm not sure whether to trust this or not.
#39
NC37
"Antykain said:
I've used a number of Seagate HDs over the years; the oldest Seagate HD I still have in use atm is a ST3500418AS 500GB, which has been in service since 2009, plus a 320GB ST3320620AS which has been in service since 2006. Anywho, I have a couple of 1TB and 4TB Seagate HDs, along with the other WD and HGST drives as well... All of which now have their home in my media server, DC rigs, and/or are just sitting around waiting to be used again somewhere in the future. My main 'gaming' rig is using only SSDs.

</knockonwood> So far I've never had a Seagate failure over my years of using them. I'm not a Seagate fanboy by any means and actually like and prefer HGST drives.. But I can honestly say I've never had a Seagate drive failure. Luck of the draw some would say... I say, I'll take it.

Now watch.. now that I've said my piece on never having a Seagate fail on me, they will all take massive dumps on my chest.. figuratively speaking, of course. I mean, come on now.. hard drives can't physically take dumps, per se. Let alone taking a dump on my chest! My wife is the only one.. wait, nevermind. right!
Likely so. I've had good Seagates over the years but unfortunately... that's changing. None of the good ones are drives from the last decade; all are older Seagates. Just had a 3TB suddenly die on me that was maybe only a few years old. Before that I had some issues with other computers I did builds on, all while machines I built with WD drives are still running fine. I finally just started switching to WD on my own. Of course, now I'm going to SSDs for anything up to 1TB.
#40
timta2
"oxidized said:
I've been fixing laptops for a while now, and most of the time when they have a bad or dying HDD it's an HGST, so I'm not sure whether to trust this or not.
Laptop drives are a whole different ballgame. The amount of abuse and carelessness that a lot of laptops see could kill any hard drive. In addition, if the pool of laptops you repair skews toward one particular brand, that could skew the results.
#41
silkstone
I just got a Seagate 5 TB slim drive for my media server. From the reviews, the failure rate seemed a little higher than the WD ones on sale, but then I don't know whether more Seagate units were sold or not. At the end of the day, any hard drive can go bad; I've had WD, SG, Toshiba, and Maxtor drives die before with seemingly no rhyme or reason.
#42
djisas
Let me add something from my experience...
I always bought Seagate drives. I had 2 x 320 GB models in RAID; one died (it was made in China, whereas the surviving one was made in Taiwan). I had a few 500 GB ones, all 7200.11, and one of them died (probably made in China as well). At one point I bought a new one for a rig I was fixing; at the store I was given a Chinese Seagate and I thought to myself it was going to fail. I had a hard time installing it and getting it to run, and it popped in less than half an hour - there was even smoke coming from the new HDD. I returned it and replaced it with a Samsung F3 I still own...
I still have a Seagate 7200.12 (made in Taiwan); it has some bad sectors but it still works pretty well. I've had 2 WD drives, a 640GB Black and a 1TB Green; the Black is at about 30k+ hours and the Green was also over 30k, all without issues, but these were mostly idle drives...
Last year, when I needed to increase storage, and after reading the Backblaze reports, I decided on a 4TB WD Red; 4TB drives proved to be more reliable than 3TB ones and WD has always been more reliable too. Hitachi drives were harder to find, though...
#43
oxidized
"timta2 said:
Laptop drives are a whole different ballgame. The amount of abuse and carelessness, that a lot of laptops see, could kill any hard drive. In addition, if the population of the pool of repairs you work on use one particular brand or more of one brand, that could skew the results.
Fair
#44
Vayra86
"newtekie1 said:
I literally have a 12-bay 2U rackmount enclosure sitting in my home office, connected to my home server. This data is marginally useful even for me, because I'm not stupid enough to use desktop drives in my RAID arrays, but it is useless for normal consumers.

And at the end of the day, the drive model is more important than the manufacturer. Remember, BackBlaze has admitted they have completely thrown out data for WD models that had 100% failure rates. So, if you are gathering from their data that WD drives are good, you are drawing an inaccurate conclusion (thanks to inaccurate data).

I've dealt with a crap load of hard drives, and the model is what matters, not the manufacturer.
That's exactly what the chart shows, every year. The failure rates vary wildly per model even if you ignore the tiny-sample-size models, and the same models pop up every year with higher failure rates. Seagate has been leading the pack ever since, what, 2013?

Other than all of this, what would Backblaze stand to gain from reporting bad figures? It doesn't help them or any of the manufacturers/parties involved.
#45
Roph
"cdawall said:
These studies from Backblaze are always worthless.

Hey guys I have 94000 consumer grade drives running in a commercial environment.
Lol? Is the drive supposed to "know" that its owner is making money off its usage? "Consumer" and "commercial" are just buzzwords (along with "professional"), free tickets to charge more money for the same thing.

"newtekie1 said:

You have to ask yourself: if Seagate were junk, why does BackBlaze, a data storage company, use more Seagate drives than any other brand by a large margin? 74% of their drives are Seagate. That's three times as many as the next manufacturer, HGST, which only accounts for 25% of their drives. WD only accounts for a whole 1% of the drives they use, and they don't even have 1,000 WD drives in service, so it really isn't a good enough sample size to accurately judge how they perform.
Because seagate is (almost always) cheaper than the competition. Even with seagate's higher failure rate, they save money overall. Backblaze wants the cheapest hard drives.
#46
rtwjunkie
PC Gaming Enthusiast
"Roph said:
"Consumer" and "commercial" are just buzzwords (along with "professional"), free tickets to charge more money for the same thing.
I simply cannot believe that you actually believe that. If you don't know the differences then I sincerely hope no one with a large server room hires you for IT support. That's not a cutdown, it's an honest wish for business' sake.
#47
cdawall
where the hell are my stars
"Roph said:
Lol? Is the drive supposed to "know" that its owner is making money off its usage? "Consumer" and "commercial" are just buzzwords (along with "professional"), free tickets to charge more money for the same thing.
Those aren't buzz words with hard drives.

Would you mind sitting here and explaining to the crowd the difference between the different hard drives? The gap is quite massive when you compare a standard desktop drive, a NAS drive, an enterprise drive, etc. These aren't all just the same thing with a different sticker. There is a reason the MTBF isn't the same.
#48
newtekie1
Semi-Retired Folder
"Vayra86 said:
Other than all of this, what would Backblaze stand to gain from reporting bad figures? It doesn't help them or any of the manufacturers/parties involved.
Simple, free publicity.

"Roph said:
Because seagate is (almost always) cheaper than the competition. Even with seagate's higher failure rate, they save money overall. Backblaze wants the cheapest hard drives.
That doesn't make sense if you are a data storage company, which BackBlaze is. If the drives really were so much more unreliable than the rest, then the small amount of money you save by going with them would quickly be lost to potential downtime and the labor costs of replacing drives. The fact is they use Seagate because they are cheap AND the failure rate really isn't significantly worse than any other manufacturer's.

"Roph said:
Lol? Is the drive supposed to "know" that its owner is making money off its usage? "Consumer" and "commercial" are just buzzwords (along with "professional"), free tickets to charge more money for the same thing.
I literally explained a good part of the difference 10 posts before you posted this... Go do some research. :shadedshu:
#49
silkstone
"cdawall said:
Those aren't buzz words with hard drives.

Would you mind sitting here and explaining to the crowd the difference between the different hard drives? It is quite massive when you take a standard desktop drive, NAS drive, enterprise drive etc. These aren't just all the same with a different sticker. There is a reason mtbf isn't the same.
Pretty sure some of them enterprise drives are even filled with helium.
#50
cdawall
where the hell are my stars
"newtekie1 said:
That doesn't make sense if you are a data storage company, which BackBlaze is. If the drives really were so much more unreliable than the rest, then the small amount of money you save by going with them would quickly be lost to potential downtime and the labor costs of replacing drives. The fact is they use Seagate because they are cheap AND the failure rate really isn't significantly worse than any other manufacturer's.
A couple of years back they said it was because HGST/WD couldn't keep up with supply; Seagate can.