
Anyone with true HDDs still around here?

A 4 TB drive likely uses SMR (Shingled Magnetic Recording). The tracks are packed so closely that rewriting a sector disturbs the adjacent tracks. The controller deals with this by dividing the media into "zones", similar to the erase blocks in flash media: sectors are only appended within a zone, writing starts over when a zone fills, and the controller manages the mapping. All this can make even small writes take a long time. I would avoid >2 TB HDDs except as backup media (and I still prefer 2 TB for that application).
Very much depends on model, not just capacity.

For "portable" drives (2.5" external ones). 1 TB and below is likely CMR. 2 TB and up (they come in 5 TB or now 6 TB as the highest capacity) are all SMR.

For internal drives, all drives above 8 TB should be CMR, I think. Most 1 TB and below should also be CMR. The 2 TB to 8 TB range is where it could be either, and it depends a bit on brand and model. For example, the Barracudas between 2 TB and 8 TB are all SMR, I think. For Western Digital, it's more of a mix. The Blue 4 TB has both a CMR (EZRZ) and an SMR (EZAZ) variant. At 6 TB, it's SMR only (which explains why a 6 TB Blue is sometimes barely more expensive than some 4 TB Blues: that 4 TB is probably the CMR model, and CMR is more expensive than SMR). At 8 TB, the Blue is CMR only, since the 8 TB Blue is effectively the same drive as the 8 TB Red renamed, with some feature differences, if I'm not mistaken (which explains the odd 5,640 RPM instead of 5,400 RPM at that capacity), whereas the Seagate Barracuda at 8 TB is SMR only. In other words... it depends.

Both Western Digital and Seagate have, thankfully, gotten better at disclosing this information on their websites after the mess it caused years ago, where I think a Black and even some Reds were SMR.
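To illustrate why drive-managed SMR can choke on small rewrites (per the zone description quoted above), here's a rough toy model. The zone size and costs are made up, and real firmware with its CMR cache region is far more involved; this is just the shape of the problem:

```python
# Toy model only: why a tiny in-place update is expensive on drive-managed SMR.
# Zones must be written sequentially, so overwriting a sector inside a full
# zone turns into read-modify-rewrite of the whole zone (numbers are invented).

ZONE_SIZE = 256 * 1024 * 1024   # hypothetical 256 MiB zone

def bytes_moved(write_size: int, zone_is_full: bool) -> int:
    """Very rough I/O volume the controller incurs for one host write."""
    if not zone_is_full:
        # Sequential append into an open zone: write only what was asked.
        return write_size
    # In-place update of a full zone: stage the zone, merge, rewrite it.
    return ZONE_SIZE + write_size + ZONE_SIZE

if __name__ == "__main__":
    small = 64 * 1024  # a 64 KiB update
    print(f"append:    {bytes_moved(small, zone_is_full=False):,} bytes")
    print(f"overwrite: {bytes_moved(small, zone_is_full=True):,} bytes")
    # The several-thousand-fold difference is why these drives lean on a CMR
    # cache and background cleaning; once that cache fills, writes crawl.
```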
That's interesting to know, although it's not the root cause of my issue. Read/write speeds are fine. The issue is that whenever I give the drive something to do, there's a 50/50 chance that it'll do it, or get stuck at 100% usage with 0 transfer for about 30-45 seconds before doing it. I mean, it completely freezes. But when it comes back to its senses, it does the job just fine. It's not dependent on power settings, it can happen in the middle of watching a video, too. I've run many checks on it, its SMART is all fine, there are no bad sectors, etc. Also, my 8 TB drive doesn't seem to have this problem.

Once a drive starts giving me attitude like that, it never stops and needs to be removed. I used to try isolating the affected file (because it sits on a bad sector) and using the rest of the HDD, but it has always been a harbinger of more bad sectors quickly to come. When I replace the HDD, the new one tends to work for 5+ years, and most are retired while still fully functional. But yeah, when running 2x 4-disk arrays, the first failures are going to come at the 3-4 year mark or earlier, and as the other 3 in the array are already older by then, they get relegated to backup duty and eventually retired.

I have not found reliability to be any worse recently but as I alluded to above, I now run fewer drives than before thanks to increased capacities so it's not a 1:1 comparison.
That's what I'm thinking about, too. When my 4 TB drive dies (who knows when that'll happen with the above symptoms), I'll replace not just that, but also my 8 TB one with a single, higher capacity industrial drive. I'm just wondering whether I can expect better reliability from those drives. If not, then there's no point actually. I might be better off with a SATA SSD (although that would cost a lot more per TB).

Yes, the original DeathStar! :D Though I bought late enough in the cycle to know that buying the 45GB version was safe because it used a different internal design so did not experience the higher failure rates of the 60 and 75GB versions. Neither of mine failed before being retired. The 1.5TB HDDs I had were also notorious for high failures, as shown by BackBlaze in one of their earliest press releases.
I never had the pleasure of working with those, but I've heard a lot about them. :D My second guess would have been Maxtor. A great brand before Seagate completely swallowed it.
 
I use HDDs for backup. And on one server. SSDs are faster and more durable - BUT their kryptonite is data retention. Leave an SSD unpowered for years, and it will lose tons of data. They rely on being powered and the controller continuously scrubbing data.

HDDs have decades of data retention. Weak point: they're more fragile, especially when spinning. You do NOT want to bump one while it is writing. And they're much slower than SSDs.
Ironically I have had a few HDDs fail whilst sitting on a shelf: working properly when retired, put somewhere for a few years, then when I go to use one for some task it has issues. Although I do have two HDDs which survived this treatment: a WD Raptor drive, and the very first WD Black model released, which was built similarly to the Raptors of the time. I guess I was just unlucky, or stored them in the wrong conditions. Nowadays most of my retired drives get used for non-drive purposes; they're heavy and solid, so they make good support structures.
 
Two 3 TB WD Red WD30EFRX (CMR) in my old Synology DS214se NAS.
 
Ironically I have had a few HDDs fail whilst sitting on a shelf...
I had this happen to a drive once. They definitely don't like to sit around doing nothing.
 
To the OP's point, absolutely. In a year I pay for 12 humble bundles...or used to. It's cheaper than two AAA games, and you wind up with everything that was hot stuff about 8-12 months after it hit peak...because that's how the modern games market works.

If you've got that much stuff, a few years of pictures, legacy media that you don't want turned off, and some useful documents, you're well beyond the 2 TB limit of reasonably priced SSDs... which leaves HDDs. A backup, and a backup's backup, and you're looking at 3+ 10 TB drives to cover most things. You could do that with a string of SSDs... but I don't think my old family movies from a cassette tape need SSD speeds to load... and I can load most of my games onto the backup while my active library sits on a smaller but faster SSD. I see why people can get by on SSDs alone, but I don't see a world where HDDs die until you can cram 10 TB into a sub-$300 offering and have it in enough volume to replace the current stock of spinners.

That said, people often forget spinners are the vanguard of progress. New tech around reading magnetics, and the physical orientation of the drive data, is nerdy crap that means I can still get 5x the storage of my SSD for the same cost in a high-volume HDD. Assuming you have a plan to back up data and prevent critical failure, there's no reason to pay 5x the cost to get the same storage on SSDs.
 
That's interesting to know, although it's not the root cause of my issue. Read/write speeds are fine. The issue is that whenever I give the drive something to do, there's a 50/50 chance that it'll do it, or get stuck at 100% usage with 0 transfer for about 30-45 seconds before doing it. I mean, it completely freezes. But when it comes back to its senses, it does the job just fine. It's not dependent on power settings, it can happen in the middle of watching a video, too. I've run many checks on it, its SMART is all fine, there are no bad sectors, etc. Also, my 8 TB drive doesn't seem to have this problem.
That almost sounds like what the 500 GB Samsung HDD I had was doing, with the two exceptions that it wouldn't entirely freeze, but it would still get very slow, and it would only happen (and only sometimes) right after starting Windows, but never after that.

That was the drive where I never figured out whether it was borderline faulty or the Windows install was awry. I both replaced the drive with an SSD and reinstalled Windows, and the issue went away.
 
That almost sounds like what the 500 GB Samsung HDD I had was doing, with the two exceptions that it wouldn't entirely freeze, but it would still get very slow, and it would only happen (and only sometimes) right after starting Windows, but never after that.

That was the drive where I never figured out whether it was borderline faulty or the Windows install was awry. I both replaced the drive with an SSD and reinstalled Windows, and the issue went away.
I suspected firmware bugs with some Samsung HDDs, before Samsung went solid-state-storage-only. I had a SpinPoint SP0802N 80 GB that I was checking, from 2005, IIRC, where a SMART attribute randomly plunged to an insanely low number, then went back to normal, causing the BIOS to yell at me about HDD failure! Yes, back in the SpinPoint days!
 
That almost sounds like what the 500 GB Samsung HDD I had was doing, with the two exceptions that it wouldn't entirely freeze, but it would still get very slow, and it would only happen (and only sometimes) right after starting Windows, but never after that.

That was the drive where I never figured out whether it was borderline faulty or the Windows install was awry. I both replaced the drive with an SSD and reinstalled Windows, and the issue went away.
Yeah, I don't know if it's a somewhat faulty drive, bad firmware or something's wrong with my Windows installation which I've had across multiple system upgrades. Unfortunately, I'm too lazy to reinstall it, but I know I'm just delaying the inevitable in either case. :ohwell:
 
Still have two 6 TB WD Blacks in a backup rig that holds a subset of my Steam library -- in order to fit everything, I'd have to upgrade the two 1 TB M.2s to 2 TB and get a pair of 4 TB SATAs. Waiting for the prices to drop a little more before I pull the trigger.

I also still have a couple of retro 90s/00s builds in storage that have mechanicals, but I don't fire them up often enough these days to be bothered with it. Maybe someday, though.
 
For the Maxtor DiamondMax 8 series, the 6E030L0 (30 GB) and 6E040L0 (40 GB) have extremely high failure rates. They are not the same design as the tried-and-true ones.
 
Nah. :pimp:


Once/Twice a season maybe. 2016 Core and Nanoserver behave well enough that power outages are the concern.
It's probably something to do with the way Windows manages memory as this is before the threshold of what we have for Win10/11.
I'm quite happy with the result but I also haven't shifted this box into full time 24/7 distribution duty.
I can expect CPU spikes in the 60-70% range and all manner of HDD chatter throughout the day without Steam or Epic spun up.
My Windows server for my web domains' internal site and services has seen 30-day uptimes as well. Usually by then I do a "Patch Tuesday".
 
I still use them for everything except games and boot drives.
Backups, media, documents, etc.
 
I've got maybe 100, give or take, 1 TB drives.

They can be used for certain huge games when you don't want to kill an SSD.

I use them for backups too, like when I have no time to wait for a fresh Windows install... I just dump data to an HDD and start fresh.

The other day I cleaned up 4 drives I never got back to and sorted the stuff on them; it can be fun in a way.

I'm tempted to install Win7 on an HDD for old times' sake. Maybe a little over 100 MB/s read/write, too. A fun nightmare.
 
Yeah I’m still running a 1TB WD Blue for storing all my downloads, drivers etc.
 
My Windows server for my web domains' internal site and services has seen 30-day uptimes as well. Usually by then I do a "Patch Tuesday".
I avoid the updates entirely. There is NO reason to get them anymore. This HW config is probably set in stone and so is the software until better appears.
I'm not able to punch a hole through to my HTTP/FTP anymore and probably won't until I yeet this insufferable XB3 modem, which isn't soon enough.
No HTTP means all the IIS+SQL/.NET is purely Intranet and my UT99/WoT servers aren't worth spinning up. Everything else is keeping very specific stuff alive.

SQL 2016 and the management studio take up the most local resources but having everything managed on one low power system seems worth it.
I am all for remote management kits but the reality is that without WinPE and other very specific proctoring tools, there's no way to get things to work right.
The CPU doesn't even spin up beyond 40% until calling remote management tools or launching some very modern app that would otherwise chug under Win7.
The day I move off the Athlon 2650e and onto the FX-8370 is when I start Windows from a SATA SSD again and go back to containers and VMs, like a rocket.
Just look at this SQL insanity. This is the majority of software on the system and I know it isn't complete detection because more gets pulled up under CCleaner.

[screenshots: installed-programs list, dominated by SQL Server components]


So even the WmiObject calls suck. If nothing else, it's an excellent storage device that doesn't get randomly CrowdStrike'd by anything without my consent.
Yeah I’m still running a 1TB WD Blue for storing all my downloads, drivers etc.
I use a 320GB WD Blue for holding driver kits, 3D models, Inetpub backups, vKet and a massive Downloads ingest that needs to be sorted (usually memes). Installer tools, patches, OBS plugins and software profile backups are kind of really important not just to get things working again but to have some level of security in a metered environment. At some point it's probably going to be my new Holding area or at least a test dump when pulling stuff from dead HDDs. Thankfully I keep those test situations on a separate system to avoid problems.
 
XB3 modem
Did the LEDs on yours degrade? The one I got on August 2, 2016, had degraded LEDs in just months, IIRC! WTF! It was fine other than that, but I only had it until February 24, 2018, when I moved back to FTTH land.

thousands in "Raw Read Error Rate" and "Seek Error Rate" attributes.
That's normal to see on Seagate drives, OTOH. Because of the way Seagate encodes those raw SMART values, they can look like that on a Seagate and the drive can still be perfectly fine.
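The commonly cited (unofficial) interpretation is that Seagate packs two counters into that one 48-bit raw number: the low 32 bits are the total operation count and the high 16 bits are the actual errors. If that reading is right, a huge raw value is mostly just "lots of seeks", e.g.:

```python
# Hedged sketch of the commonly cited, unofficial decoding of Seagate's
# Raw_Read_Error_Rate / Seek_Error_Rate raw values (not confirmed by Seagate):
# bits 47..32 = error count, bits 31..0 = number of operations.

def split_seagate_raw(raw: int) -> tuple[int, int]:
    errors = raw >> 32             # assumed: upper 16 bits
    operations = raw & 0xFFFFFFFF  # assumed: lower 32 bits
    return errors, operations

# Hypothetical raw value as shown by a SMART tool:
errs, ops = split_seagate_raw(76_543_210)
print(f"errors={errs}, operations={ops:,}")  # errors=0, operations=76,543,210
```

If the normalized value next to it is healthy (and it usually is), there's nothing to worry about.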
 
Got 4 x 4 TB & 2 x 8 TB 5,400 RPM spinning-rust HDDs in my Plex server presently (OS & applications all on a 512 GB Gen3 OEM Samsung NVMe though). It will be having more, bigger HDDs added to it very soon, as all the HDDs are down to ~8% free space presently. Got 3 x 3.5" sleds still available & room for another SAS/SATA HBA PCIe adaptor, along with 3 x 2.5" SSD sleds (Corsair 780T chassis with additional HDD bays added to max out available usage). I also have a pair of 3 TB Seagate Barracuda HDDs that I will be putting in temporarily (SMR unfortunately!).
A couple of the 4 TB HDDs report Unsuccessful Sector Allocation errors in HWiNFO64, so they will need replacing sometime down the line even though the SeaTools Long Test still passes fine on them (they have over 50K hours of usage each, so they have had a good hard life!).
Will probably replace the 'failing' 4 TB drives with 20 TB+ IronWolf/IronWolf Pro HDDs when I can afford to, as well as add more of the same 20 TB+ HDDs to fill up the empty bays (& give me enough storage for another couple of years maybe).
I intend to replace all existing HDDs with the 20 TB+ drives eventually, unless similar-capacity SAS/SATA SSDs come down in price by then (dreaming here now!!!)
 
Will probably replace the 'failing' 4 TB drives with 20 TB+ IronWolf/IronWolf Pro HDDs when I can afford to, as well as add more of the same 20 TB+ HDDs to fill up the empty bays (& give me enough storage for another couple of years maybe).
That's my plan as well. I'm just wondering if those enterprise drives are any more reliable than our basic commercial ones. If they fail just the same, then I might be better off with several lower capacity ones instead.
 
That's my plan as well. I'm just wondering if those enterprise drives are any more reliable than our basic commercial ones. If they fail just the same, then I might be better off with several lower capacity ones instead.
TBH I haven't really noticed much difference between CMR consumer & CMR enterprise drives, other than the MTBF or the data recovery services provided with the drive. My data isn't so important that I would send them off to recover it if the drive failed (only BluRay rips that I have the media for on the shelf anyway), so I would rather buy the non-Pro ones if there is more than, say, a £10 difference per drive in pricing. The usage they get is mainly read anyway, unless I'm putting new media onto them.
 
TBH I haven't really noticed much difference between CMR consumer & CMR enterprise drives, other than the MTBF or the data recovery services provided with the drive. My data isn't so important that I would send them off to recover it if the drive failed (only BluRay rips that I have the media for on the shelf anyway), so I would rather buy the non-Pro ones if there is more than, say, a £10 difference per drive in pricing. The usage they get is mainly read anyway, unless I'm putting new media onto them.
Well, I have a 4 and an 8 TB drive. The 4 TB one has been acting weirdly (I made a post on it above). If (when) it fails, I was thinking about replacing both drives with a 20 TB industrial drive, like an Exos or IronWolf, or Toshiba MG. If they're all the same reliability-wise as a Barracuda or WD Green, then I might as well just get another normal 8 TB HDD instead, and set it up for RAID 1 with the one I currently have.
 
I have a WDC WD6400AACS-0 596 GB in my desktop that I use exclusively for backups of my data via rsync.
By now the HDD is really old but still works perfectly as far as I know.
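A minimal sketch of what that kind of rsync backup amounts to (wrapped in Python just for illustration; the paths are placeholders, not the real setup):

```python
# Mirror a data directory onto the backup HDD with rsync (placeholder paths).
import subprocess

SRC = "/home/me/data/"       # trailing slash: copy the contents, not the dir itself
DST = "/mnt/wd6400/backup/"  # hypothetical mount point for the old WD drive

subprocess.run(
    ["rsync", "-a", "--delete", SRC, DST],  # -a keeps perms/times; --delete mirrors removals
    check=True,
)
```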
 
Home server:
- two 14 TB enterprise drives + two 6 TB Red Plus drives in my main storage pool (ZFS, mirrored)
- one 4 TB WD Red Pro, mostly just as a place to stick Steam installs when I'm not using them
- three 2 TB Red Plus, miscellaneous backup/scratch storage, mostly because I had them and they still work

I also have four 6 TB + one 2 TB drives (Red Plus) in my "secondary" rig. These are in a mergerfs+snapraid pool, whose job is to store automated backups of the server's main pool.

And then there are external drives, which I use for periodic "offline" backups. Most of the time these are disconnected. All told, I think I have 104 TB of raw HDD storage, but once you subtract parity drives and backup storage, you're looking at about a third of that in usable space.
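For the curious, the rough arithmetic behind that "about a third": the external-drive total is inferred from the ~104 TB figure, the rest just re-adds the drives listed above, and usable space counts the mirrored main pool once plus the un-mirrored Steam/scratch drives, while the backup pools and externals add nothing.

```python
# Back-of-envelope only; the external-drive total is inferred, not stated.
server    = 2*14 + 2*6 + 1*4 + 3*2    # main pool + Steam drive + scratch = 50 TB
secondary = 4*6 + 1*2                 # backup pool in the second rig     = 26 TB
externals = 104 - server - secondary  # inferred from the ~104 TB total   = 28 TB

# Mirrored main pool counts once (14 + 6); Steam/scratch drives aren't mirrored;
# the secondary pool and externals only hold backups, so they add nothing usable.
usable = (14 + 6) + (4 + 3*2)         # = 30 TB
print(usable / 104)                   # ~0.29, i.e. roughly a third
```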

I no longer run HDDs in my main "gaming" rig, though I'm not opposed to doing so. About a year ago I had to RMA the gaming rig's HDD. In the interim I got a bizarre urge to mod the case. Now there's nowhere to put the HDD cages.

To echo others in the thread, HDDs aren't going anywhere. They're cheap; they're reliable; they don't need to be powered on to retain data integrity, and they're plenty fast enough for the most obvious bulk-storage use cases. If we're comparing to new(ish) NVMe SSDs, it's worth pointing out that cooling is easier to manage too. In my personal experience, SSDs not only fail more often; they're also prone to sudden catastrophic failure, whereas HDDs tend to give you ample warning. You can even configure SMART to run automated tests and mail you if there's a problem.
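(smartd from smartmontools handles the test scheduling and mail natively; the sketch below is just a DIY illustration of the idea, assuming smartmontools is installed, the drives are ATA devices whose health line reads "PASSED", the device names are made up, and a local SMTP server is listening.)

```python
# DIY sketch: poll SMART health and email if any drive stops reporting PASSED.
import subprocess, smtplib
from email.message import EmailMessage

DRIVES = ["/dev/sda", "/dev/sdb"]  # placeholder device names

def smart_health(dev: str) -> str:
    out = subprocess.run(["smartctl", "-H", dev], capture_output=True, text=True)
    return out.stdout + out.stderr

def main() -> None:
    reports = {d: smart_health(d) for d in DRIVES}
    bad = {d: r for d, r in reports.items() if "PASSED" not in r}
    if not bad:
        return
    msg = EmailMessage()
    msg["Subject"] = "SMART warning on home server"
    msg["From"] = "server@example.local"  # placeholder addresses
    msg["To"] = "me@example.local"
    msg.set_content("\n\n".join(f"{d}:\n{r}" for d, r in bad.items()))
    with smtplib.SMTP("localhost") as s:
        s.send_message(msg)

if __name__ == "__main__":
    main()
```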

On the con side, HDDs can be noisy. During a scrub, I can hear the enterprise drives in the next room. It isn't an unpleasant sound, but it is there.

I went out of my way to mention "Red Plus" because that's WD's most affordable NAS drive that isn't SMR. Avoid vanilla Reds. I'd also avoid the likes of Seagate Barracuda or WD Green. In my 30 years of collecting HDDs, I've only ever had two outright failures, both from Seagate's notoriously tainted batch of dirt cheap Barracudas in ~2013. I also had an old Toshiba laptop HDD that started throwing errors after about ten years. It still worked, but I threw it out. The aforementioned RMA from my gaming rig was a WD Red Pro, which never threw any SMART errors, but it did start clicking about three years in. WD replaced it without complaint.

For affordable enterprise drives, there are compelling refurbished options. EDIT: If you do buy enterprise drives (Exos, Ultrastar), though, keep in mind that their mounting scheme might not exactly match your case's mounting scheme (see here and here). Also, some of the enterprise models use the 3.3 V pin for a power-disable feature, which may not be compatible with all power supplies out of the box. You can fix this by covering the relevant pin with Kapton tape, or you can sidestep the issue by checking spec sheets before buying. (These issues can also arise with consumer drives, but they're much less common there, mostly relevant to people who shuck external drives.)

That's my plan as well. I'm just wondering if those enterprise drives are any more reliable than our basic commercial ones. If they fail just the same, then I might be better off with several lower capacity ones instead.
There's what you might call a price-reliability sweet spot. Even the most reliable HDD in the history of the planet, by itself, would offer less peace of mind than two middling drives with one set up as a backup. On the other hand, if you buy the cheapest crap HDDs, even having a backup or two probably won't help you sleep at night--sure, you can recover in the event of failure, but you don't want to go through that process on a regular basis.

Opinions will differ on how exactly to reach that sweet spot, but FWIW my recs are in the previous post (which the forum auto-folded into this post, lol)--Red Plus (or Pro, if you want a little extra warranty in return for a higher price), or refurbished enterprise drives, which might fail but again you're getting such an enormous discount on the storage space that they're a no brainer for redundant (parity + backup) schemes.

Avoid shingled drives (SMR) at all costs. There are references to double check whether a given model is SMR or CMR. This one, for example.
 
Home server:
- two 14 TB enterprise drives + two 6 TB Red Plus drives in my main storage pool (ZFS, mirrored)
- one 4 TB WD Red Pro, mostly just as a place to stick Steam installs when I'm not using them
- three 2 TB Red Plus, miscellaneous backup/scratch storage, mostly because I had them and they still work

I also have four 6 TB + one 2 TB drives (Red Plus) in my "secondary" rig. These are in a mergerfs+snapraid pool, whose job is to store automated backups of the server's main pool.

And then there are external drives, which I use for periodic "offline" backups. Most of the time these are disconnected. All told, I think I have 104 TB of raw HDD storage, but once you subtract parity drives and backup storage, you're looking at about a third of that in usable space.

I no longer run HDDs in my main "gaming" rig, though I'm not opposed to doing so. About a year ago I had to RMA the gaming rig's HDD. In the interim I got a bizarre urge to mod the case. Now there's nowhere to put the HDD cages.

To echo others in the thread, HDDs aren't going anywhere. They're cheap; they're reliable; they don't need to be powered on to retain data integrity, and they're plenty fast enough for the most obvious bulk-storage use cases. If we're comparing to new(ish) NVMe SSDs, it's worth pointing out that cooling is easier to manage too. In my personal experience, SSDs not only fail more often; they're also prone to sudden catastrophic failure, whereas HDDs tend to give you ample warning. You can even configure SMART to run automated tests and mail you if there's a problem.

On the con side, HDDs can be noisy. During a scrub, I can hear the enterprise drives in the next room. It isn't an unpleasant sound, but it is there.

I went out of my way to mention "Red Plus" because that's WD's most affordable NAS drive that isn't SMR. Avoid vanilla Reds. I'd also avoid the likes of Seagate Barracuda or WD Green. In my 30 years of collecting HDDs, I've only ever had two outright failures, both from Seagate's notoriously tainted batch of dirt cheap Barracudas in ~2013. I also had an old Toshiba laptop HDD that started throwing errors after about ten years. It still worked, but I threw it out. The aforementioned RMA from my gaming rig was a WD Red Pro, which never threw any SMART errors, but it did start clicking about three years in. WD replaced it without complaint.

For affordable enterprise drives, there are compelling refurbished options. EDIT: If you do buy enterprise drives (Exos, Ultrastar), though, keep in mind that their mounting scheme might not exactly match your case's mounting scheme (see here and here). Also, some of the enterprise models use the 3.3 V pin for a power-disable feature, which may not be compatible with all power supplies out of the box. You can fix this by covering the relevant pin with Kapton tape, or you can sidestep the issue by checking spec sheets before buying. (These issues can also arise with consumer drives, but they're much less common there, mostly relevant to people who shuck external drives.)


There's what you might call a price-reliability sweet spot. Even the most reliable HDD in the history of the planet, by itself, would offer less peace of mind than two middling drives with one set up as a backup. On the other hand, if you buy the cheapest crap HDDs, even having a backup or two probably won't help you sleep at night--sure, you can recover in the event of failure, but you don't want to go through that process on a regular basis.

Opinions will differ on how exactly to reach that sweet spot, but FWIW my recs are in the previous post (which the forum auto-folded into this post, lol)--Red Plus (or Pro, if you want a little extra warranty in return for a higher price), or refurbished enterprise drives, which might fail but again you're getting such an enormous discount on the storage space that they're a no brainer for redundant (parity + backup) schemes.

Avoid shingled drives (SMR) at all costs. There are references to double check whether a given model is SMR or CMR. This one, for example.
Thanks for all the info. :)

So basically, commercial drives are meh, enterprise drives are better, but two commercial drives in RAID 1 are even better than that. That means, running two enterprise drives in RAID 1 is the best option, right? :D
 
Thanks for all the info. :)

So basically, commercial drives are meh, enterprise drives are better, but two commercial drives in RAID 1 are even better than that. That means, running two enterprise drives in RAID 1 is the best option, right? :D
Yeah, the most important point is that two drives are better than one. I should correct myself a little bit, though--if I implied that there's a direct correlation between price and reliability, that was an error. As far as I know, there is no publicly available data quantifying the reliability of any particular product line. We do have Backblaze's famous reports, but the sample sizes aren't consistent enough to draw firm conclusions with respect to most any brand/model. I find Backblaze's data most interesting in the aggregate. In 2021, for example, they posted an HDD-life-expectancy article, which contained the following chart:

[chart: Backblaze drive survival rate by years in service]


Of Backblaze's immense and varied pool of HDDs, all working in an intense commercial environment 24/7, about 84% lived at least five years. The drives' median lifespan was projected at close to seven years. That's pretty good.
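Back-of-envelope, and assuming a constant yearly failure rate (which Backblaze's own bathtub-curve data says isn't really the case), that survival figure works out to roughly a 3-4% annualized failure rate:

```python
# Rough implied annualized failure rate if ~84% of drives survive 5 years,
# under the (simplistic) assumption of a constant yearly rate.
survival_5yr = 0.84
afr = 1 - survival_5yr ** (1 / 5)
print(f"~{afr:.1%} per year")  # ~3.4% per year
```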

To echo @OliverQueen, I have no reason to believe that consumer grade HDDs are substantially less reliable than enterprise drives under consumer-use-case conditions. I do know that SMR drives are shit (in general, not particularly with regard to reliability), and I wouldn't trust the cheapest drives, particularly those from Seagate, but in the general case whatever you buy will most likely work just fine. What you can get from spending a little more money, sometimes, is a longer warranty, but when we discuss reliability, what we're most worried about is the safety of your data, not of the disk.

This is why one should always assume the drive will fail when planning a storage scheme, even though it probably won't fail any time soon.

If you have to choose between parity and backup, I'd pick backup. Parity is about uptime; parity alone won't save you if e.g. you accidentally delete a bunch of files. With a proper backup, you can grab yesterday's, or last week's, archive of those files. You've mentioned you might switch to Linux. If you do, Vorta's a very easy GUI option for automated backups on any schedule you like. They are encrypted by default.

As long as I'm rambling on, I'll also clarify that when I referenced "refurbished" enterprise drives I actually meant the "manufacturer recertified" drives on the linked site--or elsewhere; it's not like I'm an affiliate; I just find that particular vendor trustworthy. "Seller refurbished drives" are a different animal, and IMO, a bit riskier.
 