Reliable External Hard Disk questions and recommendations

Someone who thinks that enterprise hard drives don't die,
Did I really say this? "Enterprise drives don't fail often if at all."
If at all? Yeah, they die regularly if you have a big enough installation of them. We get weekly failures and the disks are provided by the dominant server manufacturer (of course, they simply rebadge disks from Seagate/WD).
and has owned a total of 1 SSD in his life,
Why would I give an SSD another chance when enterprise drives outperform most of my clients' and my own needs (and certainly the OP's needs) and have proven to be uber reliable, which is what the OP is looking for. In fact, enterprise drives are usually the only item in an old computer that doesn't become obsolete.
You seem to be from a very niche sector of IT, because hard drives have got to be one of the areas of computing that go obsolete the fastest. Data grows fast. Drive capacities increase faster than for any other component type. Any decently experienced techie has a collection of old disks which have become obsolete just because they are no longer big enough to justify occupying a drive bay (currently at around the 1TB mark, from what I have observed).

Contrast this to the minimal performance increments made by the past 4-5 iterations of Intel's CPU architecture.
I am not embarrassed to admit that I've built a hundred computers with all kinds of components, some professional level, and others with the kind of cr@p you apparently use or tolerate. I am not aware of an enterprise drive I've installed failing ever.
Then you're probably not tracking equipment failure very effectively. If you're still putting together hardware with your own hands after 20 years, then I wouldn't be surprised.
By the way, my earliest experiences with tech were VAX/VMS (not work, just play) so I possibly predate you here ;)
You need to predate me working in IT because, whether you realize it or not, this conversation is about experience. In an adult argument, you should be providing proof that counters my assertions. All you are doing is hurling insults.
Uh, pot meet kettle? You're reaping the consequences of replying with single line "BS" posts and relying on infantile, crude language because you cannot string together a rational argument.

Your assertions are pretty much grounded in "read my mind" and a misunderstanding of publicly available data.

Frankly, again simply because this is so unbelievable, anyone who says proudly that they've only used 1 SSD in a tech career should not be giving tech advice. Technology improves all the time, it's a competent professional's job to evaluate through regular testing. What exactly do you think high performance SANs use as buffer? How many people do you think who post in this subforum alone, have used more than 1 SSD in their lives?
 
OK - to answer most of your questions, mine are the standard 3.5" variety. My current WD external drives are 2 Terabytes in size. I'm looking to move up to anywhere from 4-8 TB in size.

It's interesting to find out that the only OEM drive makers remaining are Seagate, WD, and Toshiba. That makes the narrowing down very easy.
And probably, your 4-8TB requirement will narrow the possibilities even further. :-D

I presume that you saw this WD announcement

Returning to the pathetically inadequate anecdotal evidence, I just did a survey of the various computers that we have at the moment.
  • Self-built desktop. 2 x 500GB WD 3.5" (going strong for 4 years).
  • Acer notebook. 500GB WD 2.5" (6 months old).
  • Acer notebook. 128GB Hynix SSD, 2TB Seagate 2.5" (New).
  • HP notebook. 300GB WD 2.5" (4 years travel use).
  • 2 x 1TB Toshiba USB external backup.
The SSD+HDD Acer will be used as a desktop replacement, so I'm hoping that it will last. The Hynix SSD doesn't seem to be as fast as the Samsung SSD that I have on test.
Edit: Just corrected the sizes for those HDDs.
 
Frankly, again simply because this is so unbelievable, anyone who says proudly that they've only used 1 SSD in a tech career should not be giving tech advice. <snip>
This Trump-esque rant is quite telling.
 
This information may prove useful. Usually I would just provide the link but the article is in French so I am including this passable translation along with the link at the bottom:

The first question we have to answer is of course the origin of these statistics. They come from a large French e-merchant, where we were able to have direct access to the databases. So we were able to directly extract the statistics we needed.

How does a part get declared defective at this retailer? There are two cases. Either the technician considers that the exchange of information with the customer (type of failure, cross-testing) is enough to declare the product broken, or there is doubt and the part is tested by the merchant to confirm or rule out the reported failure.

Among the untested returns, it is likely that some of the parts reported as faulty by customers are not actually faulty, despite the precautions taken by the technician. This is a marginal phenomenon inherent to the e-commerce business: in practice, there is no objective reason to say that one manufacturer or product model is more affected by it than another.

It should be added that these statistics are necessarily limited to products sold by this e-merchant and to customer returns made directly through it. That is not always how returns happen, since it is possible to return a product directly to the manufacturer, especially in the storage market; however, this represents a minority of cases during the first year, particularly since the e-merchant these statistics come from covers the return shipping costs. Some manufacturers go further by encouraging direct returns via specific labels in product boxes, as is the case for Corsair watercooling kits.

Unfortunately, there is no way to obtain more reliable statistics, and in the absence of anything better, these have the merit of existing. Who would believe, for example, return rates reported directly by the manufacturer of a product?

Return rates by manufacturer (vs. the previous period):
  • Seagate 0.72% (vs. 0.69%)
  • Toshiba 0.80% (vs. 1.15%)
  • Western Digital 1.04% (vs. 1.03%)
  • HGST 1.13% (vs. 0.60%)
A lot of movement this time: HGST, which is part of the Western Digital group, drops from first place to last. Seagate takes advantage and moves into first, although the figures are good across the board.

Here are the 5 least reliable drives; one entry had sales at the low end of the range (100-200 units) and is shown in italics:
  • Seagate Archive HDD 8TB
  • 3.48% Hitachi Travelstar 5K1000 1 TB
  • 3.42% Toshiba X300 5 TB
  • 3.37% WD Red WD60EFRX 6 TB
  • 3.06% WD Red Pro WD4001FFSX 4 TB
Here are the rankings by capacity for 3.5" drives sold in more than 100 units (italics for 100 to 200 sales):

2 TB:
  • 2.39% Toshiba DT01ACA200
  • 1.25% WD Red Pro WD2001FFSX
  • 1.10% WD Blue WD20EZRZ
  • 0.82% Seagate Barracuda 7200.14
  • 0.81% WD Red WD20EFRX
  • 0.77% Seagate Enterprise NAS HDD ST2000VN0001
  • 0.77% WD Purple WD20PURX
  • 0.72% WD Green WD20EZRX
  • 0.56% Seagate NAS HDD ST2000VN000
  • 0.45% WD Black WD2003FZEX
  • 0.43% Seagate Desktop SSHD ST2000DX001
  • 0.41% Seagate SpinPoint M9T ST2000LM003
  • 0.36% WD Re WD2000FYYZ
  • Seagate Surveillance HDD ST2000VX000
  • 0.00% WD SE WD2000F9YZ
  • 0.00% Seagate Enterprise Capacity ST2000NM0033
  • 0.00% Toshiba E300
  • 0.00% Toshiba P300
3 TB:
  • 3.04% WD Black WD3003FZEX
  • 2.89% Toshiba DT01ACA300
  • 2.29% Seagate Enterprise NAS HDD ST3000VN0001
  • 2.23% WD Red Pro WD3001FFSX
  • 2.18% WD Green WD30EZRX
  • 1.52% Seagate Barracuda 7200.14 ST3000DM001
  • 1.41% Seagate NAS HDD ST3000VN000
  • 0.96% WD Red WD30EFRX
  • 0.75% Seagate Surveillance HDD ST3000VX000
4TB:
  • 2.37% WD Purple WD40PURX
  • 2.02% WD Red WD40EFRX
  • 1.89% Seagate Desktop SSHD ST4000DX001
  • 1.53% Seagate Desktop HDD.15 ST4000DM000
  • 1.04% Seagate NAS HDD ST4000VN000
  • 1.02% WD Blue WD40EZRZ
  • 0.95% WD Green WD40EZRX
  • 0.90% WD RE WD4000FYYZ
  • 0.56% Toshiba MD04ACA40
  • 0.40% Seagate Constellation ES ST4000NM0033
  • 0.37% Toshiba X300
  • 0.32% Hitachi Deskstar 7K4000
  • 0.21% Hitachi Deskstar NAS
  • 0.00% WD Black WD4003FZEX
5/6 TB:
  • 3.42% Toshiba X300 5 TB
  • 3.37% WD Red WD60EFRX
  • 2.67% WD Green WD60EZRX
  • 1.43% WD Red WD50EFRX
  • 0.87% Seagate Enterprise NAS HDD ST6000VN0001
  • 0.74% Seagate Desktop HDD ST6000DM001
  • Seagate NAS HDD ST6000VN0021
http://www.hardware.fr/articles/954-6/disques-durs.html
 
This information may prove useful. Usually I would just provide the link but the article is in French so I am including this passable translation along with the link at the bottom:
That's a very interesting site for PC components' early failure rates. Bookmarked, thanks.
 
should not be admitting to having been in the IT industry. It's just embarrassing.
I am not embarrassed to admit that I've built a hundred computers with all kinds of components, some professional level, and others with the kind of cr@p you apparently use or tolerate. I am not aware of an enterprise drive I've installed failing ever.
oooh....you built 100 computers in your career?! Was this meant to be a serious complement to support your vast experience?

Google's well-known white paper on drive reliability came out (2007) right as you were retiring - it very pointedly showed that the higher MTBFs of "enterprise" drives were mythical.
By the way, my earliest experiences with tech were VAX/VMS (not work, just play) so I possibly predate you here ;)
You need to predate me working in IT because, whether you realize it or not, this conversation is about experience. In an adult argument, you should be providing proof that counters my assertions. All you are doing is hurling insults.
That would be you...and funny, the second blowhard on an entirely different topic in the same thread! But playing along...your experience with 1 SSD and a whopping 100 systems is not experience. I make updates to 45000 servers on a weekly to monthly basis. We buy 6-10k systems at a time.

Any claims that these enterprise drives don't fail would be amazing if true. But not remotely so. A very nice 1% failure rate means 4 systems getting hit EVERY DAY.
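Rough arithmetic behind that, as a sanity check (the drives-per-server count below is an assumption, not something stated in this thread):

# Back-of-the-envelope check on "failures every day".
# Assumptions: ~45,000 servers, ~3 drives per server (hypothetical figure),
# ~1% annualized failure rate per drive.
servers = 45_000
drives_per_server = 3
annual_failure_rate = 0.01

drives = servers * drives_per_server
failures_per_year = drives * annual_failure_rate
failures_per_day = failures_per_year / 365
print(f"{failures_per_year:.0f} failures/year, ~{failures_per_day:.1f} per day")
# -> about 1350 per year, roughly 3-4 per day with these assumptions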

And no - no enterprise task that is bottlenecked by data access would be forgoing the benefit of SSDs, not when it's a choice between 200 IOPS and 10,000. That would be the worst example of penny-wise, pound-foolish there is in the compute realm these days.
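To make the 200 vs. 10,000 IOPS contrast concrete, a quick illustrative sketch (the workload size and IOPS figures are ballpark assumptions, nothing measured):

# Time to service one million random 4K reads, issued one at a time,
# at typical HDD vs. SSD IOPS figures (illustrative numbers only).
reads = 1_000_000
hdd_iops = 200       # roughly what a 7,200 rpm disk manages on random reads
ssd_iops = 10_000    # a conservative figure for a SATA SSD

print(f"HDD: {reads / hdd_iops / 60:.0f} minutes")   # ~83 minutes
print(f"SSD: {reads / ssd_iops / 60:.1f} minutes")   # under 2 minutes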
 
Back to the OP:

It's probably wrong to hold onto the grudge against Seagate, just as some do against WD. (though I'll admit that the irrational part of me leans your way).

If space isn't a concern, the best answer is the one Malch gave - get a high-quality enclosure (with proper cooling) and then install your choice of internal drive in it. As he's documented, when you buy WD external kits, you're not guaranteed what's inside, and refurbs seem to get used there. That's one reason why they're often priced cheaper - they use whatever they have in stock. They're often cramped, and the controller/connectors can fail just as easily as the drive itself.
 
This information may prove useful. Usually I would just provide the link but the article is in French so I am including this passable translation along with the link at the bottom:

<snip>
Thanks for posting that up, it's really hard to find publicly available information on drive reliability. Lots of problems associated with this data, but it's certainly better than the "I had 3 hard drives and 2 Seagates failed" that usually crops up.

My main takeaway from that was that drive failures seem to follow no manufacturer pattern at all... and that Toshiba is a lot more common in Europe than North America and Asia.

I'll have to read it a couple more times to digest.

--
A camera is "investment" only if you make money with it
 
should not be admitting to having been in the IT industry. It's just embarrassing.
I am not embarrassed to admit that I've built a hundred computers with all kinds of components, some professional level, and others with the kind of cr@p you apparently use or tolerate. I am not aware of an enterprise drive I've installed failing ever.
oooh....you built 100 computers in your career?! Was this meant to be a serious complement to support your vast experience?
What is your experience excluding the computer in your bedroom?
Google's well-known white paper on drive reliability came out (2007) right as you were retiring - it very pointedly showed that the higher MTBFs of "enterprise" drives were mythical.
It is quite clear that you either have not read the paper yourself (rather depending on some other internet troll to do your thinking for you) or you do not understand what you've read.

Have you seen google's study of SSD drives?

By the way, my earliest experiences with tech were VAX/VMS (not work, just play) so I possibly predate you here ;)
You need to predate me working in IT because, whether you realize it or not, this conversation is about experience. In an adult argument, you should be providing proof that counters my assertions. All you are doing is hurling insults.
<snip foolishness>
 
oooh....you built 100 computers in your career?! Was this meant to be a serious complement to support your vast experience?
What is your experience excluding the computer in your bedroom?
For some reason you snipped that answer out of my post, but to repeat - I have nearly 50,000 production servers used by millions of customers daily around the world. Whereas you've been retired for 11 years, and have odd misconceptions about drives in general.

You confused a google study on SSDs with THE white paper on hard drives from 9 years earlier. But funny enough, doesn't seem like you even read the wrong one. From your zdnet citation:

"Two standout conclusions from the study. First, that MLC drives are as reliable as the more costly SLC "enteprise" drives. This mirrors hard drive experience, where consumer SATA drives have been found to be as reliable as expensive SAS and Fibre Channel drives."

BTW, these days Google and other hyperscale cloud companies would like to see hard drives that ship without most error correction protocols; they would rather solve that problem at the system level. They can better handle drive failures this way, because they know failures will happen. People no longer pretend to design for 100% reliable components. They design for component failure while maintaining the service.

At some point, this will probably be 'bad' for us consumer data rats who need double digit terabytes of storage. The cloud providers will leave the 3.5" design behind for something more practical for the data center (bigger, I expect), and we'll no longer directly benefit from the innovations. Most consumers don't need more storage than is provided by SSDs and perhaps a pair of 4 or 6 TB drives for cheap local backups.
 
oooh....you built 100 computers in your career?! Was this meant to be a serious complement to support your vast experience?
What is your experience excluding the computer in your bedroom?
For some reason you snipped that answer out of my post, but to repeat -
I thought it was just getting more insults so I didn't want to read any further.
You have?
nearly 50,000 production servers used by millions of customers daily around the world.
So your advice for the OP is to buy $99 drives? Is there a limit in your mind as to how cheap a fellow can go and still have a reliable drive?
Whereas you've been retired for 11 years, and have odd misconceptions about drives in general.
And yet, I've never had an enterprise drive fail on me or any of my clients (now former). You may scoff at my experience but you haven't really told us what your experience with failure rates has been.

You and the other dude are not drilling down far enough. There's a huge difference between a Google server farm and the OP's presumed usage: I am sure that a drive failure doesn't have the catastrophic effect on the whole operation that it would have on an individual. For a nominal upcharge, he can buy a drive that likely won't fail before his computer becomes obsolete, or he can buy the $99 HD special and take a big risk.
You confused a google study on SSDs with THE white paper on hard drives from 9 years earlier. But funny enough, doesn't seem like you even read the wrong one. From your zdnet citation:
I confused nothing. It is you who is confused. I consider SSDs consumer grade. You and the other dude are pitting enterprise drives against consumer drives.
"Two standout conclusions from the study. First, that MLC drives are as reliable as the more costly SLC "enteprise" drives. This mirrors hard drive experience, where consumer SATA drives have been found to be as reliable as expensive SAS and Fibre Channel drives."
I couldn't find these statements in the Google study I found online.
BTW, these days Google and other hyperscale cloud companies would like to see hard drives that ship without most error correction protocols; they would rather solve that problem at the system level. They can better handle drive failures this way, because they know failures will happen. People no longer pretend to design for 100% reliable components. They design for component failure while maintaining the service.

At some point, this will probably be 'bad' for us consumer data rats who need double digit terabytes of storage. The cloud providers will leave the 3.5" design behind for something more practical for the data center (bigger, I expect), and we'll no longer directly benefit from the innovations. Most consumers don't need more storage than is provided by SSDs and perhaps a pair of 4 or 6 TB drives for cheap local backups.
I suppose you read what you want, but I said early on that cloud storage is the most reliable and safest - and let me add this - for the individual. But again, the one ugly factor that will raise its head here too is cost.
 
You and the other dude are not drilling down far enough. There's a huge difference between a Google server farm and the OP's presumed usage: I am sure that a drive failure doesn't have the catastrophic effect on the whole operation that it would have on an individual.
Are you suggesting that the likelihood of failure is proportional to the size of the catastrophe it would cause? I know that Murphy would enthusiastically agree with you, but I wouldn't personally buy that argument.

If Google finds that Enterprise drives are no more likely to fail than consumer drives are, I see no reason why consumers should assume differently.
 
I thought it was just getting more insults so I didn't want to read any further.
Reading is hard!
So your advice for the OP is to buy $99 drives? Is there a limit in your mind as to how cheap a fellow can go and still have a reliable drive?
I've made no statement to this effect. To be clear, however, spending 3x as much for an 'enterprise' class drive because Rick Knepper never experienced one failing is a good way to waste a few hundred dollars. They do not fail less. The main reason to buy them is (potentially) performance related, or because you believe their URE really is 10x better and you're (foolishly) running RAID 5 for your data. They will likely still offer a 5 year warranty over the 2-3 we now commonly see for SATA, but that doesn't protect your data, just gets you a replacement (refurb, likely) drive when it fails in year 4.
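For what it's worth, this is the kind of arithmetic behind the RAID 5 / URE point. The array size and URE specs below are illustrative assumptions, not figures from this thread:

import math

# Chance of hitting at least one unrecoverable read error (URE) while
# rebuilding a degraded RAID 5 array, assuming independent bit errors.
# Illustrative setup: 4 x 4 TB drives, so a rebuild reads the 3 surviving disks.
bits_read = 3 * 4e12 * 8                 # ~9.6e13 bits

for ure_rate in (1e-14, 1e-15):          # typical consumer vs. enterprise spec-sheet figures
    expected_errors = bits_read * ure_rate
    p_rebuild_error = 1 - math.exp(-expected_errors)   # Poisson approximation
    print(f"URE {ure_rate:.0e}/bit -> ~{p_rebuild_error:.0%} chance of an error during rebuild")
# -> roughly 60% vs. 9% with these assumed numbers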

Enterprise drives run at 7,200, 10k, or 15k rpm. This translates to more noise and heat than 5,400 rpm drives, which work fine for my bulk storage needs, at a lower energy cost.
And yet, I've never had an enterprise drive fail on me or any of my clients (now former). You may scoff at my experience but you haven't really told us what your experience with failure rates has been.
Drive failures are routine events in production. Anyone with any experience doesn't need to be informed of this. Nor do they need to be told why mirroring is standard practice to compensate (though now being challenged by grid style compute models). Figure on a 1% mortality rate per year at a minimum. For your 100 machines, those are decent odds. For anyone with thousands, they are not.
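A quick way to see why scale changes the picture, assuming independent drives each at the 1% annual failure rate quoted above:

# Probability of seeing at least one drive failure in a year.
afr = 0.01
for n_drives in (1, 100, 1_000, 10_000):
    p_at_least_one = 1 - (1 - afr) ** n_drives
    print(f"{n_drives:>6} drives -> {p_at_least_one:.1%} chance of at least one failure per year")
# 1 -> 1.0%, 100 -> 63.4%, 1,000 and up -> essentially certain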
I confused nothing. It is you who is confused. I consider SSDs consumer grade. You and the other dude are pitting enterprise drives against consumer drives.
It's a good thing you don't have clients anymore. SSDs radically changed the nature of performance, and pretending they're toys will only benefit your competitors.
"Two standout conclusions from the study. First, that MLC drives are as reliable as the more costly SLC "enteprise" drives. This mirrors hard drive experience, where consumer SATA drives have been found to be as reliable as expensive SAS and Fibre Channel drives."
I couldn't find these statements in the Google study I found online.
Laughing hysterically. IT CAME FROM YOUR CITATION.
I suppose you read what you want, but I said early on cloud storage is the most reliable and safe - and let me add this - for the individual. But again, the one ugly factor that will raise its head on this idea too is the cost of it.
The classic statement holds - cost, performance, reliability. Pick 2. There are no free lunches.
 
You and the other dude are not drilling down far enough. There's a huge difference between a Google server farm and the OP's presumed usage: I am sure that a drive failure doesn't have the catastrophic effect on the whole operation that it would have on an individual.
Are you suggesting that the likelihood of failure is proportional to the size of the catastrophe it would cause?
The point was about the usage. Google stresses drives like no individual can. Yet we (consumers) have all had consumer drives fail. I, as an IT consultant, have never had an enterprise drive fail in my own computer or a customer's. Of course I am going to recommend what I know.

Also, implied in my statement, is the fact that Google uses redundancy where many consumers do not even use RAID or even a secondary backup. A drive failure for Google means little more than pulling a drive and plugging another one in whereas a failure for an individual could be catastrophic so why not take another precaution.
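To put very rough numbers on what a simple mirror buys the individual (the failure rate and replacement window here are illustrative assumptions):

# Single drive vs. mirrored pair, assuming independent failures,
# a 2% annual failure rate per drive, and a failed mirror member
# being replaced and resynced within about a week.
afr = 0.02
replace_window_years = 7 / 365

p_single = afr                                 # the only copy dies
p_first_failure = 1 - (1 - afr) ** 2           # either mirror member fails this year
p_partner_dies_too = afr * replace_window_years
p_mirror = p_first_failure * p_partner_dies_too

print(f"single drive: {p_single:.2%} per year   mirrored pair: {p_mirror:.5%} per year")
# -> ~2% vs. ~0.0015% chance of losing the data with these assumptions (and no backup at all)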

I know that Murphy would enthusiastically agree with you, but I wouldn't personally buy that argument.

If Google finds that Enterprise drives are no more likely to fail than consumer drives are, I see no reason why consumers should assume differently.
I haven't actually found this statement yet in the Google white paper. Maybe I am looking at the wrong paper.
 
They will likely still offer a 5 year warranty over the 2-3 we now commonly see for SATA, but that doesn't protect your data, just gets you a replacement (refurb, likely) drive when it fails in year 4.
Excellent point.
Enterprise drives run at 7,200, 10k, or 15k rpm. This translates to more noise and heat than 5,400 rpm drives, which work fine for my bulk storage needs, at a lower energy cost.
Excellent point.
It's a good thing you don't have clients anymore. SSDs radically changed the nature of performance, and pretending they're toys will only benefit your competitors.
And another excellent point. If you really need performance, SSDs are so much faster that they render HDDs pretty much moot.

The corporate data centers at Google, Backblaze et al. are nothing at all like home computer setups. The most useful aspect of their reliability data is that buyers can avoid those specific drive models which appear to have below-par MTBFs.
 
The point was about the usage. Google stresses drives like no individual can.
Google does not actually stress drives in an unusual way. They just have a lot more of them. Most of them sit idle, doing the occasional read. Just like ours at home. And you can bet that the ones that do get hammered are going to have an SSD component.
Also, implied in my statement, is the fact that Google uses redundancy where many consumers do not even use RAID or even a secondary backup. A drive failure for Google means little more than pulling a drive and plugging another one in whereas a failure for an individual could be catastrophic so why not take another precaution.
Because we're already establishing that spending your time and $$ trying to increase the MTBF is a fool's errand. One should instead focus on MTTR - mean time to recovery.
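One way to see why MTTR matters so much is the standard steady-state availability relationship; the MTBF and MTTR figures below are purely illustrative:

# Availability = MTBF / (MTBF + MTTR), both in hours (illustrative numbers).
def availability(mtbf, mttr):
    return mtbf / (mtbf + mttr)

print(f"{availability(100_000, 24):.6f}")    # ordinary drive, day-long restore from backup
print(f"{availability(1_000_000, 24):.6f}")  # drive with 10x the MTBF, same slow recovery
print(f"{availability(100_000, 0.1):.6f}")   # ordinary drive, ~6-minute failover to a mirror
# -> 0.999760 vs. 0.999976 vs. 0.999999: cutting recovery time buys more than chasing MTBF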

(consumers not engaging in any data protection measures frequently learn hard lessons.)
 
The corporate data centers at Google, Backblaze et al. are nothing at all like home computer setups. The most useful aspect of their reliability data is that buyers can avoid those specific drive models which appear to have below-par MTBFs.
This has been the clearest conclusion from the collections of failure data. There aren't bad manufacturers (and only 2.5 of them remain now anyway), but there definitely are bad models, and sometimes just bad batches. Backblaze's tendency to buy a lot of whatever looks well priced at the moment gives a nice diversity of data.
 
(consumers not engaging in any data protection measures frequently learn hard lessons.)
I count myself fortunate to have learned those hard lessons a long time ago on a drive that had only a few megabytes to lose. To see the loss of someone's lifetime store of data nowadays is sad indeed.

But we still see it happen. :-(
 
For some reason you snipped that answer out of my post, but to repeat - I have nearly 50,000 production servers used by millions of customers daily around the world. Whereas you've been retired for 11 years, and have odd misconceptions about drives in general.
A digression, but I'm curious, as I haven't been in a seriously large-scale enterprise environment for fifteen years now. Back then the emphasis was all SAN/NAS with Fibre Channel connections, and site redundancy relied heavily on centralised storage racks being replicated to remote sites via dark fibre. This was high performance and highly reliable, but staggeringly expensive to set up and maintain.

What's the storage approach used in your environment? It sounds seriously large. Centralised, distributed (I'm very curious as to whether Hadoop or similar approaches are common these days), or the primitive but simple disks-in-individual-servers? I'd guess not the last one, if only because paying people to run up and down aisles doing hard drive replacements can't be a good way to spend money.
 
