Long-term storage of backup HDD: Waterproof container?

Sebastian Cohen

So, I am doing an archival backup of EVERYTHING. This will be the "Doomsday Vault Backup" and it won't get backed up further or connected. It is going on a 3.5" HDD, no case. FYI, I will still be doing my regular backup routine with my other drives as normal.

Lately I have been thinking about how I store my backup drives in general. I know they are airtight, so air moisture variations shouldn't really be an issue. Problem is, they usually end up all over the place: drawers (bottom of), cardboard boxes (usually bottom of).

So I've been thinking of sticking them into one of those plastic IKEA airtight refrigerator boxes, the type with a rubber seal, which would also make them waterproof. It is added protection AND it will be easier to find them. I can also put in a note with what/where etc. so I don't have to connect a drive if I don't remember. Sounds like I have a ton of disks, but you would be surprised how confused you can get with just a couple when you find one 4 years later.

SO, long question short: I will be creating a permanent "environment" in that box, and I am wondering if or how that can be negative for the drive when the outside fluctuates?

I am planning on throwing in one or two of those silica gel pouches as well. Should that negate any negative variations?

This will prob just lie there for 3-4 years.

I am aware that the lubrication in the bearings might dry out, but this has not been an issue so far.
 
Gently, maybe the problem is not how robust M disks are but whether there will be drives to read them in 50 years. But I'm sure that the folks who use M disks for backup will be on top of the situation and transfer the data to the then current media.

Happy Holidays.
Part of the reason for the endless back-and-forth is that some of us are using Archive and Backup interchangeably. Archive is a collection of documents that is fixed and held that way for a long time. It may be the only version of the data. Backup is a copy of an original that is updated frequently as the source data is updated. So the technical requirements are different.
Yes, I should have said "But I'm sure that the folks who use M disks to archive data will be on top of the situation and transfer the data to the then current media."

Thanks.
 
the 5" optical drive exists in nearly every household in the western world, through 4 generations (cd, dvd, blu, uhd). It's not going to disappear. Those in the archive business will both 1) maintain hardware readers and 2) copy to newer higher density choices as they prove out.

I'm less convinced that the media is guaranteed to endure. I can't even confirm that the M-Disc Blu-ray got the same estimated endurance as the M-Disc DVD did. And it is of course an estimate. There are no 100-year-old examples, and the company won't have to make good on any warranty claims.

I do know it's a lot easier to transfer a large quantity of cloud data than it is to do so with hundreds of M discs. And as I've harped on, much easier to verify at any moment. So more likely to be done.
 
the 5" optical drive exists in nearly every household in the western world, through 4 generations (cd, dvd, blu, uhd). It's not going to disappear. Those in the archive business will both 1) maintain hardware readers and 2) copy to newer higher density choices as they prove out.
What is UHD? I've heard of dual-layer Blu-ray and double-sided dual-layer, but not UHD.
I'm less convinced that the media is guaranteed to endure. I can't even confirm that the M-Disc Blu-ray got the same estimated endurance as the M-Disc DVD did. And it is of course an estimate. There are no 100-year-old examples, and the company won't have to make good on any warranty claims.
Yes, there is certainly a paucity of testing information. Here is one at-home test that shows M-Disc outlasting regular Blu-ray media after being buried in the garden, etc.

http://www.microscopy-uk.org.uk/mag...-uk.org.uk/mag/artsep16/mol-mdisc-review.html
I do know it's a lot easier to transfer a large quantity of cloud data than it is to do so with hundreds of M discs. And as I've harped on, much easier to verify at any moment. So more likely to be done.
Google Drive and MS OneDrive really confuse me - terrible UI. Whereas I like cPanel. Perhaps the "real" cloud backup solutions are easier to use than the aforementioned.
 
What is UHD? I've heard of dual-layer Blu-ray and double-sided dual-layer, but not UHD.
UHD = 4K Blu-ray. The format does 50-100 GB, on two or three layers, with shorter pits than the Blu-ray standard.

It has far lower adoption, but still adds to the gigantic consumer base of 5" discs and readers.
Google Drive and MS OneDrive really confuse me - terrible UI. Whereas I like cPanel. Perhaps the "real" cloud backup solutions are easier to use than the aforementioned.
QNAP has merged it all into a single client. I imagine it's similar for Synology. The Amazon clients are often clunky if you throw a big job at them. But you just have to muddle through until you can get a working scheduled job.
 
A hard drive is most susceptible to magnetic fields, which is why the safest place is a box in a bank vault, where all the metal protects the drive. At home it would be better to double-bag the drive and put it in a metal box in your attic, and have a second drive in another town.
 
Short of actually clamping a magnet to the case of the drive (or putting the drive into an MRI scanner), the strongest magnetic field that a drive will see is the one from the magnets that are already inside the drive - the ones that drive the spindle and especially the access arm.

And even clamping a strong magnet to the case of the drive doesn't destroy the data. It has to be really, really strong.

For those of us not in the habit of storing magnets next to our hard drives, no need to worry about metal boxes.
 
I don't use SMART or chkdsk - none of those actually check all the data on the drive. My backups generate MD5 checksums which I then verify using a checksumming utility. I haven't had any failures at all on any of my backup drives.
Thank you. Could you please elaborate (for a dummy) on how to go about using MD5 checksums? FWIW, I am on Windows 10.

I back up my data (incrementally) on multiple drives regularly, and because of the incremental approach, some of the data is 'stale' - maybe more subject to bit rot, or maybe more stable because of fewer rewrites and potential miswrites (?).

I also use both older spin drives (because I have them) and newer SSDs (because of their speed and convenience). I have read (I think ... somewhere) that SSDs are more prone to data loss over extended time when unpowered. Reality check, please? For context ... I am not concerned about 100 years ... I am 75 years old and have no need for my data to outlive me.

Thanks again for your helpful contributions; I always respect your comments.
 
Thank you. Could you please elaborate (for a dummy) on how to go about using MD5 checksums? FWIW, I am on Windows 10.
You can use a utility such as this one available from Microsoft. It's a command-line utility; I'm sure if you search around you can find others with friendlier user interfaces.

The basic idea is to run the utility to read all the data files on your drive and generate checksums for them. These are then stored in a checksum file.

Then later, whenever you want to make sure that the files haven't changed, you run the utility again to re-read the data files, generate their checksums, and compare them to the checksum file to make sure they haven't changed.
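
If it helps to see it concretely, here is a minimal Python sketch of the generate step (the drive letter is hypothetical, and this is just the idea, not any particular utility):

    import hashlib, os

    def md5_of(path, chunk=1 << 20):
        # Hash in 1 MiB chunks so huge files don't have to fit in RAM
        h = hashlib.md5()
        with open(path, "rb") as f:
            for block in iter(lambda: f.read(chunk), b""):
                h.update(block)
        return h.hexdigest()

    def write_checksums(root, out="checksums.md5"):
        # One "hash  relative/path" line per file, md5sum-style
        with open(os.path.join(root, out), "w", encoding="utf-8") as sums:
            for folder, _dirs, files in os.walk(root):
                for name in files:
                    if name == out:
                        continue  # don't checksum the checksum file itself
                    full = os.path.join(folder, name)
                    sums.write(f"{md5_of(full)}  {os.path.relpath(full, root)}\n")

    write_checksums(r"E:\DoomsdayVault")  # hypothetical backup drive root

The output format (hash, two spaces, relative path) is the same one md5sum uses, so other tools can verify the file as well.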
 
Is md5sum provided with Windows 10? It might be from Cygwin. I use it to verify software downloads, which are often listed with an MD5 checksum.

Many backup and sync programs automatically verify checksums. FreeFileSync does, for instance, using a CRC hash, though not by default (set VerifyCopiedFilesEnabled = true).

Macrium does as well, I believe by default.
 
Many backup and sync programs automatically verify checksums.

Macrium does as well, I believe by default.
You have to be careful, because sometimes what seems like a verify actually isn't.

I run my image backups using Macrium and I always use the verify option - but I have 64 GB of RAM, and with the backup image files usually running in the 50+ GB range, I've found that Macrium doesn't bypass the RAM cache when it re-reads the backup to verify it.

My backup typically takes about 12 or 13 minutes to run, and the subsequent verify should take the same ballpark time, but it often completes in just a couple of minutes without any disk activity going on. Macrium happily reports that the file has been successfully verified when in fact it has not actually re-read the data from the disk media to ensure it was properly recorded. They are apparently not using the Windows "unbuffered I/O" APIs, which allow a program to bypass caching.

When that happens I have to dismount my backup drive and remount it again (to force the cache to be invalidated) and then use Macrium's "restore" option to find the backup file and run a verification on it manually.

I've actually taken to running checksums against large files while doing an image backup so that the disk cache is flushed to prevent this from happening.
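
For the curious, here is roughly what using those unbuffered I/O APIs looks like - a Windows-only Python sketch (not Macrium's code; it assumes 4 KiB sectors and a made-up image path) that hashes a file while bypassing the Windows file cache:

    import ctypes, hashlib
    from ctypes import wintypes

    GENERIC_READ           = 0x80000000
    FILE_SHARE_READ        = 0x00000001
    OPEN_EXISTING          = 3
    FILE_FLAG_NO_BUFFERING = 0x20000000
    SECTOR = 4096     # assumed; query the volume for the real sector size
    CHUNK  = 1 << 20  # unbuffered reads must be a multiple of the sector size

    k32 = ctypes.WinDLL("kernel32", use_last_error=True)
    k32.CreateFileW.restype  = wintypes.HANDLE
    k32.CreateFileW.argtypes = [wintypes.LPCWSTR, wintypes.DWORD, wintypes.DWORD,
                                wintypes.LPVOID, wintypes.DWORD, wintypes.DWORD,
                                wintypes.HANDLE]
    k32.ReadFile.argtypes = [wintypes.HANDLE, wintypes.LPVOID, wintypes.DWORD,
                             ctypes.POINTER(wintypes.DWORD), wintypes.LPVOID]
    k32.CloseHandle.argtypes = [wintypes.HANDLE]

    def md5_uncached(path):
        # Open with FILE_FLAG_NO_BUFFERING so reads bypass the file cache
        h = k32.CreateFileW(path, GENERIC_READ, FILE_SHARE_READ, None,
                            OPEN_EXISTING, FILE_FLAG_NO_BUFFERING, None)
        if h == wintypes.HANDLE(-1).value:
            raise ctypes.WinError(ctypes.get_last_error())
        # Unbuffered reads need a sector-aligned buffer: over-allocate, align by hand
        raw  = ctypes.create_string_buffer(CHUNK + SECTOR)
        addr = (ctypes.addressof(raw) + SECTOR - 1) & ~(SECTOR - 1)
        buf  = (ctypes.c_char * CHUNK).from_address(addr)
        got, md5 = wintypes.DWORD(), hashlib.md5()
        try:
            while True:
                if not k32.ReadFile(h, addr, CHUNK, ctypes.byref(got), None):
                    raise ctypes.WinError(ctypes.get_last_error())
                md5.update(buf[:got.value])
                if got.value < CHUNK:  # short read means end of file
                    break
        finally:
            k32.CloseHandle(h)
        return md5.hexdigest()

    print(md5_uncached(r"D:\Backups\image.mrimg"))  # hypothetical image file

With FILE_FLAG_NO_BUFFERING the data genuinely comes off the media rather than out of RAM, which is the whole point of a verify pass.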
Is md5sum provided with Windows 10? It might be from Cygwin. I use it to verify software downloads, which are often listed with an MD5 checksum.
Not sure. I use a checksumming utility I wrote myself because I couldn't find the features I wanted (handling alternate data streams, including file metadata in the checksum, etc.) in anything else.
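
To illustrate the alternate-data-stream part (a sketch only, not my actual utility): NTFS lets you open a named stream with the file:stream syntax, so a checksum can fold in a known stream plus some metadata. Fully enumerating a file's streams would need the FindFirstStreamW API; the stream name below is just an example.

    import hashlib, os

    def file_digest(path, streams=(":Zone.Identifier",)):
        # MD5 over the main data, selected named streams, and basic metadata
        md5 = hashlib.md5()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                md5.update(chunk)
        st = os.stat(path)
        md5.update(f"{st.st_size}|{st.st_mtime_ns}".encode())  # fold metadata in
        for suffix in streams:
            try:
                # NTFS exposes alternate data streams as "name:stream"
                with open(path + suffix, "rb") as s:
                    for chunk in iter(lambda: s.read(1 << 20), b""):
                        md5.update(chunk)
            except OSError:
                pass  # stream not present on this file
        return md5.hexdigest()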
 
I don't use SMART or chkdsk - none of those actually check all the data on the drive. My backups generate MD5 checksums which I then verify using a checksumming utility. I haven't had any failures at all on any of my backup drives.
The Windows utility chkdsk verifies all the data (as well as the unused sectors). Add a /r to correct errors.

An extended SMART self-test also verifies every sector.

They both take several hours to run on a big disk.
 
The Windows utility chkdsk verifies all the data (as well as the unused sectors). Add a /r to correct errors.
Well, it confirms that blocks can be successfully read from the media without it reporting an error. But if there's one of those apocryphal "bit rot" errors where the drive doesn't detect a problem and yet the data it returns is not correct then chkdsk isn't going to know.

I'm skeptical of how likely these "bit rot" errors actually are, but the value of checksums is that it's an end-to-end confirmation that the data matches its previous state.
 
I don't understand what you are talking about. An "error" means the data is not correct. The /r extension allows the routine to correct the error. You don't need to go looking for 3rd-party programs when Windows gives you a system utility that performs the same function. Please explain the difference.
 
Thank you.
 
chkdsk doesn't care one bit about the actual data, just that it can read the file. That's the difference.

If malware alters your files, chkdsk isn't going to say a word. Same if LR/PS makes a bad save. Chkdsk only has potential to help if the machine crashes or loses power right as a file is being written, and its "help" may just be to fake-fix it rather than tell you it's toast and you need the backup.
 
I don't understand what you are talking about. An "error" means the data is not correct. The /r extension allows the routine to correct the error.
When you use "/r" with chkdsk the only kind of "error" it will recognize in file data (beyond the normal file system consistency checks) is if the disk reports that it's unable to read a sector. If the disk reports that it read the sector successfully, chkdsk figures all is well. Even if the data in the sector isn't what it's supposed to be. Chkdsk doesn't actually look at the data and make some sort of judgement call as to whether it's correct or not.

And all that "fixing" does for a bad data sector is to move that sector to the file system's bad block list so that it doesn't get used for new data. That's redundant, since all modern hard drives do the same thing internally at the media level anyway.

A checksumming utility verifies that none of the data has changed since the checksum was initially computed, no matter what is going on at the media, file system, communication path, or RAM level. It can't fix problems (nothing can if the data really is unreadable) but that's why you have multiple backup cycles. And why you verify the checksums regularly so as to discover any problems with your backup drive before you actually have to rely on it.
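
As a sketch of what that regular verify pass can look like in Python (hypothetical drive root, and assuming a "hash  relative-path" checksum file like md5sum produces):

    import hashlib, os

    def md5_of(path, chunk=1 << 20):
        h = hashlib.md5()
        with open(path, "rb") as f:
            for block in iter(lambda: f.read(chunk), b""):
                h.update(block)
        return h.hexdigest()

    def verify_checksums(root, sums="checksums.md5"):
        # Re-read every file from the backup drive and compare to stored hashes
        problems = 0
        with open(os.path.join(root, sums), encoding="utf-8") as f:
            for line in f:
                want, rel = line.rstrip("\n").split("  ", 1)
                full = os.path.join(root, rel)
                if not os.path.exists(full):
                    print("MISSING ", rel); problems += 1
                elif md5_of(full) != want:
                    print("MISMATCH", rel); problems += 1
        print(f"{problems} problem(s)" if problems else "all checksums verified OK")

    verify_checksums(r"E:\DoomsdayVault")  # hypothetical backup drive root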
 
chkdsk doesn't care one bit about the actual data, just that it can read the file. That's the difference.
That is incorrect. If it was only trying to read the file, it would not know that there were errors or correct the errors.
 
Chkdsk checks for the consistency of file system metadata, but it doesn't care about what's inside data files, apart from the fact that if you use "/r" then it makes sure the disk reports that it can read them.
 
When you use "/r" with chkdsk the only kind of "error" it will recognize in file data (beyond the normal file system consistency checks) is if the disk reports that it's unable to read a sector.
No. There is either a "hard error" (unreadable or unwritable bit) or a "soft error" (the bit is incorrect).
If the disk reports that it read the sector successfully, chkdsk figures all is well. Even if the data in the sector isn't what it's supposed to be. Chkdsk doesn't actually look at the data and make some sort of judgement call as to whether it's correct or not.
No. Chkdsk rewrites soft errors.
And all that "fixing" does for a bad data sector is to move that sector to the file system's bad block list so that it doesn't get used for new data.
Yes, for hard errors (like any other disk diagnostic program).
A checksumming utility verifies that none of the data has changed since the checksum was initially computed, no matter what is going on at the media, file system, communication path, or RAM level.
I assume by "media" you mean the HDD itself. Communication path, RAM and file system have nothing to do with it. These checks are done at the sector level.
It can't fix problems (nothing can if the data really is unreadable)
Not true. There is a big difference between "soft errors" (which can be rewritten) and "hard errors" (which result in the sector being taken off-line).
but that's why you have multiple backup cycles. And why you verify the checksums regularly so as to discover any problems with your backup drive before you actually have to rely on it.
I get the impression that you are making many assumptions. Please read about how the chkdsk utility works, as many 3rd-party programs (maybe even the ones you use) are based on it. You seem to assume that your 3rd-party "checksum" program does something that the chkdsk utility does not, but you still can't say exactly what that is.

I am not against 3rd-party software, but when it just puts a wrapper on an existing system utility that is accessible (and easy to use), I prefer to use the system utility.
 
Chkdsk checks for the consistency of file system metadata, but it doesn't care about what's inside data files, apart from the fact that if you use "/r" then it makes sure the disk reports that it can read them.
/r means repair, not report.

OK, I give up, the chkdsk documentation clearly states what it does and which types of errors it corrects (not just "reports"). You seem to want to argue for argument's sake rather than reading the documentation on this useful utility.
 
