Another argument for film photography - longevity vs digital archiving.

Tony Northrup just posted a video about how he found out that he has lost many old personal photographs without knowing it, despite doing backups of backups.
I'm going to point out that the actual reason for his loss is not being addressed in this discussion. It wasn't addressed in his own video, either.

The reason for the loss was undiscovered corruption of data in his older stored files, and because he was not aware of the corruption, those damaged files were eventually replicated across all of his backups before he discovered the problem. Then it was too late.

That particular issue requires a special solution, not more backups, or different types of backups. It would be best addressed by having a method of periodically - and automatically - checking all those stored files to identify any corrupted ones so they can be restored from good backups while the backups are still good.

There should be a reasonable way to run such a process on a regular basis. Is there?
You just create a hash (a short representation of the data as a number - see for example https://en.wikipedia.org/wiki/MD5 ) of the image when you store it and add it to a database. Periodically you recalculate the hash of every image and check it against the database. This will tell you (with a reasonable degree of accuracy) if any files have changed. I do this (for a different reason) on my home NAS; it's pretty standard stuff.
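Something along these lines would do it - this is just a minimal sketch rather than my actual script, the folder and manifest paths are placeholders, and I've used SHA-256 instead of MD5, but the idea is identical:

```python
import hashlib
import json
from pathlib import Path

PHOTO_DIR = Path("D:/Photos")         # placeholder: your image repository
MANIFEST = Path("photo_hashes.json")  # placeholder: the hash "database"

def file_hash(path, chunk_size=1024 * 1024):
    """Hash a file in chunks so large RAWs don't need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def build_manifest():
    """Record a hash for every file under PHOTO_DIR."""
    hashes = {str(p.relative_to(PHOTO_DIR)): file_hash(p)
              for p in PHOTO_DIR.rglob("*") if p.is_file()}
    MANIFEST.write_text(json.dumps(hashes, indent=2))

def verify_manifest():
    """Recompute every hash and report files that changed or vanished."""
    stored = json.loads(MANIFEST.read_text())
    for rel, old_hash in stored.items():
        p = PHOTO_DIR / rel
        if not p.exists():
            print(f"MISSING: {rel}")
        elif file_hash(p) != old_hash:
            print(f"CHANGED: {rel}")

if __name__ == "__main__":
    # Run build_manifest() once (and again after deliberate edits),
    # then verify_manifest() on a schedule.
    verify_manifest()
```

On Windows you could run the verify step on a schedule with Task Scheduler; any CHANGED or MISSING line is the cue to restore that file from a backup you still trust.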
I know about hashes. What I don't know about are the specific (Windows) tools for regular folks to use for automating their creation and running such periodic checks. What are those?
I don’t know what Windows tools are available. I wrote my own in Python - I generate the hashes on the NAS using a Linux find command on a regular basis, and then I have a Python program which can tell me if every hash in one list is contained in the master list (I use it to know that I’ve copied everything off SD cards etc)
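The containment check itself is tiny. A rough sketch (not my actual program - it assumes both lists are plain text in md5sum's "hash  path" format, and the filenames are just examples):

```python
def load_hashes(manifest_path):
    """Parse md5sum-style lines ('<hash>  <path>') into a dict of hash -> path."""
    entries = {}
    with open(manifest_path) as f:
        for line in f:
            if line.strip():
                digest, path = line.split(maxsplit=1)
                entries[digest] = path.strip()
    return entries

master = load_hashes("master_hashes.txt")  # example name: list generated on the NAS
card = load_hashes("card_hashes.txt")      # example name: list generated from the SD card

not_archived = {h: p for h, p in card.items() if h not in master}
for digest, path in sorted(not_archived.items(), key=lambda kv: kv[1]):
    print(f"not yet archived: {path}")
print(f"{len(not_archived)} of {len(card)} card files are missing from the archive")
```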
Which sort of leads me on to another point that gets lost in these discussions, which is that you can only reasonably compare if you're doing best in class for all storage mechanisms. So you can only reasonably compare museum-quality storage in temperature-controlled facilities vs rotated backups (*) with off-site storage and data integrity checking. At which point it's down to which format you will be able to read in the future: film - yes, digital - probably, if you choose the right format.

(*) https://en.wikipedia.org/wiki/Backup_rotation_scheme
My main use would be to run these periodic checks on the primary repository for files, my NAS, and maybe some folders on the computer's internal drives. I'd run it on the external backup drives only occasionally. Those normally stay disconnected from everything until they need to be either written or read.
If it's a Linux-based NAS it's pretty simple to generate the master list. If you sort the master list and your new list into order then you could probably use something like diff ( https://en.wikipedia.org/wiki/Diff ) to tell you what had changed (I'm sure Windows versions will be available).
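Or skip diff entirely - Python's standard difflib can do the same comparison. A rough sketch, with placeholder manifest names:

```python
# Diff two hash manifests (sorted first) to see what changed between runs.
import difflib

def sorted_lines(path):
    with open(path) as f:
        return sorted(line.rstrip("\n") for line in f if line.strip())

old = sorted_lines("hashes_last_month.txt")  # placeholder: earlier manifest
new = sorted_lines("hashes_today.txt")       # placeholder: current manifest

for line in difflib.unified_diff(old, new, lineterm=""):
    # '-' lines disappeared or changed; '+' lines are new or altered.
    if line.startswith(("-", "+")) and not line.startswith(("---", "+++")):
        print(line)
```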
 
Tony Northrup just posted a video about how he found out that he has lost many old personal photographs without knowing it, despite doing backups of backups.

It seems that digital is not a perfected backup solution yet, and that film can be a safer long-term archive format if stored and protected well.

So even if some people scoff at the use of film in 2022, it might not be such a bad idea for important personal moments that you want to be sure last for a long time.

BR Strobist
Well, a lot of old negatives from commercial developers have gone really bad. Negatives from film developed at a pro level might hold up better. I have stored all my negatives in the dark, in special folders for long-term storage. A lot of those negatives seem to have lost their colour. So I seriously doubt film is a better storage solution.
 
Tony Northrup just posted a video about how he found out that he has lost many old personal photographs without knowing it, despite doing backups of backups.
I'm going to point out that the actual reason for his loss is not being addressed in this discussion. It wasn't addressed in his own video, either.

The reason for the loss was undiscovered corruption of data in his older stored files, and because he was not aware of the corruption, those damaged files were eventually replicated across all of his backups before he discovered the problem. Then it was too late.

That particular issue requires a special solution, not more backups, or different types of backups. It would be best addressed by having a method of periodically - and automatically - checking all those stored files to identify any corrupted ones so they can be restored from good backups while the backups are still good.

There should be a reasonable way to run such a process on a regular basis. Is there?
I use Acronis for good reason. When it does a backup, there's an option to do a sector-by-sector verification of the backup. That avoids undiscovered corruption of backed-up data. I wouldn't use anything that didn't do this as my most trusted backup. I've had cases where I did a backup without sector-by-sector verification and the backups had problems. The free solutions are great as secondary backup tools and methods, but it's the old saying - one gets what one pays for.

Mike
 
…
There should be a reasonable way to run such a process on a regular basis. Is there?
There's supposed to be a verify function in Lightroom combined with DNG files, but I don't use DNG so I haven't looked into it. It would, however, be one reason to convert to DNG, though it would be much better if LR extended the function to other file types. That would be a great help to people using LR. They may actually have changed the function; it's been several years since I looked it up.
 
Tony Northrup just posted a video about how he found out that he has lost many old personal photographs without knowing it, despite doing backups of backups.

It seems that digital is not a perfected backup solution yet, and that film can be a safer long-term archive format if stored and protected well.

So even if some people scoff at the use of film in 2022, it might not be such a bad idea for important personal moments that you want to be sure last for a long time.

BR Strobist
Well, a lot of old negatives from commercial developers have gone really bad. Negatives from film developed at a pro level might hold up better. I have stored all my negatives in the dark, in special folders for long-term storage. A lot of those negatives seem to have lost their colour. So I seriously doubt film is a better storage solution.
 
Tony Northrup just posted a video about how he found out that he has lost many old personal photographs without knowing it, despite doing backups of backups.
I watched it. Consider this:

SD card failures correspond to film camera light leaks.
Not really. A card failure means lost photos, but a light leak is a photo that looks different than you expected.
I'm not sure what hard drive failures correspond to; perhaps lab errors that mess up negatives.

Bit rot corresponds to print fading.

All in all, film hasn't been a more reliable medium than digital in my personal experience. However, one advantage is that images on film can also exist digitally. That's not quite as easy to do in reverse.
Slide film has proven to be far more reliable in my home, which is why my most important photos are shot on E100.
 
Tony Northrup just posted a video about how he found out that he has lost many old personal photographs without knowing it, despite doing backups of backups.
I'm going to point out that the actual reason for his loss is not being addressed in this discussion. It wasn't addressed in his own video, either.

The reason for the loss was undiscovered corruption of data in his older stored files, and because he was not aware of the corruption, those damaged files were eventually replicated across all of his backups before he discovered the problem. Then it was too late.

That particular issue requires a special solution, not more backups, or different types of backups. It would be best addressed by having a method of periodically - and automatically - checking all those stored files to identify any corrupted ones so they can be restored from good backups while the backups are still good.

There should be a reasonable way to run such a process on a regular basis. Is there?
I use Acronis for good reason. When it does a backup, there's an option to do a sector-by-sector verification of the backup. That avoids undiscovered corruption of backed-up data. I wouldn't use anything that didn't do this as my most trusted backup. I've had cases where I did a backup without sector-by-sector verification and the backups had problems. The free solutions are great as secondary backup tools and methods, but it's the old saying - one gets what one pays for.
That doesn't address the problem at all. Verifying that a backup is an exact duplicate of the source does no good if the source is already corrupted and you don't know it's corrupted. That's exactly what happened to Northrup, and has happened to lots of other people.

The way to avoid this is to generate a hash or checksum of every file at the time when it's first created and you know it's healthy (and again if you make any edits to it). That collection of checksums is then kept for future reference and periodically compared against the corresponding files. Any discrepancy in those comparisons indicates that a file has changed without your knowledge, which is a warning that corruption has occurred. At that point you can go to your unaffected backups to restore a healthy version of the file.

I know what needs to be done. The stumbling block in my case is finding the Windows tools that will do it.

An alternative is to directly compare every source file to its corresponding backup version periodically before updating the backup, using a comparison method that can identify differences at the bit level. But that seems clunky, and would surely take much longer.
 
Tony Northrup just posted a video about how he found out that he has lost many old personal photographs without knowing it, despite doing backups of backups.
I watched it. Consider this:

SD card failures correspond to film camera light leaks.
Not really. A card failure means lost photos, but a light leak is a photo that looks different than you expected.
A card failure does not always mean a photo is lost. It can also produce a photo that looks different than you expected.

Likewise, a light leak can produce a photo that is completely useless, so it might as well not exist at all.

Anyway, my little analogies are not intended to represent textbook equivalencies between recording mediums. The point is that film is not infallible, and never has been.
I'm not sure what hard drive failures correspond to; perhaps lab errors that mess up negatives.

Bit rot corresponds to print fading.

All in all, film hasn't been a more reliable medium than digital in my personal experience. However, one advantage is that images on film can also exist digitally. That's not quite as easy to do in reverse.
Slide film has proven to be far more reliable in my home, which is why my most important photos are shot on E100.
Sure, that will be true for some people and untrue for others. It can go either way.
 
Tony Northrup just posted a video about how he found out that he has lost many old personal photographs without knowing it, despite doing backups of backups.
I'm going to point out that the actual reason for his loss is not being addressed in this discussion. It wasn't addressed in his own video, either.

The reason for the loss was undiscovered corruption of data in his older stored files, and because he was not aware of the corruption, those damaged files were eventually replicated across all of his backups before he discovered the problem. Then it was too late.

That particular issue requires a special solution, not more backups, or different types of backups. It would be best addressed by having a method of periodically - and automatically - checking all those stored files to identify any corrupted ones so they can be restored from good backups while the backups are still good.

There should be a reasonable way to run such a process on a regular basis. Is there?
I use Acronis for good reason. When it does a backup, there's an option to do a sector-by-sector verification of the backup. That avoids undiscovered corruption of backed-up data. I wouldn't use anything that didn't do this as my most trusted backup. I've had cases where I did a backup without sector-by-sector verification and the backups had problems. The free solutions are great as secondary backup tools and methods, but it's the old saying - one gets what one pays for.
That doesn't address the problem at all. Verifying that a backup is an exact duplicate of the source does no good if the source is already corrupted and you don't know it's corrupted. That's exactly what happened to Northrup, and has happened to lots of other people.

The way to avoid this is to generate a hash or checksum of every file at the time when it's first created and you know it's healthy (and again if you make any edits to it). That collection of checksums is then kept for future reference and periodically compared against the corresponding files. Any discrepancy in those comparisons indicates that a file has changed without your knowledge, which is a warning that corruption has occurred. At that point you can go to your unaffected backups to restore a healthy version of the file.

I know what needs to be done. The stumbling block in my case is finding the Windows tools that will do it.
You can use the Microsoft program FCIV to generate the hashes ( https://support.microsoft.com/en-gb/topic/d92a713f-d793-7bd8-b0a4-4db811e29559 ). I think FCIV with the -v flag will verify a file system against a database you’ve already generated using FCIV ( see https://en.wikibooks.org/wiki/File_Checksum_Integrity_Verifier_(FCIV)_Examples ).

I generate the database on my NAS using the find and md5sum commands, and then use a set of hashes generated by FCIV on Windows to compare what's on my SD card with all the photos I have in my archive. I wrote a program to do this last bit, though.
An alternative is to directly compare every source file to its corresponding backup version periodically before updating the backup, using a comparison method that can identify differences at the bit level. But that seems clunky, and would surely take much longer.
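That direct comparison is easy enough to script too - Python's filecmp does a byte-by-byte check when you pass shallow=False. A quick sketch (the two root paths are placeholders), though as you say it will be slow because every byte gets read from both drives:

```python
# Compare every source file byte-for-byte against its backup copy.
import filecmp
from pathlib import Path

SOURCE = Path("D:/Photos")         # placeholder: primary copy
BACKUP = Path("E:/Backup/Photos")  # placeholder: backup copy

for src in SOURCE.rglob("*"):
    if not src.is_file():
        continue
    dst = BACKUP / src.relative_to(SOURCE)
    if not dst.exists():
        print(f"no backup copy: {src}")
    elif not filecmp.cmp(src, dst, shallow=False):
        # shallow=False forces a real content comparison, not just size/date.
        print(f"differs from backup: {src}")
```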
 
Tony Northrup just posted a video about how he found out that he has lost many old personal photographs without knowing it, despite doing backups of backups.
I'm going to point out that the actual reason for his loss is not being addressed in this discussion. It wasn't addressed in his own video, either.

The reason for the loss was undiscovered corruption of data in his older stored files, and because he was not aware of the corruption, those damaged files were eventually replicated across all of his backups before he discovered the problem. Then it was too late.

That particular issue requires a special solution, not more backups, or different types of backups. It would be best addressed by having a method of periodically - and automatically - checking all those stored files to identify any corrupted ones so they can be restored from good backups while the backups are still good.

There should be a reasonable way to run such a process on a regular basis. Is there?
I use Acronis for good reason. When it does a backup, there's an option to do a sector-by-sector verification of the backup. That avoids undiscovered corruption of backed-up data. I wouldn't use anything that didn't do this as my most trusted backup. I've had cases where I did a backup without sector-by-sector verification and the backups had problems. The free solutions are great as secondary backup tools and methods, but it's the old saying - one gets what one pays for.

Mike
I charge out my time to myself for software development at £1000/hour, so that makes my solution comfortably expensive :-)

I use the backup program on my NAS and occasionally connect the backup disks to a different Linux laptop to check I can load in a few random files.
 
... I know what needs to be done. The stumbling block in my case is finding the Windows tools that will do it.
You can use the Microsoft program FCIV to generate the hashes ( https://support.microsoft.com/en-gb/topic/d92a713f-d793-7bd8-b0a4-4db811e29559 ). I think FCIV with the -v flag will verify a file system against a database you’ve already generated using FCIV ( see https://en.wikibooks.org/wiki/File_Checksum_Integrity_Verifier_(FCIV)_Examples ).
Thanks, I'll have to dig into that.
 
I have recordable CDs that I burned in 1997, so I trust CD-R and DVD-R. But I don't have an HDD that old. And the trend is for recordable media to be abandoned by modern PCs. So, yes, I don't believe digital photos will survive very long if we don't... print them!
 
I have recordable CDs that I burned in 1997, so I trust CD-R and DVD-R. But I don't have an HDD that old. And the trend is for recordable media to be abandoned by modern PCs. So, yes, I don't believe digital photos will survive very long if we don't... print them!
If you read the comments regarding how to safely preserve the integrity of digital photos, you will quickly see that it's not simple or cheap, and it's beyond most people's technical skills and/or desires, so the reality is that very few people will ever have properly stored digital photos. So yes, printed books etc. will be their future in many cases.
 
My saying: "If something exists in only one place, it may not be there the next time you go looking for it."

That goes for both physical and digital objects.

Physically distant copies are essential for anything you consider important.

It's important to understand the various potential failure modes that can affect your storage, and plan for the reasonable scenarios.

For digital storage, understand that "RAID is not backup!" I've lived through two RAID failures; fortunately we had backup too, but we were offline for half a day each time. RAID is for service continuity, not disaster recovery.

You don't really have a backup unless you test restoring from it regularly. You don't really have a Disaster Recovery Plan unless it's tested (management hates that - unless they've previously lived through a disaster - because it's usually expensive to run DR tests and they keep thinking they solved the problem just by buying and installing extra hardware).

Never waste a good disaster.

For physical storage, not much beats the longevity of papyrus manuscripts buried in the dry Egyptian desert, still readable 2500 years or more later. And of course wall paintings and carvings are much older than that, but papyrus scrolls are more information-dense and are portable (i.e. easier to maintain physically distant copies).

Sterling
--
Lens Grit
 
having a method of periodically - and automatically - checking all those stored files to identify any corrupted ones so they can be restored from good backups while the backups are still good.

There should be a reasonable way to run such a process on a regular basis. Is there?
Enterprise-level Storage Area Networks have background processes to check for "bit rot". This is required for enterprise-level medical imaging archives, amongst other things. I don't know if there are any similar solutions that would be considered affordable by most individuals. You could probably build some scripts that would run CRC checks on the local & remote copies of files and alert you when you need to check for unexpected alterations; that would get you a lot of the way there.
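For instance, a rough sketch of such a script (CRC-32 via Python's standard zlib; the two paths are placeholders, and you would swap the prints for whatever alert mechanism you prefer - CRC-32 is only an error-detection code, not a cryptographic hash, but it is cheap to compute for this kind of check):

```python
# Flag files whose local and remote copies no longer match, using CRC-32.
import zlib
from pathlib import Path

LOCAL = Path("/data/photos")          # placeholder: local copy
REMOTE = Path("/mnt/offsite/photos")  # placeholder: remote copy, mounted locally

def crc32_of(path, chunk_size=1024 * 1024):
    crc = 0
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            crc = zlib.crc32(chunk, crc)
    return crc

for local_file in LOCAL.rglob("*"):
    if not local_file.is_file():
        continue
    remote_file = REMOTE / local_file.relative_to(LOCAL)
    if not remote_file.exists():
        print(f"ALERT missing remote copy: {remote_file}")
    elif crc32_of(local_file) != crc32_of(remote_file):
        # A mismatch means one side changed; check both against older backups.
        print(f"ALERT checksum mismatch: {local_file}")
```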

"Bit rot" is outside of our control, e.g., high speed cosmic rays ripping through disk platters and randomly flipping bits. (Interestingly, that happens less frequently when the disk is close to sea level because the atmosphere stops a lot of the highly charged particles. This has been a consideration in data centre design for over 20 years that I am aware of.)

Regards,
Sterling
--
Lens Grit
 
Enterprise-level Storage Area Networks have background processes to check for "bit rot". This is required for enterprise-level medical imaging archives, amongst other things. I don't know if there are any similar solutions that would be considered affordable by most individuals.
File systems like ZFS and Btrfs can do this on any Linux box.
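On ZFS that check is a scrub. A minimal sketch of kicking one off from a scheduled job and looking at the result afterwards - it assumes a pool named "tank" (Btrfs has the equivalent btrfs scrub start / btrfs scrub status commands) and needs the privileges to run zpool:

```python
# Start a ZFS scrub and print the pool status; schedule this monthly via cron.
import subprocess

POOL = "tank"  # placeholder pool name

# The scrub runs in the background; this call returns immediately.
subprocess.run(["zpool", "scrub", POOL], check=True)

# Re-run this part later (or on the next scheduled run) to see
# "scrub repaired ..." and any checksum error counts.
status = subprocess.run(["zpool", "status", POOL],
                        capture_output=True, text=True, check=True)
print(status.stdout)
```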
 
Tony Northrup just posted a video about how he found out that he has lost many old personal photographs without knowing it, despite doing backups of backups.

It seems that digital is not a perfected backup solution yet, and that film can be a safer long-term archive format if stored and protected well.

So even if some people scoff at the use of film in 2022, it might not be such a bad idea for important personal moments that you want to be sure last for a long time.

BR Strobist
In theory, sure. In practice, my digital archives are in much better shape than my negative archives...
That can only be down to one person.
 
So even if some people scoff at the use of film in 2022, it might not be such a bad idea for important personal moments that you want to be sure last for a long time.
Let's assume a situation where caretakers actually care about long-term storage and do everything right in both cases. Film can be destroyed by fire, flood, and even fungus. And sometimes film just degrades all by itself in storage. Digital can be easily replicated across multiple geographic locations and even multiple domains (hard drive, optical disc, thumb drive, tape drive). This mitigates both disaster risk and the risk that a storage medium you thought would last ends up degrading. To match that capability requires duplicating negs, with an inherent quality loss, and storing those as well - which can be done in theory but which nobody but a museum would actually pay for. Speaking of money, if you really want to ensure longevity, you periodically review all stored photographs and duplicate them at regular intervals to "fresh" media. Something someone with digital media might actually do, but no one will actually do with film.

There is no longevity comparison between the two. Digital wins this contest hands down.
Not really, because the backup regime you suggest just doesn't happen. Most people just keep their images on their phone, and guess how that plays out? I can go to many auctions and still find beautiful glass plate negatives or carefully preserved albums; hard drives? Not so much.
Now let's assume the real world situation for 99% of people: film thrown in a shoebox (if it's kept at all), digital thrown on a single hard drive, and prints in an album (if they're made at all). Neither one wins this because the hard drive will fail and the shoebox will likely get tossed in the trash at some future spring cleaning event. The prints will survive unless/until there is a fire or flood, or the prints themselves degrade.

Which brings me to the #1 real-world step you can take to ensure your photographs survive a while after you're gone, even with 'normies' handling things: make prints. 8x10 albums of pigment ink prints in proper archival sleeves. Duplicate albums distributed to different people who might actually care. Great-grandchildren might actually keep the album around, since they can thumb through it and wonder how things were and who people are - instead of just tossing it out with the garbage, which is likely to happen with film, optical discs, and hard drives that won't connect to anything they own.

You could include, in the back pages of the album, optical discs or duplicate negs, depending on your medium. Since it's part of the album it's not as likely to get tossed, and someone down the line might actually care enough to try to do something with it.
 
"Bit rot" is outside of our control, e.g., high speed cosmic rays ripping through disk platters and randomly flipping bits. (Interestingly, that happens less frequently when the disk is close to sea level because the atmosphere stops a lot of the highly charged particles. This has been a consideration in data centre design for over 20 years that I am aware of.)
I’ve always thought that this was an urban myth (for hard disks at least), going back to the early days of computers. I remember people telling me about it when I first started work, but I’ve never seen an example. I’ve seen problems with head crashes, or machines being switched off with all the data not written to disk etc.
 