Bit rot examples

What's important for this talk: a NAND cell, including the electric charge in it, is the medium.
I disagree. I am not going to explain that again.
That's basically the point of this argument. The electrons that form the charge are the medium (or part of it) by definition. They are directly used to encode our digital data.
You know what, I think I know what you mean and can see how you could define the 'charge', or whatever it is, as the medium. It just took me a night's sleep.

After all, it is supposed to represent a zero or a one. I guess I got confused or mixed up the terminology. I am no scientist or anything; I learn by doing, and then you tend to create your own model to explain the things you see and discover by experiment.

I repair photos; people send me digital photos that somehow got corrupted. So for example, the left is how I receive it and the right is how I return it.

[attached image 335e863cb7804c3c9976732e8eea61b6.jpg: the corrupted photo (left) and the repaired result (right)]

Sometimes, however, it pays off to work with the 'corrupt' drive itself. Suppose the whole flash drive is filled with photos like this one - that's too much work to repair. So then I ask for the flash drive itself.

I use tools (hardware and software) from http://www.flash-extractor.com/ to read the NAND. These drives are in themselves functional; I can read from them without error using more conventional methods. And here's where part of my mislabeling may come from: I call such drives or media intact - if you wanted, you could still use them and they would work. And I tend to 'label' them failed if I get plain read errors while accessing them. Such failed drives have lost their ability to store data altogether.

Now, in the situation where I dump the NAND directly, I physically remove the NAND chips. If I get chips that are hard to read, I first apply ECC error correction - or better said, I can make the tool do that. If I still get many bit errors I can play with thresholds, and this is where I feel the analog part comes in. You can basically instruct the chip to lower its read threshold, which is simply a value, so that a lower charge gets accepted as a 1 rather than a 0. You do not do this per bit. Then, by reading a page again and again verifying against the stored ECC value, you can see if it results in fewer bit errors.
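Roughly, that retry loop looks like this - a minimal sketch in Python, where read_page_with_threshold() and ecc_bit_errors() are hypothetical stand-ins for whatever the NAND reader tool actually exposes; only the shape of the search matters here:

```python
# Minimal sketch of the threshold-retry idea described above.
# read_page_with_threshold() and ecc_bit_errors() are made-up stand-ins
# for the real tool's functions.

def best_read(page_no, thresholds, read_page_with_threshold, ecc_bit_errors):
    best = None
    for vt in thresholds:                        # e.g. a few steps around the default
        raw = read_page_with_threshold(page_no, vt)
        errors = ecc_bit_errors(raw)             # bit errors reported by the ECC check
        if best is None or errors < best[0]:
            best = (errors, vt, raw)             # keep the best threshold so far
        if errors == 0:                          # page decodes cleanly, stop early
            break
    return best                                  # (remaining errors, threshold, data)
```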

So a very analog process (?): no actual bits on the drive or in the cells, and the cell's charge in itself is not a zero or one - it becomes one depending on what we do with it. Still, I am trying to get a binary value from it. Indeed, it's not the charge itself that is of interest; getting the right binary state is. But in the end we store digital information, and the charge is the medium we use to do it.

So while my bucket story in itself isn't a very bad analogy, the bottom line is that you could label the water in the bucket the medium, and so also the charge. And even though it is in itself imperfect and analog, it's a medium - or used as such, at least - that stores binary data. What's supposed to come out of it is a zero or a one.

Now with regard to 'bit rot': as the medium is imperfect and analog, it can 'decay', which makes it hard to get the original binary data from it. All NAND cells leak charge; all you need is time. Also, the charge can be disturbed, causing it to change slightly. If we regard the charge as the medium, we could indeed conclude that the moment its charge results in reading a 0 rather than the original 1, the medium has failed. The decay of the medium goes slowly. At some point it reaches a level where one read may decode to a 1 and the next to a 0. Eventually the bit - the digital info - will flip.
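As a toy illustration of that slow drift (made-up numbers, not a real retention model): a programmed cell leaks charge until it hovers around the read threshold, where read noise makes the decoded bit unstable before it finally settles on the wrong value:

```python
import random

charge, threshold = 1.0, 0.5           # arbitrary units; 1.0 = freshly programmed
leak_per_year, read_noise = 0.04, 0.03
random.seed(1)

for year in range(1, 21):
    charge -= leak_per_year             # slow, steady leakage
    # five reads of the same cell; near the threshold they start to disagree
    reads = [int(charge + random.gauss(0, read_noise) > threshold) for _ in range(5)]
    print(f"year {year:2d}: charge={charge:.2f} reads={reads}")
```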

Yep, I think I see your point now.
Thanks! - and it was actually about a subtle difference in the terminology.

However, I wonder if the bit rot in question (as a charge leak) really happens that often. I did a quick search and found some examples (anecdotal evidence, but...)


Most of them seem to be related to other types of data corruption - there are more chances of getting files corrupted due to software bugs, errors during copying, broken controllers, etc.

Faulty RAM in a PC may cause lots of nasty issues. I had an issue with a faulty SD card reader that was somehow damaging SD cards (I wasn't even writing anything, just reading) - I don't know what it was exactly, but I was able to reformat them. A good, reliable power supply is very important. Too low or too high a voltage in any component may have very bad consequences.

In a flash/SSD, IMHO, corruption of whole arrays of cells is more probable than a single bit flip.

That is, slow deterioration is definitely real, but 'instant' corruption due to misc. h/w and s/w issues is much more probable. The OP said in another thread that he loses 1% of photos in the backups https://www.dpreview.com/forums/thread/4574584?page=2#forum-post-65139066 - I think it's too much for 'bit rot', most likely it's related to faulty or poor quality h/w.
I use WD Black, almost exclusively. It's true, for temporary stuff I use lower-end drives as my overflow; the core photos are backed up on WD Black - which I have found to be the best thus far.

I am aware that I probably do not change my hard drives fast enough (once every 5 years), nor do I go through the dozens of them on a cycle, powering them up and removing them from storage, etc. I used to, but as my archive of photos gets bigger I spend more and more time maintaining rather than producing. I am hoping SD cards, which I am experimenting with, have a better cycle.

And the transfer process itself is probably a point of failure. It's many man-hours already every 5 or so years. I have to admit I get sloppy.

As well, some of it is bad equipment: sometimes a main drive just dies, and the secondary drive, which is often cheaper, is all I have for this or that photo. I got burned by a controller chip failure 10 or so years back. Anyway, a good article on the issue:
https://www.howtogeek.com/660727/bit-rot-how-hard-drives-and-ssds-die-over-time/

 
Have to say such a large number of corrupted images sounds unexpected. I've been lately going through all my digital photos, the first of which are from 2002 and literally none of the images I've reprocessed have had any issues with corruption. I upgrade my primary drives when they get too small for my photos and other content, so maybe a couple of times in a decade. The last time an in-use hard drive has failed on me was somewhere in 2003 or 2004.

Still, I do take backups to two external drives and copy particularly important photos to online storage too; better to be safe than sorry.
 
Simply finding a bad image doesn't prove bit rot, which is one of those vague and meaningless terms. What do you call an ancient scroll that is hard to read? Alphabet rot?

The card has failed, or become scrambled, or was never good to begin with. If it were some type of "bit rot," all your images from that era would show signs of degradation.
That statement suggests a lack of understanding of stochastics and compression.

Toronto Photography was using the term "bitrot" correctly in the OP - see Wikipedia.
Having been in this discussion for over a day now, I wonder about the meaningfulness of the term bit rot. For example, explain the JPEG example in the Wikipedia article.
If it's about this article https://en.wikipedia.org/wiki/Data_degradation it seems to be flawed. First question: there are 326k bits in the image and they flipped one of them, but which one? If it was done manually, a 'proper' bit could have been flipped to make the effect more dramatic.
If that file is on my hard drive, you'd expect either that this never happens, or that you get some kind of Windows error message about the file being unreadable, a CRC error and whatnot. Don't get me wrong, I see many files like that every week because I offer a repair service.

However, back to the hard drive. See if you can follow, and tell me where I am wrong:

Eventually the file is stored in sectors, each of those 'protected' by an ECC code. The ECC itself is protected by a CRC.

Now a bit flips; it is detected AND corrected by ECC. This is why we have ECC to begin with.

Now assume ECC can correct n bits, but n + 2 bits flipped. This should cause the controller to respond with an error message (and eventually Windows too), not with corrupt data.

IOW, 'bit rotten' files should result in either a correction or an error. Apparently the examples passed ECC verification; the chances of a bit flip in the data and a matching flip in the ECC are astronomically small, I'd say. Ergo, the files were written that way to begin with.
Exactly. HDDs and SSDs should generally have good error correction mechanisms. The ones designed for media streaming may be more forgiving. Cheap USB sticks may be unreliable. But all of them should have basic ECC I guess.
This topic is far more complex than one might think. Too bad the Wikipedia article does not touch on this; it seems to assume data is stored unprotected and served without verification and error correction.
It mentions ECC but fails to mention that a one-bit flip will always be corrected by ECC.
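A toy illustration of that point - a SECDED Hamming(8,4) code in Python, not the BCH/LDPC codes real drives actually use, but the same principle: one flipped bit is corrected silently, two flipped bits come back as a read error rather than as silently corrupted data:

```python
# Toy SECDED Hamming(8,4) code: corrects any single-bit flip, detects any
# double-bit flip. Real drives use much stronger codes over whole sectors,
# but the principle is the same: correct or report, never return bad data.

def encode(d):                          # d: four data bits
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p3 = d[1] ^ d[2] ^ d[3]
    word = [p1, p2, d[0], p3, d[1], d[2], d[3]]   # Hamming(7,4), positions 1..7
    word.append(sum(word) % 2)          # overall parity bit for double-error detection
    return word

def decode(w):                          # w: eight received bits
    s = 0
    for pos in range(1, 8):             # syndrome = XOR of positions holding a 1
        if w[pos - 1]:
            s ^= pos
    overall = sum(w) % 2                # 0 if total parity is still even
    if s and overall == 0:              # syndrome set but parity even: two flips
        raise OSError("uncorrectable ECC error")   # like a drive read error
    if s and overall == 1:              # exactly one flip: syndrome points at it
        w[s - 1] ^= 1
    return [w[2], w[4], w[5], w[6]]     # the four data bits

data = [1, 0, 1, 1]
stored = encode(data)

one_flip = stored.copy(); one_flip[5] ^= 1
print(decode(one_flip) == data)         # True: corrected silently

two_flips = stored.copy(); two_flips[1] ^= 1; two_flips[6] ^= 1
try:
    decode(two_flips)
except OSError as e:
    print("two flips ->", e)            # detected and reported, not returned as data
```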
 
There are lots of points of failure apart from the storage (HDD/SSD). If it's a big problem for you, I'd suggest doing a test - try backing up a large batch of images (maybe your whole collection) and run a full content comparison right away.
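Something along these lines would do for the comparison step - a minimal sketch with placeholder paths, hashing every file in the source tree and checking it against the backup:

```python
# Hypothetical helper for that test: hash every file under the source tree and
# check it against the same relative path in the backup.
import hashlib
import os

def sha256_of(path, chunk=1 << 20):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk), b""):
            h.update(block)
    return h.hexdigest()

def compare_trees(src, dst):
    mismatches = []
    for root, _, files in os.walk(src):
        for name in files:
            s = os.path.join(root, name)
            d = os.path.join(dst, os.path.relpath(s, src))
            if not os.path.exists(d) or sha256_of(s) != sha256_of(d):
                mismatches.append(s)            # missing or differing copy
    return mismatches

bad = compare_trees(r"D:\Photos", r"E:\Backup\Photos")   # placeholder paths
print(f"{len(bad)} file(s) differ between source and backup")
```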
 
Have to say such a large number of corrupted images sounds unexpected. I've been lately going through all my digital photos, the first of which are from 2002 and literally none of the images I've reprocessed have had any issues with corruption. I upgrade my primary drives when they get too small for my photos and other content, so maybe a couple of times in a decade. The last time an in-use hard drive has failed on me was somewhere in 2003 or 2004.

Still, I do take backups to two external drives and copy particularly important photos to online storage too; better to be safe than sorry.


Well, I have to admit I have had some bad luck: several drive failures of otherwise good brands, and storage was spotty pre-2003 or so.

Certainly I suppose it must be usage as well. As I used to travel, that adds additional points of failure: flight air pressure, the elements, etc. The tropics probably did damage to the cards that I did not see at the time, and sub-30-degree weather in the other direction.

The only camera I have had last any length of time is the Panasonic T3C, which was waterproof - it still kind of is, just not entirely. Good for rain and the pool; I would no longer trust it at depth. Before that model I destroyed 5 waterproof cameras, all replaced under warranty since they are supposed to be waterproof and not leak.

Anyway, I imagine that my failure rate is not necessarily typical. To reduce errors I now only use an SD card a few times before retiring it, in some cases only once. I just reformatted my first SD card in the last two years, so I am using that one twice.

The exception is my old XT camera, where I am using CF cards which I suspect are past their due date. I am not sure yet if I will buy more CF cards or an adapter, as I use the camera as a backup. I actually find it odd using a camera where you cannot see the photo you have taken - I've been spoiled.
 
I haven't tested it myself, but I have been working with JPEG files for over 25 years and remember others testing this process, and it indeed slowly lowered the quality over a large number of saves. It takes a lot, but it does happen. I'm open to the possibility that they may have made improvements to the algorithms that have reduced the effect, though.
If you're re-saving the file, i.e. running it through the compression routine again, then there would be changes and degradation -- that's the nature of compression. If you are merely copying the file, there should never be any changes or degradation unless the disk/media is going bad.

Aaron
Yes, that's what I was talking about (Saving within an application). Copying a file won't change anything.
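For anyone who wants to see the generation-loss effect for themselves, a quick experiment along these lines (it assumes Pillow is installed and 'input.jpg' is a placeholder) re-encodes the same image 100 times and measures how far it has drifted from the original; a plain file copy, by contrast, stays byte-identical:

```python
# Requires Pillow; 'input.jpg' is a placeholder. Re-encodes the same image
# 100 times at quality 75 and reports how far the pixels drifted.
from PIL import Image, ImageChops

img = Image.open("input.jpg").convert("RGB")
original = img.copy()

for _ in range(100):
    img.save("generation.jpg", "JPEG", quality=75)     # lossy re-encode
    img = Image.open("generation.jpg").convert("RGB")  # reload the new generation

diff = ImageChops.difference(original, img)
print("max per-channel pixel error after 100 re-saves:",
      max(mx for _, mx in diff.getextrema()))
```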
 
Yes, that's what I was talking about (Saving within an application). Copying a file won't change anything.
Well, that's semi-true. Let's call copying subject to mutation - that is, copy error. Each copy will be slightly less than the original, at least some of the time. Digital is easier to copy with little or no error, but errors do happen.

In case one, info is lost on purpose; in case two, by accident.

More than once I have had to copy a file more than once to get a byte-for-byte copy. Yesterday, in fact.
 
More than once I have had to copy a file more than once to get a byte for byte copy. Yesterday in fact.
It's unusual (although it could have been expected given your issues with the spoiled files).

Your target drive (whatever it is) could be faulty, or maybe you're having issues with RAM, so that the data gets corrupted in transit.

Try copying (not moving) a large amount of files from one folder to another temp folder on the same drive, then run comparison. This may help identify the culprit.

Also, RAM, HDD or SSD diagnostic tools may be of some help (the RAM test should be run from a separate bootable USB stick at PC boot - there are lots of free RAM tests).

--
https://www.instagram.com/quarkcharmed/
https://500px.com/quarkcharmed
 
More than once I have had to copy a file more than once to get a byte for byte copy. Yesterday in fact.
What was the configuration in which you didn't have a perfect copy?

For important archives I use utilities that verify after copying. In the unlikely event there was an error, it gets detected, and I can redo the copy.
 
For important archives I use utilities that verify after copying. In the unlikely event there was an error, it gets detected, and I can redo the copy.
A very good habit. I have a number of batch files using xcopy for backing up my files and use the /v switch to verify that the destination copy is the same as the source.
 
More than once I have had to copy a file more than once to get a byte for byte copy. Yesterday in fact.
If you're seeing changes to files often from just a simple copying process, you may have other issues going on. In 40 years of working on and supporting computers (of all types), I can't think of any noticeable data loss that wasn't attributable to failing storage media, failing RAM, a failing network device, or a very poorly written program. In today's world, simply copying a file through the normal means should not cause data loss unless something is seriously wrong.
 
Your target drive (whatever it is) could be faulty, or maybe you're having issues with RAM, so that the data gets corrupted in transit.

Yeah, it's probably lack of RAM. Always running out of that, and I know some of it is overtaxing the CPU. I'm down to one prime machine, a tablet, and an off-net machine, so that's clearly not enough workhorses to get anything substantive done. I have been surprised how useful the tablet has been, however.

How much RAM do you recommend? How many cores?
 
Yeah, it's probably lack of RAM.
Not the lack of RAM - I meant the (potentially) faulty RAM.
Always running out of that, and I know some of it is overtaxing the CPU. I'm down to one prime machine, a tablet, and an off-net machine, so that's clearly not enough workhorses to get anything substantive done. I have been surprised how useful the tablet has been, however.

How much RAM do you recommend? How many cores?
For just copying/backing up data - it doesn't matter too much.

For photo processing - each editing software has its own requirements; you can check their minimum requirements and multiply by 2 :) They may also specify a recommended PC configuration.
 
How long does RAM last? How do you check for faulty RAM?
 
The information is the medium in a certain sense. If the medium is intact, the information is intact. Full stop.
Please back that up with data. This is contrary to what I have read and experienced.
There's no need to back that with data - the above statement is in the same league as flat-earth conspiracies and the like.
I am sure you're right, I'm no expert, but let's play Devil's Advocate:

That is a fundamental misunderstanding of particle science and Thermodynamics.

Reminder: entropy.

Plus, we have no idea how effective quantum tunneling is.
We do. There are tunneling semiconductors that use it.
However, you're right: electron loss is almost certainly from some form of medium decay.
You always need a physical carrier to store the information; you can't store information without making physical changes in the medium. Maybe you're implying that there's a silicon medium and the electrons in it are something different (non-medium)?
I guess you could argue that these are all ways of destroying the medium - that a chalkboard and its chalk are one, and if you remove the chalk, it's the medium that suffers as well.
The chalk is the medium.

However, you can lose information when the medium is intact and the information is intact, but the protocol is broken, so the information becomes inaccessible or corrupted. It can break at different levels: an SSD controller may fail, or at a higher level, your software may get an upgrade and stop supporting an old file format. Or it can be a glitch in the software that incorrectly decodes certain files, or a glitch at a lower level, e.g. faulty RAM.

With your original examples, things could go wrong at many different levels; it didn't necessarily have to be faulty storage.
On the chalk example, the electrons represent the chalk and the hard drive the chalkboard. Just to clarify.

And I agree, there are numerous points of possible failure. As I have had a large enough amount of data over time, with many solutions tried, I'd say the 1% rate is the best an average person can hope for, though with numerous backups the odds of two identical files being destroyed at the same time are of course low. I used to keep only two copies of files to save space; now that is my baseline, and I make numerous other copies over time.

One early mistake was believing that early CDs were a good way to keep files (they were not), and the same is true of early USB drives (all of mine from the early days have failed). At the time the marketing hype was different.

I just learned that SD cards fail IF not in use - or rather, they need power to keep files intact. Which means that if you keep SD cards not connected to your computer, you have to regularly cycle through them (which I have not been doing). More lost files waiting to be discovered.
 
How long does RAM last? How do you check for faulty RAM?
For example: https://www.cnet.com/how-to/how-to-test-your-ram-in-windows/


The RAM may last forever but may fail unpredictably. Also, the power supply is very important; it may cause unpredictable behaviour in all or some PC components.
 
Huge point you mention here. The power supply (PSU) is usually overlooked; most people just think that if it supplies enough power it's good enough. It's always a good idea to get a better-rated PSU, and get a top-shelf brand if you can.

Also get one with a higher output than needed: if your components peak at 280 W, don't get a 300 W PSU, as it's more likely to fail if it's pushed to the limit all the time. Better-quality units (and higher-output units) are less likely to spit out uneven current and are less likely to fail.
 
While all the others here are busy correcting each other over some terminology and all the details of where and how the fiddly bit thingies end up being stored, I have something else for you:


It covers old-school film, but also two examples of digital corruption. The talk is a bit sluggish at the start; the first digital example pops up at around the 27:30 mark (a broken camera).
 
Wow...
 
