How much disk space do I use in a year of hybrid camera use?

I shoot a 61MP A7CR and save all my images as JPEG plus uncompressed RAW. I go out to shoot a few times a month and usually end up with fewer than 100 photos a session. I also shoot sports photos and videos, and when I do I end up with just under 1,000 photos plus 4k60p video in XAVC HS format with proxies.

I end up making 30x20 prints and videos of the events. After each event I make a folder on my computer and copy the entire SD card from my camera. My folder structure looks like this:

01012024 - NewYearsDayParty

12242024 - Christmas Day

etc...

This year I have 44 events, including practice and testing sessions.
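As an aside, that date-prefixed naming scheme is easy to generate in a couple of lines of Python (the helper below is a made-up example, not anything from the thread). One thing worth knowing: a YYYYMMDD prefix would sort chronologically in a file browser, while MMDDYYYY groups by month across years:

```python
from datetime import date

def event_folder(event_date: date, name: str) -> str:
    """Build an MMDDYYYY-prefixed folder name like the examples above."""
    return f"{event_date.strftime('%m%d%Y')} - {name}"

print(event_folder(date(2024, 1, 1), "NewYearsDayParty"))  # 01012024 - NewYearsDayParty
```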

I save everything, because one year I didn't copy the entire card over and lost the video of the last Christmas with one of my family members. This technique prevents that from happening.

I store everything on my PC, which has 72TB of RAID storage. I then back up the year onto an external solid-state drive, and I keep copies of the JPEGs and videos in Google Photos and on YouTube.

After all of that, I end up with 3TB of data for the year.
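For what it's worth, that 3TB figure is easy to sanity-check with back-of-envelope arithmetic. Every constant in this sketch is an assumption for illustration (real file sizes vary with scene, settings, and codec bitrate; the casual/sports split and per-event video volume are guesses consistent with the counts described above):

```python
# Back-of-envelope sanity check of the ~3TB/year figure.
# Every constant here is an assumption for illustration.
GB = 1024  # MB per GB

UNCOMPRESSED_RAW_MB = 128  # ~61MP x 14-bit uncompressed ARW (~2 bytes/pixel)
LOSSY_RAW_MB = 64          # assumed lossy-compressed ARW for sports bursts
JPEG_MB = 25               # assumed extra-fine JPEG alongside each RAW

casual_shots = 24 * 100    # ~2 sessions/month, <100 shots each (assumed split)
sports_shots = 20 * 1000   # sports/video events, <1000 shots each (assumed split)

photo_gb = (casual_shots * (UNCOMPRESSED_RAW_MB + JPEG_MB)
            + sports_shots * (LOSSY_RAW_MB + JPEG_MB)) / GB
video_gb = 20 * 60         # assumed ~60 GB of 4k60p XAVC HS + proxies per event

print(f"photos ~{photo_gb:.0f} GB, video ~{video_gb} GB, "
      f"total ~{(photo_gb + video_gb) / GB:.1f} TB")
```

With these made-up but plausible numbers the total lands in the low single-digit TB range, so a 3TB year is entirely believable.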

Today I start a new folder for 2025.
As someone who religiously uses the 20/30fps on the A1, I'm amazed that somebody with an A7RV is outshooting me and having keepers at about 7 times the rate I am................
 
You could use less space if you chose lossless compressed RAW - uncompressed RAW uses almost twice the space for no good reason.

I shot uncompressed RAW on the A7RIV, because there was no alternative. Lossless compressed (unlike lossy compressed, which was the original Sony compressed format) retains all the data, just compressed like a ZIP file. I think lossless compression arrived with the A7IV, but it was ported back to the original A1. It is in the A7RV, and it's in the A7CR.

I know a few people use unusual tools on their RAW files and need uncompressed for that, but all the mainstream tools support lossless compressed now. There are a few that don't support the "scaled" lossless compressed files (e.g. the 15Mpixel RAW-S), but they still support the true RAW files.
Space is cheap, so the only benefit to using the lossless compressed files is buffer performance. I have seen enough comparisons to know that all of the compressions do impart a difference into the final product regardless of what the label says.
No... This really isn't about "what the label says", you're literally arguing against facts and math here.
The only thing that matters is the results you see with your eyes. If you are going to use math as a defense you will need to show your work.
No, you're the one making the initial claim against what's known and understood. Have you done your own test? I'd be genuinely curious to see the results.
That's not true; you and Aleph are trying to convince me lossless is better than uncompressed,
I'm not! Honest, you do you friend, but I will stand against the spread of what I see as misinformation which others could take as gospel, the video you linked falls squarely in that category, sorry.
after I stated I have no issue with space
That's fair.
The burden of proof is on the two of you. Uncompressed is the unalgorithmic form of the file.
What does that even mean?
I don't want to use anything else except where I need performance. Space is not an issue for me 99% of the time.
Besides, I wonder how they can call the MRAW and SRAW lossless when clearly there is a difference in the final file, so the title of lossless means little to me.
That's different, but yeah no argument there...
I use uncompressed for true file fidelity and compressed (lossy) for performance.
The point is that lossless compression (when used on the full res L RAW) is the best of both, with no downside (not small, none).
It is an improved compression, but either you want to save space or you want file fidelity. Lossless gives you less of both.
No, it's a different type of compression not an improved form of compression. If lossless compression didn't give exact byte per byte and 0 per 0 fidelity then software install packages and zip compression would literally not work... Think about it.
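The ZIP analogy in this exchange can be demonstrated directly with Python's standard zlib module (the same DEFLATE family of lossless compression): the round trip is byte-for-byte exact, and a corrupted stream fails loudly instead of silently degrading. A minimal sketch:

```python
import zlib

original = b"sensor data " * 100_000     # highly regular input, compresses well
packed = zlib.compress(original, 6)

# Lossless means bit-for-bit identical after the round trip -- not "close enough".
assert zlib.decompress(packed) == original
print(f"{len(original)} -> {len(packed)} bytes")

# Corruption doesn't silently degrade the data: the stream's header/checksum
# makes decompression fail outright rather than return altered bytes.
corrupted = bytes([packed[0] ^ 0xFF]) + packed[1:]
try:
    zlib.decompress(corrupted)
except zlib.error as exc:
    print("corrupt stream rejected:", exc)
```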
I didn't say it was an improved version of the lossy compression; I said it is an improved compression, meaning it could be version 2.0 of the lossy compression,
Factually incorrect.
or it could be version 1.0 of a new compression the difference doesn't matter to me.
Facts matter though.
I'm very familiar with computer compression, and you and I both know errors can happen in the process of compression,
When errors happen with lossless compression, it'll be plainly visible and/or the decompression will fail.
not to mention the extra computer power it takes to do the compression.
So you've got no issues with using up 2x the amount of storage but computing power is a concern? 🤔
Lossy compression isn't so bad that I would never use it yet I get real buffer gains when using lossy compression vs lossless compression.
You should be using it all the time but you do you, Aleph already pointed out the only plausible justification not to use it and it's got nothing to do with fidelity.
Otherwise I don't have anything against any of the formats as I am all for more options, but I prefer to use the formats that capture the most unaltered data. Just a personal preference.
There's a reason it's called lossless. I'm not trying to change your mind (honest), but the way you're laying it out borders on misinformation (no offense, to you or that random YT'er trying to spot differences at 500%... yeah there's better ways to do that).
I just need to see the better way to prove it, like I said if it is math we need to see the work.
The entire Internet and software industry would like a word.
Another YouTuber (mathphotographer) does this with the Q3. We just need someone to do the same for Sony, so that I can stop using my eyes.
And what did he conclude?
He came away finding that the MRaw version actually improved DR and had lower noise than the LRaw version.

(As an aside I thought Sony was doing the same thing, but they aren't. I see a real value in a 36 MP raw that is using pixel binning for its smaller size. Almost gives you a free sensor type)
Again, not enough difference between the three formats for me to be concerned, but if I have the space, why not compress on my computer (i.e. convert to DNG, which is a smaller and more useful format than .arw)?
I'd suggest doing your own test if you're gonna argue this vehemently, no offense but the YouTube test you linked is ridiculous. A test like that shouldn't be performed outside where lighting conditions are changing by the second (and they did!), that undermines the credibility and reliability of the whole test, and even then the YT'er you linked only thinks he saw a difference in 1 out of 4 tests he performed.

Frankly the whole thing smacks of bad methodology, because the test where he thinks he did see a difference is the one where one would expect less difference (with the less resolving of the two lenses he used and with no over/under exposure used)...
I agree he could have used a more scientific and stable environment, but there is enough there to question what Sony really means by lossless,
There really really isn't enough there to even be a there.
when they carelessly use it with the other two formats.
That's fair, I dunno what they should've called the downscaled lower res options but I think CaNikon have also called them RAW so it's not totally unusual or marketing gone wild (in a new sense that is).
 
Even Sony is unclear on whether the lossless compression is truly the same; that's enough doubt for me to continue using uncompressed until I need to save space or improve performance.

"Lossless compressed RAW:

Lossless compression is a format that allows you to reduce image size without compromising on quality. A lossless compressed image is processed by post-processing software and the data is decompressed like a ZIP file. Decompression allows you to expand the compressed file back to its original size.

This is a popular format that occupies less space with minimal quality loss.

Lossless Compressed Raw is recommended when you want to record content in a higher image quality equivalent to uncompressed RAW in a smaller file size format."

 
https://www.sony.co.uk/electronics/support/articles/00257081
Oh we went through this multiple times in the past. They're literally contradicting themselves in the previous 3 sentences - "Without compromising on quality".

Anyways, the lossless compressed raw is using Lossless JPEG (ljpeg92), which lo and behold actually is lossless. They didn't do any 14+7 lossy compression. If you don't believe it, here's how RawTherapee handles it: note the inclusion of the LibRaw_LjpegDecompressor function. https://github.com/Beep6581/RawTherapee/blob/dev/rtengine/libraw/src/decoders/sonycc.cpp

You're free to use whatever format you want, but I find it hilarious that you care about that perceived image quality loss when you're using lenses such as the 24-240............
 
As someone who religiously uses the 20/30fps on the A1, I'm amazed that somebody with an A7RV is outshooting me and having keepers at about 7 times the rate I am................
Who are you referencing? First, I shoot an a7CR, and I shoot equal amounts of uncompressed photos and high-data-rate 4k videos.

No one is comparing keeper rates or 20/30 fps... but if you are, you win.
 
Oh we went through this multiple times in the past. They're literally contradicting themselves in the previous 3 sentences - "Without compromising on quality".
Exactly. Nowhere is there proof that this is truly lossless. My best idea to prove it is to decompress the lossless *.arw file; it should have the EXACT same size and hex values as an uncompressed file.

I actually started testing this by using the remote camera app to take an uncompressed picture, then taking another one in the compressed format. This takes about 10 seconds, so the camera and lighting conditions don't change.

I was actually playing with rawwork, libraw, and exiffile, trying to find a way to decompress the files on my Windows machine, but as I don't have a C compiler, I will need to build a VM and use Linux to compile and write a program to decompress the *.arw file.
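If it helps, the comparison step can be sketched in plain Python without a C toolchain. One caveat with the two-exposure approach: even two back-to-back uncompressed shots will differ at the pixel level because of photon and read noise, so a mismatch between an uncompressed shot and a lossless shot of the same scene doesn't by itself prove anything about the compression. A sketch; the rawpy usage (a Python LibRaw wrapper) and the file names are hypothetical placeholders:

```python
# Sketch of a "prove it's lossless" check. Comparing the .arw files
# byte-for-byte will NOT work -- the containers differ by design -- so the
# meaningful comparison is between the decoded sensor samples.

def first_mismatch(a, b):
    """Return (index, a_val, b_val) for the first differing sample, or None."""
    if len(a) != len(b):
        return ("length", len(a), len(b))
    for i, (x, y) in enumerate(zip(a, b)):
        if x != y:
            return (i, x, y)
    return None

# Hypothetical usage with rawpy (`pip install rawpy`); file names are placeholders:
#
#   import rawpy
#   with rawpy.imread("uncompressed.ARW") as u, rawpy.imread("lossless.ARW") as c:
#       diff = first_mismatch(u.raw_image.ravel().tolist(),
#                             c.raw_image.ravel().tolist())
#   print("identical" if diff is None else f"differs at {diff}")

print(first_mismatch([512, 513, 514], [512, 513, 514]))  # None
print(first_mismatch([512, 513, 514], [512, 999, 514]))  # (1, 513, 999)
```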
Anyways the lossless compressed raw is using Lossless JPEG (ljpeg92), which lo and behold actually is lossless. They didn't do any 14+7 lossy compression. If you don't believe, here's how Rawtherapee handles it: note the inclusion of the LibRaw_LjpegDecompressor function. https://github.com/Beep6581/RawTherapee/blob/dev/rtengine/libraw/src/decoders/sonycc.cpp
So you are saying a predictive compression algorithm is going to successfully compute an exact copy of a file 100% of the time?

From Wikipedia: https://en.wikipedia.org/wiki/Lossless_JPEG

"This is a model in which predictions of the sample values are estimated from the neighboring samples that are already coded in the image. Most predictors take the average of the samples immediately above and to the left of the target sample. DPCM encodes the differences between the predicted samples instead of encoding each sample independently. The differences from one sample to the next are usually close to zero."

Actually, this language explains Sony's waffling. They can't say it is 100% lossless, only a best-effort lossless that is better than the prior form of compression.
You're free to use whatever format you want, but I find it hilarious that you care about that perceived image quality loss when you're using lenses such as the 24-240............
I really expect better from you than a comment like that. The 24-240 is a special purpose lens. That's like expecting a tilt shift lens to give you the same quality as GM. I appreciate all glass for what it can do.
 
So you are saying a predictive compression algorithm is going to successfully compute an exact copy of a file 100% of the time?

That's not what predictive compression is at all.

All it means is that the amount of compression differs based on the content. ZIP files use exactly the same principle, albeit tailored more for general data structures; I don't think you'd accuse zipped files of being lossy, despite ZIP also using predictive encoding. Predictive compression depends on a concept called entropy, which is why very noisy images do not compress as well as clean images. You can very easily test that out.

Also, that part you highlighted doesn't mean what you think it means. The initial prediction will have errors, and the errors are essentially what are being encoded to correct it.

Here's an example: say I have 5 numbers: 100, 101, 102, 103 and 104. If uncompressed, each number requires 7 bits, making this sequence 35 bits long. If I use a predictive algorithm that initially predicts every number is 100, the errors are 0, 1, 2, 3 and 4. But to encode 0, 1, 2, 3 and 4 I only need:
  • 1 bit for 0
  • 1 bit for 1
  • 2 bits for 2
  • 2 bits for 3
  • 3 bits for 4
So, if I compress this I'd need 7 bits for the 100, followed by 1+1+2+2+3 for a total of 16 bits - more than a 50% reduction from 35 bits. Reality is a bit more complicated, but that's the basic principle. This is what the comment above means - the differences are close to zero, therefore the bits needed to represent them are much smaller.

You can also see where the algorithm becomes less efficient the more chaotic (or entropic) the samples get. Eventually, it'll approach the same efficiency as non-compressed data.
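The scheme described above is easy to verify in a few lines of Python. This toy DPCM coder (names made up for illustration) stores the first sample raw plus the differences from the previous sample, and the round trip reconstructs the input exactly - which is precisely what "lossless" means:

```python
# Toy DPCM: keep the first sample, then only the sample-to-sample differences.

def dpcm_encode(samples):
    return [samples[0]] + [b - a for a, b in zip(samples, samples[1:])]

def dpcm_decode(encoded):
    out = [encoded[0]]
    for d in encoded[1:]:
        out.append(out[-1] + d)
    return out

samples = [100, 101, 102, 103, 104]
encoded = dpcm_encode(samples)          # [100, 1, 1, 1, 1] -- tiny residuals
assert dpcm_decode(encoded) == samples  # bit-exact reconstruction, no loss
print(encoded)

noisy = [100, 140, 97, 133, 102]
print(dpcm_encode(noisy))  # [100, 40, -43, 36, -31] -- large residuals, poor compression
```

The noisy example also shows the entropy point: chaotic samples produce large residuals, so the compressor gains little.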
 
Who are you referencing? First I shoot an a7CR and I shoot equal amounts uncompressed photos and high data rate 4k videos.
No one is comparing keeper rates or 20/30 fps... but if you are, you win.
No, the comment is more remarking on the fact that you keep a very high number of images. I'm much more miserly with keeping despite shooting upwards of a thousand pictures per weekend - I usually distil that down to fewer than 50. I don't know what your breakdown between video and photo is; I never shoot video.....
 
That's not what predictive compression is at all.

Thanks for this. This makes more sense than the article, and I appreciate the time you took to type this out. I am all about learning, so this has moved me in the right direction. That said, it is late, and I am going to review this again against how I interpreted this in the morning.
 
No, the comment is more remarking on the fact you keep a very high number of images. I'm much more miserly with keeping despite shooting upwards of a thousand pictures per weekend - I usually distil that down into less than 50. I don't know what your breakdown between video and photo are, I never shoot video.....
I keep all my images. I probably only develop about the same percentage as you, i.e. I want between 20-100 photos/keepers per outing. I also shoot a lot of video with proxies so that I can upload the proxies quickly to YouTube, and then do something more with the higher bit rate files if needed.

Like I said, I inadvertently deleted some files I wish I hadn't, so I would rather keep everything than throw away/delete one wrong image/video. I thought 3TB was actually pretty low for a full year of data, and I have plenty of space.
 
Yeah, if you never cull, then 3TB is pretty reasonable - based on how I number my pics, I'm up to about 50,000 images on the A1 if I never cull. You play a very safe game, which is pretty reasonable, but as a result your storage needs are going to be very fun!
 
Even Sony is confused about whether lossless compression is truly identical; that's enough doubt for me to continue using uncompressed until I need to save space or improve performance.

"Lossless compressed RAW:

Lossless compression is a format that allows you to reduce image size without compromising on quality. A lossless compressed image is processed by post-processing software and the data is decompressed like a ZIP file. Decompression allows you to expand the compressed file back to its original size.

This is a popular format that occupies less space with minimal quality loss.

Lossless Compressed Raw is recommended when you want to record content in a higher image quality equivalent to uncompressed RAW in a smaller file size format."

https://www.sony.co.uk/electronics/support/articles/00257081
Oh, we went through this multiple times in the past. They're literally contradicting themselves within three sentences - "without compromising on quality".
Exactly. Nowhere is there proof that this is truly lossless. My best idea to prove it is to decompress the lossless *.arw file; it should have the EXACT same size and hex values as an uncompressed file.

I actually started testing this by using the remote camera app to take an uncompressed picture, then taking another one in the compressed format. This takes about 10 seconds, and the camera and lighting conditions don't change.

I was actually playing with rawwork, libraw and exiffile, trying to find a way to decompress the files on my Windows machine, but as I don't have a C compiler, I will need to build a VM and use Linux to compile and write a program to decompress the *.arw file.
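The property you'd be testing is that decompress(compress(x)) == x, byte for byte, on the same data. A minimal sketch of that round-trip check, with zlib standing in for ljpeg92 (no actual .arw parsing; the synthetic buffer is half noise, half flat, to show entropy at work):

```python
import os
import zlib

# Stand-in "sensor data": a lossless codec must guarantee an exact
# byte-for-byte round trip. zlib here is only a proxy for ljpeg92.
original = os.urandom(2**16) + bytes(2**16)  # noisy half + flat half

compressed = zlib.compress(original, level=9)
restored = zlib.decompress(compressed)

assert restored == original            # exact reconstruction, no loss
print(len(original), len(compressed))  # flat data compresses, noise doesn't
```

Note also that two separate exposures, even seconds apart, will never match byte for byte because of sensor noise, so comparing an uncompressed shot against a compressed shot of the same scene can't settle the question; only a round trip on the same data can.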
Anyway, the lossless compressed raw uses Lossless JPEG (ljpeg92), which, lo and behold, actually is lossless. They didn't do any 14+7 lossy compression. If you don't believe me, here's how RawTherapee handles it - note the inclusion of the LibRaw_LjpegDecompressor function: https://github.com/Beep6581/RawTherapee/blob/dev/rtengine/libraw/src/decoders/sonycc.cpp
So you are saying a predictive compression algorithm is going to successfully compute an exact copy of a file 100% of the time?

From Wikipedia: https://en.wikipedia.org/wiki/Lossless_JPEG

"This is a model in which predictions of the sample values are estimated from the neighboring samples that are already coded in the image. Most predictors take the average of the samples immediately above and to the left of the target sample. DPCM encodes the differences between the predicted samples instead of encoding each sample independently. The differences from one sample to the next are usually close to zero."

Actually, this language explains Sony's waffling. They can't say it is 100% lossless, only a best-effort lossless that is better than the prior form of compression.
You're free to use whatever format you want, but I find it hilarious that you care about that perceived image quality loss when you're using lenses such as the 24-240............
I really expect better from you than a comment like that. The 24-240 is a special-purpose lens. That's like expecting a tilt-shift lens to give you the same quality as a GM. I appreciate all glass for what it can do.
That's not what predictive compression is at all.

All it means is that the amount of compression differs based on the content. Zip files use exactly the same principle, albeit more tailored for general data structures - I don't think you'd accuse zipped files of being lossy despite them also using predictive encoding. Predictive compression depends on a concept called entropy, which is why very noisy images do not compress as well as clean images. You can very easily test that out.

Also, that part you highlighted doesn't mean what you think it means. The initial prediction will have errors, and the errors are essentially what are being encoded to correct it.

Here's an example: say I have 5 numbers: 100, 101, 102, 103 and 104. If uncompressed, each number requires 7 bits, making this sequence 35 bits long. If I use a predictive algorithm, I initially predict that each number is 100. That gives me errors of 0, 1, 2, 3 and 4. But if I encode 0, 1, 2, 3 and 4 instead, I only need:
  • 1 bit for 0
  • 1 bit for 1
  • 2 bits for 2
  • 2 bits for 3
  • 3 bits for 4
So, if I compress this I'd need 7 bits for the 100, followed by 1+1+2+2+3 for a total of 16 bits - more than a 50% reduction from 35 bits. Reality is a bit more complicated, but that's the basic principle. This is what the comment above means - the differences are close to zero, therefore the bits needed to represent them are much smaller.

You can also see where the algorithm becomes less efficient the more chaotic (or entropic) the samples get. Eventually, it'll approach the same efficiency as non-compressed data.
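The delta idea above can be sketched in a few lines. This is a simplification: it predicts each sample from its immediate predecessor and only counts bits, ignoring how the residuals would be packed into a prefix-free bitstream:

```python
def dpcm_encode(samples):
    """Encode each sample as the difference from its predecessor."""
    prev = 0
    out = []
    for s in samples:
        out.append(s - prev)
        prev = s
    return out

def dpcm_decode(deltas):
    """Exactly invert dpcm_encode - no information is lost."""
    prev = 0
    out = []
    for d in deltas:
        prev += d
        out.append(prev)
    return out

samples = [100, 101, 102, 103, 104]
deltas = dpcm_encode(samples)          # [100, 1, 1, 1, 1]
assert dpcm_decode(deltas) == samples  # lossless round trip

# Small residuals need far fewer bits than the raw values:
raw_bits = sum(max(s.bit_length(), 1) for s in samples)
delta_bits = sum(max(abs(d).bit_length(), 1) for d in deltas)
print(raw_bits, delta_bits)  # 35 vs 11
```

Real ljpeg92 predicts from the neighbours above and to the left rather than just the previous sample, but the round-trip guarantee works the same way.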
Thanks for this. This makes more sense than the article, and I appreciate the time you took to type this out. I am all about learning, so this has moved me in the right direction. That said, it is late, and I am going to review this again against how I interpreted this in the morning.
No problem.

Data compression is not a simple concept to grasp, which is why I don't blame you for misunderstanding a bit. Raw compression is a pretty contentious issue as it is, without folks misunderstanding it, as demonstrated by Sony's very poorly worded article.

As an addendum, my example did not include Huffman encoding, which takes the probabilistic idea further by shorthanding recurring elements. The example above isn't a good candidate for Huffman encoding, as the numbers are too small.

Say a particular bit pattern occurs all the time - for example, 01010101110. You abbreviate it to, say, 010, and for every recurrence of 01010101110 you save 8 bits.
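That shorthand-for-recurring-symbols idea is exactly what Huffman coding formalises; here is a minimal self-contained sketch (the example string is made up):

```python
import heapq
from collections import Counter

def huffman_codes(data):
    """Build a prefix-free code: frequent symbols get shorter codewords."""
    heap = [(freq, i, {sym: ""})
            for i, (sym, freq) in enumerate(Counter(data).items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)   # two least frequent subtrees
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (f1 + f2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]

data = "aaaaaaaabbbccd"                  # 'a' recurs most often
codes = huffman_codes(data)
encoded = "".join(codes[ch] for ch in data)

# Prefix-free codes decode unambiguously, so the round trip is exact:
inverse = {v: k for k, v in codes.items()}
decoded, buf = "", ""
for bit in encoded:
    buf += bit
    if buf in inverse:
        decoded += inverse[buf]
        buf = ""
assert decoded == data
print(len(data) * 8, len(encoded))       # 112 bits down to 23
```

The common symbol 'a' gets a 1-bit codeword while the rare 'd' gets 3 bits, and the decode step shows why no information is lost: prefix-free codewords can never be confused with each other.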
 
Last edited:
My needs are more modest though; I probably shoot half as much as you, and outside the occasional astro session or high-contrast shots I've no need for uncompressed RAW, which saves a ton on storage. TBH, on the A7CR there isn't any kind of IQ penalty for lossless compression vs uncompressed (unlike lossy on my A7R IV), so outside of some buffer considerations I dunno why you bother with uncompressed, but I think we had this discussion before...
But there is a difference ...

Watch this video:


Again minor, but the difference is there.
That video didn't tell me anything, there's better ways to compare the files before/after compression.
There is no way you actually paid attention and watched that in that short a time. You got a sense of his method, but there's no way you went in depth reviewing his scenarios that quickly. I watched it three or four times trying to find a hole in his method, and his conclusions are reasonable and justified.
It took me less than a minute to notice that he is comparing the EMBEDDED PREVIEWS against each other in the first part of his test. So he did not actually compare any of the image data he claimed to. Previews of compressed files of any type might be more compressed than those of the uncompressed files. That's it. But I don't care about the embedded previews. Please provide different proof that the quality of lossless compressed is less than that of uncompressed.
 
This amateur managed to fill just under 3TB, stills only. Off to a heavy start: 100GB of RAWs on 1 Jan, but I'll be able to cull it when I get home. Ken
 
The folder structure should have the year first, then month, then day otherwise the 1st of November would sit higher than the 3rd of January.
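Strictly, month-first names sort fine within a single year (the month digits lead); it's across years that lexical order breaks, which per-year parent folders sidestep. A quick sketch with hypothetical folder names:

```python
# MMDDYYYY names: lexical sort puts Jan 2025 before Nov 2024.
folders = ["11012024 - Parade",   # 1 Nov 2024
           "01032025 - Hike"]     # 3 Jan 2025
print(sorted(folders))            # Hike (2025) sorts first - wrong order

# Year-first (YYYYMMDD) names sort chronologically by construction:
iso = ["20241101 - Parade", "20250103 - Hike"]
assert sorted(iso) == iso
```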
 
The folder structure should have the year first, then month, then day otherwise the 1st of November would sit higher than the 3rd of January.
This structure stores all the files in a 2024 folder. I have one for 2023, 2022, 2021, etc., and I am making a 2025 folder this year. This makes it easy for me to store a full year on a single external drive. Adding the year to the naming structure is redundant, but it works for me.

To be fair, I have shot more this year than in past years, especially since I spent a lot of time editing video.
 
Last edited:
This amateur managed to fill just under 3TB, stills only. Off to a heavy start: 100GB of RAWs on 1 Jan, but I'll be able to cull it when I get home. Ken
Nowhere did I say stills only; you added that. Instead of trying to put in a cheap shot, why don't you reread the initial post I made.
 
The folder structure should have the year first, then month, then day otherwise the 1st of November would sit higher than the 3rd of January.
This structure stores all the files in a 2024 folder. I have one for 2023, 2022, 2021, etc., and I am making a 2025 folder this year. This makes it easy for me to store a full year on a single external drive. Adding the year to the naming structure is redundant, but it works for me.

To be fair, I have shot more this year than in past years, especially since I spent a lot of time editing video.
I do the same, but use "241002 India" for the 2nd of October 2024, in which case all folders sort in line, regardless of whether I use them under the 2024 folder or independently. I started again to take a laptop and regularly do remote jobs, and this time use a single folder, without the year, which then seamlessly fits into my larger storage. This way it's also easy to double-check backups.

Similar but different, so it seems, our systems. On top of this you can then search with wildcards, e.g. 1108* for August 2011 files 😉 I use this system also for films, so I can find 2015 Christmas films more easily where I don't use top-level, i.e. yearly, folders.

Deed
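That wildcard search maps directly onto shell-style pattern matching; a small sketch using Python's fnmatch (the folder names are hypothetical):

```python
from fnmatch import filter as fnfilter

# Hypothetical YYMMDD-prefixed folder names
folders = ["110803 Berlin", "110820 Lakes", "241002 India", "241225 Christmas"]

print(fnfilter(folders, "1108*"))   # all August 2011 folders
print(fnfilter(folders, "24*"))     # everything from 2024
```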
 
This amateur managed to fill just under 3TB, stills only. Off to a heavy start: 100GB of RAWs on 1 Jan, but I'll be able to cull it when I get home. Ken
Nowhere did I say stills only; you added that. Instead of trying to put in a cheap shot, why don't you reread the initial post I made.
Woah... back up. The way I read the comment, Gloomy1 was talking about his own experiences - why so touchy? Happy New Year.
 
This amateur managed to fill just under 3TB, stills only. Off to a heavy start: 100GB of RAWs on 1 Jan, but I'll be able to cull it when I get home. Ken
Nowhere did I say stills only; you added that. Instead of trying to put in a cheap shot, why don't you reread the initial post I made.
Woah... back up. The way I read the comment, Gloomy1 was talking about his own experiences - why so touchy? Happy New Year.
Thanks for the clarity; I can see the other intention also. I had others assuming I was only using stills, so I thought this was yet another follow-up on my saving 3TB of stills, and mistook the "This amateur" comment as pejorative. I am never too big to say I made a mistake.

Apologies to Gloomy1, if I misinterpreted your comment.

...and thanks to Jayboo for calling it out.
 
Last edited:
