I shoot a 61MP A7CR and save all my images in JPEG + uncompressed RAW. I go out to shoot a few times a month and usually end up with fewer than 100 photos a session. I also shoot sports and video, and when I do I end up with just under 1,000 photos, plus 4K60p video in XAVC HS format with proxies.
I end up making 30x20 prints and videos of the events. After each event I make a folder on my computer and copy over the entire SD card from my camera. My folder structure looks like this:
01012024 - NewYearsDayParty
12242024 - Christmas Day
etc...
This year I have 44 events, including practice and testing sessions.
I save everything because one year I didn't copy the entire card over and lost the video of the last Christmas with one of my family members. This routine prevents that from happening again.
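For what it's worth, that copy-everything step is easy to script. Here's a minimal sketch of the idea; the mount point, archive path, and event name are just assumptions to match the folder structure above:

```python
import shutil
from datetime import date
from pathlib import Path

# Hypothetical paths -- adjust to your card reader and RAID layout.
CARD = Path("/media/SDCARD")          # mounted SD card
ARCHIVE = Path("/raid/photos/2025")   # year folder on the PC

def ingest(event_name: str) -> Path:
    """Copy the whole card into an 'MMDDYYYY - EventName' folder."""
    folder = ARCHIVE / f"{date.today():%m%d%Y} - {event_name}"
    # copytree copies everything on the card, so no clip can be
    # skipped the way a selective copy once was.
    shutil.copytree(CARD, folder)
    return folder

ingest("NewYearsDayParty")
```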
I store everything on my PC, which has 72TB of RAID storage. I then back up the year onto an external solid-state drive, and I keep copies of the JPEGs and videos in Google Photos and YouTube.
After all of that, I have about 3TB of data for the year.
Today I start a new folder for 2025.
You could use less space if you chose lossless compressed RAW - uncompressed RAW uses almost twice the space for no good reason.
I shot uncompressed RAW on the A7RIV because there was no alternative. Lossless compressed (unlike lossy compressed, which was the original Sony compressed format) retains all the data, just packed down the way a ZIP file is. I think lossless compression arrived with the A7IV and was then ported back to the original A1. It is in the A7RV, and it's in the A7CR.
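Back-of-envelope numbers (rough assumptions, not official specs) show where "almost twice the space" comes from:

```python
# Rough arithmetic only -- real .ARW sizes vary with metadata,
# embedded previews, and scene content.
pixels = 61_000_000        # A7CR-class sensor
container_bits = 16        # uncompressed .ARW appears to use a 16-bit slot per pixel

uncompressed_mb = pixels * container_bits / 8 / 1e6
print(f"uncompressed frame: ~{uncompressed_mb:.0f} MB")            # ~122 MB

# Lossless ratios are scene-dependent; roughly half is a common outcome.
print(f"lossless (assumed ~50%): ~{uncompressed_mb * 0.5:.0f} MB")
```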
I know a few people use unusual tools on their RAW files and need uncompressed for that, but all the mainstream tools support lossless compressed now. A few don't support the "scaled" lossless compressed files (e.g. the 15MP RAW-S), but they still support the true full-resolution RAW files.
Space is cheap, so the only benefit to using the lossless compressed files is buffer performance. I have seen enough comparisons to know that all of the compression modes introduce a difference into the final product, regardless of what the label says.
No... This really isn't about "what the label says", you're literally arguing against facts and math here.
The only thing that matters is the results you see with your eyes. If you are going to use math as a defense, you will need to show your work.
No, you're the one making the initial claim against what's known and understood. Have you done your own test? I'd be genuinely curious to see the results.
That's not true. You and Aleph are trying to convince me lossless is better than uncompressed after I stated I have no issue with space, so the burden of proof is on the two of you. Uncompressed is the unalgorithmic form of the file, and I don't want to use anything else except where I need performance. Space is not an issue for me 99% of the time.
Besides, I wonder how they can call the MRAW and SRAW lossless when there is clearly a difference in the final file, so the "lossless" label means little to me.
That's different, but yeah no argument there...
I use uncompressed for true file fidelity and compressed (lossy) for performance.
The point is that lossless compression (when used on the full res L RAW) is the best of both, with no downside (not small, none).
It is an improved compression, but either you want to save space or you want file fidelity. Lossless gives you less of both.
No, it's a different type of compression, not an improved form of compression. If lossless compression didn't give exact bit-for-bit, byte-for-byte fidelity, then software install packages and ZIP compression would literally not work... Think about it.
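You can demonstrate the round-trip property in a few lines. A minimal sketch with Python's zlib (the same DEFLATE family ZIP uses); Sony's in-camera codec is different and proprietary, but "lossless" makes the same guarantee: decompression returns the exact original bytes. The file path is hypothetical, any file works:

```python
import zlib
from pathlib import Path

original = Path("example.ARW").read_bytes()   # hypothetical file

packed = zlib.compress(original, level=9)
restored = zlib.decompress(packed)

# Lossless means exact recovery -- not "visually identical", identical.
assert restored == original
print(f"{len(original)} bytes -> {len(packed)} bytes, byte-for-byte after round trip")
```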
I didn't say it was an improved version of the lossy compression; I said it is an improved compression, meaning it could be version 2.0 of the lossy compression or version 1.0 of a new compression. The difference doesn't matter to me.
I'm very familiar with computer compression, and you and I both know errors can happen in the process of compressing, not to mention the extra computing power it takes to do the compression.
Lossy compression isn't so bad that I would never use it, and I get real buffer gains when using lossy vs lossless compression.
You should be using it all the time, but you do you. Aleph already pointed out the only plausible justification not to use it, and it's got nothing to do with fidelity.
Otherwise I don't have anything against any of the formats, as I am all for more options, but I prefer to use the formats that capture the most unaltered data. Just a personal preference.
There's a reason it's called lossless. I'm not trying to change your mind (honest), but the way you're laying it out borders on misinformation (no offense to you or that random YT'er trying to spot differences at 500%... yeah, there are better ways to do that).
I just need to see the better way to prove it. Like I said, if it is math, we need to see the work.
The entire Internet and software industry would like a word.
Another YouTuber (mathphotographer) does this with the Q3. We just need someone to do the same for Sony, so that I can stop using my eyes.
And what did he conclude?
He concluded that the MRAW version actually improved DR and had lower noise than the LRAW version. (As an aside, I thought Sony was doing the same thing, but they aren't. I see real value in a 36MP RAW that uses pixel binning for its smaller size; it's almost like getting a second sensor type for free.)
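For context on why binning or downsampling can measure as lower noise: averaging n independent pixels cuts random noise by sqrt(n). A toy simulation (the numbers are arbitrary, just to show the statistics):

```python
import numpy as np

rng = np.random.default_rng(0)
# Simulate a flat gray patch with per-pixel noise of sigma = 5 (arbitrary units).
frame = 100 + rng.normal(0, 5, size=(4000, 6000))

# 2x2 binning: average each block of four pixels into one.
binned = frame.reshape(2000, 2, 3000, 2).mean(axis=(1, 3))

print(frame.std())    # ~5.0
print(binned.std())   # ~2.5 -- averaging 4 pixels halves the noise (sqrt(4) = 2)
```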
Again, there's not enough difference between the three formats for me to be concerned, and since I have the space, why not just compress later on my computer (i.e. convert to DNG, which is a smaller and more useful format than .arw)?
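If you go that route, the conversion is scriptable. A sketch assuming Adobe DNG Converter's command-line switches (-c for lossless compressed DNG, -d for the destination folder; check your version's docs) and a hypothetical install path:

```python
import subprocess
from pathlib import Path

# Hypothetical paths -- adjust to your install and folder layout.
CONVERTER = r"C:\Program Files\Adobe\Adobe DNG Converter\Adobe DNG Converter.exe"
SRC = Path(r"D:\Photos\2024\12242024 - Christmas Day")

out = SRC / "dng"
out.mkdir(exist_ok=True)
for arw in SRC.glob("*.ARW"):
    # -c: write lossless compressed DNG; -d: output directory
    subprocess.run([CONVERTER, "-c", "-d", str(out), str(arw)], check=True)
```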
I'd suggest doing your own test if you're going to argue this vehemently. No offense, but the YouTube test you linked is ridiculous. A test like that shouldn't be performed outside where lighting conditions are changing by the second (and they did!); that undermines the credibility and reliability of the whole test. And even then, the YT'er you linked only thinks he saw a difference in 1 out of the 4 tests he performed.
Frankly, the whole thing smacks of bad methodology, because the test where he thinks he did see a difference is the one where you would expect the least difference (with the less resolving of the two lenses he used and with no over/underexposure applied)...
I agree he could have used a more scientific and stable environment, but there is enough there to question what Sony really means by "lossless" when they apply it so carelessly to the other two formats.