Why the huge range of raw file sizes?

Started May 15, 2021 | Discussions
Henry Richardson Forum Pro • Posts: 21,950
Why the huge range of raw file sizes?

I mistakenly posted this in another forum a few days ago, but then realized it probably should have gone here. In that forum no one has any interest in it.

I wonder why there is such a wide disparity in raw file sizes for different cameras with the same number of megapixels (or very close)? Here is one example comparing 12-bit Olympus and 12-bit Panasonic, but further down I compare Sony, Fujifilm, Leica, Canon, and Nikon too.

Panasonic raw files are bigger than Olympus raw files. The GX7II (GX85), E-M10II, E-M10, and E-M5 are all 16mp, but the GX7II raw files are bigger:

E-M5/E-M10II/E-M10: 4608x3456 pixels, 16.1mp, embedded 3200x2400 jpeg, ~14-15mb file size generally

GX7II: 4592x3448 pixels, 15.8mp, embedded 1920x1440 jpeg, ~18-19mb file size generally

Despite having fewer pixels, a much smaller embedded jpeg, and lossy raw compression, the GX7II produces much bigger raw files than the E-M5/E-M10II/E-M10. Actually, the 16mp GX7II raw files are almost exactly the same size as the 20mp PEN-F raw files:

PEN-F: 5184x3888 pixels, 20.3mp, embedded 3200x2400 jpeg, ~18-19mb file size generally

Olympus uses lossless raw compression. Panasonic uses lossy raw compression. Olympus embeds a larger jpeg. Panasonic embeds a smaller jpeg. Yet, Olympus raw files are smaller. Weird and perplexing. I noticed the same thing with my 16mp G3 a few years ago. Here are just a few things I found concerning Olympus lossless raws and Panasonic lossy raws:

http://www.olympusamerica.com/crm/oneoffpages/crm_raw.asp

http://www.dpreview.com/forums/post/40154581

http://www.mu-43.com/threads/70068/

I noticed also that the 20mp G9 raw is much larger than the 20mp E-M1III raw:

G9: 5184 x 3888 raw + embedded 1920 x 1440 jpeg (23.1mb)

E-M1III: 5184 x 3888 raw + embedded 3200 x 2400 jpeg (16.9mb)

I wonder what Panasonic is doing, or not doing, that causes their raw files with smaller embedded jpegs to always be so much larger than Olympus raw files with bigger embedded jpegs? Very weird.

It turns out that 9 years ago, when I first noticed this with my E-M5 and G3, kenw provided the answer, and I had forgotten about it. Olympus uses lossless encoding compression and Panasonic uses lossy encoding compression:

https://www.dpreview.com/forums/post/42359338

I just looked at the dcraw source code and it appears ORF files do use Huffman coding for compression (this is a lossless compression method). More accurately they use difference encoding followed by Huffman encoding (the difference encoding transforms the data losslessly such that the Huffman encoding will be more effective at compressing).

This is definitely more effective than what Panasonic does. Panasonic does difference encoding to reduce the number of bits required for storage of most pixels and applies a lossy bit shift whenever the default number of bits is insufficient (again, the Panasonic "lossy" compression loses so little as to be inconsequential; this was explored in detail in a thread in the past year).

In general the difference in file sizes should be most extreme for a nearly black frame at base ISO (Olympus will be significantly smaller than Panasonic). For a high ISO image or a bright but unclipped image the difference will be smaller.

Normally one would expect that lossy compression would be smaller than a superior lossless compression, but Panasonic really screws it up and their inferior lossy compression is actually much larger than the Olympus lossless compression.
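
To make the two schemes described above a bit more concrete, here is a rough C++ sketch of the general idea: difference coding feeding either a variable-length code or a fixed bit budget with a lossy shift. This is purely illustrative (synthetic data, made-up bit layout), not the actual Olympus or Panasonic bitstream formats.

#include <cstdint>
#include <cstdio>
#include <cstdlib>
#include <vector>

// Difference (predictive) coding: each sample is stored as the delta from its
// left neighbour. On real raw data the deltas cluster near zero, which is what
// makes a variable-length code (Huffman, Rice, ...) effective afterwards.
static std::vector<int> differences(const std::vector<uint16_t>& row) {
    std::vector<int> d(row.size());
    int prev = 0;
    for (size_t i = 0; i < row.size(); ++i) { d[i] = int(row[i]) - prev; prev = row[i]; }
    return d;
}

static unsigned bitsFor(unsigned mag) {            // bits needed to represent |delta|
    unsigned n = 0;
    while (mag) { ++n; mag >>= 1; }
    return n;
}

int main() {
    // A fake, fairly flat 12-bit raw row (synthetic data, purely for illustration).
    std::vector<uint16_t> row(4608);
    for (size_t i = 0; i < row.size(); ++i) row[i] = uint16_t(512 + std::rand() % 32);

    size_t variableBits = 0, fixedBits = 0;
    const unsigned budget = 8;                     // fixed per-delta budget for the lossy flavour
    for (int v : differences(row)) {
        unsigned mag = unsigned(std::abs(v));
        // Lossless flavour: a small length prefix plus just enough bits for the
        // delta (a stand-in for a Huffman/Rice code; small deltas are cheap).
        variableBits += 4 + bitsFor(mag) + 1;
        // Lossy flavour: a fixed budget per delta; deltas that do not fit are
        // right-shifted (low bits thrown away) until they do.
        unsigned shift = 0;
        while ((mag >> shift) >= (1u << (budget - 1))) ++shift;
        (void)shift;                               // a real format would also have to signal this
        fixedBits += budget;
    }
    std::printf("variable-length: %zu bits, fixed-budget lossy: %zu bits, unpacked raw: %zu bits\n",
                variableBits, fixedBits, row.size() * 12);
    return 0;
}

On flat synthetic data like this the variable-length flavour wins easily, because nearly every delta is tiny while the fixed-budget flavour pays 8 bits per sample regardless; on noisy high ISO data the gap shrinks, which matches the point above about where the size difference should be most extreme.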

These are 20mp except the Fujifilm is 16mp. Notice the raw file sizes:

These are 20mp except the Nikon is 21mp.

These are 16mp except the Canon is 18mp.

I understand that some may use no compression, some may use lossless compression, and some may even use lossy compression. Also, some are 12-bits/pixel and some are 14-bits/pixel. And the embedded JPEG size can vary. But from looking at all that, I don't think it explains the wide range.


Henry Richardson
http://www.bakubo.com

ggbutcher Senior Member • Posts: 1,590
Re: Why the huge range of raw file sizes?

Henry Richardson wrote:

I mistakenly posted this in another forum a few days ago, but then realized it probably should have gone here. In that forum no one has any interest in it.

I wonder why there is such a wide disparity in raw file sizes for different cameras with the same number of megapixels (or very close)? Here is one example comparing 12-bit Olympus and 12-bit Panasonic, but further down I compare Sony, Fujifilm, Leica, Canon, and Nikon too.

Panasonic raw files are bigger than Olympus raw files. The GX7II (GX85), E-M10II, E-M10, and E-M5 are all 16mp, but the GX7II raw files are bigger:

E-M5/E-M10II/E-M10: 4608x3456 pixels, 16.1mp, embedded 3200x2400 jpeg, ~14-15mb file size generally

GX7II: 4592x3448 pixels, 15.8mp, embedded 1920x1440 jpeg, ~18-19mb file size generally

The GX7II with fewer pixels, a much smaller embedded jpeg, and lossy raw compression results in much bigger raw files than the E-M5/E-M10II/E-M10. Actually, the 16mp GX7II raw files are almost exactly the same size as the 20mp PEN-F raw files:

PEN-F: 5184x3888 pixels, 20.3mp, embedded 3200x2400 jpeg, ~18-19mb file size generally

Olympus uses lossless raw compression. Panasonic uses lossy raw compression. Olympus embeds a larger jpeg. Panasonic embeds a smaller jpeg. Yet, Olympus raw files are smaller. Weird and perplexing. I noticed the same thing with my 16mp G3 a few years ago. Here are just a few things I found concerning Olympus lossless raws and Panasonic lossy raws:

http://www.olympusamerica.com/crm/oneoffpages/crm_raw.asp

http://www.dpreview.com/forums/post/40154581

http://www.mu-43.com/threads/70068/

I noticed also that the 20mp G9 raw is much larger than the 20mp E-M1III raw:

G9: 5184 x 3888 raw + embedded 1920 x 1440 jpeg (23.1mb)

E-M1III: 5184 x 3888 raw + embedded 3200 x 2400 jpeg (16.9mb)

I wonder what is Panasonic doing or not doing that causes their raw files with smaller embedded jpegs to always be so much larger than Olympus raw files with bigger embedded jpegs? Very weird.

It turns out that 9 years ago when I first noticed this with my E-M5 and G3 kenw provided the answer and I had forgotten about that. Olympus uses lossless encoding compression and Panasonic uses lossy encoding compression:

https://www.dpreview.com/forums/post/42359338

I just looked at the dcraw source code and it appears ORF files do use Huffman coding for compression (this is a lossless compression method). More accurately they use difference encoding followed by Huffman encoding (the difference encoding transforms the data losslessly such that the Huffman encoding will be more effective at compressing).

This is definitely more effective than what Panasonic does. Panasonic does difference encoding to reduce the number of bits required for storage of most pixels and applies a lossy bit shift whenever the default number of bits is insufficient (again the Panasonic "lossy" compression is so low loss as to be inconsequential, this was explored in detail in a thread in the past year).

In general the difference in file sizes should be most extreme for a base ISO nearly black frame (Olympus will be significantly smaller than Panasonic). For a high ISO image or a bright but unclipped image the difference will be smaller.

Normally one would expect that lossy compression would be smaller than a superior lossless compression, but Panasonic really screws it up and their inferior lossy compression is actually much larger than the Olympus lossless compression.

These are 20mp except the Fujifilm is 16mp. Notice the raw file sizes:

These are 20mp except the Nikon is 21mp.

These are 16mp except the Canon is 18mp.

I understand that some may use no compression, some may use lossless compression, and some may even use lossy compression. Also, some are 12-bits/pixel and some are 14-bits/pixel. And the embedded JPEG size can vary. But, from looking at all that it doesn't explain the wide range, I think.

There's also metadata.  Particularly Makernotes.

Jack Hogan Veteran Member • Posts: 8,183
Re: Why the huge range of raw file sizes?

I think that, with a little help from kenw and ggbutcher, you have pretty well covered it.

fvdbergh2501 Contributing Member • Posts: 618
Re: Why the huge range of raw file sizes?

One of the things you have to keep in mind is that "lossy compression" is actually a very vague term that can cover anything from a JPEG2000-like wavelet transform method (or HEIF, for that matter) to just storing the upper 8 bits of your 12-bit data and calling it a day. You already have part of your answer in your original post, namely that the Panasonic compression sounds rather simplistic for a lossy method.
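
In the crudest case that kind of "lossy compression" really is just a bit shift. A toy C++ illustration (not any particular camera's actual scheme):

#include <cstdint>
#include <cstdio>

int main() {
    uint16_t sample12 = 2748;                     // a 12-bit raw sample (0..4095)
    uint8_t  stored   = uint8_t(sample12 >> 4);   // keep only the upper 8 bits
    uint16_t decoded  = uint16_t(stored) << 4;    // reconstruct; the low 4 bits are gone
    std::printf("%d -> stored %d -> decoded %d (error %d)\n",
                int(sample12), int(stored), int(decoded), int(sample12) - int(decoded));
    return 0;
}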

An important factor you have to consider is speed. I would say you have to look at both compression and decompression speed. Some methods are almost symmetric: I once implemented a method that uses differencing followed by Rice coding (a variable-length coding that is similar to Huffman coding) that produced compression and decompression times that were very close (say within about 10%). For our intended application (compression of satellite image time series 'data cubes') this was a desirable property because we would often pass an entire compressed data cube through some additional processing to produce another compressed data cube (i.e., frequent compression and decompression), and we had an 80 TB archive of ~300 cubes to consider.
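
For anyone curious what "differencing followed by Rice coding" looks like, here is a bare-bones C++ sketch (toy bit I/O, fixed Rice parameter, no blocking; nothing like our production codec, just the core idea):

#include <cstdint>
#include <cstdio>
#include <vector>

// Minimal bit writer/reader so the sketch is self-contained.
struct BitWriter {
    std::vector<uint8_t> bytes; int nbits = 0;
    void put(int bit) {
        if (nbits % 8 == 0) bytes.push_back(0);
        if (bit) bytes.back() |= uint8_t(1u << (nbits % 8));
        ++nbits;
    }
};
struct BitReader {
    const std::vector<uint8_t>& bytes; int pos = 0;
    explicit BitReader(const std::vector<uint8_t>& b) : bytes(b) {}
    int get() { int bit = (bytes[pos / 8] >> (pos % 8)) & 1; ++pos; return bit; }
};

// Zigzag: map signed deltas to non-negative values (0,-1,1,-2,... -> 0,1,2,3,...).
static uint32_t zigzag(int v)        { return (uint32_t(v) << 1) ^ uint32_t(v >> 31); }
static int      unzigzag(uint32_t u) { return int(u >> 1) ^ -int(u & 1); }

// Rice code with parameter k: quotient in unary (ones ended by a zero),
// then the remainder in k plain bits.
static void riceEncode(BitWriter& bw, uint32_t v, unsigned k) {
    uint32_t q = v >> k;
    for (uint32_t i = 0; i < q; ++i) bw.put(1);
    bw.put(0);
    for (unsigned i = 0; i < k; ++i) bw.put((v >> i) & 1);
}
static uint32_t riceDecode(BitReader& br, unsigned k) {
    uint32_t q = 0;
    while (br.get()) ++q;
    uint32_t r = 0;
    for (unsigned i = 0; i < k; ++i) r |= uint32_t(br.get()) << i;
    return (q << k) | r;
}

int main() {
    // Hypothetical 12-bit samples; differencing turns them into small deltas.
    std::vector<uint16_t> samples = {500, 503, 501, 498, 505, 530, 529, 531};
    const unsigned k = 3;

    BitWriter bw;
    int prev = 0;
    for (uint16_t s : samples) { riceEncode(bw, zigzag(int(s) - prev), k); prev = s; }

    BitReader br(bw.bytes);
    prev = 0;
    for (size_t i = 0; i < samples.size(); ++i) {
        prev += unzigzag(riceDecode(br, k));
        std::printf("%d%s", prev, i + 1 < samples.size() ? " " : "\n");
    }
    std::printf("encoded %d bits for %zu samples\n", bw.nbits, samples.size());
    return 0;
}

Encode and decode are nearly mirror images of each other, which is where the near-symmetric timings come from.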

One of the competing algorithms was LZ4, an LZ77-family method comparable to zip/gzip/deflate. LZ4 is specifically tuned for performance over compression ratio, and it is significantly faster (multiple times) than zip. Even so, the decompression time of LZ4 (on our satellite data cubes) was less than half its compression time. This would be an advantage if you wanted to decompress the data frequently, but compress it infrequently. And it turns out that the difference in compression ratio between zip and LZ4 was quite small, especially if you take into account the huge differences in compression/decompression times.
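
If you want to get a feel for that trade-off yourself, a quick and dirty C++ comparison might look something like this (assuming the lz4 and zlib development packages are installed and you link with -llz4 -lz; the data here is synthetic, so neither the ratios nor the speeds will look anything like our satellite cubes):

#include <lz4.h>
#include <zlib.h>
#include <chrono>
#include <cstdio>
#include <cstdlib>
#include <vector>

// Time one compression call and report ratio + throughput.
template <typename F>
static void bench(const char* name, const std::vector<char>& in, F compress) {
    auto t0 = std::chrono::steady_clock::now();
    size_t out = compress(in);
    auto t1 = std::chrono::steady_clock::now();
    double s = std::chrono::duration<double>(t1 - t0).count();
    std::printf("%-5s ratio %.2f, %.0f MB/s\n", name,
                double(in.size()) / out, in.size() / (1e6 * s));
}

int main() {
    // Synthetic "image-like" data: a slow ramp plus a little noise.
    std::vector<char> data(64 * 1024 * 1024);
    for (size_t i = 0; i < data.size(); ++i)
        data[i] = char((i / 997) + (std::rand() % 7));

    bench("lz4", data, [](const std::vector<char>& in) -> size_t {
        std::vector<char> out(LZ4_compressBound(int(in.size())));
        return size_t(LZ4_compress_default(in.data(), out.data(),
                                           int(in.size()), int(out.size())));
    });
    bench("zlib", data, [](const std::vector<char>& in) -> size_t {
        uLongf outLen = compressBound(uLong(in.size()));
        std::vector<Bytef> out(outLen);
        compress2(out.data(), &outLen,
                  reinterpret_cast<const Bytef*>(in.data()), uLong(in.size()), 6);
        return size_t(outLen);
    });
    return 0;
}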

In another project we were ingesting large quantities of Sentinel 2 satellite imagery (around 50 TB per run). One of the problems here was that the Sentinel 2 images are distributed in JPEG2000 files, which may not sound like a huge problem, but decompressing all those files consumed a huge amount of CPU time. (Yes, you can get commercial JP2k decoders that are more efficient than the open source decoders.)

So what are the expected use-cases for a camera raw file codec? One might rank them as "compression ratio" trumps "compression time" which in turn trumps "decompression time", but this depends on the hardware available on the camera. You might have to decompress the raw file if you allow some in-camera raw development features. My guess would be that the Panasonic algorithm was determined by the hardware at the time, a choice which seems regrettable in retrospect.

Anyhow, as the LibRaw guys will confirm, anything that touches Huffman coding (actually most variable-length codes) is generally slow to decode. The intuitive explanation of Huffman decoding is that you read one bit at a time, and follow along the Huffman tree to find the decompressed symbol. You have to jump through a lot of hoops (look-up tables for partial prefixes, for example) to avoid processing one bit at a time, so it is not easy to achieve really fast Huffman decompression times.
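
To make that intuition concrete, here is a toy C++ sketch of the bit-by-bit tree-walking decoder (hand-built three-symbol code, nowhere near production code). Note how every output symbol depends on a serial chain of single-bit steps, which is exactly what makes it hard to speed up:

#include <cstdint>
#include <cstdio>
#include <vector>

// A Huffman tree node: either an internal node with two children,
// or a leaf carrying a decoded symbol.
struct Node {
    int symbol = -1;      // >= 0 for a leaf
    int child[2] = {-1, -1};
};

// The textbook decoder: consume one bit at a time, walking the tree from the
// root until a leaf is hit, then emit the symbol and start over at the root.
static std::vector<int> decodeBitByBit(const std::vector<Node>& tree,
                                       const std::vector<uint8_t>& bits) {
    std::vector<int> out;
    int node = 0;                         // root
    for (uint8_t b : bits) {
        node = tree[node].child[b & 1];
        if (tree[node].symbol >= 0) {     // reached a leaf
            out.push_back(tree[node].symbol);
            node = 0;
        }
    }
    return out;
}

int main() {
    // A tiny hand-built code (hypothetical): 'a'->0, 'b'->10, 'c'->11.
    std::vector<Node> tree(5);
    tree[0].child[0] = 1; tree[0].child[1] = 2;   // root
    tree[1].symbol = 'a';
    tree[2].child[0] = 3; tree[2].child[1] = 4;
    tree[3].symbol = 'b';
    tree[4].symbol = 'c';

    std::vector<uint8_t> bits = {0, 1, 0, 1, 1, 0};   // decodes to "abca"
    for (int s : decodeBitByBit(tree, bits)) std::printf("%c", char(s));
    std::printf("\n");
    return 0;
}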

I am not up to date with the Asymmetric Numeral Systems (ANS) entropy coding schemes, such as used in zstd, so I don't know if that offers a significant advantage in decompression speed, but that method was published/discovered long after most of the camera raw formats were finalized. Other than CR3, most of the formats appear to be quite old.

-F

OP Henry Richardson Forum Pro • Posts: 21,950
Re: Why the huge range of raw file sizes?

ggbutcher wrote:

There's also metadata. Particularly Makernotes.

Yes, but it is hard to believe that two 16mp cameras would vary by that much because of makernotes: 14.0mb vs 32.2mb, for example.


Henry Richardson
http://www.bakubo.com

ggbutcher Senior Member • Posts: 1,590
Re: Why the huge range of raw file sizes?

Henry Richardson wrote:

ggbutcher wrote:

There's also metadata. Particularly Makernotes.

Yes, but it is hard to believe that two 16mp cameras would vary by that much because of makernotes: 14.0mb vs 32.2mb, for example.

I took the exifprint.cpp sample program from exiv2 and added code to sum the sizes of the tags and print the total. Running this against a Panasonic raw I had handy produces a total metadata size of ~1.6Mb (sorry, on the wrong computer right now). Now, that doesn't account for the size difference you assert above, but I don't know where you got the 32.2Mb, as it's not one of your examples in the original post.
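
Roughly, the modification amounts to something like this (a from-memory sketch against the exiv2 API, not the exact code I ran; it only sums the Exif value bytes, not the surrounding TIFF/IFD structure):

#include <exiv2/exiv2.hpp>
#include <iostream>

// Open a raw file, read its metadata, and sum the size in bytes of every
// Exif value (Makernote tags included, since exiv2 decodes them into tags).
int main(int argc, char* argv[]) {
    if (argc != 2) { std::cerr << "usage: metasize <rawfile>\n"; return 1; }

    auto image = Exiv2::ImageFactory::open(argv[1]);
    image->readMetadata();

    long total = 0;
    for (const auto& datum : image->exifData()) {
        total += datum.size();            // size of this tag's value in bytes
    }
    std::cout << "total Exif value bytes: " << total << "\n";
    return 0;
}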

The size of a raw file is approximately the sum of its metadata (which is whatever the camera manufacturer feels compelled to include) and all the embedded images in their particular encodings.  Between exiftool and exiv2 you can explore all that; me, I'm not that curious... 

OP Henry Richardson Forum Pro • Posts: 21,950
Re: Why the huge range of raw file sizes?

Jack Hogan wrote:

I think that, with a little help from kenw and ggbutcher, you have pretty well covered it,

I am sorry, but I am not sure what you are saying. It is hard to believe that two 16mp cameras would vary by that much because of makernotes or anything else that anyone has suggested: 14.0mb vs 32.2mb, for example.


Henry Richardson
http://www.bakubo.com

OP Henry Richardson Forum Pro • Posts: 21,950
Thank you -- some more thoughts

fvdbergh2501 wrote:

One of the things you have to keep in mind is that "lossy compression" is actually a very vague term that can cover anything from a JPEG2000-like wavelet transform method (or HEIF, for that matter) to just storing the upper 8 bits of your 12-bit data and calling it a day. You already have part of your answer in your original post, namely that the Panasonic compression sounds rather simplistic for a lossy method.

An important factor you have to consider is speed. I would say you have to look at both compression and decompression speed. Some methods are almost symmetric: I have once implemented a method that uses differencing followed by Rice coding (a variable-length coding that is similar to Huffman coding) that produced compression and decompression times that were very close (say within about 10%). For our intended application (compression of satellite image time series 'data cubes') this was a desirable property because we would often pass an entire compressed data cube through some additional processing to produce another compressed data cube (i.e., frequent compression and decompression), and we had a 80 TB archive of ~300 cubes to consider.

One of the competing algorithms was LZ4, a LZ77-family method comparable to zip/gzip/deflate. LZ4 is specifically tuned for performance over compression ratio, and it is significantly faster (multiple times) than zip. Even so, the decompression time of LZ4 (on our satellite data cubes) was less than half its compression time. This would be an advantage if you wanted to decompress the data frequently, but compress it infrequently. And it turns out that the difference in compression ratio between zip and LZ4 was quite small, especially if you take into account the huge differences in compression/decompression times.

In another project we were ingesting large quantities of Sentinel 2 satellite imagery (around 50 TB per run). One of the problems here was that the Sentinel 2 images are distributed in JPEG2000 files, which may not sound like a huge problem, but decompressing all those files consumed a huge amount of CPU time. (Yes, you can get commercial JP2k decoders that are more efficient that the open source decoders).

So what are the expected use-cases for a camera raw file codec? One might rank them as "compression ratio" trumps "compression time" which in turn trumps "decompression time", but this depends on the hardware available on the camera. You might have to decompress the raw file if you allow some in-camera raw development features. My guess would be that the Panasonic algorithm was determined by the hardware at the time, a choice which seems regrettable in retrospect.

Anyhow, as the LibRaw guys will confirm, anything that touches Huffman coding (actually most variable-length codes) generally makes it slow do decode. The intuitive explanation of Huffman decoding is that you read one bit at a time, and follow along the Huffman tree to find the decompressed symbol. You have to jump through a lot of hoops (look-up tables for partial prefixes, for example) to avoid processing one bit at a time, so it is not easy to achieve really fast Huffman decompression times.

I am not up to date with the Asymmetric Numeral Systems (ANS) entropy coding schemes, such as used in zstd, so I don't know if that offers a significant advantage in decompression speed, but that method was published/discovered long after most of the camera raw formats were finalized. Other than CR3, most of the formats appear to be quite old.

Thank you for your very interesting post. While the time to compress/decompress of course matters, I think in the case of most cameras the write speed to the card is the bigger issue. As an example, recently in another forum someone checked whether shooting raw only or shooting raw+jpeg made a difference in how many shots could be taken in continuous mode. He found it made no difference: in both cases the camera would keep going until the buffer was filled with the same number of shots and then stall while writing to the card. He tried it with both a very fast card and a slow card, and got the same number of shots for both. Shooting only jpeg, though, the buffer never seemed to fill, so the camera would just keep shooting and shooting. All this suggests to me that the processing time for compression is not the main factor in choosing an algorithm and, in fact, argues for a great, even if relatively slow, algorithm, because the write time for larger files is a much bigger factor. Having smaller files makes the camera faster.


Henry Richardson
http://www.bakubo.com

fvdbergh2501 Contributing Member • Posts: 618
Re: Thank you -- some more thoughts

Henry Richardson wrote:

fvdbergh2501 wrote:

One of the things you have to keep in mind is that "lossy compression" is actually a very vague term that can cover anything from a JPEG2000-like wavelet transform method (or HEIF, for that matter) to just storing the upper 8 bits of your 12-bit data and calling it a day. You already have part of your answer in your original post, namely that the Panasonic compression sounds rather simplistic for a lossy method.

An important factor you have to consider is speed. I would say you have to look at both compression and decompression speed. Some methods are almost symmetric: I have once implemented a method that uses differencing followed by Rice coding (a variable-length coding that is similar to Huffman coding) that produced compression and decompression times that were very close (say within about 10%). For our intended application (compression of satellite image time series 'data cubes') this was a desirable property because we would often pass an entire compressed data cube through some additional processing to produce another compressed data cube (i.e., frequent compression and decompression), and we had a 80 TB archive of ~300 cubes to consider.

One of the competing algorithms was LZ4, a LZ77-family method comparable to zip/gzip/deflate. LZ4 is specifically tuned for performance over compression ratio, and it is significantly faster (multiple times) than zip. Even so, the decompression time of LZ4 (on our satellite data cubes) was less than half its compression time. This would be an advantage if you wanted to decompress the data frequently, but compress it infrequently. And it turns out that the difference in compression ratio between zip and LZ4 was quite small, especially if you take into account the huge differences in compression/decompression times.

In another project we were ingesting large quantities of Sentinel 2 satellite imagery (around 50 TB per run). One of the problems here was that the Sentinel 2 images are distributed in JPEG2000 files, which may not sound like a huge problem, but decompressing all those files consumed a huge amount of CPU time. (Yes, you can get commercial JP2k decoders that are more efficient that the open source decoders).

So what are the expected use-cases for a camera raw file codec? One might rank them as "compression ratio" trumps "compression time" which in turn trumps "decompression time", but this depends on the hardware available on the camera. You might have to decompress the raw file if you allow some in-camera raw development features. My guess would be that the Panasonic algorithm was determined by the hardware at the time, a choice which seems regrettable in retrospect.

Anyhow, as the LibRaw guys will confirm, anything that touches Huffman coding (actually most variable-length codes) generally makes it slow do decode. The intuitive explanation of Huffman decoding is that you read one bit at a time, and follow along the Huffman tree to find the decompressed symbol. You have to jump through a lot of hoops (look-up tables for partial prefixes, for example) to avoid processing one bit at a time, so it is not easy to achieve really fast Huffman decompression times.

I am not up to date with the Asymmetric Numeral Systems (ANS) entropy coding schemes, such as used in zstd, so I don't know if that offers a significant advantage in decompression speed, but that method was published/discovered long after most of the camera raw formats were finalized. Other than CR3, most of the formats appear to be quite old.

Thank you for your very interesting post. While the time to compress/decompress, of course, makes a lot of sense, I think, in the case of most cameras it is the write speed to the card which is the bigger issue. As an example, recently in another forum someone checked to see whether shooting raw only or shooting raw+jpeg made a difference in how many shots could be taken in continuous mode. He found it didn't make any difference because in both cases the camera would keep going until the buffer was filled with the same number of shots and then delay while writing to the card. He tried it with both a very fast card and a slow card. Same number of shots for both. Shooting only jpeg though the buffer never seemed to get filled so the camera would just keep shooting and shooting. All this seems to me that the processing time for compression is not the main factor for choosing a different algorithm and, in fact, points to having a great, even if relatively slow, algorithm because the write time for larger files is a much bigger factor. Having smaller files makes the camera faster.

I agree with your conclusion: As long as "time to compress + time to write" is less than the alternative (e.g., using a fast compression method), then you can keep on using more complex (and slower) compression methods.

The subtlety is that judging the sweet-spot solutions developed by the camera manufacturers 15 to 20 years ago using a recent state-of-the-art camera body (with much greater processing power) is inherently unfair, and inevitably leads to the conclusion that somehow the manufacturers are leaving a lot of performance on the table. My gut feeling is that processing power in the camera ISP has increased more rapidly, relative to the number of pixels per image, so we potentially have spare processing power to devote to more complex compression.

However, the manufacturers cannot change their raw formats without breaking compatibility with all the software out there. For example, I noticed that LibRaw added support for Canon's CR3 format in version 0.20.1, which was released in 2020. The CR3 format seems to have appeared in the M50 in April 2018. That is rather exemplary of the LibRaw team; I am sure it will take many years before most non-commercial raw developers add support for CR3.

Backwards compatibility aside, just how much more compression can you squeeze out of a raw file using a lossless method? This depends a great deal on the image contents, but if I take a quick look at two Nikon D850 and D7000 files I happened to have around, it looks like Nikon achieves a compression ratio of between 1.5 and 1.57 on 14-bit raw files (after excluding the embedded JPEG size, but including metadata). Not really a representative sample of images, but it gives us a ballpark figure.

Generally, you are doing really well when you can achieve a compression ratio of 2.0 (meaning compressed file is half the uncompressed size) with a generic lossless compression method (think zip, applied to "typical" files). Images with a lot of detail tend to compress similarly, if slightly lower. Again, using satellite images as a reference (not representative of typical camera images, but it is what I know best), we were happy when we could hit a ratio of about 1.8 with a fast lossless method. The difference in compression ratio between fast methods (differencing + Rice coding, or LZ4) and slow transform-based methods (JPEG2000 in lossless mode) was not enough to warrant using the slow methods. For example, we might get a ratio of 1.8 for a fast method vs 1.9 for a slow method.

Looking at the Nikon .NEF files, my gut feeling is that we can do a little bit better in terms of compression ratio. I ran my "not-really-representative" raw D850 and D7000 images (Bayer mosaiced data extracted with dcraw -D) through OpenJPEG to produce JPEG2000 files, and got compression ratios of 1.61 and 1.72 (relative to 1.5 and 1.57 in the NEF)**. But is a 10% smaller file size going to make a meaningful difference in "compression time + write time" ? You can usually get a 10% card write speed increase by just buying a faster card.

**I also tried separating the 4 raw Bayer channels, and that improved the compression ratio from 1.61 to 1.63 on the D850 file, so that does not make a huge difference, even if that would be a better test.

Jack Hogan Veteran Member • Posts: 8,183
Re: Why the huge range of raw file sizes?

Henry Richardson wrote:

Jack Hogan wrote:

I think that, with a little help from kenw and ggbutcher, you have pretty well covered it,

I am sorry, but I am not sure what you are saying. It is hard to believe that two 16mp cameras would vary by that much because of makernotes or anything else that anyone has suggested: 14.0mb vs 32.2mb, for example.

Perhaps I should clarify then. Like others here, I am saying that once you account for different compression of the raw data, number of embedded images/jpegs at different sizes and qualities, and different metadata - you pretty well have the answer to the question in your title. It's not that mysterious or complicated, just the sum of those bits.

Jack

Entropy512 Veteran Member • Posts: 6,008
Re: Why the huge range of raw file sizes?

fvdbergh2501 wrote:

but this depends on the hardware available on the camera.

This is most obvious with Sony, where we've seen an evolution of:

Lossy compressed RAW that does a funky block quantization scheme, where for every block of pixels (which are something like 2x16, 2x32, 32x2, or 16x2, I forget exactly which) they store the maximum and minimum value for the block, and then each pixel is represented as a 7-bit value that interpolates between max/min. There's also a fairly basic nonlinear tone curve in there. Gives almost exactly 50% reduction in file sizes and is computationally simple. (A rough sketch of this block idea follows below.)

Lossless uncompressed RAW takes all samples, pads them up to 16 bits, and saves them. Padding up to 16 bits makes reading the file easier - no need to "unpack" a packed format that isn't word-aligned. This effectively inflates file sizes by 1.33x for a camera that normally records 12 bits/sample, at the cost of making the decoder much simpler/faster.

Sony's new lossless compressed RAW does some sort of entropy coding - achieves similar compression ratios to the old lossy algorithm, but you'll notice that it's only present in Sony cameras that have the first notably refreshed BIONZ ISP to be seen in 5-6 years.  (e.g. lossless compressed RAW showed up at the same time as H.265 video and 10-bit video depth.  They're not directly tied together, but it's clear that "major comprehensive hardware refresh" was the key to all of the most notable features.)
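
To illustrate the first-generation scheme above, here is a rough C++ sketch of a min/max block quantizer (block geometry, rounding, and bit packing are my guesses, and the tone curve is left out entirely; this is not Sony's actual bitstream):

#include <algorithm>
#include <cstdint>
#include <cstdio>
#include <vector>

// One 16-pixel block: the block minimum and maximum, plus a 7-bit index per
// pixel that interpolates between them (stored in a full byte here for clarity).
struct Block {
    uint16_t lo, hi;
    uint8_t  idx[16];
};

static Block encodeBlock(const uint16_t* px) {
    Block b;
    b.lo = *std::min_element(px, px + 16);
    b.hi = *std::max_element(px, px + 16);
    uint32_t range = std::max<uint32_t>(1, b.hi - b.lo);
    for (int i = 0; i < 16; ++i)
        b.idx[i] = uint8_t((uint32_t(px[i] - b.lo) * 127 + range / 2) / range);
    return b;
}

static void decodeBlock(const Block& b, uint16_t* px) {
    uint32_t range = b.hi - b.lo;
    for (int i = 0; i < 16; ++i)
        px[i] = uint16_t(b.lo + (uint32_t(b.idx[i]) * range + 63) / 127);
}

int main() {
    uint16_t in[16], out[16];
    for (int i = 0; i < 16; ++i) in[i] = uint16_t(2000 + 13 * i);   // hypothetical pixels

    Block b = encodeBlock(in);
    decodeBlock(b, out);
    for (int i = 0; i < 16; ++i) std::printf("%u->%u ", in[i], out[i]);
    std::printf("\n");   // errors stay small unless the block spans a huge brightness range
    return 0;
}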


Context is key. If I have quoted someone else's post when replying, please do not reply to something I say without reading text that I have quoted, and understanding the reason the quote function exists.

Entropy512 Veteran Member • Posts: 6,008
Re: Thank you -- some more thoughts

Henry Richardson wrote:

Thank you for your very interesting post. While the time to compress/decompress, of course, makes a lot of sense, I think, in the case of most cameras it is the write speed to the card which is the bigger issue. As an example, recently in another forum someone checked to see whether shooting raw only or shooting raw+jpeg made a difference in how many shots could be taken in continuous mode. He found it didn't make any difference because in both cases the camera would keep going until the buffer was filled with the same number of shots and then delay while writing to the card. He tried it with both a very fast card and a slow card. Same number of shots for both. Shooting only jpeg though the buffer never seemed to get filled so the camera would just keep shooting and shooting. All this seems to me that the processing time for compression is not the main factor for choosing a different algorithm and, in fact, points to having a great, even if relatively slow, algorithm because the write time for larger files is a much bigger factor. Having smaller files makes the camera faster.

Not necessarily - Huffman coding is notoriously hard to parallelize/accelerate, and I'm not sure about some of the newer entropy coding mechanisms (some of them are known to be even slower from what I've seen). As a result, you trade an SD card bottleneck for a CPU performance/thermal envelope/power consumption limitation.

For reference, at one point I played with the LJ92 algorithm (the lossless algorithm used in lossless compressed DNG) and I think the Huffman step only did 20-30 MB/s on a single core of an i5-7200U.  Multicore would require separating the image into slices and encoding each slice separately, and has the obvious negative side effect of nuking your power/thermal budget on many platforms.  My memory is a bit vague here though.  There are probably more optimized implementations, but in general - it can sometimes be very difficult to get compression throughput/bandwidth high enough to be worth it without other tradeoffs.

Years ago I was involved in a project that had a very fast lookup table based Huffman encoder that was, at the time, revolutionary for being able to do 320x240 at 30 FPS in software only (this was in the era when a Pentium III was blazing fast and incredibly new, and we wanted to target more "moderate" hardware).  At one point I revisited Prof Berger's old patent and I suspect that if you tried to encode much higher bit depth samples the lookup table size would explode.  I never had time to confirm that guess though.


Context is key. If I have quoted someone else's post when replying, please do not reply to something I say without reading text that I have quoted, and understanding the reason the quote function exists.

OP Henry Richardson Forum Pro • Posts: 21,950
Re: Why the huge range of raw file sizes?

ggbutcher wrote:

Henry Richardson wrote:

ggbutcher wrote:

There's also metadata. Particularly Makernotes.

Yes, but it is hard to believe that two 16mp cameras would vary by that much because of makernotes: 14.0mb vs 32.2mb, for example.

I took the exifprint.cpp sample program from exiv2 and added code to add the sizes of the tags and print the total. Running this against a Panasonic raw I had handy produces a total metadata size of ~ 1.6Mb (sorry, on the wrong computer right now). Now, that doesn't account for the size difference you assert above, but I don't know from where you got the 32.2Mb as it's not one of your examples in the original post.

Look again.  It is there.

The size of a raw file is approximately the sum of its metadata (which is whatever the camera manufacturer feels compelled to include) and all the embedded images in their particular encodings. Between exiftool and exiv2 you can explore all that; me, I'm not that curious...


Henry Richardson
http://www.bakubo.com

ggbutcher Senior Member • Posts: 1,590
Re: Why the huge range of raw file sizes?

Henry Richardson wrote:

ggbutcher wrote:

Henry Richardson wrote:

ggbutcher wrote:

There's also metadata. Particularly Makernotes.

Yes, but it is hard to believe that two 16mp cameras would vary by that much because of makernotes: 14.0mb vs 32.2mb, for example.

I took the exifprint.cpp sample program from exiv2 and added code to add the sizes of the tags and print the total. Running this against a Panasonic raw I had handy produces a total metadata size of ~ 1.6Mb (sorry, on the wrong computer right now). Now, that doesn't account for the size difference you assert above, but I don't know from where you got the 32.2Mb as it's not one of your examples in the original post.

Look again. It is there.

The size of a raw file is approximately the sum of its metadata (which is whatever the camera manufacturer feels compelled to include) and all the embedded images in their particular encodings. Between exiftool and exiv2 you can explore all that; me, I'm not that curious...

I just searched for "32" in the original post, couldn't find reference to a 32.2Mb image.  You reference it in the subsequent post, but you don't identify the camera...

OP Henry Richardson Forum Pro • Posts: 21,950
Re: Why the huge range of raw file sizes?

ggbutcher wrote:

Henry Richardson wrote:

ggbutcher wrote:

Henry Richardson wrote:

ggbutcher wrote:

There's also metadata. Particularly Makernotes.

Yes, but it is hard to believe that two 16mp cameras would vary by that much because of makernotes: 14.0mb vs 32.2mb, for example.

I took the exifprint.cpp sample program from exiv2 and added code to add the sizes of the tags and print the total. Running this against a Panasonic raw I had handy produces a total metadata size of ~ 1.6Mb (sorry, on the wrong computer right now). Now, that doesn't account for the size difference you assert above, but I don't know from where you got the 32.2Mb as it's not one of your examples in the original post.

Look again. It is there.

The size of a raw file is approximately the sum of its metadata (which is whatever the camera manufacturer feels compelled to include) and all the embedded images in their particular encodings. Between exiftool and exiv2 you can explore all that; me, I'm not that curious...

I just searched for "32" in the original post, couldn't find reference to a 32.2Mb image. You reference it in the subsequent post, but you don't identify the camera...

It is right there.  If you can't see it then I don't know what to tell you.


Henry Richardson
http://www.bakubo.com

OP Henry Richardson Forum Pro • Posts: 21,950
16mp: 14.0mb vs. 32.2mb

Henry Richardson wrote:

These are 16mp except the Canon is 18mp.

Olympus E-M5II: 4608x3456 pixels, embedded 3200x2400 jpeg, 14.0mb file size, lossless compression

Fuji X-T10: 4936x3296 pixels, embedded 1920x1280 jpeg, 32.2mb file size, compression?


Henry Richardson
http://www.bakubo.com

OP Henry Richardson Forum Pro • Posts: 21,950
I will get back to you

fvdbergh2501 wrote:

I agree with your conclusion: As long as "time to compress + time to write" is less than the alternative (e.g., using a fast compression method), then you can keep on using more complex (and slower) compression methods.

The subtlety is that judging the sweet-spot solutions developed by the camera manufacturers 15 to 20 years ago using a recent state-of-the-art camera body (with much greater processing power) is inherently unfair, and inevitably leads to the conclusion that somehow the manufacturers are leaving a lot of performance on the table. My gut feeling is that processing power in the camera ISP has increased more rapidly, relative to the number of pixels per image, so we potentially have spare processing power to devote to more complex compression.

However, the manufacturers cannot change their raw formats without breaking compatibility with all the software out there. For example, I noticed that LibRaw added support for Canon's CR3 format in version 0.20.1, which was released in 2020. The CR3 format seems to have appeared in the M50 in April 2018. That is rather exemplary of the LibRaw team; I am sure it will take many years before most non-commercial raw developers add support for CR3.

Backwards compatibility aside, just how much more compression can you squeeze out of a raw file using a lossless method? This depends a great deal on the image contents, but if I take a quick look at two Nikon D850 and D7000 files I happened to have around, it looks like Nikon achieves a compression ratio of between 1.5 and 1.57 on 14-bit raw files (after excluding the embedded JPEG size, but including metadata). Not really a representative sample of images, but it gives us a ballpark figure.

Generally, you are doing really well when you can achieve a compression ratio of 2.0 (meaning compressed file is half the uncompressed size) with a generic lossless compression method (think zip, applied to "typical" files). Images with a lot of detail tend to compress similarly, if slightly lower. Again, using satellite images as a reference (not representative of typical camera images, but it is what I know best), we were happy when we could hit a ratio of about 1.8 with a fast lossless method. The difference in compression ratio between fast methods (differencing + Rice coding, or LZ4) and slow transform-based methods (JPEG2000 in lossless mode) was not enough to warrant using the slow methods. For example, we might get a ratio of 1.8 for a fast method vs 1.9 for a slow method.

Looking at the Nikon .NEF files, my gut feeling is that we can do a little bit better in terms of compression ratio. I ran my "not-really-representative" raw D850 and D7000 images (Bayer mosaiced data extracted with dcraw -D) through OpenJPEG to produce JPEG2000 files, and got compression ratios of 1.61 and 1.72 (relative to 1.5 and 1.57 in the NEF)**. But is a 10% smaller file size going to make a meaningful difference in "compression time + write time" ? You can usually get a 10% card write speed increase by just buying a faster card.

**I also tried separating the 4 raw Bayer channels, and that improved the compression ratio from 1.61 to 1.63 on the D850 file, so that does not make a huge difference, even if that would be a better test.

Thank you again for a very interesting and informative response. I am not ignoring you. I am thinking about all you wrote, chewing on it, and formulating a response. Like everyone else I have other things on my plate each day, so I will get back and reply soon.


Henry Richardson
http://www.bakubo.com

fvdbergh2501 Contributing Member • Posts: 618
Re: Thank you -- some more thoughts

Entropy512 wrote:

Henry Richardson wrote:

Thank you for your very interesting post. While the time to compress/decompress, of course, makes a lot of sense, I think, in the case of most cameras it is the write speed to the card which is the bigger issue. As an example, recently in another forum someone checked to see whether shooting raw only or shooting raw+jpeg made a difference in how many shots could be taken in continuous mode. He found it didn't make any difference because in both cases the camera would keep going until the buffer was filled with the same number of shots and then delay while writing to the card. He tried it with both a very fast card and a slow card. Same number of shots for both. Shooting only jpeg though the buffer never seemed to get filled so the camera would just keep shooting and shooting. All this seems to me that the processing time for compression is not the main factor for choosing a different algorithm and, in fact, points to having a great, even if relatively slow, algorithm because the write time for larger files is a much bigger factor. Having smaller files makes the camera faster.

Not necessarily - Huffman coding is notoriously hard to parallize/accelerate, not sure about some of the newer entropy coding mechanisms (some of them are known to be even slower from what I've seen), and as a result, you trade an SD card bottleneck for a CPU performance/thermal envelope/power consumption limitation.

For reference, at one point I played with the LJ92 algorithm (the lossless algorithm used in lossless compressed DNG) and I think the Huffman step only did 20-30 MB/s on a single core of an i5-7200U. Multicore would require separating the image into slices and encoding each slice separately, and has the obvious negative side effect of nuking your power/thermal budget on many platforms. My memory is a bit vague here though. There are probably more optimized implementations, but in general - it can sometimes be very difficult to get compression throughput/bandwidth high enough to be worth it without other tradeoffs.

Interesting that you mention the multicore aspect. On this satellite image data cube project I mentioned all over this thread, we specifically went all-out in an attempt to both read and write the compressed data faster than the native storage device (a SCSI RAID NAS connected with 2x10GE network links to a dual-socket Haswell Xeon machine). We went with the HDF5 format, which conveniently includes a compression codec plug-in feature. A lot of our candidate methods came from blosc, which allowed us to plug in LZ4 or gzip in a multi-threaded way. We also ended up implementing our own multi-threaded differences+Rice code codec (source available here). The nature of the data made it fairly easy to block/slice for multi-threading, but it was surprisingly hard to beat the native storage device in terms of real-world throughput. Uncompressed data could be read/written at over 1 GB/s (well, in ~2015 that was respectable, before PCIe 4.0 NVMe SSDs broke all records) to the SAN, and most compression codecs could not get close to that even when running across 16 Haswell CPU cores.

Anyhow, now that you mentioned Huffman decoding at 20-30 MB/s, I could not resist trying out our Rice codec on my current desktop machine (AMD Ryzen 7 3700X, 8 cores). I only used a single thread, but I could compress a D850 raw file at a rate of around 270 MB/s, and decompress it at around 301 MB/s. I cannot recall the exact figures, but I think we got less than 200 MB/s on the Haswell cores in single-threaded tests back in 2015, so I am actually quite shocked by how much CPU performance has improved in the last 5 years. I also ran the same image (raw D850 data, but 4 colour planes stored sequentially) through flac, pretending it was raw audio data. On flac's fast mode the compression ratio was within 1% of our codec, and flac compressed at around 84 MB/s. flac's decompression was a bit faster, around 93 MB/s.

Years ago I was involved in a project that had a very fast lookup table based Huffman encoder that was, at the time, revolutionary for being able to do 320x240 at 30 FPS in software only (this was in the era when a Pentium III was blazing fast and incredibly new, and we wanted to target more "moderate" hardware). At one point I revisited Prof Berger's old patent and I suspect that if you tried to encode much higher bit depth samples the lookup table size would explode. I never had time to confirm that guess though.

We ran into the same problem with a LUT-based Rice decoder, but you can work around it. The trick is to build a reasonable size LUT, maybe 12 or 16 bits, but to have some logic after the look-up. Some entries might decode to multiple symbols (if you fitted 2 or more short codes into 12 bits, for example), others would not decode a complete symbol. You could then finish parsing the entries that did not yield complete symbols using your regular method, but starting with the bits you have already partially decoded with the LUT. For Rice codes this works well if you restrict the unary part of the code to fit in the LUT, meaning you still avoid parsing the unary prefix of the code (which corresponds to parsing the Huffman tree), and you just have to read a known number of bits (determined by the unary prefix) to complete the binary suffix of the code.

But modern instruction sets gave us the count-leading-zeros instruction clzl, as well as bit-extraction instructions like bextr, so we ended up ditching the LUT from our implementation. These instructions, together with 64-bit integers, make it possible to decompress Rice codes (representing 8 or 16-bit source data) quite efficiently. You even end up with fairly readable C code, with just a few intrinsics sprinkled throughout. The compression code is another story: that turned out to be an unreadable mass of SSE2 intrinsics, but it does allow you to process up to four 16-bit samples at once, and get the benefit of 64-bit writes to memory. Without all that, compression speed lags noticeably behind decompression speed.
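
To show the idea, a minimal C++ sketch of clz-based Rice decoding (using the GCC/Clang __builtin_clzll; a real decoder also needs buffer refills from a file, per-block parameters, and so on, so this is not our production code):

#include <cstdint>
#include <cstdio>

// Keep the bitstream MSB-first in a 64-bit accumulator.
struct BitBuf {
    const uint8_t* p;     // next byte to fetch
    uint64_t acc;         // bit accumulator, MSB-first
    int fill;             // number of valid bits in acc

    void refill() {
        while (fill <= 56) { acc |= uint64_t(*p++) << (56 - fill); fill += 8; }
    }
};

// Decode one Rice code with parameter k. The unary part is a run of 1 bits
// ended by a 0 here, so counting leading zeros of ~acc finds the run length
// in one step instead of a bit-by-bit loop.
static uint32_t riceDecode(BitBuf& b, unsigned k) {
    b.refill();
    unsigned q = unsigned(__builtin_clzll(~b.acc));   // length of the unary run
    b.acc <<= q + 1;  b.fill -= int(q + 1);           // drop run + terminator bit
    uint32_t r = uint32_t(b.acc >> (64 - k));         // k remainder bits (assumes 0 < k < 32)
    b.acc <<= k;      b.fill -= int(k);
    return (q << k) | r;
}

int main() {
    // Two codes packed MSB-first with k = 3:
    //   value 13: quotient 1 -> "10", remainder 101  => 10101
    //   value  5: quotient 0 -> "0",  remainder 101  => 0101
    // Bitstream: 10101 0101, padded with zeros.
    const uint8_t stream[16] = {0b10101010, 0b10000000};
    BitBuf b{stream, 0, 0};
    std::printf("%u %u\n", riceDecode(b, 3), riceDecode(b, 3));   // expect: 13 5
    return 0;
}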

Hmm. I think that I am just rambling now. I blame it on WFH leading to a severe shortage of technical discussions.

fvdbergh2501 Contributing Member • Posts: 618
Re: I will get back to you

Henry Richardson wrote:

fvdbergh2501 wrote:

I agree with your conclusion: As long as "time to compress + time to write" is less than the alternative (e.g., using a fast compression method), then you can keep on using more complex (and slower) compression methods.

The subtlety is that judging the sweet-spot solutions developed by the camera manufacturers 15 to 20 years ago using a recent state-of-the-art camera body (with much greater processing power) is inherently unfair, and inevitably leads to the conclusion that somehow the manufacturers are leaving a lot of performance on the table. My gut feeling is that processing power in the camera ISP has increased more rapidly, relative to the number of pixels per image, so we potentially have spare processing power to devote to more complex compression.

However, the manufacturers cannot change their raw formats without breaking compatibility with all the software out there. For example, I noticed that LibRaw added support for Canon's CR3 format in version 0.20.1, which was released in 2020. The CR3 format seems to have appeared in the M50 in April 2018. That is rather exemplary of the LibRaw team; I am sure it will take many years before most non-commercial raw developers add support for CR3.

Backwards compatibility aside, just how much more compression can you squeeze out of a raw file using a lossless method? This depends a great deal on the image contents, but if I take a quick look at two Nikon D850 and D7000 files I happened to have around, it looks like Nikon achieves a compression ratio of between 1.5 and 1.57 on 14-bit raw files (after excluding the embedded JPEG size, but including metadata). Not really a representative sample of images, but it gives us a ballpark figure.

Generally, you are doing really well when you can achieve a compression ratio of 2.0 (meaning compressed file is half the uncompressed size) with a generic lossless compression method (think zip, applied to "typical" files). Images with a lot of detail tend to compress similarly, if slightly lower. Again, using satellite images as a reference (not representative of typical camera images, but it is what I know best), we were happy when we could hit a ratio of about 1.8 with a fast lossless method. The difference in compression ratio between fast methods (differencing + Rice coding, or LZ4) and slow transform-based methods (JPEG2000 in lossless mode) was not enough to warrant using the slow methods. For example, we might get a ratio of 1.8 for a fast method vs 1.9 for a slow method.

Looking at the Nikon .NEF files, my gut feeling is that we can do a little bit better in terms of compression ratio. I ran my "not-really-representative" raw D850 and D7000 images (Bayer mosaiced data extracted with dcraw -D) through OpenJPEG to produce JPEG2000 files, and got compression ratios of 1.61 and 1.72 (relative to 1.5 and 1.57 in the NEF)**. But is a 10% smaller file size going to make a meaningful difference in "compression time + write time" ? You can usually get a 10% card write speed increase by just buying a faster card.

**I also tried separating the 4 raw Bayer channels, and that improved the compression ratio from 1.61 to 1.63 on the D850 file, so that does not make a huge difference, even if that would be a better test.

Thank you again for a very interesting and informative response.. I am not ignoring you. I am thinking about all you wrote and chewing on it and also formulating a response. Like everyone else I have other things on my plate each day so I will get back and reply soon.

No worries.

In the meantime, I have downloaded a few of the images in question:

1. Olympus OM-D E-M5 DPR studio scene, ISO 200: P1010002.ORF, size less embedded JPEG = 13764684 bytes

2. Panasonic DMC GX7 DPR studio scene, ISO 200: P1030038.RW2, size less embedded JPEG = 19073024 bytes

3. Olympus OM-D E-M5II DPR studio scene, ISO 200: P1010065.ORF, size less embedded JPEG = 13531354 bytes

I skipped the Fuji X-T10 raw file because the X-trans layout is too much effort for me to separate into colour planes just for fun. But my guess would be that the individual colour planes will compress similarly to those of Bayer CFA sensors. And I used the GX7 because I could not find the GX7II on the DPR studio drop-down list. Hope that does not matter too much.

I ran the three files above through dcraw -4 -D to extract the raw Bayer mosaic data, then I split them into 4 colour planes, and I compressed them with the custom difference + Rice coding method I mentioned earlier. I calculated the compression ratio as (image_width*image_height*12/8) / (compressed_size), meaning I treat the input samples as 12-bit values, as if they were tightly packed before compression. Here are the results

1. E-M5, P1010002.ORF, Rice compressed size = 13813146 bytes, ratio = 1.75

2. GX7, P1030038.RW2, Rice compressed size = 15219334, ratio = 1.57

3. E-M5II, P1010065.ORF, Rice compressed size = 13301770, ratio = 1.82

So the takeaway message is that the Olympus raw format compresses quite well, producing a compressed raw file that is slightly smaller than my Rice-coding method (E-M5) or slightly larger than my method (E-M5II). The Panasonic raw format is noticeably less effective, especially for a lossy format.

Interestingly enough, my Rice-coding method performed worst on the GX7 image in terms of compression ratio. This could be any number of things, from higher noise levels in the GX7 (just speculating, did not test), to residual "damage" caused by the lossy compression method, which may leave artifacts that make it harder to re-compress the data with a different codec. Or maybe my codec parameter choices (block size, etc.) were not optimal for the GX7. Regardless, this is curious.

So why did I use a custom compression codec based on Rice coding to do this test? Well, anyone can use OpenJPEG to try a transform-based method like JPEG2000, but I wanted to try a method that I think is fast enough for embedded camera implementation. To give a somewhat relevant figure, I timed how long it took to decompress the P1010002.ORF with LibRaw's "dcraw_emu -timing P1010002.ORF" command, which reported that "unpacking" the raw file (which I interpret as decompressing, but not demosaicing etc., I could be wrong) took 415 ms.

I compressed and decompressed the P1010002.ORF file using the custom Rice codec, and decompression took 86 ms. Note that the Oly raw format probably uses Huffman codes, so it is not a fair comparison, just a single data point to illustrate that Rice codes are reasonably fast, and produce roughly the same compression ratio as whatever Olympus is using.

I repeated the timings with P1030038.RW2, with LibRaw reporting an unpacking time of 113 ms, compared to 90 ms for the Rice codec. Notice that the Panasonic raw "unpack" time using LibRaw is much lower than for the Olympus raw file (113 ms vs 415 ms), so this supports the claim that Panasonic may have been prioritizing processing power requirements over compression ratio.

My conclusion here is that the Panasonic raw file format is meaningfully worse than the Olympus format, yielding worse compression ratios despite being a lossy method. But as I have said in earlier posts, keep in mind that the Panasonic format might be older, and its design may have been constrained by camera CPU power at the time that the format was developed.
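
For completeness, the "split them into 4 colour planes" step above is nothing more than de-interleaving the repeating 2x2 Bayer pattern; a minimal C++ sketch:

#include <cstdint>
#include <cstdio>
#include <vector>

// De-interleave a Bayer mosaic (2x2 repeating pattern) into four quarter-size
// planes, so each plane holds samples of a single colour channel. Single-colour
// planes usually difference-compress better than the interleaved mosaic.
static std::vector<std::vector<uint16_t>>
splitBayerPlanes(const std::vector<uint16_t>& mosaic, int width, int height) {
    std::vector<std::vector<uint16_t>> planes(4);
    for (auto& p : planes) p.reserve(size_t(width) * height / 4);
    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x)
            planes[(y & 1) * 2 + (x & 1)].push_back(mosaic[size_t(y) * width + x]);
    return planes;
}

int main() {
    const int w = 8, h = 8;
    std::vector<uint16_t> mosaic(w * h);
    for (int i = 0; i < w * h; ++i) mosaic[i] = uint16_t(i);   // dummy data

    auto planes = splitBayerPlanes(mosaic, w, h);
    std::printf("plane sizes: %zu %zu %zu %zu\n",
                planes[0].size(), planes[1].size(), planes[2].size(), planes[3].size());
    return 0;
}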

OP Henry Richardson Forum Pro • Posts: 21,950
Okay, here is my reply

fvdbergh2501 wrote:

I agree with your conclusion: As long as "time to compress + time to write" is less than the alternative (e.g., using a fast compression method), then you can keep on using more complex (and slower) compression methods.

The subtlety is that judging the sweet-spot solutions developed by the camera manufacturers 15 to 20 years ago using a recent state-of-the-art camera body (with much greater processing power) is inherently unfair, and inevitably leads to the conclusion that somehow the manufacturers are leaving a lot of performance on the table. My gut feeling is that processing power in the camera ISP has increased more rapidly, relative to the number of pixels per image, so we potentially have spare processing power to devote to more complex compression.

I shoot with Olympus and Panasonic m4/3 these days, but shot with various Canon and Sony DSLRs before. In the case of Olympus, the 5mp E-1 DSLR and 5mp C-5050 digicam with the ORF raw format came out in 2003.  There may have been earlier models that used ORF also.  I gave this extreme example in another post:

Olympus E-M5II: 4608x3456 pixels, embedded 3200x2400 jpeg, 14.0mb file size, lossless compression

Fuji X-T10: 4936x3296 pixels, embedded 1920x1280 jpeg, 32.2mb file size, compression?

Here are a couple of interesting posts about the inexpensive, bottom of the line E-M10II from 2015 shooting raw+jpeg:

https://www.dpreview.com/forums/post/65118750

https://www.dpreview.com/forums/post/65119421

A lot of cameras can shoot for many dozens or even hundreds of frames with no delay when shooting jpegs. Each jpeg must be totally processed from raw internally and compressed before writing to the card.

However, the manufacturers cannot change their raw formats without breaking compatibility with all the software out there. For example, I noticed that LibRaw added support for Canon's CR3 format in version 0.20.1, which was released in 2020. The CR3 format seems to have appeared in the M50 in April 2018. That is rather exemplary of the LibRaw team; I am sure it will take many years before most non-commercial raw developers add support for CR3.

RawTherapee and ART support CR3 already, I think, and on the darktable forum I saw a post saying that support will be added soon.

Backwards compatibility aside, just how much more compression can you squeeze out of a raw file using a lossless method? This depends a great deal on the image contents, but if I take a quick look at two Nikon D850 and D7000 files I happened to have around, it looks like Nikon achieves a compression ratio of between 1.5 and 1.57 on 14-bit raw files (after excluding the embedded JPEG size, but including metadata). Not really a representative sample of images, but it gives us a ballpark figure.

Generally, you are doing really well when you can achieve a compression ratio of 2.0 (meaning compressed file is half the uncompressed size) with a generic lossless compression method (think zip, applied to "typical" files). Images with a lot of detail tend to compress similarly, if slightly lower. Again, using satellite images as a reference (not representative of typical camera images, but it is what I know best), we were happy when we could hit a ratio of about 1.8 with a fast lossless method. The difference in compression ratio between fast methods (differencing + Rice coding, or LZ4) and slow transform-based methods (JPEG2000 in lossless mode) was not enough to warrant using the slow methods. For example, we might get a ratio of 1.8 for a fast method vs 1.9 for a slow method.

Looking at the Nikon .NEF files, my gut feeling is that we can do a little bit better in terms of compression ratio. I ran my "not-really-representative" raw D850 and D7000 images (Bayer mosaiced data extracted with dcraw -D) through OpenJPEG to produce JPEG2000 files, and got compression ratios of 1.61 and 1.72 (relative to 1.5 and 1.57 in the NEF)**. But is a 10% smaller file size going to make a meaningful difference in "compression time + write time" ? You can usually get a 10% card write speed increase by just buying a faster card.

**I also tried separating the 4 raw Bayer channels, and that improved the compression ratio from 1.61 to 1.63 on the D850 file, so that does not make a huge difference, even if that would be a better test.

See my example above of the 14.0mb E-M5II and 32.2mb Fuji X-T10.
