"150 seconds to save an 8k image - that's over 2 minutes."

It's not an SSD issue, because the BMP is quick, and that file size is massive (3 bytes per pixel for an RGB image => 96 MB for that 8k image).

"Wild guesswork based on no actual knowledge of how the coding is, nor any measure of processing time?"

No, actually quite the opposite: it's not guesswork, it's a benchmark, and it's based on actual knowledge. I know how the coding works and how the benchmark was made. I could make such a benchmark myself any time you want, btw.

"Why would you find that surprising?"

Yes, I missed that.
Looking at this, the second file has in effect double the data density of the first. However, I am still very skeptical that those data points are not interpolated by the algorithm.
I have asked Jim Karslsson what he thinks as well.
I find it a bit surprising that more compression saves time; this is generally not true of any compression algorithm. The more you squeeze, the more time you spend, hence my doubts.
They are 2 very different algorithms, and I explained earlier why the compressed one is faster: it's a much more SIMPLE algorithm, whereas the lossless algorithm is a lot more COMPLEX.
Let's see the time it takes for JPEG (lossy) vs PNG (lossless):
So for the 8k image we get about 3 sec vs 150 sec: JPEG is a lot faster than PNG, and yet it compresses a lot more.
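For anyone who wants to reproduce that kind of timing, here is a minimal sketch in C++ with OpenCV (my own sketch, not the benchmark quoted above; the image size, content and quality values are placeholder assumptions):

```cpp
// Minimal sketch: time JPEG vs PNG encoding of an 8k image with OpenCV.
// The random content is a worst case for PNG; a real photo compresses better.
#include <chrono>
#include <iostream>
#include <string>
#include <vector>
#include <opencv2/core.hpp>
#include <opencv2/imgcodecs.hpp>

int main() {
    cv::Mat img(4320, 7680, CV_8UC3);   // 8k-ish frame, 3 bytes per pixel
    cv::randu(img, 0, 255);             // fill with random pixels

    auto time_save = [&](const std::string& path, const std::vector<int>& params) {
        auto t0 = std::chrono::steady_clock::now();
        cv::imwrite(path, img, params); // encode + write to disk
        auto t1 = std::chrono::steady_clock::now();
        return std::chrono::duration<double>(t1 - t0).count();
    };

    std::cout << "JPEG: " << time_save("out.jpg", {cv::IMWRITE_JPEG_QUALITY, 90})   << " s\n";
    std::cout << "PNG : " << time_save("out.png", {cv::IMWRITE_PNG_COMPRESSION, 9}) << " s\n";
}
```

Swapping cv::imwrite for cv::imencode (encode to a memory buffer) would time the encoding alone, which would also answer the "slow disk" objection below.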
I make video filters (in C++ / assembler), and video encoding software, image processing software etc, in C++, and actually have a pretty good knowledge about codecs.
Some benchmark. How much of that time is writing to what appears to be a very, very slow disk?
A very, very slow disk is going to significantly disadvantage the larger PNG file. The PNG compression itself is likely much faster than those numbers suggest; it just takes much longer to write the bigger file to disk.
Fast memory and fast SSD could reverse the result.
What was the quality setting of the JPEG file?
A lot more information would be required (type of processor, type of storage, type of image) to know whether the result is relevant.
It's a benchmark that I found, but it looks fine.
However, if you want, I can create my own benchmark with, for example, the following codecs:
1) bmp
2) jpg (at various compression levels)
3) png (also at different compression levels)
4) webp (same)
5) tiff
And then see the results.
(I say those formats because I already have them available in the OpenCV C++ library, so it's easier for me... but I could add more later if needed. In fact, I could add a format that simulates the Sony lossy compression, using the algorithm that was described above; that would be very interesting.)
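Sketched out, that benchmark loop would look something like this (the file names, the test image and the parameter values are just examples I picked):

```cpp
// Sketch of the multi-codec benchmark described above, using cv::imwrite.
#include <chrono>
#include <iostream>
#include <string>
#include <vector>
#include <opencv2/core.hpp>
#include <opencv2/imgcodecs.hpp>

struct Case { std::string file; std::vector<int> params; };

int main() {
    cv::Mat img = cv::imread("test_8k.png");  // placeholder test image
    if (img.empty()) { std::cerr << "no test image\n"; return 1; }

    const std::vector<Case> cases = {
        {"out.bmp",      {}},                                // 1) bmp
        {"out_q50.jpg",  {cv::IMWRITE_JPEG_QUALITY, 50}},    // 2) jpg
        {"out_q95.jpg",  {cv::IMWRITE_JPEG_QUALITY, 95}},
        {"out_c1.png",   {cv::IMWRITE_PNG_COMPRESSION, 1}},  // 3) png
        {"out_c9.png",   {cv::IMWRITE_PNG_COMPRESSION, 9}},
        {"out_q50.webp", {cv::IMWRITE_WEBP_QUALITY, 50}},    // 4) webp
        {"out.tiff",     {}},                                // 5) tiff
    };
    for (const auto& c : cases) {
        auto t0 = std::chrono::steady_clock::now();
        cv::imwrite(c.file, img, c.params);
        auto t1 = std::chrono::steady_clock::now();
        std::cout << c.file << ": "
                  << std::chrono::duration<double>(t1 - t0).count() << " s\n";
    }
}
```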
In any case, to get back on topic: the Sony compression is extremely quick because it's a 1-pass compression. It just reads a short stream of pixels, something like 16 of them (maybe they modified it a bit), and compresses that; in other words, it's about as efficient as it gets. It delivers an almost instantaneous compression: basically, it can read the data and write the output at the same time, in streaming mode.
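To illustrate, here is a simplified sketch of that kind of one-pass, 16-pixel block codec. The bit widths (11-bit min/max, 4-bit positions, 7-bit deltas, 128 bits per block) follow public descriptions of the Sony lossy raw format, but the packing below is simplified and the camera's actual implementation certainly differs in details:

```cpp
// One-pass, 16-pixel block codec sketch: per block, store min and max,
// their positions, and quantized deltas for the other 14 pixels.
// 11 + 11 + 4 + 4 + 14*7 = 128 bits per 16 pixels once bit-packed.
#include <algorithm>
#include <array>
#include <cstdint>

struct Block {
    uint16_t minv, maxv;            // 11-bit values in the real format
    uint8_t  imin, imax;            // 4-bit positions of min and max
    std::array<uint8_t, 14> delta;  // 7-bit quantized deltas
};

Block compress_block(const uint16_t* px) {  // px: 16 pixels of one channel
    Block b{};
    const uint16_t* mn = std::min_element(px, px + 16);
    const uint16_t* mx = std::max_element(px, px + 16);
    b.minv = *mn;  b.imin = uint8_t(mn - px);
    b.maxv = *mx;  b.imax = uint8_t(mx - px);
    if (b.imax == b.imin) b.imax = (b.imin + 1) & 15;  // all-equal block

    // Pick the smallest shift so every delta fits into 7 bits. This is the
    // lossy step: a large in-block range costs low-order bits.
    int shift = 0;
    while (((b.maxv - b.minv) >> shift) > 127) ++shift;

    int k = 0;
    for (int i = 0; i < 16; ++i)
        if (i != b.imin && i != b.imax)
            b.delta[k++] = uint8_t((px[i] - b.minv) >> shift);
    return b;
}
```

Note there is no second pass and no inter-block state, which is exactly what makes it streamable: a block can be emitted as soon as its 16 input pixels have been read.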
It's also easy to parallelize, but that's not necessary, because the bottleneck is not the compression but the disk writing speed. However! When saving compressed raws to RAM, which the Sony cameras do, it's not limited by the disk writing speed anymore, so in that case it's a good idea to parallelize it and, for example, process multiple lines in parallel. This is why this compression gives a huge boost in speed and allows the camera to shoot at higher frame rates.
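A sketch of that per-line parallelization, reusing the hypothetical compress_block() from the previous sketch (OpenMP is just my choice for brevity, not necessarily what the camera firmware does; compile with -fopenmp):

```cpp
// Every 16-pixel block is independent, so rows can be compressed in
// parallel once the frame is in RAM. Assumes width is a multiple of 16
// and reuses Block / compress_block from the sketch above.
#include <cstdint>
#include <vector>

void compress_frame(const uint16_t* raw, int width, int height,
                    std::vector<Block>& out) {
    const int bpr = width / 16;                 // blocks per row
    out.resize(size_t(bpr) * height);
    #pragma omp parallel for
    for (int y = 0; y < height; ++y)            // each row is independent work
        for (int x = 0; x < bpr; ++x)
            out[size_t(y) * bpr + x] =
                compress_block(raw + size_t(y) * width + size_t(x) * 16);
}
```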