MicahSYL
Veteran Member
Thanks for the references to the investigations on Sony lossy compression.

This was based on my understanding of JPEG and MPEG lossy compression: the lossy part is due to the quantization process, which evaluates how much can be removed (the lowest bits of each colour value, i.e. the least significant bits). This is then followed by the compression algorithm itself, which does not discard data bits. Lossy compression therefore needs more computation than lossless compression, which has one less process step (no quantization), so it is not a processor/GPU issue but a transfer/bus-traffic issue. With lossy compression the Sony camera already has a bottleneck clearing the buffer, so what about lossless compressed raw files? You would simply have fewer frames in the buffer; instead of more fps you get a little less and then wait for the buffer to clear.

The reply to that was: I'm not 100% sure about that, not in the case of Sony's lossy compression at least, because if you look at the algorithm it's extremely simple, which makes it very quick to encode and decode, but also more lossy than it should be. Can this be a thing now, or was the processor never the problem? Lossy Sony raws take the same space as lossless DNG files converted from uncompressed ARW (24MP average on the A9), so why would lossless slow anything down unless they require more CPU resources? Okay, have a look here:
https://www.dpreview.com/articles/2...the-cooked-pulling-apart-sony-raw-compression
The first stage does indeed remove some bits, and Sony gets it wrong here compared with Nikon, for example, which already suggests it is using some shortcut to get it done more quickly. #1
That second stage, the compression itself, is actually lossy in the case of Sony:
"the image is divided up into a series of 16 pixel stripes for each color channel. Rather than recording a separate value for each of these pixels, the Sony system records the brightest and darkest value in each stripe, and a series of simple notes about how all the other pixels vary from those extremes".
...
"as soon as you have a big gap between bright and dark, the 7-bit values used to note the differences aren't sufficient to precisely describe the original image information."
So, instead of using a more CPU-intensive algorithm like Huffman coding to compress those bits, Sony is using a much faster method. #2
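To make that description concrete, here is a rough Python sketch of this kind of stripe coding: 16 pixels per stripe, the brightest and darkest values kept exactly, and 7-bit "notes" for everything else. The bit widths, rounding and layout are my own assumptions for illustration, not Sony's actual ARW format.

```python
# Rough sketch of the stripe coding described above: 16-pixel stripes,
# exact max/min, 7-bit "notes" for the rest. Bit widths, rounding and
# layout are assumptions for illustration, not Sony's actual ARW format.

def encode_stripe(pixels):
    """Lossy-encode one 16-pixel stripe of raw values."""
    assert len(pixels) == 16
    hi, lo = max(pixels), min(pixels)
    hi_pos, lo_pos = pixels.index(hi), pixels.index(lo)
    shift = 0
    while (hi - lo) >> shift > 127:      # 7-bit notes can only span 0..127
        shift += 1                       # so coarsen them for wide bright/dark gaps
    notes = [(p - lo) >> shift
             for i, p in enumerate(pixels) if i not in (hi_pos, lo_pos)]
    return hi, lo, hi_pos, lo_pos, shift, notes

def decode_stripe(hi, lo, hi_pos, lo_pos, shift, notes):
    """Rebuild the stripe; precision thrown away by `shift` is unrecoverable."""
    out, it = [], iter(notes)
    for i in range(16):
        if i == hi_pos:
            out.append(hi)
        elif i == lo_pos:
            out.append(lo)
        else:
            out.append(lo + (next(it) << shift))
    return out

# A stripe with a big bright/dark gap: the in-between values lose low bits.
stripe = [100, 105, 98, 1800, 110, 95, 102, 99,
          101, 97, 104, 96, 103, 100, 98, 1900]
print(decode_stripe(*encode_stripe(stripe)))
```

As soon as the bright/dark gap gets big, the notes have to be coarsened, which is exactly the loss of precision the article quote above points at.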
I agree with your general explanations, but as you can see from #1 and #2, Sony employs suboptimal but faster algorithms ... to me, it really sounds like they do this to save CPU time and to be able to reach higher burst rates.

The compression itself is lossless, but the quantization is lossy; that is why lossless compression means 100% quality, i.e. no quantization.
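A tiny sketch of that distinction, with zlib standing in for any lossless compressor and a 2-bit shift standing in for quantization; the values and bit depths are arbitrary:

```python
# Lossless compression round-trips exactly; quantization does not.
# zlib is only a stand-in here, and the bit depths are arbitrary.
import zlib

data = bytes([100, 100, 100, 101, 101, 255])
assert zlib.decompress(zlib.compress(data)) == data  # bit-exact: no data discarded

value = 11693                 # e.g. a 14-bit sensor reading
quantized = value >> 2        # quantization: drop the 2 least significant bits
print(value, quantized << 2)  # 11693 11692 -> the dropped detail is gone for good
```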
There are many compression algorithms, and I am not aware of the more recent ones, as I stopped lecturing on these more than 10 years ago. Their performance is also related to the data content and patterns, so some images with very little variance in colour can be very small after compression, while others can even end up bigger than the original size.
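That dependence on the data is easy to see with any general-purpose lossless compressor; zlib is just a convenient stand-in here:

```python
# Low-variance data shrinks a lot; patternless data can even grow slightly.
import os
import zlib

flat = bytes(1000)            # 1000 identical bytes: very little variance
noisy = os.urandom(1000)      # random bytes: no pattern for the coder to exploit
print(len(zlib.compress(flat)))   # a handful of bytes
print(len(zlib.compress(noisy)))  # roughly 1000, typically a few bytes larger
```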
So I may be wrong, as I am basing this on theory from more than a decade ago. Haha. Cheers.
Edit: to be clear, lossy compression is not an issue per se; the problem is that Sony's lossy compression is worse than the average lossy compression used by other manufacturers. In any case, it would be great to have options for lossless + a higher-quality lossy mode.
The Huffman compression used by others is not CPU intensive. It is also a simple compression algorithm, but it works on individual pixels. For a co-processor (GPU) to handle Huffman compression is no big deal at all; in fact it is a very basic routine in imaging. I cannot fully agree with what the author says about loss of data during compression; the mention of "notes" about the variance is very vague.
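For reference, here is a minimal Python sketch of per-pixel Huffman coding, the kind of lossless entropy coding meant above; it is only an illustration, not any camera's actual implementation:

```python
# Minimal per-pixel Huffman coding: a lossless prefix code built from
# pixel-value frequencies. Illustration only, not any camera's real codec.
import heapq
from collections import Counter

def huffman_codes(pixels):
    """Return a {pixel_value: bit_string} prefix-code table."""
    heap = [[freq, i, {value: ""}]
            for i, (value, freq) in enumerate(Counter(pixels).items())]
    heapq.heapify(heap)
    tie = len(heap)                       # tie-breaker so dicts are never compared
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)   # merge the two least frequent subtrees
        f2, _, c2 = heapq.heappop(heap)
        merged = {v: "0" + code for v, code in c1.items()}
        merged.update({v: "1" + code for v, code in c2.items()})
        heapq.heappush(heap, [f1 + f2, tie, merged])
        tie += 1
    return heap[0][2]

pixels = [100, 100, 100, 101, 101, 1800]          # low-variance sample data
codes = huffman_codes(pixels)
bitstream = "".join(codes[p] for p in pixels)     # fully reversible encoding
print(codes, len(bitstream), "bits vs", 11 * len(pixels), "bits at 11 bits/pixel")
```

Every pixel value gets its own reversible code, so nothing is discarded; the extra work is a frequency pass to build the table and a lookup per pixel.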
But the first phase is the quantization process, which will lose data - and the original two investigators have clearly noted that it is the first phase that removes some data bits.
Huffman or Sony's own compression should not lose data, and in the GPU these are hard-coded algorithms on the chip that perform the operations extremely fast. They are not firmware but are etched onto the silicon - that is why it is a graphics processor chip. The main CPU does not run the compression algorithm or the quantization; it is all done within the co-processor. This is the usual way to split the processing units, because they perform very different functions: the CPU handles general functions and is controlled by firmware, which can be updated, while the GPU is fixed and dedicated to processing graphics functions only.
So I don't really think it is a processing limitation. Anyway, the internal structure of the Sony processor is not revealed to the public, and we are all just guessing based on common knowledge.
The long buffer-clearing time is a good indicator that the problem is with data transfer. It could be the processor, but I really doubt the CPU would be involved with graphics-processing functions. Cheers.