A1 firmware update - Top requested item

Yes, I missed that.

Looking at this, the second file has in effect double the data density of the first; however, I am still very skeptical that those data points are not interpolated by the algorithm.

I have asked Jim Karslsson what he thinks as well.

I find it a bit surprising that doing more compression saves time. This is generally not true of compression algorithms: the more you squeeze, the more time you spend, hence my doubts.
Why would you find that surprising?

They are two very different algorithms, and I explained earlier why the compressed one is faster: it's a much SIMPLER algorithm, whereas the lossless algorithm is a lot more COMPLEX.

Let's see the time it takes for JPEG (lossy) vs PNG (lossless):

[chart: save times for BMP, JPEG and PNG at various image sizes]

So for the 8K image we get roughly 3 sec vs 150 sec: JPEG is a lot faster than PNG, and yet it compresses a lot more.
Wild guesswork based on no actual knowledge of how the coding is nor any measure of processing time?
No, actually quite the opposite: it's not guesswork, it's a benchmark, and it is based on actual knowledge; I know how the coding works and how the benchmark was made. I could make such a benchmark myself any time you want, btw.

I write video filters (in C++ / assembler), video encoding software, image processing software, etc., in C++, and actually have a pretty good knowledge of codecs.
150 seconds to save an 8K image - that's over 2 minutes.
Some benchmark. How much of that time is spent writing to what appears to be a very, very slow disk?
A very, very slow disk is going to significantly disadvantage the larger PNG file. The PNG compression itself is likely much faster, but it takes much longer to write the file to disk.
Fast memory and a fast SSD could reverse the result.
What was the quality setting of the JPEG file?
A lot more information would be required about the type of processor, storage and image to know whether the result is relevant.
It's not an SSD issue, because the BMP is quick and that file size is massive (3 bytes per pixel for an RGB image => ~96 MB for that 8K image).

It's a benchmark that I found but it looks fine.

However, if you want, I can create my own benchmark with the following codecs, for example:

1) bmp
2) jpg (at various compression levels)
3) png (also at different compression levels)
4) webp (same)
5) tiff

And then see the results.

(I picked those formats because I already have them available in the OpenCV C++ library, so it's easier for me... but I could add more later if needed. In fact, I could add a format that simulates the Sony lossy compression, using the algorithm that was described above - that would be very interesting.)

In any case, to get back on topic: the Sony compression is extremely quick because it's a 1-pass compression. It just reads a short run of pixels, like 16 of them (maybe they modified it a bit), and compresses that; in other words, it's about as efficient as it gets. It delivers an almost instantaneous compression. Basically, it can read the data and write the output at the same time, in streaming mode.

It's also easy to parallelize, but that's not necessary, because the bottleneck is not the compression but the disk writing speed. However! When buffering compressed raws in RAM, which the Sony cameras do, it's no longer limited by the disk writing speed, so in that case it's a good idea to parallelize it, for example by processing multiple lines in parallel. This is why this compression gives a huge boost of speed and allows the camera to shoot at higher frame rates.
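
To make the streaming idea concrete, here is a rough sketch of a one-pass, 16-pixel-block compressor in that spirit. The 16-pixel block size, the minimum-plus-shifted-delta layout and the bit widths are illustrative assumptions only - this is not Sony's actual ARW bit layout.

Code:
#include <algorithm>
#include <cstdint>
#include <cstdio>
#include <vector>

// One block covers 16 input samples and always encodes to 19 bytes of
// payload, regardless of image content, so encoding cost is constant per pixel.
struct Block16 {
    uint16_t base;       // block minimum, kept at full precision
    uint8_t  shift;      // bits dropped from each delta (this is the lossy part)
    uint8_t  delta[16];  // each pixel stored as a quantized offset from base
};

Block16 encodeBlock(const uint16_t* px) {             // px points at 16 samples
    Block16 b{};
    auto [lo, hi] = std::minmax_element(px, px + 16);
    b.base = *lo;
    uint32_t range = *hi - *lo;
    b.shift = 0;
    while ((range >> b.shift) > 0xFF) ++b.shift;      // make the range fit in 8 bits
    for (int i = 0; i < 16; ++i)
        b.delta[i] = static_cast<uint8_t>((px[i] - b.base) >> b.shift);
    return b;
}

void decodeBlock(const Block16& b, uint16_t* out) {   // approximate reconstruction
    for (int i = 0; i < 16; ++i)
        out[i] = static_cast<uint16_t>(b.base + (b.delta[i] << b.shift));
}

// Blocks (and rows) are independent, so rows can be handed to separate
// threads when the output goes to RAM rather than straight to a slow card.
std::vector<Block16> encodeRow(const std::vector<uint16_t>& row) {
    std::vector<Block16> blocks;
    for (size_t i = 0; i + 16 <= row.size(); i += 16)
        blocks.push_back(encodeBlock(&row[i]));
    return blocks;
}

int main() {
    std::vector<uint16_t> row(64);                     // tiny synthetic "sensor row"
    for (size_t i = 0; i < row.size(); ++i)
        row[i] = static_cast<uint16_t>(1000 + 37 * i);
    auto blocks = encodeRow(row);
    uint16_t out[16];
    decodeBlock(blocks[0], out);
    std::printf("in %d -> out %d\n", (int)row[3], (int)out[3]);  // lossy round trip
    return 0;
}

Each block is produced from a single read of its 16 input samples and can be written out immediately, which is what makes the streaming behaviour and the per-row parallelization described above straightforward.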
 
If this benchmark were relevant, and if, as you seem to believe, there is no write constraint to the buffer (RAM), then shouldn't uncompressed RAW (the BMP equivalent) be as fast as JPEG?

Regardless, a 2-minute save time for lossless compressed PNG is nothing remotely comparable to a lossless RAW file. If it were, wouldn't the camera be shooting at 1 frame every 2 minutes, because it would be taking 2 minutes to compress each lossless file?

And yet we seem to be able to process at least 20 fps lossless - so that's three orders of magnitude faster than the benchmark - while A1 lossy is 30 fps, so about 50% faster, assuming we use the A1 as an example. Your benchmark shows the PNG - which I am assuming is supposed to represent the lossless RAW - as being >100 times, or two orders of magnitude, slower, although it's hard to determine what the actual JPEG time is. Again, it appears to bear no resemblance to the real-world differences on today's cameras.

Similarly, by your own logic the Sony compression algorithms and process would work the same way for both lossless and lossy compression, so why the performance difference if not constrained by disk/memory write speed, as you seem to believe - bearing in mind that saving to the buffer (RAM) may well be write constrained.

BTW, I have no interest in the discussion beyond pointing out that the benchmark appears to be entirely irrelevant, because a) it indicates saving times that are orders of magnitude different from actual camera performance, and b) the uncompressed BMP is shown to be nearly as fast as the lossy JPEG, while in the real world uncompressed RAW appears to have the same performance impact as lossless RAW.
 
I think this discussion is utterly OT and irrelevant to this topic.

But just FYI, PNG is a terrible codec; something like QOI is much, much simpler and has 20-50x the encoding speed while achieving nearly the same lossless compression ratio.
 
A very, very slow disk is going to significantly disadvantage the larger PNG file. The PNG compression itself is likely much faster, but it takes much longer to write the file to disk.
No, it's not - PNG is notoriously complicated as an algorithm; disk speed is the least of its worries. If anything, BMP is the one at a disadvantage due to disk speed.
 
No, what is not?

The fact that PNG is a complicated storage format has no relevance - which is my point.
 
The whole point is that your disk speed reference is a red herring - if it were the bottleneck, why isn't BMP affected more? You'll see that BMP is closer to JPEG than to PNG (BMP is much larger, as it's not compressed at all), and if it were bottlenecked by storage speed, it would be far slower than JPEG.

Your assertion that it's a very, very slow disk is incorrect in this case.

Besides, the point was to show algorithm complexity, albeit with examples that aren't directly applicable to the raw formats at hand.
 
Sure, but what complexity is the chart illustrating?

The compression algorithm, or the file format complexity? There is nothing indicating which, if either, is the driver of the slow time. So is this chart even a valid indicator of compression algorithm complexity?

Real-world experience also does not indicate that 150 seconds is to be expected when saving a PNG file, regardless of whether compression or data throughput is the driver behind the long time. Which raises the question of the chart's validity for any purpose at all.

But more to the point, arguing that the compression algorithm, rather than data throughput, is the reason for being unable to shoot at 30 fps makes no sense when you also can't shoot at 30 fps with uncompressed raw, where compression time is zero.

By that logic you should in fact be able to shoot at a higher FPS with uncompressed RAW than with lossy compressed RAW.
 
Frankly, we're all just guessing here. But if I were to assume, here are my assumptions:
  • Uncompressed raw is slow because you're running into an actual throughput problem: the files are simply too big. 80 MB x 30 shots/second = 2.4 GB/s. That's a lot of data to move, especially when CFexpress Type A tops out at 800-900 MB/s (which the camera doesn't seem to actually hit anyway).
  • Lossless compressed is too computationally heavy to do quickly enough.
  • That leaves lossy compressed, which is a very simple and deterministic algorithm whose speed does not depend on the entropy of the data at all (quality may, but that's why it's lossy).
You can tweak the PNG settings to trade compression ratio against speed (see the sketch below). The graph gets the point across, albeit indirectly: compression algorithms greatly impact speed, whether from a computational perspective (which the graph supports) or from a bandwidth problem (which the graph does not illustrate).

BTW, if you're curious where the numbers come from, they're from Blender's implementation, which is notoriously slow: https://blender.stackexchange.com/q...he-fastest-or-at-least-faster-png-is-too-slow
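
To illustrate that knob concretely, here is a minimal sketch of my own (assuming OpenCV 4.x, which the benchmarks in this thread already use): PNG's speed-vs-size trade-off is exposed directly as the zlib compression level.

Code:
#include <opencv2/imgcodecs.hpp>

int main() {
    // PNG's speed-vs-size trade-off is the zlib compression level,
    // 0 (fastest, largest file) to 9 (slowest, smallest file).
    // A flat synthetic frame compresses unrealistically well; use a real
    // photo for meaningful timings.
    cv::Mat img(4320, 7680, CV_8UC3, cv::Scalar(90, 120, 200));  // synthetic 8K-sized frame

    cv::imwrite("fast.png",  img, {cv::IMWRITE_PNG_COMPRESSION, 1});
    cv::imwrite("small.png", img, {cv::IMWRITE_PNG_COMPRESSION, 9});
    return 0;
}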
 
By that logic you should in fact be able to shoot at a higher FPS with uncompressed RAW than with lossy compressed RAW.
That's exactly my point. The issue must be related to the number of photos you can save before the camera stops operating because the buffer is full.

Maximum number of shots at 20 fps:

Compressed RAW: 238 images

Lossless compressed: 96 images

Uncompressed: 82 images

An uncompressed RAW is stored at 16 bits per photosite, so it is around 99 MB, suggesting a buffer of roughly 8 GB.

However, to fit 238 images the file size would need to be about 34 MB, which is 65% compression; in my view that is going to be very difficult to achieve unless the bit depth is dropped at source or the file is very heavily squeezed.

The other point is that 82 images at 30 fps would still be 2.7 seconds of shooting in uncompressed raw - not a short time - and 3.2 seconds in lossless compressed raw, instead of the circa 5 seconds in compressed RAW. Why exclude those options if the issue were simply the buffer filling up?

Or does it have more to do with the fact that those are no longer full-size sensor readouts?

Difficult to say; it's not clear what exactly is going on.
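
As a sanity check on that arithmetic, here is a quick back-of-the-envelope in code; the ~50 MP / 16-bit / 82- and 238-shot figures are taken from the post above, and the exact sensor and buffer sizes are assumptions.

Code:
#include <cstdio>

int main() {
    // Back-of-the-envelope check of the buffer figures quoted above.
    // Assumptions: ~50 MP sensor, 16 bits (2 bytes) stored per photosite,
    // and the 82 / 238 shot counts taken from the post.
    const double uncompressedMB = 50.0 * 2.0;            // ~100 MB per frame
    const double bufferMB       = 82 * uncompressedMB;   // 82 uncompressed shots fit
    const double lossyFrameMB   = bufferMB / 238;        // if 238 lossy shots fit
    const double sizeReduction  = 1.0 - lossyFrameMB / uncompressedMB;

    std::printf("uncompressed frame : %.0f MB\n", uncompressedMB);     // ~100 MB
    std::printf("implied buffer     : %.1f GB\n", bufferMB / 1000.0);  // ~8 GB
    std::printf("lossy frame size   : %.0f MB (%.0f%% smaller)\n",
                lossyFrameMB, sizeReduction * 100.0);                  // ~34 MB, ~65%
    return 0;
}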



 
So I wrote the benchmark code I was talking about, and indeed, PNG is way slower than JPEG:

(The code creates the files and that image with the bars; sorry, it was done quickly so it doesn't look great.)

[chart: per-codec write times from the benchmark, on the fast SSD]

extract = time to load and decode the image

null = time to just save the image data unchanged (and uncompressed). Interestingly, some JPEG settings write faster than that; that's because the raw image data is 142 MB (~49 MP x 3 bytes), so with a small JPEG the time spent on the CPU is compensated by less time spent writing - and that's on a 7 GB/s SSD.

jpeg-xxx => quality from 0 to 100, same for webp

png-x => compression from 0 to 9

Note: bmp is like null, just a bit quicker, because I guess the BMP writer is optimized for streaming (file writing) whereas my null code is not - I'm just writing the whole buffer in one go instead of streaming it. I can make null stream it too for the next benchmark.

And I noticed that some people in this thread believe I'm just making stuff up, so here's some proof - the console output and the generated files - and if even that is not enough, I can show you the source code too.

[screenshot: benchmark console output]

[screenshot: generated benchmark output files]

Note: I could run it on a slow disk too, then we will probably see that JPEG overtakes BMP.
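
The original benchmark source isn't posted here, but a minimal OpenCV timing harness along the lines described would look roughly like this; the file names and parameter choices are placeholders of mine, not the original code.

Code:
#include <chrono>
#include <cstdio>
#include <string>
#include <vector>
#include <opencv2/imgcodecs.hpp>

// Time a single cv::imwrite call (encode + write to disk), in seconds.
static double timeWrite(const cv::Mat& img, const std::string& file,
                        const std::vector<int>& params = {}) {
    auto t0 = std::chrono::steady_clock::now();
    cv::imwrite(file, img, params);
    auto t1 = std::chrono::steady_clock::now();
    return std::chrono::duration<double>(t1 - t0).count();
}

int main() {
    // "input_49mp.png" is a placeholder for the ~49 MP test frame.
    cv::Mat img = cv::imread("input_49mp.png", cv::IMREAD_COLOR);
    if (img.empty()) return 1;

    std::printf("bmp      %.2f s\n", timeWrite(img, "out.bmp"));
    std::printf("jpeg-90  %.2f s\n", timeWrite(img, "out90.jpg", {cv::IMWRITE_JPEG_QUALITY, 90}));
    std::printf("png-1    %.2f s\n", timeWrite(img, "out1.png",  {cv::IMWRITE_PNG_COMPRESSION, 1}));
    std::printf("png-9    %.2f s\n", timeWrite(img, "out9.png",  {cv::IMWRITE_PNG_COMPRESSION, 9}));
    std::printf("webp-90  %.2f s\n", timeWrite(img, "out.webp",  {cv::IMWRITE_WEBP_QUALITY, 90}));
    return 0;
}

Because the timing wraps the whole imwrite call, it measures encode plus write together, which matches the disk-speed discussion earlier in the thread.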
 
When doing the test, were all of the codecs bound to a single thread, or just some?
 
Btw, I just added the test on a slower drive - I checked its speed, and it's around 80 MB/s, not 100.

Results:

[chart: per-codec write times on the ~80 MB/s drive]

CPU-side, my code uses only one thread, but OpenCV applies its own optimizations for those codecs - often GPU acceleration, and I also enabled multithreading for it - though I don't know the details for every codec... JPEG and PNG are optimized for sure.

Anyway, the main point I was making, and was challenged on, is that more compression does NOT mean slower encoding. It's all codec dependent; for example, JPEG compresses a lot more than PNG-9 and is way faster, and on a slow drive it's even way faster than writing the uncompressed file.
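
On the single-thread question above: if you want every codec timed under identical threading conditions, OpenCV lets you pin its internal parallelism. This is a small addition to the earlier sketch, not something the original benchmark necessarily did.

Code:
#include <opencv2/core.hpp>

int main() {
    // Pin OpenCV's internal parallelism to a single worker thread so that
    // every codec in the timing loop runs under identical threading
    // conditions (SIMD inside the codecs is unaffected by this call).
    cv::setNumThreads(1);
    // ... then run the same imwrite timing loop as in the earlier sketch ...
    return 0;
}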
 
Some of you might not want any new capabilities in A1 - they can, of course, skip this entirely.
yes - skipped the Alpha One entirely
 
Anyway, the main point I was making, and was challenged on, is that more compression does NOT mean slower encoding.
You were challenged by someone who very likely doesn't want to know. Your point is common knowledge for anyone versed in compression. He argues exactly like produde/dcisive does, and you can't have a meaningful discussion if your "opponent" is not interested in reaching an actual conclusion.
 
Anyway, the main point I was making, and was challenged on, is that more compression does NOT mean slower encoding. It's all codec dependent; for example, JPEG compresses a lot more than PNG-9 and is way faster, and on a slow drive it's even way faster than writing the uncompressed file.
Well yeah, that's obvious. Lossless is where it gets interesting, because it's generally true that the more it compresses the slower it gets, but there's always some smart cookie out there who bucks that trend.

Not a hard concept to grasp, and I'm not sure how others aren't getting it - it depends on the computational complexity.
 
I still remember the arguments we had over confusing M-RAW with APS-C output...
 
I'd like a firmware update that displays a tutorial about "bits" and a link to this thread on the rear screen when I switch the camera on :)
 
Oh, punishment. No one cares - bits and pieces; it is not as though we will ever get 16-bit on FF.
 
Correction "Relay Playback". Supposedly provides high speed review of hundreds of images - plays them back like a video with the option to stop and select individual frames.
Functionality is not already there, as per the Sony announcement it is coming in the march fw update.
But this functionality already exists on Sony cameras. Even the A7iii can do it (and the A1 has it too).

Shoot a burst, and have it so they're grouped in playback.

Go into playback mode, then hit the centre button in the control wheel to go into your burst group. Then hit the bottom button on the control wheel on the back of the camera. It will play the burst back "like a video". Turn the control wheel to change the speed of the playback (from "Speed 1" to "Speed 9"). Hit the bottom button on the control wheel again to stop on a given frame.

Sony calls it "Cont. Playback". If you use the Disp function to show the details on screen, you'll see it mentioned once you go into the group.

So "Relay Playback" must be something different to this.

Here are some screenshots (the second one is while the "video" is playing):

[screenshot: playback screen showing the burst group]

[screenshot: continuous playback in progress]
ha ha well you learn something new every day.

Maybe they are just giving it a new name and claiming it as a new feature.
If you read the post DPReview made, apparently it allows the camera to treat the two cards as though they're one - as in, when reviewing your images, if a burst crosses over to the next card because the first one's full, it'll still let you continue reviewing rather than having to manually switch the selected card and review images from there.
If that's what it is, I'm surprised so many people are making such a big thing of it. I suspect people think it's Continuous Playback and don't know that already exists.
 
