Bionz XR - finally!!!!!!!!!!!

Can this be a thing now, or was the processor never the problem?
Lossy compression needs more computation than lossless compression, because lossless skips the quantization step, so it is not a processor/GPU issue,
I'm not 100% sure about that, not in the case of Sony's lossy compression at least, because if you look at the algorithm, it's extremely simple, which makes it very quick to encode and decode, but also more lossy than it should be.
but a transfer/bus traffic issue. With lossy compression, the Sony camera already has a bottleneck clearing its buffer; what about lossless compressed raw files? You will simply fit fewer frames in the buffer. So instead of more fps you simply get a little less and then wait for the buffer to clear.
Lossy Sony raws take the same space as lossless DNG files converted from uncompressed ARW - about 24MB on average on the A9. So, why would lossless slow anything down unless it requires more CPU resources?
This is based on my understanding of JPEG and MPEG lossy compression. The lossy part is due to the quantization process, which evaluates how much can be removed (the lowest bits of each color channel), i.e. the least significant bits.
Okay, have a look here:

https://www.dpreview.com/articles/2...the-cooked-pulling-apart-sony-raw-compression

The first stage does indeed remove some bits, and Sony handles it worse than Nikon does, for example, which already suggests it's using some shortcut to get it done more quickly. #1
That is then followed by a compression algo, which does not discard data bits
That second stage is actually lossy in the case of Sony.

"the image is divided up into a series of 16 pixel stripes for each color channel. Rather than recording a separate value for each of these pixels, the Sony system records the brightest and darkest value in each stripe, and a series of simple notes about how all the other pixels vary from those extremes".

...

"as soon as you have a big gap between bright and dark, the 7-bit values used to note the differences aren't sufficient to precisely describe the original image information."

So, instead of using a more CPU-intensive algorithm like Huffman coding to compress those bits, Sony is using a much faster method. #2
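To make that second stage concrete, here's a rough Python sketch of a min/max plus 7-bit-delta scheme in the spirit of the article's description - the real ARW bit layout differs in the details, and the step-size rule here is just my guess:

```python
# Rough sketch of a per-stripe min/max + 7-bit-delta scheme as described in
# the article (illustrative only; the real ARW bit layout differs in detail).
def encode_stripe(pixels16):
    lo, hi = min(pixels16), max(pixels16)
    # Pixels are noted as 7-bit offsets from the darkest value, so a big
    # bright/dark gap forces a coarse step size -> quantization error.
    step = max(1, (hi - lo + 126) // 127)        # ceil(span / 127), my guess
    deltas = [(p - lo) // step for p in pixels16]
    return lo, hi, step, deltas                  # roughly what gets written out

def decode_stripe(lo, hi, step, deltas):
    return [lo + d * step for d in deltas]

stripe = [100, 102, 101, 4000] + [100] * 12      # one bright pixel in a dark stripe
print(decode_stripe(*encode_stripe(stripe)))     # neighbours come back coarsened
```

With a small bright/dark range the round trip is exact; once the range gets large, every value in the stripe can only be recovered to within one "step", which is the kind of artifact the article describes.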
- the compression is lossless, but quantization is lossy. That is why lossless compression = 100% quality = no quantization.

There are many compression algorithms, and I am unaware of the more recent ones, as I stopped lecturing on these more than 10 years ago. Their performance is also related to the data contents and patterns, so some images with very little variance in color can be very small after compression, while others can even end up bigger than the original size.

So I may be wrong, as I am basing this on theory from more than a decade ago. Haha. Cheers.
I agree with your general explanations, but as you can see from #1 and #2, Sony employs suboptimal but faster algorithms ... to me, it really sounds like they do this to save CPU time and to be able to reach higher burst rates.

Edit: to be clear, lossy compression is not an issue per se, the problem is that the lossy compression is worse than the average lossy compression used by other manufacturers. In any case, it would be great to have options for lossless + higher quality lossy.
Thanks for the references to the investigations on Sony lossy compression.

The Huffman compression used by others is not CPU intensive. It is also a simple compression algo, but based on individual pixels. For a co-processor (GPU) to handle Huffman compression is no big deal at all; in fact it is a very basic routine in imaging. I cannot fully agree with what the author says about loss of data during the compression; the mention of notes about the variance is very vague.
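For reference, a minimal Huffman code builder is only a few lines of Python (a toy example to show why the coding step itself is cheap and lossless, not anything resembling the camera's actual implementation):

```python
# Minimal Huffman code builder: frequent symbols get short bit strings.
import heapq
from collections import Counter

def huffman_code(symbols):
    heap = [[freq, i, {sym: ""}] for i, (sym, freq) in enumerate(Counter(symbols).items())]
    heapq.heapify(heap)
    tick = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)          # two least frequent groups
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + b for s, b in c1.items()}
        merged.update({s: "1" + b for s, b in c2.items()})
        heapq.heappush(heap, [f1 + f2, tick, merged])
        tick += 1
    return heap[0][2]

pixels = [12, 12, 12, 13, 13, 14, 200]           # mostly similar values
code = huffman_code(pixels)
bits = "".join(code[p] for p in pixels)          # encoding is a table lookup
print(code, len(bits), "bits instead of", len(pixels) * 8)
```

Decoding just walks the prefix bits back to symbols, and since nothing is discarded the round trip is exact.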

But the first phase is the quantization process, which will lose data - and as the two original investigators clearly noted, it is the first phase that removes some data bits.

Huffman or Sony's own compression should not lose data, and the GPU has hard-coded algorithms in the chip to perform these operations extremely fast. They are not firmware but etched onto silicon - that is why it is a graphics processor chip. The main CPU does not run the compression algorithm or the quantization; it is all within the co-processor. This is the usual way to split the processing units, because they perform very different functions: the CPU handles general functions and is controlled by firmware, which can be updated, while the GPU is fixed, dedicated to processing graphics functions only.

So I don't really think it is a processing limitation. Anyway, the internal structure of the Sony processor is not revealed to the public, and we are all just guessing based on common knowledge.

The long buffer clearing time is a good indicator that there is a problem with data transfer. It could be the processor, but I really doubt the CPU would be involved in graphics-processing functions. Cheers.
 
Hey!

Does anybody know the semiconductor process node (in nm) of the new (and old) processor?
The old one was probably in the 40nm region, based on the fact that it uses a quad-core Cortex-A5 design. The new one, no idea.
Hi, don't be fooled by the A7S3 marketing wording; the XR is being compared with the A7S2 (the very first-generation BIONZ CPU).

Check out the A7R3 features on sony.com - the processor was already updated in the Gen 3 bodies:
As far as critical/measurable metrics go:

From the A7R2 onwards, we never saw a sustained multiframe demosaicer/scaler throughput of more than 500 MPixels/sec. We saw higher rates than this within given frames (R3 hit around 1-1.2 GPixel/sec within a given frame according to some of Jim Kasson's tests), but never for multiple frames in sequence.

The A7R4 was the first camera in years to break this limit, by only 20% (600 MPixels/sec).

The S3 is hitting at least 960 MPixels/sec (4k120) sustained.
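As a quick cross-check of those figures (nominal UHD frame sizes, ignoring readout and blanking overhead):

```python
# Nominal pixel-rate arithmetic behind the figures above (UHD frame sizes,
# ignoring any readout/blanking overhead).
uhd = 3840 * 2160                      # ~8.3 MPixels per 4k frame
print(uhd * 120 / 1e6)                 # ~995 MPixels/sec for 4k120
print(uhd * 60 / 1e6)                  # ~498 MPixels/sec for 4k60
```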

Personally, I'm not particularly interested in the S3, because I want higher resolution. I don't need A7R-class resolution, but 6k downsampled to 4k is definitely sharper than from a 4k Bayer sensor. However, this BIONZ in an A7M4 will be incredible. Sadly, with Sony's slower pace of product releases as things mature, we might not see an A7M4 for a while.
Same as you! Just be a little bit patient, the M4 will be announced very soon. Actually I'm waiting for the M4 as well; maybe I will replace my A9 with the M4, though the A9 still has the fastest RS CMOS.

The S3 was originally scheduled to launch in May.

--
Tristan.W
 
Can this be a thing now, or was the processor never the problem?
Lossy compression needs more computation than lossless compression, because lossless skips the quantization step, so it is not a processor/GPU issue, but a transfer/bus traffic issue. With lossy compression, the Sony camera already has a bottleneck clearing its buffer; what about lossless compressed raw files? You will simply fit fewer frames in the buffer. So instead of more fps you simply get a little less and then wait for the buffer to clear.
Note that certain things are such common operations that they are readily available in fixed-function hardware IP that can achieve much greater performance than flexible-function systems.

For example: JPEG engines and H.264 engines. These are almost always implemented in fixed-function hardware accelerators in cameras, and the subcomponents of them (such as the Huffman encoder in a JPEG engine) can't easily be repurposed for similar (but not exactly the same) algorithms. Also in this particular case, a JPEG datapath is only 8 bits wide.

Whether or not the XR makes lossless RAW possible depends on whether they added a hardware accelerator that can accelerate key parts of the operation. Since they didn't announce lossless RAW, unfortunately it's unlikely that they did so.
 
Edit: to be clear, lossy compression is not an issue per se, the problem is that the lossy compression is worse than the average lossy compression used by other manufacturers. In any case, it would be great to have options for lossless + higher quality lossy.
Yeah. I'd be entirely willing to accept a partial tradeoff: file sizes slightly higher than the current lossy RAW, but instead of a tone curve plus the "stripe quantization", just a tone curve that encodes a few more bits per pixel, similar to what Nikon does. Such an approach would be computationally efficient and save space. For example, using a tone curve to take things down to 12 bits/pixel would be almost indistinguishable from lossless in nearly any scenario (most definitely avoiding the artifacts that stripe quantization has).

In fact, with most Sony cameras, just bit-packing would save some space. Sony's uncompressed RAWs use 16 bits per sample regardless of the underlying sensor bit depth.
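Roughly what those two cheap space-savers could look like, sketched in Python - the square-root curve is just a stand-in, not Sony's or Nikon's actual mapping:

```python
# Sketch of the two cheap space-savers mentioned above (illustrative mapping):
# a tone curve from 14-bit linear values down to 12-bit codes, plus plain
# bit-packing instead of one 16-bit word per sample.
import numpy as np

def tone_curve_14_to_12(x):
    """Square-root-style curve: fine steps in shadows, coarser in highlights."""
    return np.round(np.sqrt(x.astype(np.float64) / 16383.0) * 4095).astype(np.uint16)

def bit_pack(codes, bits=12):
    """Pack n-bit codes tightly instead of storing one 16-bit word each."""
    stream = "".join(format(int(c), f"0{bits}b") for c in codes)
    stream += "0" * (-len(stream) % 8)
    return bytes(int(stream[i:i + 8], 2) for i in range(0, len(stream), 8))

samples = np.array([0, 5, 700, 8000, 16383], dtype=np.uint16)   # 14-bit data
packed = bit_pack(tone_curve_14_to_12(samples))
print(len(packed), "bytes vs", samples.size * 2, "bytes at 16 bits/sample")
```

Even before any entropy coding, a 12-bit code packed this way is a 25% saving over storing 16-bit words.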

Another mode I would love to see in cameras is a variation on continuous shooting called "accumulator mode": reserve some RAM at a much higher bit depth than the sensor, and add each shot into that buffer. You could burst multiple shots to memory without filling the flash write buffer much, permitting artificial synthesis of very long shutter speeds while requiring only moderate ND filters at most. For example, right now in bright sunlight you need a 3 or 4 stop ND filter to get an A7M3's burst rate slow enough to have near 100% shutter duty cycle and not back up the buffer. If the camera could accumulate internally, you'd only need a 1-2 stop filter, maybe even less.
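Conceptually, something like this (hypothetical mode, made-up sizes):

```python
# Rough sketch of the hypothetical "accumulator mode" described above: each
# burst frame is added into one wide RAM buffer instead of being queued for
# the card, synthesizing a long exposure at the end.
import numpy as np

def accumulate_burst(frames, acc_bits=32):
    """Sum 14-bit frames into a 32-bit accumulator; no per-frame storage needed."""
    acc = np.zeros(frames[0].shape, dtype=np.uint32)
    for f in frames:                       # in-camera this would run per readout
        acc += f
    return acc                             # written out once at the end

# Tiny 4x6 "frames" stand in for full sensor readouts; the arithmetic scales:
# 120 frames * 16383 max count < 2**32, and a 6000x4000 sensor at 32 bits
# would need 6000*4000*4 bytes ~= 96 MB of accumulator RAM.
frames = [np.full((4, 6), 16383, dtype=np.uint16) for _ in range(120)]
print(accumulate_burst(frames)[0, 0])      # 1965960 = 120 * 16383
```

Each frame touches the accumulator once and is then discarded, so nothing piles up waiting for the card; only the single wide result gets written out at the end.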
 
Can this be a thing now, or was the processor never the problem?
Lossy compression needs more computation than lossless compression, because lossless skips the quantization step, so it is not a processor/GPU issue, but a transfer/bus traffic issue. With lossy compression, the Sony camera already has a bottleneck clearing its buffer; what about lossless compressed raw files? You will simply fit fewer frames in the buffer. So instead of more fps you simply get a little less and then wait for the buffer to clear.
Note that certain things are such common operations that they are readily available in fixed-function hardware IP that can achieve much greater performance than flexible-function systems.

For example: JPEG engines and H.264 engines. These are almost always implemented in fixed-function hardware accelerators in cameras, and the subcomponents of them (such as the Huffman encoder in a JPEG engine) can't easily be repurposed for similar (but not exactly the same) algorithms. Also in this particular case, a JPEG datapath is only 8 bits wide.

Whether or not the XR makes lossless RAW possible depends on whether they added a hardware accelerator that can accelerate key parts of the operation. Since they didn't announce lossless RAW, unfortunately it's unlikely that they did so.
Fully agreed; in fact, further down the thread I mentioned that these are functions in the co-processor - etched onto silicon. Firmware is for general-purpose processing, while many of the graphics-intensive functions are fixed, as they are industry standards and will obviously be part of the GPU, just as you mention with the JPEG engine, which encompasses the Huffman algo.

As to the re-usability of functions like Huffman compression, I have no idea, but remember the whole XR is one whole GPU, which means Sony can in fact reuse the sub-functions (like a subroutine in assembly language, or an object in OOP).

As I stated - in the two-step process of quantization then compression, lossless compression is simply the same thing without the quantization step. Compression algorithms MUST be lossless; only in the quantization process can noise or least significant bits be dropped, resulting in lossy compression. So by all logical analysis, if you want 100% quality JPEG or 100% quality RAW, it is just done without a quantization step - the GPU will do less computation.

That is at least how JPEG works when you select 100% quality. Oh my, I got myself into a lecture again. Sorry about that. Have a good day. Cheers.
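In code terms, the two-step pipeline described above boils down to something like this (toy values, with zlib standing in for whatever lossless coder the hardware actually uses):

```python
# Minimal sketch of the two-step idea: quantization drops least-significant
# bits (lossy), then a generic lossless coder shrinks what is left.
import zlib

def quantize(samples, dropped_bits):
    """Drop the N least significant bits of each sample (the lossy step)."""
    return [s >> dropped_bits for s in samples]

def lossless_pack(samples):
    """Losslessly compress the samples (stand-in for Huffman etc.)."""
    raw = b"".join(s.to_bytes(2, "little") for s in samples)
    return zlib.compress(raw)

samples = [16383, 8192, 8191, 8190, 40, 39, 38, 0]   # made-up 14-bit sensor values

lossy    = lossless_pack(quantize(samples, 2))        # quantize + compress
lossless = lossless_pack(samples)                     # compress only, no quantization
print(len(lossy), len(lossless))
```

Skipping quantize() is exactly the "100% quality" case: the lossless coder runs either way, it just has more bits to chew on.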
 
Edit: to be clear, lossy compression is not an issue per se, the problem is that the lossy compression is worse than the average lossy compression used by other manufacturers. In any case, it would be great to have options for lossless + higher quality lossy.
Yeah. I'd be entirely willing to accept a partial tradeoff: file sizes slightly higher than the current lossy RAW, but instead of a tone curve plus the "stripe quantization", just a tone curve that encodes a few more bits per pixel, similar to what Nikon does. Such an approach would be computationally efficient and save space. For example, using a tone curve to take things down to 12 bits/pixel would be almost indistinguishable from lossless in nearly any scenario (most definitely avoiding the artifacts that stripe quantization has).

In fact, with most Sony cameras, just bit-packing would save some space. Sony's uncompressed RAWs use 16 bits per sample regardless of the underlying sensor bit depth.
Agreed. Now that they have enough computing power, I hope Sony allocates enough resources (programmers/engineers) to make such things happen.
Another mode I would love to see in cameras is a variation on continuous shooting called "accumulator mode": reserve some RAM at a much higher bit depth than the sensor, and add each shot into that buffer. You could burst multiple shots to memory without filling the flash write buffer much, permitting artificial synthesis of very long shutter speeds while requiring only moderate ND filters at most. For example, right now in bright sunlight you need a 3 or 4 stop ND filter to get an A7M3's burst rate slow enough to have near 100% shutter duty cycle and not back up the buffer. If the camera could accumulate internally, you'd only need a 1-2 stop filter, maybe even less.
Yes, that would be awesome, and those shots would be re-aligned, right? And by the same method, you can also accomplish this:

- reduced noise in low light

- improved detail, you could take a burst of the moon and have the camera create a combined high detail shot.
 
Agreed. And now I am frustrated, seeing how the eye AF works for birds on the Canon R5.

I shoot birds in flight between 1m and 25m from me. Having a head or eye AF operational for birds would triple the quality of my output no problem. My camera (A7iii +70-200mm GM) focuses (AFC wide) on the bird but most of the time not on the head. If the bird is passing sideways, it focuses on the closest wing.
 
You could rent an A9ii or A7riv and check how much the AF improved and if it helps for your case.
 
Edit: to be clear, lossy compression is not an issue per se, the problem is that the lossy compression is worse than the average lossy compression used by other manufacturers. In any case, it would be great to have options for lossless + higher quality lossy.
Yeah. I'd be entirely willing to accept a partial tradeoff: file sizes slightly higher than the current lossy RAW, but instead of a tone curve plus the "stripe quantization", just a tone curve that encodes a few more bits per pixel, similar to what Nikon does. Such an approach would be computationally efficient and save space. For example, using a tone curve to take things down to 12 bits/pixel would be almost indistinguishable from lossless in nearly any scenario (most definitely avoiding the artifacts that stripe quantization has).

In fact, with most Sony cameras, just bit-packing would save some space. Sony's uncompressed RAWs use 16 bits per sample regardless of the underlying sensor bit depth.
Agreed. Now that they have enough computing power, I hope Sony allocates enough resources (programmers/engineers) to make such things happen.
Another mode I would love to see in cameras is a variation on continuous shooting called "accumulator mode": reserve some RAM at a much higher bit depth than the sensor, and add each shot into that buffer. You could burst multiple shots to memory without filling the flash write buffer much, permitting artificial synthesis of very long shutter speeds while requiring only moderate ND filters at most. For example, right now in bright sunlight you need a 3 or 4 stop ND filter to get an A7M3's burst rate slow enough to have near 100% shutter duty cycle and not back up the buffer. If the camera could accumulate internally, you'd only need a 1-2 stop filter, maybe even less.
Yes, that would be awesome, and those shots would be re-aligned, right? And by the same method, you can also accomplish this:

- reduced noise in low light

- improved detail, you could take a burst of the moon and have the camera create a combined high detail shot.
Doing alignment would require a LOT more processing power.

At least the use case I see is primarily improved DR in cases where a long exposure was already acceptable (e.g. on a tripod) - people do still use tripods!

A technique I use is to do the following:

Put a weaker (3-4 stop) ND filter on the camera

Put camera into continuous drive mode and ISO 100

Set exposure to preserve highlights

Run the camera for a while

Stack them in post

This gives you the "smooth water" effects of an extremely aggressive ND filter, but without the color casts and with much higher dynamic range because you're synthesizing an ultra-low ISO. You also gain the ability to frame your shot.

The problem is that you need to get the inter-frame time up to around 2-3 seconds if you don't want the buffer to fill up. Internal accumulation would allow you to shoot with exposure times as short as 1/maxframerate without any buffer limits.
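For anyone who wants to try the stack-in-post step from the recipe above, a bare-bones version looks like this (synthetic frames; real raw files would need demosaicing or per-channel handling first):

```python
# Sketch of the stack-in-post step: averaging many short exposures behaves
# like one long exposure with an ultra-low ISO.
import numpy as np

def stack_mean(frames):
    """Mean-stack a burst; noise in the mean drops by roughly sqrt(N)."""
    acc = np.zeros(frames[0].shape, dtype=np.float64)
    for f in frames:
        acc += f
    return acc / len(frames)

# e.g. 30 frames at 1/10s roughly emulate a 3s exposure for moving water,
# while highlights stay protected because each frame was exposed normally.
burst = [np.random.poisson(1000, (4, 6)).astype(np.float64) for _ in range(30)]
print(stack_mean(burst).std(), "vs single-frame noise ~", np.sqrt(1000))
```

Averaging keeps the original exposure level, so highlights stay protected, while the noise behaves as if you had shot one long exposure at an ultra-low ISO.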
 
Edit: to be clear, lossy compression is not an issue per se, the problem is that the lossy compression is worse than the average lossy compression used by other manufacturers. In any case, it would be great to have options for lossless + higher quality lossy.
Yeah. I'd be entirely willing to accept a partial tradeoff: file sizes slightly higher than the current lossy RAW, but instead of a tone curve plus the "stripe quantization", just a tone curve that encodes a few more bits per pixel, similar to what Nikon does. Such an approach would be computationally efficient and save space. For example, using a tone curve to take things down to 12 bits/pixel would be almost indistinguishable from lossless in nearly any scenario (most definitely avoiding the artifacts that stripe quantization has).

In fact, with most Sony cameras, just bit-packing would save some space. Sony's uncompressed RAWs use 16 bits per sample regardless of the underlying sensor bit depth.
Agreed. Now that they have enough computing power, I hope Sony allocates enough resources (programmers/engineers) to make such things happen.
Another mode I would love to see in cameras is a variation on continuous shooting called "accumulator mode": reserve some RAM at a much higher bit depth than the sensor, and add each shot into that buffer. You could burst multiple shots to memory without filling the flash write buffer much, permitting artificial synthesis of very long shutter speeds while requiring only moderate ND filters at most. For example, right now in bright sunlight you need a 3 or 4 stop ND filter to get an A7M3's burst rate slow enough to have near 100% shutter duty cycle and not back up the buffer. If the camera could accumulate internally, you'd only need a 1-2 stop filter, maybe even less.
Yes, that would be awesome, and those shots would be re-aligned, right? And by the same method, you can also accomplish this:

- reduced noise in low light

- improved detail, you could take a burst of the moon and have the camera create a combined high detail shot.
Doing alignment would require a LOT more processing power.
I would say that alignment just needs access to enough memory; processing power is not really an issue, because it's a step performed at the end, after the burst has been taken. It's not time critical, so it's fine to make the user wait, and the camera could display a message like: "processing, please wait ... 50% done, 00:03 left".

Also, only a burst of about 10 shots is necessary - beyond roughly 10 shots you won't gain more detail - so that also somewhat solves the memory issue and limits the processing time. And if the software is well implemented, it can use the gyro data from the camera, which makes alignment very straightforward, as long as your intention is to shoot a static subject.
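A hedged sketch of what gyro-assisted alignment could look like, assuming a distant static subject, small rotations and made-up numbers (the function names are mine, not any camera API):

```python
# Gyro-assisted alignment sketch: for a distant, static subject, small camera
# rotations map to a simple pixel shift, so each frame can be translated back
# before accumulation.
import numpy as np

def gyro_shift_px(yaw_rad, pitch_rad, focal_mm, pixel_pitch_um):
    """Approximate image shift caused by a small camera rotation."""
    focal_px = focal_mm * 1000.0 / pixel_pitch_um
    return focal_px * np.tan(yaw_rad), focal_px * np.tan(pitch_rad)

def align(frame, dx, dy):
    """Undo the shift by rolling the frame back (integer pixels, crop ignored)."""
    return np.roll(frame, (-int(round(dy)), -int(round(dx))), axis=(0, 1))

dx, dy = gyro_shift_px(yaw_rad=0.0005, pitch_rad=-0.0002, focal_mm=200, pixel_pitch_um=3.76)
print(dx, dy)   # ~26.6 and ~-10.6 pixels for this 200mm / 3.76um example
```

The gyro gives the translation estimate almost for free; a full image-correlation search would only be needed for subject motion or parallax.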
At least the use case I see is primarily improved DR in cases where a long exposure was already acceptable (e.g. on a tripod) - people do still use tripods!

A technique I use is to do the following:

Put a weaker (3-4 stop) ND filter on the camera

Put camera into continuous drive mode and ISO 100

Set exposure to preserve highlights

Run the camera for a while

Stack them in post

This gives you the "smooth water" effects of an extremely aggressive ND filter, but without the color casts and with much higher dynamic range because you're synthesizing an ultra-low ISO. You also gain the ability to frame your shot.
Understood. So you want the camera to do that internally, right?
The problem is that you need to get the inter-frame time up to around 2-3 seconds if you don't want the buffer to fill up. Internal accumulation would allow you to shoot with exposure times as short as 1/maxframerate without any buffer limits.
Wait, what do you mean by "buffer" here? Do you mean the reserved RAM for stacking the images, or the buffer that stores each individual image?

Let's call the accumulator RAM the "stack". If you give it enough bits, then you don't need to store anything in the buffer anymore. The RAWs will be 14bpp, so the max value for one pixel will be 11111111111111 in binary, i.e. 2^14 - 1 = 16383. If you allocate 15bpp, then you can accumulate 2 images in the stack, 16bpp => 4 images, so 2^(bpp - 14) images in general. So if we use 32bpp, we can accumulate 262144 shots, and that "stack" buffer would take 232MB (for a 61MP image). If we use 24bpp, then we can accumulate 1024 shots, and that stack buffer would take 174MB.

With 2 small buffers, it's possible to accumulate with virtually no limits.
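The same arithmetic as a tiny check:

```python
# Worked check of the accumulator arithmetic above: extra bits beyond the
# 14-bit frame depth set the frame count, and the buffer is pixels * bits / 8.
def capacity(acc_bpp, frame_bpp=14):
    return 2 ** (acc_bpp - frame_bpp)             # frames before overflow

def buffer_mib(megapixels, acc_bpp):
    return megapixels * 1e6 * acc_bpp / 8 / 2**20

for bpp in (24, 32):
    print(bpp, capacity(bpp), int(buffer_mib(61, bpp)))   # 24: 1024 / 174, 32: 262144 / 232
```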
 
