Eating stars

So this should affect visibly "ordinary" images as well, not just stars (with longer exposures)?
I will check when I have the camera ;) I doubt it will be very visible, but resolution tests may show the nature of the filtering. I think they analyze a small cluster surrounding each pixel, and if the value of the pixel in question is an outlier, it is assigned the minimum or maximum cluster value.
Right you are, as usual.

Details of the probable algorithm here:

http://blog.kasson.com/the-last-wor...e-sony-a7rii-long-exposure-spatial-filtering/

Jim
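
For concreteness, here is a minimal sketch of the kind of per-color-plane filtering described above; the 3x3 same-colour window, the clamp-to-min/max behaviour, and the threshold are guesses for illustration, not Sony's confirmed parameters:

```python
import numpy as np

def clip_outliers(plane, threshold=1.10):
    """Clamp a pixel to the min/max of its 8 same-colour neighbours when it
    falls outside their range by more than `threshold`. Window size and
    threshold are guesses for illustration, not Sony's parameters."""
    padded = np.pad(plane.astype(float), 1, mode="edge")
    out = plane.astype(float)          # astype returns a copy, safe to edit
    h, w = plane.shape
    for y in range(h):
        for x in range(w):
            win = padded[y:y + 3, x:x + 3].flatten()
            neigh = np.delete(win, 4)  # drop the centre pixel itself
            lo, hi = neigh.min(), neigh.max()
            if out[y, x] > hi * threshold:
                out[y, x] = hi         # outlier above: assign cluster max
            elif out[y, x] < lo / threshold:
                out[y, x] = lo         # outlier below: assign cluster min
    return out
```

A dim star that lights up a single same-colour photosite is, by construction, exactly such an outlier, which is why a filter of this class eats small stars while leaving smooth gradients alone.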
 
That algorithm sounds like it belongs to the same broad class of algorithms as median filtering and bilateral filtering. Clipping (per-color-plane) pixels to a small neighbourhood would suppress single-pixel outliers, but not sharp «ridges» or other structured high-acutance things.

Why would they do it? To improve their DxO Mark score? Because (some) raw developers do a poor job? Because their built-in lossless raw compression needs it?

-h
Suppression of hot pixels caused by dark current.
 
But why do it in the camera, for _raw_ files?
I don't have insight into the minds of the Sony engineers. Remember, these are the same folks that brought us Craw compression, wouldn't let us have uncompressed, and when they finally did, wouldn't offer lossless compression.

Jim
 
Part of me thinks of all the time we will need to waste hacking yet another Sony format (lossless compressed this time) and asks them not to go there :)
 
It is lossy compressed.
Are you saying that the "confetti noise" with long exposures is a compression artifact?
I'm saying it is not directly comparable, I'm afraid. We can have only so many variables for the results to be comparable. 4 secs vs. 30 secs is already too much of a difference IMHO.
Yes, but that makes the algorithm we are discussing even more questionable. If it still works for 30 sec exposures, it failed to remove the hot pixels; and I doubt that they are just a compression artifact.
 
As I said, I do not see any evidence that the shots are comparable.
 
There are conditions under which the proposed algorithm will fail to remove hot pixels. I think you can find some of those by inspecting the algorithm.
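
For example, under the neighbourhood-clamp sketch earlier in the thread (still an assumption about what the camera does, not confirmed behaviour), two adjacent hot photosites of the same colour shield each other, because each one sees the other inside its window:

```python
import numpy as np

plane = np.full((7, 7), 100.0)        # one Bayer colour plane, flat field
plane[1, 1] = 4000.0                  # isolated hot pixel
plane[4, 3] = plane[4, 4] = 4000.0    # two adjacent hot pixels

def neighbour_max(p, y, x):
    """Max of the 8 neighbours around (y, x), centre value excluded once."""
    win = p[y - 1:y + 2, x - 1:x + 2].flatten().tolist()
    win.remove(p[y, x])
    return max(win)

print(neighbour_max(plane, 1, 1))   # 100.0 -> the lone pixel is an outlier
print(neighbour_max(plane, 4, 3))   # 4000.0 -> each pixel of the pair sees
                                    # the other, so a min/max clamp keeps both
```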
 
Actually, it is pretty easy to see that what I posted cannot be produced by such an algorithm. That algorithm does not make much sense for the G pixels anyway. The processed image is full of hot pixels.
 
As I said, I do not see any evidence that the shots are comparable.
They do not have to be for what I said.

BTW, here is a whole thread about hot/warm pixels with long exposures:

 
Why would they do it? To improve their DxO Mark score? Because (some) raw developers do a poor job? Because their built-in lossless raw compression needs it?
Suppression of hot pixels caused by dark current.
Obviously, Jim's answer is at least a very reasonable justification.

However, Sony's lossy raw compression actually computes the min and max values that Jim's presumed-correct filter equation needs. If they generate a lossy-compressed raw image and keep the "outliers" in the image data, those outliers could cause significant posterization (coarser scaling of the 7-bit value offsets) that would affect not just one pixel, but the entire 16-pixel sequence (within the color-interleaved 32-pixel ARW compression block). In other words, stray pixel values could cause much more severe artifacting under ARW compression than they would in most raw formats, so Sony may have a little extra motivation to do this filtering.

Sony now supports both lossy-compressed and uncompressed raw, but the default path has long been lossy-compressed, and their compression is designed so that image data compressed by it can be updated in-place. In short, I'd bet a lot of the imaging pipe is tuned to work with compressed data; I wouldn't even be surprised if the compression -- and this filtering -- is literally done on the sensor chip. If so, my suggestion to Sony would be to make selection of the uncompressed raw output disable the outlier filtering. That's probably already an alternative code path at a very low level, so why not?
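
To make the posterization point concrete, here is a toy sketch of min/max-plus-7-bit-offset coding for one 16-pixel same-colour sequence. (The real ARW scheme also stores the max value and the min/max positions exactly and applies a tone curve first; this simplified version only keeps the min, so treat it as an illustration of the quantization step, not the actual codec.)

```python
import numpy as np

def encode(vals):
    """Toy ARW-style lossy coding of 16 same-colour pixels: keep the min,
    turn the rest into 7-bit offsets, right-shifted until the span fits.
    Lossy whenever max - min > 127."""
    vmin, vmax = int(vals.min()), int(vals.max())
    shift = 0
    while (vmax - vmin) >> shift > 127:   # offsets must fit in 7 bits
        shift += 1
    return vmin, shift, (vals - vmin) >> shift

def decode(vmin, shift, offsets):
    return vmin + (offsets << shift)

rng = np.random.default_rng(0)
flat = rng.integers(980, 1020, 16)    # smooth patch: span < 128, shift = 0
print(decode(*encode(flat)) - flat)   # all zeros -- effectively lossless

hot = flat.copy()
hot[5] = 4000                         # one hot pixel in the sequence
print(decode(*encode(hot)) - hot)     # every pixel now posterized in steps
                                      # of 2**shift (32 here)
```

One stray value coarsens the step for all 16 same-colour pixels in the block, which is exactly the extra incentive to squash outliers before encoding.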
 
Good points, Hank. BTW, I’ve been working with uncompressed files so far.
 
Yes, delta-based encoders can propagate the error.
 
I can just imagine that somewhere within Sony there is someone who is slapping his knee, going "bwhahahaha" (or the cultural equivalent), exclaiming something along the lines of: "You wouldn't believe it, but the guys on the DPR PST forum are looking at individual photosite values from our cameras again!"

:)

-F
 
The way I proposed the algorithm that Jim refers to was to examine the patterns in pixel values resulting from the spatial filtering:

[Diagram: pairs of pixels with identical values]

In the above diagram I have done this for only a small area of an image from a Sony A7RII running firmware v4.0.

In my opinion, this "pixel pairing" is the smoking gun.

More info here: http://www.markshelley.co.uk/Astronomy/SonyA7S/diagnosingstareater.html

Mark
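
For anyone who wants to reproduce this on their own files, here is a hypothetical sketch of such a pairing scan on a single Bayer colour plane; the function name and window logic are mine, and Mark's exact procedure is on the linked page:

```python
import numpy as np

def find_paired_outliers(plane):
    """Flag horizontally adjacent pixels with identical values that are
    jointly higher or lower than all surrounding same-colour neighbours.
    A guess at the 'pixel pairing' check, not Mark's actual code."""
    hits = []
    h, w = plane.shape
    for y in range(1, h - 1):
        for x in range(1, w - 2):
            v = plane[y, x]
            if v != plane[y, x + 1]:
                continue                             # not an equal pair
            ring = np.concatenate([
                plane[y - 1, x - 1:x + 3],           # row above the pair
                plane[y + 1, x - 1:x + 3],           # row below the pair
                [plane[y, x - 1], plane[y, x + 2]],  # left / right of pair
            ])
            if v > ring.max() or v < ring.min():
                hits.append((y, x))
    return hits   # run again on plane.T to catch vertical pairs
```

An excess of such pairs, beyond what requantization alone would explain, would support the spatial-filtering interpretation.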
 
Interesting.

However, it looks to me like not all the paired values you've marked are outliers.

I think it's most likely that the pre-processing for lossy ARW compression, which finds the min, max, and scaling factor for each interleaved 16-same-color-pixel subsequence in a 32-pixel block, is the outlier detector. The outlier values being set the same might be nothing more than posterization in that algorithm. This should be detectable by looking for subtle differences in the processing of horizontal vs. vertical patterns, or of patterns at 32-pixel block edges. Keep in mind compressed pixel values are only 7 bits, and scaling is by shifts (unrounded, power-of-2 steps), so the probability of duplicate values in the compressed form is much higher... and could be higher in uncompressed raws that still use that processing as the detector of outliers.

The distinguishing feature would be if horizontal and vertical patterns of outliers are handled the same or slightly differently, or if things look a little different at 32-pixel block boundaries.

I suppose it is also possible that the star eating happens very late, in which case your algorithm makes more sense, but literally copying a value would be cruder than I'd expect for Sony....
 
However, it looks to me like not all the paired values you've marked are outliers.
I only marked pairs that were either higher than neighbouring values (of the same pixel colour) or lower than neighbouring values.
I think it's most likely that the pre-processing for lossy ARW compression, which finds the min, max, and scaling factor for each interleaved 16-same-color-pixel subsequence in a 32-pixel block, is the outlier detector. The outlier values being set the same might be nothing more than posterization in that algorithm. This should be detectable by looking for subtle differences in the processing of horizontal vs. vertical patterns, or of patterns at 32-pixel block edges. Keep in mind compressed pixel values are only 7 bits, and scaling is by shifts (unrounded, power-of-2 steps), so the probability of duplicate values in the compressed form is much higher... and could be higher in uncompressed raws that still use that processing as the detector of outliers.

The distinguishing feature would be if horizontal and vertical patterns of outliers are handled the same or slightly differently, or if things look a little different at 32-pixel block boundaries.
It's certainly possible that something slightly different happens at 32-pixel block boundaries. I'll take a closer look. But I think my suggested algorithm captures the essence of what is going on, and it also explains the artifacts we see in real astro-images.
I suppose it is also possible that the star eating happens very late, in which case your algorithm makes more sense, but literally copying a value would be cruder than I'd expect for Sony....
Nikon was doing something very similar in the early years.

Mark
 
