hjulenissen
Senior Member
Good point, I had not thought about that.
Right you are, as usual. I will check when I have the camera.

So this should affect visibly "ordinary" images as well, not just stars (with longer exposures)?

I doubt it will be very visible, but resolution tests may show the nature of the filtering. I think they analyze a small cluster surrounding each pixel and, if the value of the pixel in question is an outlier, it is assigned the minimum or maximum cluster value.
Details of the probable algorithm here:
http://blog.kasson.com/the-last-wor...e-sony-a7rii-long-exposure-spatial-filtering/
Jim
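The filter described above (examine a small cluster around each pixel; if the centre is an outlier, assign it the cluster minimum or maximum) can be sketched as follows. This is a speculative reconstruction: the 3x3 window, the outlier threshold, and the per-colour-plane handling are my assumptions, not Sony's actual parameters.

```python
import numpy as np

def clip_outliers(plane, threshold=1.5):
    """Clip each pixel of one colour plane to the min/max of its 3x3
    neighbourhood (centre excluded) when it is an outlier.

    Speculative sketch of the presumed filter; window size and the
    threshold are guesses, not Sony's actual parameters.
    """
    padded = np.pad(plane.astype(float), 1, mode="edge")
    out = plane.astype(float).copy()
    h, w = plane.shape
    for y in range(h):
        for x in range(w):
            win = padded[y:y + 3, x:x + 3].copy()
            win[1, 1] = np.nan                  # exclude the centre pixel
            lo, hi = np.nanmin(win), np.nanmax(win)
            v = plane[y, x]
            # an "outlier" pokes well outside the neighbourhood's range
            if v > hi + threshold * (hi - lo):
                out[y, x] = hi                  # assign the cluster maximum
            elif v < lo - threshold * (hi - lo):
                out[y, x] = lo                  # assign the cluster minimum
    return out
```

On a flat patch with one hot pixel, the hot value collapses to the neighbourhood maximum, which is exactly the "star eating" behaviour being debated.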
hjulenissen wrote:
Suppression of hot pixels caused by dark current.

That algorithm sounds like it belongs to the same broad class of algorithms as median filtering and bilateral filtering. Clipping (per-color-plane) pixels to a small neighbourhood would suppress single-pixel outliers, but not sharp «ridges» or other structured high-acutance features.
Why would they do it? To improve dxo mark score? Because (some) raw developers do a poor job? Because their built-in lossless raw compression needs it?
-h
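The point that neighbourhood clipping suppresses single-pixel outliers but not sharp ridges is easy to demonstrate numerically. The sketch below is illustrative only (generic 3x3 min/max clip, not Sony's actual filter): an isolated hot pixel is flattened, but each member of a short hot "ridge" still has a hot neighbour, so the clip leaves the ridge intact.

```python
import numpy as np

def clip_to_neighbourhood(plane):
    """Clip each pixel to the min/max of its 3x3 neighbourhood
    (centre excluded). Illustrative only; not Sony's actual filter."""
    padded = np.pad(plane.astype(float), 1, mode="edge")
    out = plane.astype(float).copy()
    for y in range(plane.shape[0]):
        for x in range(plane.shape[1]):
            win = padded[y:y + 3, x:x + 3].copy()
            win[1, 1] = np.nan
            out[y, x] = np.clip(plane[y, x], np.nanmin(win), np.nanmax(win))
    return out

flat = np.full((7, 7), 100.0)

single = flat.copy()
single[3, 3] = 4000.0        # isolated hot pixel: gets clipped away
ridge = flat.copy()
ridge[3, 2:5] = 4000.0       # three-pixel "ridge": every member still
                             # has a hot neighbour, so the clip keeps it
```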
I don't have insight into the minds of the Sony engineers. Remember, these are the same folks that brought us Craw compression, wouldn't let us have uncompressed, and when they finally did, wouldn't offer lossless compression.

But why do it in the camera, for _raw_ files?

Suppression of hot pixels caused by dark current.
Part of me thinks of all the time we will need to waste to hack yet another Sony format (lossless compressed this time) and asks them not to go there.
Are you saying that the "confetti noise" with long exposures is a compression artifact?

It is lossy compressed.
I'm saying it is not directly comparable, I'm afraid. We can have only so many variables for the results to be comparable. 4 secs vs. 30 secs is already too much of a difference IMHO.
Yes, but that makes the algorithm we are discussing even more questionable. If it is still applied to 30 sec exposures, it failed to remove the hot pixels, and I doubt that they are just a compression artifact.
As I said, I do not see any proof that the shots are comparable.
There are conditions under which the proposed algorithm will fail to remove hot pixels. I think you can find some of those by inspecting the algorithm.
Actually, it is pretty easy to see that what I posted cannot be produced by such an algorithm. That algorithm does not make much sense for the G pixels anyway. The processed image is full of hot pixels.
They do not have to be comparable for what I said.
Obviously, Jim's answer is at least a very reasonable justification.

Suppression of hot pixels caused by dark current.
Good points, Hank. BTW, I’ve been working with uncompressed files so far.
ProfHankD wrote:
Obviously, Jim's answer is at least a very reasonable justification.

Suppression of hot pixels caused by dark current.
However, Sony's lossy raw compression actually computes the min and max values that Jim's presumed-correct filter equation needs. If they generate a lossy-compressed raw image and keep the "outliers" in the image data, those outliers could cause significant posterization (coarser scaling of the 7-bit value offsets) that would affect not just one pixel, but the entire 16-pixel sequence (within the color-interleaved 32-pixel ARW compression block). In other words, stray pixel values could cause much more severe artifacting under ARW compression than they would in most raw formats, so Sony may have a little extra motivation to do this filtering.
Sony now supports both lossy-compressed and uncompressed raw, but the default path has long been lossy-compressed, and their compression is designed so that image data compressed by it can be updated in-place. In short, I'd bet a lot of the imaging pipe is tuned to work with compressed data; I wouldn't even be surprised if the compression -- and this filtering -- is literally done on the sensor chip. If so, my suggestion to Sony would be to make selection of the uncompressed raw output disable the outlier filtering. That's probably already an alternative code path at a very low level, so why not?
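The posterization mechanism Hank describes can be sketched as a round-trip through a toy min/max + 7-bit shifted-delta coder. This is loosely modelled on public descriptions of ARW lossy compression (16 same-colour pixels per group, 7-bit deltas, power-of-2 shift steps); everything else here is simplified and not the actual bitstream format.

```python
import numpy as np

def encode_decode_block(vals):
    """Round-trip 16 same-colour pixel values through a simplified
    min/max + 7-bit shifted-delta scheme, loosely modelled on public
    descriptions of Sony's lossy ARW compression. Simplified sketch,
    not the real bitstream format."""
    vals = np.asarray(vals, dtype=np.int64)
    lo, hi = vals.min(), vals.max()
    shift = 0
    while (hi - lo) >> shift > 127:      # deltas must fit in 7 bits
        shift += 1
    deltas = (vals - lo) >> shift        # quantised offsets from the min
    return lo + (deltas << shift), shift

# A quiet group survives unchanged (shift 0 -> effectively lossless)...
quiet = [100 + i % 4 for i in range(16)]
# ...but one stray hot pixel forces shift 5, so every other pixel in
# the group is posterised to 32-DN steps.
hot = list(quiet)
hot[5] = 4000
```

One outlier coarsens the quantisation step for all sixteen pixels in its group, which is exactly why a stray hot pixel is more damaging here than in most raw formats.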
Yes, delta-based encoders can propagate the error.
I can just imagine that somewhere within Sony there is someone who is slapping his knee, going "bwhahahaha" (or the cultural equivalent), exclaiming something along the lines of: "You wouldn't believe it, but the guys on the DPR PST forum are looking at individual photosite values from our cameras again!"

Interesting.

The way I came up with the algorithm that Jim refers to was to actually examine the patterns in pixel values resulting from the spatial filtering:
[Diagram: pairs of pixels with identical values]
In the above diagram I have done this only in a small area of the image from a Sony A7RII firmware v4.0.
In my opinion, this "pixel pairing" is the smoking gun.
More info here: http://www.markshelley.co.uk/Astronomy/SonyA7S/diagnosingstareater.html
Mark
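A detector for the "pixel pairing" signature Mark describes might look like the following. This is a hypothetical sketch in the spirit of his analysis, not his actual method: the comparison rule (adjacent same-plane pixels sharing an identical value while both stand apart from their other neighbours) and the neighbourhood size are my assumptions.

```python
import numpy as np

def find_suspicious_pairs(plane):
    """Find horizontally adjacent pixels within one colour plane that
    share an identical value while jointly standing above (or below)
    their other neighbours -- the 'pixel pairing' signature.

    Hypothetical sketch; the rule and neighbourhood are assumptions,
    not the exact method used on the A7RII data.
    """
    pairs = []
    h, w = plane.shape
    for y in range(h):
        for x in range(w - 1):
            if plane[y, x] != plane[y, x + 1]:
                continue
            v = plane[y, x]
            # gather surrounding same-plane values around the pair
            ys = slice(max(y - 1, 0), min(y + 2, h))
            xs = slice(max(x - 1, 0), min(x + 3, w))
            neigh = plane[ys, xs].flatten().tolist()
            neigh.remove(v)                 # drop the pair itself
            neigh.remove(v)
            if neigh and (v > max(neigh) or v < min(neigh)):
                pairs.append((y, x))
    return pairs
```

On a flat plane this flags only a genuinely outlying identical pair, not the ordinary equal-valued neighbours that occur everywhere in quiet areas.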
I only marked pairs that were either higher than neighbouring values (of the same pixel colour) or lower than neighbouring values.
However, it looks to me like not all the paired values you've marked are outliers.
It's certainly possible that something slightly different happens at 32-pixel block boundaries. I'll take a closer look. But I think my suggested algorithm captures the essence of what is going on, and it also explains the artifacts we see in real astro-images.

I think it's most likely that the pre-processing for lossy ARW compression, which finds the min, max, and scaling factor for each interleaved 16-same-color-pixel subsequence in a 32-pixel block, is the outlier detector. The outlier values being set the same might be nothing more than posterization in that algorithm. This should be detectable by looking for subtle differences in the processing of horizontal vs. vertical, or 32-pixel block edge, patterns. Keep in mind compressed pixel values are only 7 bit, and scaling is by shifts (not rounded, and in power-of-2 steps), so the probability of duplicate values in the compressed form is much higher... and could be higher in uncompressed raws that still use that processing as the detector of outliers.
The distinguishing feature would be if horizontal and vertical patterns of outliers are handled the same or slightly differently, or if things look a little different at 32-pixel block boundaries.
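One way to run that check would be to tally where the identical-value pairs fall relative to the 32-pixel compression blocks. The helper below is a hypothetical diagnostic in the spirit of the suggestion above (the block size follows the ARW description in this thread; the edge/interior bucketing is my own choice):

```python
from collections import Counter

def tally_pair_positions(pair_columns, block=32):
    """Tally where identical-value pairs fall within 32-pixel
    compression blocks. If the pairing comes from ARW compression
    pre-processing, pairs should behave differently at block edges;
    a plain row-wise filter should show no such pattern.

    Hypothetical diagnostic; bucketing scheme is an assumption.
    """
    counts = Counter()
    for col in pair_columns:
        offset = col % block
        # bucket block-edge pairs separately from interior ones
        counts["edge" if offset in (0, block - 1) else "interior"] += 1
    return counts
```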
Nikon was doing something very similar in the early years.

I suppose it is also possible that the star eating happens very late, in which case your algorithm makes more sense, but literally copying a value would be cruder than I'd expect for Sony....