Eating stars

In my opinion, this "pixel pairing" is the smoking gun.
I think it's most likely that the pre-processing for lossy ARW compression which finds the min, max, and scaling factor for each interleaved 16-same-color-pixel subsequence in a 32-pixel block is the outlier detector.
Indeed.

I'd be interested to see a controlled test that uses uncompressed raws, to remove any possible influence of the posterization that can be caused by ARW compression, since posterization essentially means the appearance of repeated identical pixel values.

One way to implement a reproducible test to see if the Sony really eats imaged point sources of light — i.e. real stars — instead of limiting its appetite to hot pixels that are not actually stars, would be, for example, to photograph a test chart.

Smartphones with OLEDs seem to be a good choice for displaying test charts, as they typically have a better contrast ratio (near-infinite?) than LCDs, and might thus better emulate the contrast of a real, bright star against an inky black sky.

The Samsung Galaxy S7 has an OLED display that uses a Bayer-like GRGB pattern.

According to Displaymate, both the B and R subpixels of a Samsung S7 have a pitch of 408 subpixels per inch (SPPI).

The Samsung's green subpixels, being twice as numerous as the R and B subpixels, have a correspondingly higher density (408 SPPI * sqrt(2) ~= 577 SPPI).

The pixel density of full-color pixels on the GRGB Bayer-like diamond pattern of a Samsung AMOLED display is governed by the lower SPPI of the B and R subpixels, and is thus 408 PPI.

The 42MP sensor of the Sony A7R2 and A7R3 has a resolution of 7952x5304 pixels, and therefore a pixel pitch of 7952 / 36mm ~= 4.53 microns, or 220.9 pixels per millimeter, or about 5610.6 pixels per inch.

The 42MP Sony sensor thus has a pixel density about 5610.6 / 408 ~= 13.75 times that of the full-color pixels on a Samsung S7 AMOLED display.

If a Samsung S7 is photographed with a lens and subject distance combination such that the reproduction ratio is smaller than 1/13.75, then the Bayer-like diamond subpixel pattern of an AMOLED display's full-color pixel should be projected by a high-quality lens onto an area smaller than a pixel on the Sony camera's sensor.

For example, a 408 PPI pattern photographed with a Zeiss Otus 55mm lens at a distance of 1m should have a reproduction ratio of about 1/16.1 — i.e. yield something like 6577 PPI projected on the sensor, corresponding to a pitch of 3.86 microns — i.e. smaller than the 42MP Sony sensor's 4.53-micron pixel pitch.
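As a sanity check, the arithmetic above can be reproduced with a short script (thin-lens model; the 1m is assumed to be measured from subject to sensor plane, which is the assumption that yields the 1/16.1 figure):

```python
import math

# Sony A7R2/A7R3 sensor: 7952 px across a 36 mm wide full-frame sensor
sensor_px_per_mm = 7952 / 36.0                 # ~220.9 px/mm
sensor_ppi = sensor_px_per_mm * 25.4           # ~5610.6 PPI
sensor_pitch_um = 1000.0 / sensor_px_per_mm    # ~4.53 microns

# Thin-lens model: subject-to-sensor distance d = o + i, with 1/o + 1/i = 1/f.
# Eliminating i gives o^2 - d*o + d*f = 0; take the larger root for o.
f = 55.0       # Otus 55 mm focal length
d = 1000.0     # 1 m, assumed measured from subject to sensor plane
o = (d + math.sqrt(d * d - 4.0 * d * f)) / 2.0  # object distance
i = d - o                                       # image distance
magnification = i / o                           # ~1/16.1

# A 408 PPI display pattern projected onto the sensor
projected_ppi = 408.0 / magnification           # ~6577 PPI
projected_pitch_um = 25400.0 / projected_ppi    # ~3.86 microns

print(round(sensor_ppi, 1), round(1.0 / magnification, 1), round(projected_pitch_um, 2))
```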

A single white pixel displayed on a Samsung S7 AMOLED display, photographed in a dark room or under layers of blankets (to limit screen reflections) with an Otus 55mm at a distance of 1m, at an aperture equal to or faster than F/2.8 to limit diffraction, should thus be a good proxy for an actual, point-like star.

One could display a known number of single white pixels on the smartphone, arranged in a recognizable pattern, and then make 3.2-second and 4-second exposures to see if any stars disappear from that pattern.
 
In my opinion, this "pixel pairing" is the smoking gun.
I think it's most likely that the pre-processing for lossy ARW compression which finds the min, max, and scaling factor for each interleaved 16-same-color-pixel subsequence in a 32-pixel block is the outlier detector.
Indeed.

I'd be interested to see a controlled test that uses uncompressed raws, to remove any possible influence of the posterization that can be caused by ARW compression, since posterization essentially means the appearance of repeated identical pixel values.
Exactly the same spatial filtering effects happen on uncompressed raws as compressed ones. What we're seeing is not an artifact of compression.
One way to implement a reproducible test to see if the Sony really eats imaged point sources of light — i.e. real stars — instead of limiting its appetite to hot pixels that are not actually stars, would be, for example, to photograph a test chart.
One of the best direct comparison tests I've seen so far is the one Matt Grum has done here, using a background of glitter to form the "stars":


Mark
 
In my opinion, this "pixel pairing" is the smoking gun.
I think it's most likely that the pre-processing for lossy ARW compression which finds the min, max, and scaling factor for each interleaved 16-same-color-pixel subsequence in a 32-pixel block is the outlier detector.
Indeed.

I'd be interested to see a controlled test that uses uncompressed raws, to remove any possible influence of the posterization that can be caused by ARW compression, since posterization essentially means the appearance of repeated identical pixel values.
Exactly the same spatial filtering effects happen on uncompressed raws as compressed ones. What we're seeing as not an artifact of compression.
True enough. However, compression does itself perform some spatial filtering when neighborhood dynamic ranges are high:

Here's an analysis of occurrence of hot pixels (defined as more than 5 sigma intensity) in the same location, uncompressed raw for 16 exposures at 3.2 sec:



[image: hot pixel locations, uncompressed raw, 16 exposures at 3.2 sec]

Now look what happens when you allow the camera to compress the file:



[image: hot pixel locations, in-camera compressed raw, same exposures]

About a third of the hot pixels are AWOL.
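For anyone wanting to reproduce this kind of count, here is a minimal sketch of the approach, using synthetic frames as stand-ins for real dark frames (the 5-sigma cut matches the definition above; the frame size and hot-pixel locations are made up for illustration):

```python
import numpy as np

def hot_pixel_locations(frame, n_sigma=5.0):
    """Return the set of (row, col) sites more than n_sigma above the frame mean."""
    threshold = frame.mean() + n_sigma * frame.std()
    rows, cols = np.where(frame > threshold)
    return set(zip(rows.tolist(), cols.tolist()))

# Synthetic stand-ins for 16 dark frames: Gaussian read noise plus a few
# fixed-location hot pixels (hypothetical values).
rng = np.random.default_rng(0)
hot_sites = {(10, 10), (20, 30), (40, 5)}
frames = []
for _ in range(16):
    frame = rng.normal(100.0, 2.0, size=(64, 64))
    for r, c in hot_sites:
        frame[r, c] = 200.0  # far more than 5 sigma above the mean
    frames.append(frame)

# Hot pixels that persist at the same location across all exposures;
# comparing this set between uncompressed and compressed files shows
# how many went AWOL.
persistent = set.intersection(*(hot_pixel_locations(f) for f in frames))
print(sorted(persistent))
```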



Jim

--
 
The way I proposed the algorithm that Jim refers to, was to actually examine the patterns in pixel values resulting from the spatial filtering:

Pairs of pixels with identical values
Pairs of pixels with identical values

In the above diagram I have done this only in a small area of the image from a Sony A7RII firmware v4.0.
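A minimal sketch of this kind of check on a single colour plane (the min() filter at the end is purely illustrative of how a spatial filter manufactures identical pairs; it is not Sony's actual algorithm):

```python
import numpy as np

def count_identical_pairs(plane, threshold=0):
    """Count horizontally adjacent pixels in one colour plane that have
    identical values above `threshold`. An excess of such pairs over the
    chance rate is the signature of a spatial filter."""
    a, b = plane[:, :-1], plane[:, 1:]
    pairs = (a == b) & (a > threshold)
    return int(pairs.sum())

# Toy example: random 12-bit data has very few identical neighbours by chance...
rng = np.random.default_rng(1)
raw = rng.integers(0, 4096, size=(100, 100))
before = count_identical_pairs(raw)

# ...but replacing each pixel with min(pixel, right neighbour) -- one
# hypothetical star-eater-style filter -- forces many identical pairs.
filtered = np.minimum(raw[:, :-1], raw[:, 1:])
after = count_identical_pairs(filtered)
print(before, after)
```

On real raws the same count, run per Bayer colour plane, distinguishes files that went through the filter from those that did not.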

In my opinion, this "pixel pairing" is the smoking gun.

More info here: http://www.markshelley.co.uk/Astronomy/SonyA7S/diagnosingstareater.html

Mark
It is hard to say what we are looking at here without having the actual RAW file.
 
The way I proposed the algorithm that Jim refers to, was to actually examine the patterns in pixel values resulting from the spatial filtering:

Pairs of pixels with identical values
Pairs of pixels with identical values

In the above diagram I have done this only in a small area of the image from a Sony A7RII firmware v4.0.

In my opinion, this "pixel pairing" is the smoking gun.

More info here: http://www.markshelley.co.uk/Astronomy/SonyA7S/diagnosingstareater.html

Mark
It is hard to say what we are looking at here without having the actual RAW file.
You can download one of my Sony A7S bulb mode images from here:

https://drive.google.com/open?id=0B3Ky5pyZvsINaDBraUhvQUVadkk


It uses the old algorithm.

Drew Geraci made some raw A7RIII files available with exposures longer than 3.2sec. You can find a link at the bottom of this article:

https://petapixel.com/2017/11/14/sony-a7r-iii-star-eater-no/

They use the new algorithm.

The stars in his image survived the spatial filtering because of their larger size but the same pixel pairing artifacts exist and allow the effects of the algorithm on smaller sizes to be predicted.

Mark
 
The way I proposed the algorithm that Jim refers to, was to actually examine the patterns in pixel values resulting from the spatial filtering:

Pairs of pixels with identical values
Pairs of pixels with identical values

In the above diagram I have done this only in a small area of the image from a Sony A7RII firmware v4.0.

In my opinion, this "pixel pairing" is the smoking gun.

More info here: http://www.markshelley.co.uk/Astronomy/SonyA7S/diagnosingstareater.html

Mark
It is hard to say what we are looking at here without having the actual RAW file.
You can download one of my Sony A7S bulb mode images from here:

https://drive.google.com/open?id=0B3Ky5pyZvsINaDBraUhvQUVadkk

It uses the old algorithm.

Drew Geraci made some raw A7RIII files available with exposures longer than 3.2sec. You can find a link at the bottom of this article:

https://petapixel.com/2017/11/14/sony-a7r-iii-star-eater-no/

They use the new algorithm.

The stars in his image survived the spatial filtering because of their larger size but the same pixel pairing artifacts exist and allow the effects of the algorithm on smaller sizes to be predicted.
Thanks, I will play with those files when I have the time.

By the new algorithm - you mean a new version of the firmware? I am not a Sony user.

What firmware was used by dpreview in their star eating article for the A7RII? One can see unremoved hot pixels there, and very few equal pairs (some should exist in any file).
 
Any chance you can post the files you got from dpreview? Or long exposure dark frames from the A7RII? The ones I found here clearly show hot pixels which are not removed.
I don't think they are mine to spread around. Why don't you shoot Rishi a PM?
How about the files in the link I posted? You participated in that thread. Why aren't the hot pixels removed there? Here is a crop of one of them, zoomed in considerably.


30s, ISO 3200, A7R2
I downloaded the files from the Dropbox link. The EXIF shows it was v1.00 of the A7RII firmware, i.e. the camera's original firmware. The hot pixels you are seeing are the "confetti" noise on longer exposures that everyone complained about when the A7RII was first released.

Sony's response was to release a firmware update that introduced spatial filtering to all exposures of 4sec or longer and "fixed" the confetti noise but ate stars. Previously only Bulb mode exposures were affected by that spatial filtering.

Mark
 
You can download one of my Sony A7S bulb mode images from here:

https://drive.google.com/open?id=0B3Ky5pyZvsINaDBraUhvQUVadkk

It uses the old algorithm.

Drew Geraci made some raw A7RIII files available with exposures longer than 3.2sec. You can find a link at the bottom of this article:

https://petapixel.com/2017/11/14/sony-a7r-iii-star-eater-no/

They use the new algorithm.

The stars in his image survived the spatial filtering because of their larger size but the same pixel pairing artifacts exist and allow the effects of the algorithm on smaller sizes to be predicted.
Thanks, I will play with those files when I have the time.

By the new algorithm - you mean a new version of the firmware? I am not a Sony user.

What firmware was used by dpreview in their star eating article for the A7RII? One can see unremoved hot pixels there, and very few equal pairs (some should exist in any file).
Can you provide a link to the star eating article? I'm unsure which one you might be referring to. It doesn't make sense to me that hot pixels can exist in an image demonstrating "star eater".

Yes, by the new algorithm - I do mean a new version of the firmware

The original A7RII v1.0 firmware had the "confetti noise" problem on longer exposures.

Sony released v1.1 firmware that "fixed" the confetti noise problem on all exposures of 4sec or longer. It was almost certainly this update that introduced star eater to those exposures.

A lot later on, v4.0 of the A7RII firmware "improved" the spatial filtering by reducing its effect on the green channel - at least that's what my analysis suggests.

So in summary, it's important to know the firmware version of any image being examined.

Mark
 
Can you provide a link to the star eating article? I'm unsure which one you might be referring to. It doesn't make sense to me that hot pixels can exist in an image demonstrating "star eater".
Here:

https://www.dpreview.com/news/3195011528/analysis-the-sony-a7r-iii-is-still-a-star-eater

Yes, by the new algorithm - I do mean a new version of the firmware

The original A7RII v1.0 firmware had the "confetti noise" problem on longer exposures.

Sony released v1.1 firmware that "fixed" the confetti noise problem on all exposures of 4sec or longer. It was almost certainly this update that introduced star eater to those exposures.

A lot later on, v4.0 of the A7RII firmware "improved" the spatial filtering by reducing its effect on the green channel - at least that's what my analysis suggests.

So in summary, it's important to know the firmware version of any image being examined.
Thanks, this is helpful.
 
Can you provide a link to the star eating article? I'm unsure which one you might be referring to. It doesn't make sense to me that hot pixels can exist in an image demonstrating "star eater".
Here:

https://www.dpreview.com/news/3195011528/analysis-the-sony-a7r-iii-is-still-a-star-eater
It's using A7RII v3.0, which would be applying spatial filtering, but to be honest it's very difficult to tell anything useful from that jpeg. I'm not seeing any obvious hot pixels.

Mark
 
I played with those files and they have a very different character than the raws I examined earlier. I can see many equal pairs, but also many pairs which are close but not equal. The near-equal pairs might be due to compression (though some of the equal pairs might be compression artifacts too), or maybe the algorithm has a threshold and applies the correction only if the difference exceeds it.

I can see equal pixels which are relatively dim and are still local maxima. There is a lot of filtering going on apparently.

Apparently the firmware is responsible for the differences, as you say.
 
ProfHankD wrote:
Why would they do it? To improve DxOMark score? Because (some) raw developers do a poor job? Because their built-in lossless raw compression needs it?
Suppression of hot pixels caused by dark current.
Obviously, Jim's answer is at least a very reasonable justification.

However, Sony's lossy raw compression actually computes the min and max values that Jim's presumed-correct filter equation needs. If they generated a lossy-compressed raw image and kept the "outliers" in the image data, those outliers could cause significant posterization (scaling of the 7-bit value offsets) that would affect not just one pixel, but the entire 16-pixel sequence (within the color-interleaved 32-pixel ARW compression block). In other words, stray pixel values could cause much more severe artifacting under ARW compression than under most raw formats, so Sony may have a little extra motivation to do this filtering.
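To make the posterization mechanism concrete, here is a toy round-trip of one 16-sample same-colour block, based on the min/max-plus-7-bit-offsets scheme described above (a sketch of the mechanism, not Sony's exact bitstream layout):

```python
def arw_block_roundtrip(pixels):
    """Sketch of Sony-style lossy block coding for one 16-sample same-colour
    run: the min and max are stored exactly, and every other sample is stored
    as an offset from the min, quantised by a shift chosen so the offsets fit
    in 7 bits. Illustrative only, not the real ARW bitstream."""
    lo, hi = min(pixels), max(pixels)
    shift = 0
    while (hi - lo) >> shift > 127:   # offsets must fit in 7 bits
        shift += 1
    # Decode: min and max survive exactly, everything else is quantised.
    return [p if p in (lo, hi) else lo + (((p - lo) >> shift) << shift)
            for p in pixels]

# A low-dynamic-range block round-trips losslessly...
flat = list(range(1000, 1016))
assert arw_block_roundtrip(flat) == flat

# ...but one hot pixel ("outlier") in the block blows up the range, forcing a
# large shift and posterizing its 15 innocent neighbours.
hot = [1000 + i for i in range(15)] + [8000]
decoded = arw_block_roundtrip(hot)
step = 1 << 6  # range 7000 needs shift = 6, i.e. 64-DN quantisation steps
print(decoded[:4], step)
```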

Sony now supports both lossy-compressed and uncompressed raw, but the default path has long been lossy-compressed, and their compression is designed so that image data compressed by it can be updated in-place. In short, I'd bet a lot of the imaging pipe is tuned to work with compressed data; I wouldn't even be surprised if the compression -- and this filtering -- is literally done on the sensor chip. If so, my suggestion to Sony would be to make selection of the uncompressed raw output disable the outlier filtering. That's probably already an alternative code path at a very low level, so why not?
Good points, Hank. BTW, I’ve been working with uncompressed files so far.
It seems to me that what people want is a menu option to output an uncompressed raw data file with no algorithms applied whatever. Perhaps it could be called a "Technical Raw File" to discourage use by beginners.
 
It seems to me that what people want is a menu option to output an uncompressed raw data file with no algorithms applied whatever. Perhaps it could be called a "Technical Raw File" to discourage use by beginners.
One could even go lower and call it something like a "camera debug record" perhaps with extra EXIF about various normally hidden internal settings. This is also a perfect reason to continue, and fully open, the PlayMemories App interface -- so that custom apps could get access to things without changing the camera default behavior.

However, none of that will happen. The truth is that specs are a game, and Sony doesn't just build great sensors and very good cameras -- they have become experts at the game. If they have a really unprocessed raw, they'd be putting themselves at a disadvantage in comparisons with other cameras that don't.

Sony actually put themselves at a disadvantage early on in that the A100 favored high DR over (apparently) lower noise in JPEGs, which made them look less good than some competitors in testing back when DR wasn't the main thing people talked about. Favoring DR was the correct engineering decision in my opinion, and I love the idea that engineering had that much say. I've noticed that Sony has been pretty careful now to respond directly to what testers complain about... which makes me a little sad. :-(
 
It seems to me that what people want is a menu option to output an uncompressed raw data file with no algorithms applied whatever. Perhaps it could be called a "Technical Raw File" to discourage use by beginners.
The sensor already has hot pixel mapping - would "Technical Raw" disable that too?

What about the PDAF pixels - i.e. blue pixels which have been "stolen" for AF, so there is no blue value at that site? Would "Technical Raw" interpolate a value?

I'm reasonably happy for those to take place but I certainly don't want spatial filtering. However, other folk's mileage may vary.

Mark
 
It seems to me that what people want is a menu option to output an uncompressed raw data file with no algorithms applied whatever. Perhaps it could be called a "Technical Raw File" to discourage use by beginners.
The sensor already has hot pixel mapping - would "Technical Raw" disable that too?

What about the PDAF pixels - i.e. blue pixels which have been "stolen" for AF, so there is no blue value at that site? Would "Technical Raw" interpolate a value?

I'm reasonably happy for those to take place but I certainly don't want spatial filtering. However, other folk's mileage may vary.

Mark
Yeah - I remember JimK mentioning a few categories of things that did make sense for the camera to do - ones that were best described as "camera calibration corrections" - stolen PDAF pixels are effectively in that category too.

But spatial filtering and automatic dark frame subtraction should never be forced. (Sony at least allows turning off auto-DFS, Pentax didn't for quite a while.)
 
