Jack Hogan wrote:

Snipping for clarity

crames wrote:

Great Bustard wrote:

crames wrote: Here's the green channel of a piece of sky shot at 26mm, f/6.3 1/159 (S100):

26mm mean=61.01 STD=2.77

Piece of sky at 5mm, f/6.3 1/159:

5mm mean=60.76 STD=2.53

Did you get 5mm and 26mm reversed?

No, they're not reversed. They are crops representing the same area of sky. The 26mm focal length puts more pixels on that area.

5mm piece upsampled 520%:

5mm X 520% mean=60.76 STD=2.53

Note that although the noise is much more visible when the 5mm image is upsampled, the mean and standard deviation effectively do not change. (Measurements and resampling done in linear space before conversion to sRGB for display).
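This is easy to reproduce with synthetic data. A quick sketch below - the Poisson patch is a hypothetical stand-in for the linear green-channel crop, not the actual S100 raw, and the FFT-based resample is the sinc interpolation being discussed:

```python
import numpy as np
from scipy.signal import resample

# Hypothetical stand-in for the linear-space green-channel crop:
# Poisson ("shot noise") samples around the posted mean of ~61.
rng = np.random.default_rng(0)
patch = rng.poisson(lam=61.0, size=(64, 64)).astype(float)

# FFT-based (sinc) upsampling to ~520% of the original size,
# first along rows, then along columns.
up = resample(resample(patch, 333, axis=0), 333, axis=1)

print(f"original:  mean={patch.mean():.2f} std={patch.std():.2f}")
print(f"upsampled: mean={up.mean():.2f} std={up.std():.2f}")
```

The mean is preserved exactly (the DC bin is untouched by FFT zero-padding) and the standard deviation is preserved to within sampling error, just as measured above.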

Cliff, are the stats on this data from a raw file? It would be better if they were because raw data is proportional to the luminance hitting the sensor (The Signal), while rendered data is typically not.

They were processed identically from raw and measured while still in a linear color space. The data is indeed proportional to the luminance hitting the sensor.

Regardless, when we enlarge an image much beyond 1:1 resolution on our typical display devices we soon lose the benefit of the downsampling (binning) that happens in the CoC of our eyes and we start to perceive some of the noise as 'Signal', fooling our visual system - which is not used to seeing this way.

No, let's not bring in confounding factors like circle of confusion. The upsampled image has been upsampled with a sinc filter. You can measure the standard deviation without even looking at it.

If you feel your monitor is making it look noisier than it really is, just step back a bit until you can't see the monitor's pixel structure.

Sure.

You just said above that upsampling increases the standard deviation. Now you agree with me that the standard deviation doesn't change?

I think that to get the standard deviation to reduce when upsampling, you would have to somehow redistribute the photons that the pixel values represent, but that's not really possible after the shot has been captured.

Correct.

Please correct me if I'm wrong about this.

We note, however, that if we crop the photo taken at 5mm to the same framing as the photo taken at 26mm, then it is made with only 3.7% as much light and thus 7.3x more photon noise.

As shown, the 5mm and the 26mm have been cropped to the same framing. The standard deviations are 2.53 and 2.77, respectively. When the 5mm is upsampled to the size of the 26mm, the standard deviation does not change - it's still 2.5.

So where is the 7.3x more photon noise? Would it not show up in the standard deviation?

I would say it appears that cropping doesn't change the photon noise that has been "baked" into the pixel values.
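The point is easy to check numerically. A sketch with synthetic shot noise (hypothetical data, not the actual raw file): discarding pixels leaves the per-pixel statistics of the survivors alone.

```python
import numpy as np

# Synthetic shot noise standing in for a uniform patch of sky.
rng = np.random.default_rng(3)
img = rng.poisson(lam=61.0, size=(512, 512)).astype(float)

# Crop away 3/4 of the pixels.
crop = img[:256, :256]

# Per-pixel mean and standard deviation - and hence per-pixel SNR -
# are unchanged by the crop, to within sampling error.
print(f"full: mean={img.mean():.2f} std={img.std():.2f}")
print(f"crop: mean={crop.mean():.2f} std={crop.std():.2f}")
```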

The fallacy is saying, in effect, that cropping away 3/4 of the pixels is the same as removing 3/4 of the photons from the remaining pixels.

If the luminance hitting the sensor and the exposure are the same (that is, The Signal is the same) in both captures, the mean number of photons hitting each pixel on the sensor is also the same - and therefore so are their standard deviation and shot noise SNR, which can be assumed to be proportional to the square root of the mean number of arriving photons per pixel.
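For what it's worth, the square-root relationship falls straight out of a Poisson simulation (hypothetical photon counts, chosen only for illustration):

```python
import numpy as np

# Poisson photon-count simulation: per-pixel shot noise SNR equals
# sqrt(mean photons per pixel), regardless of how many pixels we sample.
rng = np.random.default_rng(1)
mean_photons = 400.0
pixels = rng.poisson(lam=mean_photons, size=100_000)

# For a Poisson process, std = sqrt(mean), so SNR = mean/std = sqrt(mean).
snr = pixels.mean() / pixels.std()
print(f"measured SNR = {snr:.2f}, sqrt(mean) = {np.sqrt(mean_photons):.2f}")
```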

Right.

If we have two captures of the same uniform subject taken by the same sensor at different focal lengths as above, and we want to compare the properties of the captured information from the same physical area in object space, we will have to compare images with a different number of pixels, the smaller focal length resulting in fewer of them in the region of interest. As long as the number of pixels under investigation is large enough that quantization noise can be ignored, both will have roughly the same shot noise SNR as measured statistically from the raw data.

As shown in the images I posted.

However, since each pixel has collected the same mean number of photons and the capture with the smaller focal length has fewer pixels in the region of interest, the total number of photons collected by it will be less (total number of photons = mean x number of pixels in ROI).

The mean stays the same, so therefore the photon noise stays the same. How is the total number of photons relevant?

To compare apples to apples we normally downsample the image with the larger number of pixels so that it has the same number of pixels as the smaller one. The process involves adding counts from neighbouring pixels and dividing the result by the number of pixels added together (our eyes also do this in the CoC). As we know, this averaging results in a lower standard deviation (and hence a higher SNR) for the downsampled version.
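The averaging effect is simple to demonstrate with synthetic shot noise (again hypothetical data, not the posted crops):

```python
import numpy as np

# Binning demo: average 2x2 blocks of synthetic shot noise.
# The mean is unchanged, but the standard deviation drops by
# sqrt(4) = 2, so the per-pixel SNR of the binned image doubles.
rng = np.random.default_rng(2)
img = rng.poisson(lam=61.0, size=(512, 512)).astype(float)

# Reshape so each 2x2 block is averaged into one output pixel.
binned = img.reshape(256, 2, 256, 2).mean(axis=(1, 3))

print(f"full:   mean={img.mean():.2f} std={img.std():.2f}")
print(f"binned: mean={binned.mean():.2f} std={binned.std():.2f}")
```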

I'm familiar with that.

If we wanted instead to do the opposite and upsample the image of the ROI with the smaller number of pixels to the same pixel count as the larger one, we would have to add noise to the upsampled image in order to provide the dithering necessary to allow us to have comparable images. Simply magnifying it makes it blocky as shown and doesn't do it. If noise is added in order to dither out the blockiness, the image with the fewer collected photons will appear noisier, theoretically in the same SNR proportion as in the paragraph above.

Jack, I upsampled the image with the ideal sinc interpolation filter. No blockiness has been introduced by the upsample, at all.

Upsampled noise looks noisier, not due to any change in the standard deviation but due to the way magnification shifts the visible frequencies in the noise to more sensitive regions of the eye's contrast sensitivity function.

Your suggestion to add more noise to dither out blockiness (caused by, what, nearest-neighbor upsampling?) would be part of a very poor workflow and is a complete red herring. If you don't purposely add more noise to an upsampled image, what do you get?