Equivalence question, please do not shoot me. Part II

When a photo scales down, both detail (absolute) and noise (absolute) scale with it. Detail has a spatial size associated with it (it has features) and is distinctive; random noise has none. When detail scales down, the brain can assimilate the change in scale; however, because random noise is featureless, it cannot. Therefore the brain cannot make the scale relationship with noise as it can with detail, and so it only associates the absolute magnitude of the noise with scale.
Then it should be easy for you to answer my question about the noise in the gray patches above.
 
I've browsed over the OP's summary posting. And all I can say is that the posters do not fully understand what they were posting about.

Please, digest the following short fact and restart discussion from there:

"Noise (the signal to noise ratio SNR) is a function of (spatial) frequency".

That's an almost trivial statement for anybody interested in audio, yet only very few people are aware that this is generally true, and that it includes imaging.
No one argues against a smaller pixel being more noisy than a larger pixel, all else equal. The discussion is not about a pixel-for-pixel comparison of photos made with different numbers of pixels, but the noise in the photo as a whole (or, alternatively, the noise over any given proportion of the photo, e.g. the noise of one 2x2 pixel vs the average noise of four 1x1 pixels).
The discussion should be centered around what is relevant or visible in a photo.
All such discussions "should be centered around what is relevant or visible in a photo" when discussing photography.
Relative noise is not relevant, but absolute noise is. However, the same cannot be said for detail, and that's the difference here.
Well, that's simply not true.
When a photo scales down, both detail (absolute) and noise (absolute) scale with it. Detail has a spatial size associated with it (it has features) and is distinctive; random noise has none. When detail scales down, the brain can assimilate the change in scale; however, because random noise is featureless, it cannot. Therefore the brain cannot make the scale relationship with noise as it can with detail, and so it only associates the absolute magnitude of the noise with scale. Of course, camera noise is not truly random and can have some structure to it (grain), but the size relationship is very subtle compared to detail.
Sorry, but a photo with a signal of 1 billion electrons and 10 million electrons of noise (1% noise) is going to look less noisy than a photo with a signal of 10 million electrons and 1 million electrons of noise (10% noise).

Less technically, the photo made with more light will look less noisy. Of course, if you display the more noisy photo small enough, look at it from far away enough, and/or blur it enough, then the noise will be less apparent in the more noisy photo.
It's pointless to talk about relative noise in photography.
In fact, that is the relevant point when it comes to the appearance of noise in a photo with the natural, and implied, viewing conditions (same display size viewed from the same distance).
 
So in layman's terms, because noise roams around,
while "genuine" image signal more or less stays put;
it won't persist at the same pixel location
and so won't be stacked to the same degree ?
Exactly. The genuine 'signal' at a particular point gets reinforced through stacking, but the random noise part of the measurement will partly cancel out as it is sometimes positive and sometimes negative, so the signal-to-noise ratio increases.
Not exactly exactly. If one stacks, say, 2 images, the signal s at each pixel doubles to 2s, and, with it, the noise increases by 41% from Sqrt(s) to Sqrt(2s) = 1.414Sqrt(s). Noise is not being cancelled out; it's actually increasing. But it is increasing at a slower rate than the signal, so the per-pixel s/n due to the stacking goes from s/Sqrt(s) = Sqrt(s) to 2s/Sqrt(2s) = Sqrt(2s) = 1.414Sqrt(s), an increase of 41%. Relative noise (not absolute noise, but noise relative to signal), by contrast, goes from 1/Sqrt(s) to 1/Sqrt(2s) = 0.707/Sqrt(s), a decrease of 29.3%.
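To make that arithmetic concrete, here is a minimal simulation sketch (my own illustration, not from the thread), assuming pure photon (Poisson) noise and an arbitrary per-pixel signal of 1000 electrons; it reproduces the 41% and 29.3% figures above.

```python
# Simulated photon noise: stack two frames and compare signal, noise, s/n.
import numpy as np

rng = np.random.default_rng(0)
s = 1000                      # mean signal per pixel, in electrons (arbitrary choice)
n_pixels = 1_000_000          # number of simulated pixels

frame1 = rng.poisson(s, n_pixels).astype(float)
frame2 = rng.poisson(s, n_pixels).astype(float)
stacked = frame1 + frame2     # stacking two frames: signals add

for name, img in (("single frame", frame1), ("stack of 2", stacked)):
    signal = img.mean()
    noise = img.std()
    print(f"{name:12s} signal={signal:8.1f}  noise={noise:6.1f}  "
          f"s/n={signal/noise:5.1f}  relative noise={noise/signal:.4f}")

# Expected: noise grows from ~sqrt(1000)=31.6 to ~sqrt(2000)=44.7 (+41%),
# per-pixel s/n grows by ~41%, and relative noise falls by ~29.3%.
```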

--
gollywop
http://g4.img-dpreview.com/D8A95C7DB3724EC094214B212FB1F2AF.jpg
 
Great Bustard wrote
So, how does image stacking avoid noise stacking?
Noise from light is well modeled by the Poisson Distribution, where the standard deviation is the square root of the signal (where the signal is measured in electrons).
Understood, but that is true for all four donor photos.
Yes (keeping in mind that the signal adds linearly and the noise adds in quadrature).
So, let's say you stack four photos. You've quadrupled the signal but the standard deviation is only doubled. Thus the relative noise (noise/signal) is halved.
So in layman's terms, because noise roams around,
while "genuine" image signal more or less stays put;
it won't persist at the same pixel location
and so won't be stacked to the same degree ?
The signal in four stacked photos is 4S = 4x as great. The noise in four stacked photos adds in quadrature: Sqrt(4N²) = 2N = 2x as great. Thus the relative noise is 2N/4S = 0.5 N/S = half the original relative noise.
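As a tiny worked example of that arithmetic (the numbers are arbitrary, chosen only for illustration): with a per-pixel signal of 10,000 electrons per frame, the photon noise of one frame is 100 electrons, and stacking four frames gives the following.

```python
# Worked example of the four-frame stacking arithmetic; S is an arbitrary test value.
from math import sqrt

S = 10_000                          # per-pixel signal of one frame, in electrons
N = sqrt(S)                         # photon noise of one frame = 100 electrons

stack_signal = 4 * S                # signals add linearly -> 40,000
stack_noise = sqrt(4 * N**2)        # noise adds in quadrature: sqrt(4)*N = 2N = 200

print(N / S)                        # relative noise of one frame: 0.01
print(stack_noise / stack_signal)   # relative noise of the stack: 0.005 (halved)
```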
 
I've browsed over the OP's summary posting. And all I can say is that the posters do not fully understand what they were posting about.

Please, digest the following short fact and restart discussion from there:

"Noise (the signal to noise ratio SNR) is a function of (spatial) frequency".

That's an almost trivial statement for anybody interested in audio, yet only very few people are aware that this is generally true, and that it includes imaging.
No one argues against a smaller pixel being more noisy than a larger pixel, all else equal. The discussion is not about a pixel-for-pixel comparison of photos made with different numbers of pixels, but the noise in the photo as a whole (or, alternatively, the noise over any given proportion of the photo, e.g. the noise of one 2x2 pixel vs the average noise of four 1x1 pixels).
The discussion should be centered around what is relevant or visible in a photo.
All such discussions "should be centered around what is relevant or visible in a photo" when discussing photography.
Relative noise is not relevant, but absolute noise is. However, the same cannot be said for detail, and that's the difference here.
Well, that's simply not true.
When a photo scales down, both detail (absolute) and noise (absolute) scale with it. Detail has a spatial size associated with it (it has features) and is distinctive; random noise has none. When detail scales down, the brain can assimilate the change in scale; however, because random noise is featureless, it cannot. Therefore the brain cannot make the scale relationship with noise as it can with detail, and so it only associates the absolute magnitude of the noise with scale. Of course, camera noise is not truly random and can have some structure to it (grain), but the size relationship is very subtle compared to detail.
Sorry, but a photo with a signal of 1 billion electrons and 10 million electrons of noise (1% noise) is going to look less noisy than a photo with a signal of 10 million electrons and 1 million electrons of noise (10% noise).
We are talking apples and oranges again. We are not talking about different photos captured by different amounts of light. It's about the same photo assessed at different scales. Case in point is the example in your OP. By your own admission they both possess the same SNR and therefore the same relative noise, yet the one with the higher absolute noise appears noisier. Therefore the relative noise is irrelevant when downsampling.
Less technically, the photo made with more light will look less noisy. Of course, if you display the more noisy photo small enough, look at it from far away enough, and/or blur it enough, then the noise will be less apparent in the more noisy photo.
It's pointless to talk about relative noise in photography.
In fact, that is the relevant point when it comes to the appearance of noise in a photo with the natural, and implied, viewing conditions (same display size viewed from the same distance).
But the subject of the thread is exactly that. It is in reference to viewing size since we are talking specifically about downsampling. Therefore your previous claims are out of context.
 
I've browsed over the OP's summary posting. And all I can say is that the posters do not fully understand what they were posting about.

Please, digest the following short fact and restart discussion from there:

"Noise (the signal to noise ratio SNR) is a function of (spatial) frequency".

That's an almost trivial statement for anybody interested in audio, yet only very few people are aware that this is generally true, and that it includes imaging.
No one argues against a smaller pixel being more noisy than a larger pixel, all else equal. The discussion is not about a pixel-for-pixel comparison of photos made with different numbers of pixels, but the noise in the photo as a whole (or, alternatively, the noise over any given proportion of the photo, e.g. the noise of one 2x2 pixel vs the average noise of four 1x1 pixels).
The discussion should be centered around what is relevant or visible in a photo.
All such discussions "should be centered around what is relevant or visible in a photo" when discussing photography.
Relative noise is not relevant, but absolute noise is. However, the same cannot be said for detail, and that's the difference here.
Well, that's simply not true.
When a photo scales down, both detail (absolute) and noise (absolute) scale with it. Detail has a spatial size associated with it (it has features) and is distinctive; random noise has none. When detail scales down, the brain can assimilate the change in scale; however, because random noise is featureless, it cannot. Therefore the brain cannot make the scale relationship with noise as it can with detail, and so it only associates the absolute magnitude of the noise with scale. Of course, camera noise is not truly random and can have some structure to it (grain), but the size relationship is very subtle compared to detail.
Sorry, but a photo with a signal of 1 billion electrons and 10 million electrons of noise (1% noise) is going to look less noisy than a photo with a signal of 10 million electrons and 1 million electrons of noise (10% noise).
We are talking apples and oranges again. We are not talking about different photos captured by different amounts of light. It's about the same photo assessed at different scales.
If you are talking about comparing the noise from a larger portion of the scene to the noise in a smaller portion of the scene, then that is the apples and oranges comparison.
Case in point is the example in your OP. By your own admission they both possess the same SNR and therefore the same relative noise, yet the one with the higher absolute noise appears noisier. Therefore the relative noise is irrelevant when downsampling.
I addressed that in the paragraph immediately below:
Less technically, the photo made with more light will look less noisy. Of course, if you display the more noisy photo small enough, look at it from far away enough, and/or blur it enough, then the noise will be less apparent in the more noisy photo.
^^^
It's pointless to talk about relative noise in photography.
In fact, that is the relevant point when it comes to the appearance of noise in a photo with the natural, and implied, viewing conditions (same display size viewed from the same distance).
But the subject of the thread is exactly that. It is in reference to viewing size since we are talking specifically about downsampling. Therefore your previous claims are out of context.
If you are trying to say that a photo will appear (as opposed to be) less noisy when displayed smaller, viewed from further away, or blurred, then I have no argument with that.
 
You are trying to say that noise can just be blurred away. That is not so.
Yes it can. It gets eradicated by the additive nature of random noise. If I do a running average of a string of random numbers between -10 and +10, the running average converges to 0. Information is permanently lost, not just hidden.
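A quick sketch of that running-average claim (my own illustration; the sample counts are arbitrary):

```python
# Running average of uniform random numbers in [-10, +10] converging toward 0.
import numpy as np

rng = np.random.default_rng(42)
x = rng.uniform(-10, 10, 100_000)
running_mean = np.cumsum(x) / np.arange(1, x.size + 1)

for n in (10, 100, 1_000, 10_000, 100_000):
    print(f"after {n:6d} samples, running average = {running_mean[n - 1]:+.3f}")
# The average shrinks toward 0 roughly as 1/sqrt(n): the random part cancels out.
```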
Let me rephrase: the noise is still there, but it is hidden in the blur.
A good example of this is diffraction. You are trying to say, for example, that 50 MP suffers diffraction more than 12 MP.
No, they suffer the same loss, but both maintain the same signal. You are confusing resolution with signal again. The SNR goes up at the expense of lost resolution.
The per-pixel SNR goes up, not the per-image SNR.
When I take an image from a uniformly illuminated gray card and downsample by 2 with an appropriate low-pass downsampling filter, the per-pixel SNR of every pixel increases. By what measure of per-image SNR does the per-image SNR not also increase?
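For what it's worth, here is a rough simulation of that gray-card case (my own sketch; 2x2 block averaging stands in for a proper low-pass downsampling filter, and the signal level of 400 electrons is arbitrary):

```python
# Uniform patch with photon noise, downsampled x2 by 2x2 block averaging.
import numpy as np

rng = np.random.default_rng(7)
s = 400                                   # mean signal per pixel (electrons)
img = rng.poisson(s, size=(1024, 1024)).astype(float)

# 2x2 block average: each output pixel is the mean of four input pixels.
small = img.reshape(512, 2, 512, 2).mean(axis=(1, 3))

for name, a in (("original", img), ("downsampled x2", small)):
    print(f"{name:15s} mean={a.mean():7.2f}  std={a.std():5.2f}  "
          f"per-pixel SNR={a.mean()/a.std():6.1f}")
# The mean is unchanged, the per-pixel std is roughly halved, so the
# per-pixel SNR roughly doubles.
```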

Dale B. Dalrymple
 
But the subject of the thread is exactly that. It is in reference to viewing size since we are talking specifically about downsampling. Therefore your previous claims are out of context.
If you are trying to say that a photo will appear (as opposed to be) less noisy when displayed smaller, viewed from further away, or blurred, then I have no argument with that.
What would be responsible for the photo appearing less noisy if the noise is actually the same?

I might ask gollywop the same question in regard to his example he posted.
 
Now let's look at the new image as a whole. Each pixel has signal 4s (and noise 2Sqrt(s)). The size of the new image is m/2 x n/2, and so total signal is m/2 x n/2 x 4s = mns, and total noise is Sqrt(mns), just the same as for the original image.
What is responsible for the image as a whole appearing less noisy if the relative noise is the same? In your example for the whole image (not pixel level), a pixel with 4s would appear four times as bright, which makes sense since the same amount of light is compressed into 1/4 the area. However, the reduced image retains the same relative brightness. Is the normalization of the 4s signal during downsampling responsible for the appearance of reduced noise? IOW, to maintain the same relative brightness, the signal is normalized from 4s back to s (by 4s/4). Therefore is the noise actually 2Sqrt(s)/4 = Sqrt(s)/2? That would mean the SNR is not the same as you claimed.

Edit: I did a standard deviation comparison of a grey patch in PS and indeed the SD dropped in half.
 
Now let's look at the new image as a whole. Each pixel has signal 4s (and noise 2Sqrt(s)). The size of the new image is m/2 x n/2, and so total signal is m/2 x n/2 x 4s = mns, and total noise is Sqrt(mns), just the same as for the original image.
What is responsible for the image as a whole appearing less noisy if the relative noise is the same? In your example for the whole image (not pixel level), a pixel with 4s would appear four times as bright, which makes sense since the same amount of light is compressed into 1/4 the area. However, the reduced image retains the same relative brightness. Is the normalization of the 4s signal during downsampling responsible for the appearance of reduced noise? IOW, to maintain the same relative brightness, the signal is normalized from 4s back to s (by 4s/4). Therefore is the noise actually 2Sqrt(s)/4 = Sqrt(s)/2? That would mean the SNR is not the same as you claimed.
Scaling signal and noise by the same factor doesn't change the ratio of signal to noise. 4s/2Sqrt(s) = (4s/4)/(2Sqrt(s)/4) = 2Sqrt(s), and mns/Sqrt(mns) = (mns/4)/(Sqrt(mns)/4) = Sqrt(mns).
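A small numeric check of that point, using arbitrary illustrative values: dividing both the summed signal 4s and its noise 2Sqrt(s) by 4 changes the numbers a histogram reports (the standard deviation halves, as measured in PS above), but not the signal-to-noise ratio.

```python
from math import sqrt

s = 900                                 # original per-pixel signal (electrons), arbitrary
summed_signal, summed_noise = 4 * s, 2 * sqrt(s)               # 2x2 sum: 3600, 60
norm_signal, norm_noise = summed_signal / 4, summed_noise / 4  # normalized back: 900, 15

print(summed_signal / summed_noise)     # 60.0
print(norm_signal / norm_noise)         # still 60.0 -- the ratio is unchanged
print(norm_noise, sqrt(s) / 2)          # 15.0 vs 15.0 -- half the original noise of 30
```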

--
gollywop
http://g4.img-dpreview.com/D8A95C7DB3724EC094214B212FB1F2AF.jpg
 
Now let's look at the new image as a whole. Each pixel has signal 4s (and noise 2Sqrt(s)). The size of the new image is m/2 x n/2, and so total signal is m/2 x n/2 x 4s = mns, and total noise is Sqrt(mns), just the same as for the original image.
What is responsible for the image as a whole appearing less noisy if the relative noise is the same?
It doesn't; at least not when I do it and make the comparison as meaningful as possible.

Here is a 100% crop of an original image followed by a 100% crop of the same region of the x2 downsampled version

original

x2 downsampled

If you view them either above or at their originals, the noise looks pretty much the same to me – particularly if, when viewing the "originals," you view the smaller image at half the distance you use to view the larger.

But, one would really like to view them at the same size. There are, of course, problems in doing this because of the differences due to the downsampling along with the fact that the jpeg artifacts that are created when making and saving the downsampled jpeg version are quite different from those of the original. While the noise is the same, the structure of the noise is different and when you blow the downsampled version up it need not compare well.

The best way to compare at the same size is to do the following:

• take a raw image with good noise, such as in the images above, and open in ACR -> PS.
• duplicate the image in PS
• downsample the duplicate by halving the pixel dimensions
• magnify the original to 100%
• magnify the downsampled duplicate to 200%

And now compare similar regions.

Here are screen shots of the original at 100% and downsampled duplicate at 200%. These have the same pixel dimensions and can be readily compared in their "originals" in the dpr image viewer:

screen shot of original at 100%

screen shot of downsampled at 200%

And here, just as a matter of interest, are the statistics for the above two shots:

statistics for original

statistics for downsampled
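For anyone who would rather script this kind of patch comparison than read it off the Photoshop histogram, here is a rough Python equivalent (my own sketch; the file name "test.jpg" and the crop box are hypothetical, and Pillow's default resampling stands in for Photoshop's):

```python
# Downsample a copy by halving the pixel dimensions, then compare the statistics
# of the same scene region in the original and the downsampled version.
import numpy as np
from PIL import Image

im = Image.open("test.jpg").convert("L")        # grayscale for simplicity
w, h = im.size
small = im.resize((w // 2, h // 2))             # halve the pixel dimensions (bicubic)

box = (100, 100, 400, 400)                      # a fairly uniform region (hypothetical)
orig_patch = np.asarray(im.crop(box), dtype=float)
# The same scene region in the downsampled copy has half the coordinates.
small_patch = np.asarray(small.crop(tuple(v // 2 for v in box)), dtype=float)

for name, patch in (("original 100%", orig_patch), ("downsampled", small_patch)):
    print(f"{name:15s} mean={patch.mean():6.2f}  std={patch.std():5.2f}")
```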

--
gollywop
http://g4.img-dpreview.com/D8A95C7DB3724EC094214B212FB1F2AF.jpg
 
Now let's look at the new image as a whole. Each pixel has signal 4s (and noise 2Sqrt(s)). The size of the new image is m/2 x n/2, and so total signal is m/2 x n/2 x 4s = mns, and total noise is Sqrt(mns), just the same as for the original image.
What is responsible for the image as a whole appearing less noisy if the relative noise is the same?
It doesn't; at least not when I do it and make the comparison as meaningful as possible.

Here is a 100% crop of an original image followed by a 100% crop of the same region of the x2 downsampled version

original

x2 downsampled

If you view them either above or at their originals, the noise looks pretty much the same to me – particularly if, when viewing the "originals," you view the smaller image at half the distance you use to view the larger.

But, one would really like to view them at the same size. There are, of course, problems in doing this because of the differences due to the downsampling along with the fact that the jpeg artifacts that are created when making and saving the downsampled jpeg version are quite different from those of the original. While the noise is the same, the structure of the noise is different and when you blow the downsampled version up it need not compare well.

The best way to compare at the same size is to do the following:

• take a raw image with good noise, such as in the images above, and open in ACR -> PS.
• duplicate the image in PS
• downsample the duplicate by halving the pixel dimensions
• magnify the original to 100%
• magnify the downsampled duplicate to 200%

And now compare similar regions.

Here are screen shots of the original at 100% and downsampled duplicate at 200%. These have the same pixel dimensions and can be readily compared in their "originals" in the dpr image viewer:

screen shot of original at 100%

screen shot of downsampled at 200%

And here, just as a matter of interest, are the statistics for the above two shots:

statistics for original

statistics for downsampled
For completeness and full disclosure, I should note that the above statistics are quickie versions (as noted by the exclamation point in the triangle in the top right corner). Here, then, are the full versions based on the complete pixel count:

full stats for original at 100% (3rd image above)

full stats for downsampled at 200% (4th image above)

And here are the full statistics for the first two images (original and x2 downsampled)

full stats for the original image (top image above)

full info for x2 downsampled image (2nd image above)

--
gollywop
http://g4.img-dpreview.com/D8A95C7DB3724EC094214B212FB1F2AF.jpg
 
But the subject of the thread is exactly that. It is in reference to viewing size since we are talking specifically about downsampling. Therefore your previous claims are out of context.
If you are trying to say that a photo will appear (as opposed to be) less noisy when displayed smaller, viewed from further away, or blurred, then I have no argument with that.
What would be responsible for the photo appearing less noisy if the noise is actually the same?
Limitations in visual acuity.
 
Did I miss something in your presentation (quite probably)? I think I follow it properly ... maybe not.

I am sometimes confused by this type of analysis (lack of detail knowledge etc). Here are three images of the same scene. The full shot is a downsize of the original (4233x3349 pixels down to 1600x1266); the second and third are 100% crops of portions of the original image:

1. Full image cropped to 1600x1266

2. Section of original at 45 degrees to the left above her head - with some shadow detail

3. Section of original at 45 degrees to the left above her head - without shadow detail

For my visual interpretation, when I visually compare the original image with an image downsized to half the linear dimension and then upsized back to the full dimensions, I see a visible drop in noise. I interpret this as "loss of detail", since the image noise is "detail" and the Photoshop image resizing removes high-frequency detail.

When I look at the image mean/std dev/median values, there is no essential change in these values. I interpret this as being caused by the specific tonal spectrum of the image, which does not change ... the noise is "immaterial" since it contributes essentially nothing to the statistics. When I look at your examples in your statistical post, it follows essentially the same minimal-change pattern.

When I look at image 2, there is some tonal variation. The statistical changes before and after downsizing (and upsizing) are quite different.

When I look at image 3, there is no tonal variation other than noise. The statistical changes before and after downsizing (and upsizing) are widely different.

2 & 3 seem to demonstrate "loss of detail" and not a true measure of noise - although the results of image 3 seem much closer to the truth.

I wonder what is the proper way to evaluate and possibly measure image noise? My current speculation is that, within Photoshop, the only way that seems to work is with images that are comprised of constant signal intensity (tone), and the random noise that is part of that tonality. I think this provides an indication but not a truly accurate measure of noise ???
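One common way to put a number on it (a sketch of my own, not a Photoshop feature): pick a patch that should be a single constant tone and treat its mean as the signal and its standard deviation as the noise. Any real detail left inside the patch inflates the "noise" figure, which is exactly the problem described above.

```python
import numpy as np

def patch_noise_stats(patch):
    """patch: 2-D array of pixel values from a visually uniform region."""
    patch = np.asarray(patch, dtype=float)
    signal = patch.mean()
    noise = patch.std()
    return signal, noise, signal / noise        # mean, std, and SNR of the patch

# Example with synthetic data: a flat tone of 120 with Gaussian noise of sigma 5.
rng = np.random.default_rng(3)
fake_patch = 120 + rng.normal(0, 5, size=(200, 200))
print(patch_noise_stats(fake_patch))            # roughly (120.0, 5.0, 24.0)
```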

--
tony
http://www.tphoto.ca
 
It's an excellent question, and, yes, I go along with your notion that the only way one can measure noise in PS is to use a basically uniform area; otherwise, of course, the standard deviation includes variations in signal as well as noise. That's the reason I picked a region that was primarily evening sky. I included some structure just so one could see that the crop was the same.

Here, however, is a 100 % crop from original and x2 downsampled images that includes the same region (you'll have to take my word for it) that is comprised mostly of sky and noise. It's not completely uniform, but it's close. This crop, by the way, is from a completely different region from that I used before.

original

downsampled

and here are the statistics for them

info for original

info for downsampled

I didn't worry too much about the non-noise elements in my original post, because, if the mean and standard deviation remain essentially unchanged due to downsampling in a full image, they will also do so for a "noise-only" sub-portion. I was not attempting to measure the noise, but just to show that downsampling doesn't change the s/n.

--
gollywop
http://g4.img-dpreview.com/D8A95C7DB3724EC094214B212FB1F2AF.jpg
 
I was not attempting to measure the noise, but just to show that downsampling doesn't change the s/n.
Wrong. Downsampling increases S/N. Please see the following link and note the numbers in the images:
It certainly does at the pixel level.
IMHO, the 'per image' number that you are using in another message has no meaning for noise. It is just a single number. A single number has no notion of noise in Poisson stats. (For read noise, one can still make a case that a single measurement has noise on it, though.)

--
Dj Joofa
http://www.djjoofa.com
 
Now let's look at the new image as a whole. Each pixel has signal 4s (and noise 2Sqrt(s)). The size of the new image is m/2 x n/2, and so total signal is m/2 x n/2 x 4s = mns, and total noise is Sqrt(mns), just the same as for the original image.

--
That figure is just a single number for a given image. A single number has no notion of noise in Poisson stats.
 
Now let's look at the new image as a whole. Each pixel has signal 4s (and noise 2Sqrt(s)). The size of the new image is m/2 x n/2, and so total signal is m/2 x n/2 x 4s = mns, and total noise is Sqrt(mns), just the same as for the original image.

--
That figure is just a single number for a given image. A single number has no notion of noise in Poisson stats.
As far as I know, the Poisson is a distribution with a single parameter, λ, which is the mean. The standard deviation is a function of the same parameter, Sqrt(λ). If I have an estimate of λ, then I also have an estimate of its standard deviation.

However, if I'm wrong, can you please tell me where?

thanks,
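For what it's worth, a quick numeric check of that statement (λ = 50 is an arbitrary choice): for simulated Poisson data, the sample mean estimates λ, and the square root of that estimate matches the measured standard deviation.

```python
import numpy as np

rng = np.random.default_rng(11)
lam = 50
samples = rng.poisson(lam, 1_000_000)

print("estimated mean       :", samples.mean())          # ~50
print("sqrt(estimated mean) :", np.sqrt(samples.mean()))  # ~7.07
print("measured std dev     :", samples.std())            # ~7.07
```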
 
