How does "total light" change SNR?

edhannon

Despite all of the explanations in the multitude of posts on this issue, I still do not understand how the total light on a sensor can change the SNR of an image.

Here is a drawing of the model I have in mind for a typical digital camera raw flow:

[Figure: Typical Raw Flow]

Each pixel in the sensor produces a value in the raw file. Its value (and its shot noise) depends on the light falling on that pixel and is independent of the values (and shot noise) of the other pixels.

The raw file is transferred to a computer where a de-mosaic algorithm is applied. The SNR will change depending on the de-mosaic algorithm used.

If down-sampling is applied, that will also result in a change in the SNR. (I personally believe that this is also dependent on the algorithm used.)

I can see how down-sampling can change the SNR as a result of any averaging used in the algorithm. This is also true of the de-mosaic algorithm.

If you ignore the change in SNR due to de-mosaicking and assume that the change in SNR due to down-sampling is proportional to the square root of the ratio of the resolutions regardless of the algorithm, then the total light equation gives the same answer. But this does not prove that total light is changing SNR.
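To see how the two ways of doing the bookkeeping line up, here is a minimal sketch with assumed numbers (shot noise only, no read noise or de-mosaicking):

```python
# A minimal sketch, with assumed numbers (not from any real camera), of why the
# sqrt(resolution-ratio) downsampling rule and the "total light" rule give the
# same answer when only Poisson shot noise is considered.
import numpy as np

photons_per_pixel = 1000        # hypothetical mean photon count per pixel
pixels_hi = 24_000_000          # hypothetical high-resolution sensor
pixels_lo = 6_000_000           # hypothetical output resolution (4x fewer pixels)

# Per-pixel SNR from shot noise alone: mean / sqrt(mean)
snr_pixel = photons_per_pixel / np.sqrt(photons_per_pixel)

# Downsampling rule: SNR improves by the square root of the resolution ratio
snr_downsampled = snr_pixel * np.sqrt(pixels_hi / pixels_lo)

# Total-light rule: SNR at the output resolution is set by the photons pooled
# into each output pixel, i.e. the same total light spread over fewer pixels
photons_per_output_pixel = photons_per_pixel * pixels_hi / pixels_lo
snr_total_light = photons_per_output_pixel / np.sqrt(photons_per_output_pixel)

print(snr_downsampled, snr_total_light)   # both ~63.2
```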

Can someone explain the mechanism by which the "total light" changes the SNR?

--
Ed Hannon
 
You can't.
OK, so in a photo, you can't have the SNR of a single pixel.
As I said above, the direct way to measure is to take several images of a constant calibrated light source and measure the noise as the standard deviation from the steady DC level ADU signal.
How would you ensure that constant calibrated light source put out the same number of photons for each image?
An approximation that is easier is to take the standard deviation of a patch of gray in one image. This assumes that the shot noise of each pixel is approximately the same.
Please explain what is the shot noise of a pixel.
 
I don't believe there are many who would not agree that larger pixels produce better SNR. That has a clear-cut physical explanation and is easily measurable in the raw file.

The issue is, for two cameras with the same pixel size where one has more pixels, where does the gain after normalization come from: the math used in normalizing; or some un-measurable characteristic in the raw file?
The gain comes from the math,
How does math change a physical phenomenon?
 
Are you saying that two of the cameras have the same architecture and pixel size, just that one has a larger sensor with more pixels and that the one with more pixels measures better SNR (by taking standard deviation of a gray section) ?
What I'm saying, Ed, is that the exposure is the same for the two cameras (6D and RX100), and that the two cameras have the same pixel count, the same QE, and, at ISO 6400, about the same read noise.
Not sure I see that in the link.
Can you see it now?
No. Because the 6D has a 24x36 mm sensor and the RX100's is 8.8x13.2 mm.

So the 6D has much larger pixels.

I don't believe there are many who would not agree that larger pixels produce better SNR. That has a clear-cut physical explanation and is easily measurable in the raw file.

The issue is, for two cameras with the same pixel size where one has more pixels, where does the gain after normalization come from: the math used in normalizing; or some un-measurable characteristic in the raw file?
Each "pixel" on my new sensor is going to contain 4 "photosites". Why? Because I'm an offensively wealthy and eccentric CMOS sensor engineer, have a lab at my disposal, and that's just how I want to collect my photons. To be clear, there is no 2x2 pixel binning operation: the underlying circuitry is simply hardwired to extract a single value from each 2x2 pixel group.

Where do you measure the SNR?
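For concreteness, a toy simulation (my own assumed numbers, not the hypothetical sensor above) of what a gray-patch SNR measurement would report per photosite versus per 2x2-combined output value:

```python
# Sketch of how hardwiring a 2x2 combination of Poisson photon counts changes
# the measurable SNR. Numbers are assumed; shot noise only.
import numpy as np

rng = np.random.default_rng(1)
mean_photons = 400                       # assumed mean photon count per photosite

# Simulate a uniformly lit 1000x1000 grid of photosites
photosites = rng.poisson(mean_photons, size=(1000, 1000)).astype(float)

# Hardwired 2x2 combination: one output value per 2x2 group
binned = (photosites[0::2, 0::2] + photosites[1::2, 0::2]
          + photosites[0::2, 1::2] + photosites[1::2, 1::2])

snr_photosite = photosites.mean() / photosites.std()
snr_binned = binned.mean() / binned.std()

print(snr_photosite)   # ~ sqrt(400)  = 20
print(snr_binned)      # ~ sqrt(1600) = 40 (doubled, because 4x the photons per value)
```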
 
Are you saying that two of the cameras have the same architecture and pixel size, just that one has a larger sensor with more pixels and that the one with more pixels measures better SNR (by taking standard deviation of a gray section) ?
What I'm saying, Ed, is that the exposure is the same for the two cameras (6D and RX100), and that the two cameras have the same pixel count, the same QE, and, at ISO 6400, about the same read noise.
Not sure I see that in the link.
Can you see it now?
No. Because the 6D has a 24x36 mm sensor and the RX100's is 8.8x13.2 mm.

So the 6D has much larger pixels.

I don't believe there are many who would not agree that larger pixels produce better SNR. That has a clear-cut physical explanation and is easily measurable in the raw file.

The issue is, for two cameras with the same pixel size where one has more pixels, where does the gain after normalization come from: the math used in normalizing; or some un-measurable characteristic in the raw file?
...is that you are comparing noise at the pixel level as opposed to the image level. And, yes, at the pixel level, the photo made from more pixels, all else equal, will be more noisy, as Lee Jay's photo demonstrates:



[Image: Pixel density test results]


However, in "real life," when viewing a photo, either as displayed on a computer monitor or printed, we cannot resolve individual pixels; the noise in the photo is a function solely of the total amount of light that fell on the sensor and the efficiency of the sensor -- no normalization necessary.
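As a rough sketch of that claim, assuming only shot noise, 100% QE, the same exposure (f-stop and shutter), and the same output size:

```python
# Assumed numbers: how "same exposure, different sensor area" plays out at the
# image level when both photos are viewed at the same output resolution.
import numpy as np

exposure_photons_per_mm2 = 50_000          # assumed photon density at this exposure
area_ff = 24 * 36                          # full-frame area, mm^2
area_1inch = 8.8 * 13.2                    # 1"-type area, mm^2
output_pixels = 8_000_000                  # assumed common viewing resolution

def image_level_snr(sensor_area_mm2):
    # Total light collected by the whole sensor at this exposure
    total_photons = exposure_photons_per_mm2 * sensor_area_mm2
    # Photons ending up in each output pixel when shown at the same size
    photons_per_output_pixel = total_photons / output_pixels
    return photons_per_output_pixel / np.sqrt(photons_per_output_pixel)

print(image_level_snr(area_ff) / image_level_snr(area_1inch))
# ~2.7, i.e. sqrt(area ratio): the larger sensor's advantage is its extra total light
```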
 
I don't believe there are many who would not agree that larger pixels produce better SNR. That has a clear-cut physical explanation and is easily measurable in the raw file.

The issue is, for two cameras with the same pixel size where one has more pixels, where does the gain after normalization come from: the math used in normalizing; or some un-measurable characteristic in the raw file?
The gain comes from the math,
How does math change a physical phenomenon?
It doesn't. It changes its representation.
 
You can't.
OK, so in a photo, you can't have the SNR of a single pixel.
As I said above, the direct way to measure is to take several images of a constant calibrated light source and measure the noise as the standard deviation from the steady DC level ADU signal.
How would you ensure that constant calibrated light source put out the same number of photons for each image?
An approximation that is easier is to take the standard deviation of a patch of gray in one image. This assumes that the shot noise of each pixel is approximately the same.
Please explain what is the shot noise of a pixel.
 
I don't believe there are many who would not agree that larger pixels produce better SNR. That has a clear-cut physical explanation and is easily measurable in the raw file.

The issue is, for two cameras with the same pixel size where one has more pixels, where does the gain after normalization come from: the math used in normalizing; or some un-measurable characteristic in the raw file?
The gain comes from the math,
How does math change a physical phenomenon?
 
I don't believe there are many who would not agree that larger pixels produce better SNR. That has a clear-cut physical explanation and is easily measurable in the raw file.

The issue is, for two cameras with the same pixel size where one has more pixels, where does the gain after normalization come from: the math used in normalizing; or some un-measurable characteristic in the raw file?
The gain comes from the math,
How does math change a physical phenomenon?
 
The true value of the light would be the long-term average photon count. Shot noise occurs because the photons do not arrive at fixed intervals. The arrival times are a random Poisson process. This causes variations in the count over a fixed time period.

Each time I sample a single pixel that has a steady light impacting on it, I will get a slightly different count. So if I take several images of a constant light source, each pixel will get a raw value (ADU) that has a shot noise component. This is the basis of the ISO/EMVA method of measuring the SNR of a sensor.

In a single image, this variation in reading the pixels will result in a variation of the raw value (ADU) between pixels that are receiving the same illumination levels. That variation contains components coming from shot noise, read noise, fixed pattern noises, etc. In a camera that has very low read and pattern noise, the shot noise dominates. Thus the SNR computed from the standard deviation of a gray patch in a single image approximates very closely the shot noise SNR. This is the justification for the DxOMark SNR measurement method.
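A small simulation, with assumed numbers and shot noise only, comparing the two measurement routes described here (repeated exposures of one pixel versus the standard deviation of a uniform gray patch in a single exposure):

```python
# Sketch comparing temporal SNR (one pixel, many exposures) with spatial SNR
# (one exposure, many pixels of a gray patch). Assumed values, shot noise only,
# treating 1 photon ~ 1 ADU for simplicity.
import numpy as np

rng = np.random.default_rng(2)
mean_adu = 500                      # assumed mean signal level

# Route 1: one pixel, many exposures of a constant calibrated source
one_pixel_series = rng.poisson(mean_adu, size=10_000)
snr_temporal = one_pixel_series.mean() / one_pixel_series.std()

# Route 2: one exposure, many pixels of a uniform gray patch
gray_patch = rng.poisson(mean_adu, size=(200, 200))
snr_spatial = gray_patch.mean() / gray_patch.std()

print(snr_temporal, snr_spatial)    # both ~ sqrt(500) ≈ 22.4
```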
Imagine that you knew, somehow, what the long term average photon count for a pixel capturing light coming from a grey card was supposed to be. Imagine you took an exposure using a 60x40 pixel sensor, and each and every one of those 2400 pixels registered exactly 90% of that long term average photon count. What would the signal to noise ratio of the image be?
 
The true value of the light would be the long-term average photon count. Shot noise occurs because the photons do not arrive at fixed intervals. The arrival times are a random Poisson process. This causes variations in the count over a fixed time period.

Each time I sample a single pixel that has a steady light impacting on it, I will get a slightly different count. So if I take several images of a constant light source, each pixel will get a raw value (ADU) that has a shot noise component. This is the basis of the ISO/EMVA method of measuring the SNR of a sensor.

In a single image, this variation in reading the pixels will result in a variation of the raw value (ADU) between pixels that are receiving the same illumination levels. That variation contains components coming from shot noise, read noise, fixed pattern noises, etc. In a camera that has very low read and pattern noise, the shot noise dominates. Thus the SNR computed from the standard deviation of a gray patch in a single image approximates very closely the shot noise SNR. This is the justification for the DxOMark SNR measurement method.
Imagine that you knew, somehow, what the long term average photon count for a pixel capturing light coming from a grey card was supposed to be. Imagine you took an exposure using a 60x40 pixel sensor, and each and every one of those 2400 pixels registered exactly 90% of that long term average photon count. What would the signal to noise ratio of the image be?
Well, in that case the noise would be zero. I suspect you mean to ask if the average value registered by those 2400 pixels was exactly 90% of the long term photon count.
 
If, using the same two cameras, I take an image of a gray card at the same exposure and then measure the SNR by dividing the average gray signal level of the raw file (in ADU) by the noise level (standard deviation of the raw image), I will then get the same SNR for both.
Really? How do you know that?
Or even simpler, take an image with one camera of a gray patch. Then measure the SNR of the raw file. Then measure the SNR of a crop of the image. The SNRs will be the same.
Really? How do you know that?

You've just made a pair of unsupported assertions that on their face seem false to me. I think you should provide some backing.

In a counter to your second assertion, let me offer this example.

Imagine the sensor you are using measures 9 x 6 pixels. Of the 54 pixels, 5 record some value other than the "correct" value. The distribution of those 5 pixels allows the creation of crops that contain 1, 2, 3, 4 or 5 of the wrong value pixels. It seems that each of these crops must have a different SNR.
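A toy version of that 9x6 thought experiment, with assumed numbers, showing that crop SNR estimates swing noticeably when only a few dozen pixels are involved but converge once the patch is large:

```python
# Sketch: SNR measured from crops of a tiny 9x6 frame vs. a many-megapixel
# frame. Assumed uniform illumination, shot noise only.
import numpy as np

rng = np.random.default_rng(3)
mean_adu = 500

def snr(patch):
    return patch.mean() / patch.std()

# Tiny 9x6 sensor: different crops give noticeably different SNR estimates
tiny = rng.poisson(mean_adu, size=(6, 9))
print(round(snr(tiny), 1), round(snr(tiny[:, :4]), 1), round(snr(tiny[:3, :3]), 1))

# Many-megapixel sensor: a crop and the full frame measure essentially the same SNR
big = rng.poisson(mean_adu, size=(4000, 6000))
print(round(snr(big), 2), round(snr(big[:1000, :1000]), 2))
```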
 
Down sampling has no special magic. When you combine pixels, you are lowering resolution and effectively filtering the noise out.

You can do exactly the same by applying a software filter. But the filter is better, because you can use advanced techniques to only filter where you need it. With a smart filter, you can trade off resolution versus noise over the image.

Down sampling = dumb, low-pass filter
Advanced noise reduction = smart, adaptive, calibrated filter.

Tiny pixels with a smart filter will beat large pixels with no filter, so don't worry about down sampling so much. It simply isn't relevant to the noise issue.
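A minimal sketch of that point, assuming a uniform gray frame and shot noise only: a plain 2x2 average halves the noise standard deviation while halving the resolution in each direction.

```python
# Sketch: downsampling by 2x2 averaging acts as a crude low-pass filter that
# reduces the noise standard deviation by a factor of 2. Assumed numbers.
import numpy as np

rng = np.random.default_rng(4)
img = rng.poisson(400, size=(1000, 1000)).astype(float)   # assumed uniform gray frame

down = 0.25 * (img[0::2, 0::2] + img[1::2, 0::2] + img[0::2, 1::2] + img[1::2, 1::2])

print(img.std())    # ~20  (sqrt(400))
print(down.std())   # ~10  (noise halved, resolution halved in each direction)
```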
 
Imagine that you knew, somehow, what the long term average photon count for a pixel capturing light coming from a grey card was supposed to be. Imagine you took an exposure using a 60x40 pixel sensor, and each and every one of those 2400 pixels registered exactly 90% of that long term average photon count. What would the signal to noise ratio of the image be?
Well, in that case the noise would be zero. I suspect you mean to ask if the average value registered by those 2400 pixels was exactly 90% of the long term photon count.
Well, no, I was expecting either that Ed would tell us that the SNR is not 0, because every pixel has a non-0 SNR, or that he might begin to see the flaw in basing his assertions on a single-pixel-SNR argument.
 
If, using the same two cameras, I take an image of a gray card at the same exposure and then measure the SNR by dividing the average gray signal level of the raw file (in ADU) by the noise level (standard deviation of the raw image), I will then get the same SNR for both.
Really? How do you know that?
I believe you will find that DxOMark "Screen" level SNRs are the same for cameras that have the same sensor technology but different sizes.

Or even simpler, take an image with one camera of a gray patch. Then measure the SNR of the raw file. Then measure the SNR of a crop of the image. The SNRs will be the same.
Really? How do you know that?
I ran that experiment and posted the result a while back. The SNRs are the same. The response back then was there must be something wrong with what I did.
 
Imagine that you knew, somehow, what the long term average photon count for a pixel capturing light coming from a grey card was supposed to be. Imagine you took an exposure using a 60x40 pixel sensor, and each and every one of those 2400 pixels registered exactly 90% of that long term average photon count. What would the signal to noise ratio of the image be?
Well, in that case the noise (NSR) would be zero. I suspect you mean to ask if the average value registered by those 2400 pixels was exactly 90% of the long term photon count.
Well, no, I was expecting either that Ed would tell us that the SNR is not 0, because every pixel has a non-0 SNR, or that he might begin to see the flaw in basing his assertions on a single-pixel-SNR argument.
Ah. Apologies for butting in.
 
If, using the same two cameras, I take an image of a gray card at the same exposure and then measure the SNR by dividing the average gray signal level of the raw file (in ADU) by the noise level (standard deviation of the raw image), I will then get the same SNR for both.

Or even simpler, take an image with one camera of a gray patch. Then measure the SNR of the raw file. Then measure the SNR of a crop of the image. The SNRs will be the same.
You are right. Under your assumptions (i.e., same exposure, uniform across all pixels, same pixel size), the SNRs will be (more or less) the same, as long as the smaller array still has a sufficiently large number of pixels. In this case, any increase in SNR will be realized when the images are downsampled to the same size.
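A numerical sketch of this, with assumed values and shot noise only: two arrays with the same pixel size and exposure measure the same per-pixel SNR, and the larger array's advantage only shows up once it is downsampled to the common size.

```python
# Sketch: per-pixel SNR is the same for both arrays; the gain appears after
# normalizing (downsampling) to the same pixel count. Assumed numbers.
import numpy as np

rng = np.random.default_rng(5)
mean_adu = 400

small = rng.poisson(mean_adu, size=(1000, 1000)).astype(float)   # fewer pixels
large = rng.poisson(mean_adu, size=(2000, 2000)).astype(float)   # 4x the pixels, same pixel size

def snr(a):
    return a.mean() / a.std()

print(snr(small), snr(large))      # per-pixel SNR: both ~20

# Downsample the larger array to the smaller array's pixel count (2x2 average)
large_down = 0.25 * (large[0::2, 0::2] + large[1::2, 0::2]
                     + large[0::2, 1::2] + large[1::2, 1::2])
print(snr(large_down))             # ~40: the gain appears after normalization
```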

--
Dj Joofa
http://www.djjoofa.com
 
If, using the same two cameras, I take an image of a gray card at the same exposure and then measure the SNR by dividing the average gray signal level of the raw file (in ADU) by the noise level (standard deviation of the raw image), I will then get the same SNR for both.
Really? How do you know that?
I believe you will find that DxOMark "Screen" level SNRs are the same for cameras that have the same sensor technology but different sizes.
Or even simpler, take an image with one camera of a gray patch. Then measure the SNR of the raw file. Then measure the SNR of a crop of the image. The SNRs will be the same.
Really? How do you know that?
I ran that experiment and posted the result a while back. The SNRs are the same. The response back then was there must be something wrong with what I did.

--
Ed Hannon
http://www.pbase.com/edhannon
What exactly is the point of your argument? That resampling is some synthetic manipulation of the data? It isn't. Sure, the increase in SNR happens when you combine pixels and resample, but the effect is no different than if you combine extra photons from a longer exposure. When you combine photons, the error is reduced. If you combine data from the measurement of photons, the error is reduced. I think what you are missing is that the extra data is coming from extra light. It's not just magically averaging itself out by virtue of some numeric manipulation; it's coming from actual data that isn't otherwise present on the smaller sensor.

It's pretty straightforward.
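A quick sketch of that equivalence, with assumed photon counts: doubling the exposure on one pixel and summing two pixels' worth of data improve the SNR by the same sqrt(2) factor, because both add the same extra light.

```python
# Sketch: "more exposure on one pixel" and "combine two pixels" give the same
# SNR improvement. Assumed numbers, shot noise only.
import numpy as np

rng = np.random.default_rng(6)
n = 200_000
base = 500                                   # assumed photons per pixel per exposure

one_exposure   = rng.poisson(base, n)
long_exposure  = rng.poisson(2 * base, n)                      # twice the light on one pixel
two_pixels_sum = rng.poisson(base, n) + rng.poisson(base, n)   # twice the light across two pixels

for name, x in [("single", one_exposure), ("2x exposure", long_exposure), ("2-pixel sum", two_pixels_sum)]:
    print(name, round(x.mean() / x.std(), 1))
# single ~22.4, the other two both ~31.6 (a sqrt(2) improvement either way)
```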
 
If, using the same two cameras, I take an image of a gray card at the same exposure and then measure the SNR by dividing the average gray signal level of the raw file (in ADU) by the noise level (standard deviation of the raw image), I will then get the same SNR for both.

Or even simpler, take an image with one camera of a gray patch. Then measure the SNR of the raw file. Then measure the SNR of a crop of the image. The SNRs will be the same.
You are right. Under your assumptions (i.e., same exposure, uniform across all pixels, same pixel size), the SNRs will be (more or less) the same, as long as the smaller array still has a sufficiently large number of pixels. In this case, any increase in SNR will be realized when the images are downsampled to the same size.
...where is the utility in saying that the same number of same size pixels with the same QE and read noise will have the same noise? Isn't that all but a tautology?
 
If, using the same two cameras, I take an image of a gray card at the same exposure and then measure the SNR by dividing the average gray signal level of the raw file (in ADU) by the noise level (standard deviation of the raw image), I will then get the same SNR for both.

Or even simpler, take an image with one camera of a gray patch. Then measure the SNR of the raw file. Then measure the SNR of a crop of the image. The SNRs will be the same.
You are right. Under your assumptions (i.e., same exposure, uniform across all pixels, same pixel size), the SNRs will be (more or less) the same, as long as the smaller array still has a sufficiently large number of pixels. In this case, any increase in SNR will be realized when the images are downsampled to the same size.
...where is the utility in saying that the same number of same size pixels with the same QE and read noise will have the same noise? Isn't that all but a tautology?
I think the number of pixels is not assumed to be the same here.

This scenario is similar to an (ergodic) random source with a fixed mean and fixed variance emitting numbers (pixel values). After collecting a long enough series of numbers, the mean and standard deviation will be more or less the same as for a longer series, from a practical viewpoint.
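A sketch of that convergence, with assumed numbers: as the number of pixel values grows, the sample mean and standard deviation (and hence the measured SNR) settle toward the source's true values.

```python
# Sketch: the SNR estimated from a gray patch converges as the patch gets larger.
# Assumed values, shot noise only.
import numpy as np

rng = np.random.default_rng(7)
mean_adu = 500                      # assumed true mean of the (Poisson) source

for n in (50, 5_000, 500_000):
    sample = rng.poisson(mean_adu, n)
    print(n, round(sample.mean() / sample.std(), 2))

print("true shot-noise SNR:", round(np.sqrt(mean_adu), 2))   # ~22.36
```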
 
