How does "total light" change SNR?

edhannon

Despite all of the explanations in the multitude of posts on this issue, I still do not understand how the total light on a sensor can change the SNR of an image.

Here is a drawing of the model I have in mind for a typical digital camera raw flow:

[Image: Typical Raw Flow]

Each pixel in the sensor produces a value in the raw file. Its value (and its shot noise) is dependent on the light falling on that pixel. Its value is independent of the value (and shot noise) of the other pixels.

The raw file is transferred to a computer where a de-mosaic algorithm is applied. The SNR will change depending on the de-mosaic algorithm used.

If down-sampling is applied, that will also result in a change in the SNR. (I personally believe that this is also dependent on the algorithm used.)

I can see how down-sampling can change the SNR as a result of any averaging used in the algorithm. This is also true of the de-mosaic algorithm.

If you ignore the change in SNR due to de-mosaicking and assume that the change in SNR due to down-sampling is proportional to the square root of the ratio of the resolutions in all algorithms, then the total light equation gives the same answer. But this does not prove that total light is changing the SNR.

Can someone explain the mechanism by which the "total light" changes the SNR?

--
Ed Hannon
 
Because signal and noise add up differently, the latter adding up in quadrature.

For the sake of simplicity, let's remove unnecessary parameters. Let's assume we have a two-pixel sensor with no color filter array, and let's only be concerned with the maximum SNR.

Our two-pixel sensor's pixels have a full well capacity of 10,000 electrons and read noise of 5 electrons.

For one pixel the key parameters would be:

signal = 10,000 (let's call it S)

photon shot noise = sqrt(S) (let's call it PN)

read noise = 5 (let's call it RN)

total noise = sqrt(PN^2 + RN^2)

Thus the maximum SNR for one pixel would be 99.88:1



For the sensor - that is two pixels in this case:

signal = 2*S

noise = sqrt(2*PN^2 + 2*RN^2)

Thus the maximum SNR for the sensor of two pixels would be 141.24:1
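
If it helps, here is a quick numeric check of the arithmetic above in Python (the full-well and read-noise figures are just the assumed values from this example):

```python
import math

S = 10_000           # assumed full-well signal per pixel, in electrons
RN = 5               # assumed read noise per pixel, in electrons
PN = math.sqrt(S)    # photon shot noise per pixel, in electrons

# One pixel: independent noise sources add in quadrature
snr_pixel = S / math.sqrt(PN**2 + RN**2)

# Two pixels combined: signal adds linearly, noise adds in quadrature
snr_sensor = (2 * S) / math.sqrt(2 * PN**2 + 2 * RN**2)

print(f"one-pixel SNR:  {snr_pixel:.2f}:1")   # ~99.88:1
print(f"two-pixel SNR:  {snr_sensor:.2f}:1")  # ~141.24:1
```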
 
"Total light" could mean the sum of all the light waves' amplitudes that are converted to stored electron charge when the shutter is open. This is the analog signal level.

The read noise is essentially constant as the sensor analog signal level amplification is constant.

The shot noise is a function of light amplitude.

For some cameras, increasing brightness by electronic amplification (a.k.a. ISO) of the analog signals on their way to the ADC does not significantly increase the read noise. In other cameras the opposite happens at high ISOs.

The ADC can add electronic noise. Again the level is camera-brand dependent. It is not unusual for the ADC read noise to be inconsequential.

If we assume the ISO amplification noise and ADC read noise are too low (compared to the sensor read noise and light-wave shot noise) to be digitized, then the sensor read noise and shot noise are the only important noise sources. The former is constant and the latter is not.

After the data leaves the ADC, the SNR cannot be increased. Clumsy or ill-designed software can decrease the SNR during demosaicing.

Noise can be filtered as well. Non-selective filtering decreases the total SNR. Noisy pixels are averaged with less noisy pixels. This improves the appearance of the noisy pixels at the expense of the others. Often this trade-off works out well. But the total information content (total signal) remains constant.

Downsampling could decrease the total SNR, but it shouldn't. Downsampling should reduce the information content such that the SNR remains constant. If it doesn't, then the total information will remain constant, as it does with filtering.
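
A rough simulation of that point, assuming a uniform, shot-noise-limited patch (the patch size and electron count are arbitrary): averaging 2x2 blocks roughly doubles the pixel-level SNR, while the total signal collected is unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)

mean_e = 1_000  # assumed mean photoelectrons per pixel (shot noise only)
patch = rng.poisson(mean_e, size=(512, 512)).astype(float)

def snr(a):
    return a.mean() / a.std()

# 2x2 box downsample: each output pixel is the average of four input pixels
down = patch.reshape(256, 2, 256, 2).mean(axis=(1, 3))

print(f"pixel SNR before downsampling: {snr(patch):.1f}")   # ~sqrt(1000) = 31.6
print(f"pixel SNR after downsampling:  {snr(down):.1f}")    # ~2x higher
print(f"total electrons: {patch.sum():.0f} vs {down.sum() * 4:.0f}")  # identical
```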

Those who invent a way to increase the total information content (SNR) of data after the measurement is over will become spectacularly wealthy. The commercial benefits outside of photography would be profound. No one has done this so far.
 
For the purposes of this thread, that is.

Already we have posts a) implying photons (use of the term "shot noise") and b) talking about waves and their amplitude.

What is this thread's definition of "total light" and what are its units?
 
For the purposes of this thread, that is.

Already we have posts a) implying photons (use of the term "shot noise") and b) talking about waves and their amplitude.

What is this thread's definition of "total light" and what are its units?
 
We don't need to introduce color processing to understand how total light determines the noise in an image. So, if we assume a monochrome sensor we don't need to include the effect of a demosaicing algorithm on the noise.

I also think it is much simpler to consider signal and noise over a band of detail resolutions normalized to image height. So, for instance, we could talk about the signal and noise in the band of 10 to 100 cycles per image height. This approach will be most familiar to folks already acquainted with Fourier transform analysis. The advantage of this approach is that we see that low-pass filtering just selects which frequency components of both the signal and the noise are present in a measurement, while the SNR at any one frequency is unaffected by the filtering.

Finally, by analyzing the noise in a fixed resolution band for typical images, which are shot-noise limited, we see that the signal-to-noise ratio of a given image will scale with the square root of the total photons captured as the total light captured for that image is changed.

For the unusual case where the noise is dominated by an unchanging read noise level, the image SNR will change in proportion to the total light captured.
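
As a sketch of the band-limited view (a made-up 1-D example, with arbitrary signal frequency, noise level and band edges): an ideal low-pass filter removes high-frequency components of both signal and noise, but leaves the SNR inside a band below the cutoff unchanged.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 4096

x = np.arange(N)
signal = 100 * np.sin(2 * np.pi * 20 * x / N)   # a component at 20 cycles per width
raw = signal + rng.normal(0, 10, N)             # plus white noise

def band_power(a, lo, hi):
    spec = np.abs(np.fft.rfft(a)) ** 2
    return spec[lo:hi].sum()

# Ideal low-pass filter with cutoff at 100 cycles per width
spec = np.fft.rfft(raw)
spec[100:] = 0
filtered = np.fft.irfft(spec, N)

for name, a in [("raw", raw), ("low-passed", filtered)]:
    s = band_power(a, 10, 40)    # band containing the signal component
    n = band_power(a, 41, 99)    # nearby noise-only band
    print(f"{name}: in-band power ratio = {s / n:.2f}")  # identical for both
```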
 
Despite all of the explanations in the multitude of posts on this issue, I still do not understand how the total light on a sensor can change the SNR of an image.

Can someone explain the mechanism by which the "total light" changes the SNR?
Image data is just an array of numbers. So, it is all in the interpretation and definition of noise. The definition of noise usually taken here is the standard deviation of a uniform patch of pixels, i.e., a bunch of pixels that should have identical values, but the measured values are slightly wiggling around some base value. (There are other definitions also, but this one provides some nice formulae, such as the relation between mean value of signal to standard deviation for shot noise.) So, with this notion of noise, the variation of a uniform patch of pixel array in relation to the (average) signal becomes less marked as the amount of light falling on the sensor increases - hence, the notion of a higher SNR. BTW, with other definitions of noise, the SNR value will be different. So SNR is not an inherently fixed property of an image. It is how we define, interpret and measure it.
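
To make that concrete, here is a minimal sketch of that definition (uniform patches simulated as Poisson counts; the electron levels are arbitrary): the measured SNR of a uniform patch tracks the square root of its mean level.

```python
import numpy as np

rng = np.random.default_rng(2)

for mean_e in (100, 400, 1600):  # assumed mean electron counts of uniform patches
    patch = rng.poisson(mean_e, size=(200, 200))
    snr = patch.mean() / patch.std()  # signal = mean, noise = standard deviation
    print(f"mean {mean_e:5d} e-:  measured SNR {snr:5.1f}  (sqrt of mean = {np.sqrt(mean_e):.1f})")
```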

--
Dj Joofa
http://www.djjoofa.com
 
Because signal and noise add up differently, the latter adding up in quadrature.

For the sake of simplicity, let's remove unnecessary parameters. Let's assume we have a two-pixel sensor with no color filter array, and let's only be concerned with the maximum SNR.

Our two-pixel sensor's pixels have a full well capacity of 10,000 electrons and read noise of 5 electrons.

For one pixel the key parameters would be:

signal = 10,000 (let's call it S)

photon shot noise = sqrt(S) (let's call it PN)

read noise = 5 (let's call it RN)

total noise = sqrt(PN^2 + RN^2)

Thus the maximum SNR for one pixel would be 99.88:1

For the sensor - that is two pixels in this case:

signal = 2*S

noise = sqrt(2*PN^2 + 2*RN^2)

Thus the maximum SNR for the sensor of two pixels would be 141.24:1


Can you please calculate the SNR of the image below?



[Image: goofscloseup.jpg]

--
Dj Joofa
 
Despite all of the explanations in the multitude of posts on this issue, I still do not understand how the total light on a sensor can change the SNR of an image.

Can someone explain the mechanism by which the "total light" changes the SNR?
Image data is just an array of numbers. So, it is all in the interpretation and definition of noise. The definition of noise usually taken here is the standard deviation of a uniform patch of pixels, i.e., a bunch of pixels that should have identical values, but the measured values are slightly wiggling around some base value. (There are other definitions also, but this one provides some nice formulae, such as the relation between mean value of signal to standard deviation for shot noise.) So, with this notion of noise, the variation of a uniform patch of pixel array in relation to the (average) signal becomes less marked as the amount of light falling on the sensor increases - hence, the notion of a higher SNR. BTW, with other definitions of noise, the SNR value will be different. So SNR is not an inherently fixed property of an image. It is how we define, interpret and measure it.

--
Dj Joofa
http://www.djjoofa.com
With my definition of SNR in a given resolution band for a given image, we can also derive a simple formula for how the SNR of that image changes as the total light changes. Note that I am not trying to compare the SNR between different images with this simplification, only the same image with different amounts of light captured. With this approach we don't need, and I did not assume, uniform signal over a group of pixels.
Can this definition be used to answer the question I asked in this post:

http://www.dpreview.com/forums/post/54345283
Your question is poorly posed. I have proposed a model which makes testable predictions about the results of experiments: for instance on the relationship between the mean and the difference between successive images taken of a fixed subject with constant illumination. Properly performed, these experiments will demonstrate that the variation in the recorded image is a function of the total light recorded just as I explained.
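
For what it's worth, here is one way that successive-frames experiment could be sketched in simulation (the scene values are made up; a real test would use two raw captures of a static subject): subtracting the two frames cancels the fixed scene, and the residual noise tracks the square root of the mean signal.

```python
import numpy as np

rng = np.random.default_rng(3)

# Fixed subject, constant illumination: both frames share the same expected electron count
scene = 500 + 200 * rng.random((256, 256))   # assumed per-pixel mean, 500 to 700 e-
frame1 = rng.poisson(scene)
frame2 = rng.poisson(scene)

# The difference removes the scene; its std is sqrt(2) times the per-frame noise
noise_est = (frame1 - frame2).std() / np.sqrt(2)

print(f"noise estimated from frame difference: {noise_est:.1f} e-")
print(f"shot-noise prediction sqrt(mean):      {np.sqrt(scene.mean()):.1f} e-")
```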
 
Despite all of the explanations in the multitude of posts on this issue, I still do not understand how the total light on a sensor can change the SNR of an image.

Here is a drawing of the model I have in mind for a typical digital camera raw flow:

[Image: Typical Raw Flow]

Each pixel in the sensor produces a value in the raw file. Its value (and its shot noise) is dependent on the light falling on that pixel. Its value is independent of the value (and shot noise) of the other pixels.

The raw file is transferred to a computer where a de-mosaic algorithm is applied. The SNR will change depending on the de-mosaic algorithm used.

If down-sampling is applied, that will also result in a change in the SNR. (I personally believe that this is also dependent on the algorithm used.)

I can see how down-sampling can change the SNR as a result of any averaging used in the algorithm. This is also true of the de-mosaic algorithm.

If you ignore the change in SNR due to de-mosaicking and assume that the change in SNR due to down-sampling is proportional to the square root of the ratio of the resolutions in all algorithms, then the total light equation gives the same answer. But this does not prove that total light is changing the SNR.

Can someone explain the mechanism by which the "total light" changes the SNR?
Let's simplify the situation. Consider two 24 MP sensors, one twice the size (4x the area) of the other, both with the same QE and negligible read noise.

For a given exposure, 4x as much light falls on the larger sensor than the smaller sensor, resulting in half the NSR (twice the SNR).

Simples.
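
Putting rough numbers on that (the per-pixel count is arbitrary; the only assumptions carried over are same exposure, same QE, 4x the pixel area, negligible read noise):

```python
import math

e_small = 2_000        # assumed electrons per pixel on the smaller sensor
e_large = 4 * e_small  # same exposure, 4x the pixel area -> 4x the electrons per pixel

# Shot-noise limited: SNR = signal / sqrt(signal)
snr_small = e_small / math.sqrt(e_small)
snr_large = e_large / math.sqrt(e_large)

print(f"smaller-sensor pixel SNR: {snr_small:.1f}")  # ~44.7
print(f"larger-sensor pixel SNR:  {snr_large:.1f}")  # twice as high, ~89.4
```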
 
Despite all of the explanations in the multitude of posts on this issue, I still do not understand how the total light on a sensor can change the SNR of an image.

Here is a drawing of the model I have in mind for a typical digital camera raw flow:

[Image: Typical Raw Flow]

Each pixel in the sensor produces a value in the raw file. Its value (and its shot noise) is dependent on the light falling on that pixel. Its value is independent of the value (and shot noise) of the other pixels.

The raw file is transferred to a computer where a de-mosaic algorithm is applied. The SNR will change depending on the de-mosaic algorithm used.

If down-sampling is applied, that will also result in a change in the SNR. (I personally believe that this is also dependent on the algorithm used.)

I can see how down-sampling can change the SNR as a result of any averaging used in the algorithm. This is also true of the de-mosaic algorithm.

If you ignore the change in SNR due to de-mosaicking and assume that the change in SNR due to down-sampling is proportional to the square root of the ratio of the resolutions in all algorithms, then the total light equation gives the same answer. But this does not prove that total light is changing the SNR.

Can someone explain the mechanism by which the "total light" changes the SNR?
Let's simplify the situation. Consider two 24 MP sensors, one twice the size (4x the area) of the other, both with the same QE and negligible read noise.

For a given exposure, 4x as much light falls on the larger sensor than the smaller sensor, resulting in half the NSR (twice the SNR).

Simples.
To be exact at any resolution, we need to state that the two sensors are recording the same image except for the amount of light. So, somehow we need the same diffraction effects with a different amount of light- this could be the case if we expose both images at the same base ISO for the two sensors, with the larger sensor using four times the exposure time.

If the two images only need to be the same for the range of resolutions that I am concerned with, then the two images don't necessarily need to have exactly the same physical aperture. In that case, for a subject right on the focus plane, the two cameras might be set to the same exposure time and f-stop, in which case the larger sensor camera again records more photons in proportion to its area.
 
Despite all of the explanations in the multitude of posts on this issue, I still do not understand how the total light on a sensor can change the SNR of an image.

Can someone explain the mechanism by which the "total light" changes the SNR?
Image data is just an array of numbers. So, it is all in the interpretation and definition of noise. The definition of noise usually taken here is the standard deviation of a uniform patch of pixels, i.e., a bunch of pixels that should have identical values, but the measured values are slightly wiggling around some base value. (There are other definitions also, but this one provides some nice formulae, such as the relation between mean value of signal to standard deviation for shot noise.) So, with this notion of noise, the variation of a uniform patch of pixel array in relation to the (average) signal becomes less marked as the amount of light falling on the sensor increases - hence, the notion of a higher SNR. BTW, with other definitions of noise, the SNR value will be different. So SNR is not an inherently fixed property of an image. It is how we define, interpret and measure it.

--
Dj Joofa
http://www.djjoofa.com
With my definition of SNR in a given resolution band for a given image, we can also derive a simple formula for how the SNR of that image changes as the total light changes. Note that I am not trying to compare the SNR between different images with this simplification, only the same image with different amounts of light captured. With this approach we don't need, and I did not assume, uniform signal over a group of pixels.
Can this definition be used to answer the question I asked in this post:

http://www.dpreview.com/forums/post/54345283
Your question is poorly posed. I have proposed a model which makes testable predictions about the results of experiments: for instance on the relationship between the mean and the difference between successive images taken of a fixed subject with constant illumination. Properly performed, these experiments will demonstrate that the variation in the recorded image is a function of the total light recorded just as I explained.
Yes, I agree with that. However, we already know (from several previous models) that variation in a recorded image is a function of the total light recorded, in one way or another. What I see, though, is that while a large number of models are being proposed for how to calculate SNRs, it still seems difficult to calculate the 'SNR' of a given photo that I provide.
I like the simple question posed by Joofa, and of course it all hinges on what we mean by SNR. Noise usually means the std. dev. of a measurement taken repeatedly, in space or time, that should nominally have zero std. dev. Starting with a single image, you have to make many assumptions in order to calculate SNR. I know I couldn't do it without more information or a lot of assumptions.

Then again, I think most of us know what the OP was probably asking, so I was glad to see some guys being helpful here and making just the normal assumptions...Poisson process....shot noise...square root of signal....blah blah blah.

To summarize, it is difficult to say what the SNR of a single image is, but I know it when I see it.
 
Despite all of the explanations in the multitude of posts on this issue, I still do not understand how the total light on a sensor can change the SNR of an image.

Here is a drawing of the model I have in mind for a typical digital camera raw flow:

[Image: Typical Raw Flow]

Each pixel in the sensor produces a value in the raw file. Its value (and its shot noise) is dependent on the light falling on that pixel. Its value is independent of the value (and shot noise) of the other pixels.

The raw file is transferred to a computer where a de-mosaic algorithm is applied. The SNR will change depending on the de-mosaic algorithm used.

If down-sampling is applied, that will also result in a change in the SNR. (I personally believe that this is also dependent on the algorithm used.)

I can see how down-sampling can change the SNR as a result of any averaging used in the algorithm. This is also true of the de-mosaic algorithm.

If you ignore the change in SNR due to de-mosaicking and assume that the change in SNR due to down-sampling is proportional to the square root of the ratio of the resolutions in all algorithms, then the total light equation gives the same answer. But this does not prove that total light is changing the SNR.

Can someone explain the mechanism by which the "total light" changes the SNR?
Let's simplify the situation. Consider two 24 MP sensors, one twice the size (4x the area) of the other, both with the same QE and negligible read noise.

For a given exposure, 4x as much light falls on the larger sensor than the smaller sensor, resulting in half the NSR (twice the SNR).

Simples.
To be exact at any resolution, we need to state that the two sensors are recording the same image except for the amount of light. So, somehow we need the same diffraction effects with a different amount of light- this could be the case if we expose both images at the same base ISO for the two sensors, with the larger sensor using four times the exposure time.
Adding resolution into the mix adds secondary effects, not merely because of diffraction but also because of lens aberrations. For example, a 25 / 1.4 at f/1.4 on mFT vs a 50 / 1.4 at f/2.8 on FF, the latter resolving higher.

Clearly, the photo with higher resolution can trade some or all of that higher resolution for a less noisy photo with noise filtering. How much, of course, depends on the resolution differential.
If the two images only need to be the same for the range of resolutions that I am concerned with, then the two images don't necessarily need to have exactly the same physical aperture. In that case, for a subject right on the focus plane, the two cameras might be set to the same exposure time and f-stop, in which case the larger sensor camera again records more photons in proportion to its area.
Alternatively, the larger sensor system can use the same aperture diameter for the same DOF and diffraction, and a concomitantly longer shutter speed, motion blur permitting.
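
A quick check of the entrance-pupil arithmetic behind that equivalence (using the focal lengths and f-numbers from the example above):

```python
# Same entrance-pupil diameter means the same total light (for the same scene and
# shutter speed), the same DOF, and roughly the same diffraction relative to the frame.
for name, focal_mm, f_number in [("mFT 25mm f/1.4", 25, 1.4),
                                 ("FF  50mm f/2.8", 50, 2.8)]:
    print(f"{name}: entrance pupil = {focal_mm / f_number:.1f} mm")  # ~17.9 mm for both
```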
 
So far there have been a lot of assertions that total light striking the sensor impacts SNR and some explanations of how one would compute the SNR if this were true. There are also a few patently false statements (e.g., that de-mosaicking and down-converting do not change the SNR). But there has been no description of the mechanism that allows total light to impact the SNR after down sampling.

I completely agree that down sampling increases SNR. I also agree that, if you compare two sensors with the same pixel level SNR where one is a crop sensor and the other a FF then the gain in down sampling (assuming averaging algorithm) is given by sqrt(n1/n2). This results from the adding of signal and noise, where correlated signal adds directly and uncorrelated noise adds in quadrature.
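
As a quick check of that sqrt(n1/n2) factor (the pixel counts and starting SNR are hypothetical), assuming uncorrelated noise and an averaging downsample:

```python
import math

n_ff, n_crop = 36e6, 16e6   # hypothetical full-frame and crop pixel counts
pixel_snr = 40.0            # hypothetical (equal) starting pixel-level SNR

gain = math.sqrt(n_ff / n_crop)   # expected SNR gain from averaging down to n_crop pixels
print(f"downsampled FF pixel SNR: {pixel_snr * gain:.1f}  (gain {gain:.2f}x)")
```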

But what I am still looking for is an explanation of how the total light on the sensor changes SNR. How much light strikes the entire sensor does not change the SNR of individual pixels. The raw file reports individual pixels. The value of each pixel (and its shot noise) is independent of the other pixels. What in the raw file changes between a crop sensor and a full frame that have the same pixel level SNR other than the total number of pixels?
 
Despite all of the explanations in the multitude of posts on this issue, I still do not understand how the total light on a sensor can change the SNR of an image.

Here is a drawing of the model I have in mind for a typical digital camera raw flow:

[Image: Typical Raw Flow]
Allowing some previous comments of yours to further contextualize things here:
If you want to account for the improvement in noise that results from re-sampling then do so - but it occurs at re-sampling from application of addition of signals and noise - not in the sensor. So report it as due to re-sampling gain - but not to the total light on the sensor.
I have a simple question for you: if you don't expose, say, a D800 sensor to a greater total amount of light compared to a D7000 sensor, is it still possible to realize a SNR advantage through downsampling?

Downsampling will still improve the pixel-level SNR, and so will some fancy noise reduction software. But such gains are never endless. Rather, the scope for such gains depends on the pixel-level SNR you had before you started. And that depends on the total amount of light that fell on the sensor.

The bottom line is that if you deny that total light is the factor that is giving you this improved scope, then you might as well be on some new-age forum talking about magical incantations, because if there isn't actually something extra in the larger data set, then that is the sort of thing you would need to fall back on instead.
 
So far there have been a lot of assertions that total light striking the sensor impacts SNR and some explanations of how one would compute the SNR if this were true. There are also a few patently false statements (e.g., that de-mosaicking and down-converting do not change the SNR). But there has been no description of the mechanism that allows total light to impact the SNR after down sampling.

I completely agree that down sampling increases SNR. I also agree that, if you compare two sensors with the same pixel level SNR where one is a crop sensor and the other a FF then the gain in down sampling (assuming averaging algorithm) is given by sqrt(n1/n2). This results from the adding of signal and noise, where correlated signal adds directly and uncorrelated noise adds in quadrature.

But what I am still looking for is an explanation of how the total light on the sensor changes SNR. How much light strikes the entire sensor does not change the SNR of individual pixels. The raw file reports individual pixels. The value of each pixel (and its shot noise) is independent of the other pixels. What in the raw file changes between a crop sensor and a full frame that have the same pixel level SNR other than the total number of pixels?
As I said above, consider two 24 MP sensors, one twice the size (4x the area) of the other, both with the same QE and negligible read noise.

So, if both sensors are subject to the same exposure, how will the noise compare and why?
 
So far there have been a lot of assertions that total light striking the sensor impacts SNR and some explanations of how one would compute the SNR if this were true. There are also a few patently false statements (e.g., that de-mosaicking and down-converting do not change the SNR). But there has been no description of the mechanism that allows total light to impact the SNR after down sampling.

I completely agree that down sampling increases SNR. I also agree that, if you compare two sensors with the same pixel level SNR where one is a crop sensor and the other a FF then the gain in down sampling (assuming averaging algorithm) is given by sqrt(n1/n2). This results from the adding of signal and noise, where correlated signal adds directly and uncorrelated noise adds in quadrature.

But what I am still looking for is an explanation of how the total light on the sensor changes SNR. How much light strikes the entire sensor does not change the SNR of individual pixels. The raw file reports individual pixels. The value of each pixel (and its shot noise) is independent of the other pixels. What in the raw file changes between a crop sensor and a full frame that have the same pixel level SNR other than the total number of pixels?
As I said above, consider two 24 MP sensors, one twice the size (4x the area) of the other, both with the same QE and negligible read noise.

So, if both sensors are subject to the same exposure, how will the noise compare and why?
The measured noise of the raw file (standard deviation of a gray patch) will be the same. In fact, if I measure the SNR of a subset of the patch in the raw file and then the whole patch, it will again be the same. But the whole patch gets more total light than the subset.

The only time the SNR will measure differently is if the larger image is downsized to match the smaller. And then the change is only measurable after downsizing.

There is no way to directly measure in the raw file the SNR due to total light. The quoted "total light" or "Print" SNR is actually the measured pixel level SNR adjusted with a factor based on the amount of down sampling.

Again, no one has shown a physical path from the total light from an array of independent pixels read into an array in the raw file to the SNR gain from down sampling. There is ample justification for stating that the act of down sizing increases the SNR but none for stating that this occurs due to the total light on the sensor.
 
