How does "total light" change SNR? II

I just don't understand your problem. Please put me on your ban list.

I'm tired of your gratuitous and ignorant insults.
I'm quite perplexed. Firstly, I don't have a ban list. Secondly, I'm at a loss to see where I have insulted you, gratuitously, ignorantly or otherwise.
 
I just don't understand your problem. Please put me on your ban list.

I'm tired of your gratuitous and ignorant insults.
I'm quite perplexed. Firstly, I don't have a ban list. Secondly, I'm at a loss to see where I have insulted you, gratuitously, ignorantly or otherwise.
Pretty sure I know who he meant to respond to, but that's moot now.

EDIT: Looking at who replied to what, I have to say I'm completely confused. I have no idea what prompted his reply to you.
 
I just don't understand your problem. Please put me on your ban list.

I'm tired of your gratuitous and ignorant insults.
I'm quite perplexed. Firstly, I don't have a ban list. Secondly, I'm at a loss to see where I have insulted you, gratuitously, ignorantly or otherwise.
Pretty sure I know who he meant to respond to, but that's moot now.

EDIT: Looking at who replied to what, I have to say I'm completely confused. I have no idea what prompted his reply to you.
No, me neither. Sometimes when I look back at my posts, I see I've been unintentionally offhand, when I'll apologise, but I can't see anything there. Still might as well apologise anyway, if I've unintentionally caused offence.
 
I just don't understand your problem. Please put me on your ban list.

I'm tired of your gratuitous and ignorant insults.
I'm quite perplexed. Firstly, I don't have a ban list. Secondly, I'm at a loss to see where I have insulted you, gratuitously, ignorantly or otherwise.
Pretty sure I know who he meant to respond to, but that's moot now.

EDIT: Looking at who replied to what, I have to say I'm completely confused. I have no idea what prompted his reply to you.
No, me neither. Sometimes when I look back at my posts, I see I've been unintentionally offhand, when I'll apologise, but I can't see anything there. Still might as well apologise anyway, if I've unintentionally caused offence.
Maybe it was because you had the audacity to respond to (and take apart) the little aphorisms in his signature. The post above in which you did that followed one of his where he, I guess inadvertently, had removed them from his signature and they appeared to be part of the post. That certainly made them fair game and worthy of the response you gave them.

I should note on that score that I took the trouble to PM him some time ago that cannot is one word in his context, not two, but he appears not to have appreciated that. I would have thought that he would have wanted to correct a little mistake he was making in each post. :-)

I didn't bother to go into the situation where the variables in a model have no measurable counterpart but, at best, have loose proxies, in which case the notion of an optimal method for estimating the parameters that relate the data to the model has questionable relevance. But then that doesn't stop people from trying. ;-)

--
gollywop

 
I just don't understand your problem. Please put me on your ban list.

I'm tired of your gratuitous and ignorant insults.
I'm quite perplexed. Firstly, I don't have a ban list. Secondly, I'm at a loss to see where I have insulted you, gratuitously, ignorantly or otherwise.
Pretty sure I know who he meant to respond to, but that's moot now.

EDIT: Looking at who replied to what, I have to say I'm completely confused. I have no idea what prompted his reply to you.
No, me neither. Sometimes when I look back at my posts, I see I've been unintentionally offhand, when I'll apologise, but I can't see anything there. Still might as well apologise anyway, if I've unintentionally caused offence.
Maybe it was because you had the audacity to respond to (and take apart) the little aphorisms in his signature. The post above in which you did that followed one of his where he, I guess inadvertently, had removed them from his signature and they appeared to be part of the post. That certainly made them fair game and worthy of the response you gave them.
Could be that, I didn't see they were his signature, I thought they were part of the post.
I should note on that score that I took the trouble to PM him some time ago that cannot is one word in his context, not two, but he appears not to have appreciated that. I would have thought that he would have wanted to correct a little mistake he was making in each post. :-)

I didn't bother to go into the situation where the variables in a model have no measurable counterpart but, at best, have loose proxies, in which case the notion of an optimal method for estimating the parameters that relate the data to the model has questionable relevance. But then that doesn't stop people from trying. ;-)
It's in their nature.

--

Bob
 
In the case of 2:1 downsampling, the noise is indeed reduced by a factor of 2.
Correct me if I'm wrong, but wasn't this when the area was 4X? So a factor of 2 would be the quadrature. Was it not really 2 then?
I was thinking that bi-linear was kind of like simple averaging but, judging by the amount of humble pie I've already eaten today, that's probably wrong too...
Me too! So which filter is the closest to quadrature then? Sorry, I haven't had time to read through all your good work.
By 2:1 downsampling, I mean measured linearly. So that's four pixels that become one. Quadrature says half the noise: sqrt(4) = 2. Bilinear interpolation gives you half the noise. But change the ratio a bit -- up or down -- and bilinear interpolation is not as good at reducing noise. That's what the graph without the AA filter says.
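For anyone who wants to check the arithmetic, here's a minimal sketch in Python/NumPy (my own illustration, not Jim's code; a plain 2x2 block average stands in for an ideal averaging downsample):

```python
import numpy as np

# 2:1 downsampling measured linearly: four input pixels become one output pixel.
# Quadrature predicts the noise standard deviation drops by sqrt(4) = 2.
rng = np.random.default_rng(0)
noise = rng.normal(0.0, 1.0, size=(1024, 1024))   # unit-sigma noise "image"

# Average each 2x2 block into a single output pixel.
down = noise.reshape(512, 2, 512, 2).mean(axis=(1, 3))

print(noise.std())   # ~1.0
print(down.std())    # ~0.5, i.e. half the noise, as quadrature predicts
```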

Jim
 
Does that help, or just muddy the water?
It just muddies the water, I'm afraid...

You're mixing up these two concepts:

(a) Signal variance considered as noise — e.g. the photon shot noise

(b) The variance of a useful signal

Let's examine these concepts in turn:

===

Concept (a)

When one considers the variation in photon count as "photon shot noise", one implicitly makes an assumption about its (presumably Poissonian) statistics — in particular, its mean value.

In such a mental framework, photon count deviations from the mean value are assumed to be noise.

This is equivalent to saying that we know what the "true" value of a signal — the mean, in this case — should be, and that based on that knowledge, we can then measure the deviations from the true value and call them "noise".

This, in turn, is equivalent to saying that the true value — i.e. the mean — stays the valid reference across all the samples we're examining.

This, in turn, is equivalent to saying that the true value — i.e. the mean — is constant.

This, in turn, is equivalent to saying that we're only considering how our imaging system captures an idealized, constant signal, i.e. some featureless flat field. In the frequency domain, such a featureless, flat field obviously only contains energy at the zero frequency — i.e. DC.
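As a hedged illustration of concept (a), here's a small Python/NumPy sketch of a simulated flat field (the 1000-photon mean is an arbitrary illustrative value):

```python
import numpy as np

# Concept (a): a featureless flat field with a known mean photon count.
# Deviations from that mean are the photon shot noise; for Poisson statistics
# their standard deviation is sqrt(mean).
rng = np.random.default_rng(1)
mean_photons = 1000.0
flat_field = rng.poisson(mean_photons, size=(512, 512))

print(flat_field.mean())                     # ~1000
print(flat_field.std())                      # ~31.6, i.e. ~sqrt(1000)
print(flat_field.mean() / flat_field.std())  # SNR ~ sqrt(mean) ~ 31.6
```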

Concept (b)

Featureless, flat fields that do not contain any non-DC component yield utterly uninteresting pictures.

Real-world, "useful" pictures contain many frequency components.

Consider this simplified model of how one particular subject — e.g. a black and white checkerboard — might have been sampled by a pixel sensor:

Consider this suite of 24 pixel values:

111188881111888811118888

Where "1" corresponds to a dark pixel, and "8" to a bright pixel.

The mean value of these 24 pixels is obviously (1+8)/2 = 4.5

It's hopefully also obvious that the standard deviation of these pixels is 3.5

Now, let's downsample this pixel data to halve the pixel count — i.e. we map the 24 pixels to 12 pixels.

The resulting downsampled pixels should, with most downsampling algorithms, look something like this:

118811881188

The mean value of these 12 pixels is obviously (1+8)/2 = 4.5

It's hopefully also obvious that the standard deviation of these 12 pixels is identical to the 24-pixel example above, and remains 3.5

A proper downsampling algorithm should thus, ideally, preserve the useful variations of the signal; such an algorithm would therefore tend to preserve the variance of the original signal.
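The 24-pixel example above is easy to verify numerically; here's a minimal Python/NumPy sketch, using a plain two-pixel average as a stand-in for "most downsampling algorithms":

```python
import numpy as np

# The 24-pixel "checkerboard": 111188881111888811118888
src = np.array([1, 1, 1, 1, 8, 8, 8, 8] * 3, dtype=float)

# 2:1 downsample by averaging adjacent pairs -> 118811881188
down = src.reshape(12, 2).mean(axis=1)

print(src.mean(), src.std())    # 4.5 3.5
print(down.mean(), down.std())  # 4.5 3.5 -- the useful variance is preserved
```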

===

To summarize, concept (a) applies to constant, featureless DC signals.

Concept (b) applies to real-world signals.

Assuming, as you did with your simulations, that a real-world signal's variance should decrease as a function of the downscaling, without first considering the "useful" frequency component of such a signal, is therefore an incorrect approach.

Incidentally, it's the "useful" frequency component of a signal that should determine the parameters and methods of the bandwidth limiting we apply to a signal if we want to avoid aliasing appearing in the downsampled result.
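To make the aliasing point concrete, here is a hedged sketch (Python, assuming NumPy and SciPy are available; the 1 kHz sample rate and 400 Hz tone are purely illustrative):

```python
import numpy as np
from scipy import signal

# A component above the post-decimation Nyquist frequency aliases if we simply
# drop samples; band-limiting first (as scipy.signal.decimate does) removes it.
fs = 1000.0                       # original sample rate
t = np.arange(0, 1, 1 / fs)
x = np.sin(2 * np.pi * 400 * t)   # 400 Hz tone; Nyquist after 2:1 is only 250 Hz

naive = x[::2]                    # no pre-filter: the 400 Hz tone folds to 100 Hz
proper = signal.decimate(x, 2)    # anti-alias filter first, then downsample

for label, y in (("naive", naive), ("filtered", proper)):
    spec = np.abs(np.fft.rfft(y)) / len(y)
    f = np.fft.rfftfreq(len(y), d=2 / fs)
    print(label, spec[np.argmin(np.abs(f - 100.0))])
# naive: large component at 100 Hz (aliased); filtered: essentially gone
```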

Also note that resampling — be it upsampling or downsampling — is irrelevant to the validity of the equivalence principle.

From a noise point of view, "equivalence" rests on the fundamental — and trivial — physical principle that if sensor A and sensor B have the same resolution, and if sensor A has twice the area of sensor B, then sensor A's pixels will be twice as large as sensor B's. Sensor A's pixels will thus, on average, generate twice as many photoelectrons as sensor B's pixels.

The relative magnitude of both:

a) The Poisson distribution photon count variation

and

b) the electronic circuit noise (reset noise, ADC quantization noise, crosstalk, 1/f noise, etc.)

will therefore be smaller for sensor A's pixels, yielding an improved signal to noise ratio, and therefore better discrimination between useful signal variations and random noise.
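The arithmetic behind that is simple enough to sketch (my own hedged example; the photon count is illustrative, and 100% QE and shot-noise-limited operation are assumed):

```python
import math

# Same exposure (photons per unit area); sensor A's pixels have twice the area
# of sensor B's, so they collect twice the photoelectrons on average.
photons_per_area = 2000.0
area_b = 1.0
area_a = 2.0 * area_b

n_b = photons_per_area * area_b      # mean photoelectrons per pixel, sensor B
n_a = photons_per_area * area_a      # twice as many for sensor A

snr_b = n_b / math.sqrt(n_b)         # shot-noise-limited SNR = sqrt(N)
snr_a = n_a / math.sqrt(n_a)

print(snr_b, snr_a, snr_a / snr_b)   # per-pixel SNR improves by sqrt(2)
```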

It's fortunate that an inept fruitcake like Demosthenes Mateo (dtmateojr) who couldn't even grasp such a trivial notion has finally been (self-)evicted from this forum; he was a shining embodiment of the Dunning-Kruger effect.

:-D
 
Does that help, or just muddy the water?
It just muddies the water, I'm afraid...

You're mixing up these two concepts:

(a) Signal variance considered as noise — e.g. the photon shot noise

(b) The variance of a useful signal

Let's examine these concepts in turn:
Personally, I don't find it useful to consider these as separate concepts. Noise is defined (engineering-wise) as the variance of a signal that one would expect to be constant. Temporal, or spatial.
===

A proper downsampling algorithm should thus, ideally, preserve the useful variations of the signal; such an algorithm would therefore tend to preserve the variance of the original signal.
If only we knew what the original signal was supposed to look like without noise, we'd be all set.
It's fortunate that an inept fruitcake like Demosthenes Mateo (dtmateojr) who couldn't even grasp such a trivial notion has finally been (self-)evicted from this forum; he was a shining embodiment of the Dunning-Kruger effect.

:-D
You know, we should not speak ill of the departed. Calling someone names really degrades your own post. I am not the forum police but let's elevate ourselves out of the mud, ok?
 
Does that help, or just muddy the water?
It just muddies the water, I'm afraid...

You're mixing up these two concepts:

(a) Signal variance considered as noise — e.g. the photon shot noise

(b) The variance of a useful signal

Let's examine these concepts in turn:
Personally, I don't find it useful to consider these as separate concepts. Noise is defined (engineering-wise) as the variance of a signal that one would expect to be constant. Temporal, or spatial.
I think it's application-dependent.

Imagine that I'm measuring the SNR of a digital sampling system to which I inject a sinusoidal test signal. If its frequency is high enough, such a signal would obviously not be constant across the acquired samples.

However, if one knows the amplitude, frequency and phase of the injected signal, one can have a pretty good idea of the value of the digital samples one should expect from the system; deviations can then be computed from what an ideal sampling system would deliver.

A useful operational definition of noise, regardless of whether the signal is constant or not, could e.g. be the deviation, or difference between the sampled result and the idealized one.

Or the variance of a signal (the error signal) that one would expect (or hope) to be zero, i.e. constant, if you will (^^;
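A hedged sketch of that operational definition (Python/NumPy; the tone amplitude, frequency, noise level and sample rate are all illustrative assumptions):

```python
import numpy as np

# SNR of a sampled sinusoid measured against the known, ideal waveform.
# Amplitude, frequency and phase of the injected test tone are assumed known.
rng = np.random.default_rng(2)
n, fs = 4096, 48000.0
t = np.arange(n) / fs
ideal = np.sin(2 * np.pi * 997.0 * t)              # the injected test signal
measured = ideal + rng.normal(0.0, 0.01, size=n)   # sampling system adds noise

error = measured - ideal                           # the "error signal"
snr_db = 10 * np.log10(np.mean(ideal**2) / np.mean(error**2))
print(snr_db)   # ~37 dB with these illustrative numbers
```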

Anyway, the point of my previous post was that a decrease in the signal's variance isn't necessarily an indicator that a downsampling algorithm is well-behaved. From an information-theoretic point of view, for those signals that do not consist solely of a DC component, a decrease in variance after downsampling might actually count as a negative.
===

A proper downsampling algorithm should thus, ideally, preserve the useful variations of the signal; such an algorithm would therefore tend to preserve the variance of the original signal.
If only we knew what the original signal was supposed to look like without noise, we'd be all set.
It's fortunate that an inept fruitcake like Demosthenes Mateo (dtmateojr) who couldn't even grasp such a trivial notion has finally been (self-)evicted from this forum; he was a shining embodiment of the Dunning-Kruger effect.

:-D
You know, we should not speak ill of the departed. Calling someone names really degrades your own post. I am not the forum police but let's elevate ourselves out of the mud, ok?
Very true, but my extreme irritation with the likes of him got the better of me (^^;
 
I don't think I agree with your point. Noise is perfectly well-defined. There is no need to invent new definitions. In some systems, e.g. a digital communications system with a certain modulation, the effect of noise is easy to quantify in terms such as bit error rate. In photography, the effect of noise in a family snapshot vs. a medical image is much harder to quantify. Nonetheless, for purposes of measurement we must refer the noise to some standard which may not be "interesting" from an application point of view, such as an unmodulated carrier, or an average grey level...

J.
 
Does that help, or just muddy the water?
It just muddies the water, I'm afraid...

You're mixing up these two concepts:

(a) Signal variance considered as noise — e.g. the photon shot noise

(b) The variance of a useful signal

Let's examine these concepts in turn:
Personally, I don't find it useful to consider these as separate concepts. Noise is defined (engineering-wise) as the variance of a signal that one would expect to be constant. Temporal, or spatial.
The point is that you can't expect the 'signal' in a real-world image to be constant, because the image is made up of discrete quanta. Having observed and participated in the discussions on these forums for a number of years, I think it is very important, for this audience, to separate them. Even if you make a perfect sensor, with 100% QE and zero read noise, there will still be photon shot noise in the images, so it helps to separate them out. In the end, you find out what can be done with the sensor, assuming technological improvements, and what must be done by the craft of photography: maximising exposure, finding signal processing methods which can reconstruct the visually expected scene, and so on.
===

A proper downsampling algorithm should thus, ideally, preserve the useful variations of the signal; such an algorithm would therefore tend to preserve the variance of the original signal.
If only we knew what the original signal was supposed to look like without noise, we'd be all set.
We cannot know. The original signal is unpredictable and unrepeatable.
It's fortunate that an inept fruitcake like Demosthenes Mateo (dtmateojr) who couldn't even grasp such a trivial notion has finally been (self-)evicted from this forum; he was a shining embodiment of the Dunning-Kruger effect.

:-D
You know, we should not speak ill of the departed. Calling someone names really degrades your own post. I am not the forum police but let's elevate ourselves out of the mud, ok?
I'm waiting for mud wrestling to become an Olympic sport.
 
Personally, I don't find it useful to consider these as separate concepts. Noise is defined (engineering-wise) as the variance of a signal that one would expect to be constant. Temporal, or spatial.
The point is that you can't expect the 'signal' in a real-world image to be constant, because the image is made up of discrete quanta. Having observed and participated in the discussions on these forums for a number of years, I think it is very important, for this audience, to separate them. Even if you make a perfect sensor, with 100% QE and zero read noise, there will still be photon shot noise in the images, so it helps to separate them out. In the end, you find out what can be done with the sensor, assuming technological improvements, and what must be done by the craft of photography: maximising exposure, finding signal processing methods which can reconstruct the visually expected scene, and so on.
There is an <Expected value> for the photon signal from any blackbody at constant temperature. Or, speaking again with subtle humor included....on the average, such a signal is constant.
===

A proper downsampling algorithm should thus, ideally, preserve the useful variations of the signal; such an algorithm would therefore tend to preserve the variance of the original signal.
If only we knew what the original signal was supposed to look like without noise, we'd be all set.
We cannot know. The original signal is unpredictable and unrepeatable.
I think you missed my subtle humor...
 
Personally, I don't find it useful to consider these as separate concepts. Noise is defined (engineering-wise) as the variance of a signal that one would expect to be constant. Temporal, or spatial.
The point is that you can't expect the 'signal' in a real-world image to be constant, because the image is made up of discrete quanta. Having observed and participated in the discussions on these forums for a number of years, I think it is very important, for this audience, to separate them. Even if you make a perfect sensor, with 100% QE and zero read noise, there will still be photon shot noise in the images, so it helps to separate them out. In the end, you find out what can be done with the sensor, assuming technological improvements, and what must be done by the craft of photography: maximising exposure, finding signal processing methods which can reconstruct the visually expected scene, and so on.
There is an <Expected value> for the photon signal from any blackbody at constant temperature. Or, speaking again with subtle humor included....on the average, such a signal is constant.
Indeed, but most of us don't go around photographing black bodies (personally, I'm very happy to given the chance).
===

A proper downsampling algorithm should thus, ideally, preserve the useful variations of the signal; such an algorithm would therefore tend to preserve the variance of the original signal.
If only we knew what the original signal was supposed to look like without noise, we'd be all set.
We cannot know. The original signal is unpredictable and unrepeatable.
I think you missed my subtle humor...
Sorry, subtle humour isn't exactly my thing.
 
I asked this before, never got an answer. How, exactly, does having more photons 'reduce the error in measurement'? It would be a good start to explain first what you think it is being measured.
--
Bob
The information we need from a digital image is a set of parameter estimates for the total luminous flux in two dimensions. The luminous flux is the lumens per sq. meter. Luminous flux is a state of nature. Error is the difference between the state of nature (the true, but unknown, value of the flux) and our estimate of the parameter of interest (lumens per sq. meter).

As an adherent of Bayesian statistics, I prefer to use the term uncertainty in the parameter estimate of interest. Bayesians view parameter estimates in terms of posterior probability density functions, where the peak of the function is the most probable parameter estimate and the width of the PDF is the estimate's uncertainty. I view parameter estimates as knowledge about a state of nature, not as data.
In any event, when the SNR is 1:1 the uncertainty in the state of nature we wish to estimate is high. Or the parameter-estimate error is high. The opposite happens when the SNR is 1000:1.
Clear as korvspad.
Note: If the parameter estimate of interest is luminous flux, the shot noise is not noise. The current physics depicts the fluctuations in total luminous flux as an inherent property of light. If this idea holds up, then what we call shot noise is actually a part of the state of nature we wish to estimate. It is part of the signal. I am not aware of any other case where an intrinsic property of nature (signal) is considered noise. Noise comes from our inability to make perfect (error-free) measurements. Even when the SNR is extremely high, the measurement is not perfect. For instance, manufacturing tolerances mean different measurement devices will yield different parameter estimates even when the SNR is extremely high.
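For readers unfamiliar with the Bayesian framing, here's a hedged sketch (Python with SciPy; the Gamma-Poisson conjugate pair and the broad prior are my illustrative choices, not the poster's method), showing the posterior width shrinking as more photons are counted:

```python
from scipy import stats

# Estimate a Poisson rate (a stand-in for the flux parameter) from a photon
# count, with a conjugate Gamma prior; the posterior is also a Gamma.
prior_a, prior_b = 1.0, 1e-3              # broad, nearly uninformative prior

for count in (10, 1000, 100000):          # low, medium and high SNR observations
    post_a, post_b = prior_a + count, prior_b + 1.0   # unit exposure assumed
    post = stats.gamma(a=post_a, scale=1.0 / post_b)
    print(count, post.mean(), post.std(), post.std() / post.mean())
# relative uncertainty shrinks roughly as 1/sqrt(count), mirroring SNR = sqrt(N)
```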

--

– There is no substitute for signal-to-noise in the raw data
– Signal-to-noise can not be improved post facto
– Given a model, there are optimal methods to estimate the parameters that relate the data to the model
– There are no miracles
 
Does that help, or just muddy the water?
It just muddies the water, I'm afraid...

You're mixing up these two concepts:

(a) Signal variance considered as noise — e.g. the photon shot noise

(b) The variance of a useful signal

Let's examine these concepts in turn:
Personally, I don't find it useful to consider these as separate concepts. Noise is defined (engineering-wise) as the variance of a signal that one would expect to be constant. Temporal, or spatial.
I think it's application-dependent.

Imagine that I'm measuring the SNR of a digital sampling system to which I inject a sinusoidal test signal. If its frequency is high enough, such a signal would obviously not be constant across the acquired samples.

However, if one knows the amplitude, frequency and phase of the injected signal, one can have a pretty good idea of the value of the digital samples one should expect from the system; deviations can then be computed from what an ideal sampling system would deliver.
If you knew the amplitude, frequency, and phase of the injected signal, there would be no need to sample it to determine its amplitude, frequency, and phase.

As Eric said:
If only we knew what the original signal was supposed to look like without noise, we'd be all set.
Even I could write an algorithm to produce a noise-free image under those circumstances. :-)
A useful operational definition of noise, regardless of whether the signal is constant or not, could e.g. be the deviation, or difference between the sampled result and the idealized one.
The minute (second) the "idealized" one got in the picture, it ceased to be operational.
Or the variance of a signal (the error signal) that one would expect (or hope) to be zero, i.e. constant, if you will (^^;

Anyway, the point of my previous post was that a decrease in the signal's variance isn't necessarily an indicator that a downsampling algorithm is well-behaved. From an information-theoretic point of view, for those signals that do not consist solely of a DC component, a decrease in variance after downsampling might actually count as a negative.
===

A proper downsampling algorithm should thus, ideally, preserve the useful variations of the signal; such an algorithm would therefore tend to preserve the variance of the original signal.
If only we knew what the original signal was supposed to look like without noise, we'd be all set.
It's fortunate that an inept fruitcake like Demosthenes Mateo (dtmateojr) who couldn't even grasp such a trivial notion has finally been (self-)evicted from this forum; he was a shining embodiment of the Dunning-Kruger effect.

:-D
You know, we should not speak ill of the departed. Calling someone names really degrades your own post. I am not the forum police but let's elevate ourselves out of the mud, ok?
Very true, but my extreme irritation with the likes of him got the better of me (^^;
Yeah, it's hard simply to forget a bad pain existed.

--
gollywop



 
There is an <Expected value> for the photon signal from any blackbody at constant temperature. Or, speaking again with subtle humor included....on the average, such a signal is constant.
Indeed, but most of us don't go around photographing black bodies (personally, I'm very happy to given the chance).
Of course the sun and almost all thermal light sources besides LEDs are effectively blackbody radiators with a temperature, like 6500K (D65).

I am 99.9% sure you are just making a joke, but after reading your comment below, well....
I think you missed my subtle humor...
Sorry, subtle humour isn't exactly my thing.

--
Bob
 
There is an <Expected value> for the photon signal from any blackbody at constant temperature. Or, speaking again with subtle humor included....on the average, such a signal is constant.
Indeed, but most of us don't go around photographing black bodies (personally, I'm very happy to given the chance).
Of course the sun and almost all thermal light sources besides LEDs are effectively blackbody radiators with a temperature, like 6500K (D65).

I am 99.9% sure you are just making a joke, but after reading your comment below, well....
Ah, yeah, well Bob's joke wasn't exactly subtle. ;-)
I think you missed my subtle humor...
Sorry, subtle humour isn't exactly my thing.

--
Bob


--
gollywop



 
There is an <Expected value> for the photon signal from any blackbody at constant temperature. Or, speaking again with subtle humor included....on the average, such a signal is constant.
Indeed, but most of us don't go around photographing black bodies (personally, I'm very happy to given the chance).
Of course the sun and almost all thermal light sources besides LEDs are effectively blackbody radiators with a temperature, like 6500K (D65).

I am 99.9% sure you are just making a joke, but after reading your comment below, well....
I'm going to sneak in under the 0.1%. We aren't photographing the light sources, we're photographing what they reflect off. That process of reflection adds some information about those objects modulated into the noise background, but when we capture the resultant radiation, we haven't any way of knowing what was the 'average' expected value due to the light source and what was the information added in the reflection. And as far as the sensor is concerned, it is all the 'signal' - it's a bit tough expecting a sensor to be equipped with the prior knowledge required to separate the information and noise, so as far as the sensor is concerned, anything incident on it is 'signal'. Any additional 'noise' it adds is 'noise'.

--

Bob
 
There is an <Expected value> for the photon signal from any blackbody at constant temperature. Or, speaking again with subtle humor included....on the average, such a signal is constant.
Indeed, but most of us don't go around photographing black bodies (personally, I'm very happy to given the chance).
Of course the sun and almost all thermal light sources besides LEDs are effectively blackbody radiators with a temperature, like 6500K (D65).

I am 99.9% sure you are just making a joke, but after reading your comment below, well....
I'm going to sneak in under the 0.1%. We aren't photographing the light sources, we're photographing what they reflect off. That process of reflection adds some information about those objects modulated into the noise background, but when we capture the resultant radiation, we haven't any way of knowing what was the 'average' expected value due to the light source and what was the information added in the reflection. And as far as the sensor is concerned, it is all the 'signal' - it's a bit tough expecting a sensor to be equipped with the prior knowledge required to separate the information and noise, so as far as the sensor is concerned, anything incident on it is 'signal'. Any additional 'noise' it adds is 'noise'.
Doesn't matter what transmission, reflection, absorption etc. processes the signal subsequently travels through. There is still an <expected value>.
 
 
