How does "total light" change SNR? II

There is a difference here between a sensel in a camera and a microphone. Both are single devices, but the mic gives a sequence of numbers (in time) which can be called a signal. The sensel gives one number, which is better called a measurement or estimate.
Since the ADC digitizes a DC voltage, I can not accept that the sensor does not generate signals.

– There is no substitute for signal-to-noise in the raw data
– Signal-to-noise can not be improved post facto
– Given a model, there are optimal methods to estimate the parameters that relate the data to the model
– There are no miracles
 
More pixels in the same area will give less accuracy in the measurement from each pixel, because each receives fewer photons, and therefore there is more random variation in the measurements across an array of pixels. That is, more noise, most visible in smooth areas such as sky.
More pixels will sample the scene more accurately, except when the light is so low that the read noise dwarfs the photon noise, as larger pixels tend to have less read noise per proportion of the photo, which, of course, does not happen for photos involving the sky except for night photos.
I think he was referring to more pixels in the same area and, by implication, smaller pixels having, on average, more shot noise, presumably expressed as a ratio. Unless he's lying, of course . .
The shot noise will be the same for the same proportion of the photo at the same spatial frequency.
You seem to have gone off at a tangent: DC talks about pixels with not a hint of 'proportion of the photo' nor of 'spatial frequency'. I know you are not given to straw man arguments but I suppose a little obfuscation doesn't hurt?

Consider a largish pixel receiving, um, 49,000 photons (100% QE of course). Consider pixels half the size therefore receiving 7,000 photons each (same QE). Is the uncertainty in the big one about 0.45%? For the little one, is it not 1.2%? Or standard deviations or however that's done?

Maybe we could discuss that? Of course one image is bigger on your screen than the other. Hmm . . . .

So do we now have to account for output image size and do we now have to introduce re-sampling into the equation?
 
More pixels in the same area will give less accuracy in the measurement from each pixel, because each receives fewer photons, and therefore there is more random variation in the measurements across an array of pixels. That is, more noise, most visible in smooth areas such as sky.
More pixels will sample the scene more accurately, except when the light is so low that the read noise dwarfs the photon noise, as larger pixels tend to have less read noise per proportion of the photo, which, of course, does not happen for photos involving the sky except for night photos.
I think he was referring to more pixels in the same area and, by implication, smaller pixels having, on average, more shot noise, presumably expressed as a ratio. Unless he's lying, of course . .
The shot noise will be the same for the same proportion of the photo at the same spatial frequency.
You seem to have gone off at a tangent: DC talks about pixels with not a hint of 'proportion of the photo' nor of 'spatial frequency'. I know you are not given to straw man arguments but I suppose a little obfuscation doesn't hurt?
Please cut the attitude. Yes, the shot noise per pixel will be higher -- no one says or implies otherwise.
Consider a largish pixel receiving, um, 49,000 photons (100% QE of course). Consider pixels half the size therefore receiving 7,000 photons each (same QE). Is the uncertainty in the big one about 0.45%? For the little one, is it not 1.2%? Or standard deviations or however that's done?

Maybe we could discuss that? Of course one image is bigger on your screen than the other. Hmm . . . .
The natural and implicit conditions are viewing the photo at the same size from the same distance.
So do we now have to account for output image size and do we now have to introduce re-sampling into the equation?
Always, since that is how we view photos.
 
More pixels in the same area will give less accuracy in the measurement from each pixel, because each receives fewer photons, and therefore there is more random variation in the measurements across an array of pixels. That is, more noise, most visible in smooth areas such as sky.
More pixels will sample the scene more accurately, except when the light is so low that the read noise dwarfs the photon noise, as larger pixels tend to have less read noise per proportion of the photo, which, of course, does not happen for photos involving the sky except for night photos.
I think he was referring to more pixels in the same area and, by implication, smaller pixels having, on average, more shot noise, presumably expressed as a ratio. Unless he's lying, of course . .
The shot noise will be the same for the same proportion of the photo at the same spatial frequency.
You seem to have gone off at a tangent: DC talks about pixels with not a hint of 'proportion of the photo' nor of 'spatial frequency'. I know you are not given to straw man arguments but I suppose a little obfuscation doesn't hurt?
Please cut the attitude. Yes, the shot noise per pixel will be higher -- no one says or implies otherwise.
Sorry, comment withdrawn - was supposed to have been gentle sarcasm. My bad.
Consider a largish pixel receiving, um, 49,000 photons (100% QE of course). Consider pixels half the size therefore receiving 7,000 photons each (same QE). Is the uncertainty in the big one about 0.45%? For the little one, is it not 1.2%? Or standard deviations or however that's done?

Maybe we could discuss that? Of course one image is bigger on your screen than the other. Hmm . . . .
The natural and implicit conditions are viewing the photo at the same size from the same distance.
Aha! Now it is clear. So, for example, what is the effect of down-sampling the larger image to the same size as the smaller? Is that 1.2% above reduced by sqrt(2) to 0.85% assuming a simple bi-linear reduction?

I am not being deliberately obtuse, that is a genuine question seeking an answer.
So do we now have to account for output image size and do we now have to introduce re-sampling into the equation?
Always, since that is how we view photos.
Thanks for the explanations, which go a long way to explaining the differing assertions in this thread.

--
Cheers,
Ted
 
More pixels in the same area will give less accuracy in the measurement from each pixel, because each receives fewer photons, and therefore there is more random variation in the measurements across an array of pixels. That is, more noise, most visible in smooth areas such as sky.
More pixels will sample the scene more accurately, except when the light is so low that the read noise dwarfs the photon noise, as larger pixels tend to have less read noise per proportion of the photo, which, of course, does not happen for photos involving the sky except for night photos.
I think he was referring to more pixels in the same area and, by implication, smaller pixels having, on average, more shot noise, presumably expressed as a ratio. Unless he's lying, of course . .
The shot noise will be the same for the same proportion of the photo at the same spatial frequency.
You seem to have gone off at a tangent: DC talks about pixels with not a hint of 'proportion of the photo' nor of 'spatial frequency'. I know you are not given to straw man arguments but I suppose a little obfuscation doesn't hurt?
Please cut the attitude. Yes, the shot noise per pixel will be higher -- no one says or implies otherwise.
Sorry, comment withdrawn - I forgot the smiley. My bad.
Ah. No worries!
Consider a largish pixel receiving, um, 49,000 photons (100% QE of course). Consider pixels half the size therefore receiving 7,000 photons each (same QE). Is the uncertainty in the big one about 0.45%? For the little one, is it not 1.2%? Or standard deviations or however that's done?

Maybe we could discuss that? Of course one image is bigger on your screen than the other. Hmm . . . .
The natural and implicit conditions are viewing the photo at the same size from the same distance.
Aha! Now it is clear.
So do we now have to account for output image size and do we now have to introduce re-sampling into the equation?
Always, since that is how we view photos.
Thanks for the explanations, which go a long way to explaining the differing assertions in this thread.
I keep embedding Lee Jay's photo which really tells the whole story, and tells it well:

Pixel density test results.jpg


When viewed at extreme enlargements, the middle column shows that more smaller pixels will be more noisy than fewer larger pixels (all else equal), but that this noise will be at a higher frequency. For scenes with detail, the more noisy photo most certainly has the "higher IQ", but if we were looking at a scene with little to no detail (e.g. sky), it would be "lower IQ".

However, the third column shows what happens when we apply noise filtering to the photo made with more pixels (again, all else equal). Here, we see that the photo made from more pixels is superior in every respect.

So, with the exception of very low light scenes where read noise is significant (since more smaller pixels typically have more read noise per proportion of the photo than fewer larger pixels for sensors of the same generation), more smaller pixels will deliver the superior photo, although noise filtering may be needed to give the optimum balance of noise vs detail.
 
Jim,

Yes, Schottky was the first scientist to study and characterize shot noise. For decades this phenomenon was thought to originate from what goes on in the photodiodes, i.e. measurement uncertainty.

Recently experimental results showed shot noise is not related to the photodiodes but is an inherent property of light. (link ).
That page is dated 2013. I think you'll find that those authors did not discover that photon shot noise is 'an inherent property of light'. That was already a well known fact (I don't know who 'discovered' it, but it was already being talked about by Emil Martinec and many others when I joined these forums in 2007).
Schottky was wrong about the source of the signal fluctuations.
Schottky wasn't talking about image sensors. He was talking about electron shot noise in semiconductors, we're talking about photon shot noise. The statistics are the same, the carrier is different.
But Schottky did follow Occam's Razor by invoking the simplest explanation given the data and all of his prior knowledge. He was wrong for the right reasons. This is all most scientists can hope for long after their work is published.

After decades of using the terms shot noise and photon noise, there is no hope the rigorous statistical definition of noise (measurement error or parameter estimate uncertainty) will ever be abandoned. Except for certain experiments, it probably doesn't matter.

I hadn't considered that photon noise really isn't noise until this thread. The signal fluctuations are a state-of-nature. The total model for the raw file information content should include the photon noise in the signal parameter terms, not in the noise parameter terms. It is a coincidence photon noise behaves similarly to measurement noise.
All that is true, except that the shot noise recorded by the sensor isn't exactly the shot noise in the photons, because there is another random process going on, the conversion of photons to electrons. Current sensors have about a 50% conversion rate (if the photon makes its way through the CFA) and that also is random.
 
Jim,

Yes, Schottky was the first scientist to study and characterize shot noise. For decades this phenomenon was thought to originate from what goes on in the photodiodes, i.e. measurement uncertainty.

Recently experimental results showed shot noise is not related to the photodiodes but is an inherent property of light. (link ).
That page is dated 2013. I think you'll find that those authors did not discover that photon shot noise is 'an inherent property of light'. That was already a well known fact (I don't know who 'discovered' it, but it was already being talked about by Emil Martinec and many others when I joined these forums in 2007).
Schottky was wrong about the source of the signal fluctuations.
Schottky wasn't talking about image sensors. He was talking about electron shot noise in semiconductors, we're talking about photon shot noise. The statistics are the same, the carrier is different.
But Schottky did follow Occam's Razor by invoking the simplest explanation given the data and all of his prior knowledge. He was wrong for the right reasons. This is all most scientists can hope for long after their work is published.

After decades of using the terms shot noise and photon noise, there is no hope the rigorous statistical definition of noise (measurement error or parameter estimate uncertainty) will ever be abandoned. Except for certain experiments, it probably doesn't matter.

I hadn't considered that photon noise really isn't noise until this thread. The signal fluctuations are a state-of-nature. The total model for the raw file information content should include the photon noise in the signal parameter terms, not in the noise parameter terms. It is a coincidence photon noise behaves similarly to measurement noise.
All that is true, except that the shot noise recorded by the sensor isn't exactly the shot noise in the photons, because there is another random process going on, the conversion of photons to electrons. Current sensors have about a 50% conversion rate (if the photon makes its way through the CFA) and that also is random.
Either way, the photoelectrons counted by a pixel still follow the Poisson Distribution. Chris (cpw) did a derivation of that a while back, but I don't have the link handy.
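A quick way to convince yourself of that numerically is a minimal sketch in plain numpy (the QE of 0.5 and the mean photon count are assumed values, purely for illustration): thinning Poisson photon arrivals with a per-photon detection probability leaves the photoelectron counts Poisson, just with the mean scaled by the QE.

```python
import numpy as np

rng = np.random.default_rng(1)
mean_photons, qe, trials = 10_000, 0.5, 200_000    # assumed illustrative values

photons = rng.poisson(mean_photons, trials)        # photons arriving at one pixel per exposure
electrons = rng.binomial(photons, qe)              # each photon detected with probability QE

# For a Poisson distribution the mean equals the variance; both come out
# near qe * mean_photons, so the photoelectron counts are still Poisson.
print(electrons.mean(), electrons.var())
```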
 
However you decide to define what the sensor measures, it arises from a state of nature.
It's not a phrase that I like very much. For a start, I'm not sure what 'state' means when applied to nature as you are applying it - the 'state' seems to depend on how you observe it.
I can not wrap my mind around rejecting the concept that the total light that exits a lens is created by nature. That light has energy. The energy has a true, single value. The energy is a state of nature.

At any rate, the concept of a state-of-nature originates from the work of Edwin T. Jaynes. Jaynes was the Wayman Crow Distinguished Professor of Physics at Washington University. I wonder how his "weasly way of appearing different" ever made it to press in peer-reviewed scientific journals?
Most peer-reviewed physics journals are all about weasly ways of appearing different. One of the things that you need to realise about Jaynes's 'state of nature' is that it is not a 'state of nature' - it is a conceptual abstraction - a 'state of nature' is an infinite set of integers. Here is your basic problem. At some stage, when the GUT is finally constructed, we might be able to determine that there is a singular 'state of nature' corresponding to some observed phenomenon. In that sense it is a useful abstraction, but what it isn't, is how the universe functions. It is a model of how it functions.
– There is no substitute for signal-to-noise in the raw data
I'd agree with that.
– Signal-to-noise can not be improved post facto
Given some priors, it can. That's how noise reduction works, which definitely does improve signal-to-noise ratio, though at other costs (see the sketch at the end of this post).
– Given a model, there are optimal methods to estimate the parameters that relate the data to the model
Sure, but I'm not sure what that applies to here.
– There are no miracles
Indeed there aren't. And in relation to this wider discussion, there has been no process described by which 'downsampling' would improve SNR.
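As a minimal numerical illustration of the 'given some priors' point above (plain numpy, with all values assumed): if we know beforehand that a patch is smooth, averaging neighbouring pixels after capture does raise the SNR, at the cost of resolution wherever that prior is wrong.

```python
import numpy as np

rng = np.random.default_rng(0)
signal = 1000.0                                      # assumed true mean level (arbitrary units)
patch = rng.poisson(signal, size=(512, 512)).astype(float)

def box3(img):
    """3x3 box average built from shifted copies (edges cropped for simplicity)."""
    s = sum(np.roll(np.roll(img, dy, 0), dx, 1)
            for dy in (-1, 0, 1) for dx in (-1, 0, 1)) / 9.0
    return s[1:-1, 1:-1]

print("SNR before:", signal / patch.std())           # ~ sqrt(1000), about 32
print("SNR after: ", signal / box3(patch).std())     # roughly 3x higher on this smooth patch
```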
 
Jim,

Yes, Schottky was the first scientist to study and characterize shot noise. For decades this phenomenon was thought to originate from what goes on in the photodiodes, i.e. measurement uncertainty.

Recently experimental results showed shot noise is not related to the photodiodes but is an inherent property of light. (link ).
That page is dated 2013. I think you'll find that those authors did not discover that photon shot noise is 'an inherent property of light'. That was already a well known fact (I don't know who 'discovered' it, but it was already being talked about by Emil Martinec and many others when I joined these forums in 2007).
Schottky was wrong about the source of the signal fluctuations.
Schottky wasn't talking about image sensors. He was talking about electron shot noise in semiconductors, we're talking about photon shot noise. The statistics are the same, the carrier is different.
But Schottky did follow Occam's Razor by invoking the simplest explanation given the data and all of his prior knowledge. He was wrong for the right reasons. This is all most scientists can hope for long after their work is published.

After decades of using the terms shot noise and photon noise, there is no hope the rigorous statistical definition of noise (measurement error or parameter estimate uncertainty) will ever be abandoned. Except for certain experiments, it probably doesn't matter.

I hadn't considered that photon noise really isn't noise until this thread. The signal fluctuations are a state-of-nature. The total model for the raw file information content should include the photon noise in the signal parameter terms, not in the noise parameter terms. It is a coincidence photon noise behaves similarly to measurement noise.
All that is true, except that the shot noise recorded by the sensor isn't exactly the shot noise in the photons, because there is another random process going on, the conversion of photons to electrons. Current sensors have about a 50% conversion rate (if the photon makes its way through the CFA) and that also is random.
Either way, the photoelectrons counted by a pixel still follow the Poisson Distribution. Chris (cpw) did a derivation of that a while back, but I don't have the link handy.
Yes, they do - but the Poisson distribution is in part down to what's going on in the pixel, as we know. Apply the same stimulus to two sensors identical apart from quantum efficiency, and you get a different distribution. Or, another thought experiment - suppose one made a photon gun which could shoot a known number of photons into each pixel (I can't think of how the technology would work, because you can't count the photons without collapsing their wave function). Even then, you'd see a Poisson distribution, unless the sensor had 100% QE.
 
And the amplitude of noise in the raw image is independent of the "total light".
Is that so? If we take two photos of the same scene with the same camera and lens, one at f/2.8 1/100 and the other at f/5.6 1/100, and display them at the same size and same brightness, which is more noisy and why?
The greater exposure gives more photons on each pixel, reducing the error in measurement. Therefore there is less noise across the image.
It doesn't matter how many pixels that greater amount of light is distributed over. More pixels simply mean greater accuracy in sampling the signal. More light means the sample is closer to the mean signal.
More pixels in the same area will give less accuracy in the measurement from each pixel, because it receives fewer photons, and therefore more random variation in the measurements across an array of pixels. That is, more noise.
The question here is what you mean by 'accuracy' in the measurement from each pixel. First you have to decide what it is that you think the pixel is measuring - if it is the number of photons, then there's not much reason to think that small pixels are less accurate at counting photons than big pixels.
During my time, I have specified and purchased lots of Instrumentation. In that field, 'accuracy' is an important metric. It is normally expressed as a percentage or, occasionally, as actual units. Sometimes bits are tacked on to account for effects of temperature and such.

It should be so simple to define 'accuracy' for a photometric sensor, especially in this knowledgeable Forum although I, for one, will not attempt it. Enough confusion as it is ;-)
It is quite simple. Sensors (at least, pixels) are photon counters. They basically suffer from two kinds of inaccuracy. One is non-linearity - if you know what it is, it can be corrected. The other is electronic noise, which is independent of the reading; that inaccuracy is called read noise. When it comes to a sensor, there is a third inaccuracy, which is that the scale factor (output to photons) is not exactly the same for every pixel. This inaccuracy is called 'Pixel Response Non-Uniformity' or 'PRNU'. Again, it can be corrected if every pixel is characterised and an individual correction applied. Often it runs in rows and columns, and a simpler option, which most cameras adopt, is characterising line by line and column by column and applying those corrections. But really, there is no reason to suspect that large pixels are more accurate photon counters than small ones, unless they fail to be engineered such that read noise is proportional to full well capacity.
It's just surprising the amount of discussion that has been generated on that one single factor.
There are two competing theses. Thesis one is that the major noise in photos is photon shot noise, which depends on the number of photons counted and not on the 'accuracy' of the pixel. The second thesis is that the major noise in photos is pixel inaccuracy. In effect what they are saying is that read noise dominates shot noise. It's demonstrably untrue, but that doesn't stop the case being argued with some passion.
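To put rough numbers on the first thesis, here is a back-of-envelope per-pixel model (the QE and read-noise figures are assumed, not taken from any particular sensor): shot noise grows as the square root of the detected electrons while read noise is a fixed floor, so shot noise dominates everywhere except deep shadow.

```python
import math

def pixel_snr(photons, qe=0.5, read_noise_e=3.0):
    """SNR of a single pixel: Poisson shot noise plus Gaussian read noise,
    added in quadrature (assumed QE and read-noise values)."""
    electrons = qe * photons
    return electrons / math.sqrt(electrons + read_noise_e ** 2)

for n in (10, 100, 1_000, 10_000, 49_000):
    print(f"{n:>6} photons -> SNR {pixel_snr(n):6.1f}")
```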
 
Consider a largish pixel receiving, um, 49,000 photons (100% QE of course). Consider pixels half the size therefore receiving 7,000 photons each (same QE). Is the uncertainty in the big one about 0.45%? For the little one, is it not 1.2%? Or standard deviations or however that's done?

Maybe we could discuss that? Of course one image is bigger on your screen than the other. Hmm . . . .
The natural and implicit conditions are viewing the photo at the same size from the same distance.
Aha! Now it is clear. So, for example, what is the effect of down-sampling the larger image to the same size as the smaller? Is that 1.2% above reduced by sqrt(2) to 0.85% assuming a simple bi-linear reduction?

I am not being deliberately obtuse, that is a genuine question seeking an answer.
Ted, two things.

First, wouldn't the half-size (I'm assuming you mean half the pitch) pixels get a quarter as much light, not a seventh as much? And the photon-noise-induced relative uncertainty in the big pixel case would then be 0.0045, as you said, but, in the case of the smaller pixel, it would be twice that, or sqrt(12250)/12250 = 0.009.
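A quick check of those figures (assuming pure photon shot noise and 100% QE, as in Ted's example): the relative uncertainty of a count of N photons is sqrt(N)/N = 1/sqrt(N).

```python
import math

for n in (49_000, 12_250, 7_000):
    print(f"{n:>6} photons: relative uncertainty {1 / math.sqrt(n):.2%}")
# 49,000 -> ~0.45%,  12,250 -> ~0.90%,  7,000 -> ~1.20%
```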

Second, the reduction of noise upon bilinear interpolation downsampling is not a simple function of the sampling ratio, as is indicated by this graph, which has no AA filtering:



7a678afcedd64857a580fba8745977eb.jpg.png

In the case of 2:1 downsampling, the noise is indeed reduced by a factor of 2.

But realistically, we'd probably be applying some AA filtering before downsampling. For a Gaussian AA filter with sigma equal to 0.2 times the ratio of input width in pixels over output width in pixels, the noise reduction looks like this:



fbe32d2c0570438fa11662462afb6ad0.jpg.png

In downsampling you have the opportunity to trade off reducing noise and preserving detail. Lanczos 3 is often thought to be a reasonable compromise.

With more aggressive AA filtering, we get this:



cdfdf439dcb74a23a3e0fc842644e005.jpg.png

And now, as the reduction ratio gets near 0.2, or 5, depending on how you look at it, the reduction in noise is as predicted by formulae like the image size normalization algorithm used by DxO.
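If anyone wants to reproduce the general shape of the no-AA curve, a rough Monte Carlo sketch (plain numpy, pixel-centre sampling assumed; not the exact code behind the graphs above) is to resample a pure-noise image with bilinear interpolation at various ratios and measure how far the noise standard deviation falls:

```python
import numpy as np

rng = np.random.default_rng(2)

def bilinear_resize(img, out_h, out_w):
    """Bilinear resample using pixel-centre sample positions."""
    h, w = img.shape
    ys = (np.arange(out_h) + 0.5) * (h / out_h) - 0.5
    xs = (np.arange(out_w) + 0.5) * (w / out_w) - 0.5
    y0 = np.clip(np.floor(ys).astype(int), 0, h - 1)
    x0 = np.clip(np.floor(xs).astype(int), 0, w - 1)
    y1 = np.minimum(y0 + 1, h - 1)
    x1 = np.minimum(x0 + 1, w - 1)
    fy = (ys - y0)[:, None]
    fx = (xs - x0)[None, :]
    tl = img[np.ix_(y0, x0)]; tr = img[np.ix_(y0, x1)]
    bl = img[np.ix_(y1, x0)]; br = img[np.ix_(y1, x1)]
    return (tl * (1 - fy) * (1 - fx) + tr * (1 - fy) * fx
            + bl * fy * (1 - fx) + br * fy * fx)

noise = rng.normal(0.0, 1.0, (1024, 1024))
for ratio in (0.8, 0.5, 0.33, 0.2):
    out = bilinear_resize(noise, int(1024 * ratio), int(1024 * ratio))
    print(f"ratio {ratio:4.2f}: noise std = {out.std():.2f} of original")

# At exactly 2:1 the samples land midway between input pixels, so bilinear
# becomes a true 2x2 average and the std falls to about 0.5. At other ratios
# the reduction is not a simple function of the ratio: mild reductions smooth
# more than the pixel count alone would suggest, while large reductions
# (ratio near 0.2) remove far less noise than quadrature predicts, because
# bilinear only ever mixes four neighbours -- hence the AA filtering before
# big reductions.
```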

Does that help, or just muddy the water?

Jim
 
Consider a largish pixel receiving, um, 49,000 photons (100% QE of course). Consider pixels half the size therefore receiving 7,000 photons each (same QE). Is the uncertainty in the big one about 0.45%? For the little one, is it not 1.2%? Or standard deviations or however that's done?

Maybe we could discuss that? Of course one image is bigger on your screen than the other. Hmm . . . .
The natural and implicit conditions are viewing the photo at the same size from the same distance.
Aha! Now it is clear. So, for example, what is the effect of down-sampling the larger image to the same size as the smaller? Is that 1.2% above reduced by sqrt(2) to 0.85% assuming a simple bi-linear reduction?

I am not being deliberately obtuse, that is a genuine question seeking an answer.
Ted, two things.

First, wouldn't the half-size (I'm assuming you mean half the pitch) pixels get a quarter as much light, not a seventh as much? And the photon-noise-induced relative uncertainty in the big pixel case would then be 0.0045, as you said, but, in the case of the smaller pixel, it would be twice that, or sqrt(12250)/12250 = 0.009.
Rats! Quite right, Jim.
Second, the reduction of noise upon bilinear interpolation downsampling is not a simple function of the sampling ratio, as is indicated by this graph, which has no AA filtering:

7a678afcedd64857a580fba8745977eb.jpg.png

In the case of 2:1 downsampling, the noise is indeed reduced by a factor of 2.
I was thinking that bi-linear was kind of like simple averaging but, judging by the amount of humble pie I've already eaten today, that's probably wrong too.
But realistically, we'd probably be applying some AA filtering before downsampling.
I thought Bart was the only guy on the planet that does that! ;-)
For a Gaussian AA filter with sigma equal to 0.2 times the ratio of input width in pixels over output width in pixels, the noise reduction looks like this:

fbe32d2c0570438fa11662462afb6ad0.jpg.png

In downsampling you have the opportunity to trade off reducing noise and preserving detail. Lanczos 3 is often thought to be a reasonable compromise.

With more aggressive AA filtering, we get this:

cdfdf439dcb74a23a3e0fc842644e005.jpg.png

And now, as the reduction ratio gets near 0.2, or 5, depending on how you look at it, the reduction in noise is as predicted by formulae like the image size normalization algorithm used by DxO.

Does that help, or just muddy the water?
Helps, Jim. Thanks.

--
Cheers,
Ted
 
In the case of 2:1 downsampling, the noise is indeed reduced by a factor of 2.
I was thinking that bi-linear was kind of like simple averaging but, judging by the amount of humble pie I've already eaten today, that's probably wrong too.
When I generated those graphs, I was surprised, too.
But realistically, we'd probably be applying some AA filtering before downsampling.
I thought Bart was the only guy on the planet that does that! ;-)
Caught me! You may have noticed I used Bart's suggested Gaussian sigmas (see below). Thanks to Detail Man for pointing me to Bart's downsampling page:

http://bvdwolf.home.xs4all.nl/main/foto/down_sample/down_sample.htm
For a Gaussian AA filter with sigma equal to 0.2 times the ratio of input width in pixels over output width in pixels, the noise reduction looks like this...
Thanks, Ted.

Jim
 
In the case of 2:1 downsampling, the noise is indeed reduced by a factor of 2.
Correct me if I'm wrong, but wasnt this when the area was 4X? So a factor of 2 would be the quadrature. Was it not really 2 then?
I was thinking that bi-linear was kind of like simple averaging but, judging by the amount of humble pie I've already eaten today, that's probably wrong too..
Me too! So which filter is the closest to quadrature then? Sorry, I have had time to read through all your good work.
 
In the case of 2:1 downsampling, the noise is indeed reduced by a factor of 2.
Correct me if I'm wrong, but wasn't this when the area was 4X? So a factor of 2 would be the quadrature. Was it not really 2 then?
I was thinking that bi-linear was kind of like simple averaging but, judging by the amount of humble pie I've already eaten today, that's probably wrong too..
Me too! So which filter is the closest to quadrature then? Sorry, I have[n't] had time to read through all your good work.
Pardon me for correcting the mess DPR editor made of your post!

I read somewhere that the simple averaging done in the Foveon 2x2 binning gets you sqrt(2) less noise (or maybe it was SNR) and not the naively expected factor of 2. Only hearsay of course, Judge Judy wouldn't like that ;-)

So I'm interested in the response, too.
 
In the case of 2:1 downsampling, the noise is indeed reduced by a factor of 2.
Correct me if I'm wrong, but wasnt this when the area was 4X? So a factor of 2 would be the quadrature. Was it not really 2 then?
I was thinking that bi-linear was kind of like simple averaging but, judging by the amount of humble pie I've already eaten today, that's probably wrong too..
Me too! So which filter is the closest to quadrature then? Sorry, I have had time to read through all your good work.
By 2:1 downsampling, I mean measured linearly. So that's four pixels that become one. Quadrature says half the noise: sqrt(4) = 2. Bilinear interpolation gives you half the noise. But change the ratio a bit -- up or down -- and bilinear interpolation is not as good at reducing noise. That's what the graph without the AA filter says.
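A one-line check of the quadrature arithmetic (plain numpy, pure simulated noise): averaging disjoint 2x2 blocks, i.e. four pixels becoming one, halves the noise standard deviation, since sqrt(4) = 2.

```python
import numpy as np

rng = np.random.default_rng(3)
noise = rng.normal(0.0, 1.0, (1024, 1024))
binned = noise.reshape(512, 2, 512, 2).mean(axis=(1, 3))   # average each 2x2 block
print(noise.std(), binned.std())                           # ~1.00 and ~0.50
```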

Jim
 
So which filter is the closest to quadrature then?
I'm assuming you mean which resampling algorithm, not which filter. If that's the case, bilinear interpolation.

Jim
 
Jim,

Yes, Schottky was the first scientist to study and characterize shot noise. For decades this phenomenon was thought to originate from what goes on in the photodiodes, i.e. measurement uncertainty.

Recently experimental results showed shot noise is not related to the photodiodes but is an inherent property of light. (link ).
That page is dated 2013. I think you'll find that those authors did not discover that photon shot noise is 'an inherent property of light'. That was already a well known fact (I don't know who 'discovered' it, but it was already being talked about by Emil Martinec and many others when I joined these forums in 2007).
Schottky was wrong about the source of the signal fluctuations.
Schottky wasn't talking about image sensors. He was talking about electron shot noise in semiconductors, we're talking about photon shot noise. The statistics are the same, the carrier is different.
But Schottky did follow Occam's Razor by invoking the simplest explanation given the data and all of his prior knowledge. He was wrong for the right reasons. This is all most scientists can hope for long after their work is published.

After decades of using the terms shot noise and photon noise, there is no hope the rigorous statistical definition of noise (measurement error or parameter estimate uncertainty) will ever be abandoned. Except for certain experiments, it probably doesn't matter.

I hadn't considered that photon noise really isn't noise until this thread. The signal fluctuations are a state-of-nature. The total model for the raw file information content should include the photon noise in the signal parameter terms, not in the noise parameter terms. It is a coincidence photon noise behaves similarly to measurement noise.
All that is true, except that the shot noise recorded by the sensor isn't exactly the shot noise in the photons, because there is another random process going on, the conversion of photons to electrons. Current sensors have about a 50% conversion rate (if the photon makes its way through the CFA) and that also is random.
Either way, the photoelectrons counted by a pixel still follow the Poisson Distribution. Chris (cpw) did a derivation of that a while back, but I don't have the link handy.
Yes, they do - but the Poisson distribution is in part down to what's going on in the pixel, as we know. Apply the same stimulus to two sensors identical apart from quantum efficiency, and you get a different distribution. Or, another thought experiment - suppose one made a photon gun which could shoot a known number of photons into each pixel (I can't think of how the technology would work, because you can't count the photons without collapsing their wave function). Even then, you'd see a Poisson distribution, unless the sensor had 100% QE.
A smart man sent me a link with Chris' derivation:

 
Jim,

Yes, Schottky was the first scientist to study and characterize shot noise. For decades this phenomenon was thought to originate from what goes on in the photodiodes, i.e. measurement uncertainty.

Recently experimental results showed shot noise is not related to the photodiodes but is an inherent property of light. (link ).
That page is dated 2013. I think you'll find that those authors did not discover that photon shot noise is 'an inherent property of light'. That was already a well known fact (I don't know who 'discovered' it, but it was already being talked about by Emil Martinec and many others when I joined these forums in 2007).
Schottky was wrong about the source of the signal fluctuations.
Schottky wasn't talking about image sensors. He was talking about electron shot noise in semiconductors, we're talking about photon shot noise. The statistics are the same, the carrier is different.
But Schottky did follow Occam's Razor by invoking the simplest explanation given the data and all of his prior knowledge. He was wrong for the right reasons. This is all most scientists can hope for long after their work is published.

After decades of using the terms shot noise and photon noise, there is no hope the rigorous statistical definition of noise (measurement error or parameter estimate uncertainty) will ever be abandoned. Except for certain experiments, it probably doesn't matter.

I hadn't considered that photon noise really isn't noise until this thread. The signal fluctuations are a state-of-nature. The total model for the raw file information content should include the photon noise in the signal parameter terms, not in the noise parameter terms. It is a coincidence photon noise behaves similarly to measurement noise.
All that is true, except that the shot noise recorded by the sensor isn't exactly the shot noise in the photons, because there is another random process going on, the conversion of photons to electrons. Current sensors have about a 50% conversion rate (if the photon makes its way through the CFA) and that also is random.
Either way, the photoelectrons counted by a pixel still follow the Poisson Distribution. Chris (cpw) did a derivation of that a while back, but I don't have the link handy.
Yes, they do - but the Poisson distribution is in part down to what's going on in the pixel, as we know. Apply the same stimulus to two sensors identical apart from quantum efficiency, and you get a different distribution. Or, another thought experiment - suppose one made a photon gun which could shoot a known number of photons into each pixel (I can't think of how the technology would work, because you can't count the photons without collapsing their wave function). Even then, you'd see a Poisson distribution, unless the sensor had 100% QE.
A smart man sent me a link with Chris' derivation:

http://www.dpreview.com/forums/post/42082844
A smart man (probably the same one) sent it to me, too. Still - it's useful for other people to have the link too.
 
I just don't understand your problem. Please put me on your ban list.

I'm tired of your gratuitous and ignorant insults.
 
