D600 High ISO in DX

Started Nov 23, 2012 | Questions
bobn2
bobn2 Forum Pro • Posts: 62,182
Re: Roger's optimum pixel pitch of 5um

Leo360 wrote:

Again, Bill Claff's charts give D600 one stop advantage in Dynamic Range over E-M5.

Bill Claff's figures are not 'per pixel'. Bill normalises to the Circle of Confusion for equal DOF in the final print, which is sensor size related.


Bob

Leo360 Senior Member • Posts: 1,141
Re: Roger's optimum pixel pitch of 5um

bobn2 wrote:

Leo360 wrote:

noirdesir wrote:

A single pixel does not have any noise. Display a single pixel fullscreen on your monitor and tell me whether you can see any noise in it.

Saying that a single pixel does not have any noise in it is not quite accurate.

It is exactly accurate.

There is a mean number of photons that is supposed to hit this pixel under a given exposure.

No, there is not. The process of photons hitting pixels is itself a random process. The noise is in the light, not the pixel. Even if the pixel had 100% efficiency and zero read noise, which means that it would count every photon that struck it, there would still be noise. Nothing in nature dictates that there is a mean number that 'should' hit a pixel. In fact, nature dictates precisely that the incidence of photons is randomised.

Photon shot-noise is a property of the photon flux, and here we are all in "violent agreement".

The number of photons hitting a given small area of the image plane is a Poisson random process

p(n) = exp(-lambda*t) * (lambda*t)^n / n!

where "t" is the exposure time and  "lambda" is the parameter which happens to be "mean photon count per unit time of exposure" (aka expected value) for the area of impact. I will repeat,  this lambda is a function of a point on the image plane (both x and y coordinates). A "true" camera independent image is the one constructed from those lambda functions for Red, Green and Blue channels. Now back to the real world. We do not know light distribution (lambdas) across the image plane. We use a pixel based sensor to record a realization of this random process to estimate the lambda functions. At each pixel we have obtained a sampled photon count which served as an estimator for the lambda corresponding to this tiny pixel area. The accuracy of the estimation depends on the the photon count observed. The higher the photon count the lesser error in using it in place of (lamda*t).  Everyone notices this fact while observing that there is more photon-shot noise in shadows than in highlights. Expose-to-the-right is the empirical way to say the same thing -- you expose so much as to obtain as many photons in every pixel as possible without falling off the DR upper end in highlights.

To summarize -- saying that there is no "mean photon count" (aka expected value) is wrong. These quantities are deterministic parameters of the photon count probability distribution. The camera is a device that tries to estimate these parameters (on a pixel-by-pixel basis) from a given random process realization.

The number of photons registered by it should correspond to the number of photons which hit it. Shot noise does not arise because pixels are incorrectly counting the number of photons (that would be read noise).

You are ascribing to me words I never said! Photon shot-noise is a property of the photon flux and has nothing to do with photon-to-electron conversion in the pixel. This is why the noise coming out of a pixel is nominally broken into shot-noise, read-noise, and dark-current components.

On top of it, it is contaminated by read-noise which makes photon-to-electron conversion noisy. So the value of this pixel reading will not be exactly right.

Read noise is not 'on top of it'. Read noise is why the value of the pixel reading will not be exactly right.

There is also Quantum Efficiency (QE) that makes photons go missing and dark current which makes photons appear from nowhere. And QE of 53% does not mean that exactly 47% of photons go missing on each reading. Photon absorption is in itself a random process. I guess it should be factored into read-noise but I am not sure.

In your experiment with a single pixel filling the whole screen you will have a solid color (no spatial variation) but with the intensity slightly off. I think that what you are trying to say is that one cannot observe spatial noise variability (from one pixel to another) when measuring only a single pixel.

Temporal noise variability (i.e. from one frame to the next) is of no interest to the still photographer. It is the spatial variability (or at least the integration of spatial and temporal variability over the exposure time) that we are interested in.

I guess that people who combine consecutive shots in HDR photography may disagree. There are practical instances when you combine several shots of the same scene to boost the photon count.

And it is not only about temporal variability. You can have several identical cameras (idealized situation) taking identical images in a controlled environment (think of a Nikon factory testing facility). Photon counts in the corresponding pixels will be different because they sample different realizations of the same photon random process and because thermal noise and read-noise are uncorrelated. This is true even for sensors consisting of a single pixel.

As far as I understand it, the std. dev. is the thing that should scale with the area. And dark current is a second-order effect for the pixel size we are discussing here.

I have to think about it more, for it sounds counter-intuitive at first glance. With Gaussian random processes variance is the parameter which scales and combines. I am not sure that it is applicable here, though.

The random process is occurring in the light, not the pixel.

Here I was talking about ways to describe read-noise and the random process governing the read-noise is happening in the pixel.

Leo

Leo360 Senior Member • Posts: 1,141
Re: Roger's optimum pixel pitch of 5um

bobn2 wrote:

Leo360 wrote:

Again, Bill Claff's charts give D600 one stop advantage in Dynamic Range over E-M5.

Bill Claff's figures are not 'per pixel'. Bill normalises to the Circle of Confusion for equal DOF in the final print, which is sensor size related.

Thanks for the clarification.

Leo

bobn2
bobn2 Forum Pro • Posts: 62,182
Re: Roger's optimum pixel pitch of 5um

Leo360 wrote:

bobn2 wrote:

Leo360 wrote:

noirdesir wrote:

A single pixel does not have any noise. Display a single pixel fullscreen on your monitor and tell me whether you can see any noise in it.

Saying that a single pixel does not have any noise in it is not quite accurate.

It is exactly accurate.

There is a mean number of photons that is supposed to hit this pixel under a given exposure.

No, there is not. The process of photons hitting pixels is itself a random process. The noise is in the light, not the pixel. Even if the pixel had 100% efficiency and zero read noise, which means that it would count every photon that struck it, there would still be noise. Nothing in nature dictates that there is a mean number that 'should' hit a pixel. In fact, nature dictates precisely that the incidence of photons is randomised.

Photon shot-noise is a property of the photon flux, and here we are all in "violent agreement".

The number of photons hitting a given small area of the image plane is a Poisson random process

p(n) = exp(-lambda*t) * (lambda*t)^n / n!

where "t" is the exposure time and "lambda" is the parameter which happens to be "mean photon count per unit time of exposure" (aka expected value) for the area of impact. I will repeat, this lambda is a function of a point on the image plane (both x and y coordinates). A "true" camera independent image is the one constructed from those lambda functions for Red, Green and Blue channels.

No, that is not the "true" image. The true image is actually composed of all those individual discrete photon strikes. The lambda function is a mathematical model to allow us to map a continuous function onto what is a discrete one.

Now back to the real world. We do not know the light distribution (lambdas) across the image plane.

Because it does not exist. All we can discover by observation is what happened in the real world, which was a number of discrete photon events.

We use a pixel-based sensor to record a realization of this random process to estimate the lambda functions.

It does not 'record a realization'. The lambda function, which it estimates by averaging the count of photon events over an area, is an abstraction. The reality was the photon events. If we wished we could use an alternative mechanism to do the same only better. Imagine we had a sensor so advanced that it could record the exact position (to within the limits of the uncertainty principle) of each photon event in time and space, then transmit that with the photon energy to the camera's processor. That processor could reconstruct the output of any sensor of any pixel count and CFA characteristic.

At each pixel we obtain a sampled photon count which serves as an estimator for the lambda corresponding to this tiny pixel area. The accuracy of the estimation depends on the photon count observed: the higher the photon count, the smaller the error in using it in place of (lambda*t). Everyone notices this fact while observing that there is more photon shot-noise in shadows than in highlights. Expose-to-the-right is the empirical way to say the same thing -- you expose so as to obtain as many photons in every pixel as possible without falling off the upper end of the DR in highlights.

The lambda is not the reality, it is an abstraction. And if you sample the reality with greater precision, you can always estimate this function more precisely. This is simply a matter of information: the more information you have, the more precise a model you can make.

To summarize -- saying that there is no "mean photon count" (aka expected value) is wrong.

No-one said that there is no "mean photon count"; there is a mean of any set of observations. But 'mean photon count' says nothing about 'expected value', because 'expected' comes before the observation and the mean comes after.

These quantities are deterministic parameters of the photon count probability distribution. The camera is a device that tries to estimate these parameters (on a pixel-by-pixel basis) from a given random process realization.

Which quantities are you now saying are the 'deterministic parameters of the photon count probability distribution'? What is actually happening is that photons are generated randomly by various quantum processes, they travel, reflect off things and end up hitting the sensor. If you had a model of their distribution and trajectory, and knew something about the scene, you could generate a model of the light energy distribution over the surface of the sensor, but any continuous function describing this is an abstraction from the reality.

The number of photons registered by it should correspond to the number of photons which hit it. Shot noise does not arise because pixels are incorrectly counting the number of photons (that would be read noise).

You are ascribing to me words I never said!

I didn't ascribe any words to you.

Photon shot-noise is a property of the photon flux and has nothing to do with photon-to-electron conversion in the pixel. This is why the noise coming out of a pixel is nominally broken into shot-noise, read-noise, and dark-current components.

Good, we agree. The sensor does not affect the amount of shot noise in the image projected onto it.

On top of it, it is contaminated by read-noise which makes photon-to-electron conversion noisy. So the value of this pixel reading will not be exactly right.

Read noise is not 'on top of it'. Read noise is why the value of the pixel reading will not be exactly right.

There is also Quantum Efficiency (QE) that makes photons go missing and dark current which makes photons appear from nowhere. And QE of 53% does not mean that exactly 47% of photons go missing on each reading. Photon absorption is in itself a random process. I guess it should be factored into read-noise but I am not sure.

No, it gets catered for simply by multiplying the photon count by the QE.

In your experiment with a single pixel filling the whole screen you will have a solid color (no spatial variation) but with the intensity slightly off. I think that what you are trying to say is that one cannot observe spatial noise variability (from one pixel to another) when measuring only a single pixel.

Temporal noise variability (i.e. from one frame to the next) is of no interest to the still photographer. It is the spatial variability (or at least the integration of spatial and temporal variability over the exposure time) that we are interested in.

I guess that people who combine consecutive shots in HDR photography may disagree. There are practical instances when you combine several shots of the same scene to boost the photon count.

This is practically increasing the integration time.

And it is not only about temporal variability. You can have several identical cameras (idealized situation) taking identical images in a controlled environment (think of a Nikon factory testing facility). Photon counts in the corresponding pixels will be different because they sample different realizations of the same photon random process and because thermal noise and read-noise are uncorrelated. This is true even for sensors consisting of a single pixel.

They will be different because the actual pattern of photon events is, and always will be, different. That's the reality, and rather simpler than 'sampling different realizations of the same photon random process'.

As far as I understand it, the std. dev. is the thing that should scale with the area. And dark current is a second-order effect for the pixel size we are discussing here.

I have to think about it more, for it sounds counter-intuitive at first glance. With Gaussian random processes variance is the parameter which scales and combines. I am not sure that it is applicable here, though.

The random process is occurring in the light, not the pixel.

Here I was talking about ways to describe read-noise and the random process governing the read-noise is happening in the pixel.

OK, go away and think about it. I have already thought about it.


Bob

Eric Fossum
Eric Fossum Senior Member • Posts: 1,399
Re: Roger's optimum pixel pitch of 5um

bobn2 wrote:

The random process is occurring in the light, not the pixel.

Well, not exactly. The random process occurs in the emission of photons from a blackbody or other source (plasma). There is a mean value to the number of photons being emitted, and a variance. The variance happens to be equal to the mean number of photons, and the std. dev. is the square root.

Then the light is reflected off objects and there should be more randomization because of that, since some photons are reflected and some absorbed.  And then there is the whole lens transmission (and absorption/reflection) and color filter absorption (also somewhat random on a photon by photon basis) and finally, absorption in the silicon.  Where the photon is absorbed is also "random" in position or depth.  Lastly not all photoelectrons are collected -- some escape.

The amazing thing (to me at least) is that this does not change the statistics of the noise.  The variance in photons emitted from the blackbody goes like the number of photons, and the variance in the number of collected photoelectrons goes like the number of photoelectrons.  These processes do not "add up" and make the noise bigger.  Actually, the noise is smaller, but so is the signal, and SNR is smaller as well.  Still equal to the square root of the number of photoelectrons in an rms sort of way.

It is also important to understand what we mean by noise. Noise means that when we try to measure the average photon flux (or collected photoelectrons) we get a different value each time, and the std. deviation of that mean signal+noise is equal to the square root of the number of photoelectrons. With one pixel, one measurement, of course there is no noise.

If you think many pixels should all see the same average flux, then calculating a sort of spatial noise (FPN excepted) will also yield the same result on a single exposure. So, in other words, shot noise can be manifested across many spatial pixels, or both space and time.

Estimating the noise is also tricky.  There is "noise" in any noise measurement.
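
A quick simulation of the cascade described above (the individual loss factors are invented stand-ins for reflectance, lens transmission, CFA absorption and QE): thinning a Poisson stream photon by photon leaves a Poisson stream, so the variance of the collected counts still equals their mean and the SNR is still the square root of the photoelectron count.

```python
import numpy as np

rng = np.random.default_rng(1)

mean_emitted = 100_000                # assumed mean photons headed toward one pixel
survival = 0.5 * 0.9 * 0.4 * 0.53     # assumed reflectance * lens * CFA * QE

emitted = rng.poisson(mean_emitted, size=200_000)
collected = rng.binomial(emitted, survival)   # each photon independently survives or is lost

print("mean collected:", collected.mean())    # ~ mean_emitted * survival
print("var  collected:", collected.var())     # ~ the same value: still Poisson
print("SNR:", collected.mean() / collected.std(),
      " sqrt(mean):", np.sqrt(collected.mean()))
```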


*Have fun, make some money, and learn along the way*

bobn2
bobn2 Forum Pro • Posts: 62,182
Re: Roger's optimum pixel pitch of 5um

Eric Fossum wrote:

bobn2 wrote:


The random process is occurring in the light, not the pixel.

Well, not exactly. The random process occurs in the emission of photons from a blackbody or other source (plasma). There is a mean value to the number of photons being emitted, and a variance. The variance happens to be equal to the mean number of photons, and the std. dev. is the square root.

Then the light is reflected off objects and there should be more randomization because of that, since some photons are reflected and some absorbed. And then there is the whole lens transmission (and absorption/reflection) and color filter absorption (also somewhat random on a photon by photon basis) and finally, absorption in the silicon. Where the photon is absorbed is also "random" in position or depth. Lastly not all photoelectrons are collected -- some escape.

The amazing thing (to me at least) is that this does not change the statistics of the noise. The variance in photons emitted from the blackbody goes like the number of photons, and the variance in the number of collected photoelectrons goes like the number of photoelectrons. These processes do not "add up" and make the noise bigger. Actually, the noise is smaller, but so is the signal, and SNR is smaller as well. Still equal to the square root of the number of photoelectrons in an rms sort of way.

It is also important to understand what we mean by noise. Noise means that when we try to measure the average photon flux (or collected photoelectrons) we get a different value each time, and the std. deviation of that mean signal+noise is equal to the square root of the number of photoelectrons. With one pixel, one measurement, of course there is no noise.

If you think many pixels should all see the same average flux, then calculating a sort of spatial noise (FPN excepted) will also yield the same result on a single exposure. So, in other words, shot noise can be manifested across many spatial pixels, or both space and time.

Estimating the noise is also tricky. There is "noise" in any noise measurement.

Hello Eric. While I would agree with everything you say here, I don't think that it disagrees with what I was saying. If one had a 100% efficient sensor with zero read noise, one would still observe shot noise. I think the point is that noise is apparent only against an expectation of what you think (your words) should be happening. If you knew what should be happening, then noise would be no problem. So to talk about 'correct' values of a pixel is a nonsense. All you have is observed values, and you can make inferences about the 'correct' values based on your expectations, and also on your knowledge of the way the system operates.


Bob

Joofa Senior Member • Posts: 2,655
On random processes

bobn2 wrote:
The random process is occurring in the light, not the pixel.

Actually, both. Even if we knew the exact number of photons impinging on a pixel, and considered only the QE among other effects, the number of photoelectrons generated would still be random (see bullet #3 below). The mean value of the photoelectrons can be estimated as the mean number of photons multiplied by the QE.

The key thing to remember is:

  • the number of photons impinging on a pixel is Poisson.
  • the number of photoelectrons generated is Poisson.
  • the number of photoelectrons generated for a given number of photons impinging on a pixel is not Poisson, but binomial.
  • the number of photons impinging on a pixel given the number of photoelectrons generated is not exactly Poisson, but a variant of it.
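
A quick numerical check of bullets 2 and 3 (the mean photon count and the QE are assumed values): conditioned on a fixed photon count, the photoelectron count is binomial, with variance below the mean; marginally it stays Poisson, with variance equal to the mean.

```python
import numpy as np

rng = np.random.default_rng(2)
lam, qe = 1000.0, 0.53                       # assumed mean photon count and QE

photons = rng.poisson(lam, size=500_000)
electrons = rng.binomial(photons, qe)        # marginal photoelectron count
print("marginal    mean/var:", electrons.mean(), electrons.var())      # both ~ lam*qe: Poisson

conditional = rng.binomial(1000, qe, size=500_000)                      # exactly 1000 photons each time
print("conditional mean/var:", conditional.mean(), conditional.var())  # var ~ n*qe*(1-qe) < mean: binomial
```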

No, it gets catered for simply by multiplying the photon count by the QE.

The interesting thing to note regarding the above is that the estimate of the number of photons impinged, obtained by dividing the photoelectron count by the QE (something that is done many times due to lack of other information), is a relatively poor estimator, with its mean square error being 1/QE times the mean square error of the best (L2) estimator.

bobn2
bobn2 Forum Pro • Posts: 62,182
Re: Roger's optimum pixel pitch of 5um

Joofa wrote:


bobn2 wrote:
The random process is occurring in the light, not the pixel.

Actually, both. Even if we knew the exact number of photons impinging on a pixel, and considered only the QE among other effects, the number of photoelectrons generated would still be random (see bullet #3 below). The mean value of the photoelectrons can be estimated as the mean number of photons multiplied by the QE.

Thanks, yes I stated that also in the post. Simply, one can think of less than 100% QE as (random) photon events being randomly 'lost'.

The key thing to remember is:

  • the number of photons impinging on a pixel is Poisson.
  • the number of photoelectrons generated is Poisson.
  • the number of photoelectrons generated for a given number of photons impinging on a pixel is not Poisson, but binomial.

That had not occurred to me, but I think that you are right.

The key thing to remember is:

  • the number of photons impinging on a pixel given the number of photoelectrons generated is not exactly Poisson, but a variant of it.

No, it gets catered for simply by multiplying the photon count by the QE.

The interesting thing to note regarding the above is that the estimate of the number of photons impinged, obtained by dividing the photoelectron count by the QE (something that is done many times due to lack of other information), is a relatively poor estimator, with its mean square error being 1/QE times the mean square error of the best (L2) estimator.

Interesting. Do you have any more detail on that?


Bob

Joofa Senior Member • Posts: 2,655
On noise estimation

bobn2 wrote:

Joofa wrote:


bobn2 wrote:
  • the number of photoelectrons generated for a given number of photons impinging on a pixel is not Poisson, but binomial.

That had not occurred to me, but I think that you are right.

Yes, that is true, the number of photoelectrons generated given a fixed number of photons impinged on a pixel is Binomial (considering only the QE) and not Poisson.

No, it gets catered for simply by multiplying the photon count by the QE.

The interesting thing to note regarding the above is that the estimate of the number of photons impinged, obtained by dividing the photoelectron count by the QE (something that is done many times due to lack of other information), is a relatively poor estimator, with its mean square error being 1/QE times the mean square error of the best (L2) estimator.

Interesting. Do you have any more detail on that?

Yes. In the L2 sense, the best estimator of the number of photons given the number of photoelectrons generated is the conditional expectation of the number of photons. Let's call it e1. Our other, 'usual', estimator is given as (number of photoelectrons) / QE. Let's call it e2. We shall take the mean square error (MSE) to be the average value of the squared error between the actual number of photons (which we don't know) and our estimate e1 (or e2) of that number.

  • e1 is unbiased -- i.e., the average value of e1 is equal to the mean number of photons.
  • e1 is conditionally biased -- i.e., the average value of e1 is not equal to the number of photons impinged on a pixel in a particular measurement.
  • e2 is unbiased.
  • e2 is conditionally unbiased.
  • The MSE of e2 = (1 / QE) x the MSE of e1.
bobn2
bobn2 Forum Pro • Posts: 62,182
Re: On noise estimation

Joofa wrote:

bobn2 wrote:

Joofa wrote:


bobn2 wrote:
  • the number of photoelectrons generated for a given number of photons impinging on a pixel is not Poisson, but binomial.

That had not occurred to me, but I think that you are right.

Yes, that is true, the number of photoelectrons generated given a fixed number of photons impinged on a pixel is Binomial and not Poisson.

No, it gets catered for simply by multiplying the photon count by the QE.

The interesting thing to note regarding the above is that the estimate of the number of photons impinged, obtained by dividing the photoelectron count by the QE (something that is done many times due to lack of other information), is a relatively poor estimator, with its mean square error being 1/QE times the mean square error of the best (L2) estimator.

Interesting. Do you have any more detail on that?

Yes. In the L2 sense, the best estimator of the number of photons given the number of photoelectrons generated is the conditional expectation of the number of photons. Let's call it e1. Our other, 'usual', estimator is given as (number of photoelectrons) / QE. Let's call it e2. We shall take the mean square error (MSE) to be the average value of the squared error between the actual number of photons (which we don't know) and our estimate e1 (or e2) of that number.

  • e1 is unbiased -- i.e., the average value of e1 is equal to the mean number of photons.
  • e1 is conditionally biased -- i.e., the average value of e1 is not equal to the number of photons impinged on a pixel in a particular measurement.
  • e2 is unbiased.
  • e2 is conditionally unbiased.
  • The MSE of e2 = (1 / QE) x the MSE of e1.

How would one estimate the MSE of e1?


Bob

Joofa Senior Member • Posts: 2,655
Re: On noise estimation

bobn2 wrote:

How would one estimate the MSE of e1?

I think you mean 'how would one estimate or derive e1?', which is what one wants, and not estimate the MSE of e1. The MSE of e1 can be derived analytically, under the conditions stated in previous messages, and is equal to the average number of photons impinged on a pixel times (1 - QE). Estimating e1 is again possible analytically; however, in practice it can turn out to be difficult, as it requires certain knowledge beyond just the number of photoelectrons given. Since many times the number of photoelectrons is all one has, going with e2 is perhaps the only choice.
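
A simulation of these claims, as a sketch under the assumptions above (Poisson photons with a known mean lam, thinned by the QE; with that model the conditional expectation works out to e1 = k + lam*(1 - QE), and the known lam is exactly the 'certain knowledge' that e2 does without):

```python
import numpy as np

rng = np.random.default_rng(3)
lam, qe = 10_000.0, 0.53                 # assumed known mean photon count and QE

n = rng.poisson(lam, size=1_000_000)     # true photon counts (unknown in practice)
k = rng.binomial(n, qe)                  # observed photoelectron counts

e1 = k + lam * (1 - qe)                  # conditional-expectation estimator; needs lam
e2 = k / qe                              # 'usual' estimator; needs only k and the QE

print("MSE e1:", np.mean((e1 - n) ** 2), " theory lam*(1-QE):", lam * (1 - qe))
print("MSE e2:", np.mean((e2 - n) ** 2), " theory MSE(e1)/QE :", lam * (1 - qe) / qe)
```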

bobn2
bobn2 Forum Pro • Posts: 62,182
Re: On noise estimation

Joofa wrote:

bobn2 wrote:

How would one estimate the MSE of e1?

I think you mean 'how would one estimate or derive e1?', which is what one wants, and not estimate the MSE of e1.

No, I meant estimate the MSE of e1, which is what I want.

The MSE of e1 can be derived analytically, under the conditions stated in previous messages, and is equal to the average number of photons impinged on a pixel times (1 - QE).

OK. Obvious now you say it.

Estimating e1 is again possible analytically; however, in practice it can turn out to be difficult, as it requires certain knowledge beyond just the number of photoelectrons given. Since many times the number of photoelectrons is all one has, going with e2 is perhaps the only choice.

What would be the 'certain knowledge'?


Bob

Joofa Senior Member • Posts: 2,655
Re: On noise estimation

bobn2 wrote:

What would be the 'certain knowledge'?

Bullet #4 in my original post below:

http://forums.dpreview.com/forums/post/50345602

Leo360 Senior Member • Posts: 1,141
Re: Roger's optimum pixel pitch of 5um

Eric Fossum wrote:

Then the light is reflected off objects and there should be more randomization because of that, since some photons are reflected and some absorbed. And then there is the whole lens transmission (and absorption/reflection) and color filter absorption (also somewhat random on a photon by photon basis) and finally, absorption in the silicon. Where the photon is absorbed is also "random" in position or depth. Lastly not all photoelectrons are collected -- some escape.

The amazing thing (to me at least) is that this does not change the statistics of the noise. The variance in photons emitted from the blackbody goes like the number of photons, and the variance in the number of collected photoelectrons goes like the number of photoelectrons.

This behavior is typical for Poisson processes. And many very different physical phenomena lead to Poisson as long as the underlying events are independent and memoryless (radioactive decay, black-body radiation, telephone call arrivals, Geiger counter counts, etc.)

These processes do not "add up" and make the noise bigger. Actually, the noise is smaller, but so is the signal, and SNR is smaller as well. Still equal to the square root of the number of photoelectrons in an rms sort of way.

This is surprising. Is there a simple explanation why these two noise processes in the same conversion chain do not add up?

And one more question, as in the subject line. Roger Clark from clarkvision thinks that 5um is the optimal pixel pitch for a CMOS FX sensor. There has been an interesting discussion in this thread about it. In your opinion, is it a reasonable question to ask? Naturally, photon absorption length and wavelength should put limits on the minimal pixel size. But, of course, there are many more factors to consider.

Leo

Joofa Senior Member • Posts: 2,655
Re: Roger's optimum pixel pitch of 5um

Leo360 wrote:

Eric Fossum wrote:

These processes do not "add up" and make the noise bigger. Actually, the noise is smaller, but so is the signal, and SNR is smaller as well. Still equal to the square root of the number of photoelectrons in an rms sort of way.

This is surprising. Is there a simple explanation why these two noise processes in the same conversion chain do not add up?

Because both are not "in the chain": one is the 'output' noise (photoelectrons) and the other is the 'input' noise (photons).

Leo360 Senior Member • Posts: 1,141
Re: Roger's optimum pixel pitch of 5um

bobn2 wrote:

Leo360 wrote:

A "true" camera independent image is the one constructed from those lambda functions for Red, Green and Blue channels.

No, that is not the "true" image. The true image is actually composed of all those individual discrete photon strikes. The lambda function is a mathematical model to allow us to map a continuous function onto what is a discrete one.

My interest in this discussion is from a machine learning perspective. I am after an objective description of the features of the scene. A "true" image is the description of the scene. Light is the medium which conveys this information and provides a way to estimate shapes, distances, etc. via the interaction of the EM field with matter. Therefore, for my purposes, image analysis becomes an estimation problem and this is why I treat it as such.

Now back to the real world. We do not know the light distribution (lambdas) across the image plane.

Because it does not exist. All we can discover by observation is what happened in the real world, which was a number of discrete photon events.

The fact that some quantity is not directly observable does not mean it does not exist. Otherwise you are in danger of outlawing Quantum Mechanics in its entirety.

We use a pixel-based sensor to record a realization of this random process to estimate the lambda functions.

It does not 'record a realization'. The lambda function, which it estimates by averaging the count of photon events over an area, is an abstraction. The reality was the photon events.

I think Eric answered this question in his post within this thread.

If we wished we could use an alternative mechanism to do the same only better. Imagine we had a sensor so advanced that it could record the exact position (to within the limits of the uncertainty principle) of each photon event in time and space, then transmit that with the photon energy to the camera's processor. That processor could reconstruct the output of any sensor of any pixel count and CFA characteristic.

Yes, if your idealized sensor is bound only by fundamental Heisenberg uncertainty it will be the best possible sensor, in the sense that no other sensor can convey more information about the scene from observing the same light. But it cannot reconstruct the exact pixel readings of other sensors, or even of its own twin brother, because quantum mechanical events are probabilistic in nature. The probabilities of quantum events can be calculated, but not the individual event outcomes. Actually, it can reconstruct the lambdas obtained by other sensors.

At each pixel we obtain a sampled photon count which serves as an estimator for the lambda corresponding to this tiny pixel area. The accuracy of the estimation depends on the photon count observed: the higher the photon count, the smaller the error in using it in place of (lambda*t). Everyone notices this fact while observing that there is more photon shot-noise in shadows than in highlights. Expose-to-the-right is the empirical way to say the same thing -- you expose so as to obtain as many photons in every pixel as possible without falling off the upper end of the DR in highlights.

The lambda is not the reality, it is an abstraction. And if you sample the reality with greater precision, you can always estimate this function more precisely. This is simply a matter of information: the more information you have, the more precise a model you can make.

Lambda is the reality. For example, for black-body radiation it is the temperature (see below).

To summarize -- saying that there is no "mean photon count" (aka expected value) is wrong.

No-one said that there is no "mean photon count"; there is a mean of any set of observations. But 'mean photon count' says nothing about 'expected value', because 'expected' comes before the observation and the mean comes after.

In probability theory, "expected value" and "mean value" are synonyms. And so far we have not been talking about prior or posterior probabilities (maybe we should).

These quantities are deterministic parameters of the photon count probability distribution. The camera is a device that tries to estimate these parameters (on a pixel-by-pixel basis) from a given random process realization.

Which quantities are you now saying are the 'deterministic parameters of the photon count probability distribution'? What is actually happening is that photons are generated randomly by various quantum processes, they travel, reflect off things and end up hitting the sensor. If you had a model of their distribution and trajectory, and knew something about the scene, you could generate a model of the light energy distribution over the surface of the sensor, but any continuous function describing this is an abstraction from the reality.

This is easy. In black-body radiation you have such a deterministic parameter. It is called temperature, and it determines the photon flux as well as its spectral distribution. A photo detector captures a finite number of photons and allows you to estimate the temperature. The Poisson distribution of the photons hitting a pixel is a simple one-parameter (lambda) distribution. All the physics (and geometry) of EM radiation, scattering, reflection, propagation, and transformation by the lens is hidden inside the lambdas, because there is simply no other parameter in the Poisson distribution. For lack of a better word, think of it as the Poynting vector of the classical EM field, while Poisson shot-noise describes the quantum fluctuations around it.
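
As a small illustration of "temperature determines the photon flux as well as its spectral distribution" (a sketch using Planck's law divided by the photon energy hc/lambda; the wavelength and the two temperatures are arbitrary examples):

```python
import numpy as np

h, c, k = 6.626e-34, 2.998e8, 1.381e-23    # SI: Planck constant, speed of light, Boltzmann constant

def photon_radiance(wl, T):
    """Spectral photon radiance of a black body: photons / s / m^2 / sr / m."""
    return (2.0 * c / wl**4) / np.expm1(h * c / (wl * k * T))

wl = 550e-9                                 # green light
for T in (3000.0, 5500.0):                  # tungsten-ish vs daylight-ish
    print(f"T = {T:6.0f} K -> {photon_radiance(wl, T):.3e} photons/s/m^2/sr/m")
```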

The number of photons registered by it should correspond to the number of photons which hit it. Shot noise does not arise because pixels are incorrectly counting the number of photons (that would be read noise).

You are ascribing to me words I never said!

I didn't ascribe any words to you.

Where did I say that photon shot noise originates inside pixels?

Photon shot-noise is a property of the photon flux and has nothing to do with photon-to-electron conversion in the pixel. This is why the noise coming out of a pixel is nominally broken into shot-noise, read-noise, and dark-current components.

Good, we agree. The sensor does not affect the amount of shot noise in the image projected onto it.

Yes, this has been my understanding all along.

On top of it, it is contaminated by read-noise which makes photon-to-electron conversion noisy. So the value of this pixel reading will not be exactly right.

Read noise is not 'on top of it'. Read noise is why the value of the pixel reading will not be exactly right.

There is also Quantum Efficiency (QE) that makes photons go missing and dark current which makes photons appear from nowhere. And QE of 53% does not mean that exactly 47% of photons go missing on each reading. Photon absorption is in itself a random process. I guess it should be factored into read-noise but I am not sure.

No, it gets catered for simply by multiplying the photon count by the QE.

Sorry, the act of photon absorption is not a deterministic event. Since the depth of absorption varies, there is a probability that a photon can escape. And since it is a probabilistic event, you cannot really tell the exact number of photons which were missed.

In your experiment with a single pixel filling the whole screen you will have a solid color (no spatial variation) but with the intensity slightly off. I think that what you are trying to say is that one cannot observe spatial noise variability (from one pixel to another) when measuring only a single pixel.

Temporal noise variability (i.e. from one frame to the next) is of no interest to the still photographer. It is the spatial variability (or at least the integration of spatial and temporal variability over the exposure time) that we are interested in.

I guess that people who combine consecutive shots in HDR photography may disagree. There are practical instances when you combine several shots of the same scene to boost the photon count.

This is practically increasing the integration time.

By effectively combining pixel photon counts and reducing photon shot noise along the way.
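
A quick sketch of that stacking point (the frame count and the per-frame photon level are assumed): summing m frames multiplies the collected photon count by m, so the relative shot noise drops by sqrt(m), just as with a longer integration time.

```python
import numpy as np

rng = np.random.default_rng(4)
lam = 1000                                  # assumed mean photons per pixel per frame

for m in (1, 4, 16):
    stack = rng.poisson(lam, size=(m, 100_000)).sum(axis=0)   # sum m frames pixel-wise
    print(f"{m:2d} frames: relative noise {stack.std() / stack.mean():.4f}  "
          f"theory {1 / np.sqrt(m * lam):.4f}")
```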

And it is not only about temporal variability. You can have several identical cameras (idealized situation) taking identical images in a controlled environment (think of a Nikon factory testing facility). Photon counts in the corresponding pixels will be different because they sample different realizations of the same photon random process and because thermal noise and read-noise are uncorrelated. This is true even for sensors consisting of a single pixel.

They will be different because the actual pattern of photon events is, and always will be, different. That's the reality, and rather simpler than 'sampling different realizations of the same photon random process'.

You call it "pattern of photon events" while I call it "samples from the random photon process". We are talking about the same thing using different words.

Here I was talking about ways to describe read-noise and the random process governing the read-noise is happening in the pixel.

OK, go away and think about it. I have already thought about it.


Bob

Bob, this is what I call mildly insulting language (I refer to "OK, go away..."). If my postings irritate you, you are not obliged to respond.

Leo

Leo360 Senior Member • Posts: 1,141
Re: Roger's optimum pixel pitch of 5um

Joofa wrote:

Leo360 wrote:

Eric Fossum wrote:

These processes do not "add up" and make the noise bigger. Actually, the noise is smaller, but so is the signal, and SNR is smaller as well. Still equal to the square root of the number of photoelectrons in an rms sort of way.

This is surprising. Is there a simple explanation why these two noise processes in the same conversion chain do not add up?

Because both are not "in the chain": one is the 'output' noise (photoelectrons) and the other is the 'input' noise (photons).

Those electrons are triggered by the photons, and ideally the e- count should be proportional to the photon count. They are "in a chain", so to speak.

Let us consider the following thought experiment. We have a sensor with 1MP and a scene with a single solid color and constant uniform luminosity of 10,000 photons per pixel per exposure.

Let's consider two cases separately and then together:

(A) The ideal sensor with no read noise and 100% QE, and the photon statistics are Poisson. In this case the readings from one pixel to another fluctuate with a standard deviation of 100 photons (variance 10,000) solely due to the photon shot-noise.

(B) An ideal deterministic photon arrival process of exactly 10,000 photons into each pixel. But the pixels now suffer from read-noise of some variance. The output will fluctuate even when the input is perfectly constant.

(C= A&B) Now we have Poisson photon arrivals AND read-noise in the pixels. And the read-noise and Poisson arrivals are independent of each other.

I still do not understand why the resulting variance of (C) is not larger than the variances of (A) or (B).
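
A numerical version of this thought experiment (assuming, for illustration, Gaussian read noise with a standard deviation of 10 e-): the variances of (A) and (B) do add in (C), because shot noise and read noise are independent; the "do not add up" remark concerned the photon-to-photoelectron chain, not read noise.

```python
import numpy as np

rng = np.random.default_rng(5)
lam, read_sigma, npix = 10_000, 10.0, 1_000_000

a = rng.poisson(lam, npix)                                      # (A) shot noise only
b = lam + rng.normal(0.0, read_sigma, npix)                     # (B) read noise only
c = rng.poisson(lam, npix) + rng.normal(0.0, read_sigma, npix)  # (C) both

print("var A:", a.var())   # ~ 10_000
print("var B:", b.var())   # ~ 100
print("var C:", c.var())   # ~ 10_100 = var A + var B
```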


Leo

Leo360 Senior Member • Posts: 1,141
Re: On noise estimation

Joofa wrote:

bobn2 wrote:

Joofa wrote:


bobn2 wrote:
  • the number of photoelectrons generated for a given number of photons impinging on a pixel is not Poisson, but binomial.

That had not occurred to me, but I think that you are right.

Yes, that is true, the number of photoelectrons generated given a fixed number of photons impinged on a pixel is Binomial (considering only the QE) and not Poisson.

No, it gets catered for simply by multiplying the photon count by the QE.

The interesting thing to note regarding the above is that the estimate of the number of photons impinged, obtained by dividing the photoelectron count by the QE (something that is done many times due to lack of other information), is a relatively poor estimator, with its mean square error being 1/QE times the mean square error of the best (L2) estimator.

Interesting. Do you have any more detail on that?

Yes. In the L2 sense, the best estimator of the number of photons given the number of photoelectrons generated is the conditional expectation of the number of photons. Let's call it e1. Our other, 'usual', estimator is given as (number of photoelectrons) / QE. Let's call it e2. We shall take the mean square error (MSE) to be the average value of the squared error between the actual number of photons (which we don't know) and our estimate e1 (or e2) of that number.

  • e1 is unbiased -- i.e., the average value of e1 is equal to the mean number of photons.
  • e1 is conditionally biased -- i.e., the average value of e1 is not equal to the number of photons impinged on a pixel in a particular measurement.
  • e2 is unbiased.
  • e2 is conditionally unbiased.
  • The MSE of e2 = (1 / QE) x the MSE of e1.

Is L2 the best metric in this case? How about alternatives like L1? In that case the optimum L1 estimator will be the conditional median. Another possibility would be a MAP estimator. Which metric is favored by human eye perception?

Leo

Joofa Senior Member • Posts: 2,655
Re: On noise estimation

Leo360 wrote:

Is L2 the best metric in this case?

Not necessarily. But it is usually the one that is most tractable mathematically. If you are a physicist then possibly it is the only thing you know; if you are an electrical engineer then you know its limitations. Furthermore, it offers some intuitive justifications, e.g., thinking that, given a bunch of numbers, the noise estimate is the standard deviation. So the standard deviation will not be our noise estimate if our metric is not quadratic (L2).

How about alternatives like L1? In that case the optimum L1 estimator will be the conditional median.

Usually L1 is better for things such as outlier effects, etc., but it is often more difficult to treat analytically. It usually results in iterative optimization, instead of the 'quick shot' analytical solutions offered by L2.

Another possibility would be a MAP estimator.

Not always a good choice. It can offer a large number of solutions.

Of course, L1, L2 and MAP do coincide and give the same results under certain assumptions, such as symmetric pdfs and a uniform prior.

Which metric is favored by human eye perception?

L1 has better immunity to visual phenomena such as ringing, etc., in filtering problems. I would tend to think that L1 (or something between L1 and L2) would have more desirable qualities than L2 for human visual perception. But it is not easy to work with L1 all of the time.
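
A tiny illustration of the practical difference (the hot-pixel value and counts are invented): the L2 estimate of a constant signal is the mean and the L1 estimate is the median, so a handful of outliers drags the mean while barely moving the median.

```python
import numpy as np

rng = np.random.default_rng(6)
signal = rng.poisson(1000, size=10_000).astype(float)
signal[:50] = 65_535                       # a few 'hot' outlier pixels

print("L2 estimate (mean):  ", signal.mean())      # dragged upward by the outliers
print("L1 estimate (median):", np.median(signal))  # stays near 1000
```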

Joofa Senior Member • Posts: 2,655
Re: Roger's optimum pixel pitch of 5um

Leo360 wrote:

Joofa wrote:

Leo360 wrote:

Eric Fossum wrote:

These processes do not "add up" and make the noise bigger. Actually, the noise is smaller, but so is the signal, and SNR is smaller as well. Still equal to the square root of the number of photoelectrons in an rms sort of way.

This is surprising. Is there a simple explanation why these two noise processes in the same conversion chain do not add up?

Because both are not "in the chain": one is the 'output' noise (photoelectrons) and the other is the 'input' noise (photons).

Those electrons are triggered by the photons, and ideally the e- count should be proportional to the photon count. They are "in a chain", so to speak.

Let us consider the following thought experiment. We have a sensor with 1MP and a scene with a single solid color and constant uniform luminosity of 10,000 photons per pixel per exposure.

Let's consider two cases separately and then together:

(A) The ideal sensor with no read noise and 100% QE, and the photon statistics are Poisson. In this case the readings from one pixel to another fluctuate with a standard deviation of 100 photons (variance 10,000) solely due to the photon shot-noise.

(B) An ideal deterministic photon arrival process of exactly 10,000 photons into each pixel. But the pixels now suffer from read-noise of some variance. The output will fluctuate even when the input is perfectly constant.

(C= A&B) Now we have Poisson photon arrivals AND read-noise in the pixels. And the read-noise and Poisson arrivals are independent of each other.

I still do not understand why the resulting variance of (C) is not larger than the variances of (A) or (B).

The issue that Prof. Fossum mentioned is not read-noise related. It is all photon or photoelectron related, i.e., either the photon shot noise or the photoelectron shot noise.
