D600 High ISO in DX

Started Nov 23, 2012 | Questions thread
Leo360 Senior Member • Posts: 1,141
Re: Roger's optimum pixel pitch of 5um

bobn2 wrote:

Leo360 wrote:

noirdesir wrote:

A single pixel does not have any noise. Display a single pixel fullscreen on your monitor and tell me whether you can see any noise in it.

Saying that a single pixel does not have any noise in it is not quite accurate.

It is exactly accurate.

There is a mean number of photons that is supposed to hit this pixel under a given exposure.

No there is not. The process of photons hitting pixels is itself a random process. The noise is in the light, not the pixel. Even if the pixel had 100% efficiency and zero read noise, which means that it would count every photon that struck it, there would still be noise. Nothing in nature dictates that there is a mean number that 'should' hit a pixel. In fact nature dictates precisely that the incidence of photons is randomised.

Photon shot noise is a property of the photon flux, and here we are all in "violent agreement".

The number of photons hitting a given small area of the image plane is a Poisson random process

p(n) = exp(-lambda*t) * (lambda*t)^n / n!

where "t" is the exposure time and  "lambda" is the parameter which happens to be "mean photon count per unit time of exposure" (aka expected value) for the area of impact. I will repeat,  this lambda is a function of a point on the image plane (both x and y coordinates). A "true" camera independent image is the one constructed from those lambda functions for Red, Green and Blue channels. Now back to the real world. We do not know light distribution (lambdas) across the image plane. We use a pixel based sensor to record a realization of this random process to estimate the lambda functions. At each pixel we have obtained a sampled photon count which served as an estimator for the lambda corresponding to this tiny pixel area. The accuracy of the estimation depends on the the photon count observed. The higher the photon count the lesser error in using it in place of (lamda*t).  Everyone notices this fact while observing that there is more photon-shot noise in shadows than in highlights. Expose-to-the-right is the empirical way to say the same thing -- you expose so much as to obtain as many photons in every pixel as possible without falling off the DR upper end in highlights.

To summarize -- saying that there is no "mean photon count" (aka expected value) is wrong. These quantities are deterministic parameters of the photon-count probability distribution. The camera is a device that tries to estimate these parameters (on a pixel-by-pixel basis) from a given realization of the random process.

The number of photons registered by it should correspond to the number of photons which hit it. Shot noise is not because pixels are incorrectly counting the number of photons (that is read noise).

You are ascribing to me words I never said! Photon shot noise is a property of the photon flux and has nothing to do with the photon-to-electron conversion in the pixel. This is why the noise coming out of a pixel is nominally broken into shot-noise, read-noise, and dark-current components.
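For reference, the standard way to write that decomposition down is to add the independent components in quadrature (variances add). A minimal sketch with made-up numbers in electrons:

```python
# Independent noise sources combine in quadrature: variances add.
# All values below are assumed, illustrative numbers in electrons (e-).
import math

signal     = 1600.0                  # mean photoelectrons collected
sigma_shot = math.sqrt(signal)       # Poisson shot noise: sqrt(mean) = 40 e-
sigma_read = 5.0                     # assumed read noise
sigma_dark = 3.0                     # assumed dark-current (thermal) noise

sigma_total = math.sqrt(sigma_shot**2 + sigma_read**2 + sigma_dark**2)
print(f"total noise ~ {sigma_total:.1f} e-, SNR ~ {signal / sigma_total:.1f}")
```

At this signal level the shot-noise term dominates; at low counts the read-noise term takes over.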

On top of that, it is contaminated by read noise, which makes the photon-to-electron conversion noisy. So the value of this pixel reading will not be exactly right.

Read noise is not 'on top of it'. Read noise is why the value of the pixel reading will not be exactly right.

There is also Quantum Efficiency (QE), which makes photons go missing, and dark current, which makes photons appear from nowhere. And a QE of 53% does not mean that exactly 47% of the photons go missing on each reading. Photon absorption is itself a random process. I guess it should be factored into read noise, but I am not sure.
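On the QE point, the standard result is that discarding each photon independently with probability (1 - QE), i.e. binomial thinning, turns a Poisson stream into another Poisson stream with mean QE*lambda*t, so QE scales the mean down rather than adding a separate noise term. A quick numerical check with assumed numbers:

```python
# Binomial thinning of a Poisson process: if each arriving photon is
# absorbed independently with probability QE, the absorbed count is
# again Poisson, with mean QE * lambda * t. Illustrative numbers only.
import numpy as np

rng = np.random.default_rng(1)
lam_t, qe = 1000.0, 0.53             # assumed mean arrivals and QE

arrivals = rng.poisson(lam_t, size=200_000)
absorbed = rng.binomial(arrivals, qe)   # thin each realization by QE

print(f"mean absorbed ~ {absorbed.mean():.1f}  (theory {qe * lam_t:.1f})")
print(f"var  absorbed ~ {absorbed.var():.1f}  (Poisson => var = mean)")
```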

In your experiment with a single pixel filling the whole screen you will have a solid color (no spatial variation) but with the intensity slightly off. I think what you are trying to say is that one cannot observe spatial noise variability (from one pixel to another) when measuring only a single pixel.

Temporal noise variability (i.e. from one frame to the next) is of no interest to the still photographer. It is the spatial variability (or at least the integration of spatial and temporal variability over the exposure time) that we are interested in.

I guess that people who combine consecutive shots for HDR photography may disagree. There are practical instances when you combine several shots of the same scene to boost the photon count.
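The stacking case is easy to quantify: averaging k independent frames leaves the mean signal unchanged but divides the noise std. dev. by sqrt(k). A sketch under the same idealized, shot-noise-only assumptions:

```python
# Averaging k independent frames improves SNR by sqrt(k).
# Idealized: shot noise only, assumed mean count per frame.
import numpy as np

rng = np.random.default_rng(2)
mean_count, k = 100.0, 16

frames  = rng.poisson(mean_count, size=(k, 100_000))
single  = frames[0]
stacked = frames.mean(axis=0)        # per-realization average of k frames

print(f"SNR single  ~ {single.mean() / single.std():.1f}")
print(f"SNR stacked ~ {stacked.mean() / stacked.std():.1f}  "
      f"(expected gain sqrt({k}) = {np.sqrt(k):.1f})")
```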

And it is not only about temporal variability. You can have several identical cameras (an idealized situation) taking identical images in a controlled environment (think of a Nikon factory testing facility). The photon counts in the corresponding pixels will be different, because they sample different realizations of the same photon random process and because the thermal noise and read noise are uncorrelated. This is true even for sensors consisting of a single pixel.

As far as I understand it, the std. dev. is the thing that should scale with the area. And dark current is a second-order effect for the pixel sizes we are discussing here.

I have to think about it more, for it sounds counter-intuitive at first glance. With Gaussian random processes, variance is the parameter that scales and combines. I am not sure that it is applicable here, though.
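For independent contributions the combining rule is indeed that variances add, so the std. dev. grows as the square root. Applied to area: binning n independent pixels multiplies both the signal and the variance by n, so the SNR improves by sqrt(n). A sketch with assumed numbers:

```python
# Summing n independent pixels: signal scales as n, variance scales
# as n, so std. dev. scales as sqrt(n) and SNR improves by sqrt(n).
import numpy as np

rng = np.random.default_rng(3)
mean_per_pixel, n = 400.0, 4         # assumed per-pixel mean; 2x2 bin

pixels = rng.poisson(mean_per_pixel, size=(n, 100_000))
binned = pixels.sum(axis=0)

print(f"per-pixel SNR ~ {np.sqrt(mean_per_pixel):.1f}")
print(f"binned SNR    ~ {binned.mean() / binned.std():.1f}  "
      f"(expected gain sqrt({n}) = {np.sqrt(n):.1f})")
```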

The random process is occurring in the light, not the pixel.

Here I was talking about ways to describe read noise, and the random process governing read noise happens in the pixel.

Leo
