# D600 High ISO in DX

Started Nov 23, 2012 | Questions thread
Re: Roger's optimum pixel pitch of 5um

Leo360 wrote:

bobn2 wrote:

Leo360 wrote:

noirdesir wrote:

A single pixel does not have any noise. Display a single pixel fullscreen on your monitor and tell me whether you can see any noise in it.

Saying that a single pixel does not have any noise in it is not quite accurate.

It is exactly accurate.

There is mean number of photons that is supposed to hit this pixel under given exposure.

No there is not. The process of photons hitting pixels is itself a random process. The noise is in the light, not the pixel. Even if the pixel had 100% efficiency and zero read noise, which means that it would count every photon that struck it, there would still be noise. Nothing in nature dictates that there is a mean number that 'should' hit a pixel. In fact nature dictates precisely that the incidence of photons is randomised.

Photon shot-noise is the property of the photon flux and here we are all in a "violent agreement".

The number of photons hitting a given small area of the image plane is a Poisson random process

p(n) = exp(-lambda*t) * (lambda*t)^n / n!

where "t" is the exposure time and "lambda" is the parameter which happens to be "mean photon count per unit time of exposure" (aka expected value) for the area of impact. I will repeat, this lambda is a function of a point on the image plane (both x and y coordinates). A "true" camera independent image is the one constructed from those lambda functions for Red, Green and Blue channels.
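This distribution is easy to check numerically. A minimal stdlib-Python sketch (the mean count lambda*t = 100 is an arbitrary illustrative value, and Knuth's algorithm is just one convenient way to draw Poisson samples) confirming the defining shot-noise property, variance equal to the mean:

```python
import math
import random

def sample_poisson(lam_t, rng):
    """Draw one Poisson(lam_t) photon count via Knuth's algorithm."""
    threshold = math.exp(-lam_t)   # fine for moderate lam_t (no underflow)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

rng = random.Random(0)
lam_t = 100.0                      # illustrative mean photon count per pixel
counts = [sample_poisson(lam_t, rng) for _ in range(20_000)]

mean = sum(counts) / len(counts)
var = sum((c - mean) ** 2 for c in counts) / len(counts)
print(mean, var)                   # both should come out close to 100
```

The mean and variance of the simulated counts both land near lambda*t, which is exactly why shot noise grows as the square root of the signal.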

No, that is not the "true" image. The true image is actually composed of all those individual discrete photon strikes. The lambda function is a mathematical model to allow us to map a continuous function onto what is a discrete one,

Now back to the real world. We do not know light distribution (lambdas) across the image plane.

Because it does not exist. We can discover it by observation of what happened in the real world, which was a number of discrete photon events.

We use a pixel based sensor to record a realization of this random process to estimate the lambda functions.

It does not 'record a realization'. The lambda function, which it estimates by averaging the count of photon events over an area, is an abstraction. The reality was the photon events. If we wished we could use an alternative mechanism to do the same only better. Imagine we had a sensor so advanced that it could record the exact position (to within the limits of the uncertainty principle) of each photon event in time and space, then transmit that with the photon energy to the camera's processor. That processor could reconstruct the output of any sensor of any pixel count and CFA characteristic.

At each pixel we have obtained a sampled photon count which serves as an estimator for the lambda corresponding to this tiny pixel area. The accuracy of the estimate depends on the photon count observed: the higher the photon count, the smaller the error in using it in place of (lambda*t). Everyone notices this fact when observing that there is more photon shot noise in shadows than in highlights. Expose-to-the-right is the empirical way to say the same thing -- you expose so as to obtain as many photons in every pixel as possible without falling off the upper end of the DR in the highlights.
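The shadows-vs-highlights observation can be made concrete: for a Poisson count the standard deviation is the square root of the mean, so the relative error falls as 1/sqrt(N). A small sketch (the photon counts are chosen only for illustration):

```python
import math

# Shot-noise-limited pixel: signal = N photons, noise = sqrt(N),
# so SNR = sqrt(N) and relative error = 1/sqrt(N).
for n in (25, 400, 10_000):        # deep shadow, midtone, near-clipping
    snr = math.sqrt(n)
    rel_err = 100.0 / snr          # percent
    print(f"{n:6d} photons: SNR = {snr:6.1f}, relative error = {rel_err:.1f}%")
```

Going from 25 photons to 10,000 improves the relative error from 20% to 1%, which is the whole statistical case for exposing to the right.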

The lambda is *not* the reality, it is an abstraction. And if you sample the reality with a greater precision, you can always estimate this function more precisely. This is simply a matter of information, the more information you have, the more precise a model you can make.

To summarize -- saying that there is no "mean photon count" (aka expected value) is wrong.

No-one said that there is no "mean photon count"; there is a mean of any set of observations. But 'mean photon count' says nothing about 'expected value', because 'expected' is before the observation and the mean is after.

These quantities are deterministic parameters of the photon count probability distribution. The camera is a device that tries to estimate these parameters (on pixel by pixel basis) from a given random process realization.

Which quantities are you now saying are the 'deterministic parameters of the photon count probability distribution'? What is actually happening is that photons are generated randomly by various quantum processes; they travel, reflect off things and end up hitting the sensor. If you have a *model* of their distribution and trajectory, and knew something about the scene, you could generate a model of the light energy distribution over the surface of the sensor, but any continuous function describing this is an abstraction from the reality.

The number of photons registered by it should correspond to the number of photons which hit it. Shot noise is not because pixels are incorrectly counting the number of photons (that is read noise)

You are ascribing me the words I never said!

I didn't ascribe any words to you.

Photon shot-noise is the property of the photon flux and has nothing to do with photon-to-electron conversion in the pixel. This is why noise coming out of a pixel is nominally broken into shot-noise, read-noise and dark-current components.

Good, we agree. The sensor does not affect the amount of shot noise in the image projected onto it.

On top of that, it is contaminated by read-noise, which makes photon-to-electron conversion noisy. So the value of this pixel reading will not be exactly right.

Read noise is not 'on top of it'. Read noise is why the value of the pixel reading will not be exactly right.

There is also Quantum Efficiency (QE) that makes photons go missing and dark current which makes photons appear from nowhere. And QE of 53% does not mean that exactly 47% of photons go missing on each reading. Photon absorption is in itself a random process. I guess it should be factored into read-noise but I am not sure.

No, it gets catered for simply by multiplying the photon count by the QE.
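Both points are in fact consistent, and a simulation shows why: detecting each arriving photon independently with probability QE ("binomial thinning") turns a Poisson stream into another Poisson stream with mean QE*lambda*t, so the randomness of absorption changes nothing but the scale. A stdlib-Python sketch (lambda*t = 200 and QE = 0.53 are illustrative values):

```python
import math
import random

def sample_poisson(lam_t, rng):
    """Knuth's algorithm for a Poisson(lam_t) draw."""
    threshold = math.exp(-lam_t)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

rng = random.Random(1)
lam_t, qe = 200.0, 0.53            # illustrative photon count and QE

detected = []
for _ in range(10_000):
    arrived = sample_poisson(lam_t, rng)
    # each arriving photon is converted independently with probability qe
    detected.append(sum(1 for _ in range(arrived) if rng.random() < qe))

mean = sum(detected) / len(detected)
var = sum((d - mean) ** 2 for d in detected) / len(detected)
print(mean, var)                   # both close to qe * lam_t = 106
```

The detected counts still satisfy variance = mean, i.e. they are still Poisson, just with the mean scaled by QE.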

In your experiment with a single pixel filling the whole screen you will have a solid color (no spatial variation) but with the intensity slightly off. I think that what you are trying to say is that one cannot observe spatial noise variability (from one pixel to another) when measuring only a single pixel.

Temporal noise variability (i.e. from one frame to the next) is of no interest to the still photographer. It is the spatial variability (or at least the integration of spatial and temporal variability over the exposure time) that we are interested in.

I guess that people who combine consecutive shots in HDR photography may disagree. There are practical instances when you combine several shots of the same scene to boost the photon count.

This is practically increasing the integration time.
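That equivalence is exact for shot noise: the sum of k independent Poisson(m) frame counts has the same distribution as a single Poisson(k*m) count. A quick stdlib-Python check (4 frames of mean 25 photons each versus one exposure of mean 100; the numbers are illustrative):

```python
import math
import random

def sample_poisson(lam_t, rng):
    """Knuth's algorithm for a Poisson(lam_t) draw."""
    threshold = math.exp(-lam_t)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

rng = random.Random(2)
frames, per_frame = 4, 25.0        # 4 short exposures, mean 25 photons each

# stacking: sum the photon counts of the individual frames
stacked = [sum(sample_poisson(per_frame, rng) for _ in range(frames))
           for _ in range(20_000)]

mean = sum(stacked) / len(stacked)
var = sum((s - mean) ** 2 for s in stacked) / len(stacked)
print(mean, var)                   # both close to frames * per_frame = 100
```

The stacked counts have mean and variance near 100, indistinguishable (for shot noise) from one 4x-longer exposure; read noise, of course, is paid once per frame rather than once per stack.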

And it is not only about temporal variability. You can have several identical cameras (an idealized situation) taking identical images in a controlled environment (think of a Nikon factory testing facility). Photon counts in the corresponding pixels will be different because they sample different realizations of the same photon random process and because thermal noise and read-noise are uncorrelated. This is true even for sensors consisting of a single pixel.

They will be different because the actual pattern of photon events is, and always will be, different. That's the reality, and rather simpler than 'sampling different realizations of the same photon random process'.

As far as I understand it the std. dev. is the thing that should scale with the area. And dark current is second order effect for the pixel size we are discussing here.

I have to think about it more, for it sounds counter-intuitive at first glance. With Gaussian random processes, variance is the parameter which scales and combines. I am not sure that it is applicable here, though.

The random process is occurring in the *light*, not the pixel.

Here I was talking about ways to describe read-noise and the random process governing the read-noise is happening in the pixel.
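On how read-noise combines with the shot noise: for independent sources it is indeed the variances that add, so the standard deviations combine in quadrature. A sketch with illustrative numbers (shot sigma of 10 e- and read sigma of 4 e-, both modeled as zero-mean Gaussians purely for simplicity):

```python
import math
import random

rng = random.Random(3)
shot_sigma, read_sigma = 10.0, 4.0   # electrons RMS, illustrative values

# independent noise sources: variances add, sigmas add in quadrature
samples = [rng.gauss(0.0, shot_sigma) + rng.gauss(0.0, read_sigma)
           for _ in range(100_000)]

var = sum(s * s for s in samples) / len(samples)
total_sigma = math.sqrt(var)
print(var, total_sigma)   # var near 10^2 + 4^2 = 116, sigma near 10.8
```

Note how the 4 e- read noise adds well under 1 e- to the 10 e- shot noise: in quadrature, the larger source dominates, which is why read noise matters mostly in the shadows where shot noise is small.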