Tristan Cope wrote:

aclo wrote:

Tristan Cope wrote:

Yes. Let's take the extreme case and say that one photon is falling on each sensor element. Signal = 1, noise = sqrt(1) = 1. Signal and noise are equal (essentially, the photon flux per sensor element is entirely random regardless of the signal). This is a quantum effect.
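To put numbers on this claim, here is a minimal Python sketch (assuming numpy; the array size is an arbitrary choice) simulating Poisson photon counts at a mean of one photon per element:

```python
import numpy as np

rng = np.random.default_rng(0)

# Photon counts on one million sensor elements, mean 1 photon each
# (photon arrivals follow Poisson statistics).
counts = rng.poisson(lam=1.0, size=1_000_000)

print(counts.mean())  # ~1.0 -- the signal
print(counts.std())   # ~1.0 -- the shot noise, sqrt(1) = 1
```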

That the signal and noise have the same magnitude is irrelevant, as I explained before (and others do later). And it also happens with classical objects being randomly deposited onto a surface (such as epitaxial growth), so it's not quantum (except inasmuch as the photons being discrete is a "quantum" effect). Anyway, this is off topic.

My understanding is that shot noise is a quantum effect, caused by the random nature of quantum-level events.

It also happens with raindrops, or indeed any kind of particles randomly deposited on some surface. On the other hand, that photons are discrete is indeed a "quantum" effect. Anyway, this isn't really important; it's just semantics.

OK, think of this: imagine a photosite split into 10x10 smaller squares. Suppose I "detect" photons separately in each, and there is no read noise. Do you not see that shot noise will be unaffected by whether (a) I write out all 100 values and then add them, or (b) I have a computer (or analog counter) do it? (Think about this a bit before dismissing it.) [And the photons aren't coherent, so we really can argue as above.]
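A quick numerical check of this thought experiment, as a minimal Python sketch (assuming numpy; the per-square photon rate is an arbitrary choice): summing 100 separately counted sub-squares gives the same signal and shot noise as counting the whole photosite in one go.

```python
import numpy as np

rng = np.random.default_rng(0)

lam = 5.0         # mean photons per sub-square (arbitrary choice)
trials = 100_000  # simulated exposures

# (a) count each of the 100 sub-squares separately, then add them
sub_counts = rng.poisson(lam, size=(trials, 100))
summed = sub_counts.sum(axis=1)

# (b) count the whole photosite as a single bin
whole = rng.poisson(100 * lam, size=trials)

# Both give signal ~ 100*lam = 500 and shot noise ~ sqrt(500) ~ 22.4
print(summed.mean(), summed.std())
print(whole.mean(), whole.std())
```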

I entirely accept this, as long as the binning is done in hardware before any signal amplification. Then it doesn't matter where the photons fall within the binned area, as long as they are counted.

I hope the idea gets across with my example.

Yes, thanks.

I still have a problem with the idea of binning in software, though - I don't see how this is really any different to downsizing a bitmap.

OK, if I have a signal of magnitude S (in one pixel; the units don't matter so long as they are the same for all quantities), then there are generically two different contributions to the noise: one that is constant (i.e. independent of the signal), call it R, and one that is proportional to the square root of the signal, call it N, with magnitude N = sqrt(S).

Now the idea (from what I understand from reading a few links and posts; I in fact have no clue about electronics) is that if I bin nxn pixels in hardware then R will not change; of course, S will increase to n^2*S, while the shot noise will become n*N = n*sqrt(S). So the total noise is sqrt(R^2 + n^2*S) (uncorrelated noise sources add like this, under some assumptions which are valid here).

On the other hand, if I bin them after readout, then the signal is again n^2*S, while the noise will be sqrt(n^2*R^2 + n^2*S) = n*sqrt(R^2 + S), because now each of the n^2 pixels contributes its own read noise.

So you see, if I go to smaller pixels, such that I must bin nxn of them to get the same size as the larger ones, then, if I do it in software, I'd need n times smaller read noise per pixel to get the same signal-to-noise ratio. This ignores the CFA, and also aliasing issues.
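To make the comparison concrete, a minimal Python sketch (assuming numpy; the values S = 100, R = 5 and n = 2 are arbitrary choices) simulates both cases. With these numbers the formulas above predict sqrt(R^2 + n^2*S) = sqrt(425) ~ 20.6 for hardware binning and sqrt(n^2*R^2 + n^2*S) = sqrt(500) ~ 22.4 for software binning.

```python
import numpy as np

rng = np.random.default_rng(0)

S, R, n = 100.0, 5.0, 2   # signal per pixel, read noise, binning factor
trials = 200_000          # simulated exposures

# Photon counts for the n*n small pixels in each exposure
photons = rng.poisson(S, size=(trials, n * n)).astype(float)

# Hardware binning: charge is summed first, then read out once,
# so the read noise R is added only once per binned pixel.
hw = photons.sum(axis=1) + rng.normal(0.0, R, size=trials)

# Software binning: each small pixel is read out with its own
# read noise, and the n*n values are added afterwards.
sw = (photons + rng.normal(0.0, R, size=(trials, n * n))).sum(axis=1)

print(hw.std())  # ~ sqrt(R**2 + n**2 * S)        = sqrt(425) ~ 20.6
print(sw.std())  # ~ sqrt(n**2 * R**2 + n**2 * S) = sqrt(500) ~ 22.4
```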

By the way, it's no different than downsizing a bitmap.