more pixels are better!

Started Dec 14, 2008 | Discussions thread
cpw Regular Member • Posts: 313
can happen if done well

Iforgetwhat8was4 wrote:

or are they?
...

The answer is not simple because of the several variables involved. If the camera maker increases MP in a responsible way and also improves other factors (QE, optics transmission, read noise), then it can be beneficial. If, however, they only increase MP alone, then the resulting per-area SNR will be about the same in shot noise dominated exposures, but will suffer in read noise dominated situations.

Suppose we have a given overall sensor size (e.g. FF), and we've decided that a resolution area A is important to us. Let's say this area A contains n pixels, and we collect a signal S (in electron count) in this area A for a given exposure. If the lighting is uniform across A, then each pixel will collect a signal of S/n.

If the signal is large enough that shot noise dominates, then the per-pixel noise would be sqrt(S/n). Per area A, the signal is just S, and the per-pixel noise terms rss (root sum square) to produce sqrt(n * S/n) = sqrt(S) of noise, so the resulting SNR = sqrt(S), which is independent of n and therefore of the MP count.
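Here's a quick numeric sketch of that (Python, with an assumed electron count just for illustration): it splits the same total signal S over more and more pixels and shows the per-area SNR staying at about sqrt(S) regardless of n.

```python
import numpy as np

rng = np.random.default_rng(0)

S = 40_000            # total electrons collected over the fixed area A (assumed value)
for n in (1, 4, 16):  # number of pixels packed into the same area A
    # shot noise only: each pixel is a Poisson draw with mean S/n, then sum back over A
    pixels = rng.poisson(S / n, size=(100_000, n))
    area_signal = pixels.sum(axis=1)
    snr = area_signal.mean() / area_signal.std()
    print(f"n={n:2d}  area SNR ~ {snr:6.1f}   (sqrt(S) = {np.sqrt(S):.1f})")
```

All three lines print roughly 200, i.e. sqrt(40,000), which is the point: in the shot noise limited case the MP count drops out.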

Suppose now we're in a low-signal area of a high-ISO image where the read noise dominates. The per-pixel signal is still S/n, but now the per-pixel noise is Nr (the read noise). Per area A, the signal is still S, but the noise is the rss of the n read noises, which becomes sqrt(n)*Nr, so now the SNR = S/(sqrt(n)*Nr) and depends on n. If we only increase n, the SNR will suffer.
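And a matching sketch for the read-noise-limited end (Python again; S and Nr are assumed numbers, not any particular camera): per area A the SNR falls as 1/sqrt(n) when only n changes.

```python
import numpy as np

S, Nr = 200.0, 5.0        # total electrons over area A and per-pixel read noise (assumed values)
for n in (1, 4, 16):      # pixel count within area A
    snr = S / (np.sqrt(n) * Nr)   # read-noise-dominated SNR over the area
    print(f"n={n:2d}  area SNR = {snr:5.1f}")
```

That prints 40, 20, 10: quadrupling the pixel count in the same area halves the deep-shadow SNR if nothing else improves.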

A responsible camera maker would be aware of this and try to maintain SNR by changing other factors. So what are these other factors? To see this, we need to expand upon S. So S = L*A*(pi/(4*F#^2))*t*T*QE, where L = average scene radiance (lighting, averaged across the wavelength band), t = exposure time, T = optics transmission, and QE = sensor quantum efficiency. Some of these variables are wavelength dependent, but for simplicity I'll just call them averages across the wavelength band. We're not concentrating on factors such as F#, L, and t here; just the others, i.e. T*QE. Shot noise dominated SNR goes as sqrt(T*QE), and read noise dominated SNR goes as (T*QE/Nr)*(1/sqrt(n)). If the camera maker manages to compensate for the (1/sqrt(n)) factor in the read noise case by increasing T (improving on Bayer filter transmission), increasing QE (say by improving the fill factor, etc.), or lowering Nr, then that can offset the increased n. The shot noise case can also benefit (through the sqrt(T*QE) factor).
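To make the compensation idea concrete, here's a small sketch (Python) using the proportionality above. The T, QE, Nr and MP numbers are made up for illustration, not taken from real sensors; the point is only that gains in (T*QE/Nr) can buy back the sqrt(n) penalty.

```python
import numpy as np

# Read-noise-limited area SNR is proportional to T*QE / (sqrt(n) * Nr);
# the other factors (L, A, F#, t) are held fixed here.
def snr_factor(T, QE, Nr, n):
    return T * QE / (np.sqrt(n) * Nr)

old = snr_factor(T=0.80, QE=0.35, Nr=5.0, n=12e6)   # hypothetical older 12 MP sensor
new = snr_factor(T=0.90, QE=0.45, Nr=4.0, n=24e6)   # hypothetical newer 24 MP sensor

print(f"new/old SNR ratio = {new / old:.2f}")       # ~1.28 here, i.e. the gains beat the sqrt(2) hit
```

So doubling n costs a factor of sqrt(2) (about 1.41) in read-noise-limited SNR, and in this made-up example the improved transmission, QE and read noise more than cover it.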

So technology moves along; I don't know that all camera makers trade off all of these variables that way, but I would hope so. I've also tried to keep this simple without going into the Bayer interpolation effects (mainly because I don't understand what would happen, and the math would get too involved); my level of understanding is just to say that it's always better to start with a higher SNR raw file.

Also, that's not to say that we couldn't keep these technology advances (higher T, QE and lower Nr) and apply them to a lower MP camera! But I don't know that things move that way. Well, those are my thoughts on this.

Take care,

Chris
