Pixel Size vs Sensor Efficiency

Started May 18, 2012 | Discussions thread
Great Bustard Forum Pro • Posts: 39,704

Let me preface this thread with a quote from Dr. Fossum:


Generally, image quality improves with pixel count, assuming ideal sensor technology. There is only a sweet spot according to a specific technology. The sweet spot is constantly migrating to higher pixel counts. And I am pretty sure that in our life time, there will be gigapixel sensors.

OK, there are three elements to sensor efficiency:

  1. QE -- Quantum Efficiency (the proportion of light falling on the sensor that is recorded)

  2. Read Noise (the additional noise added by the sensor and supporting hardware)

  3. Saturation Limit (the maximum amount of light the sensor can absorb)

I don't see how QE and pixel size are correlated, given that the D3s and G12 have the same QE, even though the D3s pixels have 17x the area of the G12 pixels. Going back to Dr. Fossum:


Generally it is easier to have higher QE in larger pixels at the same technology generation. But, we are talking about the difference between say 70% QE and 60% QE so really not so significant.

If there is a relationship, it is trivial (a 70% QE is only a 0.22 stop increase over a 60% QE, for example).
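As a sanity check on that arithmetic (a minimal sketch; the 70% and 60% figures come from the Fossum quote above, and the helper name is mine):

```python
import math

def qe_stop_difference(qe_high, qe_low):
    """Gap in stops between two quantum efficiencies.

    A stop is a factor of 2 in recorded light, so the gap
    is simply log2 of the ratio of the two QEs.
    """
    return math.log2(qe_high / qe_low)

print(round(qe_stop_difference(0.70, 0.60), 2))  # 0.22 stops
```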

Next up is read noise and saturation. The DR (Dynamic Range) is the number of stops from the noise floor (the read noise would be the lowest meaningful measure for a noise floor) to the saturation limit.

If a pixel design were perfectly scaled, there would be no change in DR / pixel, since both the read noise and saturation limit would scale in proportion. However, since the proper measure of IQ in photography is made not at the pixel level but at the image level, every quadrupling of the pixel count would result in a one stop increase in DR / area.

For example, let's consider a 2x2 pixel with a read noise of 12 electrons and a saturation of 49000 electrons (DR = 12 stops) and a perfectly scaled 1x1 pixel with a read noise of 3 electrons and a saturation of 12300 electrons (DR = 12 stops).

The saturation of four 1x1 pixels is the same as one 2x2 pixel (12300 x 4 ≈ 49000). Read noise does not add in a linear manner. Noise is a standard deviation, and the standard deviation of random uncorrelated samples adds in quadrature. Thus, the read noise of the four 1x1 pixels is sqrt (4 x 3²) = 6 electrons.

Thus, the DR for four 1x1 pixels (same area as one 2x2 pixel) is log2 (49000 / 6) = 13 stops -- one stop greater. Of course, this greater DR presumes that we can no longer resolve the detail advantage afforded by the smaller pixels. So, we either have the detail advantage of four times the pixel count, or a stop more DR at the same level of detail.
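The whole worked example above can be replayed in a few lines (a sketch using the post's numbers; results are rounded to whole stops, as in the text):

```python
import math

def dr_stops(saturation, read_noise):
    """Dynamic range in stops: log2 of saturation over the noise floor."""
    return math.log2(saturation / read_noise)

# One 2x2 pixel vs. four perfectly scaled 1x1 pixels (same total area).
big_sat, big_read = 49000, 12
small_sat, small_read = 12300, 3

# Per-pixel DR is unchanged by perfect scaling (~12 stops each).
print(round(dr_stops(big_sat, big_read)))      # 12
print(round(dr_stops(small_sat, small_read)))  # 12

# Over the same area: saturation adds linearly, read noise in quadrature.
area_sat = 4 * small_sat                   # 49200, ~ one 2x2 pixel
area_read = math.sqrt(4 * small_read**2)   # 6 electrons
print(round(dr_stops(area_sat, area_read)))  # 13 -- one stop more
```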

Thus, we can see that for equally efficient pixels, more pixels gives us greater IQ. The question now comes down to how pixel size affects read noise and saturation, as perfect scaling is not how manufacturers tend to design their sensors.

So here's the big question (finally): if smaller pixels result in greater IQ when perfectly scaled, one would surely think the manufacturers would scale them down if they could. Since they don't, we are left with two possibilities: either manufacturers already make the smallest pixels they can, or there is a disadvantage to going smaller -- namely, that read noise scales down more slowly than saturation.

Thus, the question is what the relationship is between pixel size, read noise, and saturation for a given sensor tech. I see a hand in the back. Please, tell the class what you think.
