Pixel density - can the playing field be leveled???

Started Jun 6, 2009 | Discussions thread
Daniel Browning
Senior Member • Posts: 1,058
[1/6] Myth busted: small pixels bad, 4 legs good - part 1
In reply to briander, Jun 7, 2009

[Part 1 out of 6.]

It is often stated that an image sensor with small pixels will create digital images with worse performance characteristics (more noise, less dynamic range, lower sensitivity, worse color depth, more lens aberrations, worse diffraction, and more motion blur) than those created by a sensor with large pixels. I disagree.

One line of reasoning used by proponents of that position is that a single pixel, in isolation, has lower performance when reduced in size; therefore, a sensor full of small pixels must create images with worse performance than a same-sized sensor full of large pixels. But that is missing the forest for the trees: in reality the resulting images are generally the same, as indicated in a paper by G. Agranov at the 2007 International Image Sensor Workshop:

http://www.imagesensors.org/Past%20Workshops/2007%20Workshop/2007%20Papers/079%20Agranov%20et%20al.pdf

Again, it is possible for a small pixel sensor to have worse performance per pixel, yet the same performance when the image is actually displayed or used for the same purpose as one from a large pixel sensor. This fact may be unbelievable, or at least counterintuitive, to many people who work with digital images, but I believe that is only because of the following five types of mistakes that are frequently made in image analysis:

  • Unequal spatial frequencies

  • Unequal sensor sizes

  • Unequal processing

  • Unequal expectations

  • Unequal technology

Spatial Frequency

The first category, spatial frequency, is the most important and fundamental element of image analysis as it pertains to pixel size. It indicates the level of detail under analysis: fine details (high spatial frequencies) or coarse details (low spatial frequencies). It is often ignored completely, and at other times poorly understood, but it always has a tremendous impact on the result of any comparison or performance analysis.

The great majority of image analysis is fundamentally based on the performance of a single pixel, so having worse performance per pixel and the same performance in the actual image, where it matters, would seem a contradiction. It isn't.

Performance scales with spatial frequency. In other words, the important performance characteristics of a digital image, including noise, dynamic range, color depth, diffraction, aberrations, and motion blur, are all functions of spatial frequency. Therefore, for any given sensor, analysis at higher spatial frequencies will never show better performance than analysis at lower spatial frequencies.
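
To illustrate with a toy example (a rough Python sketch assuming pure photon shot noise and nothing else; the numbers are made up, not measurements of any real sensor), box-averaging a noisy image down to lower and lower spatial frequencies shows the noise falling at each step:

import numpy as np

rng = np.random.default_rng(0)
img = rng.poisson(100.0, (1024, 1024)).astype(float)  # shot noise around a mean of 100

def bin2x2(a):
    # Average 2x2 blocks: halves the sampling rate, moving to a lower spatial frequency.
    return a.reshape(a.shape[0] // 2, 2, a.shape[1] // 2, 2).mean(axis=(1, 3))

for level in range(4):
    print("scale 1/%d: noise std = %.2f" % (2 ** level, img.std()))
    img = bin2x2(img)

Each halving of the sampling rate cuts the noise standard deviation roughly in half, which is exactly the scaling described above.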

Every image sensor has a sampling rate, which sets its Nyquist frequency: half the sampling rate, and the highest spatial frequency the sensor can record. But every resulting digital image also contains information at all lower spatial frequencies. For example, Pixel A may natively sample at 30 lp/mm, yet the resulting digital image also contains information corresponding to 20 lp/mm and 10 lp/mm: larger, coarser details. Pixel B may be much smaller and natively sample at 60 lp/mm, but the resulting image still contains all the information of Pixel A's image; it simply has additional information on top.
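
To put numbers on that (a small Python sketch; the pixel pitches here are hypothetical, chosen only to land near 30 and 60 lp/mm):

def nyquist_lp_per_mm(pixel_pitch_um):
    # One line pair needs two pixels, so Nyquist = 1 / (2 * pitch).
    pitch_mm = pixel_pitch_um / 1000.0
    return 1.0 / (2.0 * pitch_mm)

print(nyquist_lp_per_mm(16.7))  # Pixel A: ~30 lp/mm
print(nyquist_lp_per_mm(8.3))   # Pixel B: ~60 lp/mm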

The 100% crop is the most common way to compare image sensors, but it is very misleading when the sensors have different pixel sizes. The reason is that 100% means the maximum spatial frequency, and different pixel sizes sample different spatial frequencies. So a 100% crop means a higher spatial frequency for a small pixel sensor than it does for a big pixel sensor, and the comparison ends up examining completely different portions of the image: a 100% crop of a small pixel image might show a single leaf, whereas a 100% crop of a large pixel image would show the entire shrub. It's a nonsensical comparison. Failing to account for that fundamental difference is one of the most common flaws in such comparisons.

This type of flaw is much rarer in optics analysis, where it is widely understood that standard optical measurements, such as MTF, are naturally and fundamentally a function of spatial frequency. If the MTF of lens A is 30% at 10 lp/mm, and the MTF of lens B is 20% at 100 lp/mm, that does not mean lens A is superior; in fact, the opposite is more likely true. It's necessary to measure the lenses at the same frequency, whether 10 lp/mm or 100 lp/mm, before drawing conclusions; it's very likely that lens B has a much higher MTF at 10 lp/mm. Comparing MTF without regard for spatial frequency is so obviously wrong that very few people ever make that mistake. Yet those same people do not realize they are making the exact same error when they compare image sensors with 100% crops: they are comparing each sensor at its respective Nyquist, but the Nyquist frequencies differ, so they are not comparing at the same spatial frequency.
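
For the curious, here is a toy numerical version of that lens comparison (assuming, purely for illustration, a Gaussian MTF falloff with a single scale parameter; real lenses are not this simple):

import math

def mtf(f, f0):
    # Toy model: MTF(f) = exp(-(f/f0)^2), with f0 setting the falloff scale.
    return math.exp(-(f / f0) ** 2)

def f0_from_point(f, m):
    # Solve exp(-(f/f0)^2) = m for f0.
    return f / math.sqrt(math.log(1.0 / m))

f0_a = f0_from_point(10.0, 0.30)   # lens A: 30% at 10 lp/mm
f0_b = f0_from_point(100.0, 0.20)  # lens B: 20% at 100 lp/mm

print("lens A at 10 lp/mm: %.0f%%" % (100 * mtf(10.0, f0_a)))    # 30%
print("lens B at 10 lp/mm: %.0f%%" % (100 * mtf(10.0, f0_b)))    # ~98%
print("lens A at 100 lp/mm: %.0f%%" % (100 * mtf(100.0, f0_a)))  # ~0%

In this toy model, lens B, which looked worse at face value, is far better once both lenses are measured at the same spatial frequency.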

Take, for example, a 100% crop comparison of a high resolution image (e.g. 15 MP) with a low resolution image (e.g. 6 MP). The high resolution image contains details at a very high spatial frequency (fine details), whereas the low resolution image is at a lower spatial frequency (larger details). Higher spatial frequencies have higher noise power than lower spatial frequencies. But at the same spatial frequency, the noise, too, is the same.
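
Here is a sketch of that last point (again a toy shot-noise simulation, assuming equal sensor area, equal quantum efficiency, and no read noise; none of these numbers describe a real camera):

import numpy as np

rng = np.random.default_rng(0)
photons = 400.0  # photons falling on the area of one big pixel

big = rng.poisson(photons, (512, 512)).astype(float)            # big pixels
small = rng.poisson(photons / 4.0, (1024, 1024)).astype(float)  # 4x as many pixels, 1/4 the photons each

def snr(a):
    return a.mean() / a.std()

print("per-pixel SNR, big:   %.1f" % snr(big))    # ~20
print("per-pixel SNR, small: %.1f" % snr(small))  # ~10, worse per pixel

# Downsample the small-pixel image to the big-pixel sampling rate by summing 2x2 blocks.
small_binned = small.reshape(512, 2, 512, 2).sum(axis=(1, 3))
print("same-frequency SNR, small: %.1f" % snr(small_binned))    # ~20 again

Per pixel, the small pixels look twice as noisy; at the same spatial frequency, the two sensors are indistinguishable.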

[Continued in part 2.]
--
Daniel
