JimKasson wrote:
Here are some thoughts on how many sensels are sufficient, based on an approach developed for other kinds of imaging systems. It may have something to teach us about the topic of this thread. I have been reading a book by Robert Fiete entitled Modeling the Imaging Chain of Digital Cameras:
http://spie.org/Publications/Book/868276
There’s a chapter on balancing the resolution of the lens and the sensor, which introduces the concept of system Q, defined as:
Q = 2 * fcs / fco, where fcs is the cutoff frequency of the sampling system (sensor) and fco is the cutoff frequency of the optical system (lens). An imaging system is in some sense "balanced" when the two cutoff frequencies are equal, and thus Q = 2.
The assumptions of the chapter where Q is discussed are probably appropriate for the kinds of surveillance systems the author works with, but they are not usually met in the photographic systems that most of us work with:
1) Monochromatic sensors (no CFA)
2) Diffraction-limited optics
3) No anti-aliasing filter
Under these assumptions, the cutoff frequency of the sensor is half the inverse of the sensel pitch; we get that from Nyquist. To get the cutoff frequency of the lens, we need to define the point where diffraction prevents us from detecting whether we're looking at one point or two. Lord Rayleigh came up with this formula in the 19th century:
R = 1.22 * lambda * N, where lambda is the wavelength of the light, and N is the f-stop.
Fiete uses a criterion that makes it harder on the sensor, the rounded Sparrow criterion:
S = lambda * N
Or, in the frequency domain, fco = 1 / (lambda * N)
Thus Q is:
Q = lambda * N / pitch
I figure that some of the finest lenses that we use are close to diffraction-limited at f/8. If that’s true, for 0.5 micrometer light (in the middle of the visible spectrum), a Q of 2 implies:
Pitch (in micrometers) = N / 4
At f/8 we want a 2-micrometer pixel pitch, finer than any currently available sensor sized at micro 4/3 or larger. A full frame sensor with that pitch would have 216 megapixels.
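The arithmetic above can be checked with a few lines of Python (a sketch of the post's numbers, not from the book):

```python
# Q = 2 arithmetic for a monochrome, diffraction-limited, no-AA system,
# using the post's assumed values for wavelength and f-number.
lam = 0.5e-6   # wavelength in metres (0.5 micrometres, mid-visible)
N = 8.0        # f-number

# Q = lambda * N / pitch, so Q = 2 implies pitch = lambda * N / 2:
pitch = lam * N / 2.0
print(pitch * 1e6)   # 2.0 micrometres

# Full-frame sensor (36 mm x 24 mm) at that pitch:
mp = (36e-3 / pitch) * (24e-3 / pitch) / 1e6
print(mp)   # 216.0 megapixels
```

Halving the pitch again (as a Bayer correction factor of 2 would require) quadruples the pixel count.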
You can try to come up with a correction to take the Bayer array into account. Depending on the assumptions, the correction factor should be between 1 and some number greater than 2, but in any case the pixel pitch should be at least as fine as for a monochromatic sensor. With a correction factor of 2, we're talking about an 800+ MP full frame sensor!
As an aside, note that you don’t need an AA filter for a system with a Q of 2, since the lens diffraction does the job for you. That’s not true with a Bayer CFA.
I have several questions:
1) Is any of this relevant to our photography?
2) Have I made a math or logical error?
3) At what aperture do our best lenses become close to diffraction-limited?
For details about the Sparrow criterion: http://blog.kasson.com/?p=5720
For more details on calculating Q: http://blog.kasson.com/?p=5742
For ruminations on corrections for a Bayer CFA: http://blog.kasson.com/?p=5752
Hi Jim, good post. I don't have the book: what does he mean by cutoff frequency?
If he means when the relative MTF curve hits a first zero then:
1) fcs would be equal to the inverse of pitch, as opposed to half of it; and
2) fcd (d for diffraction) would indeed be 1 / (lambda * N) - without the 1.22 factor, which would instead indicate the first zero of the Airy PSF
So in this scenario the two frequencies would be the same when pitch = lambda * N. For a lambda of 0.5 microns and f/8, pitch = 4 microns. Is this a 'balanced' system?
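A one-liner to confirm the first-zero balancing arithmetic (again just Python restating the numbers above):

```python
# First-zero criterion: fcs = 1/pitch equals fcd = 1/(lambda*N) when
# pitch = lambda * N. Same example values as in the post.
lam_um = 0.5   # wavelength in micrometres
N = 8.0
pitch_um = lam_um * N
print(pitch_um)   # 4.0 micrometres
```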
For photographic purposes I would venture to say that the MTF=0 criterion is perhaps less relevant than it could be in other applications, the main reason being that the system is unbalanced (meaning that in this ideal example it is diffraction limited) everywhere except at the very end of the frequency scale. Said graphically, for a perfect monochrome system with lambda = 0.5 microns, pitch = 4 microns and f/8:
[Figure: MTF curves for the system above. Balanced? Only at the extremes with these parameters; otherwise always diffraction limited.]
Imho a balanced system for us 'togs should instead require that the optics and the sensor be matched at relevant contrasts/frequencies. It's what I clumsily attempted to say in this recent post, suggesting MTF50 as the balancing criterion.
[Figure: MTF curves for a perfect monochrome system with square pixels, lambda = 0.5 microns, pitch = 6 microns, f/8.]
With these parameters (lambda = 0.5 microns, pitch = 6 microns, f/8) diffraction is less dominant at real frequencies below Nyquist (0.5 cycles/pixel). Coincidentally this is approximately the pixel pitch of a D600/A7, both of which, however, have AA filters.
Another criterion for matching/balancing the optics to the sensor could be to have the MTF curves cross over earlier, say at Nyquist (perhaps what you were thinking of in your post?). That would happen with a pitch of about 6.9 microns at f/8 under the assumptions above, D4/D4s territory.
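The 6.9-micron figure can be reproduced numerically. A sketch, assuming the standard diffraction MTF of an ideal circular aperture and a square-pixel (sinc) aperture MTF, solved by bisection; the choice of bisection and the search bracket are mine, not from the post:

```python
import math

lam = 0.5e-3   # wavelength in mm (0.5 micrometres)
N = 8.0
fc = 1.0 / (lam * N)          # diffraction cutoff, cycles/mm (250 at f/8)

def mtf_diffraction(f):
    """Diffraction MTF of an ideal circular aperture at frequency f (cycles/mm)."""
    if f >= fc:
        return 0.0
    x = f / fc
    return (2.0 / math.pi) * (math.acos(x) - x * math.sqrt(1.0 - x * x))

# Square-pixel aperture MTF at Nyquist is sinc(0.5) = 2/pi, independent of pitch.
target = math.sin(math.pi * 0.5) / (math.pi * 0.5)

# Bisect on pitch (mm) to find where the diffraction MTF crosses the pixel
# MTF exactly at Nyquist, f = 1/(2*pitch).
lo, hi = 1e-3, 20e-3          # bracket: 1 to 20 micrometres
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if mtf_diffraction(1.0 / (2.0 * mid)) > target:
        hi = mid              # diffraction curve still above pixel MTF: pitch too big
    else:
        lo = mid
print(0.5 * (lo + hi) * 1e3)  # about 6.9 micrometres
```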
Once we start putting reality into the equation everything needs to be recalculated, because Bayer/OLPF effects need to be added to the sensor model and aberrations/defocus/blur to the optics'. For instance, stretching things a bit, we could say that the last graph applies to sensors with AA filters in the 4 micron pitch range.*
Jack
*BTW if anyone is wondering how these pitches would relate to 'equivalent' situations in formats other than FF, simply divide the f-number and the pitch by the crop factor.
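That equivalence falls straight out of the Q formula: scaling both N and the pitch by the same crop factor leaves Q = lambda * N / pitch unchanged. A quick check (the micro 4/3 crop factor of 2 is just an example):

```python
# Equivalence footnote: dividing both f-number and pitch by the crop factor
# leaves Q = lambda*N/pitch unchanged, so the balance carries across formats.
lam_um = 0.5

def q(n, pitch_um):
    return lam_um * n / pitch_um

crop = 2.0                           # e.g. micro 4/3
q_ff = q(8.0, 6.9)                   # full frame: f/8, 6.9 micron pitch
q_m43 = q(8.0 / crop, 6.9 / crop)    # equivalent: f/4, 3.45 micron pitch
print(q_ff, q_m43)                   # the two Q values are identical
```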