Are Perceptual Megapixels stupid?

Also, resolution of fine detail isn't the only benefit of using a larger aperture optic. As aperture increases, lower contrast details become visible. This is an often overlooked and underappreciated advantage.
How is that different from an increased resolution?
It's not resolving fine detail. It's resolving low contrast.
Are you referring to lower shot noise making the contrast swings more discernible? That would be a function of Exposure.

If so, noise (hence Exposure) affects the standard deviation of the resulting MTF curve. I seem to recall reading that empirical observations have shown that the standard deviation of the MTF curve is more or less proportional to the standard deviation of the noise in the image. With a bar target it tended to be proportional to the square root of the spatial frequency.

Jack
It's not strictly an exposure issue. There's a combination of exposure and image scale at play. While an f/4 shutter actuation at some focal length, X, has the same exposure as an f/4 shutter actuation at a focal length of 10X, the longer focal length and larger entrance pupil diameter allow for a larger image scale and better resolution of both fine detail and lower-contrast detail that won't be discernible in the wider-angle, reduced-image-scale photo made with the same exposure.
The attached images illustrate what I'm trying to convey.

1920x1080 black background | 1280x720 slightly lighter black background | 600x335 lighter black background

1920x1080 black background | 128x72 slightly lighter black background | 60x34 lighter black background

I made the image files in Photoshop as 16-bit files. The exported JPGs show all the squares on my 1920x1080 Dell laptop screen. Hopefully, they render on your screen.
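For anyone who wants to reproduce the idea, a minimal Python sketch along these lines generates a comparable pair of test images. The grey levels (0, 8, 16) and the 8-bit output are illustrative guesses, not Bill's actual settings (he used 16-bit Photoshop files), and the 10x-reduced inner rectangles come out a pixel or so off his 60x34.

import numpy as np
from PIL import Image

def nested_rectangles(width=1920, height=1080, scale=1.0, levels=(0, 8, 16)):
    """Dark canvas with two centered, slightly lighter rectangles."""
    canvas = np.full((height, width), levels[0], dtype=np.uint8)

    def paste(w, h, value):
        w, h = int(w * scale), int(h * scale)
        y0, x0 = (height - h) // 2, (width - w) // 2
        canvas[y0:y0 + h, x0:x0 + w] = value

    paste(1280, 720, levels[1])   # slightly lighter rectangle
    paste(600, 335, levels[2])    # lighter still
    return Image.fromarray(canvas, mode="L")

# Same grey levels ("exposure"), different image scale:
nested_rectangles(scale=1.0).save("large_scale.png")
nested_rectangles(scale=0.1).save("small_scale.png")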

--
Bill Ferris Photography
Flagstaff, AZ
 
Also, resolution of fine detail isn't the only benefit of using a larger aperture optic. As aperture increases, lower contrast details become visible. This is an often overlooked and underappreciated advantage.
How is that different from an increased resolution?
It's not resolving fine detail. It's resolving low contrast.
Are you referring to lower shot noise making the contrast swings more discernible? That would be a function of Exposure.

If so, noise (hence Exposure) affects the standard deviation of the resulting MTF curve. I seem to recall reading that empirical observations have shown that the standard deviation of the MTF curve is more or less proportional to the standard deviation of the noise in the image. With a bar target it tended to be proportional to the square root of the spatial frequency.

Jack
It's not strictly an exposure issue. There's a combination of exposure and image scale at play. While an f/4 shutter actuation at some focal length, X, has the same exposure as an f/4 shutter actuation at a focal length of 10X, the longer focal length and larger entrance pupil diameter allow for a larger image scale and better resolution of both fine detail and lower-contrast detail that won't be discernible in the wider-angle, reduced-image-scale photo made with the same exposure.
The attached images illustrate what I'm trying to convey.
I am trying to follow your train of thought, Bill. If the Exposure is kept constant with both unaberrated lenses at f/4 by varying the exposure time, all else equal, the Airy disks on the image plane should be the same size. Both lenses will produce the same MTF curve, with the same x-axis in lp/mm, and the same diffraction extinction frequency.

As we all know by looking into a telescope, the ability to discern two distant stars in such a situation will favor the lens with the larger magnification, since one can envision a magnification where the stars, hence the Airy disks, are far apart on the image plane and one where they overlap. One could say that in the former case the two stars are 'resolved' and in the latter one they are not.

However, this is scene dependent and not what MTF measures. MTF measures the spatial frequency response of the lens itself. It answers the question: How good is the lens at transferring contrast from the scene when the detail is of a certain size in the image?

So how do you compare the expected performance of lenses with different magnifications for your specific application? An easy way would be to convert the x-axis of the relative MTF curves to be scene-referred, as opposed to image-referred: so many lp/parsec.

Jack
 
Also, resolution of fine detail isn't the only benefit of using a larger aperture optic. As aperture increases, lower contrast details become visible. This is an often overlooked and underappreciated advantage.
How is that different from an increased resolution?
It's not resolving fine detail. It's resolving low contrast.
Are you referring to lower shot noise making the contrast swings more discernible? That would be a function of Exposure.

If so, noise (hence Exposure) affects the standard deviation of the resulting MTF curve. I seem to recall reading that empirical observations have shown that the standard deviation of the MTF curve is more or less proportional to the standard deviation of the noise in the image. With a bar target it tended to be proportional to the square root of the spatial frequency.

Jack
It's not strictly an exposure issue. There's a combination of exposure and image scale at play. While an f/4 shutter actuation at some focal length, X, has the same exposure as an f/4 shutter actuation at a focal length of 10X, the longer focal length and larger entrance pupil diameter allow for a larger image scale and better resolution of both fine detail and lower-contrast detail that won't be discernible in the wider-angle, reduced-image-scale photo made with the same exposure.
The attached images illustrate what I'm trying to convey.
I am trying to follow your train of thought, Bill. If the Exposure is kept constant with both unaberrated lenses at f/4 by varying the exposure time, all else equal, the Airy disks on the image plane should be the same size. Both lenses will produce the same MTF curve, with the same x-axis in lp/mm, and the same diffraction extinction frequency.

As we all know by looking into a telescope, the ability to discern two distant stars in such a situation will favor the lens with the larger magnification, since one can envision a magnification where the stars, hence the Airy disks, are far apart on the image plane and one where they overlap. One could say that in the former case the two stars are 'resolved' and in the latter one they are not.
In visual astronomy, the better analog would be looking at and discerning an extended object such as a faint galaxy. Just as two lenses of any focal length at f/4 deliver the same exposure to the sensor, two telescopes of any aperture and focal length deliver an image of a galaxy having the same surface brightness at magnifications producing the same exit pupil diameter. In fact, if we take into consideration the reality that no optical system is perfect, every galaxy displays its greatest surface brightness to the naked eye. Increasing aperture doesn't increase an object's apparent surface brightness. At best the 20-inch Obsession Dob delivers an image of a distant galaxy having the same surface brightness as that produced by a 60mm refractor.
However, this is scene dependent and not what MTF measures. MTF measures the spatial frequency response of the lens itself. It answers the question: How good is the lens at transferring contrast from the scene when the detail is of a certain size in the image?

So how do you compare the expected performance of lenses with different magnifications for your specific application? An easy way would be to convert the x-axis of the relative MTF curves to be scene-referred, as opposed to image-referred: so many lp/parsec.
Continuing the visual astronomy analogy, the advantage of the larger aperture is that, while the galaxy (a faint extended object) can have the same apparent surface brightness in a wide range of apertures, it appears larger at the eyepiece, which translates to delivering more total light to the eye. As you mentioned earlier, increased total light from the subject translates to an improved SNR and less prominent photon noise. This results in objects that have low contrast against the surrounding dark sky being visible - or more easily so - in larger-aperture telescopes. It's how the observer using the 20-inch Dob is able to see more galaxies in the Virgo cluster than the person set up next to them using the 60mm refractor. Though, one would hope the Obsession owner would offer to share the view :)

What I'm trying to illustrate with the images is the advantage of maintaining exposure but increasing image scale. Subtle contrast differences become easier to discern at a larger size. It's a different kind of improved resolution: not the discernment of small or fine details, but of low-contrast details.

Now, the ease or difficulty of discerning a feature is a subjective thing. That makes it highly debatable :) And, of course, it's possible the images I posted aren't as effective at illustrating the point as others might be.

--
Bill Ferris Photography
Flagstaff, AZ
http://www.billferris.photoshelter.com
 
Also, resolution of fine detail isn't the only benefit of using a larger aperture optic. As aperture increases, lower contrast details become visible. This is an often overlooked and underappreciated advantage.
How is that different from an increased resolution?
It's not resolving fine detail. It's resolving low contrast.
Are you referring to lower shot noise making the contrast swings more discernible? That would be a function of Exposure.

If so, noise (hence Exposure) affects the standard deviation of the resulting MTF curve. I seem to recall reading that empirical observations have shown that the standard deviation of the MTF curve is more or less proportional to the standard deviation of the noise in the image. With a bar target it tended to be proportional to the square root of the spatial frequency.

Jack
It's not strictly an exposure issue. There's a combination of exposure and image scale at play. While an f/4 shutter actuation at some focal length, X, has the same exposure as an f/4 shutter actuation at a focal length of 10X, the longer focal length and larger entrance pupil diameter allow for a larger image scale and better resolution of both fine detail and lower-contrast detail that won't be discernible in the wider-angle, reduced-image-scale photo made with the same exposure.
The attached images illustrate what I'm trying to convey.
I am trying to follow your train of thought, Bill. If the Exposure is kept constant with both unaberrated lenses at f/4 by varying the exposure time, all else equal, the Airy disks on the image plane should be the same size. Both lenses will produce the same MTF curve, with the same x-axis in lp/mm, and the same diffraction extinction frequency.

As we all know by looking into a telescope, the ability to discern two distant stars in such a situation will favor the lens with the larger magnification, since one can envision a magnification where the stars, hence the Airy disks, are far apart on the image plane and one where they overlap. One could say that in the former case the two stars are 'resolved' and in the latter one they are not.
If the lenses are perfect, and sensors have infinite resolution, only the aperture size matters for the ability to discern.

The linear resolution is inversely proportional to f#.
 
If the Exposure is kept constant with both unaberrated lenses at f/4 by varying the exposure time, all else equal, the Airy disks on the image plane should be the same size. Both lenses will produce the same MTF curve, with the same x-axis in lp/mm, and the same diffraction extinction frequency.

As we all know by looking into a telescope, the ability to discern two distant stars in such a situation will favor the lens with the larger magnification, since one can envision a magnification where the stars, hence the Airy disks, are far apart on the image plane and one where they overlap. One could say that in the former case the two stars are 'resolved' and in the latter one they are not.
If the lenses are perfect, and sensors have infinite resolution, only the aperture size matters for the ability to discern.
What if they have different focal lengths? Photographers take them into consideration through the f-number, approximated by f/D, i.e. focal length divided by aperture diameter. The relative blurring function on the image plane, call it an Airy disk, would have a radius of 1.22·λ·N.
The linear resolution is inversely proportional to f#.
It depends on the shape of the relative MTF curve and what you mean by resolution. The MTF curve of a diffraction-limited lens with a circular aperture is not linear, though not far from it. The diffraction extinction spatial frequency is D/(f·λ), i.e. 1/(λ·N).

It is linear for a square aperture.
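To put numbers on those two quantities, here is a quick Python sketch; the 0.55 um wavelength and the two focal lengths are assumptions, and the point is simply that at a fixed f-number both values are independent of focal length.

LAMBDA_UM = 0.55                  # assumed mid-green wavelength

def airy_radius_um(n):            # 1.22 * lambda * N, radius of the first Airy minimum
    return 1.22 * LAMBDA_UM * n

def cutoff_lp_mm(n):              # diffraction extinction frequency, D/(f*lambda) = 1/(lambda*N)
    return 1000.0 / (LAMBDA_UM * n)

for focal_mm in (100, 1000):      # focal length X and 10X, both at f/4
    print(focal_mm, "mm at f/4:",
          round(airy_radius_um(4), 2), "um Airy radius,",
          round(cutoff_lp_mm(4)), "lp/mm cutoff")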

Jack
 
Is Roger wrong? Can sensors out-resolve lenses?
Roger is never wrong, but perhaps sometimes not accurate as he explained above ;-)

I do not know if this was referred to in any of the posts above, but with only a few days left of DPReview, there is no time to check :-) + :-(

The link below gives an approximation of a formula for total system resolution, i.e. the combination of camera and lens : https://www.pbase.com/lwestfall/image/52085939

So

1/Res(total) = 1/Res(sensor) + 1/Res(lens)

Some formulas use the square for all the resolutions.

In any case, it shows that even if one part has a higher resolution than the other, both still contribute to the total resolution.

However, it also shows that if one is much much greater than the other, the total resolution will be largely limited by the part with the lowest resolution.

I.e., if both sensor and lens have arbitrary resolution values of 10, then total system resolution will be 5, but if the lens resolution is 10 and the sensor resolution is 100, then total system resolution will be 9.1 - probably an extreme case, but mathematically possible :-)
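The same arithmetic in a couple of lines of Python, with the squared variant included for comparison (which of the two rules of thumb is more appropriate is another matter, as noted below):

def combined_resolution(res_lens, res_sensor, power=1):
    # 1/R^p = 1/R_lens^p + 1/R_sensor^p, with p = 1 or 2 per the two formulas above
    return (res_lens ** -power + res_sensor ** -power) ** (-1.0 / power)

print(combined_resolution(10, 10))        # 5.0
print(combined_resolution(10, 100))       # ~9.09, the 9.1 quoted above
print(combined_resolution(10, 100, 2))    # ~9.95 with the squared version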

In the latter case some might argue that the sensor out-resolved the lens, but others might argue that it does not, as total system resolution is not 10.
It is more accurate to multiply MTF curves, but that ignores anisotropy and phase.
 
Is Roger wrong? Can sensors out-resolve lenses?
Roger is never wrong, but perhaps sometimes not accurate as he explained above

I do not know if this was referred to in any of the posts above, but with only a few days left of DPReview, there is no time to check

The link below gives an approximation of a formula for total system resolution, i.e. the combination of camera and lens : https://www.pbase.com/lwestfall/image/52085939

So

1/Res(total) = 1/Res(sensor) + 1/Res(lens)

Some formulas use the square for all the resolutions.
It is more accurate to multiply MTF curves, but that ignores anisotropy and phase.
So do the two formulas above ;-)
 
Is Roger wrong? Can sensors out-resolve lenses?
Roger is never wrong, but perhaps sometimes not accurate as he explained above

I do not know if this was referred to in any of the posts above, but with only a few days left of DPReview, there is no time to check

The link below gives an approximation of a formula for total system resolution, i.e. the combination of camera and lens : https://www.pbase.com/lwestfall/image/52085939

So

1/Res(total) = 1/Res(sensor) + 1/Res(lens)

Some formulas use the square for all the resolutions.
It is more accurate to multiply MTF curves, but that ignores anisotropy and phase.
So do the two formulas above ;-)
True enough. I didn't mean to imply otherwise.
 
Assuming the maximum resolution of the sensor is proxied by the Nyquist limit, and I have a lens that is tested with a sensor of a given pixel pitch, say 4.5 µm,

this would mean that if the resulting image measured 86 lp/mm, the lens resolution is higher than 400 lp/mm.

If on the same system another lens measured 50 lp/mm, that would mean that lens was able to resolve more than 100 lp/mm.

This would indicate that the lens in the first example has 4 times the resolving power of the lens in the second example.

One could conclude that a similar increase in sensor resolution, which would mean 16 times the pixels to achieve a 4x improvement, would accomplish the same.

It appears to me that, with pixel sizes ranging between 3.7 and 6 microns for consumer cameras, there is not really a lot of significant increase possible in pixel count without other effects, such as taking a very long time to read all this information.

So it may not be that the lens is the weakest link, but rather that, with sensor resolution being maxed out by other considerations, lenses are what make the difference more and more?
 
It appears to me that, with pixel sizes ranging between 3.7 and 6 microns for consumer cameras, there is not really a lot of significant increase possible in pixel count without other effects, such as taking a very long time to read all this information.

So it may not be that the lens is the weakest link, but rather that, with sensor resolution being maxed out by other considerations, lenses are what make the difference more and more?
Ignoring aliasing and anti-aliasing filters, there are typically two main drivers of system 'resolution' in modern Interchangeable Lens Cameras: lens blur and sensor blur.

Lens blur is caused by diffraction and aberrations due to the physical setup of the lens.

Sensor blur is introduced by filtering right on top of silicon in combination with the shape and size of the active area of the pixel. To simplify things we will assume a perfectly square aperture that fills the pixel entirely - and measure resolution horizontally or vertically in the center of the field of view only.

Each of these two main components results in an MTF curve. If we multiply them together frequency-by-frequency we get the System MTF curve as shown below for a modern ILC:

4.35um perfectly square pixel aperture, f-number = 5, Aberrations of a good 24-70mm zoom in the center

So what's resolution? Choose a threshold that is relevant for the purpose at hand. Often online one finds MTF50, the spatial frequency at which the captured contrast is half of what would be possible. This may be relevant when pixel peeping.

System MTF50 above is about 84.6 lp/mm, lens MTF50 is about 125.0 lp/mm and pixel aperture 138.7 lp/mm.

Can we say that as set up the lens brings down system MTF50 more than the sensor does? We could, though that answer depends on lens and pixel aperture readings not at MTF50 - but at about MTF65 and MTF80 respectively. Does that mean that the sensor 'outresolves' the lens in general? No, because if we instead chose MTF10 as a threshold, as required by a different application, the opposite would be true. And then there are Nyquist and aliasing to contend with.

As pixels get smaller, the dashed pixel-aperture curve and its null above move to the right; as they get bigger, they move to the left. The null for a 6um pixel aperture is 167 lp/mm, for 3.7um it is 270 lp/mm. On the other hand, the dotted MTF curve of the lens depends only on aberrations and f-number, so unless those change it stays put. This should give an intuition on how system resolution is affected by varying pixel size in this case.
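For readers without the chart, here is a minimal Python sketch of the same frequency-by-frequency multiplication. It assumes a 0.55 um wavelength and an aberration-free (diffraction-only) lens, so the lens and system MTF50 values come out higher than the 125 and 84.6 lp/mm quoted above (roughly 147 and 95 lp/mm); the 4.35 um square pixel aperture, however, lands at the quoted 138.7 lp/mm.

import numpy as np

LAMBDA_MM = 0.55e-3       # assumed wavelength, mm
N = 5.0                   # f-number from the post
PITCH_MM = 4.35e-3        # pixel aperture from the post, mm

nu = np.linspace(0.1, 500, 5000)                  # spatial frequency, lp/mm

# Diffraction MTF of an ideal circular aperture (no aberrations).
s = np.clip(nu * LAMBDA_MM * N, 0, 1)             # nu / extinction frequency
mtf_lens = (2 / np.pi) * (np.arccos(s) - s * np.sqrt(1 - s ** 2))

# MTF of a 100% fill-factor square pixel aperture: |sinc(nu * pitch)|.
mtf_pixel = np.abs(np.sinc(nu * PITCH_MM))

# System MTF: multiply the two curves frequency by frequency.
mtf_system = mtf_lens * mtf_pixel

def mtf50(mtf):
    """First frequency at which the curve drops below 0.5 (linear interpolation)."""
    i = np.argmax(mtf < 0.5)
    return np.interp(0.5, [mtf[i], mtf[i - 1]], [nu[i], nu[i - 1]])

print("pixel aperture MTF50:", round(mtf50(mtf_pixel), 1), "lp/mm")   # ~138.7
print("lens (diffraction only) MTF50:", round(mtf50(mtf_lens), 1), "lp/mm")
print("system MTF50:", round(mtf50(mtf_system), 1), "lp/mm")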

Jack

PS: The 1/r formula yields a system MTF50 of 65.7 lp/mm and the 1/r^2 formula 92.9 lp/mm, versus the actual 84.6 lp/mm.
 
Based on the discussion and the shapes of the curves above one can surmise that the relationship between effective pixel aperture and System MTF50 is roughly linear in the interval of interest all else equal. In fact:

No AA, green channel, perfectly square effective pixel aperture, with the same Nikon Z 24-70mm 4 S kit at 24mm and f/5 in the center

So approximately 10 lp/mm in System MTF50 are lost for every micron increase in effective pixel aperture, with this setup.
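A diffraction-only version of that sweep (same f/5, assumed 0.55 um wavelength, no aberrations, so both the absolute values and the slope differ somewhat from the chart above) shows the same roughly linear behaviour:

import numpy as np

LAMBDA_MM, N = 0.55e-3, 5.0
nu = np.linspace(0.1, 600, 6000)                              # lp/mm
s = np.clip(nu * LAMBDA_MM * N, 0, 1)
mtf_lens = (2 / np.pi) * (np.arccos(s) - s * np.sqrt(1 - s ** 2))   # diffraction only

def system_mtf50(pitch_mm):
    mtf = mtf_lens * np.abs(np.sinc(nu * pitch_mm))           # lens x pixel aperture
    i = np.argmax(mtf < 0.5)
    return np.interp(0.5, [mtf[i], mtf[i - 1]], [nu[i], nu[i - 1]])

for pitch_um in (3.0, 4.0, 5.0, 6.0):
    print(pitch_um, "um ->", round(system_mtf50(pitch_um * 1e-3), 1), "lp/mm")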

Jack
 
Based on the discussion and the shapes of the curves above one can surmise that the relationship between effective pixel aperture and System MTF50 is roughly linear in the interval of interest all else equal. In fact:

No AA, green channel, perfectly square effective pixel aperture, with the same Nikon Z 24-70mm 4 S kit at 24mm and f/5 in the center

So approximately 10 lp/mm in System MTF50 are lost for every micron increase in effective pixel aperture, with this setup.

Jack
Thank you Jack for this data and the analysis. I am curious how the lens MTF can be measured without the camera itself, but let's not go there for now.

If I read this chart correctly, the gradient is 10 lp/mm; however, if for example I looked at the Nyquist limit, the expectation would be a 25 lp/mm difference between 4 and 5 µm, so it would appear that the actual benefit of decreasing pixel size is less than the theoretical maximum matrix resolution would suggest.

Conscious of many approximations here, but in essence it looks like there are loss factors that impact that maximum limit before you start taking the lens and its own contrast losses into account.

If I now overlay other topics, for example the ability of a rolling-shutter sensor to read a high number of pixels rapidly, this means that I need to be careful increasing the pixel count, as the resolution benefit may be less than expected while at the same time there are other side effects that are important from a use-case point of view.

I am not going into different construction types for the sensor, microlens effectiveness, etc., for simplicity.



--
If you like my image I would appreciate if you follow me on social media
instagram http://instagram.com/interceptor121
My flickr sets http://www.flickr.com/photos/interceptor121/
Youtube channel http://www.youtube.com/interceptor121
Underwater Photo and Video Blog http://interceptor121.com
Deer Photography workshops https://interceptor121.com/2021/09/26/2021-22-deer-photography-workshops-in-woburn/
If you want to get in touch don't send me a PM rather contact me directly at my website/social media
 
Based on the discussion and the shapes of the curves above one can surmise that the relationship between effective pixel aperture and System MTF50 is roughly linear in the interval of interest all else equal. In fact:

No AA, green channel, perfectly square effective pixel aperture, with the same Nikon Z 24-70mm 4 S kit at 24mm and f/5 in the center

So approximately 10 lp/mm in System MTF50 are lost for every micron increase in effective pixel aperture, with this setup.

Jack
Thank you Jack for this data and the analysis. I am curious how the lens MTF can be measured without the camera itself, but let's not go there for now.
It's approximately reverse engineered from a measured System MTF curve, fit to defocus, SA3 and pixel aperture. It should be a decent enough approximation in the center of the FOV as shot. Should you be interested, some of the detail is described here:

https://www.strollswithmydog.com/dof-diffraction-setup/#Appendix
If I read this chart correctly, the gradient is 10 lp/mm; however, if for example I looked at the Nyquist limit, the expectation would be a 25 lp/mm difference between 4 and 5 µm, so it would appear that the actual benefit of decreasing pixel size is less than the theoretical maximum matrix resolution would suggest.
I am not sure what matrix resolution means; the gradient refers to MTF50 as a proxy for pixel-peeping sharpness. My discussion above ignores aliasing. Monochrome Nyquist would be at 1/2 cycle per pixel pitch. With 4um per pixel that would be 1/(2 x 0.004 mm) = 125 lp/mm, well above MTF50. As pixel size decreases the monochrome Nyquist frequency increases accordingly. In other words, as far as aliasing is concerned you want a poorer lens with larger pixels - and vice versa.
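In numbers, monochrome Nyquist = 1/(2 x pitch) for the pixel pitches mentioned in this thread:

def nyquist_lp_mm(pitch_um):
    return 1000.0 / (2.0 * pitch_um)      # 1/2 cycle per pixel pitch

for pitch in (3.7, 4.0, 4.5, 5.0, 6.0):
    print(pitch, "um ->", round(nyquist_lp_mm(pitch)), "lp/mm")

# 4 um -> 125 lp/mm as above; going from 4 to 5 um drops Nyquist by
# the 25 lp/mm mentioned a couple of posts up.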
Conscious of many approximations here, but in essence it looks like there are loss factors that impact that maximum limit before you start taking the lens and its own contrast losses into account.

If I now overlay other topics, for example the ability of a rolling-shutter sensor to read a high number of pixels rapidly, this means that I need to be careful increasing the pixel count, as the resolution benefit may be less than expected while at the same time there are other side effects that are important from a use-case point of view.

I am not going into different construction types for the sensor, microlens effectiveness, etc., for simplicity.
 
It appears to me that, with pixel sizes ranging between 3.7 and 6 microns for consumer cameras, there is not really a lot of significant increase possible in pixel count without other effects, such as taking a very long time to read all this information.

So it may not be that the lens is the weakest link, but rather that, with sensor resolution being maxed out by other considerations, lenses are what make the difference more and more?
Ignoring aliasing and anti-aliasing filters, there are typically two main drivers of system 'resolution' in modern Interchangeable Lens Cameras: lens blur and sensor blur.

Lens blur is caused by diffraction and aberrations due to the physical setup of the lens.

Sensor blur is introduced by filtering right on top of silicon in combination with the shape and size of the active area of the pixel. To simplify things we will assume a perfectly square aperture that fills the pixel entirely - and measure resolution horizontally or vertically in the center of the field of view only.

Each of these two main components results in an MTF curve. If we multiply them together frequency-by-frequency we get the System MTF curve as shown below for a modern ILC:

4.35um perfectly square pixel aperture, f-number = 5, Aberrations of a good 24-70mm zoom in the center

So what's resolution? Choose a threshold that is relevant for the purpose at hand. Often online one finds MTF50, the spatial frequency at which the captured contrast is half of what would be possible. This may be relevant when pixel peeping.

System MTF50 above is about 84.6 lp/mm, lens MTF50 is about 125.0 lp/mm and pixel aperture 138.7 lp/mm.

Can we say that as set up the lens brings down system MTF50 more than the sensor does? We could, though that answer depends on lens and pixel aperture readings not at MTF50 - but at about MTF65 and MTF80 respectively. Does that mean that the sensor 'outresolves' the lens in general? No, because if we instead chose MTF10 as a threshold, as required by a different application, the opposite would be true. And then there are Nyquist and aliasing to contend with.

As pixels get smaller, the dashed pixel-aperture curve and its null above move to the right; as they get bigger, they move to the left. The null for a 6um pixel aperture is 167 lp/mm, for 3.7um it is 270 lp/mm. On the other hand, the dotted MTF curve of the lens depends only on aberrations and f-number, so unless those change it stays put. This should give an intuition on how system resolution is affected by varying pixel size in this case.

Jack

PS: The 1/r formula yields a system MTF50 of 65.7 lp/mm and the 1/r^2 formula 92.9 lp/mm, versus the actual 84.6 lp/mm.
Very helpful, clear and concise description.

All of this assumes the lens, sensor and target are all static in relation to each other (all in the same frame of reference). Maybe average sharpness (across images shot by a user) needs to include something linked to AF and user capability. This is much less likely to be achieved when light levels are very low, which is also a function of the camera and lens.
 
I just look at the images; you know whether you are happy with the performance or not.

Figures can be of some use, but the practical side is important too.
 
When someone says that a camera sensor outresolves a lens, what does that mean in practice?

When someone mentions that a sensor outresolves a lens, it is typically implied that:
  • Such a lens will perform no better (equally or worse) on a high-resolution sensor than on a low-resolution one.
  • Using such a lens is wasted on high-resolution sensors.
Based on my experience, both statements are wrong.

If those statements are wrong, what are P-MPs good for?
My simplest way to see it:
  • Lens outresolves the sensor -> aliasing
  • Sensor outresolves the lens -> no aliasing
Real life may be a bit different. One of those complications is that colors are sampled at half the frequency of luminance, especially if we regard the 'G' and 'G2' channels as separate.
You described the positive effect when the lens is outresolved by the sensor (less aliasing = better IQ).

I was wondering about the negative effect when using such a lens with high vs. low resolution sensors.

My question is motivated by the regularly heard disqualification of lenses because they are outresolved by sensors. I believe that disqualification is wrong.
Best regards

Erik
I agree. If you want to capture all the information from a lens, you need a sensor with higher resolution than the lens.

Don
 
A 20mp grid has an infinite Nyquist limit. If you want a 1-1 correspondence of grids to Nyquist, they have to be sinusoidal.
The Nyquist limit (better: Nyquist frequency) is an attribute of the sampling process. What you mean is that a grid with maximum contrast has unlimited upper frequency content (like a square wave), and the amplitude of the high frequencies is proportional to 1/f.
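A quick numerical check of the 1/f statement, using the FFT of a unit-amplitude square wave as a stand-in for a maximum-contrast grid; only odd harmonics are present and their amplitude is 4/(pi*n):

import numpy as np

n_samples, cycles = 4096, 8
t = (np.arange(n_samples) + 0.5) / n_samples
square = np.sign(np.sin(2 * np.pi * cycles * t))         # +/-1 square wave, 8 cycles
spectrum = np.abs(np.fft.rfft(square)) * 2 / n_samples   # amplitude spectrum

for n in (1, 3, 5, 7):
    print("harmonic", n,
          "measured", round(spectrum[cycles * n], 3),
          "vs 4/(pi*n) =", round(4 / (np.pi * n), 3))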

 
So what resolution of camera and lens would provide the required detail if National Geographic sent me to the Ural Mountains to photograph the uniquely fine fur of wild minks.

Mink fur is about 15 µm in diameter and minks themselves are about .6 m long; let’s say I can’t get any closer than 20 meters without spooking them.

If I want to clearly resolve the fur detail what would the respective resolution figures be and what frequencies does the lens need to be best at?
 
So what resolution of camera and lens would provide the required detail if National Geographic sent me to the Ural Mountains to photograph the uniquely fine fur of wild minks.

Mink fur is about 15 µm in diameter and minks themselves are about .6 m long; let’s say I can’t get any closer than 20 meters without spooking them.

If I want to clearly resolve the fur detail what would the respective resolution figures be and what frequencies does the lens need to be best at?
Hi,

The hairs are 0.15 mm and the animals are 0.6 m long, so the resolution needed would be 0.6 / 0.00015 -> 4000 units. What units? Good question! It could either be pixels, line pairs or cycles.

Assuming, say, 4000 lines/PW at 50% MTF, the lens would probably resolve the detail, but with a 4000 lines/PW sensor the detail would not be properly resolved. To resolve the detail we may need a 4000 lp/PW sensor, preferably at low MTF; having a high MTF would result in aliased fur detail.

4000 lp/PW would correspond to 8000 pixels, or -> 42 MP on a 24x36 mm sensor.

Resolving that detail at 20 m would need a horizontal angle of view of 2*atan(0.3/20) -> 1.7 degrees.

Which would translate to 0.036 / tan(1.7°) -> 1.2 m, that is, a 1200 mm focal length.

So, your best answer is probably a good quality 1200 mm lens on a 40+ MP sensor.
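The same arithmetic in a few lines of Python, taking the 0.15 mm detail size used above, a 36x24 mm sensor and 2 pixels per line pair:

import math

detail_m, subject_m, distance_m = 0.00015, 0.6, 20.0
sensor_w_m, sensor_h_m = 0.036, 0.024

line_pairs = subject_m / detail_m                        # ~4000 lp across the frame width
pixels_w = 2 * line_pairs                                # ~8000 px
megapixels = pixels_w * (pixels_w * sensor_h_m / sensor_w_m) / 1e6

aov_rad = 2 * math.atan((subject_m / 2) / distance_m)    # animal fills the frame width
focal_mm = 1000 * sensor_w_m / (2 * math.tan(aov_rad / 2))

print(round(megapixels), "MP,", round(math.degrees(aov_rad), 1), "deg,",
      round(focal_mm), "mm")                             # 43 MP, 1.7 deg, 1200 mm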

Best regards

Erik
 
