How Many Megapixels Can a Lens Resolve?

MarshallG

One day soon, the megapixel war must come to an end, because the number of pixels will reach the point where they exceed the resolving power of the system's best lens. I even wonder if we've reached such a point already, and we're just seeing the effects of interpolation performed by the camera firmware.

Has anyone ever determined the highest APS or 35mm sensor pixel count that a lens can actually resolve?
 
From what I understand, some of the newest 15 MP APS-C and 20+ MP 135-format cameras really show the flaws in lesser lenses, and only realize their full capabilities when paired with very high quality lenses.

--

 
These tests show when a lens out-resolves the sensor and vice versa. As they say: "NOTE the line marked 'Nyquist Frequency' indicates the maximum theoretical resolution of the camera body used for testing. Whenever the measured numbers exceed this value, this simply indicates that the lens out-resolves the sensor at this point - the calculated MTF values themselves become meaningless."
Bert
 
It is a question of pixel density more than absolute pixel count.
Lens reviews here on this site show a line called 'Nyquist Frequency'.
It explains:

"the line marked 'Nyquist Frequency' indicates the maximum theoretical resolution of the camera body used for testing. Whenever the measured numbers exceed this value, this simply indicates that the lens out-resolves the sensor at this point."

The sharp Canon 50mm f/1.4 prime never manages to resolve more than 12 megapixels on a sensor with 3.7 MP/cm² pixel density, such as in the Canon 450D.

On the other hand, you have P&S cameras with 40 MP/cm² pixel density and cheap fixed lenses that have no problem resolving that.
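(For scale, here is a quick MATLAB conversion of those two pixel densities into pixel pitch; the densities are the ones quoted above, the rest is just arithmetic.)

% Converting MP/cm^2 into pixel pitch, to put both sensors on the same scale.
density = [3.7 40];                      % MP per cm^2, as quoted above
pitch   = sqrt(1e8 ./ (density*1e6));    % microns per pixel (1 cm^2 = 1e8 um^2)
fprintf('%4.1f MP/cm^2 -> pixel pitch %.1f microns\n', [density; pitch]);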

I predict that this will make many legacy film-era lenses obsolete, and manufacturers will cash in selling newer, sharper lenses for cameras with higher pixel densities.

But outside of specmanship there are REAL WORLD factors too that many forget to consider.

As pixel count increases, so does the difficulty of obtaining pixel-sharp images: you need more and better stabilization, more precise autofocus, faster shutter speeds, and more tripod shots.

Conclusion: stop lusting after pixels; there is a point where it becomes silly.
 
the number of pixels will reach the point where they exceed the
resolving power of the system's best lens.
This is a widespread misconception. A lens cannot outresolve the sensor, nor vice versa.

Each component in the optical system reduces sharpness. Improving the sharpness of any component will improve the overall sharpness of the image.

It certainly would be possible to have a "drop in the ocean" effect—where one component is so bad that the effect of improving other components is too small to matter—but that is generally not the case for modern DSLRs and their lens systems.
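(A rough MATLAB sketch of that multiplicative behaviour, assuming made-up Gaussian MTF curves for the lens and AA filter and a 5-micron box pixel aperture; the shapes and numbers are purely illustrative, not measurements of any real system.)

% System MTF is the product of the component MTFs, so improving any one
% component lifts the overall response even if the others stay the same.
f = 0:1:100;                                 % spatial frequency, cycles/mm
mtf_lens  = exp(-(f/60).^2);                 % hypothetical lens MTF
mtf_aa    = exp(-(f/90).^2);                 % hypothetical AA-filter MTF
p = 0.005;                                   % assumed 5-micron pixel pitch, in mm
mtf_pixel = abs(sin(pi*f*p)./(pi*f*p));      % box pixel aperture (sinc)
mtf_pixel(1) = 1;                            % limit value at f = 0
mtf_system = mtf_lens .* mtf_aa .* mtf_pixel;
% Same sensor, sharper (hypothetical) lens: the whole curve moves up.
mtf_system_sharper = exp(-(f/90).^2) .* mtf_aa .* mtf_pixel;
plot(f, mtf_system, f, mtf_system_sharper);
legend('original system', 'same sensor, sharper lens');
xlabel('cycles/mm'); ylabel('MTF');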
 
Hi,

Wouldn't it be nearer to reality to say that they won't put a super-resolving lens into a system whose CCD won't resolve it, and vice versa?

It's a waste of money, theirs and yours; no other reason. Although I'm the first to admit that a lot of CCDs deserve better than the kit lenses they get.

Remember the old saying about a chain being as strong as its weakest link? So it's best not to design a chain with very strong and very weak links. Just make all the links as near as possible to the same strength.

Regards, David
 
the number of pixels will reach the point where they exceed the
resolving power of the system's best lens.
This is a widespread misconception. A lens cannot outresolve the
sensor, nor vice versa.
True, the term "outresolve" is a misnomer. Resolution is a combination of lens AND sensor (and software).
Each component in the optical system reduces sharpness. Improving the
sharpness of any component will improve the overall sharpness of the
image.
Again, this is correct even though it is very poorly understood by most people. This is NOT a "weakest link in the chain" situation.
It certainly would be possible to have a "drop in the ocean"
effect—where one component is so bad that the effect of improving
other components is too small to matter—but that is generally not the
case for modern DSLRs and their lens systems.
Again, agreed. Even though lens resolution above the Nyquist limit is wasted, the opposite is NOT correct. In other words, it is possible to have a sensor that has a Nyquist limit just above the very best that a lens can resolve (either a very good sensor, a very poor lens or a combination of both will give you this) and you will get a certain amount of detail in your image. Increase the sensor resolution (pixel density) so that the Nyquist limit is WAY above the lens resolution and you will STILL get a further increase in resolution. Of course as you point out the "law" of diminishing returns will limit how much more resolution you will get.

But wait, there's more than just resolution alone! The greater the pixel density of the sensor, the less colour inaccuracies you will see from the Bayer array, the fewer "jaggies" you will see on diagonal lines, the smaller the haloes around edges will be and so on. Also, you can apply more sharpening to a high resolution sensor image before digital artefacts become noticeable and more noise reduction can be applied before the loss of detail this entails becomes noticeable.

Think about it. Even though your lens may not do justice to the sensor, would you rather your image was limited by pixelation, jaggies, haloes and false colours or the limits of the lens, no matter how poor?
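(One back-of-envelope way to see both effects, the continued gain and the diminishing returns, is the common rule of thumb that combines lens and sensor resolution in quadrature. The MATLAB sketch below does that; the 60 lp/mm lens figure and the sensor Nyquist values are assumed purely for illustration.)

% Rule of thumb: 1/R_sys^2 = 1/R_lens^2 + 1/R_sensor^2 (all values assumed).
R_lens   = 60;                           % lens resolution, lp/mm
R_sensor = [40 60 80 120 200];           % sensor Nyquist limits, lp/mm
R_sys    = 1 ./ sqrt(1/R_lens^2 + 1./R_sensor.^2);
fprintf('sensor %3d lp/mm -> system %4.1f lp/mm\n', [R_sensor; R_sys]);
% System resolution keeps rising as the sensor improves, but by less each time.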
 
As others have pointed out the system response is the product of the responses of each component. There is still a point called the diffraction cutoff frequency where the lens does set an absolute limit to the system resolving power but it is at a much finer spacing than typically cited "diffraction limits". When we consider that we can sharpen the lens output to recover information where the MTF is less than 50% we find that we can get useful information with about half the pixel spacing (which is four times the pixels) as the usually cited limit. Here is a post where I calculate this limit:
http://forums.dpreview.com/forums/read.asp?forum=1018&message=30447131

Here is a post with a table of full frame pixel counts versus aperture:
http://forums.dpreview.com/forums/read.asp?forum=1018&message=30448212

The table above doesn't include the fact that Bayer sensors only have unambiguous luminance information at half the full pixel count so Bayer sensors with twice the pixel counts given in the table are still reasonable.

Note that while real lenses have aberrations that lower the MTF response compared to a perfect diffraction limited lens, the point where the MTF falls to zero is the same as for an ideal lens.
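(For reference, a quick MATLAB sketch of that cutoff for an ideal lens, assuming green light at 0.55 micron; the incoherent diffraction cutoff is 1/(lambda*N) cycles/mm for f-number N.)

lambda = 0.55e-3;                        % wavelength in mm (green light, assumed)
N      = [2.8 5.6 8 11 16 22];           % sample f-numbers
fc     = 1 ./ (lambda * N);              % diffraction cutoff, cycles/mm (MTF = 0 beyond this)
fprintf('f/%4.1f -> cutoff %4.0f cycles/mm\n', [N; fc]);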
 
As others have pointed out the system response is the product of the
responses of each component. There is still a point called the
diffraction cutoff frequency where the lens does set an absolute
limit to the system resolving power but it is at a much finer spacing
than typically cited "diffraction limits".
Here is a post with a table of full frame pixel counts versus aperture:
http://forums.dpreview.com/forums/read.asp?forum=1018&message=30448212
Thank you for taking the time to write such an in-depth response. Out of respect to you for your effort, I must confess that... I do not fully understand it. It's been two decades since I've needed to use that much math in my life. :-\

I was beginning to think that photography has become far more complicated than it was when I started back in the 1970's, but at least these days, I don't need to maintain a perfect 104.6 degree water bath for the chemicals.

I did look at the review of the 24 megapixel Nikon, and, sure enough, the RAW test images clearly appear to resolve finer details than Nikon cameras with the same lens and lower pixel counts. Based on that, and your authoritative say-so, it appears that 24 megapixels resolves more detail than 12.

But since you took the time to explain this to me, let me please try to ask a follow-up question or two, to see if I'm on the right track here. You wrote that, with full-frame sensors, an aperture of f/22.16 can resolve 16 "FF-mpixels". With an APS-sized sensor, I would imagine this value would be somewhat less. If I understand you correctly, does this mean that, with an APS-sized sensor, very small apertures can produce less detail than a high resolution sensor can record? (Please, I'll take your word for it; I won't understand the equations.)

Also, you mentioned a Bayer sensor. I don't understand; is a Bayer sensor the type commonly used in digital cameras?

Thank you again for educating me.
 
the number of pixels will reach the point where they exceed the
resolving power of the system's best lens.
This is a widespread misconception. A lens cannot outresolve the
sensor, nor vice versa.

Each component in the optical system reduces sharpness. Improving the
sharpness of any component will improve the overall sharpness of the
image.

It certainly would be possible to have a "drop in the ocean"
effect—where one component is so bad that the effect of improving
other components is too small to matter—but that is generally not the
case for modern DSLRs and their lens systems.
Exactly. AA filters, microlenses, lens MTF, and pixel spacing all play a part, and even with the wide range of lens MTFs, no lens is unaffected by the AA filter or pixel spacing in its sweet spot imaging.

How many MPs a lens is "worth" is a bit of a meaningless concept. To give an answer, even if we agree that we are only talking about the center of the focal plane, we must also agree on how much contrast between neighboring pixels is needed to count as a resolved pixel. Some people require a lot of contrast (although that leads to aliased, false imaging), and some expect very little.
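(To put rough numbers on that: the MATLAB sketch below uses a made-up Gaussian lens MTF and converts the highest frequency meeting a chosen contrast threshold into full-frame megapixels at two pixels per cycle. The curve and the thresholds are assumptions; the point is only how strongly the answer depends on the threshold you pick.)

f   = 0:0.5:400;                         % spatial frequency, cycles/mm
mtf = exp(-(f/120).^2);                  % hypothetical lens MTF curve
for thresh = [0.5 0.2 0.1]               % three different contrast criteria
    fmax = f(find(mtf >= thresh, 1, 'last'));      % highest frequency meeting it
    mp   = (2*fmax*36) * (2*fmax*24) / 1e6;        % 2 px/cycle on a 36x24 mm frame
    fprintf('MTF >= %2.0f%%: %5.1f cycles/mm -> %5.1f MP\n', 100*thresh, fmax, mp);
end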

--
John

 
Again, this is correct even though it is very poorly understood by
most people. This is NOT a "weakest link in the chain" situation.
Very few things are. People just like to oversimplify things.

You can hear the difference between department store speakers and audiophile speakers on both department store amps and audiophile amps, and you can hear the difference between the two amps with either set of speakers, too.

It would be really interesting to have a camera with a P&S sensor that could take DSLR lenses (and shift them, to use APS and FF "corners"), to see how DSLR lenses fare at high pixel densities.

Many P&S lenses are quite sharp in an absolute sense, but they only cover a very small area of central focal plane.

--
John

 
But outside of specmanship there are REAL WORLD factors too that many
forget to consider.
As pixel count increases, so does the difficulty of obtaining
pixel-sharp images: you need more and better stabilization, more precise
autofocus, faster shutter speeds, and more tripod shots.
The image is never worse for having more pixels, though (unless a botched design introduces more image noise). If having fewer pixels saves you from noticing motion blur, the whole thing is a blur anyway, relative to what it could be. You can always get a less detailed image from a more detailed image, but never vice versa. A detailed image with camera twitch can be sharpened/deconvolved far more effectively than a low-MP version of the same camera motion. Also, intentional blur stands out better in contrast against a more detailed background (or panned object).

--
John

 
This is a widespread misconception. A lens cannot outresolve the
sensor, nor vice versa.
well, if we could make photosites that were smaller than photons...
 
quite a lot of lenses are already out-resolved by the sensors (see all the posts about the Nyquist frequency), e.g. the Sony kit lens comes to mind.

I also believe that most mobile phone cameras out-resolve whatever optics are in there...

I read that there are lenses around which can resolve somewhere above 100 lines/mm on their sweet-spot (manual focus Zeiss??).

to resolve 1 line you need 2 pixels, so 100 l/mm is equal to 34 MP ON FULL FRAME (ignoring all the additional problems like Bayer patterns). The same source claimed that most AF film cameras were only accurate enough in AF to give sharpness sufficient to resolve 50 l/mm, equal to around 9 MP (full frame), at the plane of intended focus (in reality this would result in slight front or back focus).
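(The arithmetic behind those figures, as a quick MATLAB check, using the two-pixels-per-line assumption above and a 36x24 mm frame:)

lpmm      = [100 50];                    % lines/mm: lens sweet spot, AF-limited figure
px_per_mm = 2 * lpmm;                    % two pixels per line
mp        = (36*px_per_mm) .* (24*px_per_mm) / 1e6;
fprintf('%3d lines/mm -> %4.1f MP on full frame\n', [lpmm; mp]);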

Basically most available systems are close to or beyond their limits in terms of maximum resolution.

to push it further you need a very big effort in lens design, mechanical accuracy and focusing (whether AF or MF). Therefore I doubt that we will get anything significantly better than what we have currently in the near future. Most of the progress realistically to be made is likely on the electronics side, as can be seen in the recent developments towards e.g. usable high ISO.

Another point against much higher resolution: If you check what the maximum detail the eye can resolve is (if you assume you look at the whole picture and not a close up) you end up in the 6-10 MP range.
cheers.
r.
 
quite a lot of lenses are already out-resolved by the sensors (see
all the posts about the Nyquist frequency), e.g. the Sony kit lens
comes to mind.
The Nyquist frequency is a moving target with variations in pixel pitch, and the strength of signal at and near the Nyquist frequency depends not only on the lens and pixel pitch but on the strength of the AA filter and/or the effective fill factor. Looking at the contrast at the Nyquist frequency of specific cameras is not a very useful way to predict the performance of higher pixel densities.
I read that there are lenses around which can resolve somewhere above
100 lines/mm on their sweet-spot (manual focus Zeiss??).
to resolve 1 line you need 2 pixels,
Only if you want to resolve those lines poorly. It takes 5 to 6 pixels to properly render a line pair.
so 100 l/mm is equal to 34 MP ON
FULL FRAME (ignoring all the additional problems like Bayer
patterns). The same source claimed that most AF film cameras were
only accurate enough in AF to give sharpness sufficient to resolve
50 l/mm, equal to around 9 MP (full frame), at the plane of intended
focus (in reality this would result in slight front or back focus).
Your numbers are way off. The sharpest 35mm format lenses resolve over 300 line pairs per mm.
Basically most available systems are close to or beyond their limits
in terms of maximum resolution.
Not at all. You are basing all this on your perceived need for high contrast between neighboring pixels, which is illusory.
Another point against much higher resolution: If you check what the
maximum detail the eye can resolve is (if you assume you look at the
whole picture and not a close up) you end up in the 6-10 MP range.
cheers.
The eye is completely irrelevant to imaging, and almost all perceptual anecdotes have no application to imaging; that is just a simplistic assumption.

--
John

 
Thank you for taking the time to write such an in-depth response. Out
of respect to you for your effort, I must confess that... I do not
fully understand it. It's been two decades since I've needed to use
that much math in my life. :-\

I was beginning to think that photography has become far more
complicated than it was when I started back in the 1970's, but at
least these days, I don't need to maintain a perfect 104.6 degree
water bath for the chemicals.

I did look at the review of the 24 megapixel Nikon, and, sure enough,
the RAW test images clearly appear to resolve finer details than
Nikon cameras with the same lens and lower pixel counts. Based on
that, and your authoritative say-so, it appears that 24 megapixels
resolves more detail than 12.

But since you took the time to explain this to me, let me please try
to ask a follow-up question or two, to see if I'm on the right track
here. You wrote that, with full-frame sensors, an aperture of f/22.16
can resolve 16 "FF-mpixels". With an APS-sized sensor, I would
imagine this value would be somewhat less.
The spot size in microns is a function of the f-stop, so how many pixels you get at a certain f-stop scales with sensor area. An APS-C camera therefore would have 1/2.25 (Nikon) or 1/2.56 (Canon) of the pixels of a full-frame camera. The result is that a D300 can resolve about 7 megapixels at f/22 with very strong sharpening.

The spacing needed to recover the high spatial frequency information that is reduced to 10% contrast by the lens diffraction is f-stop/3 in microns. Sharpening can then boost this contrast back up while also boosting high frequency noise tenfold. The noise boost is why this high resolution is only really usable for low ISO settings.
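(A quick MATLAB sketch of that scaling, using the f-stop/3 micron spacing and the 1/2.25 area factor described above; the f-numbers are just sample values.)

N     = [8 11 16 22];                    % sample f-numbers
pitch = N / 3;                           % pixel pitch in microns, per the rule above
mp_ff = (36000./pitch) .* (24000./pitch) / 1e6;   % 36 x 24 mm full frame
mp_dx = mp_ff / 2.25;                    % Nikon DX: 1.5x crop -> 1/2.25 the area
fprintf('f/%2d -> FF %5.1f MP, DX %5.1f MP\n', [N; mp_ff; mp_dx]);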
If I understand you
correctly, does this mean that, with an APS-sized sensor, very small
apertures can produce less detail than a high resolution sensor can
record? (Please, I'll take your word for it; I won't understand the
equations.)
Yes, above f/16 a lens has less detail than the latest APS sized sensors can record.
Also, you mentioned a Bayer sensor. I don't understand; is a Bayer
sensor the type commonly used in digital cameras?
Yes, other than the Sigma cameras, they all use Bayer color filter arrays, where half of the pixels are green and the other half are split between red and blue. This means at full resolution there is ambiguity between color and intensity details. For this reason most cameras include a blurring (anti-alias) filter to eliminate the finest image details that could produce artifacts like color moiré. The raw converters still try to guess details at the full pixel count, though, and this sometimes leads to ugly looking artifacts. If you increase the pixel count of a Bayer camera to twice the pixels you want in the output JPEG file, then this aggressive raw conversion and the artifacts it causes can be avoided.
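(A tiny MATLAB illustration of the layout described above; the RGGB ordering shown is just one common arrangement.)

bayer = repmat(['R' 'G'; 'G' 'B'], 4, 4);     % an 8x8 patch of the repeating 2x2 tile
disp(bayer);
fprintf('green fraction: %.2f\n', mean(bayer(:) == 'G'));   % half the pixels are green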
Thank you again for educating me.
 
to resolve 1 line you need 2 pixels,
Only if you want to resolve those lines poorly. It takes 5 to 6
pixels to properly render a line pair.
John, this is not true for sine waves and when we talk about MTF versus frequency we are talking about sine waves. The examples I have seen that claimed to show 5 to 6 oversampling is required either used poor reconstruction techniques or they were imaging sharp edged lines that required not just the fundamental but also the third harmonic to maintain the sharp edges. For a sine wave you only need a little more than the Nyquist limit: perhaps 2.5 times the frequency. On the other hand Bayer arrays only have unambiguous luminance information for half the pixels so for a Bayer sensor you need to increase the pixel count by a factor of two compared to a monochrome sensor.
 
This is a widespread misconception. A lens cannot outresolve the
sensor, nor vice versa.
well, if we could make photosites that were smaller than photons...
photons don't have a 'size'; they have a probability distribution of being in a particular place (so does everything, really, but we tend to confuse that with 'size' at a macroscopic scale - in my case the probability of me being in the pub after work is substantially higher than for other locations). Remember the experiment where you build up a diffraction pattern from individual photons passing through a grating with slots placed a substantial distance apart.
--
Bob

 
Only if you want to resolve those lines poorly. It takes 5 to 6
pixels to properly render a line pair.
John, this is not true for sine waves and when we talk about MTF
versus frequency we are talking about sine waves. The examples I have
seen that claimed to show 5 to 6 oversampling is required either used
poor reconstruction techniques or they were imaging sharp edged lines
that required not just the fundamental but also the third harmonic to
maintain the sharp edges. For a sine wave you only need a little more
than the Nyquist limit: perhaps 2.5 times the frequency. On the other
hand Bayer arrays only have unambiguous luminance information for
half the pixels so for a Bayer sensor you need to increase the pixel
count by a factor of two compared to a monochrome sensor.
I want to see the optics without artifact; not count sine waves when they register with luck of alignment.

I think the whole Nyquist thing is poorly thought out with no foresight; 2x the highest frequency is not sufficient for sampling; you get no signal with just the "right" (wrong) phase, and you get amplitude modulation with simple integer ratios. The Nyquist is insufficient, from a high-quality perspective. It is "good enough" for crude purposes, and with the understanding that the highest frequencies are ignorable in the sampling system.

Try it for yourself; make a sinusoidal image of alternating white-grey-black-gray-white across the image with a period of about 36 pixels. Then, do a very slight perspective correction so the frequency is a little different at the top and bottom. Open a pixelate filter in preview mode, and try various numbers. I think you'll see that the signal is not artifact free unless the tiles are 6 pixels square or less. At the Nyquist limit (18x18 tiles for a period of 36 pixels), what you get can be best described as "garbage". This is much more obvious with graphics than with audio.
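(A simplified 1-D MATLAB version of this experiment, using a fixed-frequency sinusoid and a plain box average as the 'pixelate' filter; the 36-pixel period and the tile sizes follow the post, everything else is assumed.)

x = 0:719;                               % one 720-pixel row
s = 0.5 + 0.5*sin(2*pi*x/36);            % sinusoid with a 36-pixel period
for tile = [4 6 12 18]                   % 18-pixel tiles = the Nyquist case
    m = reshape(s, tile, []);            % 720 is divisible by each tile size
    p = kron(mean(m, 1), ones(1, tile)); % replace each tile with its average
    fprintf('tile %2d px: RMS error %.3f\n', tile, sqrt(mean((p - s).^2)));
end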

--
John

 
Only if you want to resolve those lines poorly. It takes 5 to 6
pixels to properly render a line pair.
John, this is not true for sine waves and when we talk about MTF
versus frequency we are talking about sine waves. The examples I have
seen that claimed to show 5 to 6 oversampling is required either used
poor reconstruction techniques or they were imaging sharp edged lines
that required not just the fundamental but also the third harmonic to
maintain the sharp edges. For a sine wave you only need a little more
than the Nyquist limit: perhaps 2.5 times the frequency. On the other
hand Bayer arrays only have unambiguous luminance information for
half the pixels so for a Bayer sensor you need to increase the pixel
count by a factor of two compared to a monochrome sensor.
I want to see the optics without artifact; not count sine waves when
they register with luck of alignment.

I think the whole Nyquist thing is poorly thought out with no
foresight; 2x the highest frequency is not sufficient for sampling;
you get no signal with just the "right" (wrong) phase, and you get
amplitude modulation with simple integer ratios. The Nyquist is
insufficient, from a high-quality perspective. It is "good enough"
for crude purposes, and with the understanding that the highest
frequencies are ignorable in the sampling system.

Try it for yourself; make a sinusoidal image of alternating
white-grey-black-gray-white across the image with a period of about
36 pixels. Then, do a very slight perspective correction so the
frequency is a little different at the top and bottom. Open a
pixelate filter in preview mode, and try various numbers. I think
you'll see that the signal is not artifact free unless the tiles are
6 pixels square or less. At the Nyquist limit (18x18 tiles for a period of
36 pixels), what you get can be best described as "garbage". This is
much more obvious with graphics than with audio.

--
John

It doesn't look like you have studied sampling theory. If you give me a bandlimited signal that has been sampled at more than two times the highest frequency the signal contains, then I can reconstruct an exact representation of the signal as if it had originally been sampled at 10 or more times the highest frequency. Since a lens has a diffraction cutoff frequency above which there is precisely zero content, that cutoff establishes the band limit of the image it creates.

The only complication is the truncation of the image which occurs because the entire image is not sampled by the rectangular sensor (this truncation means the rectangular image is no longer perfectly bandlimited). This means that my reconstruction would not be exact close to the image border, but once we are a dozen pixels or so from the border (for 2.5x sampling; how close to the border I can get with an accurate reconstruction depends on how far above Nyquist we sample) my reconstruction would be accurate.

Here is a 1-D example in Matlab:

fm= 0.8; % fraction of Nyquist used for the band limit
Nup= 10; % upsampling ratio
Nfilt= 24; % reconstruction filter order at original sampling rate
NfiltUp= Nfilt*Nup;
h= remez(NfiltUp,[0 fm/Nup 1/Nup*(2-fm) 1],[1 1 0 0]); % lowpass reconstruction filter (remez = firpm in newer MATLAB)
f= 0.77777; % test frequency, as a fraction of the original-rate Nyquist
N= 40*Nup; % number of high sample rate points to generate
t= 1/(2*Nup)*(0:N-1); % high-rate time axis
s5= sin(2*pi*f*t); % "true" signal at the high sample rate
s1= s5(1:Nup:end); % signal sampled at the original (low) rate
t1= t(1:Nup:end);
s2= [Nup;zeros(Nup-1,1)]*s1; % zero-stuffed upsampled samples (scaled by Nup)
s2= s2(:)';
s3= filter(h,1,s2); % reconstruct the high-rate signal
s3= s3(NfiltUp+1:end); % discard the filter start-up/delay samples
s5mid= s5(NfiltUp/2+1:end-NfiltUp/2); % trim edges where the reconstruction is not valid
tmid= t(NfiltUp/2+1:end-NfiltUp/2);
s1mid= s1(Nfilt/2+1:end-Nfilt/2);
t1mid= t1(Nfilt/2+1:end-Nfilt/2);
plot(2*t1mid,s1mid,'o',2*tmid,s5mid,'.-',2*tmid,s5mid-s3,'.-');
legend(char({'samples';'signal';'error'}));


Here is the output showing re-sampled signal error that is 80 dB down:

 
