How much ff resolution to not be "weak link" with Otus?

l_d_allan

My understanding is that the very expensive Zeiss 55mm f1.4 Otus is pretty much state-of-the-art for lens quality, including line-pair resolution.

I also have the impression that the Otus has more resolution than even the Nikon D800e with 36mpx can resolve. In that sense, the Nikon sensor is the "weak link".
  • I'm unclear if this is a valid question, but what full frame sensor resolution would "balance" the Otus so that neither was the "weak link"?
  • How about APS-C sensors with 24 mpx and no AA filter? I believe I've read that these sensors present the greatest challenge to the center of premium lenses.
  • Is there some kind of formula that relates optimal sensor resolution to line-pair resolution with a certain MFT definition?
  • How about very good, but not ultra-premium lenses like a Canon 35mm f2 IS prime?
  • How about a very good zoom like the Canon 70-200mm f2.8L II zoom?
  • Is there a way to estimate how much resolution a lens can "feed" from DxoMark lens ratings?
  • Sorry if this question has been asked before.
 
Solution
Has anybody seen these? Surely they must have but I'm posting them anyway for completeness and easy reference:

[image: manufacturer MTF chart for the Otus 55mm]

Here we see that the lens appears to do its best at 40 lp/mm with about 87% MTF (sagittal, 5 mm off-center). So that would be the 'target' for any candidate sensor, I would have thought.

It does compare well to the perfect lens at f/4:

[image: MTF chart of a perfect (diffraction-limited) lens at f/4]

By comparison at 40 lp/mm, the perfect lens has an MTF of just less than 90%.
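That "just less than 90%" figure follows from the standard diffraction-limited MTF of a circular aperture; here is a quick sanity check in Python (assuming ~550 nm green light, an assumption on my part):

```python
import math

def diffraction_mtf(freq_lpmm, f_number, wavelength_um=0.55):
    """Diffraction-limited MTF of a circular aperture (incoherent light)."""
    cutoff = 1000.0 / (wavelength_um * f_number)  # cutoff frequency, cycles/mm
    s = freq_lpmm / cutoff                        # normalized frequency
    if s >= 1.0:
        return 0.0
    return (2.0 / math.pi) * (math.acos(s) - s * math.sqrt(1.0 - s * s))

print(round(diffraction_mtf(40, 4), 3))  # ~0.888, i.e. just under 90%
```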

At f/4, the OTUS seems almost perfect, i.e. just about diffraction-limited.

Therefore, taking the OTUS as 'virtually perfect' at f/4, it has an Airy Disk radius of 2.71um. In the simple Nyquist world, that would be the limiting sensor pixel pitch for an OTUS at f/4, would it not...
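The 2.71 µm figure is just the Airy first-zero radius, r = 1.22·λ·N. A sketch, assuming ~555 nm light and taking one pixel per Airy radius as the "limiting pitch" described above (my extrapolation, not the poster's number):

```python
def airy_radius_um(f_number, wavelength_um=0.555):
    # First-zero radius of the Airy pattern: r = 1.22 * lambda * N
    return 1.22 * wavelength_um * f_number

r = airy_radius_um(4)                 # radius at f/4
mp = (36000 / r) * (24000 / r) / 1e6  # 36x24 mm frame at that pixel pitch
print(round(r, 2), round(mp))  # ~2.71 um -> ~118 MP full frame
```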
Roland Karlsson wrote:
Then current sensors are enough for the Otus.
Actually, this is a surprise to me.
Same here. By "current sensors", does this mean high-end state-of-the-art like the Nikon D800e or Sony A7R, or the Canon full-frame bodies of 20 MP or so?

And what about the Hasselblad approach of sub-pixel shifting a 50 Mpx sensor to get 200 Mpx effective resolution? Seems like I read about that a while back. What's that all about?
One thought: have we now considered the Nyquist criterion, that you need to sample at twice the highest frequency in the signal?

Shall we not double the sensor resolution? To begin with?
That seems like a legitimate approach, but my understanding of the Nyquist criterion is limited and may be flawed.
And do we really catch all what the lens can provide?
I've seen values of 50 lp/mm for the Otus, but that was with a 36 Mpx sensor. My speculation is that an optical bench might provide a higher number that would better reflect the resolution potential of the lens. LensRentals recently acquired an optical bench.
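As a crude Nyquist sanity check on that 50 lp/mm figure (a sketch only; since the measurement was sensor-limited, the true lens number is presumably higher):

```python
def nyquist_pitch_um(lpmm):
    # Nyquist: at least 2 samples per cycle -> pitch <= 1/(2*lpmm) mm
    return 1000.0 / (2.0 * lpmm)

def ff_megapixels(pitch_um):
    # Pixel count of a 36 x 24 mm full-frame sensor at a given pitch
    return (36000.0 / pitch_um) * (24000.0 / pitch_um) / 1e6

p = nyquist_pitch_um(50)
print(p, round(ff_megapixels(p), 2))  # 10.0 um pitch, ~8.64 MP
```

This illustrates how badly a sensor-limited lp/mm reading can understate what a lens could use.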

Overall, I'm still puzzled after reading the replies. They've ranged from "current sensors are enough" to "would take 500 Mpx". That's a big range, but I suppose that reflects diminishing returns.

Again, I no doubt have a flawed understanding of the issues involved, but it would seem that ideally the premium prime lens would be well matched to a premium sensor such that neither was the "weak link" or "bottleneck". Or not?
 
I've seen values of 50 lp/mm for the Otus, but that was with a 36 Mpx sensor.
Lens spatial resolution numbers are for the lens only; a camera sensor itself does not affect the numbers published by lens manufacturers. I realize that it is difficult to disassociate the two in one's mind but I could put a sheet of toilet paper in the image plane without affecting that 50 lp/mm at all.

BTW, has anyone answered the original question to your satisfaction yet?

In engineering, that kind of question is usually solved by a graph with two lines on it. Where they cross is usually the answer to the type of question being asked. For example, how fast is this car? Wheel torque decreases as the car speed (engine rpm) increases. Drag increases with the square of the car speed. Where the falling torque crosses the increasing drag is the speed of the car. Over-simplified but ya get the idea.

All someone has to do is find an equivalent in the world of optics and imaging and the job is done.

--
Cheers,
Ted
 
  • How about APS-C sensors with 24 mpx and no AA filter? I believe I've read that these sensors present the greatest challenge to the center of premium lenses.
Nikon D7100s are aliasing with consumer-grade lenses.
But could some of that aliasing be from the demosaicing algorithm? Especially since this doesn’t have an AA filter?
No. A good demosaicing algorithm will have the opposite effect, behaving like an upsizing followed by mixing information from more than one photosite location, smearing spatial detail.
 
I also have the impression that the Otus has more resolution than even the Nikon D800e with 36mpx can resolve. In that sense, the Nikon sensor is the "weak link".
  • I'm unclear if this is a valid question, but what full frame sensor resolution would "balance" the Otus so that neither was the "weak link"?
As a criterion, how about when the contribution to Total MTF50 by lens aberrations/diffraction is equal to the contribution of effective photosite aperture, around half way down to the edge of the lens, averaging tangential and sagittal measurements?

That would mean an MTF of about 71% for each of the two contributors (0.71*0.71=0.5). All one would need is optical bench measurements of the lens at MTF71 in lp/mm around the desired spot and plug that value into the pixel aperture MTF formula. Solving for aperture when MTF is 71% would provide the 'matching' effective sensor resolution as a function of pixel pitch.

As an example take the Sigma or Canon 24-105 f/4, which at 24mm fully open (I suppose) read around 60 lp/mm on their own .

MTF71 due to effective pixel aperture in microns (A) occurs at about 0.44 cycles per pixel [from 0.71=sinc(pi.f) ] or 440/A lp/mm. Equating the two terms in lp/mm and assuming I did not make any silly mistakes yields a 'matching' pixel pitch for those lenses of about A=7.3 microns (440/60).
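The MTF71 matching criterion above can be reproduced numerically. A sketch under the stated assumptions (1-D sinc pixel-aperture MTF, a lens reading 60 lp/mm at MTF71):

```python
import math

def pixel_mtf(f_cyc_per_px):
    # 1-D MTF of a square pixel aperture: sinc(pi*f)
    x = math.pi * f_cyc_per_px
    return 1.0 if x == 0 else math.sin(x) / x

def freq_at_mtf(target):
    # Bisection on [0, 1], where the pixel MTF falls monotonically
    lo, hi = 0.0, 1.0
    for _ in range(60):
        mid = (lo + hi) / 2
        if pixel_mtf(mid) > target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

f71 = freq_at_mtf(0.71)        # ~0.44 cycles/pixel
pitch = f71 * 1000.0 / 60.0    # equate 1000*f71/A lp/mm to the lens's 60 lp/mm
print(round(f71, 2), round(pitch, 1))  # 0.44, ~7.3 um
```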

Required aperture would be clearly somewhat smaller if we took optical bench readings at the lens' sharpest f/#.

The problem is where to get the optical bench data for the Otus. However, I think we can safely assume that it's incrementally better than those in this example (i.e. not twice as good), so perhaps the D7100 at 3.9 microns is not so badly matched to it after all :-)

Imho this would be a more practical criterion than 'no aliasing', which I am not even sure is desirable today.
 
I have thought all along that non-OLPF cameras ought to be a good (easily accessible) device to estimate lens MTF beyond Nyquist. Use a tailor-made target (one that band-passes at some multiple of Nyquist) and exploit the aliasing of the image sensor to gain knowledge about the response of the lens (plus other components in the chain) at the equivalent of Nx24MP pixel counts.
Interesting idea, -h.

On the other hand I am not sure what information we would glean that isn't already fairly accurately available in a slanted-edge-derived MTF curve.
 
Interesting idea, -h.

On the other hand I am not sure what information we would glean that isn't already fairly accurately available in a slanted-edge-derived MTF curve.
I don't know the details about that test, but does it give reliable information about >fs/2?

The edge can be described by an analytic function, so I guess if a good model can be fitted to the samples of such a chart, it might be possible to estimate the PSF denser than what the sensel pitch would suggest in a Nyquistian sense?

-h
 
Interesting idea, -h.

On the other hand I am not sure what information we would glean that isn't already fairly accurately available in a slanted-edge-derived MTF curve.
I don't know the details about that test, but does it give reliable information about >fs/2?
It gets less reliable as frequency approaches 1 cycle/pixel. However to make it more reliable all one needs to do is feed it a longer (straight) edge with more points to sample.
The edge can be described by an analytic function, so I guess if a good model can be fitted to the samples of such a chart, it might be possible to estimate the PSF denser than what the sensel pitch would suggest in a Nyquistian sense?
The beauty of the approach is that it projects all 'gray' and 'white' captured intensities around the edge onto a single line normal to it - effectively massively oversampling it and therefore obtaining an Edge Spread Function and related PSF independent of pixel pitch. This is how Frans van den Bergh put it in a recent dpr post:

[image: quoted explanation from Frans van den Bergh's post]


In a perfect system the edge intensity profile should be a perfectly square step. But it's not as a result of smearing contributions by various components in the hardware capture chain (diffraction, lens aberrations, effective pixel aperture, AA, defocus etc).

By taking the derivative of the Edge Spread Function we obtain an approximation of the camera/lens system's Point Spread Function, the modulus of the Fourier transform of which is the MTF curve. Very, very clever.
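Here is a toy sketch of that ESF → PSF → MTF pipeline, using a synthetic Gaussian blur as a stand-in for the real system PSF (all values illustrative, not measured from any camera):

```python
import numpy as np

# Heavily oversampled synthetic Edge Spread Function: an ideal step blurred
# by a Gaussian (a stand-in for diffraction + aberrations + pixel aperture).
n = 1024
x = np.linspace(-8, 8, n)                  # distance across the edge, pixels
sigma = 0.6                                # blur width in pixels (illustrative)
lsf_true = np.exp(-x**2 / (2 * sigma**2))
esf = np.cumsum(lsf_true)
esf /= esf[-1]                             # normalize the edge to 0..1

# Differentiating the ESF recovers the (1-D) Line Spread Function...
lsf = np.gradient(esf, x)

# ...and the modulus of its Fourier transform is the MTF.
mtf = np.abs(np.fft.rfft(lsf))
mtf /= mtf[0]                              # normalize to 1 at zero frequency
freqs = np.fft.rfftfreq(n, d=x[1] - x[0])  # cycles/pixel

# Spot-check against the analytic Gaussian MTF near 0.25 cycles/pixel
i = int(np.argmin(np.abs(freqs - 0.25)))
print(round(float(mtf[i]), 2))  # ~0.64
```

The real slanted-edge trick, of course, is obtaining that oversampled ESF in the first place, by projecting the pixels near a slightly tilted edge onto its normal.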

And here is what the D610+85/1.8 MTF curves look like (from a recent thread that explains a bit more where they come from):

The MTF curve labeled Vertical crop appears not to have been LP filtered, while the Horizontal one does appear to have been.


Each curve is the result of at least 300 samples, the length in pixels of the edge crop fed to MTF Mapper. Therefore, as long as there is some energy up there (as in the supposedly AA-less Vertical crop above), the MTF curves should be fairly stable well up into the 0.8-0.9 cycles/pixel range.

Jack
 
As in so many other common cases, the answer depends on what you mean (or "intend") by the question...

You're asking for a "discrete number": a one-number answer to a question for which you haven't given any real ramifications or criteria. That means you can get a million different answers, each with a different number.

Now here's the catch: they (each and every one of the answers) can all be equally wrong AND equally correct. At the same time!

Let's go through the first two criteria of possibly hundreds of possible question scenarios. Depending on your answers below (let's make it simple, and use a 0-5 scale from "absolutely no" to "definite yes") I can give you 36 DIFFERENT but all "correct" answers. Question criteria:

A: Usage

(0) You're a sloppy action photographer that often shoots in low-light scenarios.
.........
(5) You're a very deliberate landscape or product photographer that always strives to get the absolute maximum out of the shot by using good light, meticulous focusing and so on.

B: Contrast threshold

(0) You need (want!) very good pixel>pixel contrast accuracy.
.........
(5) You accept very low pixel>pixel contrast, as long as the detail is still there.

...................................................

Not that this in any way should be taken literally, but then imagine that you do grade the two questions above by giving me two 0-5 answers to "A" and "B". I could then construct an "MP needed to not limit the lens" answer by using an equation like this:

MP = baseMP*((A+1)*kA)*((B+1)*kB)

BaseMP will be some kind of baseline calculated for the lowest possible resolution needed, given the criteria spans. kA and kB respectively are some functions giving a certain (maybe non-linear) weight to your criteria grades. They also need to incorporate the physical capabilities of the lens.

Note that this results in 36 possible answers. Also note that you can modify the answer span even further both upwards and downwards by adding in more specific criteria...
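A toy implementation of the weighting idea above. Note that base_mp, kA and kB here are placeholder numbers I made up, not values from the post, and the author notes the real weights would likely be non-linear:

```python
def mp_needed(A, B, base_mp=10.0, kA=1.0, kB=1.0):
    # MP = baseMP * ((A+1)*kA) * ((B+1)*kB), with A and B graded 0-5
    return base_mp * ((A + 1) * kA) * ((B + 1) * kB)

# All 36 combinations of the two 0-5 criteria grades
answers = sorted(mp_needed(a, b) for a in range(6) for b in range(6))
print(len(answers), answers[0], answers[-1])  # 36 answers, from 10.0 to 360.0
```

With these linear placeholder weights the span overshoots the author's 10-150 MP estimate; non-linear kA and kB functions would compress it.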

For the OTUS case, I'd say most of the relevant and "correct" answers fall between 10MP and 150MP. Anything between those two is definitely in the "correct" region. Higher AND lower answers can also be correct, if you use very strict criteria to modify the question.

For me, and MY criteria - I'd say about 100MP given my usual mode of usage. That number is correct for MY average usage scenario and post processing habits, but it may be totally wrong for someone else.
 
Good point, The_Suede. Any idea what an average optical bench reading might be for the Otus 55?

And for everyone who is pulling out a calculator 100MP corresponds to about a 3 micron pitch on FF.
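That pitch figure checks out; a quick sketch:

```python
import math

def ff_pitch_um(megapixels):
    # 36 x 24 mm frame: pitch = sqrt(sensor area / pixel count)
    area_um2 = 36000.0 * 24000.0
    return math.sqrt(area_um2 / (megapixels * 1e6))

print(round(ff_pitch_um(100), 2))  # ~2.94 um, i.e. about 3 microns
```

Note the result is a pitch: the same ~3 µm would apply on any sensor size, with only the total pixel count changing.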
 
My understanding is that the very expensive Zeiss 55mm f1.4 Otus is pretty much state-of-the-art for lens quality, including line-pair resolution.

I also have the impression that the Otus has more resolution than even the Nikon D800e with 36mpx can resolve. In that sense, the Nikon sensor is the "weak link".
The sensor has discrete resolution, and the lens has analog resolution, as different as apples and oranges. No sensor fully resolves any lens. Both work towards limiting resolution. There is no thresholding involved.
  • I'm unclear if this is a valid question, but what full frame sensor resolution would "balance" the Otus so that neither was the "weak link"?
I would guess that at about 500 MP FF or so, the best lenses have very little more to give.
  • How about APS-C sensors with 24 mpx and no AA filter? I believe I've read that these sensors present the greatest challenge to the center of premium lenses.
Nikon D7100s are aliasing with consumer-grade lenses.
  • Is there some kind of formula that relates optimal sensor resolution to line-pair resolution with a certain MFT definition?
To capture a line pair with luck of alignment, no AA filter and one pixel row or column per line will work. As soon as it gets out of phase or alignment, though, the line pairs will distort and break up. This is horrible imaging (not the same as "bad photography", of course, as many interesting photos are technically broken).

A more reasonable common standard is about 1.4 camera lines per lens line, but that is not ideal, IMO. I would take it further and say that a B&W or Foveon sensor needs about 3 sensor lines per lens line to be distortion-free, and a Bayer sensor, 6 lines. That, of course, is very expensive by today's standards in terms of needed storage and CPU power for processing.

It comes very close to practicality, however, for someone doing something like heavily cropping small subjects with a DSLR; a compact sensor attached to a DSLR lens gives much better results. If I attach my Pentax Q to my DSLR telephoto and shoot a detailed object, and shoot with my DSLRs from the same distance with the same lens, the latter looks like cr@p compared to the former, unless the DSLR is my 6D and the ISO is 3200 or above, because the 6D has state-of-the-art low high-ISO noise. The 7D and my older 5DmkII can't touch the Q, even at high ISOs.
  • How about very good, but not ultra-premium lenses like a Canon 35mm f2 IS prime?
  • How about a very good zoom like the Canon 70-200mm f2.8L II zoom?
I've seen 100% crops from this lens with a 2x and 1.4x TC stacked on a 7D that had pixel-level detail visible with mild sharpening. That's with a strong AA filter on the 7D. The same or better would be possible without the TC, and 8x the pixel density, or about 143 MP, APS-C 1.6x; especially with no AA filter.
  • Is there a way to estimate how much resolution a lens can "feed" from DxoMark lens ratings?
The closer the "perceptual MP" is to the actual MP of the sensor used, the further beyond that sensor's MP count you could go and still see high returns. Of course, a strong AA filter lowers the "PMP" from what it would be without one or with a weaker one.

The bottom line is that all current MP counts are insufficient for all half-way decent lenses, especially in their sweet spots. We are taking shortcuts that distort the analog image projected by the lens with AA filters and/or low pixel densities. Keeping MP counts low to keep pixel-level (100% pixel view) sharpness high is counter-productive to imaging, and only helps with processing speed and storage space issues.

The best capture is one where there is so much pixel density that everything is soft at the pixel level on a ~100 PPI monitor, except the noise, which can be much more effectively identified and eliminated. The resulting capture is immune to damage such as sharpening halos, loss of detail from CA and geometric corrections, rotation, scaling, etc. An image capture sharp at the pixel level can be interesting eye candy, but is a fragile mess of partial and distorted sharpness that breaks down as soon as you try to do any editing to it.
...there are also the matters of focus accuracy, diffraction, motion, and display medium to consider.

For wide apertures (shallow DOF), diffraction is a non-issue, but focus accuracy is critical. As you stop down, diffraction becomes more of an issue and focal accuracy less of an issue.

Motion blur, of course, is very often an issue, especially in low light. Noise is interesting in that a certain amount of noise can increase the *apparent* resolution for some scenes, but is often destructive for many scenes, so lens sharpness is not the limiting factor, here.

So, for a landscape photographer taking photos in good light, as John says, 500 MP on a Bayer CFA would be a nice round number to set as a practical limit. However, for low light sports photography, we may already be past the practical limit.

Of course, all the above is in terms of the detail of the captured photo, not the displayed photo. The display size, display medium, viewing distance, and visual acuity all play primary roles in terms of practical limits. For example, current pixel counts for 1200 x 800 photos on the web viewed on a computer monitor, 8x12 inch prints, etc., are already well past any practical limit (assuming the photos are not heavily cropped, of course).
 
MTF71 due to effective pixel aperture in microns (A) occurs at about 0.44 cycles per pixel [from 0.71=sinc(pi.f) ] or 440/A lp/mm. Equating the two terms in lp/mm and assuming I did not make any silly mistakes yields a 'matching' pixel pitch for those lenses of about A=7.3 microns (440/60).
I've seen various formulae that use the sinc function for sensor MTF:

1) The one above.

Then some that have pi in them and some that do not.

Then some that have an exponent, i.e. sinc^2(x) aka (sinc(x))^2.

Additionally, Norman Koren assigns different values to that exponent depending on sensor type, e.g. Foveon, Bayer with AA, Bayer without.

My current spreadsheet for such stuff uses:

MTF = sinc^2(pi.r) where r=modulation frequency/sampling frequency

But now I'm beginning to wonder whether that is right :-(

--
Cheers,
Ted
 
Jack Hogan wrote:
The problem is where to get the optical bench data for the Otus.
I'll attempt to get Roger Cicala and OLAF at LensRentals involved in this discussion.
However, I think we can safely assume that it's incrementally better than those in this example (i.e. not twice as good), so perhaps the D7100 at 3.9 microns is not so badly matched to it after all :-)
Interesting. Simple math would suggest that with a crop factor of 1.5, that would work out to 24 Mpx x 1.5 x 1.5 ~= 54 Mpx.
 
Is the 1 cycle/pixel a hard limit or simply a reflection of (usually) poor SNR at those kind of frequencies?

One might think that detailed information about the sub-pixel active area (and/or lens PSF beyond 1 cycle/pixel) could be estimated from suitable input, especially if there was no OLPF.

-h
 
Good point, The_Suede. Any idea what an average optical bench reading might be for the Otus 55?
Agree! Mainly because that was my initial guess :)
And for everyone who is pulling out a calculator 100MP corresponds to about a 3 micron pitch on FF.
Hmmmm ... should it not be 3 µm on any sensor size for the Otus?

Otherwise, I think 3 µm sounds like a reasonable pitch.
 
Jack Hogan wrote:
The problem is where to get the optical bench data for the Otus.
I'll attempt to get Roger Cicala and OLAF at LensRentals involved in this discussion.
However, I think we can safely assume that it's incrementally better than those in this example (i.e. not twice as good), so perhaps the D7100 at 3.9 microns is not so badly matched to it after all :-)
Interesting. Simple math would suggest that with a crop factor of 1.5, that would work out to 24 Mpx x 1.5 x 1.5 ~= 54 Mpx.
I've actually been following along this interesting discussion, but I don't have too much to contribute: I don't know the 'right' answer. I think John covered pretty much everything I know and said it better than I would have.

I'd also throw in that we have to consider things like refraction of sensor cover glass, sensor microlenses, etc. so just the pixel numbers are going to give an incomplete answer (although closer than what we have now).

OLAF, I'm afraid can't be much help because we don't calculate the point spread function numbers when we're using it.

The good news is next week I take delivery of a Trioptics Imagemaster MTF bench, which is the best optical bench made right now. (We currently do have a Wells bench, but I don't trust the results with larger lenses like the Otus.)

One of my first interests is going to be comparing straight optical MTF from lenses with system MTF on different cameras. I'm planning on an Otus vs Sigma Art vs Canon 50mm f/1.2; first on Imatest then with the lenses just on a bench. Hopefully the results will let some of you with better math than me answer this question accurately.

Once we begin to get a database of these kind of results, I think it will be even more interesting to look at the same lens on various mounts. It would make perfect sense that Lens A might actually perform better on a Canon mount than on a Sony mount with equal pixel density because of ray angles, sensor cover refraction, microlenses, etc. (Or vice versa, of course.)

Roger
 
RCicala wrote:
One of my first interests is going to be comparing straight optical MTF from lenses with system MTF on different cameras. I'm planning on an Otus vs Sigma Art vs Canon 50mm f/1.2; first on Imatest then with the lenses just on a bench. Hopefully the results will let some of you with better math than me answer this question accurately.
Thanks for joining the discussion.

Looking forward to your measurements.
 
MTF71 due to effective pixel aperture in microns (A) occurs at about 0.44 cycles per pixel [from 0.71=sinc(pi.f) ] or 440/A lp/mm. Equating the two terms in lp/mm and assuming I did not make any silly mistakes yields a 'matching' pixel pitch for those lenses of about A=7.3 microns (440/60).
I've seen various formulae that use the sinc function for sensor MTF:

1) The one above.

Then some that have pi in them and some that do not.
Those that do not may assume that it's already been factored into the f
Then some that have an exponent, i.e. sinc^2(x) aka (sinc(x))^2.
Yes, the 2D transform of a square pixel results in MTF=sinc(pi.fx)*sinc(pi.fy), so if you are evaluating the MTF of a lens with equal intensity and spatial frequency response in the x and y directions it becomes sinc^2(pi.f).

But I don't think this is our case (open to correction, though): the optical bench produces data from a unidimensional, unidirectional line, so I believe the appropriate formula is the 1D transform of a box function of width A where MTF=sinc(pi.f), with f in cycles (lp) per pixel. (This is also true when dealing with the unidimensional/directional Edge Spread Function).
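To see how much the two forms differ numerically (the 1-D sinc form for an edge/slit measurement vs the squared form used in some models):

```python
import math

def sinc(x):
    return 1.0 if x == 0 else math.sin(x) / x

# Pixel-aperture MTF at a few spatial frequencies f, in cycles/pixel
for f in (0.25, 0.44, 0.5):
    m1 = sinc(math.pi * f)        # 1-D box (edge/slit) form
    m2 = sinc(math.pi * f) ** 2   # squared form, e.g. Koren-style models
    print(f, round(m1, 3), round(m2, 3))  # e.g. at f=0.5: 0.637 vs 0.405
```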
Additionally, Norman Koren assigns different values to that exponent depending on sensor type, e.g. Foveon, Bayer with AA, Bayer without.
Yes, we are instead assuming that sampling MTF is 1 through test optimization and the assumption that we are looking at a neutral, spectrally uniformly lit target whose full-res capture raw data has been zero blocked and white balanced - a big simplifying assumption.
My current spreadsheet for such stuff uses:

MTF = sinc^2(pi.r) where r=modulation frequency/sampling frequency

But now I'm beginning to wonder whether that is right :-(
I am getting in way over my head, so I invite more erudite members to take over. But I am starting to think that if you wanted to go that way for modelling purposes other than optical bench or ESF evaluations, perhaps the easiest approach would be to give a range, because MTF performance would depend on the scene's color/luminosity distribution. The upper end of the range would be the one I indicated above, valid where/when the image can be properly white balanced; the lower end would be the MTF of the sparsely populated G1+G2 channels, which are supposedly dominant in our perception of luminance/spatial resolution/contrast. The MTF formula would probably have four sinc terms including angular components. Help!
 
Is the 1 cycle/pixel a hard limit or simply a reflection of (usually) poor SNR at those kind of frequencies?
From what I understand it's just that there is very little energy there so noise plays havoc with the apparent results (see the ringing in the Horizontal crop MTF graph above). Increase the samples, decrease the noise.
One might think that detailed information about the sub-pixel active area (and/or lens PSF beyond 1 cycle/pixel) could be estimated from suitable input, especially if there was no OLPF.
In either case the method works equally. It's just that with an OLPF the PSF/MTF are more predictable thus better behaved (see the D4 graphs in the linked thread).
 
  • How about APS-C sensors with 24 mpx and no AA filter? I believe I've read that these sensors present the greatest challenge to the center of premium lenses.
Nikon D7100s are aliasing with consumer-grade lenses.
But could some of that aliasing be from the demosaicing algorithm? Especially since this doesn’t have an AA filter?
No. A good demosaicing algorithm will have the opposite effect, behaving like an upsizing followed by mixing information from more than one photosite location, smearing spatial detail.
From what I understand of the Nyquist theorem, pre-blurring analog data before digital capture is optimal for eliminating artifacts. You can blur the data after digitizing, but you need to do more blur in demosaicing than if you originally blurred the analog signal.

Not that this means that an antialias filter is always needed in a camera, or that existing AA filter designs are optimal, etc.
 
