How much FF resolution to not be the "weak link" with the Otus?

l_d_allan

My understanding is that the very expensive Zeiss Otus 55mm f/1.4 is pretty much state-of-the-art for lens quality, including line-pair resolution.

I also have the impression that the Otus has more resolution than even the 36 MP Nikon D800E can capture. In that sense, the Nikon sensor is the "weak link".
  • I'm unclear if this is a valid question, but what full frame sensor resolution would "balance" the Otus so that neither was the "weak link"?
  • How about APS-C sensors with 24 MP and no AA filter? I believe I've read that these sensors present the greatest challenge to the center of premium lenses.
  • Is there some kind of formula that relates optimal sensor resolution to a lens's line-pair resolution at a certain MTF criterion?
  • How about very good, but not ultra-premium, lenses like the Canon 35mm f/2 IS prime?
  • How about a very good zoom like the Canon 70-200mm f/2.8L II?
  • Is there a way to estimate how much resolution a lens can "feed" from DxOMark lens ratings?
  • Sorry if this question has been asked before.
 
RCicala wrote: One of my first interests is going to be comparing straight optical MTF from lenses with system MTF on different cameras. I'm planning on an Otus vs Sigma Art vs Canon 50mm f/1.2; first on Imatest then with the lenses just on a bench. Hopefully the results will let some of you with better math than me answer this question accurately.
Excellent. It would be great to be able to set up Imatest to work only on the two green channels directly off the raw data without demosaicing (dcraw -D -4). This would isolate the hardware from the processing as much as possible while minimizing the effect of easily corrected color aberrations on the results.

Jack
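For anyone who wants to experiment with that idea, here is a minimal Python sketch of pulling the two green planes out of an undemosaiced mosaic, in the spirit of `dcraw -D -4`. The rawpy package and the file name "chart.NEF" are assumptions of the sketch, not anything Jack or RCicala actually used:

```python
# Minimal sketch: collect the two green Bayer planes from an undemosaiced
# raw file, analogous to working on the output of `dcraw -D -4`.
# Assumptions: the rawpy package is installed; "chart.NEF" is hypothetical.
import numpy as np
import rawpy

with rawpy.imread("chart.NEF") as raw:
    mosaic = raw.raw_image_visible.astype(np.float64)
    cfa = raw.raw_colors_visible.copy()  # 0=R, 1=G, 2=B, 3=second green

# Each green plane occupies one quarter of the photosites on a diagonally
# offset lattice; here we just gather the values for separate analysis.
g1 = mosaic[cfa == 1]
g2 = mosaic[cfa == 3]
print(g1.size, g2.size)
```

Working on these planes directly avoids demosaicing entirely, which is exactly the isolation of hardware from processing described above.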
 
  • How about APS-C sensors with 24 MP and no AA filter? I believe I've read that these sensors present the greatest challenge to the center of premium lenses.
Nikon D7100s are aliasing with consumer-grade lenses.
But could some of that aliasing be from the demosaicing algorithm? Especially since this camera doesn’t have an AA filter?
No. A good demosaicing algorithm will have the opposite effect, behaving like an upsizing followed by mixing information from more than one photosite location, smearing spatial detail.
From what I understand of the Nyquist theorem, pre-blurring the analog signal before digital capture is optimal for eliminating artifacts. You can blur the data after digitizing, but you then need more blur in demosaicing than if the analog signal had been blurred in the first place.

Not that this means that an antialias filter is always needed in a camera, or that existing AA filter designs are optimal, etc.
I see what you mean, although technically you cannot completely make up for a missing/weak AA filter automatically during raw conversion (other than by effectively painting the affected area in manually).

And generic demosaicing's blurring can be thought of as a low-pass filter of sorts, so if anything it will reduce aliasing, not produce it. Some advanced algorithms try to 'guess' at the missing information (effectively 'painting' it in), sometimes getting it wrong. Perhaps this is what you are referring to? If so I agree.
 
  • How about APS-C sensors with 24 MP and no AA filter? I believe I've read that these sensors present the greatest challenge to the center of premium lenses.
Nikon D7100s are aliasing with consumer-grade lenses.
But could some of that aliasing be from the demosaicing algorithm? Especially since this camera doesn’t have an AA filter?
No. A good demosaicing algorithm will have the opposite effect, behaving like an upsizing followed by mixing information from more than one photosite location, smearing spatial detail.
From what I understand of the Nyquist theorem, pre-blurring the analog signal before digital capture is optimal for eliminating artifacts. You can blur the data after digitizing, but you then need more blur in demosaicing than if the analog signal had been blurred in the first place.

Not that this means that an antialias filter is always needed in a camera, or that existing AA filter designs are optimal, etc.
I see what you mean, although technically you cannot completely make up for a missing/weak AA filter automatically during raw conversion (other than by effectively painting the affected area in manually).

And generic demosaicing's blurring can be thought of as a low-pass filter of sorts, so if anything it will reduce aliasing, not produce it. Some advanced algorithms try to 'guess' at the missing information (effectively 'painting' it in), sometimes getting it wrong. Perhaps this is what you are referring to? If so I agree.
We should perhaps distinguish between aliasing and imaging. Improper (pre-) filtering in the ADC (or downsampling) stage can introduce aliasing components anywhere in the spectrum (even at "DC"). Improper (post-) filtering in the DAC (or upsampling) stage can introduce imaging components at multiples of the sampling frequency.

From a Nyquist point of view, the Demosaic processing should perhaps be seen as an upsampling stage. If you do interpolation filtering in the separate color planes (perhaps not state-of-the-art, but pedagogic), you are "filling in the dots" in order to recreate a "smooth" 2-dimensional waveform. Never mind that those samples were not created with "proper" filtering in the first place, and some "correction" is probably warranted.

My point is that aliasing in the capture stage can cause DC or near-DC errors (think of a tiled roof at a very high spatial frequency causing a slowly fluctuating, smooth level). No amount of post-blurring is going to fix this.

-h
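Since the folding-down-to-DC effect can be hard to picture, here is a one-dimensional numpy sketch of -h's point, with all numbers chosen purely for illustration: a tone just above the sample rate comes out of the sampler as a slow fluctuation, and blurring the samples afterwards cannot bring the true frequency back:

```python
# 1-D illustration: a frequency just above the sample rate aliases down to
# a slow fluctuation that no post-capture blur can undo.
import numpy as np

fs = 100.0                    # samples per unit length
f_signal = 103.0              # cycles per unit length, far above Nyquist (50)
x = np.arange(0, 1, 1 / fs)   # sample positions
samples = np.sin(2 * np.pi * f_signal * x)

# The captured samples are indistinguishable from a 3-cycle signal,
# since 103 - 100 = 3:
alias = np.sin(2 * np.pi * 3.0 * x)
print(np.allclose(samples, alias))  # True

# Post-blur merely smooths the false 3-cycle signal; it cannot restore
# the original 103 cycles. Only filtering *before* sampling prevents this.
post_blur = np.convolve(samples, np.ones(5) / 5, mode="same")
```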
 
Has anybody seen these? Surely they must have, but I'm posting them anyway for completeness and easy reference:

[Image: Zeiss MTF chart for the Otus 55mm f/1.4]

Here we see that the lens appears to do its best for 40 lp/mm at about 87% MTF (sagittal) 5 mm off-center. So that would be the 'target' for any candidate sensor, I would have thought.

It does compare well to the perfect lens at f/4:

[Image: MTF chart for a perfect (diffraction-limited) lens at f/4]

By comparison at 40 lp/mm, the perfect lens has an MTF of just less than 90%.

At f/4, the OTUS seems almost perfect, i.e. just about diffraction-limited.

Therefore, taking the OTUS as 'virtually perfect' at f/4, it has an Airy Disk radius of 2.71um. In the simple Nyquist world, that would be the limiting sensor pixel pitch for an OTUS at f/4, would it not?

Meaning that any sensor with a greater pitch than 2.71um will be no match for the mighty OTUS :-)

--
Cheers,
Ted
 
Solution
Meaning that any sensor with a greater pitch than 2.71um will be no match for the mighty OTUS :-)
I have my doubts. I suggest that they give me a copy of the lens so that I can do extensive testing on it.

:-D
 
Meaning that any sensor with a greater pitch than 2.71um will be no match for the mighty OTUS :-)
I have my doubts. I suggest that they give me a copy of the lens so that I can do extensive testing on it.

:-D
Indeed. And I'm waiting for my Sigma SA mount version . . and waiting . . and waiting . .

Meanwhile, for the OP, I figure 2.71um translates to approx 118MP total for a 24x36mm FF sensor.
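For anyone checking the arithmetic, a short sketch reproduces both numbers; the 555 nm (green) wavelength is an assumption of the sketch, since the posts above state no wavelength:

```python
# Check the thread's numbers: Airy disk radius at f/4, and the pixel count
# a 2.71 um pitch implies on a 24 x 36 mm sensor. Assumes 555 nm light.
wavelength_mm = 555e-6   # 555 nm expressed in mm
N = 4.0                  # f-number

airy_radius_mm = 1.22 * wavelength_mm * N
print(f"Airy radius: {airy_radius_mm * 1e3:.2f} um")  # ~2.71 um

pitch_mm = airy_radius_mm
mp = (36.0 / pitch_mm) * (24.0 / pitch_mm) / 1e6
print(f"Full-frame pixels at that pitch: {mp:.0f} MP")  # ~118 MP
```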
 
[snip]

The sensor has discrete resolution, and the lens has analog resolution, as different as apples and oranges. No sensor fully resolves any lens. Both work towards limiting resolution. There is no thresholding involved.

[snip]
That ain't necessarily so. At some level, everything discrete is really analogue, and at another level everything analogue is really quantised. More than that, harmonic analysis unifies the two views.

J.
 
Good point, The_Suede. Any idea what an average optical bench reading might be for the Otus 55?
Agree! Mainly because that was my initial guess :)
And for everyone who is pulling out a calculator 100MP corresponds to about a 3 micron pitch on FF.
Hmmmm ... should it not be 3u on any sensor size for OTIS?
Hmmmm ... an odd question, that. What is meant by 'any sensor size for OTIS'?

Er, should that be 'OTUS', by the way?
 
Good point, The_Suede. Any idea what an average optical bench reading might be for the Otus 55?
Agree! Mainly because that was my initial guess :)
And for everyone who is pulling out a calculator 100MP corresponds to about a 3 micron pitch on FF.
Hmmmm ... should it not be 3u on any sensor size for OTIS?
Hmmmm ... an odd question, that. What is meant by 'any sensor size for OTIS'?
I just thought it was odd to qualify the needed pixel pitch with also the sensor size. The needed pitch should be independent of sensor size.
Er, should that be 'OTUS', by the way?
I assume so :P
 
Good point, The_Suede. Any idea what an average optical bench reading might be for the Otus 55?
Agree! Mainly because that was my initial guess :)
And for everyone who is pulling out a calculator 100MP corresponds to about a 3 micron pitch on FF.
Hmmmm ... should it not be 3u on any sensor size for OTIS?
Hmmmm ... an odd question, that. What is meant by 'any sensor size for OTIS'?
I just thought it was odd to qualify the needed pixel pitch with also the sensor size. The needed pitch should be independent of sensor size.
Thanks Roland, now it is clear and you're quite right.

It is rather gratifying that my earlier post, which postulated 2.71um, came quite close to 'about a 3 micron pitch'.

(smug smirk)
 
Good point, The_Suede. Any idea what an average optical bench reading might be for the Otus 55?
Agree! Mainly because that was my initial guess :)
And for everyone who is pulling out a calculator 100MP corresponds to about a 3 micron pitch on FF.
Hmmmm ... should it not be 3u on any sensor size for OTIS?

Otherwise, I think 3 u sounds like a reasonable pitch.
Sorry Roland, I had missed this question, and I see where the misunderstanding arose. The_Suede had mentioned 100MP in the context of a FF sensor, which works out to about a 3 micron pixel pitch (if he had mentioned 100MP in the context of mFT, that would have implied about a 1.6 micron pitch). But as you say, what counts is the 3 microns, period.
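To make the format dependence concrete, here is a small sketch of the same arithmetic; the 17.3 x 13 mm Four Thirds dimensions are an assumption, and it yields roughly 1.5 microns for mFT, in the same ballpark as the 'about 1.6 micron' figure above:

```python
# Pixel pitch implied by a given pixel count for two sensor formats,
# showing why the ~3 um figure is tied to full frame.
import math

def pitch_um(width_mm: float, height_mm: float, megapixels: float) -> float:
    """Square-pixel pitch in microns for a sensor of the given size."""
    area_mm2 = width_mm * height_mm
    return math.sqrt(area_mm2 / (megapixels * 1e6)) * 1000

print(f"FF  at 100 MP: {pitch_um(36.0, 24.0, 100):.1f} um")  # ~2.9 um
print(f"mFT at 100 MP: {pitch_um(17.3, 13.0, 100):.1f} um")  # ~1.5 um
```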
 
  • How about APS-C sensors with 24 MP and no AA filter? I believe I've read that these sensors present the greatest challenge to the center of premium lenses.
Nikon D7100s are aliasing with consumer-grade lenses.
But could some of that aliasing be from the demosaicing algorithm? Especially since this camera doesn’t have an AA filter?
Does it make a difference if it is from the demosaicing algorithm?

If we don't have better demosaicing algorithms, then I suppose we have to consider the CFA, sensor and demosaicing algorithm as a whole system, and then the question will be how much resolution we need in this system before lenses no longer cause aliasing.
 
There was an article on lenscore about lenses vs. sensors saying the Otus maxes out at close to 100 MP at f/3.7 and 79 MP at f/1.4 on FF.
 
Does it make a difference if it is from the demosaicing algorithm?

If we don't have better demosaicing algorithms, then I suppose we have to consider the CFA, sensor and demosaicing algorithm as a whole system, and then the question will be how much resolution we need in this system before lenses no longer cause aliasing.
Considered as a black box, perhaps not. But this is "Photographic Science and Technology", and when discussing the total performance through analysis of the components, it seems counter-productive to close one's eyes to where aliasing is really introduced.

Aliasing is introduced when you reduce the sample rate (sampling itself being a particular case of reducing the sample rate from infinite to finite). Debayer/demosaic does not reduce the sample rate, thus I can see no sensible mechanism for it to introduce aliasing.

I do agree that for a complete understanding of the end-to-end resolution, you need to include the demosaic (and also screen/print). I guess there are so many non-linear, signal-dependent and proprietary algorithms going on that it is (often) preferable to ignore that part of the pipeline.

-h
 
Does it make a difference if it is from the demosaicing algorithm?

If we don't have better demosaicing algorithms, then I suppose we have to consider the CFA, sensor and demosaicing algorithm as a whole system, and then the question will be how much resolution we need in this system before lenses no longer cause aliasing.
Considered as a black box, perhaps not. But this is "Photographic Science and Technology", and when discussing the total performance through analysis of the components, it seems counter-productive to close one's eyes to where aliasing is really introduced.

Aliasing is introduced when you reduce the sample rate (sampling itself being a particular case of reducing the sample rate from infinite to finite). Debayer/demosaic does not reduce the sample rate, thus I can see no sensible mechanism for it to introduce aliasing.

I do agree that for a complete understanding of the end-to-end resolution, you need to include the demosaic (and also screen/print). I guess there are so many non-linear, signal-dependent and proprietary algorithms going on that it is (often) preferable to ignore that part of the pipeline.
I did some tests on demosaicing using Bruce Lindbloom’s artificial ray-traced image here. I processed the images as if they were taken with an RGGB filter array similar to my Nikon.

Using a nearest-neighbor algorithm does produce strong artifacts that I think ought to be called aliasing, or maybe we can call it color aliasing:

[Images: nearest-neighbor demosaic results]

Note that some algorithms use a similar nearest-neighbor approach but reduce the resolution of the image by half; this is a poor demosaicing method because of the color fringing.

The Bilinear algorithm does better, but produces a softer-looking image:

[Images: bilinear demosaic results]

If I pre-blur the images first, simulating an antialias filter, the results are better even with the lowly Bilinear demosaicing method:



[Images: pre-blurred, then bilinear-demosaiced results]

These are softer but much cleaner, eliminating much of the ‘color aliasing’ or edge color defects, an improvement that cannot be reproduced by processing the digital data after capture.

Now this does not tell us if an antialias filter should always be used, or can be eliminated as is the current trend in cameras. With high-megapixel, small-sensor cameras, are we right to think that lens defects will give us the analog pre-blur needed to avoid Bayer sensor aliasing as seen above? That it doesn’t make sense to blur an already blurry image with an AA filter? Or does the use of lenses like the Otus tell us that AA filters are still needed for the very best work? Certainly they were essential for the large but low-resolution sensors of the old days.

The performance of any demosaicing algorithm depends on where it sits in the image processing chain, and these simulations show it being done after all other processing, which is the best case. Doing it earlier in the processing leads to more color artifacts, which might be partially corrected in real cameras by extra chroma noise reduction, at the cost of additional softening, at least of color.

So I do think that the demosaicing process has a lot of influence on the final resolution delivered by a lens on a given camera and cannot be ignored.
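For readers who want to reproduce this kind of test, here is a compact sketch along the same lines; it is not the poster's actual code, and the RGGB phase, interpolation kernel, and blur sigma are all assumptions chosen for illustration:

```python
# Sketch: mosaic an RGB image with an RGGB Bayer pattern, demosaic it
# bilinearly, and optionally Gaussian pre-blur as an AA-filter stand-in.
import numpy as np
from scipy.ndimage import convolve, gaussian_filter

def mosaic_rggb(rgb):
    """Zero out all but one channel per photosite (RGGB Bayer layout)."""
    h, w, _ = rgb.shape
    mask = np.zeros((h, w, 3), dtype=bool)
    mask[0::2, 0::2, 0] = True  # R
    mask[0::2, 1::2, 1] = True  # G on red rows
    mask[1::2, 0::2, 1] = True  # G on blue rows
    mask[1::2, 1::2, 2] = True  # B
    return np.where(mask, rgb, 0.0), mask

def demosaic_bilinear(mosaic, mask):
    """Fill missing samples in each plane by normalized neighbor averaging."""
    kernel = np.array([[0.25, 0.5, 0.25],
                       [0.50, 1.0, 0.50],
                       [0.25, 0.5, 0.25]])
    out = np.empty_like(mosaic)
    for c in range(3):
        weights = convolve(mask[..., c].astype(float), kernel, mode="mirror")
        filled = convolve(mosaic[..., c], kernel, mode="mirror") / weights
        out[..., c] = np.where(mask[..., c], mosaic[..., c], filled)
    return out

# Hypothetical usage with any float RGB array img scaled to [0, 1]:
# aa_first = demosaic_bilinear(*mosaic_rggb(gaussian_filter(img, (0.7, 0.7, 0))))
# straight = demosaic_bilinear(*mosaic_rggb(img))
```

Comparing `aa_first` and `straight` on a detailed target should show the same trade described above: the pre-blurred version is softer but with far less color aliasing.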

 
I did some tests on demosaicing using Bruce Lindbloom’s artificial ray-traced image here. I processed the images as if they were taken with an RGGB filter array similar to my Nikon.

Using a nearest-neighbor algorithm does produce strong artifacts that I think ought to be called aliasing, or maybe we can call it color aliasing:
I believe that there are artifacts in your raw image (any raw image, really) that should be called aliasing before it ever enters a raw converter.

If there are aliasing artifacts in the raw converter output, how can you tell (visually) that it is caused by the raw converter, and not (as is my belief) created by the CFA and insufficiently suppressed/concealed by the raw converter?

Nearest neighbour is a very "non-Nyquist" reconstruction filter that can cause strong imaging (the reconstruction sibling of aliasing). A bilinear filter should be better in terms of recreating the assumed smooth source waveform.

-h
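Here is a one-dimensional numpy sketch of the imaging -h describes, with all parameters chosen purely for illustration: reconstruct a sampled sine with nearest-neighbour versus linear interpolation and compare the spurious component that appears near the sampling frequency:

```python
# 1-D sketch: nearest-neighbour vs linear reconstruction of a sampled sine,
# measuring the "image" component at (sampling rate - tone frequency).
import numpy as np

fs, f0, up = 16, 3, 16                 # samples/window, tone cycles, upsampling
n = np.arange(fs)
samples = np.sin(2 * np.pi * f0 * n / fs)

t = np.arange(fs * up) / up                      # fine grid, in sample units
nearest = samples[np.round(t).astype(int) % fs]  # sample-and-hold style
linear = np.interp(t, n, samples, period=fs)     # piecewise-linear

for name, y in (("nearest", nearest), ("linear", linear)):
    spec = np.abs(np.fft.rfft(y)) / len(y)
    print(f"{name:7s} tone: {spec[f0]:.3f}  image at fs-f0: {spec[fs - f0]:.3f}")
# Nearest-neighbour leaves a much stronger image near the sampling
# frequency than linear interpolation does, as described above.
```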
 
If there are aliasing artifacts in the raw converter output, how can you tell (visually) that it is caused by the raw converter, and not (as is my belief) created by the CFA and insufficiently supressed/concealed by the raw converter?
If you are still referring to what I wrote earlier in the thread, let me just clarify one thing:

I have not claimed that it is caused by the demosaicing algorithm. I too believe that it is caused by the CFA and that the demosaicing can only try to repair it.
 
I believe that there are artifacts in your raw image (any raw image, really) that should be called aliasing before it ever enters a raw converter.
That’s true. Hence the need for an antialias filter to pre-blur the analog data. Although I’d be interested in seeing a debate on whether such a filter is still needed these days with small, high-megapixel sensors.
If there are aliasing artifacts in the raw converter output, how can you tell (visually) that it is caused by the raw converter, and not (as is my belief) created by the CFA and insufficiently suppressed/concealed by the raw converter?
Well I agree that it is created by the CFA. A strong AA filter and a really smooth demosaicing algorithm will make a final image that is very soft, if clean, when viewed at 100%. But I wonder if using a perfect Nyquist imaging system is acceptable — perhaps some folks would rather get a sharper-looking image even if there are some color artifacts?

These color artifacts do sometimes resemble chromatic aberration, but you can often tell the difference if you just look at the center of the image at something that is in sharp focus.

A poor algorithm, or a lack of pre-blur, might cause aliasing to show up more in the final image. But in practical situations, will that actually show up? With my old Nikon D40, with only 6 megapixels, these demosaicing artifacts were readily apparent and caused lots of extra retouching time if I had to produce a large image for a client. This is hardly an issue with my other Nikons.
Nearest neighbour is a very "non-Nyquist" reconstruction filter that can cause strong imaging (the reconstruction sibling of aliasing). A bilinear filter should be better in terms of recreating the assumed smooth source waveform.
Yes, it is horrible, but it is still being used. Even that algorithm doesn’t do a bad job if the analog input is really blurry, and the Bilinear algorithm (which is not generally thought of as being all that great) is really good too with a blurry lens. I tend to think of the demosaicing algorithm as the last line of defense in squeezing the greatest amount of image quality out of a digital camera: important only if everything else is already excellent.
 
