Details from a telephoto lens?

Where are the super-teles tested on that site? I think the Nikkor 200mm f/2 is the only one and it doesn't support your argument.
A very impressive lens.

But .. a rather normal and cheap 50 mm lens out-resolves it, at all apertures, up to F11.

So ... this monster of a lens is very sharp, almost as sharp as a cheap 50 mm. That is very impressive.

But, this lens hardly proves me wrong. What I said was that the super teles are not the sharpest lenses.

You will have to show other evidence to prove me wrong.
Roger C. at Lens Rentals has tested the Canon 300mm f/2.8 II and the 200-400L. At f/2.8 he gets similar results from the 300mm f/2.8 as any 50mm lens at f/2.8 in the center, and slightly better averaged. The 50mm lenses may do a tiny bit better at f/4 than the 200-400 @ 400mm.
No 35 mm lenses I know of are sharpest at F2.8.
 
But .. a rather normal and cheap 50 mm lens out-resolves it, at all apertures, up to F11.
Which 50mm on the D3X? Are we reading the same numbers? The 200mm peaks at 4076 center, while the best 50mm peaks at 4008. And what about border/corner resolution? Or did you look only at DX?
So ... this monster of a lens is very sharp, almost as sharp as a cheap 50 mm.
Since we are really past the limit of the measuring system (sensor), the numbers may not be precise for either of these lenses. Small differences are not conclusive, particularly given the issues with testing long vs. normal lenses. BTW, what do you think is PZ's margin of error? Copy variation?
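To put a rough number on "past the limit" - a back-of-envelope of my own, assuming the D3X's 6048 x 4032 sensor geometry:

```python
# Back-of-envelope (assuming D3X geometry, 6048 x 4032): MTF50 in lw/ph
# cannot legitimately exceed the sensor's Nyquist limit, which expressed
# in line widths per picture height equals the pixel row count.
rows = 4032                  # assumed D3X vertical pixel count
nyquist_cycles = rows // 2   # max resolvable line pairs per picture height
nyquist_lw_ph = 2 * nyquist_cycles
print(nyquist_lw_ph)         # 4032 - readings around 4000+ sit at or past it
```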
But, this lens hardly proves me wrong. What I said was that the super teles are not the sharpest lenses.
I guess we have to agree on what "sharpest" means vs. aperture and the image field. Certainly I think you can agree that super-teles are among the sharpest lenses from any manufacturer. To say if they are sharpest or not needs better data.
 
My point was only that I believe that the sharpness of super teles is often highly exaggerated.
In whose mind? If they are among the sharpest lenses (over the entire field and wider f-stops), then who is highly exaggerating?
It is actually very hard to make a sharp and fast super tele. That's one reason why they are expensive.
It's also hard and expensive to make a super-sharp 50mm. That's why the Otus is $4k.

--
Erik
 
MirekE wrote:
Super teles are usually the best resolving lenses from a given manufacturer.
Hmmmm ... that is not my experience. Even the most expensive long teles have less resolution than even the cheapest normal lenses.

You can look here http://www.photozone.de

What I think gives you the impression that long teles are sharp is that you get better resolution for far away subjects. But ... that is only the magnifying effect.

It is the same effect as in macro photography. You think it is very sharp because you normally cannot see that detail.
I think what gives me the impression are the MTF charts.



[MTF chart: Canon EF 50/1.4]

[MTF chart: Canon EF 400/2.8L]
 
Erik Magnuson wrote: what do you think is PZ's margin of error?
PZ measures the system response of hardware + software (it demosaics and sharpens images), so I am not sure how relevant their results are when it comes to the sharpness of a lens.

Demosaicing acts as a low-pass filter and by sharpening one can make MTF50 values come up as one wants simply by tweaking a couple of key parameters. A bit like taking a driving test after having polished off a bottle of wine and chugged five Grandes to attempt to counteract the wine's effects: the most predictable result is a trip to the loo :-)
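To make that concrete, here is a toy model (mine, not PZ's actual pipeline): take a Gaussian lens+sensor blur, multiply its MTF by the frequency response of a simple 3-tap unsharp-mask kernel, and watch where MTF50 lands:

```python
import numpy as np

def gaussian_mtf(f, sigma):
    # MTF of a Gaussian PSF with std dev sigma (pixels); f in cycles/pixel
    return np.exp(-2 * (np.pi * f * sigma) ** 2)

def usm_gain(f, amount):
    # Frequency response of a 3-tap high-boost kernel [-a, 1+2a, -a]:
    # unity gain at DC, rising toward Nyquist
    return (1 + 2 * amount) - 2 * amount * np.cos(2 * np.pi * f)

def mtf50(f, mtf):
    # first frequency where the curve falls below 0.5
    return f[np.argmax(mtf < 0.5)]

f = np.linspace(0.001, 0.5, 2000)          # out to Nyquist (0.5 cycles/pixel)
plain = gaussian_mtf(f, sigma=0.8)         # unsharpened system response
boosted = plain * usm_gain(f, amount=0.5)  # same "lens", default-ish sharpening

print(f"MTF50 unsharpened: {mtf50(f, plain):.3f} cy/px")
print(f"MTF50 sharpened:   {mtf50(f, boosted):.3f} cy/px")
```

Same optics in both cases, yet the reported MTF50 jumps by roughly half just from the sharpening amount.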

Other sites, like Lenstip.com, on the other hand attempt to minimize the effects of software, resulting in more reliable estimates as far as the hardware is concerned - assuming a representative sample and no operator error, of course. Just make sure that you compare lenses tested on the same camera.

Jack
 
Erik Magnuson wrote: what do you think is PZ's margin of error?
PZ measures the system response of hardware + software (it demosaics and sharpens images), so I am not sure how relevant their results are when it comes to the sharpness of a lens.
Most of the cameras used for testing also have AA filters.
Demosaicing acts as a low-pass filter and by sharpening one can make MTF50 values come up as one wants simply by tweaking a couple of key parameters.
This might be a significant concern if
  • the sharpening applied was different for each lens for a given system
  • the sharpening applied was unrealistic to how photographers typically process their files
PZ uses ACR default capture sharpening, so neither of these applies.
Just make sure that you compare lenses tested on the same camera.
Which ends up being the same restriction as PZ.

--
Erik
 
Erik Magnuson wrote: what do you think is PZ's margin of error?
PZ measures the system response of hardware + software (it demosaics and sharpens images), so I am not sure how relevant their results are when it comes to the sharpness of a lens.
Most of the cameras used for testing also have AA filters.
Yes, and differing pixel pitches, hence the suggestion to compare lenses tested on the same camera. The key imho is to try to assess the physical characteristics of the hardware separately from the processing - since the processing can produce arbitrary results.
Demosaicing acts as a low-pass filter and by sharpening one can make MTF50 values come up as one wants simply by tweaking a couple of key parameters.
This might be a significant concern if
  • the sharpening applied was different for each lens for a given system
  • the sharpening applied was unrealistic to how photographers typically process their files
PZ uses ACR default capture sharpening, so neither of these applies.
They do to my way of thinking. Plus PZ demosaics the image data, effectively performing an upsizing operation on the raw information. It's a bit like trying to assess the quality of the meat used in a Big Mac by eating the whole, dressed up burger drenched in pickles and sauces to (their) taste. It's not a coincidence that they sometimes get impossible values (e.g. lw/ph > actual number of rows on sensor).
Just make sure that you compare lenses tested on the same camera.
Which ends up being the same restriction as PZ.
Not quite, Lenstip attempts to isolate the hardware. PZ = Big Mac. Lenstip = mostly the beef.
 
Erik Magnuson wrote: what do you think is PZ's margin of error?
PZ measures the system response of hardware + software (it demosaics and sharpens images), so I am not sure how relevant their results are when it comes to the sharpness of a lens.
Another thing is that these tests are done at relatively short shooting distance and do not reliably describe infinity performance. This heavily penalizes lenses that are great at infinity, but not great at MFD (case in point: the current Zeiss Planar 50/1.4). It also penalizes some wide angle lenses that have field curvature at short distances.

On the other hand, having information on behavior of the whole system can be sometimes revealing. For example, there are some excellent wide angle lenses for Leica M system. These lenses behave well on Leica, but give blurry corners on some other cameras like Sony due to the different sensor design. MTF charts won't tell the whole story here.
 
Another thing is that these tests are done at relatively short shooting distance and do not reliably describe infinity performance.
How short is "relatively short"? Look at the end of the page here:

http://www.imatest.com/2013/10/no-perfect-lens-no-perfect-lens-test/

In addition to issues with hitting the limits of the sensor, we also have to worry about hitting the limits of the test charts:

http://www.imatest.com/2013/09/transmissive-chart-quality-comparison/

http://www.imatest.com/2013/11/reflective-chart-quality-comparisoninkjet-vs-photographic/
 
Yes, and differing pixel pitches, hence the suggestion to compare lenses tested on the same camera. The key imho is to try to assess the physical characteristics of the hardware separately from the processing - since the processing can produce arbitrary results.
Compared to the other sources of variation, this is small potatoes. PZ data corresponds to real world experience as well (or as poorly) as the other sites. MTF50 itself is only a simple proxy for what most people look for in a lens.
Which ends up being the same restriction as PZ.
Not quite, Lenstip attempts to isolate the hardware.
They still use dcraw to demosaic -- IIRC, the non-demosaic options for Imatest were not available when LT started testing. They would have to invalidate or recalculate the prior results to switch methods.
 
How short is "relatively short"? Look at the end of the page here:

http://www.imatest.com/2013/10/no-perfect-lens-no-perfect-lens-test/
Thanks for the link. I find Roger's articles are always worth reading.
In addition to issues with hitting the limits of the sensor, we also have to worry about hitting the limits of the test charts:

http://www.imatest.com/2013/09/transmissive-chart-quality-comparison/

http://www.imatest.com/2013/11/reflective-chart-quality-comparisoninkjet-vs-photographic/
Thanks, I was always wondering about that.
 
Yes, and differing pixel pitches, hence the suggestion to compare lenses tested on the same camera. The key imho is to try to assess the physical characteristics of the hardware separately from the processing - since the processing can produce arbitrary results.
Compared to the other sources of variation, this is small potatoes. PZ data corresponds to real world experience as well (or as poorly) as the other sites. MTF50 itself is only a simple proxy for what most people look for in a lens.
Which ends up being the same restriction as PZ.
Not quite, Lenstip attempts to isolate the hardware.
They still use dcraw to demosaic -- IIRC, the non-demosaic options for Imatest were not available when LT started testing. They would have to invalidate or recalculate the prior results to switch methods.
OK, I can tell you are a smart guy who understands the compromises involved, so if McDonald's is your thing, go right ahead.

On the other hand don't expect some of the more discerning diners to follow in your footsteps. PZ's margin of error (the question you asked and that I responded to) is imo larger than that found in some other, better sources.

Jack
 
On the other hand don't expect some of the more discerning diners to follow in your footsteps.
I'm not asking any such thing. For numeric lens tests of super-teles, we have very limited data. That data neither busts nor confirms the myth; it remains plausible.
PZ's margin of error (the question you asked and that I responded to) is imo larger than that found in some other, better sources.
If you have a better source of data for the lenses in question, by all means bring it to our attention.
 
Jack Hogan wrote: PZ's margin of error (the question you asked and that I responded to) is imo larger than that found in some other, better sources.
If you have a better source of data for the lenses in question, by all means bring it to our attention.
Our? Yours you mean, since anybody who has been paying attention already knows what my criteria for better lens testing sites are: more beef, less sauce and pickles. Or put another way: stick with sites that try to minimize processing and stay away from those whose MTF50 readings exceed the laws of physics.

Cheers,

Jack
 
Our? Yours you mean, since anybody who has been paying attention already knows what my criteria for better lens testing sites are: more beef, less sauce and pickles.
And those sites have actually tested some super-tele lenses? That's why I asked if you know of some relevant data. Of course relevance is not important when you only want to rant on your pet peeve.
 
Let's simplify and say we have a diffraction-limited lens and a perfectly round aperture.

Then ....
  1. ... the resolution in arc sec of the subject, i.e. what you can resolve of the subject at the same distance, is proportional to the inverse of the diameter (in mm or whatever) of the aperture, no matter what focal length you have.
  2. ... the resolution in the film/sensor plane (in lines per mm or whatever) is proportional to the inverse of the F-number, no matter what focal length you have.
  3. ... the resolution in the film/sensor plane (in lines per image height, for a given FOV) is directly proportional to the diameter (in mm or whatever), no matter what focal length (and therefore sensor size) you have.
Hmmmm ... hope I got that right now :)

So ... if you want to resolve details on the moon, you need a big lens or mirror.

So ... if you want to make high resolution images you also need a big lens ... or do some stitching.

But ... if it is high resolution (in lines per mm) you want, you have a problem. You need a low F-stop value to beat diffraction, but a low F-stop makes it very hard to make sharp lenses, for optical design reasons.
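A quick numeric sanity check of the three relations above (my own sketch, using the Rayleigh criterion 1.22·λ/D for the subject side and the common 1/(λ·N) diffraction-cutoff approximation in the sensor plane - the exact constants matter less than the scaling):

```python
import math

LAMBDA_MM = 550e-6  # green light, 550 nm expressed in mm

def angular_res_arcsec(aperture_mm):
    # (1) subject-side resolution (Rayleigh): set by aperture diameter alone
    theta = 1.22 * LAMBDA_MM / aperture_mm          # radians
    return math.degrees(theta) * 3600

def cutoff_lp_mm(f_number):
    # (2) sensor-plane diffraction cutoff: set by the f-number alone
    return 1.0 / (LAMBDA_MM * f_number)

def spots_per_fov(aperture_mm, fov_deg):
    # (3) resolvable spots across a fixed FOV: proportional to the diameter
    return fov_deg * 3600 / angular_res_arcsec(aperture_mm)

# same f-number, different focal lengths -> same lp/mm, very different reach
for focal_mm in (50, 400):
    d = focal_mm / 2.8
    print(f"{focal_mm}mm f/2.8: D = {d:.0f} mm, "
          f"{angular_res_arcsec(d):.2f} arcsec, "
          f"{cutoff_lp_mm(2.8):.0f} lp/mm, "
          f"{spots_per_fov(d, 10):.0f} spots over a 10 deg FOV")
```

Same f-number means the same lp/mm ceiling in the sensor plane, while the 8x bigger aperture buys roughly 8x the angular resolution and 8x the detail over a fixed FOV.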
 
Our? Yours you mean, since anybody who has been paying attention already knows what my criteria for better lens testing sites are: more beef, less sauce and pickles.
And those sites have actually tested some super-tele lenses? That's why I asked if you know of some relevant data. Of course relevance is not important when you only want to rant on your pet peeve.
Now, now, Erik, no need to get nasty and shoot the messenger. Physics is physics, and the D3X's sensor only has 4032 rows.
Which 50mm on the D3X? [...] The 200mm peaks at 4076 center, while the best 50mm peaks at 4008.
With the criteria mentioned, anyone could look up the data themselves. But you are making me work, so here for instance is the Canon EF200 f/2L, which by any account is comparable to the Nikon you were referring to: MTF50 of 45 lp/mm on the 21.5MP 1DsMkIII in the center, or 2160 lw/ph. Similar results on the EF300 f/2.8L.

And here is the Canon EF50 F/1.2L, another outstanding lens on the same camera, doing just a little better: close to 2200 lw/ph.

With the 1DsMkIII's sensor having 3744 rows and an old-school AA filter, and the lenses - as good as they are - having their aberrations, these are more credible results than the unbelievable, better-than-perfect ones produced earlier.

And the reason is clear: PZ's results are unbelievable because they use a totally uncontrolled raw converter with unknown, arbitrary default parameters which are in fact known to change from hardware to hardware and from version to version.

Which is why, when trying to evaluate the physical characteristics of the hardware, it is best to minimize uncontrolled variables, including the processing: having established the hardware's properties for the given setup as objectively as possible, a photographer can be confident that the results can be made even more pleasing through subjective conversion and processing later on.

Minimal Processing. Uncontrolled everything else, handle with care, operator error lurking everywhere. Nothing around 4000 lw/ph here.

But you knew all that.

Jack
 
Someone here posted that he did measurements with 5-degree slanted edges and raw images - i.e., no problems with converters, plus the sub-pixel resolution you can get with slanted edges.

Just a thought, if you want to use an AA-filter-free sensor for high-resolution measurements of lenses.
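For the curious, a bare-bones sketch of that slanted-edge idea on raw (un-demosaiced) data - a simplification of the standard ESF -> LSF -> FFT method, nowhere near an Imatest-grade implementation. The 5-degree slant makes each row sample the edge at a slightly different phase, which is where the sub-pixel oversampling comes from:

```python
import numpy as np

def slanted_edge_mtf(roi, edge_angle_deg, oversample=4):
    # roi: 2-D array of raw pixel values straddling a near-vertical edge
    rows, cols = roi.shape
    tan_a = np.tan(np.radians(edge_angle_deg))
    # signed distance of every pixel from the (assumed centered) edge line
    y, x = np.mgrid[0:rows, 0:cols]
    dist = (x - cols / 2) - (y - rows / 2) * tan_a
    # bin pixels into sub-pixel slots: the slant spreads samples across
    # phases, giving an oversampled edge spread function (ESF)
    idx = np.round((dist - dist.min()) * oversample).astype(int)
    counts = np.bincount(idx.ravel())
    esf = np.bincount(idx.ravel(), weights=roi.ravel()) / np.maximum(counts, 1)
    lsf = np.diff(esf) * np.hanning(len(esf) - 1)   # derivative + window
    mtf = np.abs(np.fft.rfft(lsf))
    freqs = np.fft.rfftfreq(len(lsf), d=1.0 / oversample)  # cycles/pixel
    return freqs, mtf / mtf[0]                       # normalize DC to 1
```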
 
