Fermy: Essentially, if one wants to be scientific (or really anal), it's very easy to shoot down not only cross-format comparisons but also comparisons within the same format. Differences in sensor resolution, AA filters, different processing and so forth will do that.
Agreed, and Jonas B is all over that topic as well. By the way, Fermy, I'm not against looking at numbers and mulling them over; my concern is with our pop-culture engagement with spurious or imaginary precision, right down to thinking that Olympic runner #1 is in any profound sense "faster" than runner #2 because, at some certain event, runner #1 crossed the 1-mile finish line 0.01 seconds earlier.
Hey, when Eric Tastad on ERphotoReview posts that some Pentax 110 lens resolves 750 line pairs per picture height (LPPH) at the edge, and the Sony ZA 24mm/F2 on an Alpha DSLR resolves 1500 LPPH according to DPreview, I know the Pentax lens on a NEX-5 is not a pack leader, and I know it from looking at exactly the kind of cross-camera numbers I have just finished criticizing.
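(For reference, LPPH figures can be put on a roughly common footing by dividing by the sensor's picture height in millimeters to get line pairs per millimeter. A quick sketch below; the sensor heights are my own nominal assumptions, not numbers from either review:)

```python
# Convert "line pairs per picture height" (LPPH) to lp/mm by dividing by
# sensor height. Sensor heights here are nominal assumed values, not test data.
SENSOR_HEIGHT_MM = {"aps_c": 15.6, "micro_four_thirds": 13.0}

def lpph_to_lpmm(lpph: float, sensor_height_mm: float) -> float:
    return lpph / sensor_height_mm

# The two edge readings quoted above, both on APS-C bodies:
print(lpph_to_lpmm(750, SENSOR_HEIGHT_MM["aps_c"]))   # ~48 lp/mm
print(lpph_to_lpmm(1500, SENSOR_HEIGHT_MM["aps_c"]))  # ~96 lp/mm
# Note: the same LPPH number on a Micro Four Thirds body would imply a higher
# lp/mm demand on the lens, since the picture height is smaller.
```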
So the thing I intend to make fun of is overly detailed scrutiny of single-increment cross-camera differences. By definition, testing institutions come up with rating scales whose increments are slightly too fine, just to make sure that any given rating isn't too far off from the raw data. Thus it seems silly to say "I'm not gonna buy a lens because it scores half a blur unit less than some other lens." Or to talk about how incompetent a NEX camera maker must be because of some tiny incremental score difference with some other format and camera, while ignoring subjective impressions of the equipment, ignoring test images, etc.
fermy: What's more, so far not a single logical argument has been put forward showing that comparisons between any two lenses on different Canon APS-C cameras are any more valid than between the Pany 20mm on an E-PL1 and the Zeiss 24mm on a NEX-5N.
Agreed, since we don't know about firmware differences (hey, SLRgear uses JPEGs; maybe LensTip does too?), anti-aliasing (AA) filter and other sensor-surface treatment differences, etc.
fermy: To me the real question is not whether one can make cross-system comparisons with SLRgear data, but what would be a reasonable way of doing so.
Ideas welcome. My hat in the ring is to look at the LensTip.com resolution-page data for an easy-sipping slug of numbers. Having studied their testing logic, I've begun simply comparing lenses' line-pairs-per-millimeter scores (for lenses that don't have much lateral chromatic aberration, i.e. color fringing) at F16 on LensTip versus their peak score. If a lens's F16 LensTip resolution score is about as high as its peak score, say the peak res is only 1.1 times the F16 res, it's not a very sharp lens. If the F16 LensTip score is 30 line pairs per millimeter and the peak score is 1.8 times that, 54 line pairs per millimeter, dude, that lens is in the fast lane.
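To put the arithmetic in one place, here's a minimal Python sketch of that check; the readings are the hypothetical example numbers from above, not any real lens's data:

```python
# Hypothetical LensTip-style center readings, in lp/mm, keyed by aperture.
readings_lp_mm = {"f2.8": 54.0, "f4": 52.0, "f8": 45.0, "f16": 30.0}

peak = max(readings_lp_mm.values())
ratio = peak / readings_lp_mm["f16"]
print(f"peak {peak} lp/mm vs F16 {readings_lp_mm['f16']} lp/mm -> ratio {ratio:.1f}")
# ratio 1.8 here: "fast lane" by the rule of thumb above; a ratio near 1.1
# would mean the lens never gets much sharper than it is stopped down to F16.
```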
There's no need to single out LensTip for this data; DPreview and lots of other rating systems could supply it too. I just find the LensTip graphs real quick to glance at. And we're not talking here about whether a lens is to be prized for some peak F5.6 central reading, or prized for some lesser reading that holds up at all frame positions and apertures. Endless black holes of debate are possible there.
Anyway, the suggestion is to take the ratios between F16 central resolution and the resolutions at other apertures, for lenses not showing much lateral color fringing, as a quality index that can be used across camera formats. A peak/F16 ratio of 2 is super, and a ratio of 1.1 is fairly lifeless.
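Wrapped up as a tiny function, with those 2.0 "super" and 1.1 "lifeless" anchors as crude cutoffs (the function name and the in-between label are mine, just a sketch):

```python
def peak_to_f16_index(readings_lp_mm: dict[str, float]) -> tuple[float, str]:
    """Rough cross-format quality index: peak center lp/mm divided by F16 lp/mm.

    Only meaningful for lenses without strong lateral CA, per the caveat above.
    """
    ratio = max(readings_lp_mm.values()) / readings_lp_mm["f16"]
    if ratio >= 2.0:
        return ratio, "super"
    if ratio <= 1.1:
        return ratio, "fairly lifeless"
    return ratio, "somewhere in between"

print(peak_to_f16_index({"f2.8": 54.0, "f4": 52.0, "f8": 45.0, "f16": 30.0}))
# (1.8, 'somewhere in between')
```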
P.S. I'd offer that when someone wants to look at numbers instead of test images to decide between lenses, that is such an inherently overly-"scientific" or overly-"anal" activity... that all arguments about the validity of the activity inherently sound overly scientific or anal too.
Pretty sure this was a 28mm/F4 non-multicoated Rodagon enlarging lens at around F8, which just barely covers APS-C.