Diffraction discussion continued

Started Jul 27, 2013 | Discussions thread
The_Suede Contributing Member • Posts: 644
Re: Diffraction + sensor MTF graphs, MTF50 limits

Bobn2 wrote:

Posted on behalf of Detail Man

It would be helpful if "The_Suede" were to also factor in optical ("AA") filter responses of various strengths. (At least from what I can find) nobody seems to be able to state with certainty that the D800E is entirely bereft of any optical ("AA") filter (it may have a lighter, but nevertheless finite-"strength", optical ("AA") filter).

In the numbers I've included a weak one (~0.15 px quad-point PSF), though it isn't in the sensor MTF graph - or in the "combined" results.

The D800 has an AA filter of ~0.35 px p-p quad nature, with the vertical spread slightly stronger than the horizontal.

The D800E has exactly the same material base with the same material thickness - but the second birefringent layer is cut at a different orientation, so that it roughly acts as an inversion of the first 2-point spread. There is a residual error, though, on the scale of 0.07-0.1 px PSF (Gaussian). This is partly material imperfections, partly surface imperfections and partly slight alignment errors.

But the largest detractor remains (and this is the main reason I see the D800E as "not necessary"), even though the production idea behind it (keeping all mounting machines and distance calibrations the same as for the non-"E" model) is sound:

-You STILL get the fairly large angle variance, AND the large-scale MTF hit! Firstly, the rather large thickness of the layers affects both spherical aberration and image-edge angle variance (astigmatism). The SA hit scales with angle^2, so the corners of the image, where the chief ray angles are larger, suffer quite badly. The larger-scale MTF hit is probably down to material and surface imperfections. In cameras with the layers totally removed, the MTF at 1/3-1/10 of Nyquist gets a lot better (a lot = a 10% relative increase). But then the focus of the camera is totally shot; you'd need to move the sensor 0.3-0.5 mm forwards to offset the missing filter layers. And there just isn't room to do that within the sensor mount...

When running calculations for my modelling (including optical ("AA") assemblies), I found that once diffraction MTF extinction begins to limit the composite system MTFs, the presence - and the relative strength - of an optical ("AA") filter makes a significant difference in comparisons between systems with photosites of differing dimensions, even when the filter strength is the same relative to the individual photosite dimensions. The higher the "strength" of the optical ("AA") filters used, the greater the advantage the smaller-photosite system exhibited; the relative advantage of smaller photosites is least prominent when no optical ("AA") filter is present in the model.

Thus, when examining how diffraction extinction affects the composite system spatial frequency (MTF) responses of systems with differing photosite dimensions and attached optical ("AA") filter assemblies, it appears important to factor the response of any such filter into the models used for running calculations.

Of course! The average AA strength used (pretty equal for all manufacturers) is between 0.3 and 0.4 px in c-p spread. That is, you shift a "ghost" image - identical to the "real" image and carrying (ideally, but almost never in practice) exactly 50% of the light energy - by 2*0.3 = 0.6 px heightways, then you do the same for width. What you aim for is to move the first NULL of the MTF to about 0.65f, 30% over Nyquist. This does the minimum possible contrast damage while still correcting all but the sharpest lenses and settings for aliasing and moire.
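The relationship between spot separation and the first MTF null can be sketched numerically. Assuming an ideal 50/50 two-spot split, the 1-D MTF is |cos(pi * d * f)|, with d the total ghost separation in pixels and f in cycles/pixel, so the first null falls at f = 1/(2d). The numbers below (0.35 px c-p spread) are illustrative assumptions, not measured values:

```python
import math

def aa_mtf(f_cyc_per_px, separation_px):
    """Ideal two-spot birefringent AA filter MTF (1-D).
    f in cycles/pixel, separation = total ghost displacement in pixels."""
    return abs(math.cos(math.pi * separation_px * f_cyc_per_px))

sep = 2 * 0.35                       # 0.35 px c-p spread -> 0.7 px separation
first_null = 1.0 / (2.0 * sep)
print(f"first null: {first_null:.2f} cy/px ({first_null / 0.5:.0%} of Nyquist)")

# Contrast retained at coarse vs. pixel-level detail scales
print(f"4-px detail (0.125 cy/px): MTF = {aa_mtf(0.125, sep):.2f}")
print(f"1-px detail (0.500 cy/px): MTF = {aa_mtf(0.500, sep):.2f}")
```

This also illustrates the "limited bandwidth" point made further down: detail at a 4-pixel scale keeps nearly all its contrast while pixel-scale detail loses roughly half.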

Looking at the resulting PSF, it looks like this:

[image missing - left: the pixel aperture at the ML-layer surface; right: the projection of that aperture at the top AA layer (lens side)]

The good thing about birefringent filters is that they have a very limited bandwidth. They have virtually NO impact (except for material imperfections!) on frequencies below their spread factor. So detail at a 4-pixel-wide scale retains almost all of its contrast, while detail at a 1-pixel scale loses a significant amount. But you gain several decades of chroma and orientation accuracy; that is, a more than tenfold increase in p-p detail accuracy is what you get as compensation for a halving of p-p contrast.

The AA filters drop the MTF at Nyquist (0.5f) to about one third of its "perfect" value. This is largely valid for most cameras I know. And they have very close to zero effect on the 40 lp/mm MTF value at the center of the field in modern cameras. But cheaper quartz-based AA filters tend to blur image edges considerably - for reasons other than their inherent birefringence, but still. Look at the NEX7 for instance - it HATES symmetrical wide angles, due to the way-too-thick filter package it sports.
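A simplified composite model helps make the earlier point about diffraction extinction and pixel size concrete: multiply the diffraction MTF of an ideal circular aperture, the pixel-aperture MTF, and the two-spot AA MTF. The f-number, wavelength, pitches and AA spread below are illustrative assumptions, not the parameters actually used in the thread's graphs:

```python
import numpy as np

def diffraction_mtf(nu, wavelength_mm, f_number):
    """Diffraction MTF of an ideal (aberration-free) circular aperture.
    nu: spatial frequency in cycles/mm."""
    nu_c = 1.0 / (wavelength_mm * f_number)   # extinction (cutoff) frequency
    s = np.clip(nu / nu_c, 0.0, 1.0)
    return (2.0 / np.pi) * (np.arccos(s) - s * np.sqrt(1.0 - s * s))

def pixel_mtf(nu, pitch_mm):
    """MTF of a square, 100%-fill pixel aperture."""
    return np.abs(np.sinc(nu * pitch_mm))     # np.sinc(x) = sin(pi x)/(pi x)

def aa_mtf(nu, spread_px, pitch_mm):
    """Two-spot birefringent AA filter, ghost separation 2*spread_px pixels."""
    d = 2.0 * spread_px * pitch_mm            # total separation in mm
    return np.abs(np.cos(np.pi * d * nu))

# Hypothetical comparison: 6 um vs 4 um pixels, f/8, 550 nm, 0.35 px spread
for pitch_um in (6.0, 4.0):
    p = pitch_um / 1000.0
    nyquist = 1.0 / (2.0 * p)                 # cycles/mm
    total = (diffraction_mtf(nyquist, 550e-6, 8.0)
             * pixel_mtf(nyquist, p)
             * aa_mtf(nyquist, 0.35, p))
    print(f"{pitch_um} um pixel: Nyquist = {nyquist:.0f} lp/mm, MTF = {total:.3f}")
```

Dropping the `aa_mtf` factor from the product is one way to see, under these assumptions, how much of the at-Nyquist contrast the AA filter alone costs.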

Iliah Borg also raised the relevant issue of what the de-mosaicing algorithms for Bayer-arrayed, color-filtered image sensors do (in terms of spatial frequency analysis of the composite system MTF). While that would seem to buy a "fudge factor" of (at least) a factor of 2 in the F-ratio (at a given wavelength) at which diffraction extinction begins to limit the upper spatial frequency response, it also would seem to make the size of a single pixel value interpolated from photosites (at least) 2 times the (effective) physical size of the individual photosite dimensions.
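That "factor of 2" intuition can be made concrete by looking at the per-channel sample pitches on a Bayer grid: red and blue are each sampled once per 2x2 block, and green sits on a quincunx whose nearest-neighbour spacing is sqrt(2) pixels. A minimal sketch (idealized geometry, ignoring the demosaic algorithm itself):

```python
import math

pixel_pitch = 1.0  # in pixel units

# Per-channel sample spacing on an idealized Bayer grid
channels = {
    "full grid": pixel_pitch,                 # every photosite
    "green":     pixel_pitch * math.sqrt(2),  # quincunx nearest-neighbour spacing
    "red/blue":  pixel_pitch * 2.0,           # one sample per 2x2 block
}

for name, spacing in channels.items():
    nyquist = 1.0 / (2.0 * spacing)           # cycles per pixel
    print(f"{name:9s}: spacing {spacing:.2f} px, Nyquist {nyquist:.3f} cy/px")
```

This is only a conservative per-channel bound; actual demosaic algorithms exploit cross-channel correlation, which is precisely why their behaviour is hard to capture in a linear MTF model.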

From the little that I have read about de-mosaicing algorithms, only the very crudest schemes interpolate over 2x2 arrays alone. The fancier ones appear to take more surrounding photosites into account - which would further enlarge the (effective) "photosite aperture". If the algorithms are at all adaptive (and thus non-linear), then it would seem that frequency-domain analysis (applicable only to linear systems with constant coefficients) could not easily be employed in modelling for analysis.

One other thing that I wonder about is what, in particular, is happening when all of the various plots in these diagrams:
http://3.static.img-dpreview.com/files/t/TS560x560~5b33129ad9374b988e8c3204c27f968c http://3.static.img-dpreview.com/files/t/TS560x560~90c0d22361584a78be46a2ef0286672f ... suddenly take a precipitous, linear (but not straight-vertical) "dive" in MTF magnitude at some point along the horizontal X-axis in this post:


Perhaps the computed data values just stopped at those points - and the large negative slope shown is just a way of indicating such a situation?

DM ...



The drop-off is where the resolution of that specific pixel-size sensor "ends". I didn't include spurious resolution, so there is no need to show the MTF for it. The graph lines end at the lp/mm that the pixel size entails.

Most normal demosaic routines need a PSF main lobe of about sqrt(2) * pixel width to be able to get accurate pixel-level results - like strands of hair crossing each other, very small details in landscape shots, and so on.

The support area around the values you want to "guess" in the demosaic process is of course very important, but when the support CONTAINS no clues about that detail (since it is too small) - what do you do then?
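That failure mode - support containing no clue at all - is easy to demonstrate with a toy example. Below, a hypothetical 1-pixel-wide white line lands on a column of an RGGB mosaic that carries no red samples, so the red plane records literally nothing about it (the grid layout here is an illustrative assumption):

```python
import numpy as np

# A 1-pixel-wide vertical line on a 6x6 scene
h, w = 6, 6
scene = np.zeros((h, w))
scene[:, 3] = 1.0                        # line at column 3

# RGGB mosaic: red photosites at even rows, even columns
red_mask = np.zeros((h, w), dtype=bool)
red_mask[0::2, 0::2] = True

red_samples = scene * red_mask
# Column 3 is odd, so no red photosite ever sees the line:
print("red energy recorded from the line:", red_samples[:, 3].sum())  # -> 0.0
```

Whatever support window the demosaicer uses, the red value on that line can only be guessed from neighbouring columns; the detail itself left no trace in that channel.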
