Modern Sensors Missing half the AA: in the Horizontal or the Vertical direction?

Is there some way of reflecting light off a camera sensor in situ which could test whether there's only half the AA beam splitting there? Using polarised light? Getting a tiny light source inside the camera close to the sensor? Or do we have to wait until someone can take the sensor out of a smashed camera?
There is a "depolarizer" plate between the first beam splitter of the AA and the second beam splitter of the AA (in the case of a normal 4-way splitter design). If I had to guess, I would say that they would still use the depolarizer even if they remove the one beam splitter, otherwise you will have polarized light arriving at the microlenses / CFA, which may nor may not matter.

So I would guess that you cannot verify the number of beam splitters by injecting polarized light from outside the camera.

I vote for the smashed camera option.
 
Am I correct in assuming that MTFMapper uses a maximum of 400 pixels even if it is fed more?
Whoops! Missed this one in my previous answer.

Your assumption is correct. MTF Mapper will only use the centre-most 400 pixels, even if more are provided. It will still use those other pixels during edge location and orientation estimation, so there is some (minor) benefit in providing longer edges, but you have to watch out for lens distortion. If the edge is strongly curved, it would be better to crop out a smaller, but straighter, middle section before passing the image to MTF Mapper.
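(If you want to do that cropping programmatically, here is a minimal numpy sketch — a hypothetical helper, not part of MTF Mapper:)

import numpy as np

def centre_crop_rows(roi, keep=400):
    """Keep only the central 'keep' rows of a 2D edge ROI, e.g. to
    discard the curved outer portions of a long edge."""
    h = roi.shape[0]
    top = max(0, (h - keep) // 2)
    return roi[top:top + keep, :]

crop = centre_crop_rows(np.zeros((660, 220)), keep=400)  # 660x220 ROI -> 400x220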
 
xpatUSA wrote: The image was exported with a gamma curve of 2.2 which in QuickMTF is entered as 1/2.2 = 0.45. Messing with it makes little difference to the results, I've found.
I wrote hogwash there (fresh from using RawDigger for a different purpose!) Please ignore.
Right. But in this case with the switches below we are feeding a raw (gamma 1.0) image to QuickMTF while it is set up to expect a gamma 2.2 one. That's quite a non-linear transformation, so results may be affected.
Yes, results were affected (blush).
Also, your ROI seems a bit small - I am feeding it most of the edges for improved accuracy (660x220)
Noted. Is my ROI too small?
I am not sure how QuickMTF works, so to be safe I would go big.
QuickMTF example plots here show ROIs of 170x129 and 148x79, from which I deduce that my ROIs were big enough.

Anyhow . . . .

I repeated the test w/dcraw 9.19, quickMTF gamma=1, green channel and got:

MTF50: 0.252 cy/px on the up-and-down edge, 0.220 cy/px on the left-to-right edge.

10-90% rise: 2.06 px and 2.30 px respectively (as you know, the edge response is the basis for the MTF plot).

Although QuickMTF warns about the slant angles of 9.01 and 9.41 deg respectively (it prefers 5 deg), it declares the accuracy of the result as 100%.

--
"Engage Brain before operating Keyboard!"
Ted
 
This may be also of interest.

I looked at the differences between the D600, D610, D800 and D800E.

Too bad DxO has tested only three lenses with the D800E, and all of them are third-party zooms. I picked the Tamron 70-200/2.8 from those, as it is known to be a sharp lens.

Tamron 70-200/2.8 with D600 (click on Measurements tab/Sharpness/P-Mpix)
Tamron 70-200/2.8 with D610 (click on Measurements tab/Sharpness/P-Mpix)
Tamron 70-200/2.8 with D800 (click on Measurements tab/Sharpness/P-Mpix)
Tamron 70-200/2.8 with D800E (click on Measurements tab/Sharpness/P-Mpix)

You can see that the D600 has the worst sharpness of the four, the D610 and D800 are the same, and the D800E has the best sharpness.
The fact that the D610 and D800 are the same while the other two differ suggests that this is related to the AA filter.

The relative difference between the D600 and D610 is smaller than the relative difference between the D800 and D800E. That could be explained by an AA filter acting in a single dimension only in the D610, as suggested by Optyczne and in this thread.
 
Hi Frans,

I apologize in advance for the ridiculous number of questions below. As you can tell I am quite interested in this subject these days. Any help would be greatly appreciated :-)
Ok, so I claimed that using an edge of length 100 pixels with the --bayer green option is potentially better than using a grayscale image with an edge length of 50 pixels, because the longer edge in the --bayer green case would boost the accuracy of the edge orientation estimate.

So I quickly simulated 1000 measurements using the airy-box PSF at an aperture of f/8. The baseline case ("base") is a square with a side length of 100 pixels, rendered at 550 nm wavelength. Gaussian noise with an SD of 0.03 (fairly noisy) was added. Here is a sample:

[attached image: sample of the rendered noisy edge]

The control case ("short") is the same set-up, but with the square side length reduced to 50 pixels.

Next, the case we would like to measure ("mosaic green"): three images were rendered at 450, 550 and 630 nm wavelengths at a side length of 100 pixels, and combined into a Bayer mosaic. MTF mapper was run with the --bayer green option, so the effective number of samples (photosites) used was only 50%, or roughly the same number as the "short" case.

The last case is called "mosaic WB", and represents Jack's method of performing white balancing directly on the mosaiced image. This maintains all the samples of the 100-pixel edge length case, but combines samples from all channels (which we know have slightly differing MTF50 values because of the diffraction simulation).
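(A minimal numpy sketch of this kind of in-place white balancing on the CFA data — assuming an RGGB layout and known red/blue multipliers; not necessarily Jack's exact procedure:)

import numpy as np

def wb_mosaic(mosaic, r_gain, b_gain):
    """Scale the red and blue photosites of an RGGB Bayer mosaic
    relative to green (gain 1), without demosaicing."""
    out = np.asarray(mosaic, dtype=np.float64).copy()
    out[0::2, 0::2] *= r_gain  # R photosites
    out[1::2, 1::2] *= b_gain  # B photosites
    return out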

The error in MTF50 value, expressed as a percentage, was obtained for each test:

base: 2.4878 %

short: 3.6225 %

mosaic green: 3.5151 %

mosaic WB: 2.7595 %

I doubt that the difference between "short" and "mosaic green" is significant (or meaningful), with mosaic WB performing much closer to the base case than either of the other two methods. Here is a boxplot:

[attached image: boxplot of the MTF50 errors for the four cases]

Summary: my claim was bogus. The "--bayer green" case on an edge of length 100 is no more accurate than the grayscale case on an edge of length 50. This would seem to suggest that you have to double your edge length to maintain the same error for --bayer green.

Jack's method ("mosaic WB") also performs better than "--bayer green" under these conditions (fairly high noise level, no chromatic aberration at all). I have not measured CA on typical DPR test shots (like those of the D610), but I am reasonably certain that CA would be low enough to not matter.



Extrapolating from these results, one might expect the high-ISO images to benefit from the "mosaic WB" technique, but I would have to do some more experiments to confirm.
 
Good catch. But upon further thought I don't think it's the same thing: Iliah is showing an out of focus (sans lens, very diffuse light) phenomenon at sensor scale, while here we are debating theoretically focus peaked results with a sharp lens at pixel scale.
Jack

here is my thinking.

In the discussion of Iliah's test, one idea that was proposed was slanted microlens arrays. In essence, the mechanism would be scattering across the lens boundaries. I really know very little about lithium niobate AA-filters. However, I am guessing that the filter is designed to have light enter orthogonal to the crystal +/- some angle. Lenses for a Nikon camera would meet that criterion. However, should the angle of the light exceed some critical angle, I am wondering if scattering occurs, with that scattering being dependent on the orientation of the crystal lattice.

A diffuse light source by definition has no constraint on entry angle into the light box. Is it possible that what he is seeing at the sensor level is a result of some sort of asymmetric scattering? If some sort of odd post processing of the sensor data to produce the RAW data is eliminated as the cause, then what is left is something to do with the way the sensor "sees" diffuse light. It seems to me that scattering at the microlens array or in the crystal could be a possible source. Although it might just be a result of shadowing by the light box itself.

Just throwing possibilities out there.
 
Good catch. But upon further thought I don't think it's the same thing: Iliah is showing an out of focus (sans lens, very diffuse light) phenomenon at sensor scale, while here we are debating theoretically focus peaked results with a sharp lens at pixel scale.
Jack

here is my thinking.

In the discussion of Iliah's test, one idea that was proposed was slanted microlens arrays. In essence, the mechanism would be scattering across the lens boundaries. I really know very little about lithium niobate AA-filters. However, I am guessing that the filter is designed to have light enter orthogonal to the crystal +/- some angle. Lenses for a Nikon camera would meet that criterion. However, should the angle of the light exceed some critical angle, I am wondering if scattering occurs, with that scattering being dependent on the orientation of the crystal lattice.

A diffuse light source by definition has no constraint on entry angle into the light box. Is it possible that what he is seeing at the sensor level is a result of some sort of asymmetric scattering? If some sort of odd post processing of the sensor data to produce the RAW data is eliminated as the cause, then what is left is something to do with the way the sensor "sees" diffuse light. It seems to me that scattering at the microlens array or in the crystal could be a possible source. Although it might just be a result of shadowing by the light box itself.

Just throwing possibilities out there.
Gotcha, Scott.
 
This may be also of interest.

I looked at the differences between the D600, D610, D800 and D800E.

Too bad DxO has tested only three lenses with the D800E, and all of them are third-party zooms. I picked the Tamron 70-200/2.8 from those, as it is known to be a sharp lens.

Tamron 70-200/2.8 with D600 (click on Measurements tab/Sharpness/P-Mpix)
Tamron 70-200/2.8 with D610 (click on Measurements tab/Sharpness/P-Mpix)
Tamron 70-200/2.8 with D800 (click on Measurements tab/Sharpness/P-Mpix)
Tamron 70-200/2.8 with D800E (click on Measurements tab/Sharpness/P-Mpix)

You can see that the D600 has the worst sharpness of the four, the D610 and D800 are the same, and the D800E has the best sharpness.
The fact that the D610 and D800 are the same while the other two differ suggests that this is related to the AA filter.

The relative difference between the D600 and D610 is smaller than the relative difference between the D800 and D800E. That could be explained by an AA filter acting in a single dimension only in the D610, as suggested by Optyczne and in this thread.
Noted jtra, with the proviso that P-MPix are only precise to +/-0.5 and whether one was rounded down or up can make a big difference to the conclusions.
 
Frans, wow, lots of fantastic stuff here. I need a little time to digest it and come back at it with a clear head before responding. Thanks!
Many questions warrant many (long) answers :)
Hi Frans,

I apologize in advance for the ridiculous number of questions below. As you can tell I am quite interested in this subject these days. Any help would be greatly appreciated :-)
Of course, the expected increased resolution at blue wavelengths (lower diffraction) can be countered by focus (i.e., green is more in focus than blue), which appears to be the case with these D610 images. I do have actual D7000 photos where the blue channel produces slightly higher resolution than the green.
I never thought of it in such depth. Of course though: Spherical and Chromatic Aberrations + Diffraction would account for the differences and some would even out across channels. Does your model account for the Aberrations? I'd be interested to see the mathematics behind it in a future blog post.
No, unfortunately I am only modelling diffraction and photosite aperture, with the optional inclusion of a 4-dot OLPF. I have not yet read up on spherical aberrations, but I suspect that they will be hard to include in my current rendering algorithm. I suspect that a ray tracing approach would be required; I have actually considered this, but where would I obtain sufficiently accurate parameters for the lenses --- you would have to know the exact optical formula for a given lens.

Chromatic aberration is straightforward to simulate with mtf_generate_rectangle: simply change the magnification for the three channels, and/or add an offset. For example, you could render three channels like this:

./mtf_generate_rectangle --b16 -n 0 -d 100 -x 1 -y 1 -p airy-box --lambda 0.63 -o ca_red.png

./mtf_generate_rectangle --b16 -n 0 -d 100 -x 0 -y 0 -p airy-box --lambda 0.55 -o ca_green.png

./mtf_generate_rectangle --b16 -n 0 -d 100 -x -1 -y -1 -p airy-box --lambda 0.45 -o ca_blue.png

These images are then combined into a Bayered mosaic (I have a little program for that --- maybe I should make a package that combines the Bayer mosaic and DNG creation steps?), and then passed through my hacked makeDNG tool. This gives us the following image:

Bayer Mosaic of the three ca_*.png images, scaled down to 8 bits for display here. This image is exactly what dcraw -D gives you
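(The mosaicing step itself is easy to sketch in numpy — a stand-in for the little program mentioned above, assuming an RGGB layout:)

import numpy as np

def make_rggb_mosaic(red, green, blue):
    """Combine three equal-size grayscale renders into one RGGB
    mosaic; each photosite keeps only its own channel's value."""
    mosaic = np.empty_like(green)
    mosaic[0::2, 0::2] = red[0::2, 0::2]    # R
    mosaic[0::2, 1::2] = green[0::2, 1::2]  # G1
    mosaic[1::2, 0::2] = green[1::2, 0::2]  # G2
    mosaic[1::2, 1::2] = blue[1::2, 1::2]   # B
    return mosaic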

Passing this through dcraw (without any further options) gives us this:

[attached image: demosaiced render showing the red/blue fringes]

Maybe the WB is not ideal, but you can clearly see the red/blue fringes.

So we can now try three experiments:

a) pass the Bayered mosaic image through MTF Mapper as-is. Since the original images generated by mtf_generate_rectangle all had the same intensity range, they are already perfectly white balanced (the zippering you see in the gray Bayer image above is the simulated CA colour fringes).

b) pass the demosaiced image (dcraw -A 80 80 72 72 -6 ca.dng) through MTF mapper.

c) pass the Bayered mosaic image through MTF Mapper using the "--bayer" options.

So here goes:

Option a): passing the mosaiced image (comparable to dcraw -D) through MTF Mapper as-is

Option b): demosaiced image using dcraw

Option c.1): mtf_mapper --bayer blue

Option c.2): mtf_mapper --bayer green

Option c.3): mtf_mapper --bayer red

By the way, the images simulated with mtf_generate_rectangle (above) have the following expected MTF50 values: red=0.309194, green=0.337129, blue=0.377227.

This sequence of images demonstrates the following:

1) In the presence of significant CA (about 1 pixel shift in R/B in both x and y), the only accurate method for measuring MTF50 is to use the "mtf_mapper --bayer" modes.

2) The white-balanced mosaiced image (i.e., dcraw -D) produces more consistent results across the four edges, but all four values are far too low.

3) The demosaiced image (dcraw defaults to AHD, it seems) suffers from both inaccurate values, and large variations.

The relatively short edges (100 pixels) would lead to a small number of samples for --bayer red and --bayer blue (only 25 pixels, effectively), but this does not lead to large inaccuracies in this example because sensor noise was suppressed ("-n 0" option when generating the synthetic images).

In short, unless you know that CA is non-existent, it seems safer to use the "dcraw -D" image followed by "--bayer red/green/blue".
Am I correct in assuming that MTFMapper uses a maximum of 400 pixels even if it is fed more?
So here is the trade-off: we know how noise affects MTF measurements, i.e., the mean value over many individual measurements is unbiased, but the deviation from the mean can be quite large for any single measurement (at high noise levels). A difference between the blue and green (for example) focal plane position relative to the sensor would be systematic, i.e., it would not decrease with repeated measurements. This means that an un-demosaiced image (dcraw -D) followed by white balancing will produce an edge that is a blend of three individual edges (red, green and blue).
I see, that's an interesting way of looking at it: the same physical edge sampled four times at different sampling rates and observed wavelengths - then averaged together.
A "perfectly white-balanced" mosaiced bayer image passed through MTF mapper. Same process as above, except that no CA was introduced

It seems that in this case (no noise, etc.) the green channel dominates the result. (Incidentally, if we use the --bayer red/green/blue options on the above image, we get exactly the same result as above, as we would expect). In other words, if there is no significant CA, then the "dcraw -D followed by WB" method is very similar to the "--bayer green" method, which is a good thing :)
However, by thinking of it as 'blending' the four edge images together aren't we giving up some of the spatial resolution information intrinsic in the fact that we know where each sampled 'edge' is positioned with respect to the others? Ignoring wavelength specific effects for a moment and assuming that the neutral subject is 'uniformly' illuminated (D50 is good enough for this discussion, I think), isn't a white balanced dcraw -d file as precise a reconstruction of the light intensity at the scene as possible with a CFA - at full sensor resolution, approximating the results expected from a monochrome sensor (albeit with a lower base ISO)?
Ok, at this point I will have to reveal my secrets. When you use MTF Mapper's --bayer options I cheat a bit. The first phase of the slanted edge method, i.e., finding the edge location and orientation, simply pretends that we are dealing with a grayscale image. I do this because good demosaicing algorithms are slow, and the fast ones produce even less accurate edge orientation and position estimates (I tested this component individually to come to this conclusion). If you have significant CA, this means that your edge location will be estimated mostly based on the green channel, which would mean that your extracted PSF will not be centred perfectly. Fortunately, the FFT applied to the PSF to obtain the MTF is not sensitive to this shift at all. Unfortunately, I do perform some apodization using a Hamming window (IIRC), which may introduce a tiny bit of sensitivity to exact edge location. Either way, the edge orientation has a far greater impact on MTF accuracy, and as far as I can tell (or remember) the orientation is extracted just fine when treating the Bayer mosaic image as a grayscale image.

Summary: I doubt that edge position is a major factor in accuracy. The "blending" of the three edges (R,G,B) has more of a "broadening" effect on the edge transition area, which lowers MTF, just as illustrated above in the simulated CA experiment. In the absence of CA, green appears to dominate.
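(A quick way to convince yourself of the shift insensitivity — a toy numpy example on a synthetic LSF, not MTF Mapper's actual code:)

import numpy as np

x = np.arange(256)
lsf = np.exp(-0.5 * ((x - 128) / 4.0) ** 2)          # synthetic Gaussian LSF
lsf_shifted = np.exp(-0.5 * ((x - 130) / 4.0) ** 2)  # same LSF, shifted 2 samples

w = np.hamming(256)  # apodization window, as mentioned above
mtf = np.abs(np.fft.rfft(lsf * w))
mtf_shifted = np.abs(np.fft.rfft(lsf_shifted * w))

# Nearly identical after normalisation: |FFT| ignores a pure shift, and
# only the windowing introduces a tiny difference.
print(np.max(np.abs(mtf / mtf[0] - mtf_shifted / mtf_shifted[0])))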
Would this dcraw -d undemosaiced, white-balanced approach (see underlining below) result in a more precise Edge Spread Function than looking at the individual channels?
No. Based on various experiments while developing MTF Mapper, the single most important step is accurate edge orientation estimation (which is why MTF Mapper will combine parallel edges, if possible, when square target shapes are used). Normally, the accuracy of edge orientation estimation would be tied to the overall edge length, with ~25 pixels as a rough lower limit for reasonable results. If we assume the sensor noise is Gaussian (which is true enough for our purposes), then we can assume that the noise will show up with roughly equal magnitude at all frequencies in our MTF curve. The trick is that we usually have very little signal at higher frequencies (i.e., above Nyquist), so the signal-to-noise ratio at the lower frequencies is actually quite good. In addition, if we compute MTF50, then we simply do not care about the noise that ends up above Nyquist anyway.

Summary: since our edge orientation estimation is performed on all photosites (regardless of CFA channel), I would predict that the "--bayer" option in MTF Mapper is able to extract the benefit of a longer edge (say, 100 pixels regardless of CFA channel) even when only using the red or blue channel (effectively only 25 pixels along edge) to compute the per-channel MTF. The smaller number of samples will still produce a poorer signal-to-noise ratio, but the overall error might be manageable. I think I should test this --- maybe a future blog post.
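(A toy numpy illustration of why the orientation estimate matters so much — projecting the pixels with a slightly wrong angle scatters the samples, which broadens the apparent ESF; this is not MTF Mapper's algorithm:)

import numpy as np

theta = np.deg2rad(5.0)  # true edge angle
y, x = np.mgrid[0:100, 0:100].astype(float)
d_true = (x - 50) * np.cos(theta) + (y - 50) * np.sin(theta)
esf = 1.0 / (1.0 + np.exp(-2.0 * d_true))  # smooth synthetic step edge

for err_deg in (0.0, 0.1, 0.5):
    t = theta + np.deg2rad(err_deg)  # estimated angle, slightly off
    d_est = (x - 50) * np.cos(t) + (y - 50) * np.sin(t)
    # RMS deviation between the samples and the profile implied by d_est;
    # it grows with the angle error, i.e. the projected edge looks broader
    resid = np.sqrt(np.mean((esf - 1.0 / (1.0 + np.exp(-2.0 * d_est))) ** 2))
    print(err_deg, resid)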
The trade-off is that the full resolution -d white balanced image offers up to four times as many samples, but it is affected by larger differences in diffraction and aberrations due to the wider frequency bands observed. Single channel analysis has only 1/4 the samples but it is less affected by differences in diffraction and aberrations.
Agreed. As explained above, my gut feeling is that the single-channel analysis would be safer overall (especially in the presence of CA), and that the larger number of samples in the white-balanced image would only start making a meaningful difference when the noise levels are very high. Again, more experiments for me :)
White balancing, in itself, does not affect edge sharpness, so a perfectly white-balanced mosaiced edge image will probably produce a weighted MTF curve (25% red curve, 50% green curve, 25% blue curve).
Why 'mosaiced'? To produce such an image I would suggest the dcraw -d switch (as opposed to -D) because -D does not subtract the black point (immaterial for most Nikons other than the D5300 but quite critical for many other brands) and it does not allow for white balancing by dcraw - which introduces several more steps to generating the TIFF to feed MTFMapper. Since DPR says that they measure the illuminant color temperature and set it right in-camera, dcraw -d -w will produce the undemosaiced white balanced raw data required (-4 and -T will ensure that no gamma or scaling is applied and the output will be a 16-bit tiff file).
Agreed. By "mosaiced" I simply meant "dcraw -d" or "dcraw -D", as opposed to a demosaiced image. Once we introduce demosaicing, all bets are off (as shown above in the extreme CA case).
If absolute accuracy is important, and multiple images are available, then repeated measurements followed by single-channel analysis (e.g., mtf mapper's "--bayer green" option) would be the best strategy.
Does --bayer work with 'gray' TIFFs like those generated by dcraw -D/-d? Would --bayer green use values both in quartet location G1 and G2? Would "--bayer red" work if fed a TIFF that has just values for red in the correct location in each quartet but with the other three colors set to zero (e.g. output of RawDigger 'export to TIFF')?
Yes. It requires a -D/-d image to work as intended, although it will still "work" on a demosaiced image (but your results will probably be worse). Keep in mind, though, that MTF Mapper will convert a demosaiced RGB input image to a grayscale image using one of the "typical" blends: 0.299R + 0.587G + 0.114B. This gives you roughly a luminance-MTF output, which is probably what most people wanted.

The "--bayer green" treats G1 and G2 as one green channel.

In theory you could feed MTF Mapper an image that is zero except at the red CFA photosite locations, but it will not work well. As I discussed above, I use all available pixels for edge orientation estimation. I also think that my thresholding and rectangle-detection code will fail quite badly on such an input. I would still recommend "dcraw -d / -D" as the preferred input when using the --bayer option.
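(To make the channel selection concrete, here is a rough numpy sketch of how photosites could be picked out of a dcraw -d/-D grayscale mosaic — an illustration, not MTF Mapper's internals:)

import numpy as np

def bayer_channel_mask(shape, channel):
    """Boolean mask selecting the photosites of one CFA channel
    in a grayscale mosaic (RGGB layout assumed)."""
    mask = np.zeros(shape, dtype=bool)
    if channel == 'red':
        mask[0::2, 0::2] = True
    elif channel == 'green':
        mask[0::2, 1::2] = True  # G1
        mask[1::2, 0::2] = True  # G2 -- treated as one green channel
    elif channel == 'blue':
        mask[1::2, 1::2] = True
    return mask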
 
I am guessing that the filter is designed to have light enter orthogonal to the crystal +/- some angle. Lenses for a Nikon camera would meet that criterion.
Not always.
A diffuse light source by definition has no constraint on entry angle into the light box. Is it possible that what he is seeing at the sensor level is a result of some sort of asymmetric scattering?
That, plus microlenses' effect, plus mirror box effects.

Removing AA filter does change the picture, it is less elliptical. Shining the light right onto the sensor (no mirror box) changes the picture very slightly.
 
I am guessing that the filter is designed to have light enter orthogonal to the crystal +/- some angle. Lenses for a Nikon camera would meet that criterion.
Not always.
I love you Iliah. You are always so understated. Of course, non-telecentric lenses show increased vignetting.
A diffuse light source by definition has no constraint on entry angle into the light box. Is it possible that what he is seeing at the sensor level is a result of some sort of asymmetric scattering?
That, plus microlenses' effect, plus mirror box effects.

Removing AA filter does change the picture, it is less elliptical. Shining the light right onto the sensor (no mirror box) changes the picture very slightly.
Interesting. So it looks like the predominant effect might be one of scattering (or attenuation) across angled microlenses. Since an AA-filter has preferred beam split directionality in the horizontal and vertical directions, this might accentuate transmission in the cardinal directions, and attenuate transmission in directions towards the corners, causing an elliptical pattern. If there is a single crystal, as hypothesized for the D610 in this thread, then there could be a preferred direction for transmission, distorting the ellipse.

Iliah, if my wild ass conjecture is true, then your measurements of diffuse light on sensors would be able to differentiate between AA-less, single-direction AA, and dual-direction AA. Assuming the second choice actually exists.

I have another interesting thought. The structure of the microlens array might be discerned by measurement of curved, rather than slant edges. Within the same region of the sensor, a slanted or tilted lens array might very well image differently for a curve whose center of curvature is at the center of the frame, vs a curve whose center of curvature is outside of the frame. (i.e. positive vs negative curve)
 
Ok, thanks Ted. Values still seem too low. There must be an incorrect QuickMTF setting that we are overlooking.
Hmmm . . .

Perhaps QuickMTF doesn't like this:

DSC_0171.tiff blown up in ImageJ

Or these:

WTF?! not the finest of edges I've seen lately

Of interest above is the huge difference between the zones in terms of 'noise' amplitude!!

Just out of interest:

[attached image]

Are we really sure that the requested dcraw output is suitable for slant edge testing? Beginning to doubt it, myself.

--
"Engage Brain before operating Keyboard!"
Ted
 
I have another interesting thought. The structure of the microlens array might be discerned by measurement of curved, rather than slant edges. Within the same region of the sensor, a slanted or tilted lens array might very well image differently for a curve whose center of curvature is at the center of the frame, vs a curve whose center of curvature is outside of the frame. (i.e. positive vs negative curve)
I have a friendly company that has the equipment needed to take precise measurements. I will ask them for a window to use it.
 
Are we really sure that the requested dcraw output is suitable for slant edge testing? Beginning to doubt it, myself.
What you see in ImageJ is what a Bayered mosaiced image really looks like --- your example does not appear to be white balanced, hence the spatial pattern

Red | Green

=========

Green | Blue

is clearly visible. (excuse the ascii art)

MTF Mapper has a special option ("--bayer green", for example) to correctly interpret an image in this format. Imatest (used by most review sites) also supports this format, as far as I know.
 
Ok, thanks Ted. Values still seem too low. There must be an incorrect QuickMTF setting that we are overlooking.
Hmmm . . .

Are we really sure that the requested dcraw output is suitable for slant edge testing? Beginning to doubt it, myself.
I repeated the test with an AHD interpolated, raw color space, gamma = 1 dcraw image. The patterning I showed above was then not present.

Edges: 10-90% rise were 1.92 and 2.21 px.

MTF50s were 0.267 and 0.23 cy/px

(green channel, BTW)

--
"Engage Brain before operating Keyboard!"
Ted
 
Ok, thanks Ted. Values still seem too low. There must be an incorrect QuickMTF setting that we are overlooking.
Hmmm . . .

Perhaps QuickMTF doesn't like this:

DSC_0171.tiff blown up in ImageJ

Or these:

WTF?! not the finest of edges I've seen lately

Of interest above is the huge difference between the zones in terms of 'noise' amplitude!!

Just out of interest:

[attached image]

Are we really sure that the requested dcraw output is suitable for slant edge testing? Beginning to doubt it, myself.

--
"Engage Brain before operating Keyboard!"
Ted
Aha, that's the signature of an image that has not been white balanced. That's the job of the -w switch in dcraw, which makes me wonder whether the TIFF file was properly generated. This is what DSC_0171.tiff should look like after the following command

dcraw -d -4 -T -w DSC_0171.NEF

300% - just a little random noise 'cause it's at ISO 800

The lack of white balance shows up as apparent 'noise' in your Edge graph and different peaks in your histograms, but all they show is the relative strengths of the un-white-balanced red, green and blue channels. They are at the expected ratios, as shown by selecting a portion of the white edge in RawDigger: 0.492, 1, 0.853 respectively.
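(Those ratios are easy to check yourself against the -d/-D data; a rough numpy sketch, assuming an RGGB layout and a uniform white ROI — the -w multipliers are then roughly the reciprocals of the red and blue ratios:)

import numpy as np

def white_patch_ratios(roi):
    """Per-channel mean of a uniform white ROI from an undemosaiced
    RGGB mosaic, normalised to green."""
    r = roi[0::2, 0::2].mean()
    g = 0.5 * (roi[0::2, 1::2].mean() + roi[1::2, 0::2].mean())
    b = roi[1::2, 1::2].mean()
    return r / g, 1.0, b / g  # e.g. ~0.492, 1, 0.853 here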

Make sure the NEF goes through dcraw -d -4 -T -w properly as indicated here and let's see if we get closure. Produce MTF50 values for R, G, B and Y if possible.

Jack
 
Aha, that's the signature of an image that has not been white balanced. That's the job of the -w switch in dcraw, which makes me wonder whether the TIFF file was properly generated. This is what DSC_0171.tiff should look like after the following command

dcraw -d -4 -T -w DSC_0171.NEF

The lack of white balance shows up as apparent 'noise' in your Edge graph and different peaks in your histograms, but all they show is the relative strengths of the un-white-balanced red, green and blue channels. They are at the expected ratios, as shown by selecting a portion of the white edge in RawDigger: 0.492, 1, 0.853 respectively.

Make sure the NEF goes through dcraw -d -4 -T -w properly as indicated here and let's see if we get closure. Produce MTF50 values for R, G, B and Y if possible.
Well spotted, Jack.


MTF50s were the same for R, G, B & Y:
0.247 and 0.220 cy/px for RHS slant and lower slant respectively.
Closure?


I also looked at the center bit, well zoomed in RawDigger. In raw composite view the moire was the same at top and right. In each green view, a slight variation. In blue and red views there were differences which changed places when selecting either one, er, if you see what I mean.

--
"It's all grist to the mill . . ."
Ted
 
Well spotted, Jack [missing dcraw option -w].
MTF50s were the same for R, G, B & Y:
0.247 and 0.220 cy/px for RHS slant and lower slant respectively.
I'm wondering if comparison of MTFs at Nyquist is more indicative of the difference between vertical and horizontal 'sharpness'?

At 0.5 cy/px they are 10% and 15% respectively, a ratio of 1.5, whereas the ratio between the MTF50s, 0.247/0.220, is much smaller at 1.12.
 
I am only modelling diffraction and photosite aperture, with the optional inclusion of a 4-dot OLPF. I have not yet read up on spherical aberrations, but I suspect that they will be hard to include in my current rendering algorithm. I suspect that a ray tracing approach would be required; I have actually considered this, but where would I obtain sufficiently accurate parameters for the lenses --- you would have to know the exact optical formula for a given lens.
Right
Chromatic aberration is straightforward to simulate with mtf_generate_rectangle: simply change the magnification for the three channels, and/or add an offset. For example, you could render three channels like this:

./mtf_generate_rectangle --b16 -n 0 -d 100 -x 1 -y 1 -p airy-box --lambda 0.63 -o ca_red.png

./mtf_generate_rectangle --b16 -n 0 -d 100 -x 0 -y 0 -p airy-box --lambda 0.55 -o ca_green.png

./mtf_generate_rectangle --b16 -n 0 -d 100 -x -1 -y -1 -p airy-box --lambda 0.45 -o ca_blue.png

These images are then combined into a Bayered mosaic (I have a little program for that --- maybe I should make a package that combines the Bayer mosaic and DNG creation steps?), and then passed through my hacked makeDNG tool. This gives us the following image:

Bayer Mosaic of the three ca_*.png images, scaled down to 8 bits for display here. This image is exactly what dcraw -D gives you

Passing this through dcraw (without any further options) gives us this:

[attached image: demosaiced render showing the red/blue fringes]

Maybe the WB is not ideal, but you can clearly see the red/blue fringes.
Cool, I think this is the first time I truly understand why we get red/blue fringes :-)
So we can now try three experiments:

a) pass the Bayered mosaic image through MTF Mapper as-is. Since the original images generated by mtf_generate_rectangle all had the same intensity range, they are already perfectly white balanced (the zippering you see in the gray Bayer image above is the simulated CA colour fringes).

b) pass the demosaiced image (dcraw -A 80 80 72 72 -6 ca.dng) through MTF mapper.

c) pass the Bayered mosaic image through MTF Mapper using the "--bayer" options.

So here goes:

Option a): passing the mosaiced image (comparable to dcraw -D) through MTF Mapper as-is

Option b): demosaiced image using dcraw

Option c.1): mtf_mapper --bayer blue

Option c.2): mtf_mapper --bayer green

Option c.3): mtf_mapper --bayer red

By the way, the images simulated with mtf_generate_rectangle (above) have the following expected MTF50 values: red=0.309194, green=0.337129, blue=0.377227.

This sequence of images demonstrates the following:

1) In the presence of significant CA (about 1 pixel shift in R/B in both x and y), the only accurate method for measuring MTF50 is to use the "mtf_mapper --bayer" modes.

2) The white-balanced mosaiced image (i.e., dcraw -D) produces more consistent results across the four edges, but all four values are far too low.

3) The demosaiced image (dcraw defaults to AHD, it seems) suffers from both inaccurate values, and large variations.

The relatively short edges (100 pixels) would lead to a small number of samples for --bayer red and --bayer blue (only 25 pixels, effectively), but this does not lead to large inaccuracies in this example because sensor noise was suppressed ("-n 0" option when generating the synthetic images).

In short, unless you know that CA is non-existent, it seems safer to use the "dcraw -D" image followed by "--bayer red/green/blue".
Right
Am I correct in assuming that MTFMapper uses a maximum of 400 pixels even if it is fed more?
So here is the trade-off: we know how noise affects MTF measurements, i.e., the mean value over many individual measurements is unbiased, but the deviation from the mean can be quite large for any single measurement (at high noise levels). A difference between the blue and green (for example) focal plane position relative to the sensor would be systematic, i.e., it would not decrease with repeated measurements. This means that an un-demosaiced image (dcraw -D) followed by white balancing will produce an edge that is a blend of three individual edges (red, green and blue).
I see, that's an interesting way of looking at it: the same physical edge sampled four times at different sampling rates and observed wavelengths - then averaged together.
A "perfectly white-balanced" mosaiced bayer image passed through MTF mapper. Same process as above, except that no CA was introduced

It seems that in this case (no noise, etc.) the green channel dominates the result. (Incidentally, if we use the --bayer red/green/blue options on the above image, we get exactly the same result as above, as we would expect). In other words, if there is no significant CA, then the "dcraw -D followed by WB" method is very similar to the "--bayer green" method, which is a good thing :)
Would you say that the difference between the dcraw -d/D+white balance reading and the --bayer color readings from the same TIFF would be mostly due to CA? Could the difference be used as a metric of the strength of CA, ideally in units of m pixels?
However, by thinking of it as 'blending' the four edge images together aren't we giving up some of the spatial resolution information intrinsic in the fact that we know where each sampled 'edge' is positioned with respect to the others? Ignoring wavelength specific effects for a moment and assuming that the neutral subject is 'uniformly' illuminated (D50 is good enough for this discussion, I think), isn't a white balanced dcraw -d file as precise a reconstruction of the light intensity at the scene as possible with a CFA - at full sensor resolution, approximating the results expected from a monochrome sensor (albeit with a lower base ISO)?
Ok, at this point I will have to reveal my secrets. When you use MTF Mapper's --bayer options I cheat a bit. The first phase of the slanted edge method, i.e., finding the edge location and orientation, simply pretends that we are dealing with a grayscale image. I do this because good demosaicing algorithms are slow, and the fast ones produce even less accurate edge orientation and position estimates (I tested this component individually to come to this conclusion). If you have significant CA, this means that your edge location will be estimated mostly based on the green channel, which would mean that your extracted PSF will not be centred perfectly. Fortunately, the FFT applied to the PSF to obtain the MTF is not sensitive to this shift at all. Unfortunately, I do perform some apodization using a Hamming window (IIRC), which may introduce a tiny bit of sensitivity to exact edge location. Either way, the edge orientation has a far greater impact on MTF accuracy, and as far as I can tell (or remember) the orientation is extracted just fine when treating the Bayer mosaic image as a grayscale image.

Summary: I doubt that edge position is a major factor in accuracy. The "blending" of the three edges (R,G,B) has more of a "broadening" effect on the edge transition area, which lowers MTF, just as illustrated above in the simulated CA experiment. In the absence of CA, green appears to dominate.
Right - see next post for some additional comments.
Would this dcraw -d undemosaiced, white-balanced approach (see underlining below) result in a more precise Edge Spread Function than looking at the individual channels?
No. Based on various experiments while developing MTF Mapper, the single most important step is accurate edge orientation estimation (which is why MTF Mapper will combine parallel edges, if possible, when square target shapes are used). Normally, the accuracy of edge orientation estimation would be tied to the overall edge length, with ~25 pixels as a rough lower limit for reasonable results. If we assume the sensor noise is Gaussian (which is true enough for our purposes), then we can assume that the noise will show up with roughly equal magnitude at all frequencies in our MTF curve. The trick is that we usually have very little signal at higher frequencies (i.e., above Nyquist), so the signal-to-noise ratio at the lower frequencies is actually quite good. In addition, if we compute MTF50, then we simply do not care about the noise that ends up above Nyquist anyway.

Summary: since our edge orientation estimation is performed on all photosites (regardless of CFA channel), I would predict that the "--bayer" option in MTF Mapper is able to extract the benefit of a longer edge (say, 100 pixels regardless of CFA channel) even when only using the red or blue channel (effectively only 25 pixels along edge) to compute the per-channel MTF. The smaller number of samples will still produce a poorer signal-to-noise ratio, but the overall error might be manageable. I think I should test this --- maybe a future blog post.
The trade-off is that the full resolution -d white balanced image offers up to four times as many samples, but it is affected by larger differences in diffraction and aberrations due to the wider frequency bands observed. Single channel analysis has only 1/4 the samples but it is less affected by differences in diffraction and aberrations.
Agreed. As explained above, my gut feeling is that the single-channel analysis would be safer overall (especially in the presence of CA), and that the larger number of samples in the white-balanced image would only start making a meaningful difference when the noise levels are very high. Again, more experiments for me :)
I was thinking about this: two things happen as we look at a single Bayer channel vs a monochrome sensor, all other things being equal (blue for instance). First, we sample the image on the sensor at half the rate horizontally and vertically; second, we do not integrate all the light that falls on it (as we would if the sensor had 1/4 the pixels in the same FF area). The former means that the Nyquist frequency is cut in half for the blue Bayer channel vs the monochrome version (for our D610 from about 4000 lw/ph, or 84 lp/mm, to 2000 lw/ph, or 42 lp/mm), with implications for accuracy and precision; the latter that a large pillow in one of the corners of a square bed is a better model for the sensing shape?
White balancing, in itself, does not affect edge sharpness, so a perfectly white-balanced mosaiced edge image will probably produce a weighted MTF curve (25% red curve, 50% green curve, 25% blue curve).
Why 'mosaiced'? To produce such an image I would suggest the dcraw -d switch (as opposed to -D) because -D does not subtract the black point (immaterial for most Nikons other than the D5300 but quite critical for many other brands) and it does not allow for white balancing by dcraw - which introduces several more steps to generating the TIFF to feed MTFMapper. Since DPR says that they measure the illuminant color temperature and set it right in-camera, dcraw -d -w will produce the undemosaiced white balanced raw data required (-4 and -T will ensure that no gamma or scaling is applied and the output will be a 16-bit tiff file).
Agreed. By "mosaiced" I simply meant "dcraw -d" or "dcraw -D", as opposed to a demosaiced image. Once we introduce demosaicing, all bets are off (as shown above in the extreme CA case).
Right
If absolute accuracy is important, and multiple images are available, then repeated measurements followed by single-channel analysis (e.g., mtf mapper's "--bayer green" option) would be the best strategy.
Does --bayer work with 'gray' TIFFs like those generated by dcraw -D/-d? Would --bayer green use values both in quartet location G1 and G2? Would "--bayer red" work if fed a TIFF that has just values for red in the correct location in each quartet but with the other three colors set to zero (e.g. output of RawDigger 'export to TIFF')?
Yes. It requires a -D/-d image to work as intended, although it will still "work" on a demosaiced image (but your results will probably be worse). Keep in mind, though, that MTF Mapper will convert a demosaiced RGB input image to a grayscale image using one of the "typical" blends: 0.299R + 0.587G + 0.114B. This gives you roughly a luminance-MTF output, which is probably what most people wanted.

The "--bayer green" treats G1 and G2 as one green channel.
Ok, I understand. When we are trying to compare the 'sharpness' of camera+lens systems I think the key is being consistent. Or is there one combination that should be preferred to others by photographers?
In theory you could feed MTF Mapper an image that is zero except at the red CFA photosite locations, but it will not work well. As I discussed above, I use all available pixels for edge orientation estimation. I also think that my thresholding and rectangle-detection code will fail quite badly on such an input. I would still recommend "dcraw -d / -D" as the preferred input when using the --bayer option.
Yes, having just tried it I can confirm that it does not like it :-)
 
