# MTF: I think I Isolated the D610 Directional AA filter's MTF Curve

Started Apr 10, 2014 | Discussions

Should your photography journey have brought you to the Sharpness, Frequency Response and Modulation Transfer Function chapter, I have this to say: lots of math, but quite eye opening. Grab a cuppa.

In a simplified model of system frequency response, light from the scene arrives at the lens, gets diffracted by its aperture, smeared by lens aberrations, smeared some more by the anti-aliasing (AA) filter, smeared again if not perfectly focused on the sensing plane, and passes through microlenses and various filters on top of a pixel before hitting it, being converted into photoelectrons and stored in a raw file by our cameras. Every step of the way needs to be optimized if one wants to have the best spatial resolution in town.

The neat thing about working in the frequency domain (with MTF) is that in certain situations, if one knows the frequency response of each component in the system, one can calculate the total system (camera) response simply by multiplying them all together. So, bunching lens aberrations, defocus and all leftovers into a catch-all Gaussian component for simplicity, one could simply multiply the MTF curves of

1) Diffraction
2) Lens aberrations/defocus/other
3) AA filter
4) Microlens/pixel aperture

together in order to get the camera's overall MTF curve like so:

MTF_system = MTF_diff x MTF_lens x MTF_AA x MTF_pix

The result should theoretically look like this from the raw green channel of a D610 at ISO100 set up like the one in DPR's new Studio Scene (Low Light DSC_0199.NEF to get away from what looks like a little shutter shock in earlier captures):

Multiplying the individual MTF curves together (dashed lines) yields overall MTF Camera System Response (black line). Wavelength 0.55um, Lens Gaussian stddev 0.3px, f/5.6

The four dashed lines represent the individual MTF curves of the four simplified components above (luckily easyish formulas model each), the black one their product = the total camera system response.

This is the curve that Lenstip.com, Photozone.de, Roger Cicala etc. would look at to evaluate lenses. It's called an Optical Transfer Function (OTF) instead of MTF because it is normalized to 1 at the origin. A value of 0.5 is the famous MTF50 reading, which correlates well with human perception of spatial resolution, so it is often used as a figure of merit for lenses, the higher the better. In this case the frequency at which MTF is 0.5 is 0.271 cycles/px, or 46.0 lp/mm, or 2186 lw/ph.

To see how theory does when confronted with reality I used open source MTF Mapper by Frans van den Bergh to generate the MTF curve out of this camera/lens setup from the slanted edge below the center of the referenced NEF. It's shown as a green line below because I only used the green raw channel (assuming an average wavelength of 0.55um) in order to minimize CA and other color related aberrations (the green channel is supposedly determinant in our perception of sharpness anyways).

The green line is the actual MTF Curve produced by MTF Mapper from the green channel of the referenced NEF

MTF50 is measured at 0.2685 cycles/px, 45.5 lp/mm, 2163 lw/ph, pretty close to theory. The catch-all lens/defocus/other Gaussian factor at close to 0.3 pixels standard deviation helped make the graph look good.

Note how the only MTF component that reaches zero is the AA's, which gets multiplied in with all the others. Since any number times zero is zero one can therefore calculate the 'strength' of the AA by seeing at what frequency the total MTF curve first hits zero. In this case around 0.64 cycles/pixel, which corresponds to an AA-induced image shift of about 0.78 pixels (or 2x0.39 half shifts as some prefer to see it).
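For reference, the shift follows directly from the two-delta cosine model: the first zero of |cos(2*pi*(d/2)*f)| falls where (d/2)*f0 = 1/4, so d = 1/(2*f0). A quick back-of-the-envelope check with the 0.64 cycles/px figure from above:

```python
# First zero of the two-delta AA model |cos(2*pi*(d/2)*f)| occurs where
# (d/2)*f0 = 1/4, i.e. d = 1/(2*f0)
f0 = 0.64               # measured first zero, cycles/pixel
d = 1 / (2 * f0)        # total AA-induced image shift, pixels
print(round(d, 3))      # 0.781, i.e. about 2 x 0.39 half shifts
```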

The fact that individual MTFs get multiplied together to get the Total System's is very useful to answer a multitude of questions. For instance last year I wondered aloud whether recent Exmor sensors such as the ones found in the D610 and A7 have anti-aliasing only in one direction. It would be easy to do by simply dropping one of the two lithium niobate plates (say the vertical) while leaving the other one intact.

One way to find out would be simply to divide the total MTF curves obtained by measurements in the two directions. In other words, if we measured MTF curves in the horizontal direction (with AA) and the vertical (supposedly without) in the same raw file, the ratio of the two total MTF curves would be:

MTF_h / MTF_v = (MTF_diff,h x MTF_lens,h x MTF_AA,h x MTF_pix,h) / (MTF_diff,v x MTF_lens,v x MTF_pix,v)

Ignoring astigmatism, all the individual terms are the same (it's the same camera/lens/NEF) in the vertical and horizontal directions, so they cancel out, except for the AA. Therefore

MTF_h / MTF_v = MTF_AA

If this were true, the curve representing the ratio of the horizontal and vertical system MTF curves should look like the theoretical dashed blue line of the AA in the graphs above, at least up to the zero. Things become unreliable after the zero because we are dividing one very small noisy number by another.
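As a sanity check of the cancellation argument, here is a small synthetic sketch (stand-in component curves, not the D610 measurements): build two noiseless system MTFs that differ only by the AA term, take their ratio, and mask out the region where the denominator gets small.

```python
import numpy as np

f = np.linspace(0, 1, 201)                             # cycles/pixel
# Stand-in for the shared terms (diffraction x lens x pixel): here just
# a 0.3 px gaussian times the pixel-aperture sinc
shared = np.exp(-(2*np.pi*0.3*f)**2 / 2) * np.abs(np.sinc(f))
mtf_aa = np.abs(np.cos(2*np.pi * 0.39 * f))            # AA with d/2 = 0.39 px
mtf_h = shared * mtf_aa                                # horizontal: AA present
mtf_v = shared                                         # vertical: no AA

ratio = np.full_like(f, np.nan)
ok = mtf_v > 0.05               # skip frequencies where we'd divide tiny numbers
ratio[ok] = mtf_h[ok] / mtf_v[ok]
# Where valid, the ratio reproduces the AA curve exactly (noiseless case)
```

With real measurements the same division applies, only with noise, which is why everything past the first zero should be taken with a grain of salt.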

'Measured' MTF of the D610's AA filter: it apparently only smears vertical detail

Looks like an AA MTF curve to me, and no fudge factor involved here. So it does appear that the D610's AA filter only works in one direction, the vertical one. This could explain why DxOmark.com's D610's averaged sharpness measurements typically favor the D610 over the D600 by about 1 P-Mpix with the same lens. And it has interesting implications in terms of how later Exmor images should be sharpened. Still digesting and still playing.

Look forward to comments, corrections, clarifications.

Cheers,

Jack

PS I am indebted to those who have gone before me, especially this excellent discourse between Detail Man and Frans van den Bergh

A7 Directional AA filter's MTF Curve

Same procedure on the A7. Note from the MTF50 measurements that while the D610 has no AA smearing action in the horizontal direction, the A7 doesn't have it vertically instead. So if the D610 achieved those results by removing the horizontal LiNbO3 plate, the A7 would have removed the vertical one.

Ratio of Measured Vertical and Horizontal MTF curves off slanted edges for the A7 = MTF curve of V-H differences = MTF curve of its AA in Horizontal direction only?

Not as perfect a fit to theory (some astigmatism or imperfect alignment perhaps?) but pretty good, and again possibly quite indicative: the difference in MTFs measured off slanted edges in the vertical and horizontal directions, from data in the same raw capture, results in what looks like an AA filter curve. Which would mean that the AA is present in one direction but not the other.

In the case of the A7 it appears that the AA causes a zero at about 0.685 cycles per pixel, which corresponds to a shift of about 0.73 pixels (or 2 x 0.365 half shifts) in the horizontal direction. This curve would imply roughly no AA action in the vertical direction.

Re: MTF: I think I Isolated the D610 Directional AA filter's MTF Curve

Thank you for yet another very interesting contribution.

Since no responses are cluttering this thread with intelligent stuff I'll ask a dumb question:

Why, why, why would a manufacturer introduce a one-dimensional AA filter?  Perhaps I've missed the speculation about this and if so I'd be grateful for a reference.

It seems like such a filter adds entirely gratuitous astigmatism and it does indeed raise questions about, eg, sharpening.

...Why oh why...

AZ Steve wrote:

Thank you for yet another very interesting contribution.

Since no responses are cluttering this thread with intelligent stuff I'll ask a dumb question:

Why, why, why would a manufacturer introduce a one-dimensional AA filter? Perhaps I've missed the speculation about this and if so I'd be grateful for a reference.

It seems like such a filter adds entirely gratuitous astigmatism and it does indeed raise questions about, eg, sharpening.

...- would "why" ever be a dumb question?

[informal answer: it gets "dumb" when you're digging into details that fulfill both of the following conditions: an extremely low impact on your actual productivity, AND furthering no actual interest on either your or the questioned party's behalf]

There are at least two, probably three reasonable answers, and we will - unless we can directly question the OD group at Sony or Nikon - never get the COMPLETE answer. There probably isn't "just one" correct answer; it's a combined production design choice.

My suggestions:

1. Since the second plate is actually there, it might be a birefringence angle miscalculation.
2. The miscalculation might actually be in the phase plate layer between the two birefringent layers, or the phase plate is calculated to give maximum effect at NIR and NUV (which would give a very low de-polarizing effect on green, where Jack measured and simulated the curves).
3. The "error" is indeed pre-calculated, and it's there to minimize the negative effects of the line-skipping readout pattern the sensor uses for video recording.

no.1 would be embarrassing, but not the end of the world.

no.2 is actually quite probable, since it's R&B that really needs the AA layer. The "green" channel has no need of the AA layer - it would be much better off without it!

no.3 is also quite probable, but probably not the complete and ultimate answer. They no doubt thought about it, the question is how much weight (over the still image performance) Nikon would allow such a product optimization to have.

Re: MTF: I think I Isolated the D610 Directional AA filter's MTF Curve

Interesting, thanks. Couple of questions, though.

What does the second (and first, for that matter) figure show? I'm not familiar with MTF mapper -- and since this post is about anisotropy -- is it an average of vertical and horizontal? Sagittal and tangential? Just one or another? Is it an average taken at a number of locations in the FOV?

I guess I don't understand the last figure either. It does show a likely zero at about 0.63 cycles/pixel, but also at 0.75 and 0.84. You attribute the variability after the first likely zero to noise from division of small numbers, but that indicates that the value of the denominator is also grovelling in the dirt here. There may well be directionality but showing only the ratio isn't a very good way to show it. Why not just show the MTFs in the vertical and horizontal separately?

Nor do I understand this statement: Ignoring astigmatisms all the individual terms are the same (it's the same camera/lens/NEF) in the vertical and horizontal direction. I'm sure the 85/1.8G is a fine lens with little or no astigmatism, but at this level of analysis surely that should be justified?

Out of curiosity, how did you model the other components of the MTF in figure 1?

Also, where does this definition come from? It's called an Optical Transfer Function (OTF) instead of MTF because it is normalized to 1 at the origin. In my world the OTF is the FT(point spread function). MTF is an observable (without the phase information) so is the modulus of the OTF.

One Directional AA Deconvolution Kernel

AZ Steve wrote:

Why, why, why would a manufacturer introduce a one-dimensional AA filter? Perhaps I've missed the speculation about this and if so I'd be grateful for a reference.

It seems like such a filter adds entirely gratuitous astigmatism and it does indeed raise questions about, eg, sharpening.

Re sharpening: assuming the AA action is perfectly aligned with the sensor, it should not be too tough to develop a kernel to deconvolve the AA in the desired direction only (e.g. using Photoshop's Filter/Other/Custom).

Any suggestions for a kernel if we assume that the AA half-shifts the image, say, 0.375 pixels in the horizontal direction only?
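For what it's worth, here is one hedged sketch of how such a kernel could be derived (illustrative values only; it assumes the two-delta AA model with a total shift d = 0.75 px and perfect alignment with the sensor rows): take a regularized inverse of the AA's cosine response in the frequency domain, transform back, and truncate to a few taps. The result would be applied along the horizontal direction only.

```python
import numpy as np

d = 0.75          # assumed total AA shift (2 x 0.375 px half shifts)
eps = 0.05        # regularization: keeps the boost finite, trades ringing for noise
N = 256

f = np.fft.fftfreq(N)                        # cycles/pixel
H = np.cos(2 * np.pi * (d / 2) * f)          # two-delta AA frequency response
G = H / (H**2 + eps)                         # regularized (Wiener-style) inverse
g = np.fft.fftshift(np.real(np.fft.ifft(G))) # spatial-domain inverse filter

taps = 9                                     # truncate to a small 1-D kernel
c = N // 2
kernel = g[c - taps//2 : c + taps//2 + 1].copy()
kernel /= kernel.sum()                       # unit DC gain (preserve brightness)
# kernel is symmetric: a positive centre tap with negative first neighbours,
# i.e. a one-directional sharpening filter
```

Applied as a 1x9 row kernel (e.g. scaled to integers for Photoshop's Custom filter) it would boost horizontal frequencies roughly in inverse proportion to the AA's attenuation; eps sets how hard it pushes near the response minimum.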

Re: ...Why oh why...

The_Suede wrote: There are at least two, probably three reasonable answers, and we will - unless we can directly question the OD group at Sony or Nikon - never get the COMPLETE answer. There probably isn't "just one" correct answer; it's a combined production design choice.

My suggestions:

1. Since the second plate is actually there, it might be a birefringence angle miscalculation.

Hopefully not! Plus, this seems to be a bit of a trend with later Exmors I have evaluated from DPR raw captures (e.g. in addition to the above, the NEX-6 and X-A1, but curiously not the A58) - could polarization in their lighting be causing any of this?

2. The miscalculation might actually be in the phase plate layer between the two birefringent layers, or the phase plate is calculated to give maximum effect at NIR and NUV (which would give a very low de-polarizing effect on green, where Jack measured and simulated the curves).

By phase plate I assume you mean what Nikon shows as the Wave plate below (their image). So if I understand correctly you are saying that if the circular polarization introduced were of light strength around mid-wavelengths the second birefringent plate (LPF2) would have little effect?

Wave plate: By converting polarized light into circularly polarized light with the wave plate, two points are divided into four points. The original light and light separated in horizontal direction with the low-pass filter 1 are transmitted through the low-pass filter 2 with the wavelengths unchanged. The original light is transmitted as it is, and light separated with the low-pass filter 1 changes only direction vertically (two points are maintained).

Would the fact that the earlier MTF50 graphs showing strong horizontal/vertical spatial resolution separation were derived using information from all three channels (using White Balanced Raw data) negate this possibility?

3. The "error" is indeed pre-calculated, and it's there to minimize the negative effects of the line-skipping readout pattern the sensor uses for video recording.

no.1 would be embarrassing, but not the end of the world.

no.2 is actually quite probable, since it's R&B that really needs the AA layer. The "green" channel has no need of the AA layer - it would be much better off without it!

Interesting.  Can you explain why?

no.3 is also quite probable, but probably not the complete and ultimate answer. They no doubt thought about it, the question is how much weight (over the still image performance) Nikon would allow such a product optimization to have.

Thanks as always for all the great insights!

Jack

Re: MTF: I think I Isolated the D610 Directional AA filter's MTF Curve

wfektar wrote:

Interesting, thanks. Couple of questions, though.

What does the second (and first, for that matter) figure show?

The first graph shows the theoretical MTF curves for a D610 as set up, including total MTF and the individual components that make it up. Parameters of the individual components are: green light at 0.55um, pixel pitch 5.9 microns, f/5.6, AA d/2 offset 0.39px, Lens/defocus/plug gaussian MTF of 0.3 pixels standard deviation.

The second graph is simply the first graph with the actual MTF curve added, as produced by the open source MTF Mapper by Frans van den Bergh. It shows how well theory models reality.

I'm not familiar with MTF mapper -- and since this post is about anisotropy -- is it an average of vertical and horizontal? Sagittal and tangential? Just one or another? Is it an average taken at a number of locations in the FOV?

The measured MTF curves in the graphs are generated by MTF Mapper by analyzing the green channel raw data around the two slanted edges (one horizontal and one vertical) near the center of DPR new studio scene's DSC_0199.NEF. Only the MTF curve from the horizontal 'crop' and the ratio of the two curves are shown in the earlier graphs (see more below).

Slanted edges analyzed by MTF Mapper. I actually used smaller areas than shown for this analysis (about 300x200px) in order to minimize the effects of distortion

I guess I don't understand the last figure either. It does show a likely zero at about 0.63 cycles/pixel, but also at 0.75 and 0.84. You attribute the variability after the first likely zero to noise from division of small numbers, but that indicates that the value of the denominator is also grovelling in the dirt here.

That's a good point. I was just impressed that it looked like the MTF of a cosine (which I understand is the modulus of the Fourier transform of the impulse response of an ideal two-dot beam splitter) of roughly the correct wavelength - as one would expect when dividing the MTF curves of identical cameras, one with AA and one without, all other things being equal.

With regards to the zeros following the first one I have noticed that MTF Mapper's MTF curves sometimes show a slight periodic waviness - which I have felt the desire to filter out, but never did. 'Slight' gets amplified when taken as a ratio of two small numbers. I wish it weren't there though, also because it could result in loss of accuracy in determining the exact first zero - which is an indication of the 'strength' of the AA. Correct?

I should ask Frans about it. I have a feeling that the small higher frequency wave (something less than 0.1 cycles/pixel) superimposed on the MTF curve is a result of the various computations required to generate the Line Spread Function, its derivative PSF and subsequent Fourier transform - which include (-1,0,1) smoothing on the PSF and apodization through a Hamming window. I think this is done by most MTF analyzers though.
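For anyone following along, the processing chain being discussed can be sketched end to end with a synthetic edge (a Gaussian blur standing in for the whole camera; the sigma and oversampling values are arbitrary): the edge profile is differentiated, windowed, and Fourier-transformed.

```python
import numpy as np

os_ = 4                                  # 4x oversampling, as slanted-edge binning gives
sigma = 0.6                              # assumed total blur, pixels
x = np.arange(-16, 16, 1/os_)            # sample positions, pixels

lsf_true = np.exp(-x**2 / (2*sigma**2))  # gaussian line spread function
esf = np.cumsum(lsf_true)                # its integral: the edge spread function

lsf = np.diff(esf)                       # derivative of the ESF recovers the LSF
lsf = lsf * np.hamming(len(lsf))         # apodization (Hamming window)

mtf = np.abs(np.fft.rfft(lsf))
mtf /= mtf[0]                            # normalize to 1 at zero frequency
freq = np.fft.rfftfreq(len(lsf), d=1/os_)  # cycles/pixel (up to os_/2)
# mtf should track exp(-(2*pi*sigma*f)^2/2), the gaussian's analytic MTF
```

In this noiseless case the recovered curve sits essentially on top of the analytic one; with real data the windowing and finite-difference steps are where small periodic artifacts like the ones discussed could creep in.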

There may well be directionality but showing only the ratio isn't a very good way to show it. Why not just show the MTFs in the vertical and horizontal separately?

Voilà. What do you see?

Nor do I understand this statement: Ignoring astigmatisms all the individual terms are the same (it's the same camera/lens/NEF) in the vertical and horizontal direction. I'm sure the 85/1.8G is a fine lens with little or no astigmatism, but at this level of analysis surely that should be justified?

Out of curiosity, how did you model the other components of the MTF in figure 1?

Excellent, this gives me a chance to check the math. We are working off a line spread function whose derivative is the point spread function whose Fourier transform is the OTF whose modulus is the MTF (thanks for the correction), so I think we can treat everything as 1 dimensional, right?

f = frequency in cycles/pixel

Diffraction (chat): 2/pi * [arccos(s)-s*sqrt(1-s^2)] with s = lambda/pitch * F# * f

Pixel (box): abs[sin(pi*f)/(pi*f)]

AA (deltas): abs[cos(2pi*(d/2)*f)] with d = the spatial shift in pixels introduced by the birefringent plate

Lens/defocus/fudge = exp[-(2pi*r*f)^2/2] with r = standard deviation in pixels

Are these correct? What would be a better model for slight defocus?
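For checking purposes, the four models above drop straight into code (parameters from the opening post: 0.55 um light, 5.9 um pitch, f/5.6, d/2 = 0.39 px, 0.3 px Gaussian; a sketch for verification, not a validated implementation):

```python
import numpy as np

lam, pitch, Nf = 0.55, 5.9, 5.6     # wavelength (um), pixel pitch (um), f-number
d, r = 0.78, 0.3                    # AA total shift (px), gaussian stddev (px)

def mtf_diff(f):                    # diffraction (circular aperture)
    s = np.clip(lam / pitch * Nf * f, 0.0, 1.0)
    return 2/np.pi * (np.arccos(s) - s*np.sqrt(1 - s**2))

def mtf_pix(f):                     # pixel aperture (box); np.sinc(x) = sin(pi x)/(pi x)
    return np.abs(np.sinc(f))

def mtf_aa(f):                      # two-delta AA beam splitter
    return np.abs(np.cos(2*np.pi * (d/2) * f))

def mtf_lens(f):                    # gaussian catch-all for aberrations/defocus/other
    return np.exp(-(2*np.pi * r * f)**2 / 2)

def mtf_system(f):
    return mtf_diff(f) * mtf_pix(f) * mtf_aa(f) * mtf_lens(f)

# The product crosses 0.5 right around the 0.271 cycles/px quoted in the OP
```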

Also, where does this deifinition come from? It's called an Optical Transfer Function (OTF) instead of MTF because it is normalized to 1 at the origin. In my world the OTF is the FT(point spread function). MTF is an observable (without the phase information) so is the modulus of the OTF.

Ah, I read that somewhere but I am sure you are right. A quick check on wikipedia confirms that you are indeed, so I stand corrected. Is there a different name for curves that are normalized to zero?

Jack

Errata Corrige

Jack Hogan wrote:

I have a feeling that the small higher frequency wave (something less than 0.1 cycles/pixels) superimposed on the MTF curve is a result of the various computations required to generate the Line Spread Function, its derivative PSF and subsequent Fourier transform

...

We are working off a line spread function whose derivative is the point spread function whose Fourier transform is the OTF whose modulus is the MTF

and below, 'zero' should read 'one':

Is there a different name for curves that are normalized to zero?

Jack

Re: MTF: I think I Isolated the D610 Directional AA filter's MTF Curve

This is really good work, Jack. It uses one of my favorite techniques. When I was working in speech synthesis, speech bandwidth compression, and speech recognition in the 60s, we called what you're doing analysis by synthesis. It's very powerful, but dangerous (since it depends on your model being accurate), kind of like using a sampling scope. I say that only as a general caveat; I see no evidence of any problem in your work.

And, as you hinted in your comment on my blog, this explains my a7 slanted edge MTF results.

Jim

Re: MTF: I think I Isolated the D610 Directional AA filter's MTF Curve

JimKasson wrote:

This is really good work, Jack. It uses one of my favorite techniques. When I was working in speech synthesis, speech bandwidth compression, and speech recognition in the 60s, we called what you're doing analysis by synthesis. It's very powerful, but dangerous (since it depends on your model being accurate), kind of like using a sampling scope. I say that only as a general caveat; I see no evidence of any problem in your work.

And, as you hinted in your comment on my blog, this explains my a7 slanted edge MTF results.

Jim

Thanks Jim, coming from you it means a lot.

Re: ...Why oh why...

Jack Hogan wrote:

The_Suede wrote: There are at least two, probably three reasonable answers, and we will - unless we can directly question the OD group at Sony or Nikon - never get the COMPLETE answer. There probably isn't "just one" correct answer; it's a combined production design choice.

My suggestions:

1. Since the second plate is actually there, it might be a birefringence angle miscalculation.

Hopefully not! Plus, this seems to be a bit of a trend with later Exmors I have evaluated from DPR raw captures (e.g. in addition to the above, the NEX-6 and X-A1, but curiously not the A58) - could polarization in their lighting be causing any of this?

Since polarized scene light would have the same effect on all AA-filter cameras, I wouldn't think that's very probable. Additionally, I get almost the same (somewhat smaller) differences in h/v with those cameras using our backlit chart. And that's as close to perfectly circular in pol as you get - it has two diffusion layers and a grain reflector closest to the light source.

2. The miscalculation might actually be in the phase plate layer between the two birefringent layers, or the phase plate is calculated to give maximum effect at NIR and NUV (which would give a very low de-polarizing effect on green, where Jack measured and simulated the curves).

By phase plate I assume you mean what Nikon shows as the Wave plate below (their image). So if I understand correctly you are saying that if the circular polarization introduced were of light strength around mid-wavelengths the second birefringent plate (LPF2) would have little effect?

Circularly polarized light contains equal amounts of energy in all field vector angles (when integrated over longer periods of time). When you send a ray of circularly polarized light through a transition medium change - from isotropic/random (air) into a birefringent medium (the filter plate) - the birefringent medium separates the ray's energy into its two perpendicularly discretized components. The component with a vector perpendicular to the optical axis of the birefringence surface gets one refraction index; the component where the vector goes through the birefringence surface sees another refraction index.

In an AA filter plate, the optical axis of the birefringence is at an angle to the physical surface. It isn't cut "along" the crystal surface, but at an angle over it. That's why even rays that hit the plate perfectly head-on get separated. That's the entrance to (1) below.

At the exit of (1), both discrete rays are linearly polarized. One is polarized exactly along the birefringence surface, the other is 90º across.

If you send this directly into LP2, all that will happen is that one of the rays will see one refractive index, the other another one. You'll just move the two rays slightly differently, you will NOT separate them into four rays.

Wave plate: By converting polarized light into circularly polarized light with the wave plate, two points are divided into four points. The original light and light separated in horizontal direction with the low-pass filter 1 are transmitted through the low-pass filter 2 with the wavelengths unchanged. The original light is transmitted as it is, and light separated with the low-pass filter 1 changes only direction vertically (two points are maintained).

Would the fact that the earlier MTF50 graphs showing strong horizontal/vertical spatial resolution separation were derived using information from all three channels (using White Balanced Raw data) negate this possibility?

A phase/wave plate is exactly the same as the AA filter plates - but in this case you've cut the crystal so that the plate surface and the birefringence surface are aligned. What this does is take one ray and make half of the energy pass slightly slower through the plate. The refractive index is basically "1 / light propagation speed", which is why vacuum has a refraction index of "1.000"... no particles to slow light down, it travels at full speed ahead. 1/1 speed.

If the ray is circularly polarized from the beginning, that makes no change at all (from a simplified PoV). If the ray passing in is linearly polarized, it means that when the plate thickness is equal to (light speed difference * wavelength/4) the ray is circularly polarized when it exits.

So the polarization (linear, elliptic, circular) at the exit is dependent on the wavelength of the ray passing in. When the wavelength is an integer divisor of the first possible 1/4-wave wavelength, the exit ray is perfectly circularly polarized. At the midpoints between the integer divisor points, the delay for one of the vectors adds up to a full-circle phase shift - so it just adds back up to unity again.

So the wave plate is tuned to give good re-polarization at certain wavelengths.

At the wavelengths where the wave plate is matched to the 1/4 delay, the two rays at the exit from the wave plate are both circularly polarized, and they are both split once more by the LP2 plate - into four rays.

At the intermediate wavelengths giving full-circle phase shifts, their polarizations are somewhere between elliptical and linear, and that means they won't be split by the second plate. Just displaced slightly differently.

This means the second plate gives different amounts of energy spread depending on the wavelength of the incoming light...
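A minimal Stokes-style sketch of the wavelength dependence being described (the tuning wavelength below is a pure guess to illustrate the NIR idea, not data): for linear light at 45 degrees to a retarder's axes, the degree of circularity at the exit is |sin(phi)| with phase retardance phi = 2*pi*dn*t/lambda.

```python
import numpy as np

lam0 = 0.85                  # hypothetical tuning wavelength (um) - a guess, not data
dn_t = lam0 / 4              # quarter-wave optical path difference at lam0

def circularity(lam_um):
    """|S3| of the exit beam: 1 = circular (LP2 splits fully), 0 = linear."""
    phi = 2 * np.pi * dn_t / lam_um   # phase retardance in radians
    return abs(np.sin(phi))

# circularity(0.85)  -> 1 (quarter wave: fully re-polarized, split into four)
# circularity(0.425) -> ~0 (full wave at half lam0: stays linear, no second split)
```

So the second split, and hence the AA strength in that direction, really does swing between full and essentially none across the spectrum, depending on where the plate is tuned.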

3. The "error" is indeed pre-calculated, and its there to minimize the negative effects of the line-skipping readout pattern the sensor uses for video recording.

no.1 would be embarrassing, but not the end of the world.

no.2 is actually quite probable, since it's R&B that really needs the AA layer. The "green" channel has no need of the AA layer - it would be much better off without it!

Interesting. Can you explain why?

Information surface holes.

From a reconstruction perspective, the numerical reliability of the interpolation of a Bayer scheme is dependent on how big the information holes are. If a point in the image plane has a very low numerical analysis accuracy probability, it is a "hole". An unknown vector.

The size of the "hole" in the green map is the pixel width, plus the dead gap between pixels, minus the point spread function. Since the dead gap is nowadays typically very small, and the absolute practical minimum PSF in reality approaches 1µm to the 50% energy point, a normal ~5µm pixel will typically have ~3.5-4µm holes in the information map - placed at the centers of the pixels that are "not green".

If you map that so that black is "low probability/predictability", then it looks like this:

Green information map

Red information map

Since the relative area ratio of low predictability is so very much smaller than the "known" area in green, almost all points on the image surface can be interpolated accurately (minus noise considerations, of course...). Reliable interpolation >> no need of an AA filter to "borrow" adjacent information at the cost of lowered per-pixel contrasts.

In the red (and blue) that area ratio is almost reversed. There's a much higher surface percentage that is black, that we DON'T know (or cannot predict with any numerical reliability), than what we actually know - if the lens is good. This is what causes aliasing, and that horrible subset of aliasing called moire. The interpolation has to guess at positions of very low predictability - and if the surrounding "support" data is hard to interpret, the interpolation engine often guesses wrong. Moire is when those incorrect guesses are systematically modulated into a lower frequency pattern by the underlying HF image data, by the matching/mismatching of the underlying original surface data and the information readout overlay pattern.

All/any blurring introduced before the image formation makes the information surface more uniform, minimizes the chance of really wrong interpolation guesses - but at the cost of per-pixel contrast, or "sharpness".

Bayer makes the (correct) assumption that chroma data in a normal image has lower energy at high frequencies than luma data, so in most "image-average" cases this is a good tradeoff of interpolation accuracy. Lose a little bit of green channel (luma) HF and gain a bit of chroma HF stability.

no.3 is also quite probable, but probably not the complete and ultimate answer. They no doubt thought about it, the question is how much weight (over the still image performance) Nikon would allow such a product optimization to have.

Thanks as always for all the great insights!

Jack

Re: MTF: I think I Isolated the D610 Directional AA filter's MTF Curve

Jack Hogan wrote:

The first graph shows the theoretical MTF curves for a D610 as set up, including total MTF and the individual components that make it up. Parameters of the individual components are: green light at 0.55um, pixel pitch 5.9 microns, f/5.6, AA d/2 offset 0.39px, Lens/defocus/plug gaussian MTF of 0.3 pixels standard deviation.

The second graph is simply the first graph with the actual MTF curve added, as produced by the open source MTF Mapper by Frans van den Bergh. It shows how well theory models reality.

Thanks but ... I'm still not getting it. If the topic is directional sensitivity in sensor response then a single curve isn't going to show it. From the discussion and figure below, it seems that this is an average of vertical and horizontal behavior from an area near the center of the FOV (that last is important to rule out contributions from aberrations like astigmatism and distortion). OTOH from the plot below that, it seems that the first zero would appear closer to the 1 cycle/px edge than the second plot in the OP does. Hence my confusion.

That said, I suspect this a blind alley not worth chasing down.

I'm not familiar with MTF mapper -- and since this post is about anisotropy -- is it an average of vertical and horizontal? Sagittal and tangential? Just one or another? Is it an average taken at a number of locations in the FOV?

The measured MTF curves in the graphs are generated by MTF Mapper by analyzing the green channel raw data around the two slanted edges (one horizontal and one vertical) near the center of DPR new studio scene's DSC_0199.NEF. Only the MTF curve from the horizontal 'crop' and the ratio of the two curves are shown in the earlier graphs (see more below).

Slanted edges analyzed by MTF Mapper. I actually used smaller areas than shown for this analysis (about 300x200px) in order to minimize the effects of distortion

<...>

There may well be directionality but showing only the ratio isn't a very good way to show it. Why not just show the MTFs in the vertical and horizontal separately?

Voila'. What do you see?

Now that's more like it! Very interesting -- these are each the averages of the horizontal and vertical responses from the areas outlined above? Is that what is meant by vertical crop and horizontal crop? Or is it the behavior of the average response (in the two dimensions) in the two crops? In any case, it definitely shows more than the quotient plot. Wonder how other cameras behave here ...

Nor do I understand this statement: Ignoring astigmatisms all the individual terms are the same (it's the same camera/lens/NEF) in the vertical and horizontal direction. I'm sure the 85/1.8G is a fine lens with little or no astigmatism, but at this level of analysis surely that should be justified?

Out of curiosity, how did you model the other components of the MTF in figure 1?

Excellent, this gives me a chance to check the math. We are working off an edge spread function whose derivative is the line spread function (effectively a 1D point spread function) whose Fourier transform is the OTF whose modulus is the MTF (thanks for the correction), so I think we can treat everything as 1 dimensional, right?

Well, since this post is about 2D anisotropy not really. But 1D at a time OK.

f = frequency in cycles/pixel

Diffraction (chat): 2/pi * [arccos(s)-s*sqrt(1-s^2)] with s = lambda/pitch * F# * f

Pixel (box): abs[sin(pi*f)/(pi*f)]

AA (deltas): abs[cos(2pi * d/2 * f)] with d = the spatial shift in pixels introduced by the birefringent plate

Lens/defocus/fudge = exp[-(2pi*r*f)^2/2] with r = standard deviation in pixels

Are these correct? What would be a better model for slight defocus?

Hmm. Don't know what "chat" is -- @f/5.6 shouldn't this lens look pretty much like a tophat? Why not sinc^2? Sinc seems fine for pixel box -- unless microlens behavior makes a difference.

About the fudge factor. It's going to be lens dependent, but a Gaussian is probably close enough. For these things it's always good to run sensitivity tests (useful for the other factors as well). That is, double and halve this factor and see how sensitive the results are. If it's very sensitive you need to be very careful here. If it isn't, then no need to spend a lot of energy on this. Might also be interesting to try something with a completely different lineshape (it doesn't have to be realistic-- Lorentzian, say, for its long wings and simplicity) to test sensitivity.

The other thing is that if the lens shows measurable aberrations here then the assumption of symmetry needs to be revisited. You'd also have to be careful not to conflate horizontal/vertical with sagittal/tangential. In that case agreement of theory and measurement may be fortuitous. But I'd test sensitivity first before getting too cranked up about it.
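For anyone who wants to experiment, the four component formulas quoted above are easy to evaluate and multiply into the overall system response. A minimal NumPy sketch, using the parameter values quoted in the thread (0.55 um light, 5.9 um pitch, f/5.6, AA half-shift d/2 = 0.39 px, Gaussian sigma 0.3 px) — a sketch of the simplified model, not anyone's production code:

```python
import numpy as np

wavelength_um = 0.55     # green light
pitch_um = 5.9           # D610 pixel pitch
fnum = 5.6
aa_half_shift_px = 0.39  # d/2 from the thread
sigma_px = 0.3           # lens/defocus/fudge Gaussian std dev

f = np.linspace(0.0, 1.0, 501)  # spatial frequency in cycles/pixel

# Diffraction MTF of a circular aperture (the "chat" formula above)
s = np.clip(wavelength_um / pitch_um * fnum * f, 0.0, 1.0)
mtf_diff = 2.0 / np.pi * (np.arccos(s) - s * np.sqrt(1.0 - s * s))

# Pixel aperture (box): |sinc|; np.sinc(x) = sin(pi x)/(pi x)
mtf_pix = np.abs(np.sinc(f))

# Two-delta AA filter in one direction: |cos(2 pi (d/2) f)|
mtf_aa = np.abs(np.cos(2.0 * np.pi * aa_half_shift_px * f))

# Lens/defocus/fudge: Gaussian MTF
mtf_lens = np.exp(-(2.0 * np.pi * sigma_px * f) ** 2 / 2.0)

# Total camera system response = product of the components
mtf_system = mtf_diff * mtf_pix * mtf_aa * mtf_lens
```

The AA term's first zero lands at f = 1/(4 x 0.39) ≈ 0.64 cycles/pixel, which is what pulls the system curve down well before Nyquist.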

Re: ...Why oh why...

The_Suede wrote:

Great write-up. Thanks so much.

Bayer makes the (correct) assumption that chroma data in a normal image has lower energy at high frequencies than luma data, so in most "image-average" cases this is a good tradeoff of interpolation accuracy. Lose a little bit of green channel (luma) HF and gain a bit of chroma HF stability.

I thought the Bayer assumption was based not on what the properties of the captured image were, but on what's important to humans. We are more sensitive to high-spatial-frequency luminance variations than to high-spatial-frequency chromaticity variations. Indeed, we are oversensitive to HSF luminance variations, the evidence being Mach bands, which Wyszecki and Stiles describe as "light or dark narrow bands that are perceived near the border of two juxtaposed fields, one field being darker than the other." It appears that something in the human vision system is taking the derivative with respect to distance. Mach bands are (again quoting W&S, 2nd edition, page 556-7), "...in general, not found near chromatic borders when the two juxtaposed fields have no luminance variation but differ only in chromaticity."

I have seen curves that plot perceived changes in luminance versus frequency. They start out quite low, at low SFs, peak, and fall off. The equivalent curves for chromaticity changes are much higher at low frequencies (a big block of red still looks red to us), but fall off faster than the luminance curve at higher SFs.

I will search out these curves and post them if I find them.

The above was the basis for the YCbCr PhotoCD color encoding scheme, which sampled luminance data (Y) at twice the vertical and horizontal frequency of the chromaticity information (Cr and Cb). Analog television encoding used similar principles.

Don't let this little quibble make you think that I don't have great respect for your knowledge and your work.

Jim

Re: ...Why oh why...

JimKasson wrote:

I have seen curves that plot perceived changes in luminance versus frequency. They start out quite low, at low SFs, peak, and fall off. The equivalent curves for chromaticity changes are much higher at low frequencies (a big block of red still looks red to us), but fall off faster than the luminance curve at higher SFs.

I will search out these curves and post them if I find them.

Here's the curve I mentioned above:

Jim

Re: ...Why oh why...

The_Suede wrote:

Jack Hogan wrote:

The_Suede wrote: There's at least two, probably three reasonable answers, and we will - unless we can directly question the OD group at Sony or Nikon - never get the COMPLETE answer. There probably isn't "just one" correct answer, it's a combined production design choice.

My suggestions:

1. Since the second plate is actually there, it might be a birefringence angle miscalculation.

Hopefully not Plus this seems to be a bit of a trend with later Exmors I have evaluated from DPR raw captures (e.g. in addition to the above Nex-6, XA1, but curiously not a58) - could polarization in their lighting be causing any of this?

Since polarized scene light would have the same effect on all AA-filter cameras, I wouldn't think that's very probable. Additionally, I get almost the same (somewhat smaller) differences in h/v with those cameras using our backlit chart. And that's as close to perfectly circular in pol as you get - it has two diffusion layers and a grain reflector closest to the light source.

Excellent, good to know.

2. The miscalculation might actually be in the phase plate layer between the two birefringent layers, or the phase plate is calculated to give maximum effect at NIR and NUV (which would give a very low de-polarizing effect on green, where Jack measured and simulated the curves).

By phase plate I assume you mean what Nikon shows as the Wave plate below (their image). So if I understand correctly you are saying that if the circular polarization introduced were of light strength around mid-wavelengths the second birefringent plate (LPF2) would have little effect?

Circularly polarized light contains equal amounts of energy in all field vector angles (when integrated over longer periods of time). When you send a ray of circularly polarized light through a medium transition - from isotropic/random (air) into a birefringent medium (the filter plate) - the birefringent plate separates the ray's energy into its two perpendicularly polarized components. The component with a vector perpendicular to the optical axis of the birefringency surface gets one refraction index; the component where the vector goes through the birefringency surface sees another refraction index.

In an AA filter plate, the optical axis of the birefringency is at an angle to the physical surface. It isn't cut "along" the crystal surface, but at an angle over it. That's why even rays that hit the plate perfectly head on get separated. That's the entrance to (1) below.

At the exit of (1), both discrete rays are linearly polarized. One is polarized exactly along the surface of the birefringency surface, the other is 90º across.

If you send this directly into LP2, all that will happen is that one of the rays will see one refractive index, the other another one. You'll just move the two rays slightly differently, you will NOT separate them into four rays.

Aha, that makes a lot of sense.

Wave plate: By converting polarized light into circularly polarized light with the wave plate, two points are divided into four points. The original light and light separated in horizontal direction with the low-pass filter 1 are transmitted through the low-pass filter 2 with the wavelengths unchanged. The original light is transmitted as it is, and light separated with the low-pass filter 1 changes only direction vertically (two points are maintained).

Would the fact that the earlier MTF50 graphs showing strong horizontal/vertical spatial resolution separation were derived using information from all three channels (using White Balanced Raw data) negate this possibility?

A phase/wave plate is exactly the same as the AA filter plates - but in this case you've cut the crystal so that the plate surface and the birefringency surface are aligned. What this does is to take one ray, and make half of the energy pass slightly slower through the plate. The refractive index is basically "1 / light propagation speed", which is why vacuum has a refraction index of "1.000"... no particles to slow light down, it travels at full speed ahead. 1/1 speed.

If the ray is circularly polarized from the beginning, that makes no change at all (from a simplified PoV). If the ray passing in is linearly polarized, it means that when the plate thickness is equal to (light speed difference * wavelength/4) the ray is circularly polarized when it exits.

So the polarization (linear, elliptic, circular) at the exit is dependent on the wavelength of the ray passing in. When the wavelength is an integer divisor of the first possible 1/4-wave wavelength, the exit ray is perfectly circularly polarized. At the midpoints between the integer divisor points, the delay for one of the vectors adds up to a full circle phase shift - so it just adds back up to unity again.

So the wave plate is tuned to give good re-polarization at certain wavelengths.

At the wavelengths where the wave plate is matched to the 1/4 delay, the two rays at the exit from the wave plate are both circularly polarized, and they are both split once more by the LP2 plate - into four rays.

At the intermediate wavelengths giving full-circle phase shifts, their polarization is somewhere between elliptical and linear, and that means they won't be split by the second plate - just displaced slightly differently.

This means the second plate gives different amounts of energy spread depending on the wavelength of the incoming light...

Right!
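The wavelength dependence described above can be sketched numerically. Assuming an ideal retarder that is exactly quarter-wave at 550 nm (a made-up value for illustration - the real filter stack's retardance is unknown), the degree of circularity handed to LPF2 for a linearly polarized input at 45 degrees to the fast axis is |sin(delta)|, where delta is the phase retardation:

```python
import numpy as np

opd_nm = 137.5  # optical path difference: quarter wave at 550 nm (assumed)
wavelengths = np.array([400.0, 450.0, 550.0, 650.0, 700.0])  # nm

# Phase retardation at each wavelength
delta = 2.0 * np.pi * opd_nm / wavelengths

# |S3| (degree of circularity) for a 45-degree linear input through an
# ideal retarder: 1.0 means fully circular (LPF2 splits the ray in two
# equal halves), 0.0 means still linear (the ray is merely displaced).
circularity = np.abs(np.sin(delta))
```

At 550 nm the exit is perfectly circular; away from the tuned wavelength the split by the second plate becomes increasingly lopsided, which is the wavelength-dependent energy spread The_Suede describes.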

3. The "error" is indeed pre-calculated, and its there to minimize the negative effects of the line-skipping readout pattern the sensor uses for video recording.

no.1 would be embarrassing, but not the end of the world.

no.2 is actually quite probable, since it's R&G that really needs the AA layer. The "green" channel has no need of the AA-layer - it would be much better off without it!

Interesting. Can you explain why?

Information surface holes.

From a reconstruction perspective, the numerical reliability of the interpolation of a Bayer scheme is dependent on how big the information holes are. If a point in the image plane has a very low numerical analysis accuracy probability, it is a "hole". An unknown vector.

The size of the "hole" in the green map is the pixel width, plus the dead gap between pixels, minus the point spread function. Since the dead gap is nowadays typically very small, and the absolute practical minimum PSF in reality approaches 1µm to the 50% energy point, a normal ~5µm pixel will typically have ~3.5-4µm holes in the information map - placed at the centers of the pixels that are "not green".

If you map that so that black is "low probability/predictability", then it looks like this:

Green information map

Red information map

Since the relative area ratio of low predictability is so very much smaller than the "known" area in green, almost all points on the image surface can be interpolated accurately (minus noise considerations, of course...). Reliable interpolation >> no need of an AA filter to "borrow" adjacent information at the cost of lowered per-pixel contrasts.

In the red (and blue) that area ratio is almost reversed. There's a much higher surface percentage that is black - that we DON'T know (or cannot predict with any numerical reliability) - than what we actually know, if the lens is good. This is what causes aliasing and that horrible sub-set of aliasing called moire. The interpolation has to guess at positions of very low predictability - and if the surrounding "support" data is hard to interpret, the interpolation engine often guesses wrong. Moire is when those incorrect guesses are systematically modulated to a lower frequency pattern by the underlying HF image data, by the matching/mismatching of the underlying original surface data and the information readout overlay pattern.

All/any blurring introduced before the image formation makes the information surface more uniform, minimizes the chance of really wrong interpolation guesses - but at the cost of per-pixel contrast, or "sharpness".
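A trivial sketch of the area ratios behind this argument: on an RGGB Bayer mosaic, green samples half the sites while red and blue each sample only a quarter, which is why the "holes" in the red and blue information maps dominate:

```python
import numpy as np

# One RGGB tile repeated over a toy sensor
tile = np.array([["R", "G"],
                 ["G", "B"]])
h, w = 6, 6
cfa = np.tile(tile, (h // 2, w // 2))

# Fraction of sensor sites sampled per channel
green_fraction = np.mean(cfa == "G")  # 0.5
red_fraction = np.mean(cfa == "R")    # 0.25
blue_fraction = np.mean(cfa == "B")   # 0.25
```

The remaining 50% (green) vs 75% (red/blue) of the surface must be interpolated, which is where the "hole" size argument above comes in.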

Bayer makes the (correct) assumption that chroma data in a normal image has lower energy at high frequencies than luma data, so in most "image-average" cases this is a good tradeoff of interpolation accuracy. Lose a little bit of green channel (luma) HF and gain a bit of chroma HF stability.

Yes, now that you put it like that it all falls into place. Why didn't I think of that? Each one of your posts is like a full fledged lecture - and it takes me several reads (often over several days) to digest them. Sometimes I only understand them completely a few weeks later

So 1 micron is "the absolute practical minimum PSF in reality" huh?  I always wondered about that.  Especially given diffraction, current accuracy/precision of focus and shake.  I guess that's why Canon went back to pixel pitches larger than 1.5 microns after having introduced the SX50HS.

Thank you very much for your help and patience.

Jack

Re: MTF: I think I Isolated the D610 Directional AA filter's MTF Curve

wfektar wrote:

Jack Hogan wrote:

The first graph shows the theoretical MTF curves for a D610 as set up, including total MTF and the individual components that make it up. Parameters of the individual components are: green light at 0.55um, pixel pitch 5.9 microns, f/5.6, AA d/2 offset 0.39px, Lens/defocus/plug gaussian MTF of 0.3 pixels standard deviation.

The second graph is simply the first graph with the actual MTF curve as produced by the open source MTF Mapper by Frans van den Bergh. It shows how well theory models reality.

Thanks but ... I'm still not getting it. If the topic is directional sensitivity in sensor response then a single curve isn't going to show it. From the discussion and figure below, it seems that this is an average of vertical and horizontal behavior from an area near the center of the FOV (that last is important to rule out contributions from aberrations like astigmatism and distortion).

Each slanted edge is measured separately and results in the MTF curves below. Each refers to the spatial resolution curve (MTF) measured in the direction perpendicular to the edge under analysis.

OTOH from the plot below it seems that the first zero would appear closer to the 1 cycle/px edge than the second plot in the OP does. Hence my confusion.

That said, I suspect this is a blind alley not worth chasing down.

I'm not familiar with MTF mapper -- and since this post is about anisotropy -- is it an average of vertical and horizontal? Sagittal and tangential? Just one or another? Is it an average taken at a number of locations in the FOV?

The measured MTF curves in the graphs are generated by MTF Mapper by analyzing the green channel raw data around the two slanted edges (one horizontal and one vertical) near the center of DPR new studio scene's DSC_0199.NEF. Only the MTF curve from the horizontal 'crop' and the ratio of the two curves are shown in the earlier graphs (see more below).

Slanted edges analyzed by MTF Mapper. I actually used smaller areas than shown for this analysis (about 300x200px) in order to minimize the effects of distortion

<...>

There may well be directionality but showing only the ratio isn't a very good way to show it. Why not just show the MTFs in the vertical and horizontal separately?

Voila'. What do you see?

Now that's more like it! Very interesting -- these are each the averages of the horizontal and vertical responses from the areas outlined above? Is that what is meant by vertical crop and horizontal crop? Or is it the behavior of the average response (in the two dimensions) in the two crops? In any case, it definitely shows more than the quotient plot.

It appears that when the edge runs almost horizontally the convention is to speak of Vertical Spatial Resolution and vice versa. To avoid confusion I refer to the crop direction. So the horizontal crop is the one that includes an almost horizontal edge (about 9 degrees off horizontal actually), just below the center of the frame in the referenced raw file.

I assume you are familiar with the slanted edge method of extracting a camera/lens system's MTF. It generates an Edge Spread Function from each edge by projecting all pixels around the edge in question onto its normal:

Image by Frans van den Bergh from his post here: http://www.dpreview.com/forums/post/52944847

This clever approach results in a highly oversampled, very well defined and accurate Edge Spread Function. If I understand the math correctly the differential of the ESF is a fairly accurate representation of the 2D Line Spread Function of the edge, which we can consider a 1D Point Spread Function assuming the edge is perfectly straight (or perhaps I should say 3D LSF and 2D PSF. What's the convention?). The Fourier transform of the 1D PSF is the Optical Transfer Function and its modulus is finally the MTF we are after - measured in the direction of the normal to the edge only. Correct?
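The ESF -> LSF -> MTF chain just described can be sketched on a synthetic edge. This mimics the processing steps in spirit only (Gaussian-blurred ideal step, numerical derivative, window, FFT), not MTF Mapper's actual implementation:

```python
import numpy as np

oversample = 8    # samples per pixel, as the edge-normal projection provides
sigma_px = 0.5    # Gaussian blur of the synthetic edge, in pixels
n = 64 * oversample

# Position along the edge normal, in pixels, centered on the edge
x = (np.arange(n) - n / 2 + 0.5) / oversample

# Edge Spread Function: ideal step blurred by a Gaussian PSF
psf = np.exp(-x**2 / (2.0 * sigma_px**2))
esf = np.cumsum(psf)
esf /= esf[-1]

# Line Spread Function = derivative of the ESF
lsf = np.gradient(esf)

# Window to suppress truncation artifacts, then FFT; |OTF| = MTF
lsf_w = lsf * np.hamming(n)
mtf = np.abs(np.fft.rfft(lsf_w))
mtf /= mtf[0]
freq = np.fft.rfftfreq(n, d=1.0 / oversample)  # cycles/pixel
```

For this purely Gaussian edge the recovered MTF should track exp(-(2 pi sigma f)^2 / 2) closely, which is a handy sanity check on the pipeline.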

Wonder how other cameras behave here ...

No time now, but I'll post an old style regular strength AA sensor's H/V MTF curves when I have some.

Nor do I understand this statement: Ignoring astigmatisms all the individual terms are the same (it's the same camera/lens/NEF) in the vertical and horizontal direction. I'm sure the 85/1.8G is a fine lens with little or no astigmatism, but at this level of analysis surely that should be justified?

Out of curiosity, how did you model the other components of the MTF in figure 1?

Excellent, this gives me a chance to check the math. We are working off an edge spread function whose derivative is the line spread function (effectively a 1D point spread function) whose Fourier transform is the OTF whose modulus is the MTF (thanks for the correction), so I think we can treat everything as 1 dimensional, right?

Well, since this post is about 2D anisotropy not really. But 1D at a time OK.

Right.

f = frequency in cycles/pixel

Diffraction (chat): 2/pi * [arccos(s)-s*sqrt(1-s^2)] with s = lambda/pitch * F# * f

Pixel (box): abs[sin(pi*f)/(pi*f)]

AA (deltas): abs[cos(2pi * d/2 * f)] with d = the spatial shift in pixels introduced by the birefringent plate

Lens/defocus/fudge = exp[-(2pi*r*f)^2/2] with r = standard deviation in pixels

Are these correct? What would be a better model for slight defocus?

Hmm. Don't know what "chat" is -- @f/5.6 shouldn't this lens look pretty much like a tophat? Why not sinc^2? Sinc seems fine for pixel box -- unless microlens behavior makes a difference.

You may be interested in this entry in Frans' blog which explains both.

About the fudge factor. It's going to be lens dependent, but a Gaussian is probably close enough. For these things it's always good to run sensitivity tests (useful for the other factors as well). That is, double and halve this factor and see how sensitive the results are. If it's very sensitive you need to be very careful here. If it isn't, then no need to spend a lot of energy on this. Might also be interesting to try something with a completely different lineshape (it doesn't have to be realistic-- Lorentzian, say, for its long wings and simplicity) to test sensitivity.

As one would expect it gets more and more sensitive as the pixel pitch gets smaller. Which makes me understand the need for Nikon's technical guide and lens recommendations (only the best;-) for the D800e.

The other thing is that if the lens shows measurable aberrations here then the assumption of symmetry needs to be revisited. You'd also have to be careful not to conflate horizontal/vertical with sagittal/tangential. In that case agreement of theory and measurement may be fortuitous. But I'd test sensitivity first before getting too cranked up about it.

Right.

Re: ...Why oh why...

JimKasson wrote:

The_Suede wrote:

Great write-up. Thanks so much.

Bayer makes the (correct) assumption that chroma data in a normal image has lower energy at high frequencies than luma data, so in most "image-average" cases this is a good tradeoff of interpolation accuracy. Lose a little bit of green channel (luma) HF and gain a bit of chroma HF stability.

I thought the Bayer assumption was based not on what the properties of the captured image were, but on what's important to humans. We are more sensitive to high-spatial-frequency luminance variations than to high-spatial-frequency chromaticity variations. Indeed, we are oversensitive to HSF luminance variations, the evidence being Mach bands, which Wyszecki and Stiles describe as "light or dark narrow bands that are perceived near the border of two juxtaposed fields, one field being darker than the other." It appears that something in the human vision system is taking the derivative with respect to distance. Mach bands are (again quoting W&S, 2nd edition, page 556-7), "...in general, not found near chromatic borders when the two juxtaposed fields have no luminance variation but differ only in chromaticity."

I have seen curves that plot perceived changes in luminance versus frequency. They start out quite low, at low SFs, peak, and fall off. The equivalent curves for chromaticity changes are much higher at low frequencies (a big block of red still looks red to us), but fall off faster than the luminance curve at higher SFs.

I will search out these curves and post them if I find them.

The above was the basis for the YCbCr PhotoCD color encoding scheme, which sampled luminance data (Y) at twice the vertical and horizontal frequency of the chromaticity information (Cr and Cb). Analog television encoding used similar principles.

Don't let this little quibble make you think that I don't have great respect for your knowledge and your work.

Jim

As long as a counterpoint is valid, it should be appreciated

Riposte:

WHY do you think the human vision system evolved to that balance of bandwidth considerations? I have the reports from the research you mentioned here somewhere, but in paperback form.

The main reason I usually don't use your (very correct) PoV or P.o.Explanation regarding cameras and optics is that the Mach effect is almost perfectly uniform between human test subjects in space angle dependency. Though it seems to decrease in spatial frequency as you get older...

In a camera on the other hand, you can often not pinpoint a single scale of presentation. Which means that you have to fall back to the linear scale-invariant presentation mode. The human vision adaptation would/should then be applied to the end result AFTER you have determined the "average inspection resolution". Not to the original "resolution" - which is dependent on camera resolution (in MP or linear) anyway...

One reason as to why the Mach band effect is hard to quantify unless you have lots and lots of test subjects is that it isn't really a physical effect. It is entirely a psycho-visual after-construct. Which is also quite obvious when you compare the space angle coverage of the high resolution part of the fovea to the angle resolution of the Mach effect. So the effect varies with the programming of the visual cortex of the test subject, not the actual, physical eye.

..............

And while we're at it...

The R&D work with YCbCr was based on the analog YUV signal transmission standard, and that was actually based on the need to keep both a B/W signal and the possibility to add in color (when the customer had a color-capable TV)... The bandwidth savings were a lower priority than the physical distribution channel separation of luma/chroma for YUV, but they were indeed, as you say, a larger part of the YCbCr specification.

Sharpening Strategies?

wfektar wrote:

Voila'. What do you see?

...Wonder how other cameras behave here ...

D4 Supposedly with the same set up and lens as above

Here are the green channel MTF curves from a D4 resulting from analyzing the slanted edges in the Horizontal and Vertical crops of DSC_9811.NEF from DPR's new studio scene captures. The D4 appears to have a 'classic' AA of roughly the same strength horizontally and vertically, which causes first minima around 0.703 and 0.734 cycles per pixel, corresponding to half shifts of around 0.356 and 0.340 pixels in the H and V crops respectively. Let's call it an average 0.35px shift.
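For the two-delta AA model quoted earlier, |cos(2 pi (d/2) f)| has its first zero at f0 = 1/(2d), so the half-shift d/2 = 1/(4 f0) follows directly from the measured first minimum. A quick check of the numbers above:

```python
def aa_half_shift(first_zero_cpp: float) -> float:
    """Half-shift d/2 in pixels, given the first MTF zero in cycles/pixel,
    for the two-delta AA model |cos(2*pi*(d/2)*f)|."""
    return 1.0 / (4.0 * first_zero_cpp)

print(aa_half_shift(0.703))  # ~0.356 px (horizontal crop)
print(aa_half_shift(0.734))  # ~0.341 px (vertical crop)
```

These reproduce the 0.356 and ~0.34 px figures quoted for the D4 crops.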

So, with this knowledge about the strength of the AA in the green channel, what kernel would you use to apply proper AA-restoring capture sharpening?  What about in the case of the D610 above?  And is it worthwhile to calculate the AA strength in the other channels and apply capture sharpening to the raw data by channel before demosaicing?

Jack
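As a hedged sketch of one possible answer to the kernel question (not anything proposed in the thread): if the AA transfer in one direction is modeled as |cos(2 pi (d/2) f)|, a Wiener-style frequency-domain inverse yields a 1-D capture-sharpening kernel. The half-shift and the noise-to-signal constant below are illustrative values, not measured ones:

```python
import numpy as np

d_half = 0.35   # AA half-shift in pixels (the D4 average above)
nsr = 0.01      # made-up noise-to-signal regularization constant
n = 32

f = np.fft.rfftfreq(n)                    # cycles/pixel, up to Nyquist
h = np.cos(2.0 * np.pi * d_half * f)      # AA transfer (real, symmetric PSF)

# Wiener-style inverse: boosts attenuated frequencies while staying
# bounded near the transfer's zero (which sits beyond Nyquist here)
wiener = h / (h * h + nsr)

kernel = np.fft.irfft(wiener, n)          # 1-D sharpening tap weights
kernel = np.fft.fftshift(kernel)          # center the kernel
```

Doing this per channel on the raw data before demosaicing, as Jack suggests, would need a separate d_half fit per channel, since the AA shift is wavelength dependent.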

Re: MTF: I think I Isolated the D610 Directional AA filter's MTF Curve

Jack Hogan wrote:

With regards to the zeros following the first one I have noticed that MTF Mapper's MTF curves sometimes show a slight periodic waviness - which I have felt the desire to filter out, but never did. 'Slight' gets amplified when taken as a ratio of two small numbers. I wish it weren't there though, also because it could result in loss of accuracy in determining the exact first zero - which is an indication of the 'strength' of the AA. Correct?

I should ask Frans about it. I have a feeling that the small higher frequency wave (something less than 0.1 cycles/pixel) superimposed on the MTF curve is a result of the various computations required to generate the Line Spread Function, its derivative PSF and subsequent Fourier transform - which include (-1,0,1) smoothing on the PSF and apodization through a Hamming window. I think this is done by most MTF analyzers though.

First off, I believe congratulations are in order on a well-executed experiment! I rather like your idea of dividing the H/V MTFs.

I have also seen those ripples superimposed on the MTF curves, but have not yet taken the time to isolate the exact cause. I'll take a stab at speculating at a possible cause any day, though

a) On long edges, there may be some barely detectable curvature caused by radial lens distortion. This usually just broadens the PSF slightly, but it is possible that the curvature interacts with the pixel spacing in just the right (wrong?) way to cause ripples in the MTF curve. Or,

b) Estimating edge orientation accurately is crucial for the slanted edge method to produce an accurate MTF curve. Peter Burns published a few papers on this topic (well, the slanted edge method in general) where he shows exactly how an error in edge orientation manifests in the MTF. Incidentally, he concluded that edge orientation estimation must be accurate to at least 0.5 degrees, IIRC. MTF Mapper tries really hard to measure edge orientation accurately, but I have to trade off the computational cost against the potential accuracy gains. Anyhow, once an initial edge orientation estimate has been calculated using plain old least-squares line fitting, a second refinement phase kicks in. In this refinement phase the edge orientation is systematically adjusted to minimize the variance of the differences between consecutive samples in the ESF. This step is something that sounded like a good idea at the time, and fairly extensive testing seems to indicate that it always improves the edge orientation estimation. Having said that, I can imagine that this particular algorithm might cause the ripples in the MTF curve by inadvertently inducing some periodic interaction.

Well, enough speculation for one day. I would appreciate it if you could send me a sample when you encounter this phenomenon again, and I'll see if there is a way to eliminate the ripples (without having to resort to filtering).

-F
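The refinement step Frans describes - sweeping candidate edge angles and scoring each by the variance of consecutive-sample differences in the projected ESF - can be illustrated with a toy example. This is a deliberate simplification on synthetic data, not MTF Mapper's actual algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)
true_angle = np.deg2rad(9.0)  # edge ~9 degrees off horizontal, as in the thread

# Synthetic 40x40 crop: a smooth step across a tilted edge, plus faint noise
ys, xs = np.mgrid[0:40, 0:40]
dist = (ys - 20) * np.cos(true_angle) - (xs - 20) * np.sin(true_angle)
img = 1.0 / (1.0 + np.exp(-dist / 0.7)) + rng.normal(0.0, 0.002, dist.shape)

def score(angle):
    # Project every pixel center onto the normal of the candidate edge,
    # sort to form the oversampled ESF, and score its local roughness
    d = (ys - 20) * np.cos(angle) - (xs - 20) * np.sin(angle)
    order = np.argsort(d.ravel())
    esf = img.ravel()[order]
    return np.var(np.diff(esf))

# Sweep candidate angles; the correct orientation gives the smoothest ESF
angles = np.deg2rad(np.linspace(7.0, 11.0, 81))
best = angles[np.argmin([score(a) for a in angles])]
```

At the wrong angle, pixels at different true distances from the edge interleave after sorting, so consecutive ESF differences fluctuate and the variance rises; the minimum therefore sits near the true orientation.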
