MTF: I think I Isolated the D610 Directional AA filter's MTF Curve

Started Apr 10, 2014 | Discussions thread
The_Suede Contributing Member • Posts: 652
Re: ...Why oh why...

Jack Hogan wrote:

The_Suede wrote: There's at least two, probably three reasonable answers, and we will - unless we can directly question the OD group at Sony or Nikon - never get the COMPLETE answer. There probably isn't "just one" correct answer, it's a combined production design choice.

My suggestions:

  1. Since the second plate is actually there, it might be a birefringence angle miscalculation.

Hopefully not! Plus, this seems to be a bit of a trend with later Exmors I have evaluated from DPR raw captures (e.g. in addition to the above, the NEX-6 and X-A1, but curiously not the A58) - could polarization in their lighting be causing any of this?

Since polarized scene light would have the same effect on all AA-filter cameras, I wouldn't think that's very probable. Additionally, I get almost the same (somewhat smaller) h/v differences with those cameras using our backlit chart. And that's as close to perfectly circular polarization as you get - it has two diffusion layers and a grain reflector closest to the light source.

2. The miscalculation might actually be in the phase plate layer between the two birefringent layers, or the phase plate is calculated to give maximum effect at NIR and NUV (which would give a very low de-polarizing effect on green, where Jack measured and simulated the curves).

By phase plate I assume you mean what Nikon shows as the Wave plate below (their image). So if I understand correctly, you are saying that if the circular polarization introduced were weak around mid-wavelengths, the second birefringent plate (LPF2) would have little effect?

Circularly polarized light contains equal amounts of energy in all field vector angles (when integrated over longer periods of time). When you send a ray of circularly polarized light through a medium transition - from isotropic/random (air) into a birefringent medium (the filter plate) - the birefringent medium separates the ray's energy into its two perpendicularly discretized components. The component with its vector perpendicular to the optical axis of the birefringency surface sees one refractive index; the component whose vector goes through the birefringency surface sees another.

In an AA filter plate, the optical axis of the birefringency is at an angle to the physical surface. It isn't cut "along" the crystal surface, but at an angle over it. That's why even rays that hit the plate perfectly head on get separated. That's the entrance to (1) below.

At the exit of (1), both discrete rays are linearly polarized. One is polarized exactly along the birefringency surface, the other at 90º to it.

If you send this directly into LP2, all that will happen is that one of the rays sees one refractive index and the other ray sees the other. You'll just move the two rays slightly differently - you will NOT separate them into four rays.
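To put toy numbers on that, here's a minimal sketch (my own simplified beam-displacer model, not Nikon's actual filter data) showing why two already-linear rays can't split again in LP2:

```python
import numpy as np

# Toy model (my numbers/model, not Nikon's): LP2 is a beam displacer that
# walks off the vertical polarization component and passes the horizontal
# one straight through. A ray only splits if it has BOTH components.

H = np.array([1, 0], dtype=complex)   # ray A after LP1: purely horizontal
V = np.array([0, 1], dtype=complex)   # ray B after LP1: purely vertical
C = np.array([1, 1j]) / np.sqrt(2)    # circular ray, for comparison

for name, jones in (("A (H-linear)", H), ("B (V-linear)", V), ("circular", C)):
    e = np.vdot(V, jones)                         # component LP2 displaces
    o = jones - e * V                             # component LP2 passes
    n = int(abs(e) > 1e-9) + int(np.linalg.norm(o) > 1e-9)
    print(f"{name}: {n} ray(s) out of LP2")
# -> the two linear rays give 1 ray each (2 in, 2 out, just shifted apart),
#    while a circular ray would split in two. No wave plate, no 4-way split.
```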

Wave plate: By converting polarized light into circularly polarized light with the wave plate, two points are divided into four points. The original light and light separated in horizontal direction with the low-pass filter 1 are transmitted through the low-pass filter 2 with the wavelengths unchanged. The original light is transmitted as it is, and light separated with the low-pass filter 1 changes only direction vertically (two points are maintained).

Would the fact that the earlier MTF50 graphs showing strong horizontal/vertical spatial resolution separation were derived using information from all three channels (using White Balanced Raw data) negate this possibility?

A phase/wave plate is exactly the same as the AA filter plates - but in this case you've cut the crystal so that the plate surface and the birefringency surface are aligned. What this does is take one ray and make half of the energy pass slightly slower through the plate. The refractive index is basically "1 / light propagation speed", which is why vacuum has a refractive index of "1.000"... no particles to slow light down, it travels at full speed ahead. 1/1 speed.

If the ray is circularly polarized from the beginning, that makes no change at all (from a simplified PoV). If the ray passing in is linearly polarized, it means that when the plate's retardation (refractive-index difference times thickness) equals a quarter of the wavelength, the ray is circularly polarized when it exits.

So the polarization (linear, elliptical, circular) at the exit depends on the wavelength of the ray passing in. When the wavelength is an integer divisor of the first possible 1/4-wave wavelength, the exit ray is perfectly circularly polarized. At the midpoints between those integer-divisor points, the delay for one of the vectors adds up to a full-circle phase shift - so it just adds back up to unity again.

So the wave plate is tuned to give good re-polarization at certain wavelengths.
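A back-of-envelope illustration of that tuning (toy plate values I made up, not measured D610 data) - with a thick multi-order plate the circularity swings hard across the visible band:

```python
import numpy as np

# A multi-order wave plate with birefringence dn and thickness d retards one
# component by dn*d/lambda waves (dispersion ignored). For light entering
# linearly polarized at 45 deg to the plate axes, the normalized Stokes
# parameter S3 = sin(2*pi*dn*d/lambda) measures circularity:
# |S3| = 1 -> fully circular (LP2 splits it), S3 = 0 -> linear again (no split).

dn = 0.009        # assumed quartz-like birefringence
d_nm = 495_000    # assumed thickness: dn*d = 4455 nm = 8.25 waves at 540 nm

for lam in (450, 495, 540, 600, 660, 700):
    waves = dn * d_nm / lam
    s3 = np.sin(2 * np.pi * waves)
    print(f"{lam} nm: {waves:5.2f} waves retardation, S3 = {s3:+.2f}")
# -> +1.00 at 540 nm and -1.00 at 660 nm (full split), but ~0 at 495 nm,
#    where the delay adds up to whole waves and the ray stays linear.
```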

At the wavelengths where the wave plate is matched to the 1/4 delay, the two rays at the exit from the wave plate are both circularly polarized, and they are both split once more by the LP2 plate - into four rays.

At the intermediate wavelengths giving full-circle phase shifts, their polarization is somewhere between elliptical and linear, and that means they won't be split by the second plate. Just displaced slightly differently.

This means the second plate gives different amounts of energy spread depending on the wavelength of the incoming light...
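And the whole stack in the same toy model - LP1 -> wave plate -> LP2 - just to count spots (again my own simplified numbers, not actual filter specs):

```python
import numpy as np

def displace(rays, axis, shift):
    """Beam-displacer plate: the component along `axis` walks off by `shift`."""
    out = []
    for pos, j in rays:
        a = np.asarray(axis, dtype=complex)
        e = np.vdot(a, j)                     # walk-off component
        o = j - e * a                         # straight-through component
        if abs(e) > 1e-9:
            out.append((pos + np.asarray(shift, float), e * a))
        if np.linalg.norm(o) > 1e-9:
            out.append((pos, o))
    return out

def waveplate(rays, phi):
    """Retarder with axes at 45 deg to H/V; phi is the phase retardation."""
    c, s = np.cos(phi / 2), np.sin(phi / 2)
    M = np.array([[c, 1j * s], [1j * s, c]])
    return [(pos, M @ j) for pos, j in rays]

src = [(np.zeros(2), np.array([1, 1j]) / np.sqrt(2))]    # circular input ray
for phi, label in ((np.pi / 2, "quarter-wave"), (2 * np.pi, "full-wave")):
    rays = displace(src, (1, 0), (1, 0))                 # LP1: horizontal split
    rays = waveplate(rays, phi)                          # re-polarize (or not)
    rays = displace(rays, (0, 1), (0, 1))                # LP2: vertical split
    print(f"{label} wavelength: {len(rays)} spots")      # 4 vs 2
```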

3. The "error" is indeed pre-calculated, and its there to minimize the negative effects of the line-skipping readout pattern the sensor uses for video recording.

no.1 would be embarrassing, but not the end of the world.

no.2 is actually quite probable, since it's R&B that really needs the AA layer. The "green" channel has no need of the AA-layer - it would be much better off without it!

Interesting. Can you explain why?

Information surface holes.

From a reconstruction perspective, the numerical reliability of the interpolation of a Bayer scheme depends on how big the information holes are. If a point in the image plane has a very low probability of accurate numerical reconstruction, it is a "hole". An unknown vector.

The size of the "hole" in the green map is the pixel width, plus the dead gap between pixels, minus the point spread function. Since the dead gap is nowadays typically very small, and the absolute practical minimum PSF in reality approaches 1µm to the 50% energy point, a normal ~5µm pixel will typically have ~3.5-4µm holes in the information map - placed at the centers of the pixels that are "not green".
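Quick arithmetic on that, with illustrative values (my assumptions, not measurements):

```python
# Hole size ~ pixel width + dead gap - PSF width (all values assumed):
pitch_um = 5.0   # pixel pitch
gap_um   = 0.2   # assumed dead gap between pixels (very small nowadays)
psf_um   = 1.2   # assumed practical PSF width at the 50% energy point

hole_um = pitch_um + gap_um - psf_um
print(f"information hole ~ {hole_um:.1f} um")   # ~4.0 um, in the 3.5-4 um range
```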

If you map that so that black is "low probability/predictability", then it looks like this:

Green information map

Red information map

Since the relative area of low predictability is so very much smaller than the "known" area in green, almost all points on the image surface can be interpolated accurately (minus noise considerations, of course...). Reliable interpolation >> no need of an AA filter to "borrow" adjacent information at the cost of lowered per-pixel contrast.

In the red (and blue) that area ratio is almost reversed. There's a much higher surface percentage that is black - that we DON'T know (or can't predict with any numerical reliability) - than what we actually know, if the lens is good. This is what causes aliasing, and that horrible subset of aliasing called moire. The interpolation has to guess at positions of very low predictability - and if the surrounding "support" data is hard to interpret, the interpolation engine often guesses wrong. Moire is when those incorrect guesses are systematically modulated into a lower-frequency pattern by the underlying HF image data - by the matching/mismatching of the underlying original surface data and the information readout overlay pattern.
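Rough area bookkeeping to put numbers on that (my own illustrative assumptions, reusing the ~5µm pitch and ~1µm PSF from above):

```python
# All numbers are illustrative assumptions, not measurements.
pitch, psf = 5.0, 1.0   # um

# Green: samples at every other site; ~4 um holes sit at the centers of the
# non-green pixels, i.e. at half of all sites (dead gap ignored here).
hole = pitch - psf
green_known = 1 - 0.5 * (hole / pitch) ** 2

# Red: samples only every 2nd row AND column, so a known patch of roughly
# (pixel + PSF)^2 sits on a 2*pitch grid.
red_known = ((pitch + psf) / (2 * pitch)) ** 2

print(f"green map known: ~{green_known:.0%}")   # ~68% known vs ~32% holes
print(f"red   map known: ~{red_known:.0%}")     # ~36% known vs ~64% holes
```

So in this rough model green is mostly "known" surface while red/blue is mostly holes - the near-reversal described above.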

All/any blurring introduced before image formation makes the information surface more uniform and minimizes the chance of really wrong interpolation guesses - but at the cost of per-pixel contrast, or "sharpness".

Bayer makes the (correct) assumption that chroma data in a normal image has lower energy at high frequencies than luma data, so in most "image-average" cases this is a good tradeoff of interpolation accuracy. Lose a little bit of green channel (luma) HF and gain a bit of chroma HF stability.

no.3 is also quite probable, but probably not the complete and ultimate answer. They no doubt thought about it, the question is how much weight (over the still image performance) Nikon would allow such a product optimization to have.

Thanks as always for all the great insights!

Jack
