Optical Spatial Frequency Filtering of Image Sensors?

Detail Man

I know just a little bit about analog (continuous) and digital (discrete) filtering systems. I was wondering whether someone could help me better understand how the various components combine to accomplish the optical low-pass filtering of image-sensor photo-sites, for the purpose of limiting spatial frequencies at and beyond the Nyquist-Shannon sampling limit (at 1/2 the spatial sampling frequency). I read the following information published on a LumoLabs web-page:

... what everybody calls "AA filter" or "anti-aliasing filter" or "low-pass filter" isn't one. A low pass filter destroys the signal at high spatial frequencies. Every digital camera does have a low pass filter. But it is the finite photo-sensitive size of a sensel or pixel, typically enlarged to almost the full sensel's extent by a microlens. The microlens array provides better quantum efficiency (fewer photons get lost) but also provides a camera's true low pass filter. Assuming a fill factor of 100%, it destroys signal at twice the Nyquist frequency (i.e., at the spatial sampling frequency at which sensels are placed on the sensor) and dampens signal at the Nyquist frequency to 64%.

The problem is as follows: Assume a ray (e.g., from a star) reaches your camera and hits a single sensel. Above the sensel will be a green, red or blue filter from the Bayer filter array. This means that the camera will see a green, red or blue star. There is no way for the camera to detect the star's real color, and no smart software can recover the missing color information. This is the false color problem when demosaicing specular highlights ("stars") or contrast edges. If contrast edges form a regular pattern, the false color will vary slowly across the image forming what is called a false color moiré pattern ...


... the entire problem disappears if a ray hits more than a single sensel, because then the color can be reconstructed by comparing the signals from neighboring sensels having different Bayer filter colors.

This is what the filter I call the "Bayer-AA filter" achieves. It actually isn't a low pass filter at all. It typically is a pair of lithium niobate crystal plates turned 90° with respect to each other. Lithium niobate is birefringent and splits a ray into two. The pair therefore splits a ray into four, designed to hit exactly one of each of the four Bayer color filters. In order to achieve this goal, the ray split distance must be equal to the sensel pitch (or pixel size). The result is exact color reproduction. Such a Bayer-AA filter is said to have 100% strength.

But the birefringent crystals interact with the microlens array, and they combine into a low pass filter with only half that spatial frequency: now, the signal at the Nyquist frequency itself is destroyed.

Because vendors don't want to sacrifice the full sensor resolution, they typically design their Bayer-AA filters to be weaker, where a 0% strength would correspond to their absence.

The Nikon D800E has such a 0% strength. However, it isn't achieved by replacing the birefringent crystals. This would require a re-calibration of the optical path, AF module, focus screen and color profiles. Rather, Nikon uses a pair of lithium niobate crystal plates turned 180° with respect to each other, which cancel each other out (if aligned properly).


Source: http://www.falklumo.com/lumolabs/articles/D800AA/index.html


In your understanding, is the above quoted text an accurate description of what is going on?

It sounds like the microlens array assembly itself is (at least) as important a component as the lithium-niobate crystal plates themselves, if it results in a zero response at the spatial sampling frequency and roughly -3.9 dB (about -0.65 EV) of attenuation at 1/2 of the spatial sampling frequency.

Thus, it sounds like when people say that a camera has "no AA filter", a significant optical low-pass spatial-frequency filter response nevertheless remains, implemented by the microlens-array assembly itself.
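As a sanity check on those quoted figures, here is a minimal numerical sketch (my own, assuming a one-dimensional aperture with 100% fill factor, i.e. width equal to the sensel pitch):

```python
import numpy as np

p = 1.0                        # sensel pitch (arbitrary units); aperture width = pitch
f_nyq = 1.0 / (2.0 * p)        # Nyquist frequency
f_smp = 1.0 / p                # spatial sampling frequency

def box_mtf(f, width):
    # MTF magnitude of a uniform sampling aperture: |sin(pi f w) / (pi f w)|
    return np.abs(np.sinc(f * width))    # np.sinc(x) = sin(pi x) / (pi x)

print(box_mtf(f_nyq, p))                    # 0.6366... = 2/pi, the quoted "64%"
print(box_mtf(f_smp, p))                    # ~0: first null at the sampling frequency
print(20.0 * np.log10(box_mtf(f_nyq, p)))   # about -3.9 dB
print(np.log2(box_mtf(f_nyq, p)))           # about -0.65 EV
```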

Any informative and interesting thoughts appreciated,

DM ... :P
 
If you sample a voltage signal by integrating over a small sampling interval, then there is a natural "sin(x) / x" type low-pass response caused by systematic cancelling of the positive and negative parts of the voltage signal within the interval. For example, if the sampling interval exactly matches one complete cycle of a sinusoidal signal, then you get zero response to that signal. But for a sensor that responds to intensity I don't see why you get a low-pass response because you don't get cancellation - intensity is a positive quantity. An extended sensor responds to any number of black and white lines falling on it by giving an average value.

But it is late and I've had a long day...

Joe
 
Honestly, what possible need can a photographer have for knowing this technical stuff you seem so focused on in your posts?

You want to know about stuff like that, take a physics degree.
 
Detail Man wrote:

I know just a little bit about analog (continuous) and digital (discrete) filtering systems. ... I read the following information published on a LumoLabs web-page:

... what everybody calls "AA filter" or "anti-aliasing filter" or "low-pass filter" isn't one. A low pass filter destroys the signal at high spatial frequencies. Every digital camera does have a low pass filter. But it is the finite photo-sensitive size of a sensel or pixel, typically enlarged to almost the full sensel's extent by a microlens. ... Assuming a fill factor of 100%, it destroys signal at twice the Nyquist frequency ... and dampens signal at the Nyquist frequency to 64%. ...

In your understanding, is the above quoted text an accurate description of what is going on?

Doesn't the article just say that it's the finite size of the pixel that's the low pass filter? If the pixel's photo-sensitive area were as large as the pixel itself (= 100% fill factor), then microlenses wouldn't be needed.
 
The pixel has depth. The microlens array is required to focus off-nadir light into the pixel where it is absorbed and converted into electricity. This increases QE by about 50%.
 
BSweeney wrote:

The pixel has depth. The microlens array is required to focus off-nadir light into the pixel where it is absorbed and converted into electricity. This increases QE by about 50%.
Don't know... would microlenses be needed on a BSI sensor?
 
The idea is that the light-sensitive area is closer to the surface - but there is still a depth associated with the photo-sensitive area. "Theoretically" you should always have some gain, with the light coming straight into the light-sensitive area and not "escaping" before being absorbed. I have not followed the BSI detectors. It should depend on how deep the pixel is, and on the percentage of the detector's surface area that is light-sensitive.

Quick Google Search...

DOH! You still need a Mosaic Array for color! Keep the Microlens array.
 
The point sampler suggested by the Shannon-Nyquist theorem is a perfect point sampler (i.e. integration area/time approaching zero). For this to allow perfect recreation of general baseband signals, you want to do perfect band-limiting prior to the sampler (an anti-aliasing prefilter) using a non-realizable sin(x)/x filter.

Assume that the active sensel + microlens has a 100% "fill-rate". Then each sensel is integrating light spatially over a rectangular continuous window. A boxcar integrator is a lowpass filter, although not a particularly good one. Then add the (common) lowpass pre-filter. Then add the lens PSF. Then add camera/subject movement. All of those "smear" the signal spatially, reducing high-(spatial-)frequency variation.

In my view the quote seems reasonable, except for the claims about what the "true" lowpass filter is: it is what it is; every aspect that contributes to a reduction in high-frequency energy is in practice a lowpass filter. For an indication of what would have happened if the fill-rate were extremely low (each sensel approaching a point-sampler), check out the results obtained with the Canon 5Dmk2 for video when reading every n-th line to form a 1080p video frame.

Conclusion:

It is very hard to do a sub-pixel accurate characterization of a complete camera. But gut-feeling and practice suggest that it is quite far from an ideal Shannon-Nyquist spatial sampler.

-h
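hjulenissen's cascade of "smearing" stages can be sketched as a product of component MTFs. A minimal example (the OLPF term assumes the article's full-strength one-pitch split; the Gaussian lens term is an arbitrary placeholder, and motion blur is ignored):

```python
import numpy as np

p = 1.0                                    # sensel pitch
f = np.linspace(0.0, 1.0 / p, 201)         # DC up to the sampling frequency

mtf_aperture = np.abs(np.sinc(f * p))          # 100% fill-factor boxcar integration
mtf_olpf = np.abs(np.cos(np.pi * f * p))       # birefringent split of one full pitch
sigma = 0.3 * p                                # arbitrary placeholder lens blur (std dev)
mtf_lens = np.exp(-2.0 * (np.pi * sigma * f) ** 2)   # MTF of a Gaussian PSF

mtf_system = mtf_aperture * mtf_olpf * mtf_lens      # cascade = product of MTFs
i_nyq = np.argmin(np.abs(f - 1.0 / (2.0 * p)))
print(mtf_system[i_nyq])   # ~0: the full-strength OLPF null lands exactly at Nyquist
```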
 
Joe Pineapples wrote:

If you sample a voltage signal by integrating over a small sampling interval, then there is a natural "sin(x) / x" type low-pass response caused by systematic cancelling of the positive and negative parts of the voltage signal within the interval. For example, if the sampling interval exactly matches one complete cycle of a sinusoidal signal, then you get zero response to that signal. But for a sensor that responds to intensity I don't see why you get a low-pass response because you don't get cancellation - intensity is a positive quantity. An extended sensor responds to any number of black and white lines falling on it by giving an average value.

But it is late and I've had a long day...
Thank you for the relevant and coherent response, Joe. Referring to one of my books, I see that the (one-dimensional) spatial-frequency transform of a unipolar (ranging from zero to a positive numerical value only) "square pulse" (in space) of intensity "I" and spatial dimension "D" is of the same [ sin(x)/x ] form that you describe above:

X(F) = ABS( I * D * SIN( PI * F * D ) / ( PI * F * D ) )

where:

X(F) is the magnitude of the complex spatial frequency transform;

ABS is the Absolute Value function;

SIN is the sinusoidal function;

PI is the transcendental number (3.141593 ... ).


... with the first zero-crossings of that sin(x)/x function located at -1/D and +1/D (i.e., at the spatial sampling frequency of a pitch-D photo-site grid, not at the 1/2D Nyquist frequency, where the magnitude is still 2/PI, or about 64%).

The two-dimensional case is the product of a vertical and a horizontal one-dimensional transform (the rectangular window is separable).

As each individual photo-site represents a unique "sampling window" (in the spatial domain), the occurrence of more than one-half cycle of (spatial) intensity (I) variation per sampling period causes frequency-domain aliasing - the associated spatial-frequency components "fold over" around the Nyquist frequency (which is located at a spatial frequency of 1/2D), and appear at mirrored locations below, rather than above, the Nyquist frequency.
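The fold-over arithmetic itself is easy to sketch (a minimal example of my own; the frequencies are illustrative, in units where the sampling frequency is 1):

```python
f_s = 1.0            # spatial sampling frequency (1 / pitch), arbitrary units

def alias_of(f):
    # Where a sampled component reappears: fold ("mirror") the frequency
    # around the nearest integer multiple of the sampling frequency.
    return abs(f - f_s * round(f / f_s))

for f in (0.3, 0.6, 0.9, 1.2):
    print(f, "->", alias_of(f))
# 0.3 -> 0.3 (below Nyquist, unchanged); 0.6 -> 0.4; 0.9 -> 0.1; 1.2 -> 0.2
```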

In order to counteract the aliasing phenomenon, light that would normally illuminate (only) each individual photo-site has to (by some mechanism) be spread so that it (also) illuminates adjacent photo-sites.

Thus, it seems (to me) that the microlens array assembly, in concert with the lithium-niobate crystal plates, needs to (in some manner) spread incoming light onto multiple adjacent photo-sites.
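As a small numerical check of Joe's question above (my own sketch, not from any reference): box-averaging a strictly positive intensity pattern still attenuates its varying component by the sinc factor; only the constant (DC) component passes unattenuated.

```python
import numpy as np

# Strictly positive "intensity" pattern: no negative lobes available to cancel.
D, f = 1.0, 0.8                     # aperture width; test frequency (cycles/unit)
x = np.linspace(0.0, 20.0, 200001)  # fine spatial grid
I = 1.0 + np.cos(2.0 * np.pi * f * x)   # intensity >= 0 everywhere

# Box-average over windows of width D, one per hypothetical photo-site center.
centers = np.arange(5.0, 15.0, 0.05)
avg = np.array([I[(x >= c - D / 2.0) & (x < c + D / 2.0)].mean() for c in centers])

measured = (avg.max() - avg.min()) / 2.0   # residual modulation after averaging
predicted = abs(np.sinc(f * D))            # np.sinc(x) = sin(pi x) / (pi x)
print(measured, predicted)                 # both ~0.234: only the DC term is untouched
```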
 
hjulenissen wrote:

The point sampler suggested by the Shannon-Nyquist theorem is a perfect point sampler (i.e. integration area/time approaching zero). For this to allow perfect recreation of general baseband signals, you want to do perfect band-limiting prior to the sampler (an anti-aliasing prefilter) using a non-realizable sin(x)/x filter.
Right. The complex spatial-frequency transform presented in my post here:

http://forums.dpreview.com/forums/post/50320373


... represents an ideal (spatial-domain) "rectangular window" with a sin(x)/x spatial-frequency-domain representation - but it assumes a significantly higher spatial (point) sampling frequency than the "rectangular window" function itself.
Assume that the active sensel + microlens has a 100% "fill-rate". Then each sensel is integrating light spatially over a rectangular continuous window. A boxcar integrator is a lowpass filter, although not a particularly good one. Then add the (common) lowpass pre-filter. Then add the lens PSF. Then add camera/subject movement. All of those "smear" the signal spatially, reducing high-(spatial-)frequency variation.
Yes, we are talking about the spatial-domain convolution of all the above individual spatial functions.
In my view the quote seems reasonable, except for the claims about what the "true" lowpass filter is: it is what it is; every aspect that contributes to a reduction in high-frequency energy is in practice a lowpass filter.
Makes sense.
For an indication of what would have happened if the fill-rate were extremely low (each sensel approaching a point-sampler), check out the results obtained with the Canon 5Dmk2 for video when reading every n-th line to form a 1080p video frame.

Conclusion:

It is very hard to do a sub-pixel accurate characterization of a complete camera. But gut-feeling and practice suggest that it is quite far from an ideal Shannon-Nyquist spatial sampler.
So, where does that leave us when it comes to the possibility of numerically analyzing the convolution (the frequency-domain product) of all these various spatial-frequency transfer functions?
 
Detail Man wrote:

So, where does that leave us when it comes to the possibility of numerically analyzing the convolution (the frequency-domain product) of all these various spatial-frequency transfer functions?

One possibility: assume that there are numerous independent contributors, and estimate the optimal parameters of a Gaussian fit.

-h
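A sketch of that Gaussian-fit idea (the "measured" curve below is synthetic, built from the same hypothetical component terms used in the earlier sketch, so the fitted sigma is only illustrative):

```python
import numpy as np
from scipy.optimize import curve_fit

p = 1.0
f = np.linspace(0.0, 1.0 / (2.0 * p), 101)     # DC up to Nyquist

# Synthetic "system" MTF: aperture x OLPF x placeholder Gaussian lens term.
mtf = (np.abs(np.sinc(f * p)) * np.abs(np.cos(np.pi * f * p))
       * np.exp(-2.0 * (np.pi * 0.3 * p * f) ** 2))

def gaussian_mtf(f, sigma):
    # MTF of a Gaussian PSF with standard deviation sigma
    return np.exp(-2.0 * (np.pi * sigma * f) ** 2)

(sigma_fit,), _ = curve_fit(gaussian_mtf, f, mtf, p0=[0.5])
print(sigma_fit)   # a single "effective blur" number summarizing the whole cascade
```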
 
Detail Man wrote:

So, where does that leave us when it comes to the possibility of numerically analyzing the convolution (the frequency-domain product) of all these various spatial-frequency transfer functions?
Well, you can see lots of graphs/numbers by looking at DxO's lens tests, but it becomes a bit chaotic if you also add things like DoF, camera/subject movement, haze, etc.
 
Move along. If you're not interested in the topic, just ignore it.
 
The filter is a low-pass filter; it "blurs" out high frequencies, and this is required because most digital cameras use a Mosaic filter in front of the sensor to produce color images. The "AA" filter is used to cut out the frequencies that lie between the resolution limit of the Mosaic (usually Bayer pattern) filter and that of the sensor's pixels. An Olympus EP2 has ~200 pixels per millimeter, which would be enough for 100 LP/mm at the Nyquist rate. It uses a 2x2 Bayer-pattern Mosaic filter for color. Color artifacts occur for resolutions between ~50 LP/mm and ~100 LP/mm. Cameras that do not use a Mosaic filter are typically not equipped with an AA filter. Cameras that use a Mosaic filter to produce color images but have no AA filter are subject to color artifacts in the image.



Buy a Kodak DCS420c and a DCS420m to try this out. They should be cheap on Ebay.
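For what it's worth, the E-P2 arithmetic above can be spelled out in a few lines (pitch value approximate):

```python
# BSweeney's Olympus E-P2 arithmetic, spelled out (a sketch; pitch is approximate):
pixels_per_mm = 200.0                     # ~200 sensels per millimetre (~5 um pitch)
nyquist_lp_mm = pixels_per_mm / 2.0       # one line pair needs two sensels -> 100 LP/mm
bayer_lp_mm = pixels_per_mm / 4.0         # a 2x2 Bayer tile spans four -> ~50 LP/mm
print(nyquist_lp_mm, bayer_lp_mm)         # 100.0 50.0: the color-artifact band
```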
 
I disagree with the comment on microlenses. Their function is one of integration, not bandpassing. The real low-pass filter comes from the spacing of the sensels. As someone else pointed out already, if the photosensor area matched the sensel size, the microlens would have no purpose. If the photosensor size is smaller, the sensor will record a higher frequency, but it will be a false one due to the point-sampling effect of having a small photosensor. So the microlenses do have a low-pass effect, but it acts on the false signal, not the frequencies in the image.
 
Read the articles published by Kodak in the late 1990s. Their CCD had a large active area; the microlens array increased QE to ~85%.
 
Detail Man wrote:

I know just a little bit about analog (continuous) and digital (discrete) filtering systems. ... I read the following information published on a LumoLabs web-page:

... what everybody calls "AA filter" or "anti-aliasing filter" or "low-pass filter" isn't one. ... it is the finite photo-sensitive size of a sensel or pixel, typically enlarged to almost the full sensel's extent by a microlens. The microlens array provides better quantum efficiency (fewer photons get lost) but also provides a camera's true low pass filter. ...

... It typically is a pair of lithium niobate crystal plates turned 90° with respect to each other. Lithium niobate is birefringent and splits a ray into two. The pair therefore splits a ray into four, designed to hit exactly one of each of the four Bayer color filters. ...

In your understanding, is the above quoted text an accurate description of what is going on?
Firstly, the issue discussed about microlenses isn't about microlenses at all; it is just about having a non-zero sampling aperture. As the sampling aperture increases towards the sample period, the high-frequency response will fall. However, it is not worth getting vexed about, because the consequence of a zero-length sampling aperture is zero quantum efficiency. The solution is to increase the sampling frequency so that the loss in the desired passband is negligible.

As for the optical low pass filter not being a low pass filter, this is nonsense. All low pass filters, be they RC, LC or FIR, work by 'spreading' the input signal, just like a birefringent OLPF.

The line 'The pair therefore splits a ray into four, designed to hit exactly one of each of the four Bayer color filters. In order to achieve this goal' is just nonsense. The AA filter is not a sampled system; there is no way that each output ray could be guaranteed to hit exactly one of the four Bayer filters. That much is enough to indicate that the author is having difficulty conceptualising what is going on here.
It sounds like the microlens array assembly itself is (at least) as important a component as the lithium-niobate crystal plates themselves, if it results in a zero response at the spatial sampling frequency and roughly -3.9 dB (about -0.65 EV) of attenuation at 1/2 of the spatial sampling frequency.

Thus, it sounds like when people say that a camera has "no AA filter", a significant optical low-pass spatial-frequency filter response nevertheless remains, implemented by the microlens-array assembly itself.

Any informative and interesting thoughts appreciated,
Certainly, the sampling window produces an MTF of its own, and the total system MTF is the product of the lens, AA filter and sensor MTFs. Ideally, the AA filter will be designed to produce the required system MTF, taken in combination with the other components' MTFs.
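That product-of-MTFs view is easy to sketch numerically; for example, one can read off a summary figure such as MTF50 (all three component models below are assumptions, not measurements):

```python
import numpy as np

p = 1.0
f = np.linspace(0.0, 1.0 / (2.0 * p), 2001)        # DC up to Nyquist

mtf_lens = np.exp(-2.0 * (np.pi * 0.25 * p * f) ** 2)   # hypothetical lens term
mtf_aa = np.abs(np.cos(np.pi * 0.7 * p * f))            # hypothetical weakened OLPF
mtf_sensor = np.abs(np.sinc(f * p))                     # 100% fill sampling window

mtf_system = mtf_lens * mtf_aa * mtf_sensor             # product of component MTFs
mtf50 = np.interp(0.5, mtf_system[::-1], f[::-1])       # frequency where MTF = 0.5
print(mtf50)             # a single summary number for the combined response
```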
 
hjulenissen wrote:

One possibility: assume that there are numerous independent contributors, and estimate the optimal parameters of a Gaussian fit.

-h
Detail Man wrote:

So, where does that leave us when it comes to the possibility of numerically analyzing the convolution (the frequency-domain product) of all these various spatial-frequency transfer functions?
Except that it seems that the main contributor is not Gaussian, but a 4-dot beam splitter. I found these blog posts by Frans van den Bergh on its PSF and the effect of diffraction on it particularly interesting - all linked through MTF. If interested, I suggest reading the whole series.

Jack
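As a quick check on that 4-dot picture (my own sketch): the PSF is separable, so along one axis it is just two equal deltas one pitch apart, whose transform is |cos(pi f p)| with a null exactly at Nyquist; the 2-D 4-dot MTF is the product of this term along x and y. Grid resolution and sizes below are arbitrary:

```python
import numpy as np

p = 1.0                           # sensel pitch
dx = p / 16.0                     # fine spatial grid
n = 1024
psf = np.zeros(n)
psf[n // 2 - 8] = 0.5             # delta at -p/2 from the midpoint
psf[n // 2 + 8] = 0.5             # delta at +p/2 from the midpoint

f = np.fft.rfftfreq(n, d=dx)
mtf = np.abs(np.fft.rfft(psf))
i_nyq = np.argmin(np.abs(f - 1.0 / (2.0 * p)))
print(mtf[i_nyq], np.abs(np.cos(np.pi * f[i_nyq] * p)))   # both ~0 at Nyquist
```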
 
BSweeney wrote:

Read the articles published by Kodak in the late 1990s. Their CCD had a large active area; the microlens array increased QE to ~85%.
I think I am missing your point. What has QE to do with the discussion?
 
