Optical Spatial Frequency Filtering of Image Sensors ?

Of course - the MTF is measuring contrast, not absolute response, and so if a full cycle of the black / white striped pattern exactly matches the sampling interval, then that produces a "mid-gray" response, which corresponds to zero contrast. That's how you get a cancellation effect from a sensor that is responding to intensity, and how you get the low-pass response. I must think about this some more...

Good thread, DM!

Joe
 
Joe Pineapples wrote:

Of course - the MTF is measuring contrast, not absolute response, and so if a full cycle of the black / white striped pattern exactly matches the sampling interval, then that produces a "mid-gray" response, which corresponds to zero contrast. That's how you get a cancellation effect from a sensor that is responding to intensity, and how you get the low-pass response. I must think about this some more...

Good thread, DM!
Yes, indeed - thanks in large part to the interesting minds contributing other than myself (yours included). There are some interesting ideas and questions being presented, and I am already learning some things relating to my own existing curiosity about these subjects. This particular reference (provided by Jack Hogan) is exactly the kind of information that I was hoping to locate:

http://mtfmapper.blogspot.it/2012/06/nikon-d40-and-d7000-aa-filter-mtf.html

Strangely the web-page redirects to ... nowhere ... shortly after loading (on my system, anyway). But, if I quickly stop my browser before the re-direct occurs, it loads OK and is displayed (and can be stored as a web-page).

The matter of unipolar values of intensity (in the spatial domain) is just a matter of (arbitrary) numerical reference, and is not a problem mathematically in a (non-periodic) single "pulse" case of a complex spatial frequency transform. Note that the sin(x)/(x) function (the transform) has a maximum (and positive) value at zero spatial frequency, regardless of the numerical "polarity" of the transformed spatial domain function's maximum and minimum values.
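As a quick numerical sketch of that point (Python/NumPy; the pulse width and frequency axis are arbitrary illustration values):

```python
import numpy as np

# Continuous transform of a unit-height rectangular pulse of width w:
# F(f) = w * sinc(w * f), where numpy's sinc(x) = sin(pi*x)/(pi*x).
w = 1.0                            # pulse width (arbitrary spatial units)
f = np.linspace(-4.0, 4.0, 801)    # spatial frequency (cycles per unit)
F = w * np.sinc(w * f)

# The transform peaks (positively) at zero spatial frequency whether the
# pulse swings between {0, 1} ("unipolar") or {-0.5, +0.5}: a constant
# offset only alters the zero-frequency term, not the rest of the transform.
print(F.max())                            # 1.0, attained at f = 0 (positive peak)
print(F[np.argmin(np.abs(f - 1.0 / w))])  # ~0: first null at f = 1/w
```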


In the case of a (periodic) "pulse-train" (of some particular "duty cycle"), the complex spatial frequency transform is (itself) an "impulse train" in the spatial frequency domain. If the transformed spatial function is "unipolar" (consisting only of positive, or only of negative, and zero values), then the zero spatial frequency impulse has a non-zero value. If the transformed spatial function is "bipolar" (consisting of equal positive and negative excursions), then the zero spatial frequency impulse existing in the spatial frequency domain "impulse train" has a zero value.

I think that the low-pass filter effect arises (in part) out of the fact that the sin(x)/(x) function (of the photo-site aperture as it exists) crosses through zero at the Nyquist frequency (the spatial sampling frequency divided by 2), and continues to decrease [in proportion to the 1/(x) divisor] in amplitude as it oscillates around zero value (where the magnitude-function reflects only the absolute-value of those variations). The ("mirrored") negative-frequency components represent counter-rotating vectors which sum with the vectors of the positive-frequency components in order to generate what occurs (outside of the mathematical representation of the transform).

As noted by others contributing on this thread, that photo-site aperture is not by any means the only spatial frequency limiting element influencing the recorded image-data - which is the interesting (and now, thanks to the input and references kindly provided by others here, better elucidated) part.


My previously posted statement:

The complex spatial-frequency transform that is presented in my post here:

http://forums.dpreview.com/forums/post/50320373


... represents the complex spatial-frequency transform of an ideal (spatial-domain) "rectangular window" with a sin(x)/(x) spatial-frequency domain representation - but assumes a significantly higher spatial (point) sampling-frequency than the transformed "rectangular window" function itself.

From: http://forums.dpreview.com/forums/post/50320446


... was not (in fact) relevant to the situation being considered. I was thinking (at the time) of the case of the measurement of a complex frequency transform performed numerically by the discrete sampling of specific measured values (where the spatial sampling frequency needs to be much higher than the spatial frequency of the data being sampled in order to provide a highly accurate result).

Instead, the situation being considered relates to the continuous-space complex spatial frequency transform (as in the difference between a continuous S-plane integral such as a Fourier Integral, and a discrete Z-plane summation such as a Fourier Series). That is, the theoretical complex spatial frequency transform (as opposed to one derived from discrete-sampling and subsequent numerical computation operations).
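As a rough numerical check of that distinction, here is a minimal NumPy sketch (all sizes are arbitrary assumptions) showing that the DFT of a discretely sampled rectangular pulse only approaches the continuous sinc transform when the sampling rate is well above the frequencies of interest:

```python
import numpy as np

def pulse_spectrum(samples_per_unit, window_units=64):
    # Unit-width pulse sampled at `samples_per_unit`, in a window of
    # `window_units` spatial units; DFT bin k maps to f = k / window_units.
    n = window_units * samples_per_unit
    x = np.zeros(n)
    x[:samples_per_unit] = 1.0
    return np.abs(np.fft.rfft(x)) / samples_per_unit

for rate in (2, 8, 64):
    X = pulse_spectrum(rate)
    f = np.arange(X.size) / 64.0
    band = f <= 1.0                      # compare up to f = 1 cycle/unit
    err = np.max(np.abs(X[band] - np.abs(np.sinc(f[band]))))
    print(rate, err)                     # deviation from |sinc(f)| shrinks as rate grows
```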
 
bobn2 wrote:
Detail Man wrote:

I know just a little bit about analog (continuous) and digital (discrete) filtering systems. Was wondering if there may be someone who can help me better understand the combination of the various components involved in optical low-pass filtering of image-sensor photo-sites for the purpose of limiting spatial-frequencies at and beyond the Nyquist-Shannon sampling limit (at 1/2 the spatial sampling frequency). I read the following information published at a LumoLabs web-page:

... what everybody calls "AA filter" or "anti-aliasing filter" or "low-pass filter" isn't one. A low pass filter destroys the signal at high spatial frequencies. Every digital camera does have a low pass filter. But it is the finite photo-sensitive size of a sensel or pixel, typically enlarged to almost the full sensel's extensions by a microlens. The microlens array provides a better quantum efficiency (less photons get lost) but also provides a camera's true low pass filter. Assuming a fill factor of 100%, it destroys signal at twice the Nyquist frequency (the spatial frequency which sensels are placed at on the sensor) and dampens signal at the Nyquist frequency to 64%.

The problem is as follows: Assume a ray (e.g., from a star) reaches your camera and hits a single sensel. Above the sensel will be a green, red or blue filter from the Bayer filter array. This means that the camera will see a green, red or blue star. There is no way for the camera to detect the star's real color, and no smart software can recover the missing color information. This is the false color problem when demosaicing specular highlights ("stars") or contrast edges. If contrast edges form a regular pattern, the false color will vary slowly across the image forming what is called a false color moiré pattern ...


... the entire problem disappears if a ray hits more than a single sensel. Because then the color can be reconstructed by comparing the signal from neighboring sensels having different Bayer filter colors.

This is what a filter achieves which I call the "Bayer-AA filter". It actually isn't a low pass filter at all. It typically is a pair of lithium niobate crystal plates turned 90° with respect to each other. Lithium niobate is birefringent and splits a ray into two. The pair therefore splits a ray into four, designed to hit exactly one of each of the four Bayer color filters. In order to achieve this goal, the ray split distance must be equal to the sensel pitch (or pixel size). The result is exact color reproduction. Such a Bayer-AA filter is said to have 100% strength.

But the birefringent crystals interact with the microlens array and combine to a low pass filter of only half the spatial frequency: Now, the signal at the Nyquist frequency itself is destroyed.

Because vendors don't want to sacrifice the full sensor resolution, they typically design their Bayer-AA filters to be weaker where a 0% strength would correspond to their absence.

The Nikon D800E has such a 0% strength. However, it isn't achieved by replacing the birefringent crystals. This would require a re-calibration of the optical path, AF module, focus screen and color profiles. Rather, Nikon uses a pair of lithium niobate crystal plates turned 180° with respect to each other which happen to cancel each other (if aligned properly).


Source: http://www.falklumo.com/lumolabs/articles/D800AA/index.html


In your understanding, is the above quoted text an accurate description of what is going on ?
Firstly the issue discussed about microlenses isn't about microlenses at all, it is just about having a non-zero sampling aperture. As the sampling aperture increases towards the sample period, the high frequency response will fall. However, it is not worth getting vexed about, because the consequence of a zero length sampling aperture is zero quantum efficiency. The solution is to increase the sampling frequency so that the loss in the desired passband is negligible.
So, while the individual micro-lenses convolve the image with their own PSF (equivalently, multiply by their own MTF), their purpose is solely to gather light over the area of plus or minus 1/2 of the pixel-pitch (in 2 dimensions, and centered on the optically active portion of the photo-detector), and project that light onto the optically active portion of the photo-detector ?

Are such micro-lenses (at least sometimes, and in some designs that you know of) also used in order to project light onto the optically active portions of adjacent photo-detectors, as well ?
As to the optical low pass filter not being a low pass filter, this is nonsense. All low pass filters, be they RC, LC or FIR, work by 'spreading' the input signal, just like a birefringent OLPF. The line 'The pair therefore splits a ray into four, designed to hit exactly one of each of the four Bayer color filters. In order to achieve this goal' is just nonsense. The AA filter is not a sampled system, there is no way that each output ray could be guaranteed to hit exactly one of the four Bayer filters. That much is enough to indicate that the author is having difficulty conceptualising what is going on here.
It sounds to me like the following text indicates that:

The degree of blur is controlled by the d parameter, with d = 0 yielding no blurring, and d = 0.5 giving us a two-pixel-wide blur. Since d can be varied by controlling the thickness of the Lithium Niobate layers, a manufacturer can fine-tune the strength of the OLPF for a given sensor.

http://mtfmapper.blogspot.it/2012/06/nikon-d40-and-d7000-aa-filter-mtf.html

Perhaps a wistful theoretical statement - as opposed to being a precisely realizable reality in practice ?
It sounds like the microlens array assembly itself is (at least) as important a component as the lithium-niobate crystal plates themselves, if it results in a zero response at the spatial sampling frequency, and up to a -3.87 dB (or -0.64 EV) attenuation at 1/2 of the spatial sampling frequency.
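A quick check of those figures - a minimal Python/NumPy sketch, treating the microlens/photo-site aperture as an ideal 100% fill-factor box filter one pixel-pitch wide:

```python
import numpy as np

# The MTF of a one-pitch-wide box aperture is |sinc(f)|, with f in
# cycles/pixel, so its first zero lands at the sampling frequency (1 c/p).
mtf_nyquist = abs(np.sinc(0.5))     # numpy sinc(x) = sin(pi*x)/(pi*x)

print(mtf_nyquist)                  # 0.6366... (2/pi), i.e. ~64% at Nyquist
print(20 * np.log10(mtf_nyquist))   # ~ -3.92 dB (-3.87 dB if rounded to 64%)
print(np.log2(mtf_nyquist))         # ~ -0.65 EV
print(abs(np.sinc(1.0)))            # ~0 (to floating point) at the sampling frequency
```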

Thus, it sounds like when people say that a camera has "no AA filter", there nevertheless remains a significant optical low-pass spatial-frequency filter response which is implemented by the microlens-array assembly itself.

Any informative and interesting thoughts appreciated,
Certainly, the sampling window produces an MTF of its own, and the total system MTF is the product of the lens, AA filter and sensor MTFs. Ideally, the AA filter will be designed to produce the required system MTF, taken in combination with the other component MTFs.
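A minimal sketch of that composition rule; the three component models below (box aperture, cosine OLPF with an assumed 0.5-pixel split, and a Gaussian stand-in for the lens) are illustrative assumptions, not measured data:

```python
import numpy as np

f = np.linspace(0.0, 1.0, 201)              # spatial frequency, cycles/pixel

mtf_aperture = np.abs(np.sinc(f))           # 100% fill-factor pixel aperture
mtf_olpf = np.abs(np.cos(np.pi * 0.5 * f))  # birefringent split of 0.5 px (assumed)
mtf_lens = np.exp(-(f / 0.8) ** 2)          # arbitrary Gaussian lens model

# In the frequency domain, the component MTFs simply multiply.
mtf_system = mtf_lens * mtf_olpf * mtf_aperture
print(mtf_system[np.argmin(np.abs(f - 0.5))])   # combined response at Nyquist (~0.30 here)
```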
 
hjulenissen wrote:

But if highly accurate focusing is practically impossible for real-world images (that introduce other variables, such as wavelength-dependency), would you not be reduced to some simple "one size fits all" approximation anyways?
Fair enough. It may also be that other techniques can be used to reduce some of these other variables. For instance, most raw converters do a pretty good job with Chromatic Aberrations (CNX2 does great for Nikon cameras).
I think it is interesting to finally see something approaching "hard facts" in an area surrounded by much speculation. We may be able to predict at what point (e.g. aperture) it makes sense to go for the D800E instead of the regular D800.
Frans partly addresses this issue in this post.
 
Detail Man wrote:
Firstly the issue discussed about microlenses isn't about microlenses at all, it is just about having a non-zero sampling aperture. As the sampling aperture increases towards the sample period, the high frequency response will fall. However, it is not worth getting vexed about, because the consequence of a zero length sampling aperture is zero quantum efficiency. The solution is to increase the sampling frequency so that the loss in the desired passband is negligible.
So, while the individual micro-lenses convolve the image with their own PSF (equivalently, multiply by their own MTF), their purpose is solely to gather light over the area of plus or minus 1/2 of the pixel-pitch (in 2 dimensions, and centered on the optically active portion of the photo-detector), and project that light onto the optically active portion of the photo-detector ?

Are such micro-lenses (at least sometimes, and in some designs that you know of) also used in order to project light onto the optically active portions of adjacent photo-detectors, as well ?
Certainly there can be crosstalk, which should be taken into account to make a very accurate model.
As to the optical low pass filter not being a low pass filter, this is nonsense. All low pass filters, be they RC, LC or FIR, work by 'spreading' the input signal, just like a birefringent OLPF. The line 'The pair therefore splits a ray into four, designed to hit exactly one of each of the four Bayer color filters. In order to achieve this goal' is just nonsense. The AA filter is not a sampled system, there is no way that each output ray could be guaranteed to hit exactly one of the four Bayer filters. That much is enough to indicate that the author is having difficulty conceptualising what is going on here.
It sounds to me like the following text indicates that:

The degree of blur is controlled by the d parameter, with d = 0 yielding no blurring, and d = 0.5 giving us a two-pixel-wide blur. Since d can be varied by controlling the thickness of the Lithium Niobate layers, a manufacturer can fine-tune the strength of the OLPF for a given sensor.

http://mtfmapper.blogspot.it/2012/06/nikon-d40-and-d7000-aa-filter-mtf.html

Perhaps a wistful theoretical statement - as opposed to being a precisely realizable reality in practice ?
The filter can be made to give the right displacement, but how would it be ensured that every photon struck the AA filter so as to be precisely aligned with the Bayer quad position in the sensor beneath?
Bob
 
If sensels were separated by something with a low index of refraction (like air), then crosstalk could be reduced by total internal reflection. I wonder how difficult something like that would be to manufacture. Probably pretty difficult.
 
Again, fascinating link.

Since the OLPF is constant for a given camera, while diffraction, camera-shake etc. tend to vary, a clever deconvolution algorithm might be able to give better results with knowledge of the individual components of the total PSF. If you try to really dig up faint details close to the noise-floor (deep nulls in the spatial frequency response), a precise description of the PSF sounds sensible.

For images of anything but brick-walls, large parts of the scene will not be in focus. One might like to extend or move the plane of acceptable focus by deconvolution, but then some regions will have a PSF dominated by being out of focus, while others might be like the PSF described in the blog post.

Do you know of any deconvolution algorithms that do this?

-h
 
I do not claim to understand the physics behind these filters. But my understanding is that the black-box behaviour is that of a continuous-space (analogous to continuous-time) filter adding a shifted version of the input signal, just like a wall providing an acoustic echo.

If the underlying microlens/sensel active area can be assumed to have a 100% fill-factor, then a continuous-space 2-tap prefilter provides only a single parameter: the amount of shift. Let us say that the shift parameter is set equal to the width of a sensel. If a train of photons hits the same spot somewhere within some sensel, then 1/2 of those photons would be redirected to the neighboring sensel, and would hit the same relative spot within that sensel. If those photons hit the sensel uniformly dispersed, then the neighboring sensel would receive 1/2 the photons with the same dispersed pattern. If the filter shift parameter was set to less than a sensel width, then some photons would be redirected to hit within the same sensel (i.e. no effect after sampling).
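A minimal frequency-domain sketch of that two-tap model (shift values assumed; frequencies in cycles per sensel pitch):

```python
import numpy as np

# y(u) = (x(u) + x(u - s)) / 2 has magnitude response |cos(pi * f * s)|,
# so a shift of one full sensel pitch (s = 1) puts the first null exactly
# at the Nyquist frequency of the sensel grid, f = 0.5 cycles/pitch.
def two_tap_mtf(f, shift=1.0):
    return np.abs(np.cos(np.pi * f * shift))

print(two_tap_mtf(0.5))              # ~0 : null at Nyquist for a full-pitch shift
print(two_tap_mtf(0.25))             # ~0.71 at half of Nyquist
print(two_tap_mtf(0.5, shift=0.5))   # ~0.71 : a weaker (half-pitch) filter
```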

-h
bobn2 wrote: The filter can be made to give the right displacement, but how would it be ensured that every photon struck the AA filter so as to be precisely aligned with the Bayer quad position in the sensor beneath?

Bob
 
bobn2 wrote:

The filter can be made to give the right displacement, but how would it be ensured that every photon struck the AA filter so as to be precisely aligned with the Bayer quad position in the sensor beneath?
Again a bit out of my depth, but if I understand it correctly that's not the implication. The PSF works as described but then its filtering function is convolved/deconvolved over the whole sensor area.
 
Detail Man wrote:
I think that the low-pass filter effect arises (in part) out of the fact that the sin(x)/(x) function (of the photo-site aperture as it exists) crosses through zero at the Nyquist frequency (the spatial sampling frequency divided by 2), and continues to decrease [in proportion to the 1/(x) divisor] in amplitude as it oscillates around zero value (where the magnitude-function reflects only the absolute-value of those variations).
Ooops, sorry. My ignorance is showing ... :P

The above text is in error, and should be revised to read:

I think that the low-pass filter effect arises (only in part) out of the fact that the sin(x)/(x) spatial frequency transform (modulation transfer function) of the adjacent photo-site apertures as they exist arrayed on an image-sensor crosses through zero at twice the Nyquist frequency (which is a spatial frequency equal to the spatial sampling frequency itself), and continues to decrease in amplitude [in proportion to the 1/(x) divisor] as the sin(x)/(x) function oscillates around zero value at integer multiples of the sampling frequency (where the magnitude-function reflects only the absolute-value of those variations).


The spatial frequency transform (modulation transfer function) of a single (isolated, and not arrayed) photo-site aperture has the same characteristics - where the first zero-crossing of the sin(x)/(x) function occurs at a spatial frequency that is equal to the reciprocal of the photo-site aperture (spatial) dimension, with additional zero-crossings at integer multiples of that spatial frequency value.
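A small numerical check of that statement for an isolated aperture (the 0.8-pitch width below is just an assumed example):

```python
import numpy as np

a = 0.8                              # aperture width, in pixel pitches (assumed)
for m in (1, 2, 3):
    print(abs(np.sinc(a * m / a)))   # ~0 at f = m/a: zeros at integer multiples of 1/a
print(abs(np.sinc(a * 0.5)))         # ~0.76 at Nyquist (f = 0.5 cycles/pitch)
```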


Have been looking through Frans van den Bergh's various blog-posts. Really high quality stuff !


http://mtfmapper.blogspot.com/2012/05/pixels-aa-filters-box-filters-and-mtf.html


Thank you for the excellent reference(s) to this particular blog, Jack Hogan !
 
Jack Hogan wrote:
hjulenissen wrote: Do you know of any deconvolution algorithms that do this?
Afraid not. I tend to use Topaz InFocus with decent results. It's a little bit better than RL in most situations imo, but I have no details on its inner workings.
Am pretty sure that Topaz InFocus is, like Richardson-Lucy, "blind" deconvolution-deblurring. Have never tried it myself - but (IMO) LR/ACR's use of deconvolution-deblurring in their Sharpening tools is sadly lacking, and riddles images with artifacts when much of any deconvolution-deblurring is mixed-in via the "Detail" control. Have not used PS (and its single-pass deconvolution-deblurring) - but it may well not be an improvement on LR/ACR.
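For reference, a bare-bones (1-D, known-PSF) Richardson-Lucy iteration, just to make the algorithm being named concrete - production tools add regularization, edge handling and stopping rules, and "blind" variants also estimate the PSF itself:

```python
import numpy as np

def richardson_lucy(observed, psf, iterations=30):
    # Classic multiplicative RL update:
    # estimate <- estimate * ((observed / (estimate conv psf)) conv psf_flipped)
    psf = psf / psf.sum()
    psf_flipped = psf[::-1]
    estimate = np.full(observed.shape, observed.mean())
    for _ in range(iterations):
        blurred = np.convolve(estimate, psf, mode="same")
        ratio = observed / np.maximum(blurred, 1e-12)
        estimate = estimate * np.convolve(ratio, psf_flipped, mode="same")
    return estimate
```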

Have used DxO Optics Pro's "Lens Softness" corrections for nearly 3 years, and have been very impressed with the results obtained. These corrections are known (from a previous personal written communication received from a DxO Labs employee) to employ deconvolution-deblurring as part of a process which corrects for the characterized imperfections of various supported lenses attached to particular supported camera bodies. The RAW-level implementation is significantly more effective than the JPG-level implementation.

Have taken a lot of interest in what they are doing, and how they implement this proprietary function, and have found speculation that it may be accomplished (in the case of RAW-level processing) prior to (or in conjunction with) the de-mosaicing processes in DxO Optics Pro.

Have also read (some speculation by others, who may or may not know whereof they speak) that they are (probably) not employing a measured PSF of the supported and characterized camera-lens systems, but that they (may) use some sort of Gaussian approximation, instead.

I wonder, however, if such an approximation might (in some ways, to some extent) reflect the measured results obtained from their characterization of supported camera-lens combinations (which DxO Labs states involves around 1,000 individual test-shots per camera-lens combination, recorded at a large number of various Focal Lengths and F-Numbers) ?

At any rate, here is DxO Labs' published information about their "Lens Softness" corrections:

http://www.dxo.com/us/photo/dxo_optics_pro/features/optics_geometry_corrections/lens_softness


Some demonstration images published by an independent photographer:

http://www.beautiful-landscape.com/Thoughts81-Jim Scott Panostitch.html


For what it's worth, my DPR Image Gallery has some GH2 RAWs processed using "Lens Softness" corrections, and a lot of LX3 RAWs using "Lens Softness" corrections.
 
Detail Man wrote:
.......................
Have been looking through Frans van den Bergh's various blog-posts. Really high quality stuff !

http://mtfmapper.blogspot.com/2012/05/pixels-aa-filters-box-filters-and-mtf.html


Thank you for the excellent reference(s) to this particular blog, Jack Hogan !
Another interesting paper by Canon. Although it's a special-case sensor (C300) there are some interesting diagrams about MTF vs. Lph ...

http://learn.usa.canon.com/app/pdfs...tion_Considerations_in_New_CMOS_Sensor_WP.pdf
 
Jack Hogan wrote:
bobn2 wrote:

The filter can be made to give the right displacement, but how would it be ensured that every photon struck the AA filter so as to be precisely aligned with the Bayer quad position in the sensor beneath?
Again a bit out of my depth, but if I understand it correctly that's not the implication. The PSF works as described but then its filtering function is convolved/deconvolved over the whole sensor area.
Your interpretation is the same as mine, but that isn't what is said in the article cited in the OP.
 
hjulenissen wrote:

I do not claim to understand the physics behind these filters. But my understanding is that the black-box behaviour is that of a continuous-space (analogous to continuous-time) filter adding a shifted version of the input signal, just like a wall providing an acoustic echo.

If the underlying microlens/sensel active area can be assumed to have a 100% fill-factor, then a continuous-space 2-tap prefilter provides only a single parameter: the amount of shift. Let us say that the shift parameter is set equal to the width of a sensel. If a train of photons hits the same spot somewhere within some sensel, then 1/2 of those photons would be redirected to the neighboring sensel, and would hit the same relative spot within that sensel. If those photons hit the sensel uniformly dispersed, then the neighboring sensel would receive 1/2 the photons with the same dispersed pattern. If the filter shift parameter was set to less than a sensel width, then some photons would be redirected to hit within the same sensel (i.e. no effect after sampling).

-h
bobn2 wrote: The filter can be made to give the right displacement, but how would it be ensured that every photon struck the AA filter so as to be precisely aligned with the Bayer quad position in the sensor beneath?

Bob
Think of a Bayer quad as a single multi-colour pixel. Without some way of aligning rays to the centroid of the quad, there is no way of confining the spread version of a ray to a single quad; it will usually overlap with the neighbours.
 
Great Bustard wrote:
However, the Airy Disk certainly does the same job as the AA filter, at least when the aperture is small enough that the diameter of the Airy Disk is the width of two pixels (more or less), no? If not, what makes the blur of the AA filter so much different than the blur of the Airy Disk?
I personally think that the low-pass filtering effect induced by diffraction has some benefits over that produced by the 4-dot AA filter.

For one, the AA filter's MTF does not stay at zero after the first zero. From what I can see on the D40's AA filter, that means that some power does leak through between 0.7 and 0.9 cycles per pixel (with Nyquist at 0.5 c/p). Fortunately, this seems to be attenuated rather strongly, i.e., bounded from above by a contrast of 0.1, so probably not really visible.

The circular aperture diffraction MTF drops to zero and remains there, so it acts more like an ideal low-pass filter in that respect.

The main disadvantage of the diffraction MTF appears to be stronger attenuation at lower frequencies, particularly 0 to 0.5*Nyquist. In theory, you can fix this up with deconvolution, whereas you cannot do anything about the aliasing that will result from the non-zero response of the AA filter above Nyquist.
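A rough side-by-side sketch of the two shapes being compared (cycles/pixel; the split distance and diffraction cutoff below are assumed, illustrative values rather than the D40's measured ones):

```python
import numpy as np

f = np.linspace(0.0, 1.5, 301)             # spatial frequency, cycles/pixel

s = 0.77                                   # assumed beam-split distance, pixels
mtf_olpf = np.abs(np.cos(np.pi * s * f))   # first null near 0.65 c/p, lobes return

fc = 1.2                                   # assumed diffraction cutoff, c/p
r = np.clip(f / fc, 0.0, 1.0)
mtf_diff = (2 / np.pi) * (np.arccos(r) - r * np.sqrt(1.0 - r * r))

# Past its first null the OLPF term climbs back up (the 0.7-0.9 c/p leak),
# while the circular-aperture diffraction MTF stays at zero beyond cutoff.
print(mtf_olpf[np.argmin(np.abs(f - 0.8))])   # nonzero side lobe at 0.8 c/p
print(mtf_diff[f >= fc].max())                # 0.0 at and beyond fc
```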
 
Detail Man wrote:

As to the optical low pass filter not being a low pass filter, this is nonsense. All low pass filters, be they RC, LC or FIR, work by 'spreading' the input signal, just like a birefringent OLPF. The line 'The pair therefore splits a ray into four, designed to hit exactly one of each of the four Bayer color filters. In order to achieve this goal' is just nonsense. The AA filter is not a sampled system, there is no way that each output ray could be guaranteed to hit exactly one of the four Bayer filters. That much is enough to indicate that the author is having difficulty conceptualising what is going on here.
It sounds to me like the following text indicates that:

The degree of blur is controlled by the d parameter, with d = 0 yielding no blurring, and d = 0.5 giving us a two-pixel-wide blur. Since d can be varied by controlling the thickness of the Lithium Niobate layers, a manufacturer can fine-tune the strength of the OLPF for a given sensor.

http://mtfmapper.blogspot.it/2012/06/nikon-d40-and-d7000-aa-filter-mtf.html


Perhaps a wistful theoretical statement - as opposed to being a precisely realizable reality in practice ?
I'll give it a stab.

The 4-dot OLPF is visualised (drawn) as the overlap of the four squares (representing the idealized sensel aperture). But this is just the impulse response, not the physical realization of the filter.

In other words, this process happens continuously over the surface of the sensor because it is a convolution process. A given point inside a sensel will be covered by an infinite number of copies of the impulse response, each centered on some idealized ray of light (which is split into 4 parts). You only have one "big blurry mess" leaving the OLPF, not a neat compartmentalization into sensel-sized chunks.
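A toy discrete rendering of that point (the one-sample split below is an assumption for illustration): the four quarter-weight copies land wherever the input energy happens to fall, with no snapping to Bayer-quad boundaries:

```python
import numpy as np

def four_dot_blur(img, split=1):
    # Shift-and-add version of the 4-dot impulse response on a sample grid.
    shifted_x = np.roll(img, split, axis=1)
    shifted_y = np.roll(img, split, axis=0)
    shifted_xy = np.roll(shifted_y, split, axis=1)
    return 0.25 * (img + shifted_x + shifted_y + shifted_xy)

star = np.zeros((8, 8))
star[3, 3] = 1.0                        # a single idealized "ray"
print(four_dot_blur(star)[3:5, 3:5])    # four 0.25 copies spanning a 2x2 block
```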
 
Detail Man wrote:
Thus, it sounds like when people say that a camera has "no AA filter", there nevertheless remains a significant optical low-pass spatial-frequency filter response which is implemented by the microlens-array assembly itself.
No; it just means that whether or not filtering occurs depends on luck of alignment. There is nothing in the pixel size or microlens to prevent high frequency transients from occurring at pixel boundaries, even if it severely attenuates them in the center of a pixel. A near-perfect microlens approaches a box filter, which, while not aliasing as much as a point sampling, is still prone to aliasing.
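A quick numerical illustration of that (sampling pitch 1; the above-Nyquist test frequency of 0.75 cycles/pixel is chosen arbitrarily):

```python
import numpy as np

f_sig = 0.75                         # above Nyquist (0.5 c/p) for a unit pitch
centers = np.arange(64.0)            # pixel centers

def box_sample(phase):
    # Exact mean of cos(2*pi*f_sig*x + phase) over each one-pitch aperture.
    a, b = centers - 0.5, centers + 0.5
    return (np.sin(2 * np.pi * f_sig * b + phase) -
            np.sin(2 * np.pi * f_sig * a + phase)) / (2 * np.pi * f_sig)

for phase in (0.0, np.pi / 3):
    print(np.ptp(box_sample(phase)))   # nonzero swing either way: the alias
                                       # survives, its sampled values set by alignment

# The box aperture only attenuates this component by |sinc(0.75)| ~ 0.30; it
# does not remove it, so the 0.75 c/p detail folds down to 0.25 c/p.
```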
 
fvdbergh2501 wrote:
Detail Man wrote:

As to the optical low pass filter not being a low pass filter, this is nonsense. All low pass filters, be they RC, LC or FIR, work by 'spreading' the input signal, just like a birefringent OLPF. The line 'The pair therefore splits a ray into four, designed to hit exactly one of each of the four Bayer color filters. In order to achieve this goal' is just nonsense. The AA filter is not a sampled system, there is no way that each output ray could be guaranteed to hit exactly one of the four Bayer filters. That much is enough to indicate that the author is having difficulty conceptualising what is going on here.
It sounds to me like the following text indicates that:

The degree of blur is controlled by the d parameter, with d = 0 yielding no blurring, and d = 0.5 giving us a two-pixel-wide blur. Since d can be varied by controlling the thickness of the Lithium Niobate layers, a manufacturer can fine-tune the strength of the OLPF for a given sensor.

http://mtfmapper.blogspot.it/2012/06/nikon-d40-and-d7000-aa-filter-mtf.html


Perhaps a wistful theoretical statement - as opposed to being a precisely realizable reality in practice ?
I'll give it a stab.
Frans, I was hoping to better understand bobn2's expressed thoughts in saying what I did. Nothing about your excellent and informative blog-posts causes me to doubt that you have a comprehensive understanding of what you are discussing.
The 4-dot OLPF is visualised (drawn) as the overlap of the four squares (representing the idealized sensel aperture). But this is just the impulse response, not the physical realization of the filter.
Understood.
In other words, this process happens continuously over the surface of the sensor because it is a convolution process. A given point inside a sensel will be covered by an infinite number of copies of the impulse response, each centered on some idealized ray of light (which is split into 4 parts). You only have one "big blurry mess" leaving the OLPF, not a neat compartmentalization into sensel-sized chunks.
I have recently been thinking (with my fairly rudimentary understanding of convolutions and complex frequency transforms) about the differences between analyzing isolated photo-sites, and a 2-dimension array of many such photo-sites existing across the surface of an image-sensor.

(Assuming, for the sake of simplicity, that the optically active portion of each photo-site section is physically centered within that section), it seems like the image-sensor "surface" (itself, not considering filters and micro-lenses located above that "surface") would be like a finite "pulse train" of some particular "duty cycle" in each of the 2 dimensions - "spatially windowed" by the dimensions of the active image-sensor area over which the multiplications and summations involved in the relevant convolution operations would proceed.
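If it helps, here is a one-dimensional numerical rendering of that picture (all sizes below are illustrative assumptions):

```python
import numpy as np

samples_per_pitch, n_pixels, fill = 32, 16, 0.75    # assumed sensor geometry
n = samples_per_pitch * n_pixels
x = np.zeros(n)
open_width = int(fill * samples_per_pitch)
for k in range(n_pixels):
    # active region centered within each photo-site section
    start = k * samples_per_pitch + (samples_per_pitch - open_width) // 2
    x[start:start + open_width] = 1.0

X = np.abs(np.fft.rfft(x)) / x.sum()
peaks = np.sort(np.argsort(X)[-4:])
print(peaks)        # bins 0, 16, 32, 48: harmonics of the pitch frequency
print(X[peaks])     # heights follow the |sinc| envelope of the 75% duty cycle
```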

The foregoing humbly said, I am very glad to have (thanks to Jack Hogan) discovered your various interesting, well thought out, and well illustrated blog-posts. Thank you for composing and publishing them ! And, thanks to all interested and interesting folks contributing ideas and various information on this thread. I have learned a lot, and hope to learn even more in these processes.


DM ... :P
 
John Sheehy wrote:
Detail Man wrote:
Thus, it sounds like when people say that a camera has "no AA filter", there nevertheless remains a significant optical low-pass spatial-frequency filter response which is implemented by the microlens-array assembly itself.
No; it just means that whether or not filtering occurs depends on luck of alignment. There is nothing in the pixel size or microlens to prevent high frequency transients from occurring at pixel boundaries, even if it severely attenuates them in the center of a pixel. A near-perfect microlens approaches a box filter, which, while not aliasing as much as a point sampling, is still prone to aliasing.
Thanks for that, John. It's good to clarify the scope of the theoretically intended purpose, and the actually implemented functional characteristics, of micro-lens assemblies in such low-pass filtering.
 