Optical Spatial Frequency Filtering of Image Sensors?

Detail Man wrote:

Thus, it seems (to me) that the microlens array assembly in concert with the lithium-niobate crystal plates needs to (in some manner) spread incoming light onto multiple adjacent photo-sites.
Imho microlenses have little to do with it while the other mentioned factors are dominant. See this other blog post by Frans van den Bergh.
 
"Steen Bay" asked about fill factor and microlenses. I replied to him. Microlenses increase QE even for detectors with 100% fill factor. Microlenses are on detectors to increase QE, that is there purpose. Anti-Aliasing filters, also known as low-pass filters, are on detectors to eliminate artifacts caused by the use of Mosaic filters.

Kodak KAF-3200 and KAF-3200me, 100% fill-factor, large increase in QE with microlenses:

http://www.ccd.com/pdf/ccd_32.pdf
 
If you were replying to Steen Bay, why was it in response to my post? You should have replied to Steen Bay's post. Replying to mine makes it a non sequitur.

I don't think increasing QE is the primary purpose of microlenses, although it is a happy side effect. The main purpose is to gather image data from the maximum area possible. The sensel data should represent the entire sensel area, not just a small subset of it.
 
Victor Engel wrote:

I don't think increasing QE is the primary purpose of microlenses, although it is a happy side effect. The main purpose is to gather image data from the maximum area possible. The sensel data should represent the entire sensel area, not just a small subset of it.
I think the different understandings are due to an issue of semantics. One of microlenses' main jobs is to collect light on top of a sensel and focus it on its active area, making up for initially rather poor fill factors (obviously FF is not as material an issue with back illuminated sensors).

In imaging, sensor manufacturers often refer to Absolute Quantum Efficiency as the percentage of photons arriving at the top of the sensor that get turned into electrons after passing through the AA, IR, microlens, CFA (if present) and whatever other filters sit on top of the sensing silicon, like so:



[Image: KAF-8300-color-QE.jpg - Kodak KAF-8300 absolute quantum efficiency curves by color channel]




So by this definition, microlenses play a big part in 'effective' absolute QE. Without them it could be reduced by up to a factor of (1 - FF).
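A rough back-of-the-envelope sketch of that relationship in Python (all numbers are assumptions, purely for illustration):

# Sketch: 'absolute' QE seen at the sensor surface is, to first order, the QE of
# the photosensitive silicon scaled by the fraction of incoming light that
# actually reaches it. All values below are assumed, not measured.
silicon_qe = 0.55      # assumed QE of the active (photosensitive) area itself
fill_factor = 0.45     # assumed geometric fill factor of a front-illuminated pixel
ml_efficiency = 0.90   # assumed fraction of off-area light a microlens redirects onto the active area

qe_without_ml = silicon_qe * fill_factor   # light landing outside the active area is simply lost
qe_with_ml = silicon_qe * (fill_factor + (1 - fill_factor) * ml_efficiency)

print(f"absolute QE without microlenses: {qe_without_ml:.2f}")
print(f"absolute QE with microlenses:    {qe_with_ml:.2f}")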
 
So your disagreement on the comment about microlenses was not about my comment on microlenses. Flat view problem.

Microlenses were originally used on CCDs with 100% fill factor. They were used to concentrate light at the center of the pixel, closest to the electrode, where it is collected most efficiently. This increased the QE by almost 50%. That was the original reason for employing them in the late 90s. They also benefit detectors with small fill factors, such as CMOS detectors.



http://wwwcaen.kodak.com/global/plugins/acrobat/en/business/ISS/supportdocs/Quantum.pdf
 
Jack Hogan wrote:
hjulenissen wrote:

One possibility: assume that there are numerous independent contributors, and estimate the optimal parameters of a Gaussian fit.

-h
Except that it seems that the main contributor is not Gaussian, but a 4-dot beam splitter. I found these blog posts by Frans van den Bergh on its PSF and the effect of diffraction on it particularly interesting - all linked through MTF. If interested, I suggest reading the whole series.

Jack
Are you saying that the OLPF is the main source of the total PSF?

The combination of the effective sensel/microlens active sensing area with the OLPF, diffraction, camera shake, scene movement, weird quantum effects etc. is what produces the total PSF. I find it hard to believe that it always looks like a "4-dot beam splitter".

-h
 
hjulenissen wrote:

The combination of the effective sensel/microlens active sensing area with the OLPF, diffraction, camera shake, scene movement, weird quantum effects etc. is what produces the total PSF. I find it hard to believe that it always looks like a "4-dot beam splitter".
Of course not, I was referring to the modeled beam splitter's PSF as modified by diffraction, apologies if I wasn't clearer. In the blog posts I linked above Frans measures end-to-end MTF for a D40 and a D7000 and compares them to the model. Obviously the tests were carried out in controlled conditions with (I assume) tripod and best blur reducing practices. He figures that one of the biggest contributors to discrepancies is the difficulty in achieving perfect focus :-)
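For anyone who wants to experiment with the idea, here is a rough numerical sketch (not Frans' code, and with assumed parameters): a 4-dot beam-splitter PSF blurred by a Gaussian stand-in for diffraction, with the resulting MTF read off via an FFT.

import numpy as np

# Modelled OLPF: a 4-dot beam splitter that turns a point into four copies
# offset by +/- half a pixel in x and y. Diffraction is approximated by a
# Gaussian blur of sigma = 0.4 px (a simplification; the real kernel is an
# Airy pattern). Grid: 1 sample = 1/16 pixel. All numbers are assumptions.
n, sub = 256, 16                       # grid size, samples per pixel
psf = np.zeros((n, n))
c, d = n // 2, sub // 2                # grid centre, half-pixel split distance
for dy in (-d, d):
    for dx in (-d, d):
        psf[c + dy, c + dx] = 0.25     # four equal-intensity dots

y, x = np.mgrid[:n, :n] - c
diff = np.exp(-(x**2 + y**2) / (2 * (0.4 * sub) ** 2))
diff /= diff.sum()                     # normalised Gaussian stand-in for diffraction

# Convolve via FFT (ifftshift puts the blur kernel's centre at the origin)
combined = np.fft.ifft2(np.fft.fft2(psf) * np.fft.fft2(np.fft.ifftshift(diff))).real

# MTF along x = magnitude of the Fourier transform of the PSF, normalised at DC
mtf = np.abs(np.fft.fft2(combined))
mtf_x = mtf[0, : n // 2] / mtf[0, 0]
freq_cyc_per_px = np.arange(n // 2) / (n / sub)
print(f"MTF at Nyquist (0.5 cyc/px): {np.interp(0.5, freq_cyc_per_px, mtf_x):.2f}")

The +/- half-pixel split is what places the MTF null right at the sampling Nyquist frequency, which is the whole point of the filter.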
 
Jack Hogan wrote:
hjulenissen wrote:

One possibility: assume that there are numerous independent contributors, and estimate the optimal parameters of a Gaussian fit.

-h
Detail Man wrote:

So, where does that leave us when it comes to the possibility of numerically analyzing the convolution (i.e. the frequency-domain product) of all these various spatial-frequency transfer functions?
Except that it seems that the main contributor is not Gaussian, but a 4-dot beam splitter. I found these blog posts by Frans van den Bergh on its PSF and the effect of diffraction on it particularly interesting - all linked through MTF. If interested, I suggest reading the whole series.

Jack
Good find, Jack. Some nice work there.
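On the question quoted above about numerically analysing the transfer functions: in the frequency domain it really is just a running product of the individual component MTFs. A hedged sketch with stand-in models (the functional forms and the 4.9 um / f/8 / 550 nm numbers are assumptions, not taken from the blog):

import numpy as np

f = np.linspace(0, 0.5, 201)                 # spatial frequency, cycles per pixel
pitch_um, wavelength_um, N = 4.9, 0.55, 8.0  # assumed pixel pitch, wavelength, f-number
f_mm = f / (pitch_um * 1e-3)                 # same frequencies in cycles per mm

# 1) Square pixel aperture -> sinc (np.sinc is the normalised sinc)
mtf_pixel = np.abs(np.sinc(f))

# 2) 4-dot OLPF with a +/- 0.5 pixel split -> |cos(pi f)| along one axis
mtf_olpf = np.abs(np.cos(np.pi * f))

# 3) Diffraction MTF of a circular aperture, cut-off fc = 1/(lambda N)
fc = 1.0 / (wavelength_um * 1e-3 * N)        # cycles per mm
s = np.clip(f_mm / fc, 0.0, 1.0)
mtf_diff = (2 / np.pi) * (np.arccos(s) - s * np.sqrt(1 - s**2))

# Convolution of the PSFs = product of the MTFs
mtf_system = mtf_pixel * mtf_olpf * mtf_diff
print(f"system MTF at 0.25 cyc/px: {mtf_system[100]:.3f}")
print(f"system MTF at Nyquist:     {mtf_system[-1]:.3f}")

Lens aberrations, focus error and camera shake would multiply in as further terms; they are left out of this sketch.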
 
Jack Hogan wrote:
hjulenissen wrote:

One possibility: assume that there are numerous independent contributors, and estimate the optimal parameters of a Gaussian fit.

-h
Detail Man wrote:

So, where does that leave us when it comes to the possibility of numerically analyzing the convolution (i.e. the frequency-domain product) of all these various spatial-frequency transfer functions?
Except that it seems that the main contributor is not Gaussian, but a 4-dot beam splitter. I found these blog posts by Frans van den Bergh on its PSF and the effect of diffraction on it particularly interesting - all linked through MTF. If interested, I suggest reading the whole series.

Jack
Your link is very interesting, but I don't get your point here. I am referring to the total PSF presented in the blog post. If it can be reasonably modelled as a Gaussian (or some other smooth, simple function), isn't that description sufficient for most applications?

[Image: 3-D plot of the combined OLPF and pixel aperture PSF after diffraction. Caption: "Notice how diffraction has smoothed out the combined OLPF and pixel aperture PSF (from previous section)."]
 
hjulenissen wrote:

Your link is very interesting, but I don't get your point here. I am referring to the total PSF presented in the blog post. If it can be reasonably modelled as a Gaussian (or some other smooth, simple function), isn't that description sufficient for most applications?
Ok, I am a bit out of my depth here, and I am sure you understand the details better than I do. My point was that you can indeed model it as a Gaussian, and that will get you in the ballpark - but that's nothing new and it's what everybody does: pretend it's Gaussian and convolve/deconvolve.

With Frans' model, on the other hand, you are actually closer to the real PSF and can therefore organize your information that much better, no? It may in the end very well be that the Gaussian components of the MTF overwhelm the more specific ones, but it did not look that way in the post, especially for the D40.
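To illustrate the trade-off with a toy example (an assumed "true" line-spread function, not data from the blog): moment-match a single Gaussian to a two-spike-plus-diffraction profile and compare the two MTFs.

import numpy as np

# 1-D stand-in for the OLPF + diffraction line-spread function: two spikes one
# pixel apart, each blurred by a Gaussian of sigma 0.4 px. Numbers are assumed.
x = np.linspace(-4, 4, 801)                       # position in pixels
dx = x[1] - x[0]
sigma_diff = 0.4
lsf = (np.exp(-(x - 0.5) ** 2 / (2 * sigma_diff ** 2)) +
       np.exp(-(x + 0.5) ** 2 / (2 * sigma_diff ** 2)))
lsf /= lsf.sum() * dx                             # normalise to unit area

# Single Gaussian with the same total variance (the usual "pretend it's Gaussian")
var = np.sum(x ** 2 * lsf) * dx
gauss = np.exp(-x ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

def mtf(f_cyc_per_px, profile):
    # magnitude of the numerically evaluated Fourier transform of the profile
    return np.abs(np.sum(profile * np.exp(-2j * np.pi * f_cyc_per_px * x)) * dx)

for f in (0.25, 0.5):
    print(f"f = {f} cyc/px: true MTF = {mtf(f, lsf):.2f}, Gaussian fit = {mtf(f, gauss):.2f}")

With these assumed numbers the two agree quite well at mid frequencies but diverge near Nyquist, where the beam splitter drives the true MTF to zero and the Gaussian does not.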
 
Jack Hogan wrote:

Ok, I am a bit out of my depth here, and I am sure you understand the details better than I do. My point was that you can indeed model it as a Gaussian, and that will get you in the ballpark - but that's nothing new and it's what everybody does: pretend it's Gaussian and convolve/deconvolve.

With Frans' model, on the other hand, you are actually closer to the real PSF and can therefore organize your information that much better, no? It may in the end very well be that the Gaussian components of the MTF overwhelm the more specific ones, but it did not look that way in the post, especially for the D40.
Now I get your point. Yes, more detailed knowledge of the PSF would seem to be a good thing for deconvolution. But if highly accurate focusing is practically impossible for real-world images (which introduce other variables, such as wavelength dependency), would you not be reduced to some simple "one size fits all" approximation anyway?

I think it is interesting to finally see something approaching "hard facts" in an area surrounded by much speculation. We may be able to predict at what point (e.g. aperture) it makes sense to go for the D800E instead of the regular D800.

-h
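A rough sketch of the kind of prediction meant here, using the rule-of-thumb first-null Airy-disk diameter and an assumed ~4.9 um D800/D800E pixel pitch (it ignores the lens, the actual OLPF strength and demosaicing entirely):

# Back-of-the-envelope: at which f-number does diffraction alone spread a point
# over roughly the ~2 pixels an OLPF would? All numbers are rule-of-thumb assumptions.
pitch_um = 4.9          # approximate D800/D800E pixel pitch
wavelength_um = 0.55    # green light

for N in (2.8, 4, 5.6, 8, 11, 16, 22):
    airy_um = 2.44 * wavelength_um * N      # first-null diameter of the Airy disk
    print(f"f/{N:<4}: Airy diameter ~ {airy_um:5.1f} um ~ {airy_um / pitch_um:4.1f} px")

By this crude measure the diffraction blur reaches a couple of pixels somewhere around f/8 to f/11, which is roughly where one would expect the missing AA filter of a D800E to stop mattering.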
 
BSweeney wrote:

So your disagreement on the comment about microlenses was not about my comment on microlenses. Flat view problem.

Microlenses were originally used on CCDs with 100% fill factor. They were used to concentrate light at the center of the pixel, closest to the electrode, where it is collected most efficiently. This increased the QE by almost 50%. That was the original reason for employing them in the late 90s. They also benefit detectors with small fill factors, such as CMOS detectors.

http://wwwcaen.kodak.com/global/plugins/acrobat/en/business/ISS/supportdocs/Quantum.pdf
The same technique is also used in solar collectors, where there is no fill-factor issue.
 
Detail Man wrote:

Source: http://www.falklumo.com/lumolabs/articles/D800AA/index.html


In your understanding, is the above quoted text an accurate description of what is going on?
The AA filter introduces a blur so that light is spread over the RGGB filters of adjacent pixels. Diffraction also produces a blur, and so do lens aberrations.

At wide apertures we get a blur from lens aberrations, and at small apertures we get a blur from diffraction. So, is it fair to say that the AA filter is only useful at the intermediate apertures, and has an unnecessary and deleterious effect at wide and narrow apertures?

Also, I've read in this thread how microlenses increase QE even if there is a 100% fill factor. Could someone explain this to me, if true? I mean, if the entire area of the sensor were photosensitive, why would we need microlenses?
 
Great Bustard wrote:
At wide apertures we get a blur from lens aberrations, and at small apertures we get a blur from diffraction. So, is it fair to say that the AA filter is only useful at the intermediate apertures, and has an unnecessary and deleterious effect at wide and narrow apertures?
No. I don't think so. You particularly can't count on aberrations at wide apertures.
Also, I've read in this thread how microlenses increase QE even if there is a 100% fill factor. Could someone explain this to me, if true? I mean, if the entire area of the sensor were photosensitive, why would we need microlenses?
I think it has to do with the nature of silicon. It's more likely to generate a charge with a more concentrated beam of light than with a weaker beam. Think of it as an activation energy that must be reached before it gets started. I hope someone corrects me if I'm wrong here, but that's an assumption I've been using.
 
Victor Engel wrote:
Great Bustard wrote:

At wide apertures we get a blur from lens aberrations, and at small apertures we get a blur from diffraction. So, is it fair to say that the AA filter is only useful at the intermediate apertures, and has an unnecessary and deleterious effect at wide and narrow apertures?
No. I don't think so. You particularly can't count on aberrations at wide apertures.
I agree that the aberrations at wide apertures not only depend on the lens and the aperture, but are also not nearly as well defined as the Airy Disk that comes from diffraction softening.

However, the Airy Disk certainly does the same job as the AA filter, at least when the aperture is small enough that the diameter of the Airy Disk is the width of two pixels (more or less), no? If not, what makes the blur of the AA filter so much different than the blur of the Airy Disk?


Also, I've read in this thread how microlenses increase QE even if there is a 100% fill factor. Could someone explain this to me, if true? I mean, if the entire area of the sensor were photosensitive, why would we need microlenses?
I think it has to do with the nature of silicon. It's more likely to generate a charge with a more concentrated beam of light than with a weaker beam. Think of it as an activation energy that must be reached before it gets started. I hope someone corrects me if I'm wrong here, but that's an assumption I've been using.
Were that true, then the QE of a sensor would be a function of the exposure, and, so far as I know, the QE is fixed.
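On the earlier question of what makes the OLPF's blur different from the Airy disk's, a small numerical illustration with stand-in models and assumed numbers (4.9 um pitch, f/8, 550 nm): the 4-dot OLPF places a hard null exactly at the sampling Nyquist frequency (and lets some detail back through above it), while the diffraction MTF just tapers smoothly toward its own cut-off.

import numpy as np

pitch_um, N, wavelength_um = 4.9, 8.0, 0.55        # assumed pitch, f-number, wavelength
nyquist_mm = 1.0 / (2 * pitch_um * 1e-3)           # sampling Nyquist, cycles/mm
cutoff_mm = 1.0 / (wavelength_um * 1e-3 * N)       # diffraction cut-off, cycles/mm

def mtf_diffraction(f_mm):
    s = min(f_mm / cutoff_mm, 1.0)                 # circular-aperture diffraction MTF
    return (2 / np.pi) * (np.arccos(s) - s * np.sqrt(1 - s ** 2))

def mtf_olpf(f_mm):                                # 4-dot splitter, +/- 0.5 px along one axis
    return abs(np.cos(np.pi * f_mm * pitch_um * 1e-3))

for frac in (0.5, 1.0, 1.5):
    f_mm = frac * nyquist_mm
    print(f"{frac:.1f} x Nyquist ({f_mm:5.0f} cyc/mm): OLPF MTF = {mtf_olpf(f_mm):.2f}, "
          f"diffraction MTF = {mtf_diffraction(f_mm):.2f}")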
 
Victor Engel wrote:
Great Bustard wrote:
At wide apertures we get a blur from lens aberrations, and at small apertures we get a blur from diffraction. So, is it fair to say that the AA filter is only useful at the intermediate apertures, and has an unnecessary and deleterious effect at wide and narrow apertures?
No. I don't think so. You particularly can't count on aberrations at wide apertures.
Also, I've read in this thread how microlenses increase QE even if there is a 100% fill factor. Could someone explain this to me, if true? I mean, if the entire area of the sensor were photosensitive, why would we need microlenses?
I think it has to do with the nature of silicon. It's more likely to generate a charge with a more concentrated beam of light than with a weaker beam. Think of it as an activation energy that must be reached before it gets started. I hope someone corrects me if I'm wrong here, but that's an assumption I've been using.
I don't think so. My guess would be that it has more to do with the collection of the liberated electrons: making sure they are generated where the potential well is, rather than where it isn't.
 
The Kodak paper cited two factors: efficiency within the pixel is greatest close to the electrode; and color cross-talk. Basically, the detector stack with the mosaic filter is deep. Light coming in close to nadir will stay within a single element of the mosaic filter and will be focused close to the electrode. Light coming in at a steep angle can cross from one element of the mosaic filter into another, or can enter the detector element at a shallow angle and not be fully absorbed.

Color artifacts occur mostly due to interpolation errors in the demosaicing process. I can use a 60-year-old Jupiter-3 stopped down to f/2.8 and get color artifacts on the M8 and M9. Once you resolve past 36 lp/mm, the resolution of the Bayer 2x2 site is exceeded, but the absolute resolution of the sensor is not exceeded until 72 lp/mm. This will not be a problem for the M Monochrom. I'm not going to compute Voigt profiles. I used to do that in the 1970s for overlaying modeled imagery on experimentally generated imagery.
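Those 36 / 72 lp/mm figures follow directly from the pixel pitch; a quick check, assuming the ~6.8 um Kodak pitch mentioned further down the thread:

# Quick check of the 36 / 72 lp/mm figures from the (assumed) 6.8 um M8/M9 pixel pitch
pitch_um = 6.8
nyquist_lp_mm = 1000.0 / (2 * pitch_um)       # one line pair needs two pixels
bayer_2x2_lp_mm = 1000.0 / (4 * pitch_um)     # a full RGGB quartet spans four pixels
print(f"sensor Nyquist ~ {nyquist_lp_mm:.0f} lp/mm, Bayer 2x2 limit ~ {bayer_2x2_lp_mm:.0f} lp/mm")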
 
BSweeney wrote:

The Kodak paper cited two factors: efficiency within the pixel is greatest close to the electrode;
...so QE varies across even the photosensitive area of the pixel? Then smaller pixels should have a higher QE?
and color cross-talk. Basically, the detector stack with the mosaic filter is deep. Light coming in close to nadir will stay within a single element of the mosaic filter and will be focused close to the electrode. Light coming in at a steep angle can cross from one element of the mosaic filter into another, or can enter the detector element at a shallow angle and not be fully absorbed.
This makes sense.
Color artifacts occur mostly due to interpolation errors in the demosaicing process. I can use a 60-year-old Jupiter-3 stopped down to f/2.8 and get color artifacts on the M8 and M9. Once you resolve past 36 lp/mm, the resolution of the Bayer 2x2 site is exceeded, but the absolute resolution of the sensor is not exceeded until 72 lp/mm. This will not be a problem for the M Monochrom. I'm not going to compute Voigt profiles.
I get this.
 
Pixel size - interesting observation. The Kodak 6.8 µm family does seem to have better QE than the 9 µm family. The downside: smaller pixels have lower well capacity, meaning they saturate more easily. My oldest - and still working - digital camera is 20 years old this year: no IR cut filter, no mosaic filter, and no microlens array. It has a Kodak KAF-1600. Over the past 20 years it has picked up 3 hot pixels. It will be interesting to compare it with the KAF-18500m in the M Monochrom.

Comparing images between the M8 (0.5 mm thick IR-absorbing cover glass) and M9 (0.8 mm IR-absorbing cover glass): the M8 shows more color aliasing. The IR-absorbing cover glass also contributes to image blurring and acts as a weak AA filter.
 
BSweeney wrote:

Pixel size - interesting observation. The Kodak 6.8 µm family does seem to have better QE than the 9 µm family. The downside: smaller pixels have lower well capacity, meaning they saturate more easily.
Not if we're talking about the same size sensor. For example, four 1x1 pixels have the same saturation as one 2x2 pixel. The wildcard here is read noise: how does the read noise of a 1x1 pixel compare to the read noise of a 2x2 pixel for a given tech?

If the read noise scales with the area of the pixel, then four 1x1 pixels will have half the read noise of one 2x2 pixel. If the read noise per pixel is the same, then four 1x1 pixels will have double the read noise of one 2x2 pixel.
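A tiny numeric sketch of that read-noise comparison (assumed full-well and read-noise values; noise from independent pixels adds in quadrature when the four small pixels are summed):

import math

# One "2x2" pixel vs. four "1x1" pixels covering the same area and summed.
full_well_big = 60000                  # e-, assumed full well of the 2x2 pixel
full_well_small = full_well_big / 4    # same area -> same total saturation when the four are summed
read_noise_big = 4.0                   # e-, assumed read noise of the 2x2 pixel

# Case 1: read noise scales with pixel area
rn_small = read_noise_big / 4
print("area-scaled read noise: four 1x1 =", math.sqrt(4 * rn_small ** 2), "e- vs one 2x2 =", read_noise_big, "e-")

# Case 2: read noise is the same per pixel regardless of size
rn_small = read_noise_big
print("fixed per-pixel read noise: four 1x1 =", math.sqrt(4 * rn_small ** 2), "e- vs one 2x2 =", read_noise_big, "e-")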
My oldest - and still working - digital camera is 20 years old this year: no IR cut filter, no mosaic filter, and no microlens array. It has a Kodak KAF-1600. Over the past 20 years it has picked up 3 hot pixels. It will be interesting to compare it with the KAF-18500m in the M Monochrom.

Comparing images between the M8 (0.5 mm thick IR-absorbing cover glass) and M9 (0.8 mm IR-absorbing cover glass): the M8 shows more color aliasing. The IR-absorbing cover glass also contributes to image blurring and acts as a weak AA filter.
Interesting. Both the M8 and M9 have the same sized pixels, and presumably the same AA filter (or lack thereof, as I recall), so the thinner IR cover glass of the M8 acts as a weaker AA filter than the thicker glass of the M9, which is why the M8 shows more color aliasing?
 
