Optical Spatial-Frequency Filtering of Image Sensors?

Steen Bay wrote:

Doesn't the article just say that it's the finite size of the pixel that's the low-pass filter? If the pixel's photo-sensitive area were as large as the pixel itself (= 100% fill factor), then microlenses wouldn't be needed.
... but a low-pass filter would still be needed for ideal sampling in most cases, as the pixel size only attenuates high frequencies within each pixel, not between neighboring pixels.
 
BSweeney wrote:

"Steen Bay" asked about fill factor and microlenses. I replied to him. Microlenses increase QE even for detectors with 100% fill factor. Microlenses are on detectors to increase QE; that is their purpose. Anti-aliasing filters, also known as low-pass filters, are on detectors to eliminate artifacts caused by the use of mosaic filters.
Grayscale sensors (no CFA) and Foveon-type sensors also need low-pass filters; the problems are just usually less dramatic.
 
> Thus, it sounds like when people say that a camera has "no AA filter", there nevertheless remains a significant optical low-pass spatial-frequency filter response which is implemented by the microlens-array assembly itself.

Optical crosstalk effects are nowhere close to zero.

--

 
John Sheehy wrote:
Detail Man wrote:
Thus, it sounds like when people say that a camera has "no AA filter", there nevertheless remains a significant optical low-pass spatial-frequency filter response which is implemented by the microlens-array assembly itself.
No; it just means that whether or not filtering occurs depends on the luck of alignment. There is nothing in the pixel size or microlens to prevent high-frequency transients from occurring at pixel boundaries, even if it severely attenuates them in the center of a pixel. A near-perfect microlens approaches a box filter, which, while not aliasing as much as a point sampler, is still prone to aliasing.
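For reference, the pixel-aperture ("box filter") response described here can be sketched numerically. The sketch below is a one-dimensional idealization assuming a 100%-fill-factor square pixel, nothing more:

```python
import numpy as np

def box_mtf(f_cyc_per_pixel):
    """MTF of an ideal 100%-fill-factor pixel aperture along one axis:
    the Fourier transform of a unit-width box, |sinc(f)| = |sin(pi f)/(pi f)|."""
    return np.abs(np.sinc(f_cyc_per_pixel))  # np.sinc already includes the pi factor

# At the Nyquist frequency (0.5 cycles/pixel) the box filter still passes
# about 64% of the signal, so frequencies that alias are barely attenuated:
mtf_nyquist = box_mtf(0.5)       # ~0.637 (= 2/pi)
# Its first null sits at 1.0 cycles/pixel, a full octave past Nyquist:
mtf_first_null = box_mtf(1.0)
```

This is the quantitative version of the point above: a microlens approaching a box filter is nowhere near zero between 0.5 and 1.0 cycles/pixel, so it cannot prevent aliasing on its own.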
 
hjulenissen wrote:
... and you quickly get a complex system.

-h
I think that for the practical photographer, the important thing may be to identify (through gut-feeling or simple analysis) the most significant contributor(s) to the total PSF, and fix them if possible (if the goal is to capture more spatial detail). Or possibly to increase them (if the goal is to reduce aliasing).

I think that for someone into sharpening/deconvolution, the problem is sufficiently complex/multidimensional that you need to do some simplifications, e.g. estimate the optimal parameters of a Gaussian function for each case. I have 35,000 images from my (mostly) Canon 7D. Supposedly, this model has an OLPF with a "low cutoff". Would it be possible to analyze those 35,000 images (or a subset that is deemed sufficiently sharp, noiseless, etc.) and see a pattern of spatial nulls that corresponds to the OLPF (I assume that the contribution of an OLPF resembles a comb filter)? E.g. "take the 2-D FFT of each color channel in each image. Compare the magnitudes of all of them for common minima"
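That batch-analysis idea is straightforward to prototype. Here is a hypothetical sketch of the averaging step (random arrays stand in for the real crops; nothing here is specific to the 7D or its actual OLPF):

```python
import numpy as np

rng = np.random.default_rng(0)

def log_spectrum(channel):
    """Centered log-magnitude of the 2-D FFT of one color channel."""
    F = np.fft.fftshift(np.fft.fft2(channel))
    return np.log1p(np.abs(F))

# Stand-ins for a subset of sharp, low-noise crops of one channel.
crops = [rng.standard_normal((64, 64)) for _ in range(8)]

# Averaging the log-spectra suppresses scene-dependent structure; a null
# imposed by the OLPF would appear as a dip at the same spatial frequency
# in every image, and so would survive the averaging.
avg_spectrum = np.mean([log_spectrum(c) for c in crops], axis=0)
```

With real crops one would then look for minima in `avg_spectrum` that sit at the same frequency regardless of scene content.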

Only for someone into analyzing each component on its own do I see the real application for this stuff. If you are designing an OLPF for a camera, you surely want to figure out how it affects the total PSF in great detail. If you are reviewing DSLR lenses for a technical readership, you might want to know how much is contributed by non-lens factors (I assume that every Nikon lens reviewer will want to use the D800E).

-h
 
Detail Man wrote:
Jack Hogan wrote:
hjulenissen wrote: Do you know of any deconvolution algorithms that does this?
Afraid not. I tend to use Topaz InFocus with decent results. It's a little bit better than RL in most situations imo, but I have no details on its inner workings.
Am pretty sure that Topaz InFocus is, like Richardson-Lucy, "blind" deconvolution-deblurring. Have never tried it myself...
Without knowing details of its inner workings there are two reasons why I like InFocus a little better than most deconvolvers:

1. it's a little less blind than most (in Auto mode) in that it tries to analyze the image to determine scene-based parameters for the PSF it will use. In one of the videos they mention that it 'looks for straight lines'. It may be just guessing at the radius, or maybe a bit more;

2. it appears to use a wavelet algorithm, with consequent advantages and disadvantages - which you can actually see at work when you crank up the 'radius'.

To me it seems slightly better than RL as implemented in LR (3.x)/ACR(6.x), SmartSharpen (CS5), RT(4.x), DxO (6.x) in tests that I performed last year - which on the other hand all appeared pretty well about the same once optimized, imo. But it is a very sharp knife that needs to be used with greater care and moderation.
 
hjulenissen wrote:

The point to be taken is that it is the convolution of all PSFs that matters (or the multiplication of their frequency transforms). A sensel that integrates light over some area can be (should be?) analyzed as a spatial lowpass filter followed by a perfect "Nyquistian" point sampler. Now convolve this filter with the (optional) OLPF, focus error, diffraction, lens aberration, camera shake, subject movement, haze, weird quantum behaviour, sensel cross-talk, etc., and you quickly get a complex system....
Quite clear. What I found interesting from Frans' comparison of the D40 vs D7000 MTF responses is the fact that, all other things being equal, the bigger the sensel area, the 'simpler' the system. Obviously, right?
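The cascade described here multiplies out neatly in the frequency domain. A toy sketch, where every component model is an illustrative assumption (a 100%-fill box aperture, a two-spot birefringent OLPF with a one-pixel spot separation, and a single Gaussian lumping lens blur, defocus and shake together):

```python
import numpy as np

f = np.linspace(0.0, 1.0, 201)            # spatial frequency, cycles/pixel

mtf_pixel = np.abs(np.sinc(f))            # box aperture of the sensel
d = 1.0                                   # OLPF spot separation in pixels (assumed)
mtf_olpf = np.abs(np.cos(np.pi * f * d))  # two-spot birefringent OLPF
mtf_blur = np.exp(-(f / 0.6) ** 2)        # everything else, crudely Gaussian

# Convolving PSFs in space = multiplying MTFs in frequency:
mtf_total = mtf_pixel * mtf_olpf * mtf_blur
```

With d = 1.0 the OLPF term nulls the response exactly at Nyquist (0.5 cycles/pixel); |cos| rises again beyond the null, which is the comb-filter-like signature an OLPF contributes to the total MTF.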
I think that for the practical photographer, the important thing may be to identify (through gut-feeling or simple analysis) the most significant contributor(s) to the total PSF, and fix them if possible (if the goal is to capture more spatial detail). Or possibly to increase them (if the goal is to reduce aliasing).

I think that for someone into sharpening/deconvolution, the problem is sufficiently complex/multidimensional that you need to do some simplifications, e.g. estimate the optimal parameters of a Gaussian function for each case. I have 35,000 images from my (mostly) Canon 7D. Supposedly, this model has an OLPF with a "low cutoff". Would it be possible to analyze those 35,000 images (or a subset that is deemed sufficiently sharp, noiseless, etc.) and see a pattern of spatial nulls that corresponds to the OLPF (I assume that the contribution of an OLPF resembles a comb filter)? E.g. "take the 2-D FFT of each color channel in each image. Compare the magnitudes of all of them for common minima"

Only for someone into analyzing each component on its own do I see the real application for this stuff. If you are designing an OLPF for a camera, you surely want to figure out how it affects the total PSF in great detail. If you are reviewing DSLR lenses for a technical readership, you might want to know how much is contributed by non-lens factors (I assume that every Nikon lens reviewer will want to use the D800E).
Very well put.

Nevertheless it would be interesting if someone were to try to break down the various main components of the total MTF for a few typical photographic applications to better understand the contributing proportion of each. I am sure astrophotographers would have pretty good answers for their application. I would be interested to see data for typical 'good technique' landscapes.

Jack
 
Jack Hogan wrote:
Detail Man wrote:
Jack Hogan wrote:
hjulenissen wrote: Do you know of any deconvolution algorithms that does this?
Afraid not. I tend to use Topaz InFocus with decent results. It's a little bit better than RL in most situations imo, but I have no details on its inner workings.
Am pretty sure that Topaz InFocus is, like Richardson-Lucy, "blind" deconvolution-deblurring. Have never tried it myself...
Without knowing details of its inner workings there are two reasons why I like InFocus a little better than most deconvolvers:

1. it's a little less blind than most (in Auto mode) in that it tries to analyze the image to determine scene-based parameters for the PSF it will use. In one of the videos they mention that it 'looks for straight lines'. It may be just guessing at the radius, or maybe a bit more;
Interesting. As I have Lightroom 3.6 (but don't tend to use it), and am not a PS user, I have not tried it out. Am pretty stuck on DxO Optics Pro and its "Lens Softness" correction stuff.
2. it appears to use a wavelet algorithm, with consequent advantages and disadvantages - which you can actually see at work when you crank up the 'radius'.
Out of curiosity, how can one tell that it is using wavelet transforms (as opposed to standard block transforms, such as the FFT, etc.)? I have not read (or at least, absorbed) much about wavelet transforms (which I do know have some differing characteristics). Any brief thoughts?
To me it seems slightly better than RL as implemented in LR (3.x)/ACR(6.x), SmartSharpen (CS5), RT(4.x), DxO (6.x) in tests that I performed last year - which on the other hand all appeared pretty well about the same once optimized, imo. But it is a very sharp knife that needs to be used with greater care and moderation.
It is clear that RAW Therapee uses Richardson-Lucy deconvolution-deblurring - but you are the first person who I have seen express knowledge that the deconvolution-deblurring portion of Lightroom 3.x / Camera RAW 6.x (which Adobe's Eric Chan has publicly stated is unchanged in Lightroom 4.x / Camera RAW 7.x) utilizes the Richardson-Lucy algorithm.

Adobe would not have to pay royalties on Richardson-Lucy - which I am sure they would be attracted to. I have seen Chan also acknowledge that LR / CR uses some form of deconvolution-deblurring (which is "mixed" in with what appears to be a more standard USM-related process, but only when the "Detail" control-slider is set to non-zero settings). Chan states that when the "Detail" control-slider is set to 100, the output (of that particular sub-block of the Sharpening tool) is entirely composed of their deconvolution-deblurring item. I find that it loves to generate rather horrid-looking artifacts, and find it nearly unusable at "Detail" control-slider settings approaching the default value of 25.
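For readers unfamiliar with what Richardson-Lucy deconvolution-deblurring actually does, here is a minimal one-dimensional sketch. It is undamped and unregularized (which is precisely why artifacts grow with the iteration count), and it is not Adobe's or anyone else's actual implementation:

```python
import numpy as np

def richardson_lucy(observed, psf, iterations=30, eps=1e-12):
    """Undamped 1-D Richardson-Lucy: iteratively rescale the estimate by
    the back-projected ratio of the observed data to the re-blurred estimate."""
    est = np.full_like(observed, observed.mean())  # flat starting guess
    psf_flipped = psf[::-1]
    for _ in range(iterations):
        reblurred = np.convolve(est, psf, mode="same")
        ratio = observed / (reblurred + eps)       # eps guards divide-by-zero
        est = est * np.convolve(ratio, psf_flipped, mode="same")
    return est

# Toy example: a step edge blurred by a small symmetric PSF, then restored.
psf = np.array([0.25, 0.5, 0.25])
sharp = np.array([0.0] * 10 + [1.0] * 10)
blurred = np.convolve(sharp, psf, mode="same")
restored = richardson_lucy(blurred, psf)
```

The multiplicative update keeps the estimate non-negative, and each iteration sharpens the edge a little more; real implementations add damping or regularization to keep the same mechanism from amplifying noise into ringing.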

I have read (here and there on the internet) speculation that PS's "Smart Sharpen" incorporates a (single-pass, not iterative) deconvolution-deblurring. Have never used PS at all.

On the other hand, RAW Therapee's RL DD is not much better (than LR), and I always end up backing it pretty far off from the defaults (Radius near 0.5, Damping not less than 33%-25%, typical maximum of around 16 re-circulations, Amount as sparing as can be). RT 4.0.9.50's NR is just OK. The newer item (a stable version for Win OS which may be posted soon) is said to have much improved NR. Hopefully, that may allow utilizing the RL DD a bit more liberally ...

The general problem of artifacts (increasing with the number of iterative re-circulations) can be a significant one (particularly in out-of-focus areas with a lot of color-contrasts, I find with DxO).

In DxO Optics Pro, I use just a tad more NR to help to offset that - and DxO Optics Pro versions 7.0+ include a "Bokeh" control (which appears to be a low-pass spatial-frequency filter of some sort, added for the purpose of decreasing such artifacts). Everything has its price, though - and I tend to bias that control towards less low-pass filtering (still tending to favor using just a tad more NR instead).

It appears that DxO (6.x and 7.x) may (itself) employ some amount of some sort of "silent" NR (or LPF) that acts to reduce the deconvolution artifacts when image-noise (and/or ISO Gain annunciated in the image-file meta-data) is at higher levels.

Interesting that you find Topaz InFocus to be perhaps more effective than DxO Optical Corrections Modules. Here is a without/with "Lens Softness" corrections comparison of foliage detail located at 30 meters or more (DMC-GH2 RAW + LGV 7-14mm lens at 7mm). 100% crops, lossless JPEGs:

Without "Lens Softness" corrections:

Download Original at: http://www.dpreview.com/galleries/4464732135/download/2274309

With "Lens Softness" corrections:

Download Original at: http://www.dpreview.com/galleries/4464732135/download/2274310
 
Detail Man wrote:
Jack Hogan wrote:
Out of curiosity, how can one tell that it is using wavelet transforms (as opposed to standard block transforms, such as the FFT, etc.)? I have not read (or at least, absorbed) much about wavelet transforms (which I do know have some differing characteristics). Any brief thoughts?
If you cheat-peek at (much) greater than 100% you can actually see little short pixelated waves, that's why I am guessing a wavelet based algorithm.
To me it seems slightly better than RL as implemented in LR (3.x)/ACR(6.x), SmartSharpen (CS5), RT(4.x), DxO (6.x) in tests that I performed last year - which on the other hand all appeared pretty well about the same once optimized, imo. But it is a very sharp knife that needs to be used with greater care and moderation.
It is clear that RAW Therapee uses Richardson-Lucy deconvolution-deblurring - but you are the first person who I have seen express knowledge that the deconvolution-deblurring portion of Lightroom 3.x / Camera RAW 6.x (which Adobe's Eric Chan has publicly stated is unchanged in Lightroom 4.x / Camera RAW 7.x) utilizes the Richardson-Lucy algorithm.
Writing too quickly, apologies. I only know for sure that RT uses RL. I suspect that that's what is used in PS SmartSharpen ('more accurate', 'lens blur') and ACR/LR, all of which I believe use multiple passes and a PSF to simulate a circular aperture. These are just tidbits of information that I picked-up here and there over time and may not be correct. I also do not like ACR/LR's milkshake approach with the 'detail' slider and would prefer to have separate controls.
It appears that DxO (6.x and 7.x) may (itself) employ some amount of some sort of "silent" NR (or LPF) that acts to reduce the deconvolution artifacts when image-noise (and/or ISO Gain annunciated in the image-file meta-data) is at higher levels.
I hadn't looked at DxO since my trial a year or so ago. The DxO-sharpened image you linked above seemed to also have a fair amount of USM applied - was that intentional, or is this what DxO does by default? It could just be that strong damping makes deconvolution start looking like USM :-) When I have a little time I will pass your original through InFocus and post a comparison of the two.

Jack
 
Jack Hogan wrote:
Detail Man wrote:
Jack Hogan wrote:
Out of curiosity, how can one tell that it is using wavelet transforms (as opposed to standard block transforms, such as the FFT, etc.)? I have not read (or at least, absorbed) much about wavelet transforms (which I do know have some differing characteristics). Any brief thoughts?
If you cheat-peek at (much) greater than 100% you can actually see little short pixelated waves, that's why I am guessing a wavelet based algorithm.
To me it seems slightly better than RL as implemented in LR (3.x)/ACR(6.x), SmartSharpen (CS5), RT(4.x), DxO (6.x) in tests that I performed last year - which on the other hand all appeared pretty well about the same once optimized, imo. But it is a very sharp knife that needs to be used with greater care and moderation.
It is clear that RAW Therapee uses Richardson-Lucy deconvolution-deblurring - but you are the first person who I have seen express knowledge that the deconvolution-deblurring portion of Lightroom 3.x / Camera RAW 6.x (which Adobe's Eric Chan has publicly stated is unchanged in Lightroom 4.x / Camera RAW 7.x) utilizes the Richardson-Lucy algorithm.
Writing too quickly, apologies. I only know for sure that RT uses RL. I suspect that that's what is used in PS SmartSharpen ('more accurate', 'lens blur') and ACR/LR, all of which I believe use multiple passes and a PSF to simulate a circular aperture. These are just tidbits of information that I picked-up here and there over time and may not be correct. I also do not like ACR/LR's milkshake approach with the 'detail' slider and would prefer to have separate controls.
That would allow the user to select only the deconvolution-deblurring output, or only the (presumably USM-related) output independently. The sum of their outputs is (presumably) then "mixed" with the original image-data (via the "Amount" control) - so (maybe) the same could be accomplished by just having the "Detail" control-slider at a setting of 100 or 0 ? Dunno. That is precisely the problem with Adobe "secret sauce" (or any other commercial RAW processor's "secret sauce"). We have no way of definitively informing ourselves as to the internal system architectures.

I don't find the (presumably USM-related) section to be very effective (and disable it in favor of applying my own controllable-parameter 16-bit USM following any down-sampling, while still in 16-bit TIFF format) when using Lightroom. RAW Therapee's (adjustable-parameter) RL deconvolution-deblurring sharpening seems friendlier to me - though its deconvolution-deblurring artifacts are not much better than what I see LR creating. As I mentioned, perhaps the new upcoming RT NR may help - though that is, admittedly, sort of a (seemingly perhaps necessary) "cheating" ...
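For contrast with deconvolution, the USM step mentioned here is just "add back a scaled difference from a Gaussian-blurred copy". A minimal sketch on a float image (the kernel construction and parameter names are my own illustration, not any product's actual pipeline):

```python
import numpy as np

def unsharp_mask(img, amount=0.8, sigma=1.0):
    """Plain unsharp mask: boost local contrast by adding back 'amount'
    times the difference between the image and a Gaussian-blurred copy."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x**2 / (2.0 * sigma**2))
    kernel /= kernel.sum()
    # Separable Gaussian blur: filter rows, then columns.
    blur = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, img)
    blur = np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, blur)
    return img + amount * (img - blur)

# A hard step edge picks up the characteristic USM halos: an undershoot
# below 0 on the dark side and an overshoot above 1 on the bright side.
img = np.zeros((16, 16))
img[:, 8:] = 1.0
out = unsharp_mask(img)
```

The overshoot/undershoot pair is exactly the "halo" signature discussed later in the thread; deconvolution instead tries to invert an assumed PSF rather than exaggerate edge contrast.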
It appears that DxO (6.x and 7.x) may (itself) employ some amount of some sort of "silent" NR (or LPF) that acts to reduce the deconvolution artifacts when image-noise (and/or ISO Gain annunciated in the image-file meta-data) is at higher levels.
I hadn't looked at DxO's since my trial a year or so ago. The DxO sharpened image you linked above seemed to also have a fair amount of USM applied - was that intentional or is this what DxO does by default? It could just be that strong dampening makes deconvolution start looking like USM :-) When I have a little time I will pass your original through InFocus and post a comparison of the two.
No USM used (in or after) DxO Optics Pro 7.23. Just the "Lens Softness" corrections at default values (Global=0, Detail=0). The "Bokeh" low-pass filtering was backed off to 35 from its default value (50 out of 100) - as I suspect that they may have made the default (50) value of the new control cause a little more low-pass filtering to occur than was the case in version 6.x (which had no "Bokeh" control). It is (only) my guess that DxO Labs (probably) responded to complaints regarding artifacts (in versions 6.x) by setting up the default settings (in versions 7.x) to have a little more low-pass filtering than previously existed (when there was no user control for that).

I find that this (setting of 35) is indeed enough to place the "Lens Softness" tool a bit on the "crispy side". As I suspect that increasing the "Detail" control increases the number of iterative recirculations, I usually leave it alone, in favor of reducing the "Bokeh" control-setting, which (I suspect) does not affect the number of iterative re-circulations.

The (Chrominance/Luminance) NR was set very low (at only 20% of the setting-values that DxO Optics Pro automatically sets). As I mentioned, I generally tend to increase the NR setting slightly above those levels (on a percentage basis) as a technique for slightly softening any "rough edges" surrounding artifacts or perceived "crispiness". In this case I did not (in favor of minimal NR).


Personal perceptions of such (sharpened) images are always quite subjective, it seems. I do not myself see characteristic "halos" or "jaggies" commonly associated with USM-related edge-enhancement processes in this particular image. That is not to say that you do not, however ...

Others have at times remarked that my "Lens Softness" results look "oversharpened". I initially used to use more aggressive "Detail" and "Global" settings - but these days almost always keep them at their default settings (except for what are poorly focused, soft images, etc.) - though doing that alone in no way guarantees "perceptual bliss". All these things are somewhat relative.

Sometimes I wonder (as well) if our senses are tuned to respond in adverse ways when we see what is not "soft and blurry" to some extent in the far-field/background. I think that it is largely a matter of what our minds' eyes are "tuned" to expect in images from previous viewing experiences ? Thus, if/when we get what we think that we desired, such things may register an unexpected perceptual "dissonance" ? There seems no "objective" way to determine such things, though ...

It's not clear whether your processing of the lossless JPEG in my DPR Image Gallery would be a meaningful way to proceed when using Topaz InFocus? I see that the TIF version from which the JPEG was made is 16 MBytes in size - so I could email it to you if you like (PM me an email address if so). In the final analysis, though, we don't really know if the InFocus plug-in (may, possibly) operate on a RAW-data level (like DxO "Lens Softness" corrections very likely do) - so even then, processing that TIF may not be the same? What do you think about that?
 
Detail Man wrote:

Out of curiosity, how can one tell that it is using wavelet transforms (as opposed to standard block transforms, such as the FFT, etc.)? I have not read (or at least, absorbed) much about wavelet transforms (which I do know have some differing characteristics). Any brief thoughts?
I am a dsp guy. In my (non-mathematical) mind, wavelets are just a fancy name for "filterbanks with certain properties". Filterbanks can be seen as a generalization of block transforms (such as the FFT, DCT & friends). In practice, you get to trade spatial resolution for spatial-frequency resolution in ways that can be beneficial for a given problem (same as trading temporal resolution for frequency resolution in audio applications). The core issue is that no transform can ever give you the exact frequency and the exact place for every signal component (sort of the Heisenberg uncertainty principle for signal processing).
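The filterbank view above can be made concrete with the simplest possible example: a one-level Haar filterbank, a two-channel analysis/synthesis pair with perfect reconstruction (just an illustration, not the wavelet any particular sharpener uses):

```python
import numpy as np

def haar_analysis(x):
    """One filterbank level: an orthonormal lowpass branch (pairwise sums)
    and highpass branch (pairwise differences), each downsampled by 2."""
    x = np.asarray(x, dtype=float)
    lo = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    hi = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return lo, hi

def haar_synthesis(lo, hi):
    """Invert the analysis exactly: upsample both branches and interleave."""
    x = np.empty(lo.size * 2)
    x[0::2] = (lo + hi) / np.sqrt(2.0)
    x[1::2] = (lo - hi) / np.sqrt(2.0)
    return x

x = np.array([4.0, 2.0, 5.0, 5.0, 1.0, 0.0, 3.0, 7.0])
lo, hi = haar_analysis(x)
rebuilt = haar_synthesis(lo, hi)   # perfect reconstruction of x
```

Each `hi` coefficient is localized to a two-sample neighborhood, which is exactly the spatial-resolution-for-frequency-resolution trade described above; recursing on `lo` gives the usual multi-level wavelet transform.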

I am highly skeptical of claims to the effect that wavelets will cure cancer and serve coffee in the morning. However, it seems to be a nice framework for solving some dsp problems better and more efficiently. What bugs me is that wavelets seem like "physicists reinventing what the signal-processing community has been doing for 50 years". Being clever people, one might assume that they will propose clever solutions, sometimes better than what the signal-processing community had developed, but since they did it from scratch the terminology is very different, and it can be really hard to understand what is really new/better about it, and what is just "same sh*t, new wrapping"...

-h
 
More than just deconvolution then, from your description.

Here is a first pass: the original, DxO with your settings and InFocus alternating in this sequence every two seconds. For my tastes the DxO settings are too strong (fuzzy tonal-contrast) so I matched them with the 'Pre-sharpening Common' setting in InFocus that I also usually consider too strong (too crisp). I don't know what DPR will do to the GIF so I recommend downloading the original and opening it in a browser - hit ctrl-zero to make sure that the browser is not zooming the images: in these comparisons one is not allowed to view images at more than 100% ;-)

Original at 100%, DxO as setup by DM, Topaz InFocus 'Presharpening Common' settings

Below are just the DxO and InFocus renderings alternating every 3 seconds for ease of comparison. It's clear which is which.

 DxO and InFocus set up as above

Jack

Edit: It appears that DPR is not displaying the animated GIFs properly (the top one just looks like the original and the bottom one just looks like InFocus). Let me know if the originals work (they work for me). If they don't, I'll post individual images.
 
The animation works just fine when clicking original. Can you do the same thing with a picture having several faces? I think artifacts would be more noticeable for that sort of picture.
 
Victor Engel wrote:

The animation works just fine when clicking original. Can you do the same thing with a picture having several faces? I think artifacts would be more noticeable for that sort of picture.
More than happy to if you'd like to share an unsharpened image to work on. However I should mention that we normally shy away from super sharp portraits that show every little defect, preferring instead to 'soften' them a bit. That's why deconvolution is typically thought to be best suited for use on natural scenes with a lot of high frequency detail, such as the one shared by the OP.
 
Jack Hogan wrote:
Victor Engel wrote:

The animation works just fine when clicking original. Can you do the same thing with a picture having several faces? I think artifacts would be more noticeable for that sort of picture.
More than happy to if you'd like to share an unsharpened image to work on. However I should mention that we normally shy away from super sharp portraits that show every little defect, preferring instead to 'soften' them a bit. That's why deconvolution is typically thought to be best suited for use on natural scenes with a lot of high frequency detail, such as the one shared by the OP.
Good point. The idea was not so much to apply it to that sort of image but to try to flush out artifacts. Our brains are particularly tuned to faces, which is why I suggested that. I'll let you know if I run across a good test image.
 
In the lower comparison, which is the one I looked at, there are horrendous artifacts which are somewhat blurred in one of the versions - I don't know which one.

Sorry if I sound harsh, but is there a point to showing these awful renderings?

I'm not on the attack here, I just don't understand what is supposed to be demonstrated. I mean, I can do ugly renderings with any converter - but try not to...
 
True. Looks like excessive USM applied to JPEG compression artifacts. Still, given the subject matter, it's hard to discern how much of that is from the deconvolution procedure and how much is from other factors, like JPEG compression.

I guess I don't really care that much. I have yet to see a deconvolution product that produces satisfactory results, so I likely wouldn't use the product anyway.
 
I share your view on deconvolution.

I get extremely sharp results from my D800 files using ACR 7.2, and even more so using CaptureOne 7, without visible artifacts. And that is more than I can say for any trials with deconvolution.
 
Ralf Ronander wrote:

In the lower comparison, which is the one I looked at, there are horrendous artifacts which are somewhat blurred in one of the versions - I don't know which one.

Sorry if I sound harsh, but is there a point to showing these awful renderings?

I'm not on the attack here, I just don't understand what is supposed to be demonstrated. I mean, I can do ugly renderings with any converter - but try not to...
Nothing is supposed to be demonstrated: just showing two different algorithms at work. And since you mention it, you don't sound harsh: you sound boorish.
 
Ralf Ronander wrote:

I share your view on deconvolution.

I get extremely sharp results from my D800 files using ACR 7.2, and even more so using CaptureOne 7, without visible artifacts. And that is more than I can say for any trials with deconvolution.
Excellent, I'd be interested in seeing that. Feel free to share your interpretation of the same area of the original above sharpened with your tools of choice.
 
