The whole question of lens sharpness...

Started Jun 12, 2013 | Discussions
olliess
Contributing Member • Posts: 874
Re: The whole question of lens sharpness ...
In reply to Detail Man, Jun 21, 2013

Detail Man wrote:

Thanks for the links. But what about the "(model) blur function of a lens" ?

You might find useful the Wiki entries for the Point spread function (PSF)

What is the specific significance of "distance to subject" ?

and the Circle of confusion.

Reply   Reply with quote   Complain
Jeff
Veteran Member • Posts: 4,470
Re: A different view ...
In reply to Reilly Diefenbach, Jun 21, 2013

Reilly Diefenbach wrote:

Jeff wrote:

Reilly Diefenbach wrote:

My advice to anyone on the fence would be to go for the highest MP body you can afford, then look for great cheap lenses to match if you're budget limited. They are out there in quantity, and you won't be disappointed. 4K monitors and big flat panels are here, and the more pixels the merrier.

I strongly disagree with this blanket advice. If maximizing detail is your thing -- I'm not judging, that's an interesting thing to do -- then fine.

It is indeed my thing, and a whole lot of others', and the only sane reason to lug around four or five pounds of camera and lens and spend lots of time and money on hardware and software :^) Also microcontrast and overall "pop," which get better as you go up the chain. Even a casual snapshot takes on a whole new intensity. (Not my shot.)

As I said, if that's your thing ...

Reply   Reply with quote   Complain
hjulenissen
Senior Member • Posts: 1,593
Re: The whole question of lens sharpness...
In reply to olliess, Jun 21, 2013

I think that you are approaching this from a strange angle. Previous claims suggest that it is impossible to recover "true sharpness". Now, as long as "true sharpness" is not clearly defined, this could be taken to mean anything, but I think that for most sensible interpretations it is wrong:

1. An image that is perceived as "blurry" can, in fact, usually be processed so as to appear "less blurry"

2. A camera system with a given MTF50 can usually be processed so as to have a higher MTF50.

In both cases, there are practical compromises to be made, and deconvolution is far from a solution to all problems. (A minimal sketch of point 2 follows below.)
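
To make point 2 concrete, here is a minimal Python sketch (everything is illustrative: a Gaussian kernel stands in for the camera system, and a toy 3-tap filter stands in for the post-processing). The frequency at which the normalized response falls to 0.5 -- the MTF50 -- moves up after filtering:

import numpy as np

# Hypothetical 1-D system: a Gaussian blur kernel stands in for the camera.
def gaussian_kernel(sigma, n=21):
    x = np.arange(n) - n // 2
    k = np.exp(-x**2 / (2.0 * sigma**2))
    return k / k.sum()

def mtf50(kernel, n_fft=4096):
    mtf = np.abs(np.fft.rfft(kernel, n_fft))
    mtf /= mtf[0]                            # normalize to 1 at DC
    return np.argmax(mtf < 0.5) / n_fft      # first frequency (cycles/px) below 0.5

blur = gaussian_kernel(sigma=2.0)
sharpen = np.array([-0.5, 2.0, -0.5])        # toy sharpening filter (unity gain at DC)
combined = np.convolve(blur, sharpen)        # blur followed by sharpening

print("MTF50 before sharpening:", mtf50(blur))
print("MTF50 after sharpening: ", mtf50(combined))
# The combined system measures a higher MTF50 -- at the cost of a 3x gain
# (hence 3x noise) at the Nyquist frequency, which is the compromise meant above.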

-h

Reply   Reply with quote   Complain
hjulenissen
Senior Member • Posts: 1,593
Re: The whole question of lens sharpness...
In reply to Basalite, Jun 21, 2013

Basalite wrote:

.... There is also no point in a soft focus lens when you can use software to blur a sharp lens.

If you are about to clip highlights in your camera (we all do from time to time, I think), then blurring that bright dot before clipping will produce different results that may be hard to simulate in Photoshop.
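
A minimal numeric sketch of this point (a 1-D toy "scene" and a 3-tap kernel standing in for the soft-focus lens; the sensor clips at 1.0). Blurring before the highlights clip is not the same as clipping first and then blurring the file, which is all software can do:

import numpy as np

scene = np.array([0.2, 0.2, 5.0, 0.2, 0.2])   # one very bright dot (5x the clip level)
blur = np.array([0.25, 0.5, 0.25])            # toy soft-focus PSF

# Optical soft focus: the blur happens before the sensor clips at 1.0
optical = np.clip(np.convolve(scene, blur, mode='same'), 0.0, 1.0)

# Software soft focus: the sensor clips first, then we blur the clipped file
software = np.convolve(np.clip(scene, 0.0, 1.0), blur, mode='same')

print(optical)   # [0.15 1.   1.   1.   0.15] -- a broad clipped plateau
print(software)  # [0.15 0.4  0.6  0.4  0.15] -- the dot's true brightness is lost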

-h

Reply   Reply with quote   Complain
hjulenissen
Senior Member • Posts: 1,593
Re: Sharpness not too important on 2.4MP monitor ...
In reply to Basalite, Jun 21, 2013

Basalite wrote:

??? Where did I round anything? The only people taking license to round things off are those claiming there is such a thing as a 2.4 MP monitor.

You have managed to spam this thread with irrelevant nonsense by not understanding the intent of the poster ("high-resolution displays are readily available") and starting a meaningless numbers war. Sounds like you should perhaps take your own advice:

Basalite wrote:

...You are grossly over-thinking the subject, as a lot of modern day "photographers" like to do in our digital age.

"BertIverson wrote:

My take on this is simple. If one can fill the frame (avoiding cropping), lens sharpness is usually irrelevant when viewing on a 2.4MP monitor. Viewing a 16MP shot on a 2.4MP monitor means about 4-6 sensor pixels are rendered into 1 pixel on the monitor.

Reply   Reply with quote   Complain
Alphoid
Senior Member • Posts: 2,254
Re: The whole question of lens sharpness...
In reply to olliess, Jun 21, 2013

olliess wrote:

Alphoid wrote:

In theory, given a perfect model of the lens, I can completely undo any blur caused by the lens.

I took this statement to mean just what it said, namely that you could undo any blur given a perfect knowledge about the blur function of the lens. To me, that means lens defects, diffraction, defocus, and anything else that modifies a point in the image.

My comment was, specifically, any blur caused by the lens. Diffraction, limited depth-of-field, the antialiasing filter, resolution limits of the sensor, etc. are not caused by the lens. You would have those even with an ideal lens. Some of these are removable, and some are not. Chromatic aberration, spherical aberrations, etc. are caused by the lens.

Let's leave things which transform the focal plane (tilt, field curvature, and the like) out of the discussion for now. Those one cannot correct for, but it's not clear whether they are of any relevance to sharpness in photography (subjects are rarely flat).

If you begin by assuming the PSF is linear and translation invariant

I assume it is linear, but sadly, not translation invariant. I do not rely on translation invariance for anything I say. It complicates the modeling and inversion (quite significantly), but does not change what's possible.

call it P, then the observed version of the original image, O(x,y), is the modified image, I(x,y), where

I(x,y) = (P(s) * O)(x,y) + N(x,y),

which includes a functional dependence of P on s, the distance to subject.

We can ignore distance to subject for now, and just worry about P at the focal distance (as explained in an earlier post).

There is also additive noise, N(x,y), which means you are not going to be able to invert perfectly by just applying the inverse of P(s) to I(x,y) - N(x,y).

You will get back, in your notation, O(x,y)+(P^-1 * N)(x,y). This is the exact original image, with a transformed version of the noise. In practice, you end up increasing high-frequency noise, but with a typical lens, to an extent that does not matter at low ISO. Note that this, specifically, undoes any blur caused by the lens, as in my original claim.
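
A minimal sketch of this exchange (1-D toy with a known Gaussian PSF and white noise; all parameters are illustrative). Applying P^-1 in the frequency domain removes the blur exactly and leaves O + P^-1 * N, with the noise amplified most at high frequencies:

import numpy as np

rng = np.random.default_rng(0)
n = 256
O = np.zeros(n); O[100:130] = 1.0             # original "image" O: a bright bar

f = np.arange(n // 2 + 1) / n                 # spatial frequency, cycles/px
P = np.exp(-2.0 * (np.pi * 0.7 * f)**2)       # Gaussian PSF (sigma = 0.7 px), freq domain

N = 0.005 * rng.standard_normal(n)            # additive sensor noise ("low ISO")
I = np.fft.irfft(np.fft.rfft(O) * P, n) + N   # I = P*O + N

O_hat = np.fft.irfft(np.fft.rfft(I) / P, n)   # apply P^-1: result is O + P^-1 * N

print("RMS error, blurred:  ", np.sqrt(np.mean((I - O)**2)))
print("RMS error, deblurred:", np.sqrt(np.mean((O_hat - O)**2)))
# With this gentle PSF and low noise, the deblurred error is the smaller one: the
# blur is gone and the amplified noise (1/P is only ~11 at Nyquist) is tolerable.
# Increase sigma or the noise and the trade reverses -- the point in dispute below.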

Even if you take away the noise, you're still left with something that looks just like the 2-D heat equation. Thus if you are guaranteed a unique inverse for the blur problem, it seems to imply that you are also guaranteed solutions to the backward heat equation.

This is not correct. The systems are sort-of-similar in that they sort-of-blur things, but the math is different (hence, my counterexample). If one is not invertible, it does not guarantee that the other is not.

That said, I do believe (and the key word is believe -- we're slightly outside of my domain of expertise for the heat equation) that the 2-D heat equation is invertible (and a quick Google search brings up papers which believe the same).

Hence my comment about entropy.

Please explain this argument. How does entropy relate? I believe you have a misunderstanding of entropy. If you give me an exact state of any physical system (classical or quantum -- but I'll ignore quantum for the purposes of this discussion, since it will only complicate things), I can simulate it backwards to get the state at any point in the past. Thermodynamics just tells me that I cannot physically bring it back to that state without increasing entropy elsewhere. Total entropy in the world increases.

Now moving on to your modified claim:

Original claim. You just misunderstood it.

Convolving an image with a PSF means, at least in theory:

1) in the frequency domain, some of the spectrum gets spread beyond the Nyquist limit

This is incorrect.

I see I was completely unclear, so let me try again:

1) The operation of masking the image with a fixed frame (e.g., a rectangular windowing) is equivalent to convolution in the frequency domain. Since the Fourier transform of a rectangle has infinite support, some of the variance below the Nyquist limit must be spread beyond the Nyquist limit. Do you agree?

I am not sure. I don't believe I agree, but I think I may be misunderstanding what you're trying to say. There are several things which I'm finding ambiguous (e.g. What operation in the optical chain corresponds to 'masking the image with a fixed frame?'). Can you write out the above being slightly more verbose/specific?

2) in the spatial domain, some of the image gets spread beyond the edge of the frame

This is correct, but not significant. The PSF is small. This would only matter if the PSF was a substantial portion of the image.

The PSF may or may not be small in extent. In theory, even the PSF due only to diffraction has infinite support, although the magnitude is small outside of a small extent.

The PSF due to the lens is small, at least as it relates to sharpness. Places where it has large extent but small magnitude contribute to contrast (rather than sharpness).

G(H(image)+noise)=image+G(noise)

For a sharpening filter, G>1 at high frequencies, so noise increases. In practice, this doesn't matter much at low ISO.

H and G are linear operators. H(image) + noise is not a linear operator. G is the inverse of H but not of H + noise, so G is not the inverse solution of the problem, right?

I think we're arguing terminology and what we consider to be 'the system.' You get exactly what I said -- your original image plus a transformed version of the noise. This undoes any blur caused by the lens. Do you disagree?

Reply   Reply with quote   Complain
Reilly Diefenbach
Senior Member • Posts: 8,020 • Gear list
Re: A different view ...
In reply to Jeff, Jun 22, 2013

Yes, it's true, some of us fringe types (especially people who hang out at DPR) want a lot more than iPhone quality. We've made the commitment in dollars and time to make our pics the best they can be.

1/4 size, taken with the elderly 35f2D.

Reply   Reply with quote   Complain
GaryW
Veteran Member • Posts: 6,772 • Gear list
Re: The whole question of lens sharpness...
In reply to Alphoid, Jun 22, 2013

Alphoid wrote:

olliess wrote:

Alphoid wrote:

In theory, given a perfect model of the lens, I can completely undo any blur caused by the lens.

I took this statement to mean just what it said, namely that you could undo any blur given a perfect knowledge about the blur function of the lens. To me, that means lens defects, diffraction, defocus, and anything else that modifies a point in the image.

My comment was, specifically, any blur caused by the lens. Diffraction, limited depth-of-field, antialiasing filter, resolution limits of the sensor, etc. are not caused by the lens. You would have those even with an ideal lens. Some of these are removable, and some are not. Chromatic aberration, spherical aberations, etc. are caused by the lens.

I had one photo in particular that was a bit unsharp due to diffraction -- not exactly blurry, but you could tell it wasn't sharp if you zoomed in -- and deconvolution completely rescued it. It was like the blurriness was not even there.

I have less luck with countering poor focus, but for slight diffraction, it seems to work fine.

In addition, it's what DxO uses (I think) for their "lens sharpness", to counteract the effects of unsharp lenses, particularly in the corners where they use more intensity to counter the greater softness.  This seems to work to a large extent.

-- hide signature --

Gary W.

GaryW's gear list:
Sony Alpha NEX-6 Sony E 16-50mm F3.5-5.6 PZ OSS Sony Cyber-shot DSC-V3 Sony Cyber-shot DSC-HX5 Sony Alpha DSLR-A100 +10 more
Reply   Reply with quote   Complain
GaryW
Veteran Member • Posts: 6,772 • Gear list
Re: A different view ...
In reply to Reilly Diefenbach, Jun 22, 2013

Reilly Diefenbach wrote:

Yes, it's true, some of us fringe types (especially people who hang out at DPR) want a lot more than iPhone quality. We've made the commitment in dollars and time to make our pics the best they can be.

But you have to spend increasingly large amounts of money to get ever slighter increases in quality. You don't necessarily need "the sharpest" cameras and lenses or nothing else will do. It only has to be good enough for the intended use. I've got a number of poster-sized prints from myself or pro photographers, and as far as I know or can tell, nothing was over 20MP. I guess they could have been sharper or more detailed, but yet, they still pried my money away from me. So, at least for a couple of the prints, a 12MP Nikon was sufficient. And they are making money at this thing! Photography is just a hobby for me. I can't justify a top-of-the-line expenditure. I've seen people with large bags, vests, etc. What would I do with it all? I already have multiple lenses, and feel like it's a bit much. What I have already does a good job. So, it's really all relative, and depends on each individual. But pursuing sharpness for sharpness' sake does seem a bit obsessive.

I think part of why there's so much attention on sharpness is that it's easy to measure and quantify.  I also am interested in color and bokeh, and general usability.

1/4 size, taken with the elderly 35f2D.

Eh?

-- hide signature --

Gary W.

GaryW's gear list:
Sony Alpha NEX-6 Sony E 16-50mm F3.5-5.6 PZ OSS Sony Cyber-shot DSC-V3 Sony Cyber-shot DSC-HX5 Sony Alpha DSLR-A100 +10 more
Reply   Reply with quote   Complain
Jeff
Veteran Member • Posts: 4,470
Re: A different view ...
In reply to Reilly Diefenbach, Jun 22, 2013

Reilly Diefenbach wrote:

Yes, it's true, some of us fringe types (especially people who hang out at DPR) want a lot more than iPhone quality. We've made the commitment in dollars and time to make our pics the best they can be.

1/4 size, taken with the elderly 35f2D.

BFD.

You see, the issue is what we mean by quality.  Quality can mean very, very different things. If what you have in mind is gathering pixels, then amen brother, go out and get 'em.  The technical achievement is amazing. Others may have very different values.

What I'm objecting to is the narrow conceit, and poor advice, that gathering pixels is always (or ever) the overriding objective. That's just not true. There's a lot of interesting phone photography going on, and folks otherwise capturing great images with pretty minimal equipment. Dismissing it is pretty silly, if you ask me.

Good photography can be done, and is done everyday, on modest budgets.  That's a good thing, and should be encouraged.

Reply   Reply with quote   Complain
Detail Man
Forum Pro • Posts: 14,950
Re: DxO Labs' "Lens Softness" Corrections
In reply to GaryW, Jun 22, 2013

GaryW wrote:

Alphoid wrote:

olliess wrote:

Alphoid wrote:

In theory, given a perfect model of the lens, I can completely undo any blur caused by the lens.

I took this statement to mean just what it said, namely that you could undo any blur given a perfect knowledge about the blur function of the lens. To me, that means lens defects, diffraction, defocus, and anything else that modifies a point in the image.

My comment was, specifically, any blur caused by the lens. Diffraction, limited depth-of-field, antialiasing filter, resolution limits of the sensor, etc. are not caused by the lens. You would have those even with an ideal lens.

Diffraction is caused by the aperture within the lens system - and diffraction, the antialiasing filter, and resolution limits of the sensor all produce effects that influence the shape of the system PSF.

Some of these are removable, and some are not. Chromatic aberration, spherical aberations, etc. are caused by the lens.

I had one photo in particular that was a bit unsharp due to diffraction -- not exactly blurry, but you could tell it wasn't sharp if you zoomed in -- and deconvolution completely rescued it. It was like the blurriness was not even there.

I have less luck with countering poor focus, but for slight diffraction, it seems to work fine.

In addition, it's what DxO uses (I think) for their "lens sharpness", to counteract the effects of unsharp lenses, particularly in the corners where they use more intensity to counter the greater softness. This seems to work to a large extent.

The only published mention of deconvolution-deblurring being used in DxO Labs' "Lens Softness" corrections that I know of is here (see the portion of the quoted text I have underlined below):

Lens Softness correction

'Lens softness' is the intrinsic degradation of sharpness introduced by an imaging device (camera body plus lens). In image processing terms, this is a local, color-channel dependent, anisotropic convolution of the original image, which results in a 'blurry' image. In terms of image spatial frequencies, 'blurriness' refers to how well low spatial frequencies are reproduced in the image.

You may be familiar with these concepts if you are familiar with MTF (Modulation Transfer Function) curves. DxO Labs has developed a unique unit called the BxU (Blur eXperience Unit) which is a mathematical way of describing this 'blur'. Reducing the ‘lens softness’ or 'blur' or 'lack of sharpness' means performing local, color-channel dependent and anisotropic deconvolution of the image produced by the camera.

Furthermore, DxO deblurring uses a complex contextual approach taking into account both local noise and local detail level in the image. As a result, deblurring will be automatically reduced in uniform areas (like a pure blue sky.), but increased in a detailed zone.

http://www.beautiful-landscape.com/Thoughts35.html

In a private written communication in 2010, a DxO Labs employee confirmed to me that DxO Optics Pro "Lens Softness" does (in part) utilize "deconvolution deblurring". Here is their present information describing it in general terms:

http://www.dxo.com/intl/photography/tutorials/enhance-sharpness-your-camera-dxo-optics-pro

As the DxO RAW Optics Correction Modules for a given camera-lens combination appear (to me) to be, in general, significantly more effective than the DxO JPG Optics Correction Modules for the same combination, I think it may be the case that some of the related processing proceeds at the RAW level, prior to de-mosaicing of the RAW image data.

This approach is distinct from the deconvolution-deblurring that occurs in the sharpening tools of Adobe Lightroom, Camera RAW, Photoshop, the PS/LR plugin Topaz Infocus, as well as RAW Therapee's R-L DD - all of which appear to operate on already de-mosaiced image data.

DM ...

Reply   Reply with quote   Complain
hjulenissen
Senior Member • Posts: 1,593
Theoretical deconvolution vs practical
In reply to Alphoid, Jun 22, 2013

I come from the world of convolution of 1-D LTI audio signals. There is a mature theory for those things. I assume that someone has worked out the theory for extending this to non-time/shift-invariant systems?

So, basically, an invertible known linear system with known output can be inverted. The degradations of a camera lens can be inverted to the degree that the entire system behaves like that.

This theoretical model is handy for understanding what is going on, but there are several practical complications:

1. It is impossible to measure the PSF perfectly, and if you did, it might change ever so slightly 2 seconds later

2. To know all output, you would need the outputs beyond the sensor. I guess that this can be sorted out by discarding results within 1 or 0.5 effective PSF kernel widths of the image edge (cropping), mirror-imaging the image to artificially get something to work on, or living with image artifacts at the edges.

3. The PSF might be very large. Perhaps deconvolution + contrast modelling works to give good pictures, but surely there must be cases where the PSF reduces contrast in the right half of the image and not the left half. Global contrast adjustment would not fix that, and a 2MP PSF would be impractical?

4. The kernel might contain infinitely deep zeros (located on the unit circle of the Z-transform). These represent complete loss of information that cannot be brought back. I don't know if this is common or possible for a PSF (see the sketch after this list).

5. Light itself contains noise (being a stream of photons).

6. Any real camera contains an image sensor with a CFA, limited sensel density, clipping highlights and noise. Most also have an OLPF. We don't get to process the light from the lens until it has been further distorted by those things.
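
A minimal sketch of point 4 (a toy two-tap kernel, not a real lens PSF). The moving average [0.5, 0.5] has a spectral zero at Nyquist, so any Nyquist-frequency component of the input is destroyed outright, and no inverse filter can recover it:

import numpy as np

n = 64
nyquist = np.cos(np.pi * np.arange(n))     # pure Nyquist-frequency input: +1, -1, +1, ...

H = np.fft.rfft([0.5, 0.5], n)             # two-tap average; response is zero at f = 0.5
blurred = np.fft.irfft(np.fft.rfft(nyquist) * H, n)

print(np.max(np.abs(blurred)))             # ~0 at machine precision: the component is
                                           # gone, not merely attenuated
# Inverting would mean dividing by H at its zero -- 0/0. That is complete loss
# of information, exactly as described in point 4.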

-h

Reply   Reply with quote   Complain
Alphoid
Senior Member • Posts: 2,254
Re: Theoretical deconvolution vs practical
In reply to hjulenissen, Jun 22, 2013

hjulenissen wrote:

1. It is impossible to measure the PSF perfectly, and if you did, it might change ever so slightly 2 seconds later

True indeed.

Systems that have an approximate measurement (DxO) do pretty well. Doing a near-exact measurement for a lens would be very expensive.

2. To know all output, you would need the outputs beyond the sensor. I guess that this can be sorted out by discarding results within 1 or 0.5 effective PSF kernel widths of the image edge (cropping), mirror-imaging the image to artificially get something to work on, or living with image artifacts at the edges.

Irrelevant, for the reasons you mention later. The PSF is pretty tiny. We're talking about a tiny, few-pixel crop.

This is a big deal for audio, where you really do end up with long transients.

3. The PSF might be very large. Perhaps deconvolution + contrast modelling works to give good pictures, but surely there must be cases where the PSF reduces contrast in the right half of the image and not the left half. Global contrast adjustment would not fix that, and a 2MP PSF would be impractical?

For the purposes of sharpness (the region where the response is within 3 dB or so), the relevant part of the kernel really is quite tiny.

4. The kernel might contain infinitely deep zeros (located on the unit circle of the Z-transform). These represent complete loss of information that cannot be brought back. I don't know if this is common or possible for a PSF.

This is very hard to do optically, but not impossible. See figure 4 in:

https://graphics.stanford.edu/courses/cs448a-08-spring/levin-coded-aperture-sig07.pdf

Diffraction aside, you can't practically subtract light (this is different from audio). As a result, the way to get zeros is to add light in really funny patterns.

This is not something I'd expect to see in any real-world lens.

This is something you do see in acoustics, where you will have standing waves in a room and the like.

5. Light itself contains noise (being a stream of photons).

No problem.

6. Any real camera contains an image sensor with a CFA, limited sensel density, clipping highlights

CFA and clipping are potential problems, indeed.

and noise.

Most also have an OLPF. We don't get to process the light from the lens until it has been further distorted by those things.

No problem. I intentionally left this out of the discussion, but this has a PSF as well that could be included in the model. In that case, we'll compensate for the overall blur better than for just the blur of the lens (although, depending on OLPF design, possibly not perfectly). Or we can exclude it, in which case we only correct for blur of the lens, but perfectly.

Reply   Reply with quote   Complain
GaryW
Veteran Member • Posts: 6,772 • Gear list
Re: DxO Labs' "Lens Softness" Corrections
In reply to Detail Man, Jun 22, 2013

Detail Man wrote:

GaryW wrote:

I have less luck with countering poor focus, but for slight diffraction, it seems to work fine.

In addition, it's what DxO uses (I think) for their "lens sharpness", to counteract the effects of unsharp lenses, particularly in the corners where they use more intensity to counter the greater softness. This seems to work to a large extent.

...You may be familiar with these concepts if you are familiar with MTF (Modulation Transfer Function) curves. DxO Labs has developed a unique unit called the BxU (Blur eXperience Unit) which is a mathematical way of describing this 'blur'. Reducing the ‘lens softness’ or 'blur' or 'lack of sharpness' means performing local, color-channel dependent and anisotropic deconvolution of the image produced by the camera.

Furthermore, DxO deblurring uses a complex contextual approach taking into account both local noise and local detail level in the image. As a result, deblurring will be automatically reduced in uniform areas (like a pure blue sky.), but increased in a detailed zone....

...

As the DxO RAW Optics Correction Modules for a given camera-lens combination appear (to me) to be, in general, significantly more effective than the DxO JPG Optics Correction Modules for the same combination, I think it may be the case that some of the related processing proceeds at the RAW level, prior to de-mosaicing of the RAW image data.

The "lens sharpness" option does not appear for unsupported lenses.  I am not sure what happens for RAW vs. JPEG.

This approach is distinct from the deconvolution-deblurring that occurs in the sharpening tools of Adobe Lightroom, Camera RAW, Photoshop, the PS/LR plugin Topaz Infocus, as well as RAW Therapee's R-L DD - all of which appear to operate on already de-mosaiced image data.

Whatever DxO is doing, they've really got it figured out.  Typical deconvolution tends to produce artifacts if overdone, so they are definitely doing something to control the noise.

I think where DxO has done a good job overall is making their program deceptively simple.  You could probably get good results with RAW Therapee, but RT requires more fiddling.  My thought is that at lower ISO, I liked RT just fine, but at higher ISO, the NR is not the best, and the deconvolution will increase the noise.  I can get pretty good results with DxO at any ISO.


-- hide signature --

Gary W.

GaryW's gear list:
Sony Alpha NEX-6 Sony E 16-50mm F3.5-5.6 PZ OSS Sony Cyber-shot DSC-V3 Sony Cyber-shot DSC-HX5 Sony Alpha DSLR-A100 +10 more
Reply   Reply with quote   Complain
Detail Man
Forum Pro • Posts: 14,950
Re: DxO Labs' "Lens Softness" Corrections
In reply to GaryW, Jun 22, 2013

GaryW wrote:

Detail Man wrote:

GaryW wrote:

I have less luck with countering poor focus, but for slight diffraction, it seems to work fine.

In addition, it's what DxO uses (I think) for their "lens sharpness", to counteract the effects of unsharp lenses, particularly in the corners where they use more intensity to counter the greater softness. This seems to work to a large extent.

...You may be familiar with these concepts if you are familiar with MTF (Modulation Transfer Function) curves. DxO Labs has developed a unique unit called the BxU (Blur eXperience Unit) which is a mathematical way of describing this 'blur'. Reducing the ‘lens softness’ or 'blur' or 'lack of sharpness' means performing local, color-channel dependent and anisotropic deconvolution of the image produced by the camera.

Furthermore, DxO deblurring uses a complex contextual approach taking into account both local noise and local detail level in the image. As a result, deblurring will be automatically reduced in uniform areas (like a pure blue sky.), but increased in a detailed zone....

...

As the DxO RAW Optics Correction Modules for a given camera-lens combination appear (to me) to be, in general, significantly more effective than the DxO JPG Optics Correction Modules for the same combination, I think it may be the case that some of the related processing proceeds at the RAW level, prior to de-mosaicing of the RAW image data.

The "lens sharpness" option does not appear for unsupported lenses. I am not sure what happens for RAW vs. JPEG.

When processing OOC JPGs from supported camera-lens combinations (using the accompanying JPG DxO Optical Corrections Module), the "Lens Softness" tool works - but is less dramatic than the RAW-mode version (with my LX3; I have not checked with my GH2). Part of that is that, even when adjusted to minimum using on-camera controls, the in-camera JPG sharpening and NR limit how much can be done after the fact of in-camera JPG encoding.

This approach is distinct from the deconvolution-deblurring that occurs in the sharpening tools of Adobe Lightroom, Camera RAW, Photoshop, the PS/LR plugin Topaz Infocus, as well as RAW Therapee's R-L DD - all of which appear to operate on already de-mosaiced image data.

Whatever DxO is doing, they've really got it figured out. Typical deconvolution tends to produce artifacts if overdone, so they are definitely doing something to control the noise.

I think where DxO has done a good job overall is making their program deceptively simple. You could probably get good results with RAW Therapee, but RT requires more fiddling. My thought is that at lower ISO, I liked RT just fine, but at higher ISO, the NR is not the best, and the deconvolution will increase the noise. I can get pretty good results with DxO at any ISO.

Regarding RT's Richardson-Lucy DD and the DD incorporated into Lightroom / Camera RAW's Sharpening tool: RT's seems perhaps just a bit better, but I rather loathe Adobe's tool. Both can create some "gritty" and ugly processing artifacts.

DxO Optics Pro silently adjusts and increases its NR (and/or a separate, related functionality) when higher ISO sensitivities are used (or perhaps when actual image noise is detected) - at least in the case of some cameras - as a measure to reduce the visibility of DD artifacts. See what "falconeye" reports in conversing with me on this thread at Pentax Forums dot com (which for some silly reason is a domain whose URLs DPReview still blocks):

/forums/digital-processing-software-printing/88315-dxo-pentax-k-7-now-available.html

The "Bokeh" control (Versions 7.x on) - which appears to be some sort of variable corner-frequency low-pass filter(?) - has not seemed to do that much for me when adjusted above the default setting of 50 when I have tried to use it to reduce artifacts - but it seems to manage to increase artifacts when I have tried to lower to levels of around 40. As a result, I generally keep it parked at the default setting of 50, and sometimes use a bit more NR than I would otherwise. Artifacting seems most common in OOF areas where color contrasts exist.

Here is an example in a shot that I took the other day (note the "grittiness" in the upper-right OOF areas). Upping the "Bokeh" control does not help much, and I have to increase the NR all the way up to the automatically set values to quash it (neither of which was done in the example below) - whereas I am used to needing only a fraction of the automatic NR settings in all but higher-ISO cases.

I usually can use only around 1/2 of the automatic NR setting values in such a case as below (which does here manage the shadows and the Red/Blue channel noise in the flower petals). One contributing factor is that the Gamma in the DxO Lighting tools is set fairly high (upwards towards 2.0, which is usually the very most that I use).

DMC-LX3 RAW, F=5.6, T=1/30, ISO=200, DxO Optics Pro 7.23

Note: It looks like the quantization data compression (QF~90%) that DPR is performing these days in all view modes of their image viewer is low-pass filtering out some of what I am describing above. To see the artifacts better, download and view the original JPG here:

http://www.dpreview.com/galleries/4464732135/download/2594948

DM ...

Reply   Reply with quote   Complain
olliess
Contributing Member • Posts: 874
Re: The whole question of lens sharpness...
In reply to Alphoid, Jun 23, 2013

Alphoid wrote:

My comment was, specifically, any blur caused by the lens. Diffraction, limited depth-of-field, antialiasing filter, resolution limits of the sensor, etc. are not caused by the lens. You would have those even with an ideal lens.

Of course diffraction and defocus blur are "caused" by the lens (even an ideal lens). They are certainly a motivation for much of the real-world work on image deconvolution, and I would argue that a "perfect" characterization of the PSF would have to include diffraction at the very least.

Let's leave things which transform the focal plane (tilt, field curvature, and the like) out of the discussion for now. Those one cannot correct for, but it's not clear whether they are of any relevance to sharpness in photography (subjects are rarely flat).

This would also seem to imply that subjects are rarely contained exactly in the plane of focus, hence defocus blur would be relevant after all. But I agree, let's leave some of the extra complications aside for now.

You will get back, in your notation, O(x,y)+(P^-1 * N)(x,y). This is the exact original image, with a transformed version of the noise. In practice, you end up increasing high-frequency noise, but with a typical lens, to an extent that does not matter at low ISO. Note that this, specifically, undoes any blur caused by the lens, as in my original claim.

Yes, in principle you have removed some of the blur caused by the lens, but you have also added an unknown amount of extra noise. Since you cannot show unambiguously which new information is restored signal and which is added noise, it's kind of a bogus claim. It would be like claiming you've successfully removed all of the cruft from the ceiling of the Sistine Chapel, but you just aren't sure how much of Michelangelo's paint you've removed in the process.

Even if you take away the noise, you're still left with something that looks just like the 2-D heat equation. Thus if you are guaranteed a unique inverse for the blur problem, it seems to imply that you are also guaranteed solutions to the backward heat equation.

This is not correct. The systems are sort-of-similar in that they sort-of-blur things, but the math is different (hence, my counterexample). If one is not invertible, it does not guarantee that the other is not.

Well, since both inverse problems are ill-posed for essentially the same mathematical reasons, I'm not sure how you can claim this.

That said, I do believe (and the key word is believe -- we're slightly outside of my domain of expertise for the heat equation)

If you are outside your domain of expertise (what is your expertise btw, if you don't mind my asking), then why are you so sure that the problems are mathematically different?

that the 2-D heat equation is invertible (and a quick Google search brings up papers which believe the same).

A quick Google Scholar search brings up papers on both topics which immediately discuss why both the backwards heat equation and the image deconvolution problem are fundamentally ill-posed, and then talk about all the clever (and practical) workarounds people are trying to come up with.

How does entropy relate? I believe you have a misunderstanding of entropy. If you give me an exact state of any physical system (classical or quantum -- but I'll ignore quantum for the purposes of this discussion, since it will only complicate things), I can simulate it backwards to get the state at any point in the past.

And what the ill-posedness of the backward heat equation, negative diffusion processes, and image deconvolution tells you is that, in general, you may not be able to find solutions to simulate backwards in time.

Thermodynamics just tells me that I cannot physically bring it back to that state without increasing entropy elsewhere. Total entropy in the world increases.

You may find interesting some of the discussion in the literature about the relationship between the ill-posedness of the backwards heat equation (and other similar processes) and the arrow of time.

Anyway, let me respond to the more specific points about the edge effects in a following post, because the topic is somewhat separate from the discussion above.

Reply   Reply with quote   Complain
olliess
Contributing Member • Posts: 874
Re: The whole question of lens sharpness...
In reply to Alphoid, Jun 23, 2013

To follow on my previous post:

Alphoid wrote:

olliess wrote:

Alphoid wrote:

In theory, given a perfect model of the lens, I can completely undo any blur caused by the lens.

I took this statement to mean just what it said, namely that you could undo any blur given a perfect knowledge about the blur function of the lens. To me, that means lens defects, diffraction, defocus, and anything else that modifies a point in the image.

(above quoted for context)

[...]

Now moving on to your modified claim:

Original claim. You just misunderstood it.

I will freely admit, my misunderstanding could be ongoing.    So let me just stick to points of clarification here...

1) The operation of masking the image with a fixed frame (e.g., a rectangular windowing) is equivalent to convolution in the frequency domain. Since the Fourier transform of a rectangle has infinite support, some of the variance below the Nyquist limit must be spread beyond the Nyquist limit. Do you agree?

I am not sure. I don't believe I agree, but I think I may be misunderstanding what you're trying to say. There are several things which I'm finding ambiguous (e.g. What operation in the optical chain corresponds to 'masking the image with a fixed frame?'). Can you write out the above being slightly more verbose/specific?

The lens blurs the original image; let's assume for simplicity it's exactly a convolution with a fixed PSF. Since the frame you capture is finite, some information that should have been within the frame has now been blurred outside the frame and is lost. Also, sources that should have been completely outside the frame can contaminate the captured frame.

In the spatial domain, your captured image (I) is the convolution of the PSF (P) and the original image (O), windowed by a rectangular window (R):

I = R . (O * P).
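
A minimal 1-D sketch of I = R . (O * P), with toy numbers (a bright source sits just outside the captured window):

import numpy as np

O = np.ones(40)
O[0:5] = 10.0                            # bright source just OUTSIDE the frame
P = np.array([0.25, 0.5, 0.25])          # toy PSF

full = np.convolve(O, P, mode='same')    # the blur acts on the whole scene...
I = full[5:35]                           # ...but the window R keeps only this part

print(I[:3])                             # [3.25 1. 1.]: the first sample contains
                                         # leaked light, and the samples that would
                                         # explain it (O[0:5]) were never recorded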

2) in the spatial domain, some of the image gets spread beyond the edge of the frame

This is correct, but not significant. The PSF is small. This would only matter if the PSF was a substantial portion of the image.

The PSF may or may not be small in extent. In theory, even the PSF due only to diffraction has infinite support, although the magnitude is small outside of a small extent.

The PSF due to the lens is small, at least as it relates to sharpness. Places where it has large extent but small magnitude contribute to contrast (rather than sharpness).

Now you're getting into subjective arguments about what constitutes sharpness as opposed to contrast (or resolution). Objectively, the stated goal is to undo the blur.

The assumption that the PSF is "small" (here taken to mean "contained within a small radius") is also problematic. You might reasonably claim that deconvolution works in the limit that both the blur and the noise are small, but the point is, for any given blur radius there will be some level of noise which breaks the inversion, and for any level of noise, there is some size to the blur beyond which the original signal cannot be recovered.

As an aside, the PSF for the Hubble Space Telescope was not particularly small.

Furthermore, no matter how "small" (in physical units) your PSF is, it will become relatively "large" with increasing resolution.

G(H(image)+noise)=image+G(noise)

For a sharpening filter, G>1 at high frequencies, so noise increases. In practice, this doesn't matter much at low ISO.

H and G are linear operators. H(image) + noise is not a linear operator. G is the inverse of H but not of H + noise, so G is not the inverse solution of the problem, right?

I think we're arguing terminology and what we consider to be 'the system.' You get exactly what I said -- your original image plus a transformed version of the noise. This undoes any blur caused by the lens. Do you disagree?

To sort of repeat the argument made in my previous post, it's meaningless to say you've "undone" the lens blur when you've also added an unspecified amount of noise which may or may not obscure the gains of your inversion.

Reply   Reply with quote   Complain
hjulenissen
Senior Member • Posts: 1,593
Re: The whole question of lens sharpness...
In reply to olliess, Jun 23, 2013

olliess wrote:

Furthermore, no matter how "small" (in physical units) your PSF is, it will become relatively "large" with increasing resolution.

Large in terms of number of pixels, not in terms of picture height/width. Pseudo-inverting a 100x100-element smooth (lowpass) kernel should not be that much harder than pseudo-inverting a 10x10-element kernel, as long as expectations of recovering high-frequency data are relative to SNR, and not relative to sensel density? From my simple non-mathematician, non-heat-equation background, I would assume that something like Wiener filtering provides a pedagogic starting point for the non-blind image deconvolution problem.
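
In that spirit, here is a minimal non-blind Wiener deconvolution sketch (1-D; known Gaussian PSF; the noise-to-signal power ratio nsr is assumed known, which in practice it never exactly is). The Wiener filter behaves like 1/P where the SNR supports inversion and rolls off to zero where it does not:

import numpy as np

rng = np.random.default_rng(1)
n = 512
x = np.zeros(n); x[200:260] = 1.0                 # original signal

f = np.arange(n // 2 + 1) / n
P = np.exp(-2.0 * (np.pi * 1.5 * f)**2)           # Gaussian PSF (sigma = 1.5 px)

y = np.fft.irfft(np.fft.rfft(x) * P, n)
y = y + 0.01 * rng.standard_normal(n)             # blurred + sensor noise

naive = np.fft.irfft(np.fft.rfft(y) / P, n)       # naive inverse: divide by P everywhere

nsr = 1e-2                                        # assumed noise-to-signal power ratio
W = P / (P**2 + nsr)                              # Wiener filter (P is real here)
wiener = np.fft.irfft(np.fft.rfft(y) * W, n)

for name, est in [("blurred", y), ("naive inverse", naive), ("Wiener", wiener)]:
    print(name, np.sqrt(np.mean((est - x)**2)))
# The naive inverse is swamped by noise amplified by ~exp(11) at Nyquist; the
# Wiener estimate stays stable because it stops inverting where P is tiny.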

When we get the next megapixel monster, it will probably be an even better instrument for being practically limited by the lens alone. Good deconvolution would then seem to have even better conditions for removing as much as practically possible of the lens errors, provided that sensor noise also continues to improve.

-h

Reply   Reply with quote   Complain
Alphoid
Senior Member • Posts: 2,254
Re: The whole question of lens sharpness...
In reply to olliess, Jun 23, 2013

olliess wrote:

This is not correct. The systems are sort-of-similar in that they sort-of-blur things, but the math is different (hence, my counterexample). If one is not invertible, it does not guarantee that the other is not.

Well, since both inverse problems are ill-posed for essentially the same mathematical reasons, I'm not sure how you can claim this.

That said, I do believe (and the key word is believe -- we're slightly outside of my domain of expertise for the heat equation)

If you are outside your domain of expertise (what is your expertise btw, if you don't mind my asking),

For the purposes of this discussion, for signal/image/audio processing, my expertise goes well beyond everything we've talked about. I can make statements with very high levels of confidence there. I have a basic level of knowledge about thermodynamics -- on the order of what a physics major would learn in an undergraduate course on thermodynamics and statistical physics, as well as a scattering of more advanced relevant topics from graduate-level physics (Lagrangian mechanics, quantum mechanics, etc.). I have a deep understanding of concepts like entropy, but I do not have the applied experience of, e.g., numerically trying to invert the heat equation. You can assume a level of knowledge about information theory somewhere in between the two, and sufficient for this discussion (I'd actually love to see you try to make the information theory entropy argument you alluded to; I believe I could take that one apart).

Yourself?

then why are you so sure that the problems are mathematically different?

  1. I can read math. I can come up with counterexamples, and places where intuition for one differs from the intuition for the other. I gave one example, but there are many others, and in both directions. The level of 'blur' caused by solving forward a heat equation is much greater than you would find in any sane optical system. The steady-state solution of a heat equation is a harmonic function. Harmonic functions are painfully hard to invert/extrapolate from (I'd encourage you to try to find something resembling a harmonic function in an optical system). There's the octagon example I gave before. I can give a hundred others.
  2. You are making statements based on the heat equation which do not apply to image deconvolution.

A quick Google Scholar search brings up papers on both topics which immediately discuss why both the backwards heat equation and the image deconvolution problem are fundamentally ill-posed, and then talk about all the clever (and practical) workarounds people are trying to come up with.

The image deconvolution problem is not ill-posed. It becomes ill-posed in a few circumstances:

  • Unknown PSF (blind de-convolution)
  • Extreme levels of blur (band-limited PSF, zeros in the PSF, or nearly band-limited).

In addition, it becomes impractical if there are high levels of noise, either from the sensor or from quantization. Here, the proper term isn't 'ill-posed,' but that's a technicality. If you give me an image taken with a modern camera sensor and a modern lens at ISO 100, you will not have too much noise or quantization, and the level of blur will not be such that you are simply missing information.

If you give me an image where you're trying to correct for unknown atmospheric blur, or a $37 million space telescope with an incorrectly ground lens, a Holga, or an ISO6400 image, all bets are off. That's where the research papers kick in that talk about ill-posed problems, and much fancier algorithms.

How does entropy relate? I believe you have a misunderstanding of entropy. If you give me an exact state of any physical system (classical or quantum -- but I'll ignore quantum for the purposes of this discussion, since it will only complicate things), I can simulate it backwards to get the state at any point in the past.

And what the ill-posedness of the backward heat equation, negative diffusion processes, and image deconvolution tells you is that, in general, you may not be able to find solutions to simulate backwards in time.

You were about to get to how entropy fit in.... I'm still waiting. So far, the best I've gotten is to use it as a fancy word to mean 'time.'

1) The operation of masking the image with a fixed frame (e.g., a rectangular windowing) is equivalent to convolution in the frequency domain. Since the Fourier transform of a rectangle has infinite support, some of the variance below the Nyquist limit must be spread beyond the Nyquist limit. Do you agree?

I am not sure. I don't believe I agree, but I think I may be misunderstanding what you're trying to say. There are several things which I'm finding ambiguous (e.g. What operation in the optical chain corresponds to 'masking the image with a fixed frame?'). Can you write out the above being slightly more verbose/specific?

The lens blurs the original image; let's assume for simplicity it's exactly a convolution with a fixed PSF. Since the frame you capture is finite, some information that should have been within the frame has now been blurred outside the frame and is lost. Also, sources that should have been completely outside the frame can contaminate the captured frame.

In the spatial domain, your captured image (I) is the convolution of the PSF (P) and the original image (O), windowed by a rectangular window (R):

I = R . (O * P).

I agree with the mathematics. I am not trying to correct all errors caused by the lens -- just the blur -- not the contrast. Loosely defined, sharpness can be thought of as (depending on who you ask):

  • The 10%-90% rise time of the PSF.
  • The -3 dB point of the lens' MTF plot
  • Etc.

For any modern lens, the part of the PSF responsible for the blur is just a few pixels. I can take an inverse of the PSF and approximate it as a fairly small FIR filter. After filtering the image, I can crop the few pixels around the edges -- little enough as to not be noticeable.
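
A minimal sketch of that procedure (a toy 3-tap PSF with no spectral zeros; a real correction would be designed per channel and per field position). Fit a short FIR approximation to the inverse by least squares, filter, and crop the contaminated border:

import numpy as np

P = np.array([0.2, 0.6, 0.2])                    # small toy PSF (no zeros in its spectrum)

# Least-squares FIR inverse: find g such that P convolved with g is as close
# as possible to a centered unit impulse.
m = 15                                           # inverse filter length ("a few pixels")
A = np.zeros((len(P) + m - 1, m))
for j in range(m):
    A[j:j + len(P), j] = P                       # columns are shifted copies of P
d = np.zeros(len(P) + m - 1)
d[(len(P) + m - 1) // 2] = 1.0
g, *_ = np.linalg.lstsq(A, d, rcond=None)

x = np.random.default_rng(2).random(100)         # arbitrary "image" row
restored = np.convolve(np.convolve(x, P, mode='same'), g, mode='same')

core = slice(m, 100 - m)                         # crop the filter-length border
print(np.max(np.abs(restored[core] - x[core])))  # small residual away from the edges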

As an aside, the PSF for the Hubble Space Telescope was not particularly small.

True that. I would not be making the same claims in an astronomy forum. There, the related problems do, indeed, run into numerical instability.

Furthermore, no matter how "small" (in physical units) your PSF is, it will become relatively "large" with increasing resolution.

True too. I might stop making these claims in a decade, when we see 200MP sensors in our cameras.

To sort of repeat the argument made in my previous post, it's meaningless to say you've "undone" the lens blur when you've also added an unspecified amount of noise which may or may not obscure the gains of your inversion.

Key is 'may or may not.' If your base image is at a reasonable ISO, you're using a lens made in the past decade that cost more than $100 and a modern sensor with a reasonable number of bits, and generally the type of equipment and settings you'd see people in this forum using for the vast majority of their photos, this falls clearly in the 'may not' camp.

Reply   Reply with quote   Complain
olliess
Contributing Member • Posts: 874
Re: The whole question of lens sharpness...
In reply to Alphoid, Jun 24, 2013

Alphoid wrote:

For the purposes of this discussion, for signal/image/audio processing, my expertise goes well beyond everything we've talked about.

Well, you obviously have some expertise, which is why I found it so confusing that some terms and examples I was using seemed unfamiliar to you. I was looking for some common "language" in which we could communicate.

I can make statements with very high levels of confidence there. I have a basic level of knowledge about thermodynamics ... I have a deep understanding of concepts like entropy, but I do not have the applied experience of, e.g., numerically trying to invert the heat equation.

You don't need to try inverting numerically at all. You just need to look carefully at the analytic solution and see that the Fourier integral blows up for negative time. You can see the exact same behavior (and I know that you're familiar with this) with solutions to the deconvolution problem.
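
For concreteness, here is that blowup as pure Fourier bookkeeping (unitless toy numbers, no PDE solver needed). Running the heat kernel backwards multiplies mode k by exp(+k^2 t), so a tiny error in a measured high mode overwhelms the reconstruction even for small t:

import numpy as np

t = 0.01                             # even a short diffusion time
k = np.arange(1, 65)                 # Fourier mode numbers

forward = np.exp(-k**2 * t)          # forward heat equation: mode k is damped by this
backward = np.exp(+k**2 * t)         # the backward solution must multiply by this

eps = 1e-6                           # tiny error in the measured amplitude of mode k=64
print(backward[-1] * eps)            # ~6e11, since exp(64^2 * 0.01) = exp(40.96) ~ 6e17
# The analogous question for deconvolution is how fast 1/P grows within the
# passband -- which is exactly what is being argued about here.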

Also keep in mind that all along what I have been objecting to is your claim that lens blur can, in theory, be undone perfectly.

You can assume a level of knowledge about information theory somewhere in between the two, and sufficient for this discussion (I'd actually love to see you try to make the information theory entropy argument you alluded to; I believe I could take that one apart).

My thoughts re: information entropy were pretty trivial, so I'll hold off on them for the time being. If you want to discuss them in detail, we can continue this via private discussion.

Yourself?

"Sufficient."

The level of 'blur' caused by solving forward a heat equation is much greater than you would find in any sane optical system.

You can make the heat diffusion arbitrarily "small" by choosing a smaller time t > 0, and it doesn't really affect the blowup of the high modes mentioned above.

The image deconvolution problem is not ill-posed. It becomes ill-posed in a few circumstances:

  • Unknown PSF (blind de-convolution)
  • Extreme levels of blur (band-limited PSF, zeros in the PSF, or nearly band-limited).

Ill-posed has to do with the fundamental behavior of the system, not so much our knowledge about the PSF.

In addition, it becomes impractical if there are high levels of noise, either from the sensor or from quantization. Here, the proper term isn't 'ill-posed,' but that's a technicality.

Actually, "ill-posed" is the technical term here.

If you give me an image taken with a modern camera sensor and a modern lens at ISO 100, you will not have too much noise or quantization, and the level of blur will not be such that you are simply missing information.

You are missing information. That's inevitable. With increasing wavenumber your signal-to-noise ratio is decreasing and the conditional entropy is increasing, hence you are receiving progressively less actual information. (There, that's one of my information entropy arguments.)
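
A back-of-the-envelope version of that argument (a standard Gaussian-channel estimate; the MTF shape and the SNR figure are illustrative, not measured). The information recoverable at spatial frequency f is about 0.5*log2(1 + SNR(f)) bits per sample, and SNR(f) falls with the square of the MTF:

import numpy as np

f = np.linspace(0.0, 0.5, 6)                   # spatial frequency, cycles/px
mtf = np.exp(-2.0 * (np.pi * 1.5 * f)**2)      # illustrative Gaussian MTF (sigma = 1.5 px)
snr0 = 100.0                                   # SNR at DC ("low ISO", illustrative)

snr = snr0 * mtf**2                            # signal power falls as MTF^2; noise is flat
bits = 0.5 * np.log2(1.0 + snr)                # Gaussian-channel information per sample

for fi, b in zip(f, bits):
    print("f = %.1f cyc/px -> %.3f bits" % (fi, b))
# The highest-frequency bins carry essentially zero information; "inverting the
# blur" there rescales noise rather than recovering signal.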

If you give me an image where you're trying to correct for unknown atmospheric blur, or a $37 million space telescope with an incorrectly ground lens, a Holga, or an ISO6400 image, all bets are off. That's where the research papers kick in that talk about ill-posed problems, and much fancier algorithms.

The research papers usually start off by mentioning how the problem is well-known to be ill-posed, and then go on to describe how the authors plan to make a go of it anyway.

You were about to get to how entropy fit in.... I'm still waiting. So far, the best I've gotten is to use it as a fancy word to mean 'time.'

There was the heat equation example, which you don't seem to follow/accept. Then there is the conditional entropy argument. And one more, but this is already getting too long, so it'll have to wait for later.

Reply   Reply with quote   Complain