olliess wrote:

Alphoid wrote:

In theory, given a perfect model of the lens, I can completely undo any blur caused by the lens.

I took this statement to mean just what it said, namely that you could undo any blur given a perfect knowledge about the blur function of the lens. To me, that means lens defects, diffraction, defocus, and anything else that modifies a point in the image.

My comment was, specifically, about any blur caused by the lens. Diffraction, limited depth of field, the antialiasing filter, resolution limits of the sensor, etc. are not caused by the lens; you would have those even with an ideal lens. Some of these are removable, and some are not. Chromatic aberration, spherical aberrations, etc. are caused by the lens.

Let's leave things which transform the focal plane (tilt, field curvature, and the like) out of the discussion for now. One cannot correct for those, but it's not clear whether they have any relevance to sharpness in photography (subjects are rarely flat).

If you begin by assuming the PSF is linear and translation invariant

I assume it is linear, but sadly, not translation invariant. I do not rely on translation invariance for anything I say. It complicates the modeling and inversion (quite significantly), but does not change what's possible.

call it P, then the observed version of the original image, O(x,y), is the modified image, I(x,y), where

I(x,y) = (P(s) * O)(x,y) + N(x,y),

which includes a functional dependence of P on s, the distance to subject.

We can ignore distance to subject for now, and just worry about P at the focal distance (as explained in an earlier post).

There is also additive noise, N(x,y), which means you are not going to be able to invert perfectly by just applying the inverse of P(s) to I(x,y) - N(x,y).

You will get back, in your notation, O(x,y) + (P^-1 * N)(x,y). This is the exact original image, plus a transformed version of the noise. In practice, you end up amplifying high-frequency noise, but with a typical lens, only to an extent that does not matter at low ISO. Note that this, specifically, undoes any blur caused by the lens, as in my original claim.
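A quick 1-D numerical sketch of this point (with an assumed Gaussian stand-in PSF, not a real lens model): dividing by the transfer function recovers the original exactly in the noise-free case, and with noise you get back exactly the original plus an amplified, transformed noise term.

```python
import numpy as np

rng = np.random.default_rng(0)

# 1-D stand-in for an image row; the same argument carries over to 2-D.
n = 256
x = np.arange(n)
original = np.sin(2 * np.pi * 5 * x / n) + 0.5 * np.sin(2 * np.pi * 20 * x / n)

# Assumed Gaussian PSF (sigma = 1 sample) as a toy blur kernel.
psf = np.exp(-0.5 * (x - n // 2) ** 2)
psf /= psf.sum()
otf = np.fft.fft(np.fft.ifftshift(psf))  # transfer function of the blur

blurred = np.real(np.fft.ifft(np.fft.fft(original) * otf))

# Noise-free case: inverse filtering recovers the original exactly,
# because this OTF has no zeros.
recovered = np.real(np.fft.ifft(np.fft.fft(blurred) / otf))

# With additive noise: the same inverse returns original + P^-1 * N,
# and the noise term grows wherever |OTF| is small (high frequencies).
noise = 1e-3 * rng.standard_normal(n)
noisy_recovered = np.real(np.fft.ifft(np.fft.fft(blurred + noise) / otf))
amplified = noisy_recovered - original  # this is (approximately) P^-1 * N
print(noise.std(), amplified.std())  # the transformed noise is much larger
```

The blur is undone either way; the only cost is the transformed noise term, which is exactly the trade-off described above.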

Even if you take away the noise, you're still left with something that looks just like the 2-D heat equation. Thus if you are guaranteed a unique inverse for the blur problem, it seems to imply that you are also guaranteed solutions to the backward heat equation.

This is not correct. The systems are sort-of-similar in that they sort-of-blur things, but the math is different (hence, my counterexample). If one is not invertible, that does not guarantee that the other is not.

That said, I do believe (and the key word is believe -- we're slightly outside my domain of expertise for the heat equation) that the 2-D heat equation is invertible (and a quick Google search brings up papers which believe the same).
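For what it's worth, one way to see that "blurs" differ mathematically in their invertibility (a toy 1-D sketch, with kernels I picked for illustration): the Gaussian kernel of the heat equation has a transfer function that is strictly positive, so no frequency is completely lost, whereas a box kernel has exact zeros in its transfer function, so those frequencies are unrecoverable.

```python
import numpy as np

n = 256
x = np.arange(n)

# Heat-equation-style blur: Gaussian kernel. Its transfer function is
# itself Gaussian, hence strictly positive -- invertible in principle.
gauss = np.exp(-0.5 * ((x - n // 2) / 1.5) ** 2)
gauss /= gauss.sum()
gauss_otf = np.abs(np.fft.fft(np.fft.ifftshift(gauss)))

# Box blur (8 samples wide): its transfer function is a Dirichlet/sinc
# kernel with exact zeros -- those frequencies are annihilated.
box = np.zeros(n)
box[n // 2 - 4 : n // 2 + 4] = 1.0 / 8.0
box_otf = np.abs(np.fft.fft(np.fft.ifftshift(box)))

print(gauss_otf.min())  # strictly positive: no frequency fully lost
print(box_otf.min())    # essentially zero: some frequencies destroyed
```

So whether a blur is invertible depends on the kernel, which is why a conclusion about one system doesn't transfer to the other.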

Hence my comment about entropy.

Please explain this argument. How does entropy relate? I believe you have a misunderstanding of entropy. If you give me an exact state of any physical system (classical or quantum -- but I'll ignore quantum for the purposes of this discussion, since it will only complicate things), I can simulate it backwards to get the state at any point in the past. Thermodynamics just tells me that I cannot physically bring it back to that state without increasing entropy elsewhere. Total entropy in the world increases.

Now moving on to your modified claim:

That was my original claim; you just misunderstood it.

Convolving an image with a PSF means, at least in theory:

1) in the frequency domain, some of the spectrum gets spread beyond the Nyquist limit

This is incorrect.

I see I was completely unclear, so let me try again:

1) The operation of masking the image with a fixed frame (e.g., a rectangular window) is equivalent to convolution in the frequency domain. Since the Fourier transform of a rectangle has infinite support, some of the variance below the Nyquist limit must be spread beyond the Nyquist limit. Do you agree?

I am not sure. I don't believe I agree, but I think I may be misunderstanding what you're trying to say. There are several things I'm finding ambiguous (e.g., what operation in the optical chain corresponds to 'masking the image with a fixed frame'?). Can you write the above out slightly more verbosely/specifically?
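If point 1 is about spectral leakage from truncating to a finite frame, that effect is easy to sketch numerically (the 10.5-cycle tone and window length below are arbitrary illustrative choices, not anything from the thread):

```python
import numpy as np

n = 256
t = np.arange(4 * n)

# A pure sinusoid whose frequency is NOT an integer number of cycles
# over the frame: 10.5 cycles per n samples.
full = np.sin(2 * np.pi * 10.5 * t / n)

# 'Masking with a fixed frame' = multiplying by a rectangular window,
# i.e. keeping only the first n samples.
windowed = full[:n]
spectrum = np.abs(np.fft.rfft(windowed))

# The energy is concentrated near bins 10-11, but because the window's
# transform (a Dirichlet/sinc kernel) has infinite support, some energy
# leaks into every bin, including ones far from the true frequency.
peak = spectrum.max()
far_away = spectrum[60]  # a bin far from the true frequency
print(peak, far_away)    # far_away is nonzero: leaked energy
```

Whether this corresponds to any actual step in the optical chain is, of course, exactly the question asked above.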

2) in the spatial domain, some of the image gets spread beyond the edge of the frame

This is correct, but not significant. The PSF is small. This would only matter if the PSF was a substantial portion of the image.

The PSF may or may not be small in extent. In theory, even the PSF due only to diffraction has infinite support, although the magnitude is small outside of a small extent.

The PSF due to the lens is small, at least as it relates to sharpness. Places where it has large extent but small magnitude contribute to contrast (rather than sharpness).

G(H(image)+noise)=image+G(noise)

For a sharpening filter, G > 1 at high frequencies, so noise increases. In practice, this doesn't matter much at low ISO.

H and G are linear operators. H(image) + noise is not a linear operator. G is the inverse of H but not of H + noise, so G is not the inverse solution of the problem, right?

I think we're arguing terminology and what we consider to be 'the system.' You get exactly what I said -- your original image plus a transformed version of the noise. This undoes any blur caused by the lens. Do you disagree?
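The noise-gain claim above is straightforward to check numerically (again with an assumed Gaussian H as a stand-in for the lens blur): the inverse filter G = 1/H has gain near 1 at low frequencies and gain far above 1 at high frequencies, which is precisely why white noise comes out with boosted high-frequency content.

```python
import numpy as np

n = 256
x = np.arange(n)

# Assumed Gaussian blur H (sigma = 1.5 samples), normalized to unit DC gain.
psf = np.exp(-0.5 * ((x - n // 2) / 1.5) ** 2)
psf /= psf.sum()
H = np.fft.fft(np.fft.ifftshift(psf)).real  # symmetric PSF -> real transfer fn

# The inverse (sharpening) filter: G = 1/H.
G = 1.0 / H
freqs = np.fft.fftfreq(n)  # cycles per sample, in [-0.5, 0.5)
low = np.abs(G[np.abs(freqs) < 0.05]).mean()
high = np.abs(G[np.abs(freqs) > 0.4]).mean()
print(low, high)  # G is ~1 at low frequencies, >>1 at high frequencies
```

Applied to I = H(image) + noise, this G yields image + G(noise): the lens blur itself is fully undone, and the only residue is the reshaped noise term, consistent with both sides of the exchange above.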