# The whole question of lens sharpness...

sjgcit wrote:

This isn't true. You can never fully reverse the many optical effects. You'd be breaking a number of well established physical laws if you could.

Please name one such law.

Even if you attempted this to the limits of physics you'd probably be frustrated by sample variations in the lenses.

True. This is the difference between theory and practice. In theory, given a perfect model of the lens, I can completely undo any blur caused by the lens (although increasing high-frequency noise in the process -- which matters a lot at ISO1600, but not so much at ISO100).

In practice, if the model is a bit wrong, you run into numerical instability if you try to take it too far.

Alphoid wrote:

sjgcit wrote:

This isn't true. You can never fully reverse the many optical effects. You'd be breaking a number of well established physical laws if you could.

Please name one such law.

The law of entropy comes to mind.

Even if you attempted this to the limits of physics you'd probably be frustrated by sample variations in the lenses.

True. This is the difference between theory and practice. In theory, given a perfect model of the lens, I can completely undo any blur caused by the lens (although increasing high-frequency noise in the process -- which matters a lot at ISO1600, but not so much at ISO100).

I am genuinely curious, what theory guarantees that you can determine a unique inverse for a diffusion-type process with noise?

olliess wrote:

The law of entropy comes to mind.

There is no such thing as 'the law of entropy.' There is a quantity called entropy that is a property of a thermodynamic system. There are a few laws about it (you may be thinking of the second law of thermodynamics?), but it is difficult to see how they would apply.

I am genuinely curious, what theory guarantees that you can determine a unique inverse for a diffusion-type process with noise?

There is no such thing as a 'diffusion-type process.' The blur of a lens is simply a convolution with the PSF of the lens. If the PSF is exactly known, it can be exactly inverted with a deconvolution. Otherwise, it can be inverted to the degree it is known.
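To illustrate the noise-free claim, here is a minimal sketch (Python/NumPy; the 1-D "image" and the small kernel standing in for a PSF are made up for illustration): if the PSF is exactly known and its transfer function has no zeros, division in the frequency domain undoes the convolution exactly.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D "image" and a small, well-conditioned stand-in PSF.
image = rng.random(64)
psf = np.zeros(64)
psf[:3] = [0.5, 0.3, 0.2]

# Blur = circular convolution with the PSF, done in the frequency domain.
H = np.fft.fft(psf)
blurred = np.real(np.fft.ifft(np.fft.fft(image) * H))

# Deconvolution = division by the PSF's transfer function.
recovered = np.real(np.fft.ifft(np.fft.fft(blurred) / H))

print(np.allclose(recovered, image))  # exact recovery in the noise-free case
```

With noise present, or with zeros in the transfer function, this naive division breaks down, which is where the practical caveats come in.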

Alphoid wrote:

olliess wrote:

The law of entropy comes to mind.

There is no such thing as 'the law of entropy.' There is a quantity called entropy that is a property of a thermodynamic system. There are a few laws about it (you may be thinking of the second law of thermodynamics?),

Yes, I was referring to the second law of thermodynamics and its analogue in information theory. It's sort of like referring to Newton's first law as the law of inertia.

but it is difficult to see how they would apply.

Compare the heat/diffusion kernel to the (model) blur function of a lens.

There is no such thing as a 'diffusion-type process.' The blur of a lens is simply a convolution with the PSF of the lens. If the PSF is exactly known, it can be exactly inverted with a deconvolution.

First off, the problems of finite bandwidth, limited image extent, and noise will limit your ability to invert. And even if you assume the PSF of the lens is known perfectly as a function of distance to the subject, it seems a stretch to assume that you know the distance to every point in the unblurred image.

Otherwise, it can be inverted to the degree it is known.

That's a far cry from a theoretically-guaranteed inversion.

Thanks.

olliess wrote:

Alphoid wrote:

olliess wrote:

The law of entropy comes to mind.

There is no such thing as 'the law of entropy.' There is a quantity called entropy that is a property of a thermodynamic system. There are a few laws about it (you may be thinking of the second law of thermodynamics?),

Yes, I was referring to the second law of thermodynamics and its analogue in information theory. It's sort of like referring to Newton's first law as the law of inertia.

but it is difficult to see how they would apply.

Compare the heat/diffusion kernel to the (model) blur function of a lens.

Could you please state (or reference using URLs) both specific models of which you speak ?

There is no such thing as a 'diffusion-type process.' The blur of a lens is simply a convolution with the PSF of the lens. If the PSF is exactly known, it can be exactly inverted with a deconvolution.

So what is a "diffusion-type process" ?

First off, the problems of finite bandwidth, limited image extent, and noise will limit your ability to invert. And even if you assume the PSF of the lens is known perfectly as a function of distance to the subject, it seems a stretch to assume that you know the distance to every point in the unblurred image.

Otherwise, it can be inverted to the degree it is known.

That's a far cry from a theoretically-guaranteed inversion.

Based upon such a criterion, the Hubble Space Telescope output would have remained unprocessed.

Thanks.

Detail Man wrote:

olliess wrote:

Compare the heat/diffusion kernel to the (model) blur function of a lens.

Could you please state (or reference using URLs) both specific models of which you speak ?

A basic textbook on differential equations would be relevant here. If wiki counts you can look under "Fundamental Solutions" in Heat equation. You can also look at Fick's laws for diffusion. Then compare to the link given for Deconvolution.

There is no such thing as a 'diffusion-type process.' The blur of a lens is simply a convolution with the PSF of the lens. If the PSF is exactly known, it can be exactly inverted with a deconvolution.

So what is a "diffusion-type process" ?

One that resembles diffusion in a mathematical sense.

First off, the problems of finite bandwidth, limited image extent, and noise will limit your ability to invert. And even if you assume the PSF of the lens is known perfectly as a function of distance to the subject, it seems a stretch to assume that you know the distance to every point in the unblurred image.

Otherwise, it can be inverted to the degree it is known.

That's a far cry from a theoretically-guaranteed inversion.

Based upon such a criterion, the Hubble Space Telescope output would have remained unprocessed.

The HST output was greatly improved by deconvolution processing. It did not achieve perfection, nor would one expect it to, on purely theoretical grounds. Also consider the distance to subject for astronomical objects.

olliess wrote:

Detail Man wrote:

olliess wrote:

Compare the heat/diffusion kernel to the (model) blur function of a lens.

Could you please state (or reference using URLs) both specific models of which you speak ?

A basic textbook on differential equations would be relevant here. If wiki counts you can look under "Fundamental Solutions" in Heat equation. You can also look at Fick's laws for diffusion. Then compare to the link given for Deconvolution.

Thanks for the links. But what about the "(model) blur function of a lens" ?

So what is a "diffusion-type process" ?

One that resembles diffusion in a mathematical sense.

Otherwise, it can be inverted to the degree it is known.

That's a far cry from a theoretically-guaranteed inversion.

Based upon such a criterion, the Hubble Space Telescope output would have remained unprocessed.

The HST output was greatly improved by deconvolution processing. It did not achieve perfection, nor would one expect it to, on purely theoretical grounds. Also consider the distance to subject for astronomical objects.

What is the specific significance of "distance to subject" ?

olliess wrote:

but it is difficult to see how they would apply.

Compare the heat/diffusion kernel to the (model) blur function of a lens.

I don't think those laws say what you think they say. You're using (or occasionally inventing) big words without understanding them.

The only bandwidth limit is the Nyquist frequency of your sensor. Limited image extent doesn't matter. You don't need to know the distance -- you only need to know the PSF at the focal distance. You're trying to improve lens sharpness, not increase DOF. Noise gets amplified -- I mentioned that -- but it doesn't limit your ability to invert. It's a linear process.

The practical limits are:

- The PSF has to be very well known.
- The image should be at low ISO (otherwise, you'll clean up the image, but also boost the noise to an unacceptable extent).
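Both practical limits can be seen in a small sketch (Python/NumPy; the 1-D scene, the Gaussian stand-in PSF, and the noise level are all made up): a naive inverse filter amplifies noise wildly wherever the PSF's transfer function is small, while a Wiener-style regularized inverse trades a little sharpness for stability.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 256

# Made-up 1-D scene and a Gaussian stand-in PSF (wrapped for circular FFT).
scene = np.sin(np.linspace(0, 8 * np.pi, n))
x = np.minimum(np.arange(n), n - np.arange(n))
psf = np.exp(-x**2 / (2 * 2.0**2))
psf /= psf.sum()
H = np.fft.fft(psf)

blurred = np.real(np.fft.ifft(np.fft.fft(scene) * H))
noisy = blurred + 0.05 * rng.standard_normal(n)  # sensor noise (think high ISO)

# Naive inverse filter: divides by H even where |H| is tiny -> noise blows up.
naive = np.real(np.fft.ifft(np.fft.fft(noisy) / H))

# Wiener-style inverse: damps the division where |H|^2 is small next to noise.
k = 0.01  # assumed noise-to-signal ratio (hand-tuned here)
wiener = np.real(np.fft.ifft(np.fft.fft(noisy) * np.conj(H) / (np.abs(H)**2 + k)))

rms = lambda e: np.sqrt(np.mean(e**2))
print(rms(naive - scene), rms(wiener - scene))  # naive error is far larger
```

The regularization constant `k` is a stand-in for a real noise model; the lower the ISO, the smaller `k` can be and the closer you get to the ideal inversion.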

Otherwise, it can be inverted to the degree it is known.

That's a far cry from a theoretically-guaranteed inversion.

It's exactly what I wrote in my original post.

olliess wrote:

Detail Man wrote:

olliess wrote:

Compare the heat/diffusion kernel to the (model) blur function of a lens.

Could you please state (or reference using URLs) both specific models of which you speak ?

A basic textbook on differential equations would be relevant here. If wiki counts you can look under "Fundamental Solutions" in Heat equation. You can also look at Fick's laws for diffusion. Then compare to the link given for Deconvolution.

So what is a "diffusion-type process" ?

One that resembles diffusion in a mathematical sense.

If you believe lens blur "resembles diffusion in a mathematical sense," can you please give a diffusion process, as per the links you gave, which has a PSF which is a solid octagon? It is trivial to design an optical system with this (I'll use your made-up terminology here for humor value) "blur function."

Alphoid wrote:

olliess wrote:

but it is difficult to see how they would apply.

Compare the heat/diffusion kernel to the (model) blur function of a lens.

I don't think those laws say what you think they say. You're using (or occasionally inventing) big words without understanding them.

Look at how a 2-d heat kernel describes the spreading (by heat- or Fickian diffusion) of a point source. Now compare this to the blur kernel (PSF) of a lens.

If you're still not understanding how these are related, then you probably shouldn't go around accusing people of inventing big words.

The only bandwidth limit is the Nyquist frequency of your sensor. Limited image extent doesn't matter.

Convolving an image with a PSF means, at least in theory:

1) in the frequency domain, some of the spectrum gets spread beyond the Nyquist limit

2) in the spatial domain, some of the image gets spread beyond the edge of the frame

You're trying to improve lens sharpness, not increase DOF.

What you originally said was:

In theory, given a perfect model of the lens, I can completely undo any blur caused by the lens

So are you now saying you are just trying to "unblur" in the plane of focus, and just accept any side effect for all other distances? (For example, if the PSF of the lens has a different shape at other focal distances, you are now inverting for the wrong PSF everywhere else in the image.)

Noise gets amplified -- I mentioned that -- but it doesn't limit your ability to invert. It's a linear process.

Once you have noise in the problem then it is no longer a linear problem, right?

That's a far cry from a theoretically-guaranteed inversion.

It's exactly what I wrote in my original post.

See above for what you wrote in your original post:

Alphoid wrote:

If you believe lens blur "resembles diffusion in a mathematical sense," can you please give a diffusion process, as per the links you gave, which has a PSF which is a solid octagon? It is trivial to design an optical system with this (I'll use your made-up terminology here for humor value) "blur function."

It isn't the precise shape that is the resemblance. It's the existence of a kernel, which in the heat equation has time-dependent radial spreading, and for the lens has focal-distance-dependent change (not necessarily just spreading).

olliess wrote:

Alphoid wrote:

If you believe lens blur "resembles diffusion in a mathematical sense," can you please give a diffusion process, as per the links you gave, which has a PSF which is a solid octagon? It is trivial to design an optical system with this (I'll use your made-up terminology here for humor value) "blur function."

Look at how a 2-d heat kernel describes the spreading (by heat- or Fickian diffusion) of a point source. Now compare this to the blur kernel (PSF) of a lens.

(Again) what about the "(model) blur function of a lens" ? How is your model mathematically characterized ? What independent variables affect it, and how do they specifically affect it ?

It isn't the precise shape that is the resemblance. It's the existence of a kernel, which in the heat equation has time-dependent radial spreading, and for the lens has focal-distance-dependent change (not necessarily just spreading).

The relevant spatial domain function is across the surface of the film/sensor plane, right ? The effects of de-focus represent additional factors affecting that 2-dimensional spatial domain function, right ? The commonality that you allude to relates to the fact that convolution(s) are required in order to determine a (linear, constant coefficient) system output resulting from a forcing function, right ?

olliess wrote:

Alphoid wrote:

olliess wrote:

but it is difficult to see how they would apply.

Compare the heat/diffusion kernel to the (model) blur function of a lens.

I don't think those laws say what you think they say. You're using (or occasionally inventing) big words without understanding them.

Look at how a 2-d heat kernel describes the spreading (by heat- or Fickian diffusion) of a point source. Now compare this to the blur kernel (PSF) of a lens.

If you're still not understanding how these are related, then you probably shouldn't go around accusing people of inventing big words.

It's obvious how they're related. They're also not identical. It's a poor way to model the system; it would give bad intuition and, in many cases, incorrect results. There's a difference between math looking kind-of-similar and being identical. The reason I gave a counterexample was so that you could discover how and why they differ for yourself. You have a misconception (several, actually), and demonstrating to someone that they have cognitive dissonance is a good forcing function for helping overcome misconceptions.

The more important thing you're missing is what entropy is, how it works, and how and why it is unrelated. I'd encourage you to try to actually write down (in math, not in big words) why you think entropy applies. You'll quickly run into a contradiction. Here, I was trying to drive you to make concrete statements, so I could also force a contradiction. You've been unwilling to do that.

The only bandwidth limit is the Nyquist frequency of your sensor. Limited image extent doesn't matter.

Convolving an image with a PSF means, at least in theory:

1) in the frequency domain, some of the spectrum gets spread beyond the Nyquist limit

This is incorrect.

2) in the spatial domain, some of the image gets spread beyond the edge of the frame

This is correct, but not significant. The PSF is small. This would only matter if the PSF was a substantial portion of the image.

You're trying to improve lens sharpness, not increase DOF.

What you originally said was:

In theory, given a perfect model of the lens, I can completely undo any blur caused by the lens

So are you now saying you are just trying to "unblur" in the plane of focus, and just accept any side effect for all other distances? (For example, if the PSF of the lens has a different shape at other focal distances, you are now inverting for the wrong PSF everywhere else in the image.)

Your goal is to have an image as would come back from an ideal lens. An ideal lens would have a perfectly sharp focal plane, but would still have limited DOF. For any sane optical design, any OOF areas will have a stronger low-pass filter at all frequencies than the areas in the focal plane. Hence, the side-effects at all other distances will not be significant or detrimental (slightly different bokeh).
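A small sketch may make this concrete (Python/NumPy; the scene and the two Gaussian stand-in PSFs are made up): deconvolving with the focal-plane kernel exactly restores the in-focus region, while a region blurred by a wider out-of-focus kernel just stays soft, because the residual is itself a low-pass filter.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 256
scene = rng.random(n)

def gaussian_blur(x, sigma):
    """Circularly blur x with a wrapped Gaussian; return result and kernel DFT."""
    d = np.minimum(np.arange(n), n - np.arange(n))
    psf = np.exp(-d**2 / (2 * sigma**2))
    psf /= psf.sum()
    H = np.fft.fft(psf)
    return np.real(np.fft.ifft(np.fft.fft(x) * H)), H

in_focus, H_focus = gaussian_blur(scene, 1.0)  # assumed narrow focal-plane PSF
oof, _ = gaussian_blur(scene, 3.0)             # assumed wider out-of-focus PSF

# Deconvolve both regions with the focal-plane PSF only.
sharp = np.real(np.fft.ifft(np.fft.fft(in_focus) / H_focus))       # recovered
still_soft = np.real(np.fft.ifft(np.fft.fft(oof) / H_focus))       # low-passed

print(np.allclose(sharp, scene))
```

Because the out-of-focus transfer function is below the focal-plane one at every frequency, the mismatch never amplifies; it only leaves residual blur, consistent with the "slightly different bokeh" point above.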

Noise gets amplified -- I mentioned that -- but it doesn't limit your ability to invert. It's a linear process.

Once you have noise in the problem then it is no longer a linear problem, right?

Incorrect. H(S) is the PSF of the lens. G(S) is the inverse of the PSF. Both are linear. Your input is:

H(image)+noise

Your output is:

G(H(image)+noise)=image+G(noise)

For a sharpening filter, G>1 at high frequencies, so noise increases. In practice, this doesn't matter much at low ISO.

Most sharpening artifacts come in because your photo-editing program uses an unmatched high-pass filter for G, rather than a model of the PSF of your particular optical system.
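The linearity identity above can be checked numerically (a sketch in Python/NumPy; the 1-D image, kernel, and noise level are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 128

image = rng.random(n)
noise = 0.01 * rng.standard_normal(n)

# H: circular blur with a small stand-in PSF; G: its exact inverse. Both linear.
psf = np.zeros(n)
psf[:3] = [0.5, 0.3, 0.2]
Hf = np.fft.fft(psf)

def H(x):
    return np.real(np.fft.ifft(np.fft.fft(x) * Hf))

def G(x):
    return np.real(np.fft.ifft(np.fft.fft(x) / Hf))

# G(H(image) + noise) = image + G(noise), term by term, because G is linear.
lhs = G(H(image) + noise)
rhs = image + G(noise)
print(np.allclose(lhs, rhs))  # True
```

The noise term passes straight through: it is not removed, only reshaped by G, which is exactly the high-frequency amplification described above.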

This thread is frustrating. Sharpness, by itself, doesn't create a good photograph.

There are two separate ideas being discussed and being horribly confused. One is 'imaging science' - the science and technology based capability to accurately capture an image. The other is photography defined as the art of evoking an emotional or intellectual response from a captured image.

Imaging enables photography, so sharper lenses broadens the range of possible photographs. That's good news for photographers interested in styles involving highly detailed images, like some styles of landscape photography.

But sharpness is just one quality of an image. A single blob of grey wouldn't be much to look at, but great, powerful images can be made with pretty minimal levels of sharpness. Were any of Robert Capa's shots ever very sharp? Which photographs have made the strongest impression on you? Were they sharp by current state-of-the-art standards?

Digital media isn't that sharp for the most part. Facebook, phones, iPads, most monitors, HD TV, movies, all do a real number on image resolution. Printing, of course, is a different matter.

So suit yourself with regard to sharpness. I suppose it comes with the territory in a camera gear forum, but to my mind this narrow-minded focus on the technology of sharpness misses the core point in the OP's question, which is the utility of sharpness to photographers.

Imho, sharpness is a big deal for some, not so much for others. Depends on the type of photograph you're trying to create. Given good craftsmanship, today's lenses generally are good enough for most styles of photography.

Jeff wrote:

This thread is frustrating. Sharpness, by itself, doesn't create a good photograph.

There are two separate ideas being discussed and being horribly confused. One is 'imaging science' - the science and technology based capability to accurately capture an image. The other is photography defined as the art of evoking an emotional or intellectual response from a captured image.

Imaging enables photography, so sharper lenses broadens the range of possible photographs. That's good news for photographers interested in styles involving highly detailed images, like some styles of landscape photography.

But sharpness is just one quality of an image. A single blob of grey wouldn't be much to look at, but great, powerful images can be made with pretty minimal levels of sharpness. Were any of Robert Capa's shots ever very sharp? Which photographs have made the strongest impression on you? Were they sharp by current state-of-the-art standards?

Digital media isn't that sharp for the most part. Facebook, phones, iPads, most monitors, HD TV, movies, all do a real number on image resolution. Printing, of course, is a different matter.

So suit yourself with regard to sharpness. I suppose it comes with the territory in a camera gear forum, but to my mind this narrow-minded focus on the technology of sharpness misses the core point in the OP's question, which is the utility of sharpness to photographers.

Imho, sharpness is a big deal for some, not so much for others. Depends on the type of photograph you're trying to create. Given good craftsmanship, today's lenses generally are good enough for most styles of photography.

Well said. You differentiate between the very different subjects well. One may be partially reducible to quantitative metrics. The other is implicitly completely subjective, and largely quite mysterious. Discussion of neither of these two subjects is (or should be construed to somehow be) predictive or prescriptive of how visual imagery may possibly be received in the minds' eyes of others.

The mistake made more often than not in these gear-oriented forums is the belief that subjective aesthetics are somehow objectively determinable and transferable truths, and that technical reductionism could or should be proffered as a means to somehow execute and establish such fallacious arguments.

Detail Man wrote:

The mistake made more often than not in these gear-oriented forums is the belief that subjective aesthetics are somehow objectively determinable and transferable truths, and that technical reductionism could or should be proffered as a means to somehow execute and establish such fallacious arguments.

If what you meant was "get your head out of your a$$", then, yes, I agree.

Resolution of detail is limited by the number of pixels behind the lens. If I shoot a D800e with a $119 50D at f8, I'm going to get more of said fine detail by far than if I shot the same scene with a Zeiss or whatever at 12 or 24MP. The resulting file may or may not have the "pop" that a top-shelf nano-coated prime would have, but the detail capture will be there in spades. Most primes and a good many zooms outresolve a D800e pretty handily over most of the frame, rendering the lens resolution chart numbers above about 3200 lwph of minor importance in my view. It is the norm for most lenses to fall below the resolution of the body in the corners, which is of course more fodder for discussion and meaningful for landscapes, etc. The best lenses produce a picture with better microcontrast and more vivid color, which is really what people are obsessing about on the boards in regards to "sharpness."

This is an example, taken with the D800e and the humble "kit" zoom, the 24-85VR. At first glance, it looks like a pic that could have been taken with any camera. But download it and zoom in a little or a lot and see what I'm talking about. Over much of the frame, the lens has outresolved the "e" with no problem at all, right down to the clearly defined stairstep pixels on the "Oregon Food Bank" trailer at 400%. The lens pulls close to 4000 lwph center to mid.

As we scan out to the sides and corners, we start to see blur caused by the resolving power of the lens falling well below that of the body. Had I shot this with the Sigma 35 or other top prime, the pic would have been tack-sharp from edge to edge, as well as having picked up the kind of low-level contrast that top-quality optics can provide.

My advice to anyone on the fence would be to go for the highest MP body you can afford, then look for great cheap lenses to match if you're budget limited. They are out there in quantity, and you won't be disappointed. 4K monitors and big flat panels are here, and the more pixels the merrier.

Reilly Diefenbach wrote:

My advice to anyone on the fence would be to go for the highest MP body you can afford, then look for great cheap lenses to match if you're budget limited. They are out there in quantity, and you won't be disappointed. 4K monitors and big flat panels are here, and the more pixels the merrier.

I strongly disagree with this blanket advice. If maximizing detail is your thing -- I'm not judging, that's an interesting thing to do -- then fine.

But for a large range of situations, 12-16MP and a couple of lenses will give you plenty of resolution to work with, whether you're interested in printing or electronic media. Get the camera and a quality lens or two best suited to capturing the images you care about, whether that means street, landscape, or portrait.

Great photography need not be an expensive hobby. Look at the thread "I Almost got punched out" for what you can do with one body and a basic lens or two.

Jeff wrote:

Reilly Diefenbach wrote:

My advice to anyone on the fence would be to go for the highest MP body you can afford, then look for great cheap lenses to match if you're budget limited. They are out there in quantity, and you won't be disappointed. 4K monitors and big flat panels are here, and the more pixels the merrier.

I strongly disagree with this blanket advice. If maximizing detail is your thing -- I'm not judging, that's an interesting thing to do -- then fine.

It is indeed my thing, and a whole lot of others', and the only sane reason to lug around four or five pounds of camera and lens and spend lots of time and money on hardware and software :^) Also microcontrast and overall "pop," which gets better as you go up the chain. Even a casual snapshot takes on a whole new intensity. (Not my shot.)

I'm well aware that many a great pic has been produced at 12MP, but things look noticeably sharper at 24 or 36 at just about any magnification, including portraits and so forth. It's a paradigm shift, and when you do eventually get a high-MP camera, your old 12 and 16MP shots won't look as sharp to you in retrospect. Back a few years, it was widely intoned that 2 to 4 MP was all that was ever necessary. I wish I had all the shots I've taken at various incredible places at 6 or 8MP to do over.

But for a large range of situations, 12-16MP and a couple of lenses will give you plenty of resolution to work with, whether you're interested in printing or electronic media. Get the camera and a quality lens or two best suited to capturing the images you care about, whether that means street, landscape, or portrait.

Great photography need not be an expensive hobby. Look at the thread "I Almost got punched out" for what you can do with one body and a basic lens or two....

That is true. Compared to fishing and golf, the price of admission to high-res landscape and wildlife photography now stands at about $2000 using a D7100 and a couple of cheap primes; it's a handful of hay :^)

Alphoid wrote:

olliess wrote:

It's obvious how they're related. They're also not identical. It's a poor way to model the system; it would give bad intuition and, in many cases, incorrect results. There's a difference between math looking kind-of-similar and being identical.

Maybe it will clear up some of the confusion here if I go back to the source. My comments were in response to what you said originally, which was:

In theory, given a perfect model of the lens, I can completely undo any blur caused by the lens.

I took this statement to mean just what it said, namely that you could undo any blur given a perfect knowledge about the blur function of the lens. To me, that means lens defects, diffraction, defocus, and anything else that modifies a point in the image.

If you begin by assuming the PSF is linear and translation invariant, call it P, then the observed image, I(x,y), is the original image, O(x,y), convolved with the PSF, plus noise:

I(x,y) = (P(s) * O)(x,y) + N(x,y),

which includes a functional dependence of P on s, the distance to subject. There is also additive noise, N(x,y); since N is unknown, you are not going to be able to invert perfectly by just applying the inverse of P(s) to I(x,y) - N(x,y).

Even if you take away the noise, you're still left with something that looks just like the 2-d heat equation. Thus, if you are guaranteed a unique inverse for the blur problem, then it seems to imply that you are also guaranteed solutions to the backward heat equation. Hence my comment about entropy.

If you don't agree with what I've just said up to this point in my post, then I'd be happy to hear what you think are the underlying misconceptions.

Now moving on to your modified claim:

Your goal is to have an image as would come back from an ideal lens. An ideal lens would have a perfectly sharp focal plane, but would still have limited DOF.

This is a quite different assertion than the one I was responding to above. So if all you want to do is an inversion with a perfectly known kernel that is only determined for a fixed focus distance s_0, in the absence of noise, then yes, I agree it can be done. You can write the solution in the Fourier domain:

O^(u,v) = I^(u,v) / P^(u,v; s_0)

There are still some minor details, at least in theory:

Convolving an image with a PSF means, at least in theory:

1) in the frequency domain, some of the spectrum gets spread beyond the Nyquist limit

This is incorrect.

I see I was completely unclear, so let me try again:

1) The operation of masking the image with a fixed frame (e.g., a rectangular windowing) is equivalent to convolution in the frequency domain. Since the Fourier transform of a rectangle has infinite support, some of the variance below the Nyquist limit must be spread beyond the Nyquist limit. Do you agree?
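This windowing point can be demonstrated numerically (a sketch in Python/NumPy; the tone frequency is made up, deliberately chosen to fall between DFT bins): a pure sinusoid cut to a finite frame leaks energy into every frequency bin, because the rectangle's transform (a sinc/Dirichlet kernel) has infinite support.

```python
import numpy as np

n = 1024
t = np.arange(n)
f0 = 100.5 / n  # deliberately halfway between DFT bins
tone = np.sin(2 * np.pi * f0 * t)

# A finite frame = multiplication by a rectangular window, i.e. convolution
# with a slowly decaying Dirichlet (periodic sinc) kernel in frequency.
spectrum = np.abs(np.fft.rfft(tone))
leaked = spectrum / spectrum.max()

# Bins far from the tone still carry measurable energy from the sinc tails.
print(leaked[400])
```

The peak sits near bin 100, yet bin 400 is still clearly nonzero: energy from a single frequency has been spread across the whole band by the finite frame alone.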

2) in the spatial domain, some of the image gets spread beyond the edge of the frame

This is correct, but not significant. The PSF is small. This would only matter if the PSF was a substantial portion of the image.

The PSF may or may not be small in extent. In theory, even the PSF due only to diffraction has infinite support, although the magnitude is small outside of a small extent.

And at the end of it all, you're still left with the problem of noise.

Once you have noise in the problem then it is no longer a linear problem, right?

Incorrect. H(S) is the PSF of the lens. G(S) is the inverse of the PSF. Both are linear. Your input is:

H(image)+noise

Your output is:

G(H(image)+noise)=image+G(noise)

For a sharpening filter, G>1 at high frequencies, so noise increases. In practice, this doesn't matter much at low ISO.

H and G are linear operators, but the map image -> H(image) + noise is not linear. G is the inverse of H but not of H + noise, so G is not an exact inverse for the problem, right?
