Diffraction?

2D deconvolution is relatively straightforward (yet more complex than USM), but real-life images are 3D. Depth info and 3D processing would be needed for 3D objects.
Methinks the best of it will be the optical equivalent of "auto tune" on voices: it can be subtle and nearly undetectable, but the sense that you're not hearing the real thing is irritating.
The equivalent of "auto tune" might be using pattern recognition to guess at what's in the picture and then adjusting or even "patching in" details.

Meanwhile, USM and deconvolution of the diffraction (Airy) pattern are nearly equivalent operations, which is also why they run into some of the same problems visually.
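To make the comparison concrete, here is a minimal sketch of what USM actually does: add back a scaled copy of whatever a small blur removed. The random "image" and the sigma and amount values are placeholders, and SciPy's gaussian_filter just stands in for whatever blur the sharpening tool assumes:

```python
# Sketch only: unsharp mask as "original plus a scaled high-pass residual".
# The image, sigma and amount below are illustrative placeholders.
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(img, sigma=1.5, amount=1.0):
    blurred = gaussian_filter(img, sigma)     # low-pass estimate of the blur
    return img + amount * (img - blurred)     # boost what the blur removed

img = np.random.rand(256, 256)                # stand-in for a real photo
sharpened = unsharp_mask(img)
```

Deconvolution tries to invert one specific kernel; USM just boosts whatever high frequencies a small, roughly Gaussian kernel suppresses, which is why the two behave so similarly when the blur is near the size of the Airy disk.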
 
On the "auto tune" idea: I was thinking auto tune was more like taking notes that deviate from the expected, in-key ones and correcting them. It's a lot like the harmonics your voice is forced into when you make sounds through a long tube.
On USM and deconvolution being nearly equivalent: well, kind of. But unsharp mask isn't very satisfactory when the area is visibly blurred. Granted, AA filters do add blur, but it's a predictable pattern. And I think the ability of USM or deconvolution to look convincing starts with the original not being too far off. And when I take diffraction and try to sharpen it, it looks oversharpened.
 
My point was just that the auto-tuner forms its "expectation" of what you were trying to sing using a kind of "pitch recognition." Then it adjusts the sound of your voice to match that expectation and "resynthesizes" your voice.

Maybe the description of "patching in" details is a bit extreme, but it's kind of how I feel about patching in pitches. ;)
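For what it's worth, here is a toy sketch of the "form an expectation, then correct toward it" step I mean, with the actual pitch detection and resynthesis waved away. The C-major scale and the MIDI note range are purely illustrative assumptions:

```python
# Sketch only: snap a detected frequency to the nearest note of an
# equal-tempered C-major scale; real auto-tune detects pitch from audio
# and resynthesizes the voice, which is not attempted here.
import math

C_MAJOR_SEMITONES = {0, 2, 4, 5, 7, 9, 11}   # C D E F G A B

def snap_to_key(freq_hz, a4=440.0):
    midi = 69 + 12 * math.log2(freq_hz / a4)              # "pitch recognition"
    in_key = [n for n in range(128) if n % 12 in C_MAJOR_SEMITONES]
    target = min(in_key, key=lambda n: abs(n - midi))     # expected in-key note
    return a4 * 2 ** ((target - 69) / 12)                 # corrected pitch

print(snap_to_key(450.0))   # a slightly sharp A4 is pulled back to 440 Hz
```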
I should have been more careful. They are nearly "equivalent" operations in that both involve deconvolving a blur kernel out of the image, but the blur kernels are different between the two cases (and indeed between different USM settings or diffraction widths). As I'm sure you are aware, deconvolution works better when you use the right kernel. :)
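As a very simplified illustration of "the right kernel matters", here is a Wiener-style deconvolution sketch where the PSF is an explicit input. A Gaussian stands in for the Airy pattern, and the image, sigma and SNR values are only placeholders:

```python
# Sketch only: frequency-domain Wiener deconvolution with the blur kernel
# passed in explicitly, so a mismatched kernel is easy to try.
import numpy as np
from scipy.ndimage import gaussian_filter

def gaussian_psf(shape, sigma):
    y, x = np.indices(shape)
    cy, cx = shape[0] // 2, shape[1] // 2
    psf = np.exp(-((y - cy) ** 2 + (x - cx) ** 2) / (2 * sigma ** 2))
    return psf / psf.sum()

def wiener_deconvolve(blurred, psf, snr=100.0):
    H = np.fft.fft2(np.fft.ifftshift(psf))           # transfer function of the blur
    G = np.fft.fft2(blurred)
    W = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)    # Wiener filter
    return np.real(np.fft.ifft2(W * G))

img = np.random.rand(256, 256)                        # stand-in for a real photo
blurred = gaussian_filter(img, 2.0)
good = wiener_deconvolve(blurred, gaussian_psf(img.shape, 2.0))   # matched kernel
bad  = wiener_deconvolve(blurred, gaussian_psf(img.shape, 0.5))   # wrong kernel
```

With the matched kernel the result comes back close to the original; with a mismatched one it stays soft or starts to ring, which is roughly the effect described above.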
 
