Reaching the diffraction limit sooner with higher-megapixel cameras

David_Winston

I am trying to wrap my head around certain things. One of them is diffraction; according to Cambridge in Colour it depends on the pixel pitch of the sensor, the aperture of the lens and, of course, the final size and viewing distance of the print. Neglecting print size and viewing distance and focusing just on aperture and pixel pitch: will increased megapixels lead to hitting the diffraction limit sooner?

So if you were to jam 60 or 70 megapixels into a full-frame sensor, wouldn't you reach the diffraction limit very soon? I think it was a manager from Fujifilm who said that full frame's limit would be around 60 MP to 100 MP. Was that what he was referring to?

I did some testing with an A7 III and an A7R III, shooting a lens testing chart with the same lens at various apertures (f/5.6 to f/11), f/5.6 being where diffraction starts to kick in. When I downsampled the A7R III image (no sharpening on either), I still found the A7R III to be sharper at all apertures.

Why is that so? Is that gain in resolution outweighing the diffraction? Or is the jump from 24 MP to 42 MP just not that big?

Or should I have tested at smaller apertures? Maybe then I would have seen a difference (I rarely use f/16, so I didn't feel I should test it).

What do you think? Will 70 MP full-frame cameras even be useful?
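For orientation, a minimal back-of-envelope sketch (Python) of the numbers involved, assuming green light at 550 nm and approximate full-frame pixel pitches; the "diffraction limited aperture" rule of thumb used here is the common one where the Airy disk diameter (2.44 x wavelength x f-number) reaches about two pixels:

lam_um = 0.55                                            # assumed green light, 550 nm
for mp, pitch_um in [(24, 5.9), (42, 4.5), (61, 3.8)]:   # approximate full-frame pixel pitches
    dla = 2 * pitch_um / (2.44 * lam_um)                 # f-number where the Airy disk spans ~2 pixels
    print(f"{mp} MP full frame (~{pitch_um} um pitch): DLA ~ f/{dla:.1f}")

So yes, a denser sensor hits this nominal limit at a wider aperture, but as the replies below explain, that does not mean the denser sensor resolves less.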
 
Solution
1. The point at which diffraction induced blur becomes a factor that limits lens resolution. Stopping a lens down minimises most aberrations, but increases the size of the blur disc from diffraction. As a result, most lenses offer more resolution when stopped down to some extent, then offer less resolution when stopped down more than that. When a lens is described as 'diffraction limited' from a particular aperture, that means it is sharpest at that aperture. This is in theory independent of sensor pixel count and pixel size, though in practice, as most lens tests are carried out on camera, it may be confused with point 2 below.

2. The point at which diffraction induced blur becomes an important factor in practice. When diffraction...
Diffraction is a result of a physical process at the border of your aperture.
No, it is not. The energy concentrated on the boundary of the aperture is zero and has no effect on the image.
To understand diffraction this may help:


Even laser light shows diffraction - even in a pretty clear pattern:

https://www.stem.org.uk/elibrary/resource/26770?wvideo=x8f5pjstni

Diffraction has to do with interference effects. Interference effects can be described with convolution:

https://archive.cnx.org/contents/32...41d@2/the-convolution-theorem-and-diffraction
And the mathematics to undo convolution processes is: deconvolution!
 
Right: under any given viewing conditions (displayed image size/viewing distance) a higher resolution (higher pixel count) sensor at the same f-stop will give a somewhat sharper image, not worse. That is, "line pairs per picture height" improves at least somewhat. Maybe the image is not as sharp as with the same sensor used at a lower f-stop, but it is never worse for overall resolution to increase sensor resolution at a given f-stop.

At worst, the resolution gains from increasing pixel count become less and less once pixel size in microns is significantly smaller than the aperture ratio.
The color capture is still aliased at diffraction-limited f/8 (IOW, a very sharp f/8 whose blur is almost totally diffraction) on cameras with 1.4 micron pixels, even if there is no aliasing in the luminance abstraction of a greyscale subject. I'd guess based on experience with my Pentax Q and other small-pixel cameras that for luminance, pixels in microns about 0.3x the f-ratio are sufficient, and for color in a Bayer sensor with no AA filter, about 0.1x the f-ratio. I should probably do controlled experiments and get these numbers more confidently.
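Purely to illustrate those rules of thumb (they are the poster's experience-based estimates, not established constants), a tiny sketch:

def pitch_needed_um(f_number, channel="luminance"):
    # heuristics from the post above: ~0.3x the f-number for luminance detail,
    # ~0.1x for the colour channels of an AA-less Bayer sensor
    return (0.3 if channel == "luminance" else 0.1) * f_number

for N in (2.8, 5.6, 8, 11):
    print(f"f/{N}: ~{pitch_needed_um(N):.2f} um (luminance), ~{pitch_needed_um(N, 'colour'):.2f} um (colour)")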
Since increasing sensor resolution is easier/cheaper than increasing lens resolution, it makes sense to me to have sensors that out-resolve one's best lenses somewhat, so as to get the best out of all one's lenses.
People will differ on what "outresolve" means. To me, it means that you have made a virtually analog capture of the optical projection with no aliasing, including the individual red and blue channels in a Bayer sensor. Other people may draw the line at something like the point where you get no further acute fine greyscale details with fairly high contrast by going to a higher pixel density, but this implies aliasing, at least in the color channels if not in the luminance abstraction.

True, we hate big, slow files, but that does not mean that low pixel counts are ideal from an IQ perspective, just because they are easier to handle and are automatically fairly "sharp" looking (despite having spatially damaged detail).

--
John
http://www.pbase.com/image/55384958.jpg
 
John Sheehy said:
KLO82 said:
D Cox said:
Most lens aberrations improve as you stop down; diffraction gets worse.
As shown in the Canon EOS R white paper (a generic curve showing the two boundaries to resolution in all lenses):
Those are isolated curves, of course. The combined curve is not the lower of the two at any X position, but rather falls short of the peak where they cross each other. Blurs combine in quadrature, just like noise. There is no thresholding effect.
Here is one of my simulations, done for the Olympus 45/1.2 (I do not remember for which camera), using DXO data and crude curve fitting. The horizontal axis is f-stop, the vertical one is resolution (whatever DXO measures).

The lower curve is the actual measured data; the left one is the aberrations, the decaying one is the diffraction. The pixel softening is taken into account in the curve fitting but not in the aberration/diffraction curves. I plotted on a log horizontal scale while Canon plotted on a linear one. The lens peaks at around f/3.

The Canon graph looks a bit questionable to me since the slopes at the maximal resolution should be the same but with opposite signs. This might be true but in the inverse squares model, they should also intersect at the sharpest f-stop, somewhere around f/2.8-f/4 for a good lens, and this is not what I see.
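For anyone who wants to play with the quadrature idea, here is a toy model (made-up coefficients, not the DXO data or the fit above) showing how the combined curve peaks below the crossing point and has no threshold:

import numpy as np

fstops = np.linspace(1.2, 22, 500)          # f-number
blur_aberration = 20.0 / fstops             # toy: aberration blur (um) shrinks as you stop down
blur_diffraction = 1.34 * fstops            # toy: Airy diameter 2.44 * 0.55 um * N
blur_total = np.sqrt(blur_aberration**2 + blur_diffraction**2)   # blurs add in quadrature
resolution = 1.0 / blur_total               # resolution ~ 1 / blur

print(f"toy model peaks at f/{fstops[np.argmax(resolution)]:.1f}")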
 
Diffraction is a result of a physical process at the border of your aperture.
No, it is not. The energy concentrated on the boundary of the aperture is zero and has no effect on the image.
To understand diffraction this may help:
It also may help to understand math and physics better instead of watching YouTube videos.

Even laser light shows diffraction - even in a pretty clear pattern:

https://www.stem.org.uk/elibrary/resource/26770?wvideo=x8f5pjstni

Diffraction has to do with interference effects. Interference effects can be described with convolution:

https://archive.cnx.org/contents/32...41d@2/the-convolution-theorem-and-diffraction

And the mathematics to undo convolution processes is: deconvolution!
Unless you divide by zero or by small factors in the presence of noise and discretization errors. You seem to have unshakable confidence that you can undo everything.
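To make the "dividing by small factors" point concrete, here is a small numpy sketch (a 1-D toy with an assumed Gaussian stand-in for the PSF): naive inverse filtering divides the spectrum by near-zero transfer-function values and blows the noise up, while a Wiener-style regularised division stays bounded.

import numpy as np

rng = np.random.default_rng(0)
n = 512
x = (rng.random(n) < 0.01).astype(float)             # sparse "point sources"
h = np.exp(-0.5 * ((np.arange(n) - n//2) / 4.0)**2)  # assumed Gaussian stand-in for the PSF
h /= h.sum()

H = np.fft.rfft(np.fft.ifftshift(h))                 # transfer function, near zero at high frequencies
y = np.fft.irfft(np.fft.rfft(x) * H, n) + rng.normal(0, 1e-3, n)   # blur plus a little noise

naive = np.fft.irfft(np.fft.rfft(y) / H, n)                            # divide by tiny |H|
wiener = np.fft.irfft(np.fft.rfft(y) * np.conj(H) / (np.abs(H)**2 + 1e-4), n)

print("max error, naive inverse:", float(np.abs(naive - x).max()))     # enormous: noise divided by near-zero |H|
print("max error, Wiener       :", float(np.abs(wiener - x).max()))    # stays bounded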
 
Right: under any given viewing conditions (displayed image size/viewing distance) a higher resolution (higher pixel count) sensor at the same f-stop will give a somewhat sharper image, not worse. That is, "line pairs per picture height" improves at least somewhat. Maybe the image is not as sharp as with the same sensor used at a lower f-stop, but it is never worse for overall resolution to increase sensor resolution at a given f-stop.

At worst, the resolution gains from increasing pixel count become less and less once pixel size in microns is significantly smaller than the aperture ratio.

Since increasing sensor resolution is easier/cheaper than increasing lens resolution, it makes sense to me to have sensors that out-resolve one's best lenses somewhat, so as to get the best out of all one's lenses.
The problem with sensors outresolving lenses is you get MTFs that look like this:
Why is that a problem?
[MTF chart]
rather than this:

[MTF chart]
What makes this better?
 
Diffraction is a result of a physical process at the border of your aperture.
No, it is not. The energy concentrated on the boundary of the aperture is zero and has no effect on the image.
To understand diffraction this may help:
It also may help to understand math and physics better instead of watching YouTube videos.

Even laser light shows diffraction - even in a pretty clear pattern:

https://www.stem.org.uk/elibrary/resource/26770?wvideo=x8f5pjstni

Diffraction has to do with interference effects. Interference effects can be described with convolution:

https://archive.cnx.org/contents/32...41d@2/the-convolution-theorem-and-diffraction

And the mathematics to undo convolution processes is: deconvolution!
Unless you divide by zero or by small factors in the presence of noise and discretization errors. You seem to have unshakable confidence that you can undo everything.
If an effect follows clear physical rules and can be described mathematically, then it can be corrected by applying to the signal the inverse of the functions that describe the adverse effect.

This is done via convolution and deconvolution.

It is done in scientific fields where diffraction matters quite often:

https://iopscience.iop.org/article/10.1086/342606/pdf


Best regards

Holger
 
Diffraction is a result of a physical process at the border of your aperture.
No, it is not. The energy concentrated on the boundary of the aperture is zero and has no effect on the image.
To understand diffraction this may help:
It also may help to understand math and physics better instead of watching YouTube videos.

Even laser light shows diffraction - even in a pretty clear pattern:

https://www.stem.org.uk/elibrary/resource/26770?wvideo=x8f5pjstni

Diffraction has to do with interference effects. Interference effects can be described with convolution:

https://archive.cnx.org/contents/32...41d@2/the-convolution-theorem-and-diffraction

And the mathematics to undo convolution processes is: deconvolution!
Unless you divide by zero or by small factors in the presence of noise and discretization errors. You seem to have unshakable confidence that you can undo everything.
If an effect follows clear physical rules and can be described mathematically, then it can be corrected by applying to the signal the inverse of the functions that describe the adverse effect.
Right. Solve 0x = 0 then. Or explain to me how you reverse a diffusion process. A small drop of ink in a glass of water - can you tell me after a few seconds where I dropped it?
This is done via convolution and deconvolution.

It is done in scientific fields where diffraction matters quite often:

https://iopscience.iop.org/article/10.1086/342606/pdf
As I said several times already, it can be done up to some limit (which is not very high, and for all photography purposes is just a bit finer sharpening).

BTW, there is a reason why the diffraction limit is a limit indeed (no, I do not mean CiC). There is a reason why people invent new ways to break the diffraction limit; and no, they do not use deconvolution.
 
Diffraction is a result of a physical process at the border of your aperture.
No, it is not. The energy concentrated on the boundary of the aperture is zero and has no effect on the image.
To understand diffraction this may help:
It also may help to understand math and physics better instead of watching YouTube videos.

Even laser light shows diffraction - even in a pretty clear pattern:

https://www.stem.org.uk/elibrary/resource/26770?wvideo=x8f5pjstni

Diffraction has to do with interference effects. Interference effects can be described with convolution:

https://archive.cnx.org/contents/32...41d@2/the-convolution-theorem-and-diffraction

And the mathematics to undo convolution processes is: deconvolution!
Unless you divide by zero or by small factors in the presence of noise and discretization errors. You seem to have unshakable confidence that you can undo everything.
If an effect follows clear physical rules and can be described mathematically, then it can be corrected by applying to the signal the inverse of the functions that describe the adverse effect.

This is done via convolution and deconvolution.

It is done in scientific fields where diffraction matters quite often:

https://iopscience.iop.org/article/10.1086/342606/pdf

Best regards

Holger
Two problems. One is that deconvolution is not, in general, a well-posed problem. That is, there could be a number of different convolutions that lead to a given function, so now you have to find out which of those is the one you want. Two is that this continuous-domain mathematics does not model the actual problem completely accurately. If you try to deconvolve diffraction, you run out of information very quickly.
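A minimal illustration of that ill-posedness, using an 8-sample moving average (whose transfer function has exact zeros) as a stand-in for a blur: two different inputs give the same blurred output away from the edges, so no algorithm can tell them apart from that output alone.

import numpy as np

n = 240
t = np.arange(n)
h = np.ones(8) / 8                          # 8-sample moving average: exact zeros in its MTF
x1 = np.sin(2*np.pi*3*t/n)                  # a smooth signal
x2 = x1 + 0.5*np.sin(2*np.pi*t/8)           # plus a sinusoid sitting exactly on one of those zeros
y1 = np.convolve(x1, h, mode="same")
y2 = np.convolve(x2, h, mode="same")
print("blurred outputs identical away from the edges:",
      np.allclose(y1[8:-8], y2[8:-8]))       # True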

--
263, look deader.
 
That is about the “raw” optical image (the formula relates to the Airy disk size that I referred to); it does not rule out subsequent deconvolution image processing to partially undo the effects of diffraction. Bart Van Der Wolf has done some writing and demos of this, described briefly at https://openphotographyforums.com/f...tion-of-diffraction-with-deconvolution.12555/
All deconvolution does is recover lost apparent sharpness. In other words, as long as MTF is not zero, the detail is still there, just at very low contrast. Deconvolution can recover some of that contrast if noise is low enough. However, if MTF is zero, which it is at that point in the formula I referenced, there's nothing to recover.
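For reference, the zero being referred to is the diffraction cutoff frequency of an ideal lens in incoherent light, 1/(wavelength x f-number). A quick sketch of the numbers, assuming 550 nm light:

lam_mm = 550e-6                       # assumed green light, 550 nm, in mm
for N in (4, 8, 11, 16, 22):
    print(f"f/{N:>2}: MTF reaches zero at ~{1/(lam_mm*N):4.0f} cycles/mm at the sensor")
# for comparison, the Nyquist frequency of a ~4.5 um pitch sensor is about 111 cycles/mm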
 
Since it is not really advertised, but just mentioned in the technical material, I suspect that it actually works, but only for a small level of diffraction, such as a stop or half a stop over the theoretical point where diffraction occurs.
The thing is, even just a general JPEG-engine sharpening amount or radius depending on f-number at exposure time (something many people do if needed, anyway) may qualify as diffraction correction in corporate weasel-speak. There are many different levels at which they could "address" diffraction for those people not willing to accept its greater softness in default conversions with higher f-numbers, but it is impossible to actually deconvolve an Airy disk from even a point source of light against a black background, because our pixel densities are far too low to properly resolve its shape for that task. Just seeing that f/22 is softer than f/4 does not mean that the "disk" is being resolved; it just means that maximum neighboring-pixel contrast is reduced.
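A quick arithmetic check of how coarsely a typical sensor samples the Airy pattern (assuming 550 nm light and a ~4.5 micron pitch, roughly an A7R III):

lam_um, pitch_um = 0.55, 4.5
for N in (4, 8, 16, 22):
    d = 2.44 * lam_um * N                       # diameter of the central Airy disk, um
    print(f"f/{N:>2}: central disk ~{d:4.1f} um, ~{d/pitch_um:.1f} pixels across (the rings are finer still)")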
 
Diffraction is a result of a physical process at the border of your aperture.
No, it is not. The energy concentrated on the boundary of the aperture is zero and has no effect on the image.
To understand diffraction this may help:
It also may help to understand math and physics better instead of watching YouTube videos.

Even laser light shows diffraction - even in a pretty clear pattern:

https://www.stem.org.uk/elibrary/resource/26770?wvideo=x8f5pjstni

Diffraction has to do with interference effects. Interference effects can be described with convolution:

https://archive.cnx.org/contents/32...41d@2/the-convolution-theorem-and-diffraction

And the mathematics to undo convolution processes is: deconvolution!
Unless you divide by zero or by small factors in the presence of noise and discretization errors. You seem to have unshakable confidence that you can undo everything.
If an effect follows clear physical rules and can be described mathematically, then it can be corrected by applying to the signal the inverse of the functions that describe the adverse effect.
Right. Solve 0x = 0 then. Or explain to me how you reverse a diffusion process. A small drop of ink in a glass of water - can you tell me after a few seconds where I dropped it?
This is done via convolution and deconvolution.

It is done in scientific fields where diffraction matters quite often:

https://iopscience.iop.org/article/10.1086/342606/pdf
As I said several times already, it can be done up to some limit (which is not very high, and for all photography purposes is just a bit finer sharpening).

BTW, there is a reason why the diffraction limit is a limit indeed (no, I do not mean CiC). There is a reason why people invent new ways to break the diffraction limit; and no, they do not use deconvolution.
Diffraction is there every time. The limit is defined as the point where the camera sensor resolves the effect of diffraction. Thus, it is not a fixed border or anything that has to do with physical rules, but simply a matter of the resolution of the effect.

Diffraction is not a diffusion effect but an effect of wave functions being deformed in a predictable way. You don't have to deal with linear equations but with wave functions/Fourier transformations.
 
Canon’s DPP can correct diffraction in dual pixel raw files.
... Diffraction is not correctable.
Mathematically, diffraction can be an invertible transformation of the signal, or closely approximated by an invertible transformation, so it is conceivable that with a distinctly oversampled signal (photosites far smaller than the Airy disk diameter) the diffraction effect can be approximately undone in post-processing by some version of sharpening.

I am not sure of the practicalities though.
Diffraction limits resolution. That's not undoable.
Well, what it does is lower the contrast; it is then the noise and other artifacts that limit how well the contrast can be restored.
I think what Canon is doing is just applying some deconvolution sharpening to reduce the blur, but that doesn't improve resolving power.
Noise exists in a spectrum (along with artifacts) competing with signal. In the absence of noise and other artifacts, only total extinction of contrast would actually lose resolution.

Single exposures have significant noise. What if a camera is set up to take a series of many exposures, stacks them with extended precision, and does the black frame and flat field corrections? Then, contrast well below the human-based Rayleigh criterion becomes reversible. The same would be true of a photon-counting camera, with no significant limit on exposure. You could shoot an ISO 0.0001 exposure and deconvolve it with a minimum of added visible noise.
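A sketch of that argument using the textbook MTF of an ideal circular aperture: as the noise floor drops with the square root of the number of stacked frames, the highest spatial frequency whose contrast still sits above the floor creeps toward the hard cutoff (the SNR figure is an assumption, purely illustrative):

import numpy as np

def diffraction_mtf(s):
    # MTF of an ideal circular aperture; s = frequency / cutoff frequency, 0..1
    s = np.clip(s, 0.0, 1.0)
    return (2/np.pi) * (np.arccos(s) - s*np.sqrt(1 - s**2))

s = np.linspace(0, 1, 100001)
mtf = diffraction_mtf(s)
snr_single = 100                                        # assumed SNR of one exposure
for frames in (1, 16, 256, 4096):
    floor = 1 / (snr_single * np.sqrt(frames))          # stacking lowers the noise floor ~ 1/sqrt(K)
    print(f"{frames:5d} frame(s): contrast restorable out to ~{s[mtf >= floor].max():.3f} of the cutoff")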
 
Canon’s DPP can correct diffraction in dual pixel raw files.
... Diffraction is not correctable.
Mathematically, diffraction can be an invertible transformation of the signal, or closely approximated by an invertible transformation, so it is conceivable that with a distinctly oversampled signal (photosites far smaller than the Airy disk diameter) the diffraction effect can be approximately undone in post-processing by some version of sharpening.

I am not sure of the practicalities though.
Diffraction limits resolution. That's not undoable.
Well, what it does is lower the contrast; it is then the noise and other artifacts that limit how well the contrast can be restored.
But it lowers it to zero at a certain point.
I think what Canon is doing is just applying some deconvolution sharpening to reduce the blur, but that doesn't improve resolving power.
Noise exists in a spectrum (along with artifacts) competing with signal. In the absence of noise and other artifacts, only total extinction of contrast would actually lose resolution.
And that's the point the link I provided describes - the cutoff frequency.
Single exposures have significant noise. What if a camera is set up to take a series of many exposures, stacks them with extended precision, and does the black frame and flat field corrections? Then, contrast well below the human-based Rayleigh criterion becomes reversible.
Yes, but Rayleigh is at MTF = 9%, not MTF = 0 like I linked to.

--
Lee Jay
 
Diffraction is a result of a physical process at the border of your aperture.
No, it is not. The energy concentrated on the boundary of the aperture is zero and has no effect on the image.
To understand diffraction this may help:
It also may help to understand math and physics better instead of watching YouTube videos.

Even laser light shows diffraction - even in a pretty clear pattern:

https://www.stem.org.uk/elibrary/resource/26770?wvideo=x8f5pjstni

Diffraction has to do with interference effects. Interference effects can be described with convolution:

https://archive.cnx.org/contents/32...41d@2/the-convolution-theorem-and-diffraction

And the mathematics to undo convolution processes is: deconvolution!
Unless you divide by zero or by small factors in the presence of noise and discretization errors. You seem to have unshakable confidence that you can undo everything.
If an effect follows clear physical rules and can be described mathematically, then it can be corrected by applying to the signal the inverse of the functions that describe the adverse effect.

This is done via convolution and deconvolution.

It is done in scientific fields where diffraction matters quite often:

https://iopscience.iop.org/article/10.1086/342606/pdf

Best regards

Holger
Two problems. One is that deconvolution is not, in general, a well-posed problem. That is, there could be a number of different convolutions that lead to a given function, so now you have to find out which of those is the one you want. Two is that this continuous-domain mathematics does not model the actual problem completely accurately. If you try to deconvolve diffraction, you run out of information very quickly.
That's correct. But there are programs that do this job for you. I use Piccure+, which was developed by scientists working in the field of astronomy; they simply made a consumer product from the routines they use to fix their scientific photos.

And there are new technologies using AI to fix the problem. I am pretty sure that at least some of them will run procedures to adapt deconvolution techniques in the best way to an actual photo.

I am interested in this matter as I do a lot of macro photography. We have the stacking technique: take a series of photos below the limit of diffraction and combine them into one single photo that brings together all the sharp parts of the series. But this technique bears the risk of making mistakes. A single hair missing, or two hairs where there should be one, may make a big difference in the determination of an insect.
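For what it's worth, the core of that stacking idea fits in a few lines (a crude sketch, not what any commercial stacking tool actually does): pick, per pixel, the frame with the highest local sharpness.

import numpy as np
from scipy import ndimage

def focus_stack(frames):
    # frames: list of aligned 2-D greyscale arrays of the same shape
    stack = np.stack(frames)                                   # (K, H, W)
    sharpness = np.stack([ndimage.gaussian_filter(np.abs(ndimage.laplace(f)), 2)
                          for f in frames])                    # local high-frequency energy
    best = np.argmax(sharpness, axis=0)                        # index of the sharpest frame per pixel
    return np.take_along_axis(stack, best[None], axis=0)[0]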

If I need full sharpness I take the photo without worrying about diffraction, and fix the blur caused by diffraction with the help of programs like Piccure+.

Best regards

Holger
 
Canon’s DPP can correct diffraction in dual pixel raw files.

How? I have no idea. But I suspect other software programs will soon be able to do the same.
I have no idea either. The details of all these image rendering features are proprietary.

Of course no post-acquisition corrections can create information out of thin air. You only have the information available in the original data. This means that some sort of mathematical model modifies the data in an attempt to simulate image rendering as if more information were available.

Dual pixel raw files don't violate the principle that one cannot create new information after data acquisition. Dual pixel raw files create two separate images – one for each sub-pixel. The raw file is twice as large. There actually is more information. The additional information can be used to minimize diffraction effects.

So, I suggest other software programs could do the same with dual pixel raw files.

Diffraction remediation from single-pixel raw files is another matter. FUJIFILM offers this feature with in-camera JPEGs for newer bodies. I have no idea how this works.
 
I am trying to wrap my head around certain things. One of them is diffraction; according to Cambridge in Colour it depends on the pixel pitch of the sensor, the aperture of the lens and, of course, the final size and viewing distance of the print. Neglecting print size and viewing distance and focusing just on aperture and pixel pitch: will increased megapixels lead to hitting the diffraction limit sooner?
Yes and no. At 100% viewing the effect of diffraction will be more visible with the higher-resolution sensor, but when viewed at like size you will see no difference.
So if you were to jam 60 or 70 megapixels into a full-frame sensor, wouldn't you reach the diffraction limit very soon? I think it was a manager from Fujifilm who said that full frame's limit would be around 60 MP to 100 MP. Was that what he was referring to?
Or he could have been referring to the fact that if the sensor density is too great then you will see too much noise when viewed at 100% because the base noise level remains constant.
I did some testing with an A7 III and an A7R III, shooting a lens testing chart with the same lens at various apertures (f/5.6 to f/11), f/5.6 being where diffraction starts to kick in. When I downsampled the A7R III image (no sharpening on either), I still found the A7R III to be sharper at all apertures.
In all tests I have seen, the higher-resolution sensor downsampled to match the lower shows the higher to be sharper. My own testing has confirmed this. This will be unaffected by diffraction. When downsampled, the diffraction limit is the same because both photos are now 24 MP (see the sketch at the end of this post).
Why is that so? Is that gain in resolution outweighing the diffraction? Or is the jump from 24 MP to 42 MP just not that big?
See above.
Or should I have tested at smaller apertures? Maybe then I would have seen a difference (I rarely use f/16, so I didn't feel I should test it).
Won't make a difference.
What do you think? Will 70 MP full-frame cameras even be useful?
Time will tell.
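To put rough numbers on the like-size comparison, a toy model that treats pixel pitch and a diffraction blur scale as blur diameters added in quadrature (the pitches are approximate and the absolute values mean little; only the comparison does):

import numpy as np

lam_um = 0.55                                          # assumed green light, 550 nm
for name, pitch in (("A7 III, ~5.9 um", 5.9), ("A7R III, ~4.5 um", 4.5)):
    for N in (5.6, 8, 11):
        blur = np.hypot(pitch, 1.22 * lam_um * N)      # quadrature sum, um on the sensor
        print(f"{name} at f/{N}: ~{blur:.1f} um combined blur")
# the denser sensor keeps a (shrinking) edge at every aperture, matching the test described above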
 
Also factor in the lens aperture. With some primes (lenses with very large apertures like f/1.2 and f/1.4), diffraction can start at f/9 or f/11.
I have never heard of this. Can you supply a source for this information? If true, possibly it's due to the number of elements and the type of glass used, which can be different in fast lenses than in slow lenses. Another possibility is that at small apertures a smaller percentage of the glass elements is used compared to slower lenses.
 
Diffraction is a result of a physical process at the border of your aperture.
No, it is not. The energy concentrated on the boundary of the aperture is zero and has no effect on the image.
To understand diffraction this may help:
It also may help to understand math and physics better instead of watching YouTube videos.

Even laser light shows diffraction - even in a pretty clear pattern:

https://www.stem.org.uk/elibrary/resource/26770?wvideo=x8f5pjstni

Diffraction has to do with interference effects. Interference effects can be described with convolution:

https://archive.cnx.org/contents/32...41d@2/the-convolution-theorem-and-diffraction

And the mathematics to undo convolution processes is: deconvolution!
Unless you divide by zero or by small factors in the presence of noise and discretization errors. You seem to have unshakable confidence that you can undo everything.
If an effect follows clear physical rules and can be described mathematically, then it can be corrected by applying to the signal the inverse of the functions that describe the adverse effect.
Right. Solve 0x = 0 then. Or explain to me how you reverse a diffusion process. A small drop of ink in a glass of water - can you tell me after a few seconds where I dropped it?
This is done via convolution and deconvolution.

It is done in scientific fields where diffraction matters quite often:

https://iopscience.iop.org/article/10.1086/342606/pdf
As I said several times already, it can be done up to some limit (which is not very high, and for all photography purposes is just a bit finer sharpening).

BTW, there is a reason why the diffraction limit is a limit indeed (no, I do not mean CiC). There is a reason why people invent new ways to break the diffraction limit; and no, they do not use deconvolution.
Diffraction is there every time. The limit is defined as the point where the camera sensor resolves the effect of diffraction.
This does not make much sense.
Thus, it is not a fixed border or anything that has to do with physical rules, but simply a matter of the resolution of the effect.
No, there is an actual cutoff. It has been linked here previously and you keep ignoring it. There are no frequencies resolved above that, period.
Diffraction is not a diffusion effect
It is almost the same. If you model diffraction with a Gaussian kernel, it is exactly the same. And I was responding to your sweeping statement about all math processes, etc.
but an effect of wave functions being deformed in a predictable way.
So is Gaussian blur, i.e., diffusion.
You don't have to deal with linear equations but with wave functions/Fourier transformations.
The latter are linear. Convolution is linear as well.
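A quick numerical check of that linearity claim, with an arbitrary kernel standing in for the blur:

import numpy as np

rng = np.random.default_rng(1)
x, y = rng.normal(size=256), rng.normal(size=256)
h = np.hanning(15); h /= h.sum()                    # arbitrary blur kernel
a, b = 2.0, -0.7
lhs = np.convolve(a*x + b*y, h, mode="same")
rhs = a*np.convolve(x, h, mode="same") + b*np.convolve(y, h, mode="same")
print("convolution is linear:", np.allclose(lhs, rhs))   # True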
 
Also factor in the lens aperture. With some primes (lenses with very large apertures like f/1.2 and f/1.4), diffraction can start at f/9 or f/11.
I have never heard of this. Can you supply a source for this information? If true, possibly it's due to the number of elements and the type of glass used, which can be different in fast lenses than in slow lenses. Another possibility is that at small apertures a smaller percentage of the glass elements is used compared to slower lenses.
It is very simple. At f/9 or f/11, physics changes and light becomes a wave. Above that, it is particles. The magic aperture is f/9.1919191..
 
Diffraction is a result of a physical process at the border of your aperture.
No, it is not. The energy concentrated on the boundary of the aperture is zero and has no effect on the image.
To understand diffraction this may help:
It also may help to understand math and physics better instead of watching YouTube videos.

Even laser light shows diffraction - even in a pretty clear pattern:

https://www.stem.org.uk/elibrary/resource/26770?wvideo=x8f5pjstni

Diffraction has to do with interference effects. Interference effects can be described with convolution:

https://archive.cnx.org/contents/32...41d@2/the-convolution-theorem-and-diffraction

And the mathematics to undo convolution processes is: deconvolution!
Unless you divide by zero or by small factors in the presence of noise and discretization errors. You seem to have unshakable confidence that you can undo everything.
If an effect follows clear physical rules and can be described mathematically, then it can be corrected by applying to the signal the inverse of the functions that describe the adverse effect.

This is done via convolution and deconvolution.

It is done in scientific fields where diffraction matters quite often:

https://iopscience.iop.org/article/10.1086/342606/pdf

Best regards

Holger
Two problems. One is that deconvolution is not, in general, a well-posed problem. That is, there could be a number of different convolutions that lead to a given function, so now you have to find out which of those is the one you want. Two is that this continuous-domain mathematics does not model the actual problem completely accurately. If you try to deconvolve diffraction, you run out of information very quickly.
That's correct. But there are programs that do this job for you. I use Piccure+, which was developed by scientists working in the field of astronomy; they simply made a consumer product from the routines they use to fix their scientific photos.
There are programs that can sometimes do what appears to be a convincing job. Essentially what they are doing is using various heuristics to select a likely convolution candidate (a 'prior').
And there are new technologies using AI to fix the problem. I am pretty sure that at least some of them will run procedures to adapt deconvolution techniques in the best way to an actual photo.
AI is not magic. All AI does is a slightly more sophisticated version of the above, that is, use a prior which might be more convincing to a human being than some of the other methods (a bare-bones sketch of plain deconvolution follows at the end of this post).
I am interested in this matter as I do a lot of macro photography. We have the stacking technique: take a series of photos below the limit of diffraction and combine them into one single photo that brings together all the sharp parts of the series. But this technique bears the risk of making mistakes. A single hair missing, or two hairs where there should be one, may make a big difference in the determination of an insect.

If I need full sharpness I take the photo without worrying about diffraction, and fix the blur caused by diffraction with the help of programs like Piccure+.
If you can deconvolve away diffraction you can also deconvolve away out-of-focus blur, so why not cut out the middleman?
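As one concrete, mundane example of what such tools build on, here is plain Richardson-Lucy deconvolution with a known, assumed PSF (not Piccure+'s or anyone's actual algorithm, and no learned prior):

import numpy as np
from scipy.signal import convolve2d
from skimage import restoration

rng = np.random.default_rng(2)
truth = np.zeros((64, 64)); truth[::8, ::8] = 1.0             # sparse point sources
k = np.hanning(7); psf = np.outer(k, k); psf /= psf.sum()     # assumed, known blur kernel
blurred = convolve2d(truth, psf, mode="same") + rng.normal(0, 1e-3, truth.shape)

restored = restoration.richardson_lucy(np.clip(blurred, 0, None), psf, 30)
print("peak before:", round(float(blurred.max()), 3), " after 30 iterations:", round(float(restored.max()), 3))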
 
