Reaching the diffraction limit sooner with higher-megapixel cameras

David_Winston

I am trying to wrap my head around certain things. One of them is diffraction: according to Cambridge in Colour, it depends on the pixel pitch of the sensor, the aperture of the lens and, of course, the final size and viewing distance of the print. Neglecting the print size and the viewing distance and focusing just on aperture and pixel pitch: will increased megapixel counts lead to hitting the diffraction limit sooner?

So if you were to jam 60 or 70 megapixels into a full-frame sensor, wouldn't you reach the diffraction limit very soon? I think it was some manager from Fujifilm who said that FF's limit would be around 60 MP to 100 MP. Was that what he was referring to?

I did some testing with an A7 III and an A7R III with the same lens on a lens test chart at various apertures (f/5.6 to f/11); f/5.6 is where diffraction starts to kick in. When I downsampled the A7R III image (no sharpening on either), I still found the A7R III to be sharper at all apertures.

Why is that so? Is the gain in resolution outweighing the diffraction? Or is the jump from 24 MP to 42 MP just not that big?

Or should I have tested at smaller apertures? Maybe then I would have seen the difference (I rarely use f/16, so I didn't feel I should test it).

What do you think? Will 70 MP FF cameras even be useful?
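
To put rough numbers on that question, here is a back-of-the-envelope sketch using the common rule of thumb (also quoted further down the thread) that per-pixel diffraction softening becomes noticeable around f-number ≈ 2 × pixel pitch in microns. The sensor size and megapixel counts are purely illustrative, and the rule of thumb is only an approximation:

```python
# Back-of-the-envelope sketch (illustrative numbers only): pixel pitch for a
# 36 x 24 mm sensor at a given megapixel count, and the f-number at which the
# "f-number ~ 2 x pitch in microns" rule of thumb says per-pixel diffraction
# softening becomes noticeable.
SENSOR_W_MM, SENSOR_H_MM = 36.0, 24.0

def pitch_um(megapixels):
    """Approximate pixel pitch in microns for a full-frame sensor."""
    pixels_per_mm = (megapixels * 1e6 / (SENSOR_W_MM * SENSOR_H_MM)) ** 0.5
    return 1000.0 / pixels_per_mm

for mp in (24, 42, 61):
    p = pitch_um(mp)
    print(f"{mp} MP: pitch ~{p:.1f} um, per-pixel diffraction by ~f/{2 * p:.0f}")
# -> 24 MP: ~6.0 um (~f/12), 42 MP: ~4.5 um (~f/9), 61 MP: ~3.8 um (~f/8)
```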
 
Solution
1. The point at which diffraction-induced blur becomes a factor that limits lens resolution. Stopping a lens down minimises most aberrations but increases the size of the blur disc from diffraction. As a result, most lenses offer more resolution when stopped down to some extent, then less resolution when stopped down further than that. When a lens is described as 'diffraction limited' from a particular aperture, that means it is sharpest at that aperture. This is in theory independent of sensor pixel count and pixel size, though in practice, as most lens tests are carried out on camera, it may be confused with point 2 below.

2. The point at which diffraction-induced blur becomes an important factor in practice. When diffraction...
Canon’s DPP can correct diffraction in dual pixel raw files.

How? I have no idea. But I suspect other software programs will soon be able to do the same.

It might even be possible without dual pixel raw files with a good lens profile. Or not.
 
I don't think we have in-camera lens diffraction correction. This would be a problem that would be very difficult to solve mathematically.
Doing it exactly right would be very complex, indeed. You'd need to know the exact wavelengths captured, mixed within each pixel, and you'd need to resolve the bulk of the Airy disk with dozens of pixels to get even the first couple of rings into the correction.

At f/64 and with very high pixel density, with a laser, you will have one well-resolved Airy "disk", easily deconvolved, generating some high-frequency blue noise which disappears with a small display size or downsampling, a net gain if everything is done right. You could use that single Airy "disk" shape as a deconvolution kernel to deconvolve a more complicated scene, especially if enough light was captured to get a smooth Airy "disk". A noisy one will of course create more noise in the deconvolution than an idealized disk.
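
As a rough illustration of that "use the Airy pattern as the deconvolution kernel" idea, here is a minimal sketch. It assumes numpy, scipy and scikit-image (none of which are mentioned above), the PSF scale and the synthetic scene are arbitrary, and it says nothing about how any particular camera or raw converter does it:

```python
# Minimal sketch: build an Airy-pattern PSF, blur a synthetic scene with it,
# then deconvolve with Richardson-Lucy using that same PSF as the kernel.
import numpy as np
from scipy.signal import fftconvolve
from scipy.special import j1                      # first-order Bessel function
from skimage.restoration import richardson_lucy

def airy_psf(size=33, first_zero_px=8.0):
    """Airy intensity pattern (2*J1(x)/x)^2 with its first dark ring at first_zero_px pixels."""
    yy, xx = np.mgrid[:size, :size] - size // 2
    r = np.hypot(xx, yy)
    x = 3.8317 * r / first_zero_px                # first zero of J1 is at x ~= 3.8317
    with np.errstate(divide="ignore", invalid="ignore"):
        psf = (2.0 * j1(x) / x) ** 2
    psf[r == 0] = 1.0                             # limit of the expression at x = 0
    return psf / psf.sum()

rng = np.random.default_rng(0)
scene = rng.random((256, 256))                    # stand-in for "a more complicated scene"
psf = airy_psf()
blurred = fftconvolve(scene, psf, mode="same")    # simulate the diffraction blur
restored = richardson_lucy(blurred, psf)          # deconvolve with the known Airy kernel
```

As noted above, if the blurred capture is noisy the same deconvolution amplifies that noise, so a cleaner (better-exposed) input directly helps.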
In post-processing, I see that RawTherapee attempts to provide a tool to deal with this problem, and my attempts to use it leave me uncertain whether it is helping or not. I haven't seen other software such as Photoshop even try.

It will be nice when the day comes that this can be addressed. But that day is not here yet.

Sort of the same story with CA. There are some attempts to minimize its effects, but CA is a destructive problem, especially with only three wide color bands; what you are correcting as a middle green may be from red light. Basically, what you get with CA through a Bayer CFA is three blurs, which combine into an even greater blur, and all you can really do with CA correction is get the three bands to register with each other. Each is still blurred internally.
If CA is slight, it can be hidden. If it is severe, the hiding techniques hardly hide the problem.
There are more intelligent methods, perhaps, than simply resampling two of the color channels for better luminance registration, but they will still have to operate under assumptions about true wavelength - any color-filtered pixel can capture a wide range of wavelengths.
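
To make the "get the three bands to register" point concrete, here is a minimal sketch of the simple resampling approach being described, assuming numpy and scipy; the scale factors are made-up placeholders, whereas real correctors estimate them per lens, focal length and aperture:

```python
# Lateral CA treated as a per-channel magnification mismatch: rescale red and blue
# about the image centre so they register with green. This only re-registers the
# bands; as noted above, each channel's internal blur remains.
import numpy as np
from scipy import ndimage

def rescale_channel(channel, scale):
    """Scale a 2-D channel about its centre by 'scale' (a value close to 1.0)."""
    h, w = channel.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    yy, xx = np.indices(channel.shape, dtype=float)
    coords = np.array([(yy - cy) / scale + cy, (xx - cx) / scale + cx])
    return ndimage.map_coordinates(channel, coords, order=1, mode="nearest")

def correct_lateral_ca(rgb, red_scale=1.001, blue_scale=0.999):
    """rgb: (H, W, 3) array; the scale factors here are illustrative guesses."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return np.dstack([rescale_channel(r, red_scale), g, rescale_channel(b, blue_scale)])
```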

If sensors captured more bands of light, with narrower spectral response, then intelligent methods that break down the image into assumed wavelengths would cause less error than with 3 bands, and could become significantly better than simply resampling RAW or RGB color bands.
 
Canon’s DPP can correct diffraction in dual pixel raw files.
No, it can't.
It's a marketing claim. Diffraction is not correctable.
 
The OP never mentioned what lenses he was testing; it was simply a data point.
The post to which your statement here is a reply said "I have done some extensive testing with my lenses and they all appear to loose sharpness at 5.6." It's not the particular lenses that matter; it's the fact that he specifically asked about lenses that lose sharpness at f/5.6.
He said he did some testing, the word extensive was never mentioned,
As I copied his exact words, which I have now underlined, I think it's very clear that the word extensive was used.
but "I am trying to grasp my head around certain things" was clearly mentioned at the top of the post. Different people will interpret his comments different ways, I interpreted as he did not appear to be a master of the subject, but in retrospect, the OP may have more experience with this type of testing than I thought.

BTW, this is the Open Talk forum, not PS&T, so expecting everyone to be an expert and talk in expert terms doesn't seem realistic.
Who has suggested that they should? I've not suggested anyone should talk in expert terms; only that people shouldn't give misleading advice.
You seem to like speaking to people who know less than you in condescending ways; maybe it's accidental, maybe intentional, but either way it's annoying.
I talk directly; if correcting obvious errors comes over as condescending so be it.
 
Canon’s DPP can correct diffraction in dual pixel raw files.
No, it can't.

It's a marketing claim. Diffraction is not correctable.
Could be. I haven’t seen any tests or articles on it, but it’s in the manual and has an adjustment in DPP.

Since there are two “sensors” (for lack of a better word) for each pixel, and each is a different size, it might see the diffraction differently on each and have the information to correct it that way.

I’ll ask Canon and maybe test it some day.

You would think that DPR would want to debunk the claim, if it wasn’t true.
You really believe that?
 
The OP never mentioned what lenses he was testing; it was simply a data point.
The post to which your statement here is a reply said "I have done some extensive testing with my lenses and they all appear to loose sharpness at 5.6." It's not the particular lenses that matter; it's the fact that he specifically asked about lenses that lose sharpness at f/5.6.
He said he did some testing, the word extensive was never mentioned,
As I copied his exact words, which I have now underlined, I think it's very clear that the word extensive was used.
from the original post:

"I did some testing with an A7iii and an A7R iii with the same lens of a lens testing chart".

He made a different claim in another post that included the word extensive, and I forgot about that one, so we're referring to two different posts; fair enough.

I came into this discussion only partially informed and did make some incorrect assumptions and statements; I'll admit to that. In the course of this thread I've learned a bit more, so even though I made a few missteps along the way, in the end I gained something from participating here.
 
I am trying to wrap my head around certain things. One of them is diffraction: according to Cambridge in Colour, it depends on the pixel pitch of the sensor, the aperture of the lens and, of course, the final size and viewing distance of the print. Neglecting the print size and the viewing distance and focusing just on aperture and pixel pitch: will increased megapixel counts lead to hitting the diffraction limit sooner?

So if you were to jam 60 or 70 megapixels into a full-frame sensor, wouldn't you reach the diffraction limit very soon? I think it was some manager from Fujifilm who said that FF's limit would be around 60 MP to 100 MP. Was that what he was referring to?

I did some testing with an A7 III and an A7R III with the same lens on a lens test chart at various apertures (f/5.6 to f/11); f/5.6 is where diffraction starts to kick in. When I downsampled the A7R III image (no sharpening on either), I still found the A7R III to be sharper at all apertures.

Why is that so? Is the gain in resolution outweighing the diffraction? Or is the jump from 24 MP to 42 MP just not that big?

Or should I have tested at smaller apertures? Maybe then I would have seen the difference (I rarely use f/16, so I didn't feel I should test it).

What do you think? Will 70 MP FF cameras even be useful?
Also factor in the lens aperture. With some primes, lenses with very large maximum apertures like f/1.2 and f/1.4, diffraction can start at f/9 or f/11.

Some camera companies that have specific firmware for each lens have in-camera diffraction compensation features that work with the camera to compensate not only for distortion and vignetting but also for diffraction.
I don't think we have in-camera lens diffraction correction. This would be a problem that would be very difficult to solve mathematically.
The FZ-1000 (5+ years old) did not have (in-camera) diffraction correction, BUT ... the FZ-2000/2500 DID.

I am not sure how effective it is, but I did see it as an "on/off" option in the menu.
 
Canon’s DPP can correct diffraction in dual pixel raw files.
No, it can't.

It's a marketing claim. Diffraction is not correctable.
Could be. I haven’t seen any tests or articles on it, but it’s in the manual and has an adjustment in DPP.

Since there are two “sensors” (for lack of a better word) for each pixel, and each is a different size, it might see the diffraction differently on each and have the information to correct it that way.

I’ll ask Canon and maybe test it some day.

You would think that DPR would want to debunk the claim, if it wasn’t true.
You really believe that?
I look at it like mixing something up, in this case pixel information, and not being able to get it back to an organized state. If you had a fully set-up Monopoly game board and, without looking at it beforehand, wiped your hand across it and just swirled all the pieces up, is there a way to know how it was organized beforehand?

I don't think so.
 
Canon’s DPP can correct diffraction in dual pixel raw files.
... Diffraction is not correctable.
Mathematically, diffraction can be an invertible transformation of the signal, or closely approximated by an invertible transformation, so it is conceivable that with a distinctly oversampled signal (photosites far smaller than the Airy disk diameter) the diffraction effect can be approximately undone in post-processing by some version of sharpening.

I am not sure of the practicalities though.
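
For what it is worth, the standard linear-systems statement behind that claim (textbook notation, not from the post above): the recorded image is the true image convolved with the diffraction point spread function h, so in the Fourier domain the blur is a per-frequency multiplication that can, in principle, be divided out wherever the transfer function is nonzero:

```latex
i(x,y) = (o * h)(x,y)
\quad\Longleftrightarrow\quad
\hat{i}(u,v) = \hat{o}(u,v)\,\hat{h}(u,v),
\qquad
\hat{o}(u,v) \approx \frac{\hat{i}(u,v)}{\hat{h}(u,v)}
\quad \text{wherever } \hat{h}(u,v) \neq 0.
```

The practical catch, raised in the replies below, is that for diffraction \hat{h} falls to zero beyond a cutoff frequency and is very small well before that, so the division can only ever be approximate and partial.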
 
Canon’s DPP can correct diffraction in dual pixel raw files.
... Diffraction is not correctable.
Mathematically, diffraction can be an invertible transformation of the signal, or closely approximated by an invertible transformation, so it is conceivable that with a distinctly oversampled signal (photosites far smaller than the Airy disk diameter) the diffraction effect can be approximately undone in post-processing by some version of sharpening.

I am not sure of the practicalities though.
Diffraction limits resolution. That's not undoable.

I think what Canon is doing is just applying some deconvolution sharpening to reduce the blur, but that doesn't improve resolving power.
 
Right: under any given viewing conditions (displayed image size/viewing distance) a higher-resolution (higher pixel count) sensor at the same f-stop will give a somewhat sharper image, not a worse one. That is, "line pairs per picture height" improves at least somewhat. Maybe the image is not as sharp as with the same sensor used at a lower f-stop, but it is never worse for overall resolution to increase sensor resolution at a given f-stop.

At worst, the resolution gains from increasing pixel count become less and less once pixel size in microns is significantly smaller than the aperture ratio.

Since increasing sensor resolution is easier/cheaper than increasing lens resolution, it makes sense to me to have sensors that out-resolve one's best lenses somewhat, so as to get the best out of all one's lenses.
 
Can you quote the mathematics to back that claim up? I am a professional mathematician but not an expert in this field, so I would be interested to read hard mathematics on this, not just the formula for the diameter of the Airy disk. Limits that apply with film become different with digital signal processing possibilities.

There are similar processes that smear information around (rather than totally losing it) in a way that is mathematically reversible. A related example is diffusion (modeled by the heat equation), which smears an image but is in some situations reversible (solving the backward heat equation, a tool in image processing).
 
Canon’s DPP can correct diffraction in dual pixel raw files.
... Diffraction is not correctable.
Mathematically, diffraction can be an invertible transformation of the signal, or closely approximated by an invertible transformation, so it is conceivable that with a distinctly oversampled signal (photosites far smaller than the Airy disk diameter) the diffraction effect can be approximately undone in post-processing by some version of sharpening.
It cannot be reversed, for two reasons. First, there is a frequency cutoff (to a good approximation), and higher frequencies are lost. This is relevant only if you have an image with enough pixels to fit those frequencies (or, with a fixed pixel count, when the aperture is closed down too much). Second, well before the high frequencies are cut, they are attenuated and therefore corrupted by noise, discretization, etc., so the practical frequency limit is even lower.

One could do some partial recovery though. Knowing the profile of the diffraction kernel allows for better deconvolution, up to some limit, of course.
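
A minimal sketch of that kind of partial recovery, regularized so that the attenuated, noise-dominated frequencies are damped rather than divided out; this assumes numpy only, and the constant k is an illustrative stand-in for a noise-to-signal estimate:

```python
import numpy as np

def wiener_deconvolve(blurred, psf_padded, k=1e-2):
    """Frequency-domain deconvolution with a known blur kernel.

    psf_padded: the PSF padded to the image shape, with its peak at index (0, 0).
    k: regularization constant; larger values suppress noise but restore less detail.
    """
    H = np.fft.fft2(psf_padded)
    G = np.fft.fft2(blurred)
    W = np.conj(H) / (np.abs(H) ** 2 + k)   # a plain 1/H would blow up near the cutoff
    return np.real(np.fft.ifft2(G * W))
```

Lowering k recovers more of the attenuated detail but amplifies noise, which is the trade-off described above; nothing beyond the hard cutoff comes back regardless.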
 
That is about the "raw" optical image (the formula relates to the Airy disk size that I referred to); it does not rule out subsequent deconvolution image processing to partially undo the effects of diffraction. Bart Van Der Wolf has done some writing and demos of this, described briefly at https://openphotographyforums.com/f...tion-of-diffraction-with-deconvolution.12555/
This is a really bad simulation; it is a well-known "crime" in certain circles. He simulates diffraction with a discrete convolution with a very rough 9x9 block and then inverts it. Real diffraction convolves a continuous image with a continuous kernel, and the result is then sampled, with all the noise and discretization errors that come with that. Even then, you can see that the detail in the blinds on the lower right is mostly lost.
 
That is about the "raw" optical image (the formula relates to the Airy disk size that I referred to); it does not rule out subsequent deconvolution image processing to partially undo the effects of diffraction. Bart Van Der Wolf has done some writing and demos of this, described briefly at https://openphotographyforums.com/f...tion-of-diffraction-with-deconvolution.12555/
This is a really bad simulation; it is a well-known "crime" in certain circles. He simulates diffraction with a discrete convolution with a very rough 9x9 block and then inverts it. Real diffraction convolves a continuous image with a continuous kernel, and the result is then sampled, with all the noise and discretization errors that come with that. Even then, you can see that the detail in the blinds on the lower right is mostly lost.
Maybe, but diffraction deconvolution is an established method in microscopy and the like, with plenty of peer-reviewed sources. I agree that there is a theoretical limit on how much improvement can be made, but it is quite a bit better than is often inferred when the possibility of deconvolution is ignored. IIRC, the actual hard limit on resolution in cycles per mm is 1/(wavelength times F-number), and since you need (at least) two pixels per cycle [Nyquist], that puts a rough limit on "useful" pixel size at about (wavelength times F-number)/2.

For the typical 550 nm wavelength of visible light, that gives a "useful pixel size limit" of about (0.275 times F-number) microns, so for example at f/2, pixels down to 0.55 microns could be "useful"; conversely, the smallest current pixels in 35mm format — 4 microns — are only "hard diffraction limited" at about f/14 with optimal processing (and otherwise optically perfect lenses). Without deconvolution, diffraction effects are instead noticeable by about f-number = (2 times pixel pitch in microns), so f/8.
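
Writing the same arithmetic out explicitly (same formula and numbers as above, nothing new):

```python
# Diffraction cutoff and the "useful pixel size" limit from the post above.
WAVELENGTH_UM = 0.55                          # ~550 nm green light

def cutoff_cycles_per_mm(f_number, wl_um=WAVELENGTH_UM):
    return 1000.0 / (wl_um * f_number)        # hard limit ~ 1 / (lambda * N)

def nyquist_pixel_um(f_number, wl_um=WAVELENGTH_UM):
    return wl_um * f_number / 2.0             # two pixels per cycle at the cutoff

print(cutoff_cycles_per_mm(2))                # ~909 cycles/mm at f/2
print(nyquist_pixel_um(2))                    # ~0.55 um pixels at f/2
print(nyquist_pixel_um(14))                   # ~3.9 um, roughly today's smallest FF pixels
```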
 
It cannot be reversed, for two reasons. First, there is a frequency cutoff (to a good approximation), and higher frequencies are lost. This is relevant only if you have an image with enough pixels to fit those frequencies (or, with a fixed pixel count, when the aperture is closed down too much). Second, well before the high frequencies are cut, they are attenuated and therefore corrupted by noise, discretization, etc., so the practical frequency limit is even lower.

One could do some partial recovery though. Knowing the profile of the diffraction kernel allows for better deconvolution, up to some limit, of course.
Agreed on all the practical (and theoretical) limitations; my point is only that there is _some_ room for recovery of resolution in post-processing — which I believe is practiced in microscopy, and in the recent famous "black hole" photograph, where the Airy disk is about as big as the black disk in the middle of the image.
 
