Sensor Size, Diffraction, and Resolvable Megapixels

Thread starter: Sqrt2 (Guest)
A question that crossed my mind was "can smartphones like the S24U actually resolve their 200MP sensor?" and led me down a path of investigating diffraction and resolvable megapixels.

A Diffraction-Limited System™ is any optical device - for our purposes, a camera - whose lens is sharp enough, and whose sensor's pixel count high enough, that the only limiting factor for resolution is diffraction - the bending and spreading of light as it passes through an opening - an aperture.

The resolvable detail of such a device is determined by the size of the Airy Disc - the diffraction pattern - on each individual photosite/pixel. If the diameter of the Airy Disc is larger than the pixel pitch, the light meant for each pixel will spill into its neighboring pixels, softening detail and reducing resolution.

An Airy Disc, the product of the diffraction of light through an aperture.

Therefore, the smallest pixel pitch that can be resolved is that of the Airy Disc's diameter.

According to Wikipedia, the diameter of the Airy Disc is given by:

d/2 = 1.22 * L * N

or

d = 2.44 * L * N

where,

d is the Airy Disc's diameter,

L is the wavelength of light, and

N is the F number of the lens.

As for the wavelength, we are dealing with visible light, whose wavelength varies between 0.38 and 0.75 µm; as an approximation we can use a value near the middle of the range, green light at roughly 0.5 µm.

This gives us an Airy Disc diameter, and therefore a minimum pixel pitch, of

d = 1.22 * F

in µm, where F is the F number.
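As a quick sketch in Python (using the same approximations as above; nothing here is exact):

    # Airy Disc diameter in micrometres: d = 2.44 * wavelength * F number
    def airy_disc_diameter_um(wavelength_um=0.5, f_number=1.8):
        return 2.44 * wavelength_um * f_number

    print(airy_disc_diameter_um())  # ~2.2 um, i.e. 1.22 * F for 0.5 um light at f/1.8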

Next, to calculate the number of megapixels resolvable, given the sensor area A and the pixel pitch D, we can simply divide the sensor area by the area of each pixel (which is the pixel pitch squared):

MP = A / (D^2)

Notice the pixel pitch is measured in µm, but the sensor area is measured in mm^2. We would need to account for a conversion factor of 10^6 here, if not for the fact that a megapixel is 10^6 pixels, so they cancel each other out. Neat.

We can substitute the first equation in the second, and get the maximum resolvable MP for a given sensor area and F number:

MP = A / (1.22*F)^2

where A is the sensor area in mm^2 and F is the F number of the lens.
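Putting it all together as a minimal Python sketch (the 0.5 µm wavelength is the same approximation as above):

    def resolvable_mp(area_mm2, f_number, wavelength_um=0.5):
        # Smallest useful pixel pitch = Airy Disc diameter, in um.
        pitch_um = 2.44 * wavelength_um * f_number
        # mm^2 -> um^2 is a factor of 10^6, which cancels against 10^6 pixels per MP.
        return area_mm2 / pitch_um ** 2

    print(resolvable_mp(864, 1.8))  # ~179 MP for full frame at f/1.8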

As for the sensor area, first let's start with a few sensor sizes used in ILCs:

Full Frame = 864 mm^2

APS-C = 370 mm^2 (or 330 mm^2 for Canon)

MFT = 225 mm^2

And here are some sensor sizes common in smartphones today:

1-Type = 116 mm^2, common in high-end compacts from Canon, Sony, and Panasonic, and appearing in the latest flagship smartphones from Sony, Vivo, and Xiaomi.

1/1.3 = 69 mm^2, as seen in the Samsung Galaxy S24 Ultra.

1/1.5 = 49 mm^2, as seen in the Samsung Galaxy S24+ and the Asus Zenfone 10.

1/2.3 = 28.5 mm^2, as seen in many superzoom compacts.

Now, as for the aperture: according to GSMArena, the Zenfone 10's main camera is f/1.9 and the S24 Ultra's is f/1.7, so we can approximate most smartphone main shooters' apertures as f/1.8, which lets us calculate the maximum number of resolvable megapixels:

MP = A / 4.82

For each sensor size I listed above, the values are:

Full Frame = 180MP

APS-C = 77MP (68MP for Canon)

MFT = 47MP

1-Type = 24MP

1/1.3 = 14MP

1/1.5 = 10MP

1/2.3 = 6MP
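For anyone who wants to check my arithmetic, this list drops straight out of the resolvable_mp() sketch from earlier:

    sensors_mm2 = {
        "Full Frame": 864, "APS-C": 370, "APS-C (Canon)": 330, "MFT": 225,
        "1-Type": 116, "1/1.3": 69, "1/1.5": 49, "1/2.3": 28.5,
    }
    for name, area in sensors_mm2.items():
        print(f"{name}: {resolvable_mp(area, 1.8):.0f} MP")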

From this we can see that while ILCs can easily resolve over 40MP, a smartphone's sensor would struggle to resolve even 24MP, let alone 50MP or 200MP.

No wonder even the best smartphone cameras are very bad at resolving fine detail, as seen here.

If so, why do smartphone manufacturers insist on cramming more megapixels than they can possibly resolve? Just for marketing? For computational photography of some sort?

As a bonus, let's do the same calculations for the ultra-wide and the 5x telephoto cameras of the S24 Ultra:

UW:

1/2.55-Type, 24.7 mm^2, f/2.2 = 3.43MP

5x telephoto:

1/2.52-Type, 24.7 mm^2, f/3.4 = 1.43MP
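Again with the resolvable_mp() sketch from earlier (areas and apertures as listed above):

    print(f"UW: {resolvable_mp(24.7, 2.2):.2f} MP")       # ~3.43 MP at f/2.2
    print(f"5x tele: {resolvable_mp(24.7, 3.4):.2f} MP")  # ~1.43 MP at f/3.4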

Is it just me or are these really low? That would mean these cameras can barely resolve a 2MP 1080p video frame.

I want to ask, is there somewhere a mistake in my calculations? Please let me know in the replies!
 
Therefore, the smallest pixel pitch that can be resolved is that of the Airy Disc's diameter.
Welcome to the forum sqrt2, good first post.

Resolved means that if two objects are side by side you can see two objects and not just one bigger one. Obviously as they get closer and closer there is a transition from fully resolved to unresolved, so we should pick a threshold for when we consider them resolved.

The Rayleigh criterion for stars considers them resolved when their peaks are a disc radius apart, 1.22λN as you suggest. There are others.

The middle one is at the Rayleigh criterion

Now you have to sample the image properly, which in this case means at least twice from peak to peak, since you want to capture the gap. So pixel pitch < 0.61λN.

To properly sample blue image information with a good lens at wide apertures you need really small pixels.
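For example, a rough check (assuming blue at 0.45 µm and an f/1.8 phone lens):

    # Nyquist sampling of the Rayleigh separation: pitch < 0.61 * wavelength * N
    print(0.61 * 0.45 * 1.8)  # ~0.49 um, below the ~0.6 um pixels of current 200MP phones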

Jack

PS Many large MP phones have large effective pixels made up of many actual pixels.
 
Last edited:
Therefore, the smallest pixel pitch that can be resolved is that of the Airy Disc's diameter.
Welcome to the forum sqrt2, good first post.

Resolved means that if two objects are side by side you can see two objects and not just one bigger one. Obviously as they get closer and closer there is a transition from fully resolved to unresolved, so we should pick a threshold for when we consider them resolved.

The Rayleigh criterion for stars considers them resolved when their peaks are a disc radius apart, 1.22λN as you suggest. There are others.

The middle one is at the Rayleigh criterion

Now you have to sample the image properly, which in this case means at least twice from peak to peak, since you want to capture the gap. So pixel pitch < 0.61λN.

To properly sample blue image information with a good lens at wide apertures you need really small pixels.

Jack

PS Many large MP phones have large effective pixels made up of many actual pixels.
I'm a fan of the Sparrow distance.

For computing diffraction in combination with aberrations and defocusing, I like the diameter of the circle that includes 70% of the energy.

--
 
There's a classical way to look at this for monochromatic sensors:


With some assumptions, this can be extended to Bayer CFA sensors.
 
Different thoughts to this theme:
  • High MP counts are good for 'sell-by-numbers'. An uninformed consumer could think that a 100MP phone is always better than a 50MP phone.
  • With every CFA sensor, each final image pixel is the result of interpolation between at least 4 sensor pixels. Many high-MP sensors use pixel binning, where 4 neighboring pixels sit behind each patch of the CFA -> even more interpolation to get the final image pixel.
  • If the diffusion (blur) of the optical system (and of the optical path between camera and subject) is well known, deconvolution can bring back some of the 'hidden' details. I think this is often used in astronomy.
Conclusion:
  • Advertising a high MP number could increase sales.
  • With careful processing, the final image could have more detail than the diffraction formula suggests.
 
Different thoughts to this theme:
  • High MP counts are good for 'sell-by-numbers'. An uninformed consumer could think that a 100MP phone is always better than a 50MP phone.
  • With every CFA sensor, each final image pixel is the result of interpolation between at least 4 sensor pixels. Many high-MP sensors use pixel binning, where 4 neighboring pixels sit behind each patch of the CFA -> even more interpolation to get the final image pixel.
  • If the diffusion (blur) of the optical system (and of the optical path between camera and subject) is well known, deconvolution can bring back some of the 'hidden' details. I think this is often used in astronomy.
Conclusion:
  • Advertising a high MP number could increase sales.
  • With careful processing, the final image could have more detail than the diffraction formula suggests.
It seems to me that Richardson/Lucy deconvolution is practical for photography as well as for astronomy. It works better if the image is not too noisy. Often blur is close enough to Gaussian for a Gaussian approximation to work even if the blur comes from more than one source. Often the aperture is close enough to circular.

RawTherapee does Richardson/Lucy in capture sharpening. G'MIC can do Richardson/Lucy. It appears to me that Canon's DPP software does Richardson/Lucy or something similar in its "digital lens optimizer".
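For anyone who wants to experiment outside those tools, here is a minimal Python sketch with scikit-image, assuming a circular Gaussian PSF; the sigma and iteration count are illustrative guesses, not calibrated values:

    import numpy as np
    from skimage import data, restoration

    def gaussian_psf(size=9, sigma=1.5):
        # Normalized 2D Gaussian kernel used as an approximate PSF.
        ax = np.arange(size) - size // 2
        xx, yy = np.meshgrid(ax, ax)
        psf = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
        return psf / psf.sum()

    image = data.camera() / 255.0  # sample image scaled to [0, 1]
    # More iterations sharpen more, but error accumulates if the PSF guess is off.
    deblurred = restoration.richardson_lucy(image, gaussian_psf(), num_iter=10)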

I sometimes use a nearly 50 year old lens from a film camera for nostalgia.

Made with nearly 50 year old Minolta MC ROKKOR-X PG 1:1.4 f=50mm, A wild Eastern Redbud tree (Cercis canadensis) was blooming in Norman, Oklahoma, United States on March 25, 2024. Processed with Canon DPP software to set white balance and turn off sharpening and save 16 bit TIFF, GMIC and GraphicsMagic ; /opt/local/bin/gmic IMG_9384c.TIF keep[0] normalize 0,255 -deblur_richardsonlucy[-1] 0.4,8,1 normalize[-1] 0,65535 output[-1] IMG_9384c_RL.tiff,uint16,lzw,0,1 ; /opt/local/bin/gm convert -verbose IMG_9384c_RL.tiff -resize "50%" -unsharp 0x1 -define 'jpeg:dct-method=float,jpeg:optimize-coding=true' -interlace line -quality 97 IMG_9384c_RLc.JPG ; exiftool -tagsfromfile IMG_9384c.JPG IMG_9384c_RL.tiff IMG_9384c_RLc.JPG

Same image. A wild Eastern Redbud tree (Cercis canadensis) was blooming in Norman, Oklahoma, United States on March 25, 2024. Processed with rawtherapee free software using defaults except for reducing radius and iterations in "capture sharpening".

--
John Moyer
 

Deconvolution cannot bring back detail that was never recorded in the first place.
 
Deconvolution cannot bring back detail that was never recorded in the first place.

 
Detail not recorded in the raw data can be restored by using Richardson/Lucy deconvolution. This is done for images from telescopes and microscopes, and can be done for photographs. A point spread function prevents the detail from being recorded.

Since the Airy disk blur is a convolution applied to the data arriving at the sensor chip and being recorded, a deconvolution can remove it if the point spread function can be approximated closely enough. Often Gaussian and circular is close enough.

 
Detail not recorded in the raw data can be restored by using Richardson/Lucy deconvolution. This is done for images from telescopes and microscopes, and can be done for photographs. A point spread function prevents the detail from being recorded.

Since the Airy disk blur is a convolution applied to the data arriving at the sensor chip and being recorded, a deconvolution can remove it if the point spread function can be approximated closely enough. Often Gaussian and circular is close enough.

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3986040/
The phase information is not recorded by the sensor, which limits what can be done with deconvolution.
 
Detail not recorded in the raw data can be restored by using Richardson/Lucy deconvolution. This is done for images from telescopes and microscopes, and can be done for photographs. A point spread function prevents the detail from being recorded.

Since the Airy disk blur is a convolution applied to the data arriving at the sensor chip and being recorded, a deconvolution can remove it if the point spread function can be approximated closely enough. Often Gaussian and circular is close enough.

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3986040/
The phase information is not recorded by the sensor, which limits what can be done with deconvolution.
Do you mean something different by "phase information" than "phase detection" in this explanation?

Thanks in advance.

https://snapshot.canon-asia.com/article/eng/canon-technology-explainer-what-is-dual-pixel-cmos-af

"all pixels on the image sensor can conduct both phase detection and imaging."
 
Detail not recorded in the raw data can be restored by using Richardson/Lucy deconvolution. This is done for images from telescopes and microscopes, and can be done for photographs. A point spread function prevents the detail from being recorded.

Since the Airy disk blur is a convolution applied to the data arriving at the sensor chip and being recorded, a deconvolution can remove it if the point spread function can be approximated closely enough. Often Gaussian and circular is close enough.

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3986040/
Deconvolution "recovers" to some extent detail suppressed by low contrast and noise but does not recover unrecorded one.
 
Detail not recorded in the raw data can be restored by using Richardson/Lucy deconvolution. This is done for images from telescopes and microscopes, and can be done for photographs. A point spread function prevents the detail from being recorded.

Since the Airy disk blur is a convolution applied to the data arriving at the sensor chip and being recorded, a deconvolution can remove it if the point spread function can be approximated closely enough. Often Gaussian and circular is close enough.

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3986040/
Deconvolution "recovers" to some extent detail suppressed by low contrast and noise but does not recover unrecorded one.
Deconvolution recovers detail lost to small aperture diffraction blur to the extent that there is not too much noise and the blur can be approximated as Gaussian and circular.

Even when the central peak of the Airy disk covers several pixels, some detail can sometimes be recovered.
 
Detail not recorded in the raw data can be restored by using Richardson/Lucy deconvolution. This is done for images from telescopes and microscopes, and can be done for photographs. A point spread function prevents the detail from being recorded.

Since the Airy disk blur is a convolution applied to the data arriving at the sensor chip and being recorded, a deconvolution can remove it if the point spread function can be approximated closely enough. Often Gaussian and circular is close enough.

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3986040/
Hi John,

I am a big fan of using low radius RL deconvolution for capture sharpening. In moderation it does a great job of improving the sharpness of good quality images in a credible and physically likely way. However, what JACS says is true: if the information was not recorded in the raw data in the first place, it cannot be extracted back to life.

What RL does is to take that information and rearrange it based on our own guess of what might have blurred it - producing the best likelihood (its own guess) of what might have been.

For instance if we had a diffraction limited lens and used an Airy function as a PSF, wherever RL would find an intensity distribution looking like an Airy function in the image it would replace it with something close to a point. But unless we actually knew that what was captured in the raw data was a distant star, we would just be guessing.

It's a bit of a philosophical argument, along the lines of the tree falling in the forest. Philosophy aside, fortunately it turns out that in practice RL is remarkably good at reconstructing our perception of the scene.

Jack
 
Last edited:
Detail not recorded in the raw data can be restored by using Richardson/Lucy deconvolution. This is done for images from telescopes and microscopes, and can be done for photographs. A point spread function prevents the detail from being recorded.

Since the Airy disk blur is a convolution applied to the data arriving at the sensor chip and being recorded, a deconvolution can remove it if the point spread function can be approximated closely enough. Often Gaussian and circular is close enough.

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3986040/
Deconvolution "recovers" to some extent detail suppressed by low contrast and noise but does not recover unrecorded one.
Deconvolution recovers detail lost to small aperture diffraction blur to the extent that there is not too much noise and the blur can be approximated as Gaussian and circular.
Even when the central peak of the Airy disk covers several pixels, some detail can sometimes be recovered.
By "some derail," you mean the one which was recorded in the first place?

Diffraction has a cutoff frequency. Beyond that, you record nothing, nada, zilch. Good luck recovering that. Then there are some frequencies below the cutoff, but they are so deeply buried in the noise that they are lost. Next, some frequencies have been attenuated by the blur but are still well above the noise. They can be revealed with better clarity to some extent.
 
Last edited:
Detail not recorded in the raw data can be restored by using Richardson/Lucy deconvolution. This is done for images from telescopes and microscopes, and can be done for photographs. A point spread function prevents the detail from being recorded.

Since the Airy disk blur is a convolution applied to the data arriving at the sensor chip and being recorded, a deconvolution can remove it if the point spread function can be approximated closely enough. Often Gaussian and circular is close enough.

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3986040/
The phase information is not recorded by the sensor, which limits what can be done with deconvolution.
Do you mean something different by "phase information" than "phase detection" in this explanation?
Yes. Light, like other waves, has amplitude and phase components. Point spread functions discard the phase information.
Thanks in advance.

https://snapshot.canon-asia.com/article/eng/canon-technology-explainer-what-is-dual-pixel-cmos-af

"all pixels on the image sensor can conduct both phase detection and imaging."
 
Why do so many threads devolve into this type of useless back-and-forth discussion where nitpickers claim they have all the wisdom?
  • It is obvious that information that is not recorded cannot be recovered
  • Diffraction (and other diffusion effects) 'smear' the image
  • The 'smearing' destroys some information, and some information is 'hidden' by contrast loss
  • Some of the degradation from the smearing can be undone with deconvolution, if the 'smearing characteristic' (= PSF) is known
This assumes that the image degradation can be modeled by a convolution. I am not sure, but I think that not all image degradation of an optical system can be modeled by a (finite) convolution.
 
Detail not recorded in the raw data can be restored by using Richardson/Lucy deconvolution. This is done for images from telescopes and microscopes, and can be done for photographs. A point spread function prevents the detail from being recorded.

Since the Airy disk blur is a convolution applied to the data arriving at the sensor chip and being recorded, a deconvolution can remove it if the point spread function can be approximated closely enough. Often Gaussian and circular is close enough.

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3986040/
Deconvolution "recovers" to some extent detail suppressed by low contrast and noise but does not recover unrecorded one.
Deconvolution recovers detail lost to small aperture diffraction blur to the extent that there is not too much noise and the blur can be approximated as Gaussian and circular.
Even when the central peak of the Airy disk covers several pixels, some detail can sometimes be recovered.
By "some derail," you mean the one which was recorded in the first place?

Diffraction has a cutoff frequency. Beyond that, you record nothing, nada, zilch. Good luck recovering that. Then there are some frequencies below the cutoff, but they are so deeply buried in the noise that they are lost. Next, some frequencies have been attenuated by the blur but are still well above the noise. They can be revealed with better clarity to some extent.
It is not possible to collect more detail than the photosite spacing on the sensor chip allows, but diffraction blur can sometimes be undone. Interpolation might be plausible because differing light frequencies are measured separately.

Diffraction gradually spreads the light over more and more pixels as the F Number increases. That spread is a convolution. To the extent that the point spread function is known or can be approximated, that spread can be undone with a deconvolution.

Even at F/32, Richardson/Lucy can improve the image made by my EOS 80D if there is bright enough light. It fails when there is too much noise or motion blur or clipping or accumulating error from too many iterations. Error accumulates because the assumptions about the point spread function are not near enough to correct, else more iterations would always give a better result.

At F/32, it seems to me that the Sigma for Richardson/Lucy should be about 2, but I get better results with a smaller "radius" parameter and I do not understand why. It should be possible to calculate the correct Sigma from the spacing of photo diodes on the sensor chip and the F Number.
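For what it's worth, one common rule of thumb approximates the Airy core as a Gaussian with sigma of about 0.42 * wavelength * F number. Converting that to pixels (a back-of-the-envelope sketch; the 0.42 factor and the 80D's ~3.7 µm pitch are assumptions on my part):

    # Gaussian approximation to the Airy core: sigma ~ 0.42 * wavelength * F number
    sigma_um = 0.42 * 0.55 * 32    # green light at F/32 -> ~7.4 um
    pitch_um = 22.3 * 1000 / 6000  # EOS 80D: 22.3 mm wide, 6000 px across -> ~3.7 um
    print(sigma_um / pitch_um)     # ~2.0 pixels, consistent with the Sigma of about 2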

Sometimes with luck other blur comes close to the Gaussian assumption about the central peak of the Airy disk. Canon claims that their "digital lens optimizer" helps with blur from the low pass filter attached to the sensor chip, but I do not know how that works if they are using Richardson/Lucy unless the blur is accidentally a Gaussian.
 
Detail not recorded in the raw data can be restored by using Richardson/Lucy deconvolution. This is done for images from telescopes and microscopes, and can be done for photographs. A point spread function prevents the detail from being recorded.

Since the Airy disk blur is a convolution applied to the data arriving at the sensor chip and being recorded, a deconvolution can remove it if the point spread function can be approximated closely enough. Often Gaussian and circular is close enough.

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3986040/
The phase information is not recorded by the sensor, which limits what can be done with deconvolution.
Do you mean something different by "phase information" than "phase detection" in this explanation?
Yes. Light, like other waves, has amplitude and phase components. Point spread functions discard the phase information.
Is the phase component discarded by the Fourier transform? Is it recovered during the inverse transform? It seems that I do not understand this.

Thanks again.

Thanks in advance.

https://snapshot.canon-asia.com/article/eng/canon-technology-explainer-what-is-dual-pixel-cmos-af

"all pixels on the image sensor can conduct both phase detection and imaging."
 
