knickerhawk
Veteran Member
The problem with your analysis is that, with the possible exception of some astrophotography use cases, we aren't interested in sampling the photons from just a single point; we are interested in sampling from a multitude of neighboring points. So while the larger pixel has a better chance of capturing more of the photons from a single point source, it also has a better chance of capturing photons from neighboring point sources. Depending on the particular image projected onto the sensor, the result will not be less apparent blur but rather greater image blur, or aliasing of fine detail in the scene.

Can we agree that blur occurs when photons from a single point in the scene do not all arrive at the same pixel?
Can we also agree that no lens is perfect enough to direct all the photons from a point in the scene to a single point on the sensor? (Note that was a point on the sensor, not a pixel on the sensor. A pixel has dimensions; a point does not.)
A lens casts a sharp image when a sufficient portion of the photons arriving from a single point are directed to a single pixel. However, those photons may have been directed to different points within that pixel.
Does it not follow that when the pixels are larger, a lens can have a greater amount of error and still get a large enough portion of the photons from a single point to a single pixel?
If you have any lingering doubts, consider the following comparison of a low-quality lens (the Oly 15mm body cap lens) when tested on 20MP, 16MP and 12MP cameras. As you can see, the bigger pixels of the E-PM1 deliver the worst performance, despite the lens having a "[great] amount of error" and its pixels capturing a larger "portion of the photons from a single point" per your analysis.
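The two competing effects can be put in rough numbers. Here is a minimal sketch (my own illustration, not from the test above) assuming a Gaussian point-spread function and a sinusoidal fine-detail pattern, with hypothetical values for the PSF width and detail period. It computes (a) the fraction of a point source's photons landing inside one pixel, which rises with pixel pitch, and (b) the contrast retained for fine detail after averaging over the pixel aperture, which falls with pixel pitch:

```python
import math

def capture_fraction(pixel_pitch, psf_sigma):
    """Fraction of a point source's photons (Gaussian PSF centred on the
    pixel) that land inside one square pixel of side pixel_pitch.
    The 2-D Gaussian is separable, so square the 1-D result."""
    one_d = math.erf((pixel_pitch / 2.0) / (math.sqrt(2.0) * psf_sigma))
    return one_d ** 2

def aperture_mtf(pixel_pitch, detail_period):
    """Contrast retained for a sinusoidal pattern after box-averaging
    over one pixel width (the pixel-aperture MTF, a sinc function)."""
    x = math.pi * pixel_pitch / detail_period
    return abs(math.sin(x) / x) if x else 1.0

psf_sigma = 2.0   # hypothetical PSF spread, microns
period = 9.0      # hypothetical fine-detail period, microns
for pitch in (3.3, 3.7, 4.3):  # rough m4/3 pitches at 20MP/16MP/12MP, microns
    print(f"{pitch} um pixel: captures {capture_fraction(pitch, psf_sigma):.0%} "
          f"of a point, keeps {aperture_mtf(pitch, period):.0%} fine-detail contrast")
```

The larger pixel does capture a bigger share of any one point's photons, but it pays for it with lower contrast on detail near the pixel pitch, which is exactly the trade-off at issue.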

