Sensor Size, Diffraction, and Resolvable Megapixels

  • Thread starter: Sqrt2
Hi John,

If the information was not recorded in the raw data in the first place, it cannot be brought back to life.
Definitely. A cluster of pixels that could equally well have been produced by a bird or a tree cannot reliably be estimated to be either. What we are talking about in classical signal processing is usually maximising the accuracy of the output according to some criterion within the limits posed by the captured information.

Modern machine learning does seem to have an eerie way of producing "plausible" or "believable" information nearly out of thin air - capturing not only the degradation process of the instrument at hand, but also the "typical" traits of whatever dataset it is trained on. E.g. if you train it only on seagulls, it might produce a seagull in the example above, and that might be perfect for a photographer focusing mainly on seagulls.
For instance, if we had a diffraction-limited lens and used an Airy function as the PSF, wherever RL found an intensity distribution looking like an Airy function in the image, it would replace it with something close to a point. But unless we actually knew that what was captured in the raw data was a distant star, we would just be guessing.
To the degree that RL can be thought of as a (noise-regularised) high-pass filter (boosting high-frequency components and/or re-aligning their "spatial phase fronts" to combat known or knowable low-pass filtering), I think you are describing it a bit unfairly here.
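To make the RL step concrete, here is a minimal sketch of Richardson-Lucy deconvolution with an Airy-pattern PSF using scikit-image; the grid sizes, PSF scale and iteration count are illustrative assumptions, not anything established in this thread.

```python
import numpy as np
from scipy.signal import fftconvolve
from scipy.special import j1                     # Bessel function of the first kind
from skimage.restoration import richardson_lucy

def airy_psf(size=31, scale=4.0):
    """Airy intensity pattern (2*J1(r)/r)^2, normalised to unit sum."""
    ax = np.arange(size) - size // 2
    r = np.hypot(*np.meshgrid(ax, ax)) / scale
    r[size // 2, size // 2] = 1e-12              # avoid 0/0 at the centre
    psf = (2.0 * j1(r) / r) ** 2
    psf[size // 2, size // 2] = 1.0              # limit of (2*J1(r)/r)^2 as r -> 0
    return psf / psf.sum()

psf = airy_psf()
scene = np.full((128, 128), 0.01)                # faint background keeps RL well-defined
scene[64, 64] = 1.0                              # one "distant star"
blurred = fftconvolve(scene, psf, mode="same")   # simulate the diffraction blur
restored = richardson_lucy(blurred, psf, num_iter=50, clip=False)
# RL pulls the Airy-like blob back toward a point; whether the scene really
# was a point source is exactly the ambiguity discussed above.
```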

If the signal is completely unaliased (which never happens in practice) and there is neither noise nor any infinitely deep nulls, then the sensor pixels uniquely define an LTI-filtered scene according to Nyquist. If that filter is known (and, per the limitations above, its inverse does not involve nasty stuff like infinite gain), then it is probably invertible.
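As a sanity check on that claim, here is a minimal 1-D sketch under exactly those assumptions (known LTI blur, no noise, no spectral nulls); the Gaussian-shaped transfer function is my own stand-in, not a diffraction model:

```python
import numpy as np

rng = np.random.default_rng(0)
scene = rng.random(256)                        # toy 1-D "scene" on the sampling grid
freq = np.fft.fftfreq(scene.size)
H = np.exp(-(freq / 0.3) ** 2)                 # smooth low-pass OTF, bounded away from zero
blurred = np.fft.ifft(np.fft.fft(scene) * H).real
recovered = np.fft.ifft(np.fft.fft(blurred) / H).real  # divide by the known OTF
print(np.max(np.abs(recovered - scene)))       # ~1e-15: no noise + no nulls => invertible
```

The moment H develops zeros, or near-zeros combined with noise, that division blows up - which is where the objections below come in.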

Would we then be able to differentiate between an Airy-disc-like feature that was originally a Dirac impulse and one that was already Airy-disc-like in the scene, based only on the pixels and knowledge of the blur process?
 
Since, for a small aperture, we know there will be an Airy disk for each point in the scene being photographed, we do not need to know which feature is more Airy-disk-like: we apply the inverse function to all pixels. The outcome will be undesirable for those pixels that do not accurately represent the scene, because bright noise pixels will be further brightened. It would be helpful to know which pixels are noise rather than an accurate measurement of light from the scene.
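That is exactly what a regularised inverse tries to encode: where the transfer function is small, the measurement is mostly noise and should be attenuated rather than boosted. A minimal sketch, continuing the toy 1-D setup above, with assumed additive Gaussian noise and a hand-picked Wiener regularisation constant:

```python
import numpy as np

rng = np.random.default_rng(1)
scene = rng.random(256)
freq = np.fft.fftfreq(scene.size)
H = np.exp(-(freq / 0.1) ** 2)                        # strong low-pass: tiny gain up high
blurred = np.fft.ifft(np.fft.fft(scene) * H).real
noisy = blurred + rng.normal(0.0, 1e-3, scene.size)   # a little sensor noise
naive = np.fft.ifft(np.fft.fft(noisy) / H).real       # noise / tiny H explodes
wiener = np.fft.ifft(np.fft.fft(noisy) * H / (H**2 + 1e-4)).real  # regularised inverse
print(np.std(naive - scene), np.std(wiener - scene))  # naive error is enormous, Wiener's small
```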
 
If the signal is completely unaliased (which never happens in practice) and there is neither noise nor any infinitely deep nulls, then the sensor pixels uniquely define an LTI-filtered scene according to Nyquist. If that filter is known (and, per the limitations above, its inverse does not involve nasty stuff like infinite gain), then it is probably invertible.
Not if you have to divide by zero. Diffraction is modeled by a blur kernel whose Fourier transform is exactly zero past some cutoff frequency. Everything above the cutoff is lost forever. What you can recover are the frequencies below the cutoff, in an ideal situation (which, by the way, would require infinitely many pixels on a finite sensor). Now, even if you recover them, what are you going to do with them - render the image with a brick-wall low-pass filter? That would create horrible ringing. You would want to apply a filter that attenuates the higher frequencies more and more, but wait - this is what diffraction does as well. What can be achieved is a smooth low-pass filter with a better response below the cutoff than diffraction provides, and that's it.
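For reference, the incoherent OTF of an ideal circular aperture has a closed form; this sketch (with frequency normalised to the cutoff) shows it hitting exactly zero at the cutoff, and why a brick-wall reconstruction filter rings:

```python
import numpy as np

# Incoherent OTF of an ideal circular aperture, frequency in units of the cutoff:
# H(f) = (2/pi) * (arccos(f) - f * sqrt(1 - f^2)) for f <= 1, and 0 beyond.
f = np.linspace(0.0, 1.5, 301)
fn = np.clip(f, 0.0, 1.0)
otf = (2 / np.pi) * (np.arccos(fn) - fn * np.sqrt(1.0 - fn**2))
otf[f >= 1.0] = 0.0                      # exactly zero at and past the cutoff
print(otf[0], otf[f >= 1.0].max())       # 1.0 at DC, 0.0 beyond the cutoff

# Impulse response of a brick-wall low-pass (cutoff 0.125 cycles/sample):
# a sinc whose decaying positive/negative sidelobes are the ringing.
n = np.arange(-20, 21)
h = 0.25 * np.sinc(0.25 * n)
print(np.round(h[25:34], 3))             # n = 5..13: oscillating sidelobes
```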
Would we then be able to differentiate between an Airy-disc-like feature that was originally a Dirac impulse and one that was already Airy-disc-like in the scene, based only on the pixels and knowledge of the blur process?
The FT of a Dirac impulse is identically one, and we cannot tell that from its product with the FT of the diffraction blur: below the cutoff the observed spectrum is just the blur's own FT, and any scene whose spectrum equals one there - whatever it does above the cutoff - produces exactly the same data.
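To see that concretely, here is a toy demonstration with an idealised brick-wall "diffraction" OTF on a discrete grid (my own simplification): a Dirac impulse, and a Dirac plus purely above-cutoff content, blur to bit-for-bit the same observation.

```python
import numpy as np

N = 256
freq = np.fft.fftfreq(N)
H = np.where(np.abs(freq) <= 0.125, 1.0, 0.0)           # toy OTF: zero past the cutoff
blur = lambda x: np.fft.ifft(np.fft.fft(x) * H).real

dirac = np.zeros(N)
dirac[N // 2] = 1.0                                     # the "distant star"
ghost_spec = np.where(np.abs(freq) > 0.125, 1.0, 0.0)   # energy only above the cutoff
scene2 = dirac + np.fft.ifft(ghost_spec).real           # differs from the Dirac almost everywhere

print(np.max(np.abs(blur(dirac) - blur(scene2))))       # ~1e-16: indistinguishable
```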

Diffraction destroys high-frequency information; that is a fact, and there is no need to expect miracles.
 
