Expert Fourier Optics opinion needed

What about also marking the minimum rms spot size?
Rather than do that, which would be limited by the step size in the transfocal calculation, I wrote a routine to search for it using Newton's method. The answer for the aberration set in the last post is 4.1 µm for lambda = 550 nm. Less than I'd guessed.
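In case it's useful, here is a minimal sketch of that kind of search (Python purely for illustration, not the actual routine; rms_spot is a hypothetical stand-in for whatever function returns the rms spot radius at a given defocus dz):

# Newton's method applied to the finite-difference derivative of a
# hypothetical rms_spot(dz) function; dz units are whatever rms_spot expects.
def find_min_rms(rms_spot, dz0, h=1e-4, tol=1e-8, max_iter=50):
    dz = dz0
    for _ in range(max_iter):
        f0 = rms_spot(dz)
        fp = rms_spot(dz + h)
        fm = rms_spot(dz - h)
        d1 = (fp - fm) / (2.0 * h)            # first derivative estimate
        d2 = (fp - 2.0 * f0 + fm) / (h * h)   # second derivative estimate
        if abs(d2) < 1e-30:                   # flat region; give up
            break
        step = d1 / d2
        dz -= step
        if abs(step) < tol:
            break
    return dz, rms_spot(dz)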

The high road would be to compute the rms spot size by weighted combination of all wavelengths, but that seems like gilding the lily.

Update: I tried that, and got the same answer, but it takes a lot longer.

There is something wrong with this calculation in the presence of aberrations. Used with defocus alone and the other aberrations zeroed, the minimum rms spot size agrees with the visual results, but in the presence of aberrations it doesn't. I'm using the centroid to find the location of the center of the spot, and I suspect that's a source of error.

ChatGPT has some alternate suggestions:

1. Encircled Energy Radius (EEr)

Measure the radius within which a fixed percentage (e.g., 80%) of the total PSF energy falls. You can compute this around the geometric center or the intensity peak.
  • Pros: Robust against asymmetry; less sensitive to centroid shifts.
  • Cons: Sensitive to sampling resolution; may need smoothing.
2. Peak Intensity (Strehl Ratio Approximation)

Track the maximum value of the PSF over defocus. At best focus, the energy is most concentrated.
  • Pros: Simple and intuitive; very sensitive to focus.
  • Cons: Sensitive to pixel sampling, noise, and diffraction rings.
3. Modulation-Based Metrics

Compute contrast metrics in the frequency domain—e.g., OTF-based MTF50, or energy in mid-to-high spatial frequencies.
  • Pros: Reflects perceptual sharpness.
  • Cons: Requires consistent PSF size and zero-padding to avoid aliasing.
4. Geometrically-Centered RMS

Instead of computing RMS about the centroid, compute RMS about the geometric center of the PSF array.
  • Pros: Removes centroid shift bias.
  • Cons: May overestimate size for asymmetric PSFs.
5. Second-Moment Matrix Determinant

Define a 2D second-moment matrix of the PSF and take the determinant or trace. This is robust and captures elongation and orientation (see the sketch after this list).
  • Pros: Generalizes RMS to 2D shapes.
  • Cons: Requires matrix computation; less intuitive.
6. Zernike-Based Sharpness Estimator

Compute the Zernike coefficients from the PSF or wavefront and look for a minimum of certain modes (like Z4 – defocus). Or optimize fit residuals.
  • Pros: Ties directly into your simulation model.
  • Cons: Computationally involved; assumes a valid fit.
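For concreteness, here is a rough sketch of items 4 and 5 above (Python/NumPy purely for illustration, not anyone's actual code). It computes the centroid, the 2x2 second-moment matrix, and the rms spot radius; passing the array center as `center` gives the geometrically centered variant of item 4.

import numpy as np

def spot_moments(psf, pixel_pitch=1.0, center=None):
    # Second-moment statistics of a non-negative, background-subtracted PSF.
    # Returns (rms_radius, M), where M is the 2x2 second-moment matrix.
    psf = np.asarray(psf, dtype=float)
    psf = psf / psf.sum()
    ny, nx = psf.shape
    y, x = np.mgrid[0:ny, 0:nx]
    x = x * pixel_pitch
    y = y * pixel_pitch
    if center is None:                 # default: moments about the centroid
        xc = (x * psf).sum()
        yc = (y * psf).sum()
    else:                              # e.g. the geometric center of the array
        xc, yc = center
    dx, dy = x - xc, y - yc
    Mxx = (dx * dx * psf).sum()
    Mxy = (dx * dy * psf).sum()
    Myy = (dy * dy * psf).sum()
    M = np.array([[Mxx, Mxy], [Mxy, Myy]])
    rms_radius = np.sqrt(Mxx + Myy)    # trace of M is the rms radius squared
    return rms_radius, M

The trace gives the usual rms radius; the eigenvalues (or determinant) of M capture elongation and orientation, as in item 5.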
 
What about also marking the minimum rms spot size?
Rather than do that, which would be limited by the step size in the transfocal calculation, I wrote a routine to search for it using Newton's method. The answer for the aberration set in the last post is 4.1 µm for lambda = 550 nm. Less than I'd guessed.

The high road would be to compute the rms spot size by weighted combination of all wavelengths, but that seems like gilding the lily.

Update: I tried that, and got the same answer, but it takes a lot longer.
Yes; because of linearity and superposition, the same works with the MTF: the system result is a weighted average of the results at the discrete wavelengths, in this case (400 + 700)/2 = 550 nm.
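In code form it is just a weighted sum (Python purely for illustration; mtf_at is a hypothetical helper that returns the MTF computed at a single wavelength on a common frequency grid):

import numpy as np

wavelengths = [400e-9, 700e-9]     # metres
weights = np.array([0.5, 0.5])     # equal weights; weighted mean is 550 nm

# mtf_at(lam) is assumed to exist and return the MTF array for wavelength lam
mtf_poly = sum(w * mtf_at(lam) for w, lam in zip(weights, wavelengths))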
There is something wrong with this calculation in the presence of aberrations. Used with defocus alone and the other aberrations zeroed, the minimum rms spot size agrees with the visual results, but in the presence of aberrations, it doesn't.
Assuming paraxial approximations, the focus error is dz = -8 * W020 * N^2, with dz and W020 in the same units; see Wyant and Hopkins in the notes here.
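As a quick numeric sanity check (example values are mine, not from the thread), a quarter-wave of defocus at 550 nm and N = 8 gives roughly 70 µm of focus shift:

lam = 550e-9            # metres
W020 = lam / 4          # a quarter-wave of defocus, metres
N = 8                   # working f-number
dz = -8 * W020 * N**2   # paraxial focus error, metres
print(dz * 1e6)         # about -70.4 micrometres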

I'm using the centroid to find the location of the center of the spot, and I suspect that's a source of error.

ChatGPT has some alternate suggestions:

1. Encircled Energy Radius (EEr)

Measure the radius within which a fixed percentage (e.g., 80%) of the total PSF energy falls. You can compute this around the geometric center or the intensity peak.
  • Pros: Robust against asymmetry; less sensitive to centroid shifts.
  • Cons: Sensitive to sampling resolution; may need smoothing.
2. Peak Intensity (Strehl Ratio Approximation)

Track the maximum value of the PSF over defocus. At best focus, the energy is most concentrated.
  • Pros: Simple and intuitive; very sensitive to focus.
  • Cons: Sensitive to pixel sampling, noise, and diffraction rings.
3. Modulation-Based Metrics

Compute contrast metrics in the frequency domain—e.g., OTF-based MTF50, or energy in mid-to-high spatial frequencies.
  • Pros: Reflects perceptual sharpness.
  • Cons: Requires consistent PSF size and zero-padding to avoid aliasing.
OTF/MTF-based metrics seem more pertinent to photography, imho. MTF50 gives us a first approximation near the sharper settings. To take better stock of some of the messier aberrations I would go with something like the modern-day SQF or CPIQ Acutance, evaluated all the way to extinction, since for now you are dealing with the lens only, I think.

Aliasing should not be an issue as long as 2 << Q < 20 in the MTF calcs (my default is 9).

 
I'm thinking of trying this approach, as described by ChatGPT after some prompting:
Yes, exactly—your instinct is spot on.

Using encircled energy radius (EEr) as a focus metric in the presence of aberrations is most robust when you first find the optimal center—ideally the one that maximizes the energy concentration within a circular aperture. Here's a breakdown of the "high road" method, as you suggest:

Step-by-Step: Optimal-Center EEr Computation
  1. Initial Setup:
    • Let PSF be the 2D point spread function, assumed to be normalized (i.e., sum(PSF(:)) = 1).
    • Choose a set of candidate center positions (xc, yc), possibly subpixel via interpolation or super-resolution if needed.
  2. Fixed Radius Energy Map (Coarse Search):
    • For a moderate fixed radius r0 (e.g., 5–10 pixels), compute the sum of PSF values within a circular mask centered at each (xc, yc) candidate.
    • This can be accelerated with a precomputed circular mask and image shifting (or via convolution with the circular aperture).
  3. Refine the Center:
    • Choose the (xc, yc) that maximizes the enclosed energy for r0.
    • Refine with a local search using Simplex.
  4. Expand/Contract Radius for Target Energy:
    • Starting from r0, grow (or shrink) the radius until the cumulative sum inside the circular aperture equals or just exceeds 80% of the total PSF energy (a rough sketch of this procedure follows below).
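Here is a rough Python/NumPy sketch of steps 1-4 under the stated assumptions (SciPy's fftconvolve for the circular-mask step; the subpixel simplex refinement of step 3 is only noted in a comment; r0 and the 80% fraction are the example values above):

import numpy as np
from scipy.signal import fftconvolve

def encircled_energy_radius(psf, frac=0.80, r0=7, pixel_pitch=1.0):
    # Radius about the best (coarse) center enclosing `frac` of the PSF energy.
    psf = np.asarray(psf, dtype=float)
    psf = psf / psf.sum()
    ny, nx = psf.shape
    y, x = np.mgrid[0:ny, 0:nx]

    # Steps 1-2: enclosed energy within radius r0 of every candidate center,
    # via convolution of the PSF with a circular top-hat mask.
    yy, xx = np.mgrid[-r0:r0 + 1, -r0:r0 + 1]
    mask = (xx**2 + yy**2 <= r0**2).astype(float)
    energy_map = fftconvolve(psf, mask, mode='same')

    # Step 3 (coarse only): center = pixel maximizing the enclosed energy.
    # A subpixel refinement (e.g. Nelder-Mead on an interpolated map) would go here.
    yc, xc = np.unravel_index(np.argmax(energy_map), energy_map.shape)

    # Step 4: grow the radius about that center until `frac` of the energy is enclosed.
    r = np.hypot(x - xc, y - yc).ravel()
    order = np.argsort(r)
    cum = np.cumsum(psf.ravel()[order])
    idx = np.searchsorted(cum, frac)
    return r[order[idx]] * pixel_pitch, (xc, yc)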
Jack, what do you think?
 
Since we are assuming 2D planes, the aberrated PSFs effectively blur the geometrical image.

I guess the objective would be equivalent to finding the 'best focus' relative distance (dz) by extrapolating from the position of the PSFs that blur the image the least. So the question becomes one of defining a metric for least blur. If we want it to reflect what we see, ideally it would be a perceptual metric.

Here is an example of a simple through-focus plot with just some SA3; each vertical intensity slice is the LSF at that position. We could eyeball the outer 'edges' as the intensity in the periphery drops below a certain threshold, some variation on FWHM. Here my choice would not be too different from the geometrical CoC (yellow lines).

https://www.strollswithmydog.com/dof-diffraction-image/

I don't have a feel for how representative values of encircled energy or FWHM are of (perceived) blur.

What do practitioners use? I wonder what OpticsEngineer, Alan Robinson and others think. Authors like Wyant produce values for both peak and rms OPD. And of course MTF-based metrics are a shoo-in for perceptual blur.

Jack
 
It seems we are generally working toward visual quality metrics. Here is a good paper along those lines that has free access.

Metrics of optical quality derived from wave aberrations predict visual performance | JOV | ARVO Journals

There are a few thousand papers in a similar vein. It became an active field of study with the arrival of LASIK and wavefront-guided laser eye surgery. Wavefront aberrations of the eye would be measured with an aberrometer and the corneal ablation pattern adjusted accordingly. Of course, that brought up the debate on what the goal wavefront pattern should be, and maybe whether it should be something slightly away from the ideal for today as opposed to what might be best in a few years as the eye ages. It was also observed that people with a little vertical coma tended to have better eyesight, but then the question was whether that was something to try to replicate, considering it was thought that a large portion of human vision might be the brain having adapted to a particular aberration pattern over many years.

Obviously the eye is not a wavefront sensor, it senses a PSF instead. Just a little something to keep in mind.

There is a certain kind of logical thinking engineers tend to have when approaching the field of vision which is based on thinking the eye is a camera. There are quite a few places where that will lead to incorrect expectations when comparing actual visual performance as measured in vision labs. To learn a different kind of intuition, one has to study how the neurons in the retina are wired together for edge detection and how that wiring goes to the brain and is further processed. But those are kind of advanced topics. Wavefront and PSFs are the place one should start.

Once you know to do web searches on words like wavefront, visual quality, metrics, vision, aberrations, aberrometer, subjective refraction, objective refraction, phoropter, PSFs, etc, you will turn up a lot of papers.

Here is a link to another paper, which shows the PSF convolved with the letter E; it is kind of informative.

Chen_OVS2005.pdf

Thanks for that.
 
I would also add that a couple of optometry professors have told me it is really difficult to get repeatable results on visual metrics because fatigue and boredom set in when doing the repetitive tasks involved. They told me the only people with sufficient motivation to stand up to the torture are graduate students.
 
That is quite a nice set of images.
 

Again a very nice result. It illustrates how useful Siemens Stars are as diagnostic tools for optical designers. Things like astigmatism are easily seen when slightly off best focus. As are many other things.
 
It seems we are generally working toward visual quality metrics. Here is a good paper along those lines that has free access.

Metrics of optical quality derived from wave aberrations predict visual performance | JOV | ARVO Journals
That was a most informative paper because it tested 31 different metrics for their ability to predict legibility; thanks, OE. The input to the metrics spanned the spectrum, from wavefront to PSF to OTF to MTF.

It turns out that intensity threshold and encircled energy did not do too well. The best of the lot, with an r^2 of 0.81, was the Visual Strehl ratio computed in the frequency domain (VSOTF):

Appendix A: https://jov.arvojournals.org/article.aspx?articleid=2121846

This is similar to the principle used in the CPIQ Acutance linked to earlier, which, however, is based on the MTF instead of the OTF (VSMTF). Incidentally, VSMTF was the fourth-best metric out of 31.

VSOTF is easy to get: once you have a PSF, compute the Fourier transform to obtain the OTF and weight it by the neural contrast sensitivity function (CSFn), for which I would use the version in the CPIQ paper, rotated through 360 degrees (right?). Then sum up the 2D result. It seems that it should be a complex number; I would assume they take its modulus?

For your purposes Jim ('sharpest' PSF), the denominator is constant for a given sequence so it does not need to be calculated (it is based on the unaberrated PSF). Just pick or interpolate the highest unnormalized VSOTF.
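In case it helps, here is a rough Python/NumPy sketch of that unnormalized numerator, under the assumptions above; csf_n is a placeholder for the neural CSF (e.g. a radially symmetric version of the CPIQ CSF, in matching frequency units), and the real-part-versus-modulus question is left open in a comment rather than settled:

import numpy as np

def vsotf_unnormalized(psf, pixel_pitch, csf_n):
    # CSF-weighted sum of the OTF (the VSOTF numerator, unnormalized).
    # csf_n(FX, FY) is a hypothetical callable returning the neural contrast
    # sensitivity at spatial frequencies FX, FY in cycles per unit of
    # pixel_pitch; rescale to cycles/degree or cycles/mm as appropriate.
    psf = np.asarray(psf, dtype=float)
    psf = psf / psf.sum()                       # so that OTF(0, 0) = 1
    otf = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(psf)))
    ny, nx = psf.shape
    fx = np.fft.fftshift(np.fft.fftfreq(nx, d=pixel_pitch))
    fy = np.fft.fftshift(np.fft.fftfreq(ny, d=pixel_pitch))
    FX, FY = np.meshgrid(fx, fy)
    w = csf_n(FX, FY)
    s = np.sum(w * otf)                         # complex in general
    # Real part vs. modulus is the open question above; for a real PSF and a
    # symmetric CSF the imaginary part cancels (to numerical precision) anyway.
    return s.real

To rank a through-focus sequence, compute this for each PSF and pick (or interpolate) the maximum; the constant diffraction-limited denominator can be ignored, as noted above.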

Jack

PS The paper is 20 years old. I wonder if there are newer ones that generalize to uses other than reading.
 
"I wonder if there are newer ones that generalize to uses other than reading."

It sounds like you already have the insight that good performance with high contrast tasks like reading does not always translate into good visual performance in low contrast tasks like recognizing faces or driving at dusk.

Not that many years ago, a patient could go to an optometrist complaining of poor vision, get tested looking at high-contrast letters, be able to read a 20/30 line, and the optometrist would not really have much of an idea why the patient was complaining and would suspect psychological issues. Now there is more understanding of wavefront, higher-order aberrations, and the contrast sensitivity function (CSF). Treatment options are limited, but at least things are better understood now.

There are lots of papers out there on these topics, but nothing so nice and neat as the paper with all the different metrics. The last time I spoke with the authors of that paper, a couple of years ago, they said that paper was still about the best effort they knew of, but as you point out, it is for high contrast. As the authors pointed out to me, it is a doable task to quantify good or fair vision and correlate it to objective measures. But quantifying bad vision and correlating it to anything objective is really difficult. There are just so many ways things can be bad that it is hard to pin down anything in a quantitative manner. Most clinicians I have worked with will put some effort into quantifying vision as 20/20, 20/30, or 20/40. But anything lower than that they just write 20/100 or whatever number they always write down for everyone, because it is just not clinically useful to try to quantify bad vision, and it doesn't correspond well to someone's quality of life. As mentioned before, you can test 20/20 and really be unhappy with how you do with low-contrast everyday visual tasks. But someone testing 20/40 might be living a happier life with low-contrast tasks. As clinicians often say, the goal is not 20/20; it is 20/happy.
 
Just to expand on these thoughts a bit. Here is a typical paper with free access

Comparing the Shape of Contrast Sensitivity Functions for Normal and Low Vision - PMC

So high-order aberrations (HOAs) mean a lower contrast sensitivity function (CSF). And we would like to have a nice plot showing that more HOAs means less CSF.

But HOAs are kind of a jumble of everything: comas, trefoils, quatrefoils, and all kinds of shapes. To try to summarize those, one might do a root-sum-square summation. But then there are literally an infinite number of PSF shapes that can come from wavefronts with the same RMS value of HOAs. Two bright blobs, a blob with a tail attached. Three closely spaced blobs. A bright blob and a dim blob. A blob with a halo on one side. The things encountered clinically just go on and on. It is very hard to come up with a metric that relates to how the retina and brain respond to such a variety of possible shapes. Obviously, one needs to be thinking of focal-plane metrics, but no one has come up with one that really works.

A PSF with a bright core but a lot of junk around it will let someone make out a letter E on a Snellen chart and get a decent score like 20/30, even though it looks crummy and nothing like what a normal-sighted person sees. As soon as a low-contrast task comes along, the person really struggles.

So that is kind of the root of the difficulties. If someone just has some astigmatism and defocus, the variety of PSFs the retina might have to deal with is pretty limited, and we can get good correlations of visual performance to objective measures. So fair-to-good vision is amenable to being quantified, but bad vision is just really difficult to study.

It is estimated that about 5% of people have high-order aberrations to a degree that impacts quality of life. Estimates vary of course, from 2% to 10%, with the higher estimates tending to come from people trying to promote some kinds of business plans and lower estimates coming from people actually treating patients. Some people find relief with eye drops that shrink the size of the pupil, if the problem areas are toward the outer edges of the cornea.

When wavefront instruments first came into clinics, doctors typically tried them on problem patients. A wavefront would be measured and a PSF calculated. The doctor would show the patient the calculated PSF and the patient would exclaim "that is exactly what I have been seeing." It could be very emotionally satisfying to people who had been told their whole lives they had some kind of psychological problem when someone else could really verify it was all just optics.
 
Interesting, thanks OpticsEngineer, I guess for now Visual Strehl in the frequency domain (VSOTF) or similar it is.

Jack
 
If you were so inclined, you and Jim have the skill set you could advance the science in this area. Generally optical scientists have a lot less knowledge about mathematics than they think they do. Things learned by undergraduates in electrical engineering or mathematics aren't part of the knowledge base that most people with optics training know about, like discrete mathematical theory, digital signal processing and Z-transforms. So approaches you would think have been pursued by now really haven't been.

From anatomical dissections and staining, there is a lot known about how the neurons in the retina are wired together. It is not like a camera at all, but has multiple layers of connections going across, deeper and deeper behind the cones and rods. I was just doing a quick check through Wikipedia, but none of it appears there. One would need to get into the medical literature. But if one were to treat the retina in a way that reflected its wiring and applied discrete math to it, I have some hope one could predict contrast sensitivity functions for aberrated PSFs. As it is, with well-formed PSFs, what we know about retinal wiring very neatly explains the CSFs seen with good vision.

Subjective testing for CSF is very tedious by the way. Whereas high contrast testing gets tedious after a while, CSF testing gets unbearable right away and so it is not clinically of much use. Again, okay for graduate students though.
 
An issue with many of the color psychology experiments is that they are finicky to set up and expensive to run with a sufficient number of subjects. Note the flaws in the 1931 CIE Standard Observer, for example. And that's a simple test compared to those involving adaptation and spatial effects.
 
