CoC Management and the Object Field.

But if you *did* have an NSA spy camera in Earth orbit, Merklinger's method would allow you to resolve the numbers on a car's license plate far below.

...when using a 50 mm lens at f/5, I know that everything in the scene 10 mm and larger will be resolved.
No way. With your lens, I'll give you 6 m resolution at 100 km--and that would be a very low earth orbit. I've seen those spy satellite lenses, and they are huge.
 
Is it safe to assume the answers differ by aperture and by lens?
I would expect that there's some generalized relationship, but I just wanted to check the rule of thumb. My sim program uses the exact thin lens numbers, so at this point any discrepancy from the approximate answer is not affecting me.
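For anyone who wants to check the rule of thumb against exact numbers, here is a toy comparison (my own illustration, not the sim): Merklinger's object-field disk of confusion versus the exact thin-lens blur circle projected back to the subject, for a 50 mm lens at f/5 focused at 10 m with the detail of interest at 20 m. The distances are arbitrary choices for the example.

```python
f_mm = 50.0          # focal length
N    = 5.0           # f-number
S_mm = 10_000.0      # focused distance
D_mm = 20_000.0      # distance of the detail being judged

d_mm = f_mm / N                              # aperture diameter: Merklinger's 10 mm at f/5

doc_rule = d_mm * abs(D_mm - S_mm) / S_mm    # Merklinger's object-field approximation

v_S = f_mm * S_mm / (S_mm - f_mm)            # image distance of the focused plane
v_D = f_mm * D_mm / (D_mm - f_mm)            # image distance of the detail at D
blur_sensor = d_mm * abs(v_S - v_D) / v_D    # exact thin-lens blur circle on the sensor
doc_exact = blur_sensor * D_mm / v_D         # back-projected through the magnification at D

print(doc_rule, doc_exact)                   # ~10.0 mm vs ~10.03 mm for this geometry
```

With focus at infinity the object-field disk simply equals the aperture diameter at every distance, which is where the 10 mm-at-f/5 figure comes from; the rule of thumb only drifts noticeably when the distances get close to the focal length.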

Jim
 
1. Given two CoCs of an object, c caused by being out of focus, and d caused by diffraction, what is a simple, decent formula for their convolution?
In my modeling, I use different kernels for OOF and diffraction. One is an Airy solid, and the other is a pillbox. I compute the combined effect by successive convolutions.
Do you like max(c,d), c+d, sqrt(c*c+d*d), or something else? If needed: Yes, all colors of light, natural scene, plenty of green.
Explained above. The diffraction kernels are different for each color plane, and the OOF kernels are the same.
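To make that kernel setup concrete, here is a minimal sketch in the same spirit (my own construction for illustration, not the actual sim code): one pillbox shared by all channels for defocus, and a per-channel Airy pattern for diffraction, combined by successive convolution. The pitch, kernel size, CoC diameter, and representative wavelengths are all placeholder choices.

```python
import numpy as np
from scipy.signal import fftconvolve
from scipy.special import j1

def pillbox(diameter_um, pitch_um, size=101):
    """Uniform disk (geometric defocus CoC), normalized to unit sum."""
    r = (np.arange(size) - size // 2) * pitch_um
    X, Y = np.meshgrid(r, r)
    k = (np.hypot(X, Y) <= diameter_um / 2).astype(float)
    return k / k.sum()

def airy(f_number, wavelength_um, pitch_um, size=101):
    """Airy diffraction pattern for a circular aperture, normalized to unit sum."""
    r = (np.arange(size) - size // 2) * pitch_um
    X, Y = np.meshgrid(r, r)
    x = np.pi * np.hypot(X, Y) / (wavelength_um * f_number)
    with np.errstate(divide='ignore', invalid='ignore'):
        k = np.where(x == 0, 1.0, (2 * j1(x) / x) ** 2)
    return k / k.sum()

pitch = 0.15                        # fine sub-pixel grid, microns
oof   = pillbox(10.0, pitch)        # the same defocus kernel for every channel
psf_per_channel = {
    'R': fftconvolve(oof, airy(5.6, 0.62, pitch), mode='same'),
    'G': fftconvolve(oof, airy(5.6, 0.53, pitch), mode='same'),
    'B': fftconvolve(oof, airy(5.6, 0.47, pitch), mode='same'),
}
```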
2. If you are using a good deconvolution program, say of a detailed grand landscape, how does it change your answer?
Not at all. That comes later.
 
I have to say, the traditional method works fine for me, except for one little problem: they don't put depth of field scales on lenses any more. Consequently, I'm reduced to guessing.
There are plenty of DOF calculator apps out there.
That's funny. My camera doesn't have one of those.
You're satisfied with the traditional method. What CoC do you use? The standard 30 um for a FF camera? 1 pixel? Or something in between?
I don't any more. I just guess. I used to use whatever was on my lens, or I would sometimes cut it a little tighter.

Oh, wait. I get it. Since I don't carry a computer around my neck, I'm supposed to use my smart phone, right? Well, I don't have a smart phone, and if I did, I wouldn't use it in the mountains.

And in any case, I suppose a DOF calculator is a heck of a lot slower than using my lens. If you were lucky enough to have the right model of Rolleiflex, all you had to do was glance at the lens barrel. A single lens reflex was slightly harder but still pretty good. But punching a bunch of numbers slowly into a sunlit screen, and hoping to be able to read the answer, is just nuts. This is just one example of how camera designs have regressed in some areas.
I agree. Oh for the days when lenses looked like this:

[photos: classic lens barrels with engraved depth-of-field scales]

Jim

--
http://blog.kasson.com
How to do high-quality CoC management without object-field methods is a big question for me. I am referring especially to lenses without any DOF scales--i.e., most lenses today.
So the question is for anyone who handles CoC well in the field--for example, for landscapes intended to print large, with a lens that has no DOF markings to help.

I've read methods for lenses that do have DOF markings. Now I want to hear how to do CoC management on the rest of the lenses.
That's not a question I can help with, other than my earlier suggestion about CoC apps. The lenses that I use for CoC work have DOF markings.



However, as I said below, the more important thing is sharpness as a function of f-stop and distance, and CoC only helps with part of that.
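For reference, the near/far limits that those CoC calculator apps evaluate come down to a few lines of arithmetic; a generic thin-lens sketch (you still have to supply your own CoC value c):

```python
def dof_limits(f_mm, n, coc_mm, s_mm):
    """Thin-lens near/far DOF limits (a generic sketch of what a CoC app computes).
    f_mm: focal length, n: f-number, coc_mm: chosen circle of confusion,
    s_mm: focused distance. Returns (near_mm, far_mm), far_mm=None meaning infinity."""
    k = n * coc_mm * (s_mm - f_mm)
    near = s_mm * f_mm**2 / (f_mm**2 + k)
    far = s_mm * f_mm**2 / (f_mm**2 - k) if f_mm**2 > k else None  # None: beyond hyperfocal
    return near, far

# 50 mm at f/8 focused at 3 m with the traditional 0.030 mm full-frame CoC:
print(dof_limits(50, 8, 0.030, 3000))   # roughly (2340 mm, 4190 mm)
```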

Jim

--
 
1. Given two CoCs of an object, c caused by being out of focus, and d caused by diffraction, what is a simple, decent formula for their convolution?
In my modeling, I use different kernels for OOF and diffraction. One is an Airy solid, and the other is a pillbox. I compute the combined effect by successive convolutions.
Oh, I think I see what you mean. Have a look here:

http://www.cs.uu.nl/docs/vakken/ibv/reader/chapter5.pdf

Page down to 5.1.6

what you're talking about is this:

Distributive law: (g + h) ∗ f = g ∗ f + h ∗ f.

Right?

jim
 
But if you *did* have an NSA spy camera in Earth orbit, Merklinger's method would allow you to resolve the numbers on a car's license plate far below.

...when using a 50 mm lens at f/5, I know that everything in the scene 10 mm and larger will be resolved.
No way. With your lens, I'll give you 6 m resolution at 100 km--and that would be a very low earth orbit. I've seen those spy satellite lenses, and they are huge.
I didn't imply using a 50 mm lens in orbit. The Merklinger method makes some approximations and says nothing of lens quality. But it would work, I think, for a giant spy satellite in orbit, as the sharp region of interest is very distant relative to focal length.
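For what it's worth, the "6 m at 100 km" figure is just the Rayleigh diffraction limit for a 10 mm aperture; a quick check (rough arithmetic, green light assumed):

```python
import math

aperture_m = 0.010          # 10 mm: a 50 mm lens at f/5
wavelength = 550e-9         # green light (assumed)
distance_m = 100e3          # 100 km, a very low orbit

theta = 1.22 * wavelength / aperture_m      # Rayleigh criterion, radians
print(theta * distance_m)                   # ~6.7 m on the ground
```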
 
1. Given two CoCs of an object, c caused by being out of focus, and d caused by diffraction, what is a simple, decent formula for their convolution?
In my modeling, I use different kernels for OOF and diffraction. One is an Airy solid, and the other is a pillbox. I compute the combined effect by successive convolutions.
Oh, I think I see what you mean. Have a look here:

http://www.cs.uu.nl/docs/vakken/ibv/reader/chapter5.pdf

Page down to 5.1.6

what you're talking about is this:

Distributive law: (g + h) ∗ f = g ∗ f + h ∗ f.

Right?
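In case it helps, the identity is easy to check numerically; a toy example:

```python
import numpy as np

rng = np.random.default_rng(0)
g, h, f = rng.random(32), rng.random(32), rng.random(9)

lhs = np.convolve(g + h, f)
rhs = np.convolve(g, f) + np.convolve(h, f)
print(np.allclose(lhs, rhs))    # True: convolution distributes over addition
```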

jim
 
I am currently modeling blur due to misfocusing with a single iteration of an appropriately sized pillbox kernel filter, at a pitch 32x finer than the sensor pitch. I'm modeling lens aberration with another kernel applied successively, and diffraction with three kernels (for three different wavelengths), also applied successively. After all that comes the AA filter calculation, then the fill-factor-driven sampling.

Anybody see a problem with that?
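For readers following along, the chain described above looks roughly like this in outline (a heavily simplified sketch of the steps as stated, not the actual sim; kernel construction and the AA model are placeholders):

```python
import numpy as np
from scipy.signal import fftconvolve

def simulate(scene, defocus_k, aberration_k, diffraction_rgb, aa_k, oversample=32):
    """Successive convolutions on a grid `oversample`x finer than the sensor pitch,
    then box-average sampling down to sensor resolution (100% fill factor shown;
    a smaller fill factor would average over a sub-region of each photosite).
    All kernels should sum to 1."""
    channels = []
    for ch, diff_k in enumerate(diffraction_rgb):          # one diffraction kernel per color
        img = fftconvolve(scene[..., ch], defocus_k, mode='same')   # defocus pillbox
        img = fftconvolve(img, aberration_k, mode='same')           # aberration heuristic
        img = fftconvolve(img, diff_k, mode='same')                 # wavelength-dependent diffraction
        img = fftconvolve(img, aa_k, mode='same')                   # AA filter
        h, w = img.shape[0] // oversample, img.shape[1] // oversample
        img = img[:h * oversample, :w * oversample]
        img = img.reshape(h, oversample, w, oversample).mean(axis=(1, 3))
        channels.append(img)
    return np.stack(channels, axis=-1)
```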
What lens aberration are you modeling?
Just the heuristic that I fitted to the Otus 55 curves, so far.
Defocus is also frequency dependent.
So I need a kernel for each plane? And where do I get those values? I'm using the CoC calcs, which have no frequency dependent terms, unless maybe you could say that LoCA is a change in focal length with frequency.
And it does not follow the geometric model for the slight errors typical of images captured with decent technique (say 0.5 wavelengths OPD or less).
So what do you recommend?
You are doing fine, I was just pointing out the potential problems, as requested :-)

I do have a rough model in the frequency domain but I have not given much thought to the spatial domain. I know Frans has, though, in designing the mtf-generate-rectangle.exe part of MTF Mapper.

Jack
 
1. Given two CoCs of an object, c caused by being out of focus, and d caused by diffraction, what is a simple, decent formula for their convolution?
In my modeling, I use different kernels for OOF and diffraction. One is an Airy solid, and the other is a pillbox. I compute the combined effect by successive convolutions.
Oh, I think I see what you mean. Have a look here:

http://www.cs.uu.nl/docs/vakken/ibv/reader/chapter5.pdf

Page down to 5.1.6

what you're talking about is this:

Distributive law: (g + h) ∗ f = g ∗ f + h ∗ f.

Right?

jim
 
As I said earlier, I don't like the idea of basing DOF decisions on extinction resolution. I think MTF50 is a much better metric. We can get into that in the fullness of time, but I decided to do some DOF testing with MTF50 as the measure of interest.

I got my first sim run done.

I modeled a 24 MP FF Bayer-CFA camera with an AA filter that operates in both directions and has a zero at about 0.67 cy/px. I mounted a diffraction-limited 50mm f/2 lens, put the target at 10m, focused on it, then moved the target progressively farther away while measuring system MTF50. I developed the images using AHD.

Here's what I got:
Looking good. For comparison MTF Mapper shows MTF50 for DPR's a7II+FE55mm at f/5.6 (vertical crop with AA, ISO400-1600) at about 1170 lp/ph.
[chart: system MTF50 vs. target distance for f/2 through f/16]

Stopping down from f/2 to f/2.8 is all gain and no loss, since the sensor is limiting the MTF50 at f/2. F/4 and f/5.6 bring tiny losses in central sharpness and pretty big gains in DOF. F/8 through f/16 show progressive DOF improvements and focused-object losses.
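The geometric side of that tradeoff is easy to tabulate; a back-of-the-envelope sketch (not the sim itself) of the defocus blur at the sensor versus the Airy diameter for this setup, with the target moved to 20 m as an arbitrary example:

```python
# 50 mm lens focused at 10 m, target at 20 m, green light, all dimensions in mm.
f, s, d_obj, lam = 50.0, 10_000.0, 20_000.0, 0.00055

v_s = f * s / (s - f)                   # image distance for the focused plane
v_d = f * d_obj / (d_obj - f)           # image distance of the moved target
for n in (2, 2.8, 4, 5.6, 8, 11, 16):
    coc  = (f / n) * abs(v_s - v_d) / v_d   # geometric defocus blur on the sensor
    airy = 2.44 * lam * n                   # Airy diameter
    print(f"f/{n:<4} defocus CoC {coc*1000:5.1f} um   Airy {airy*1000:5.1f} um")
```

Defocus blur shrinks in proportion to the f-number while the Airy diameter grows with it, which is the crossover the curves above are showing.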

While the above was generally what I expected, I was surprised by the details.

So, for all you CoC lovers, how would you pick an MTF50 to use to ascertain DOF? Absolute? Relative to peak? Viewing distance?
If we simplify a bit we could say that a blur disc resulting from the combination of many factors (diffraction, pixel aperture, AA, aberrations, etc.) looks gaussian. We can calculate the standard deviation of that gaussian in the spatial domain from the frequency at which its transform falls to half its peak (MTF50), call that the blur disc radius*, then double it to get the diameter of a generic CoC.

If we know the spatial frequency (s) at which our imaging system hits MTFx, the corresponding gaussian CoC is

CoC = sqrt(2*ln(100/x)) / (pi*s)

with the CoC in the same units as s. For instance, if MTF50 occurs at 1000 lp/ph in a Full Frame setup we can say that x = 50 and s = 41.67 lp/mm, for a CoC of 0.0090 mm or about 1.5px in an a7II, say. This method is used to estimate an initial deconvolution deblurring radius here.

The next question is whether MTF(50) is the right level to evaluate the formula at. It would seem that the chosen representative MTFxx should vary with final photograph viewing distance and size.

Jack

* although that would represent only 68% of the 'power' within the disc. Maybe the radius should be 1.5x or 2x the standard deviation (88% and 95% resp.)?
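Restating that relationship in code with Jack's worked numbers (this just encodes the gaussian-MTF assumption above, not anything new):

```python
import math

def gaussian_coc_mm(s_lp_per_mm, x_percent):
    """Diameter (2*sigma) of a gaussian PSF whose MTF falls to x% at frequency s."""
    return math.sqrt(2 * math.log(100.0 / x_percent)) / (math.pi * s_lp_per_mm)

s = 1000 / 24.0                        # 1000 lp/ph on a 24 mm tall frame -> 41.67 lp/mm
print(gaussian_coc_mm(s, 50))          # ~0.0090 mm, roughly 1.5 px at the a7II's pitch
```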
 
Jerry Fusselman wrote: I've read methods for lenses that do have DOF markings. Now I want to hear how to do CoC management on the rest of the lenses.
Hi Jerry,

If you've followed this thread so far you have probably realized that lens DOF markings are a sort of catch-all, generic recommendation based on a set of outdated, conservative assumptions. Here those assumptions are being tested and jettisoned.

The answer, in my book, is this: there is a reason why any blog entry that deals with DOF relegates the good stuff to the notes at the end of the page, which in turn very often refer you to a thick, boring tome full of formulas and short on practical advice: this subject has too many variables and too much math to break down into simple bits.

That's why the typical phone-toting budding photographer like me is very well served by Anders Torger's excellent little app.

Jack
 
D Cox wrote: Is there a distinction between circle of confusion, Airy disc, and Point Spread function?
Yes, the Circle of Confusion is the imaging system's response to a point source, so it is effectively the PSF on the sensing plane resulting from the combined effect of diffraction and lens blur (and its various causes).

I assume you know that when we are talking about the Circle of Confusion in geometrical optics we assume that its PSF on the sensing plane is of even intensity inside the circle and zero outside of it (a pillbox in 3D). On the other hand, the PSFs of defocus, diffraction and aberrations are all different, and none of them is necessarily uniform inside the 'circle' or zero outside of it - so neither is a CoC resulting from them.
 
As I said earlier, I don't like the idea of basing DOF decisions on extinction resolution. I think MTF50 is a much better metric. We can get into that in the fullness of time, but I decided to do some DOF testing with MTF50 as the measure of interest.

I got my first sim run done.

I modeled a 24 MP FF Bayer-CFA camera with an AA filter that operates in both directions and has a zero at about 0.67 cy/px. I mounted a diffraction-limited 50mm f/2 lens, put the target at 10m, focused on it, then moved the target progressively farther away while measuring system MTF50. I developed the images using AHD.

Here's what I got:
Looking good. For comparison MTF Mapper shows MTF50 for DPR's a7II+FE55mm at f/5.6 (vertical crop with AA, ISO400-1600) at about 1170 lp/ph.
[chart: system MTF50 vs. target distance for f/2 through f/16]

Stopping down from f/2 to f/2.8 is all gain and no loss, since the sensor is limiting the MTF50 at f/2. F/4 and f/5.6 bring tiny losses in central sharpness and pretty big gains in DOF. F/8 through f/16 show progressive DOF improvements and focused-object losses.

While the above was generally what I expected, I was surprised by the details.

So, for all you CoC lovers, how would you pick an MTF50 to use to ascertain DOF? Absolute? Relative to peak? Viewing distance?
If we simplify a bit we could say that a blur disc resulting from the combination of many factors (diffraction, pixel aperture, AA, aberrations, etc.) looks gaussian. We can calculate the standard deviation of that gaussian in the spatial domain from the frequency at which its transform falls to half its peak (MTF50), call that the blur disc radius*, then double it to get the diameter of a generic CoC.
My sim is much more precise than that -- possibly unnecessarily so, but what the hay, I'm not paying for the computer time. I model the Airy solid by sampling it at a resolution much greater than the pixel pitch would imply. I'm using a pillbox for focus, and a double pillbox for lens aberrations, which I didn't turn on for the above.

I think the tendency toward a Gaussian only applies if the kernels are of similar size.
If we know the spatial frequency (s) at which our imaging system hits MTFx, the corresponding gaussian CoC is

CoC = sqrt(2*ln(100/x)) / (pi*s)

with the CoC in the same units as s. For instance, if MTF50 occurs at 1000 lp/ph in a Full Frame setup we can say that x = 50 and s = 41.67 lp/mm, for a CoC of 0.0090 mm or about 1.5px in an a7II, say. This method is used to estimate an initial deconvolution deblurring radius here.

The next question is whether MTF(50) is the right level to evaluate the formula at. It would seem that the chosen representative MTFxx should vary with final photograph viewing distance and size.

Jack

* although that would represent only 68% of the 'power' within the disc. Maybe the radius should be 1.5x or 2x the standard deviation (88% and 95% resp.)?


--
 
1. Given two CoCs of an object, c caused by being out of focus, and d caused by diffraction, what is a simple, decent formula for their convolution?
In my modeling, I use different kernels for OOF and diffraction. One is an Airy solid, and the other is a pillbox. I compute the combined effect by successive convolutions.
Oh, I think I see what you mean. Have a look here:

http://www.cs.uu.nl/docs/vakken/ibv/reader/chapter5.pdf

Page down to 5.1.6

what you're talking about is this:

Distributive law: (g + h) ∗ f = g ∗ f + h ∗ f.

Right?
Not immediately clear to me.
Then you should probably read the whole chapter.
I was hoping for a formula involving the two numbers I have, c and d. I'm not sure how that fits into a distributive law: distributive laws involve three functions, and I want a formula involving just my two numbers.
Can't be done, since the kernels are different in other ways than their sizes. Even if focus blur can be approximated with a pillbox kernel, diffraction blur can't.
If no one suggests better, I will assume that the composite CoC size is sqrt(c^2 + d^2), as in the book Image Clarity. I want a formula that has a c in it and a d in it.

I often see, even from you, Jim, arguments with an implicit assumption (it seems to me) that the formula is max(c,d), but I don't think that is reliable or reasonable.

There is a passage in Merklinger's simpler book, "The Ins and Outs of Focus," in which he considers c+d a reasonable approximation,
I don't. That's one of the reasons I'm running the sim.
and I know a simple geometrical argument that yields this result, but it ignores visibility thresholds (Yeah, I might be making up a term here).
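One mathematically clean observation (my addition, not from Merklinger): if you measure each blur by its standard deviation, the root-sum-square rule is exact, because variances add under convolution. The catch is that a pillbox's standard-deviation width is not its geometric diameter, so the answer still depends on what you call the "size" of a blur. A toy check, with a Gaussian standing in for the diffraction kernel:

```python
import numpy as np

def two_sigma_width(k, x):
    """'Size' of a blur kernel measured as 2x its standard deviation."""
    k = k / k.sum()
    mu = (x * k).sum()
    return 2 * np.sqrt(((x - mu) ** 2 * k).sum())

x = np.linspace(-50, 50, 2001)                 # microns, 1-D slice for simplicity
c, d = 20.0, 8.0                               # nominal 'diameters' of the two blurs
pill  = (np.abs(x) <= c / 2).astype(float)     # defocus stand-in (pillbox)
gauss = np.exp(-0.5 * (x / (d / 2)) ** 2)      # diffraction stand-in (Gaussian, sigma = d/2)
both  = np.convolve(pill, gauss, mode='same')  # the combined blur

wp, wg, wb = (two_sigma_width(k, x) for k in (pill, gauss, both))
print(wp, wg, wb)              # the combined width...
print(np.hypot(wp, wg))        # ...matches sqrt(wp^2 + wg^2) almost exactly
```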
 
1. Given two CoCs of an object, c caused by being out of focus, and d caused by diffraction, what is a simple, decent formula for their convolution?
In my modeling, I use different kernels for OOF and diffraction. One is an Airy solid, and the other is a pillbox. I compute the combined effect by successive convolutions.
Oh, I think I see what you mean. Have a look here:

http://www.cs.uu.nl/docs/vakken/ibv/reader/chapter5.pdf

Page down to 5.1.6

what you're talking about is this:

Distributive law: (g + h) ∗ f = g ∗ f + h ∗ f.

Right?
Is there a distinction between circle of confusion, Airy disc, and Point Spread function?
Yes. The Airy disc is a haystack with annular rings, the CoC is a simple disc (pillbox in image processing speak), and the PSF combines those two and everything else about the lens that differs from perfection.

--
http://blog.kasson.com
 
I do have a rough model in the frequency domain but I have not given much thought to the spatial domain. I know Frans has, though, in designing the mtf-generate-rectangle.exe part of MTF Mapper.
I specify things in the spatial domain, but there's a set of methods, which I haven't used in a while and which are probably slightly broken by now, that do the calcs in the frequency domain; that route is faster if the image is not very sharp.
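For anyone curious about the spatial-versus-frequency-domain tradeoff, scipy makes it easy to see (a generic timing sketch; the numbers will vary by machine, but the FFT route pulls ahead as the kernel gets big):

```python
import numpy as np
from scipy import signal
from timeit import timeit

img  = np.random.default_rng(1).random((512, 512))
kern = np.ones((33, 33)) / 33**2      # a big, soft kernel, as for a not-very-sharp image

t_direct = timeit(lambda: signal.convolve2d(img, kern, mode='same'), number=1)
t_fft    = timeit(lambda: signal.fftconvolve(img, kern, mode='same'), number=1)
print(f"direct: {t_direct:.3f} s   via FFT: {t_fft:.3f} s")
```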

Jim
 
D Cox wrote: Is there a distinction between circle of confusion, Airy disc, and Point Spread function?
Yes, the Circle of Confusion is the imaging system's response to a point source, so it is effectively the PSF on the sensing plane resulting from the combined effect of diffraction and lens blur (and its various causes).

I assume you know that when we are talking about the Circle of Confusion in geometrical optics we assume that its PSF on the sensing plane is of even intensity inside the circle and zero outside of it (a pillbox in 3D). On the other hand, the PSFs of defocus, diffraction and aberrations are all different, and none of them is necessarily uniform inside the 'circle' or zero outside of it - so neither is a CoC resulting from them.
So you are using CoC to refer to a hypothetical lens that has no aberrations (including diffraction) and focusses a star to an infinitesimal point? It seems obvious that this would be a pillbox function.

But the rays from a lens with aberrations could do all kinds of things away from the focal plane.
 
D Cox wrote: Is there a distinction between circle of confusion, Airy disc, and Point Spread function?
Yes, the Circle of Confusion is the imaging system's response to a point source, so it is effectively the PSF on the sensing plane resulting from the combined effect of diffraction and lens blur (and its various causes).

I assume you know that when we are talking about the Circle of Confusion in geometrical optics we assume that its PSF on the sensing plane is of even intensity inside the circle and zero outside of it (a pillbox in 3D). On the other hand, the PSFs of defocus, diffraction and aberrations are all different, and none of them is necessarily uniform inside the 'circle' or zero outside of it - so neither is a CoC resulting from them.
So you are using CoC to refer to a hypothetical lens that has no aberrations (including diffraction) and focusses a star to an infinitesimal point? It seems obvious that this would be a pillbox function.

But the rays from a lens with aberrations could do all kinds of things away from the focal plane.
I'm just modeling on-axis behavior, and the assumption is that defocus error, diffraction, and lens aberrations can be modeled by successive convolutions with the image.

Jim
 
JimKasson wrote: My sim is much more precise than that -- possibly unnecessarily so, but what the hay, I'm not paying for the computer time. I model the Airy solid by sampling it at a resolution much greater than the pixel pitch would imply. I'm using a pillbox for focus, and a double pillbox for lens aberrations, which I didn't turn on for the above.
Hi Jim, yes, I assumed as much.
I think the tendency toward a Gaussian only applies if the kernels are of similar size.
I am not adding gaussians. I am simply assuming that by the time you are done with the various convolutions the resulting CoC PSF will look somewhat gaussian. There is actually a theorem that says that if you convolve enough different stuff together it will :-)
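That theorem is the central limit theorem wearing a convolution hat; a quick way to watch it happen with toy kernels of my own choosing:

```python
import numpy as np

x = np.arange(-200, 201)
pill = (np.abs(x) <= 10).astype(float)
pill /= pill.sum()                                  # a pillbox...
tri = np.maximum(0.0, 1 - np.abs(x) / 7.0)
tri /= tri.sum()                                    # ...and a quite different triangle

k = pill
for _ in range(4):                                  # convolve dissimilar kernels repeatedly
    k = np.convolve(np.convolve(k, pill, mode='same'), tri, mode='same')
k /= k.sum()

sigma = np.sqrt((x ** 2 * k).sum())                 # kernels are symmetric, so the mean is 0
gauss = np.exp(-0.5 * (x / sigma) ** 2)
gauss /= gauss.sum()
print(np.abs(k - gauss).max() / gauss.max())        # small: the composite is nearly gaussian
```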

So the rest of the post was in answer to your question on how to get a CoC from MTF50 - in order to calculate DOF. Use the CoC below for the MTFx of your situation and plug it into the near/far formulas.
JimKasson wrote: So, for all you CoC lovers, how would you pick an MTF50 to use to ascertain DOF? Absolute? Relative to peak? Viewing distance?
Jack Hogan wrote: If we know the spatial frequency (s) at which our imaging system hits MTFx, the corresponding gaussian CoC is

CoC = sqrt(2*ln(100/x)) / (pi*s)

with the CoC in the same units as s. For instance, if MTF50 occurs at 1000 lp/ph in a Full Frame setup we can say that x = 50 and s = 41.67 lp/mm, for a CoC of 0.0090 mm or about 1.5px in an a7II, say. This method is used to estimate an initial deconvolution deblurring radius here.

The next question is whether MTF(50) is the right level to evaluate the formula at. It would seem that the chosen representative MTFxx should vary with final photograph viewing distance and size.

* although that would represent only 68% of the 'power' within the disc. Maybe the radius should be 1.5x or 2x the standard deviation (88% and 95% resp.)?
Jack
 