CoC Management and the Object Field.

I think the adding to a Gaussian only applies if the kernels are of similar size.
I am not adding gaussians. I am simply assuming that by the time you are done with the various convolutions the resulting CoC PSF will look somewhat gaussian. There is actually a theorem that says that if you convolve enough different stuff together it will :-)
Does the theorem you refer to involve, not "enough different stuff," but "enough IID stuff"? :-) If so, the IID part is likely missing for us here, I would think.
 
I think the adding to a Gaussian only applies if the kernels are of similar size.
I am not adding gaussians. I am simply assuming that by the time you are done with the various convolutions the resulting CoC PSF will look somewhat gaussian. There is actually a theorem that says that if you convolve enough different stuff together it will :-)
See the very last set of mathematical identities which are presented here in this paper.
So the rest of the post was in answer to your question on how to get a CoC from MTF50 - in order to calculate DOF. Use the CoC below for the MTFx of your situation and plug it into the near/far formulas.
Followed the two links from your reference, but did not find a derivation of that identity. It seems interesting. Any sources available describing the derivation of that particular item?
JimKasson wrote: So, for all you CoC lovers, how would you pick an MTF50 to use to ascertain DOF? Absolute? Relative to peak? Viewing distance?
Jack Hogan wrote: If we know the spatial frequency (s) at which our imaging system hits MTFx, the corresponding gaussian CoC is

CoC = sqrt(2 ln(100/x)) / (π s)

with the CoC in the same units as s. For instance, if MTF50 occurs at 1000 lp/ph in a Full Frame setup, we can say that x = 50 and s = 41.67 lp/mm, for a CoC of 0.0090 mm or about 1.5 px in an a7II, say. This method is used to estimate an initial deconvolution deblurring radius here.

The next question is whether MTF(50) is the right level to evaluate the formula at. It would seem that the chosen representative MTFxx should vary with final photograph viewing distance and size.

* although that would represent only 68% of the 'power' within the disc. Maybe the radius should be 1.5x or 2x the standard deviation (88% and 95% resp.)?
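For anyone who wants to plug in their own numbers, here is a minimal Python sketch of that calculation. It assumes the Gaussian MTF form exp(-2 π² σ² f²) with the CoC taken as 2σ; the 24 mm frame height and the roughly 5.97 µm a7II pixel pitch are my own assumptions for the example:

```python
import math

def gaussian_coc_mm(mtf_fraction, freq_lp_mm):
    """Diameter (2 sigma) of a Gaussian PSF whose MTF falls to mtf_fraction
    at freq_lp_mm, using MTF(f) = exp(-2 * pi^2 * sigma^2 * f^2)."""
    sigma = math.sqrt(-math.log(mtf_fraction) / (2.0 * math.pi ** 2)) / freq_lp_mm
    return 2.0 * sigma

# MTF50 at 1000 lp/ph on a 24 mm tall full-frame sensor
s = 1000 / 24.0                          # 41.67 lp/mm
coc = gaussian_coc_mm(0.5, s)
print(round(coc, 4), "mm")               # ~0.0090 mm
print(round(coc / 0.00597, 1), "px")     # ~1.5 px at an assumed 5.97 um pitch
```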
 
I think the adding to a Gaussian only applies if the kernels are of similar size.
I am not adding gaussians. I am simply assuming that by the time you are done with the various convolutions the resulting CoC PSF will look somewhat gaussian. There is actually a theorem that says that if you convolve enough different stuff together it will
Does the theorem you refer to involve, not "enough different stuff," but "enough IID stuff"? If so, the IID part is likely missing for us here, I would think.
:-) Actually in an imaging system context I don't think they need to be identically distributed, they just need to be many and uncorrelated. As you've probably seen in the link I provided earlier the assumption seems to hold fairly well for systems that include an AA. Not so good for those that don't.

Jack
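A toy 1-D sketch of the idea in Python/NumPy, purely illustrative (the three kernels below are arbitrary choices of mine, just meant to be dissimilar and uncorrelated):

```python
import numpy as np

# Three dissimilar 1-D kernels: a box (defocus-ish), a triangle (pixel-aperture-ish),
# and a symmetric two-point spread (beam-splitter-ish). None is Gaussian on its own.
box = np.ones(21) / 21.0
tri = np.convolve(np.ones(7), np.ones(7)); tri /= tri.sum()
twopoint = np.zeros(9); twopoint[0] = twopoint[-1] = 0.5

combined = np.convolve(np.convolve(box, tri), twopoint)

# Compare against a Gaussian with the same standard deviation
x = np.arange(combined.size) - (combined.size - 1) / 2.0
sigma = np.sqrt(np.sum(combined * x ** 2))
gauss = np.exp(-x ** 2 / (2 * sigma ** 2)); gauss /= gauss.sum()

print("max |combined - gaussian|:", np.abs(combined - gauss).max())
```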
 
See the very last set of mathematical identities which are presented here in this paper.
Hi DM,

Nice paper, although slightly different. I don't remember where I saw it, but the one I am referring to was the 2D version of the one that says that when you add enough continuous random variables the sum becomes a Gaussian.
Followed the two links from your reference, but did not find a derivation of that identity. It seems interesting. Any sources available describing the derivation of that particular item?
Yes, wikipedia and a little elbow grease ;-)

Jack
 
1. Given two CoCs of an object, c caused by being out of focus, and d caused by diffraction, what is a simple, decent formula for their convolution?
In my modeling, I use different kernels for OOF and diffraction. One is an Airy solid, and the other is a pillbox. I compute the combined effect by successive convolutions.
Oh, I think I see what you mean. Have a look here:

http://www.cs.uu.nl/docs/vakken/ibv/reader/chapter5.pdf

Page down to 5.1.6

what you're talking about is this:

Distributive law: (g + h) ∗ f = g ∗ f + h ∗ f.

Right?
Not immediately clear to me.
On page 103, there's a detailed example of how to take two kernels and convolve them to get one.
I was hoping for a formula involving the two numbers I have, c and d. I'm not sure how that fits into a distributive law. Distributive laws have three letters. I want a formula involving two letters.

If no one suggests better, I will assume that the composite CoC size is sqrt(c^2 + d^2), as in the book Image Clarity. I want a formula that has a c in it and a d in it.

I often see, even from you, Jim, arguments with an implicit assumption (it seems to me) that the formula is max(c,d), but I don't think that is reliable or reasonable.

There is a passage in Merklinger's simpler book, "The Ins and Outs of Focus," in which he considers c+d a reasonable approximation, and I know a simple geometrical argument that yields this result, but it ignores visibility thresholds (yeah, I might be making up a term here).
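To put the three candidate combinations side by side, a trivial Python sketch (the micron values are just illustrative numbers I picked):

```python
import math

def combine_max(c, d): return max(c, d)
def combine_rss(c, d): return math.hypot(c, d)   # Image Clarity's sqrt(c^2 + d^2)
def combine_sum(c, d): return c + d

for c, d in [(10.0, 10.0), (10.0, 3.0), (20.0, 5.0)]:   # microns, illustrative
    print(f"c={c}, d={d}: max={combine_max(c, d):.1f}, "
          f"rss={combine_rss(c, d):.1f}, sum={combine_sum(c, d):.1f}")
```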
 
1. Given two CoCs of an object, c caused by being out of focus, and d caused by diffraction, what is a simple, decent formula for their convolution?
By this question, I am asking for some simple-but-still-useful approximation. Sometimes I like to take pictures before I get the results of computer simulations. Is that why I like the object-field method so much?

For my original question, we are allowed to use the characteristics of defocus blurs and diffraction to get our answer. I imagine we're all at least 99% sure that the answer is at least max(c,d) (because two spreads we didn't want don't usually offset each other) and at most c+d (a simple geometric argument corresponding to the convolution of circles).

I'm hoping for something that is reasonable to use in the field. I see no reason to give up on finding a useful answer in terms of c and d.

When c or d is 0, we know what to do and all three formulas I suggested give the same, correct answer.

Maybe we can focus on the c = d case. Maybe someone can simulate the c = d case. Too vague? OK, how about the c = d = 10 um case?

It looks like my question is striking this group as more than a little odd. I admit that my perspective is Jaynesian, as in reasoning from Bayesian methods, maximum entropy, and transformation groups. The attitude is summarized in the title of the business book How to Measure Anything. One easy example of this approach is the Fermi decomposition. An approximate answer can be better than no answer.

Another example is, "Shall we open a store in Pittsburgh?" or "Shall I focus at infinity or 300 feet?" We want something to guide us right now even if our information is imperfect and a better answer is possible some day in the future.

It sounds like Jim is saying that my question involving c and d has such woefully limited information that no answer is possible, as if I asked "What is the CoC of a Canon lens?" with no other information. Can you at least agree that a reasonable answer is between max(c,d) and c+d?

Jim has discussed c often, including it in the title of this thread, but what I mean by d is perhaps unclear and undefined. It's not apples and oranges. By d, I am referring to the diameter of a blur caused by diffraction on an object that is in perfect focus, measured in the visual units of (it looks like) a blur caused by defocus of something photographed with essentially zero diffraction.

Jim, you're a classical guy. Maybe everyone in this thread is classical. I'm a Bayesian. Hope we can still learn from each other, or at least have a discussion. :)
In my modeling, I use different kernels for OOF and diffraction. One is an Airy solid, and the other is a pillbox. I compute the combined effect by successive convolutions.
Oh, I think I see what you mean. Have a look here:

http://www.cs.uu.nl/docs/vakken/ibv/reader/chapter5.pdf

Page down to 5.1.6

what you're talking about is this:

Distributive law: (g + h) ∗ f = g ∗ f + h ∗ f.

Right?
Not immediately clear to me.
Then you should probably read the whole chapter.
I was trying to answer your question, "Right?". I would have thought my issue had come up before, and indeed Image Clarity discusses this on p. 30 and in chapters 4 and 5 at least. (By the way, the online PDF copies look dangerous to me. I am using my hard copy.)

The book offers an "image degradation" formula of sqrt(c^2+d^2), which falls between my reasonable limits above, so I'm inclined to use it if nothing more comes along. At least as a first approximation until someone shows me better.
I was hoping for a formula involving the two numbers I have, c and d. I'm not sure how that fits into a distributive law. Distributive laws have three letters. I want a formula involving two letters.
Can't be done, since the kernels are different in other ways than their sizes. Even if focus blur can be approximated with a pillbox kernel, diffraction blur can't.
No, "you can measure anything." It can be done, though not perfectly. If a clever person figures out how, it can take the strange aspects of diffraction blur into account, but until anyone offers some other way to do that, I'll use Image Clarity's image-degradation formula.
If no one suggests better, I will assume that the composite CoC size is sqrt(c^2 + d^2), as in the book Image Clarity. I want a formula that has a c in it and a d in it.

I often see, even from you, Jim, arguments with an implicit assumption (it seems to me) that the formula is max(c,d), but I don't think that is reliable or reasonable.

There is a passage in Merklinger's simpler book, "The Ins and Outs of Focus," in which he considers c+d a reasonable approximation,
I don't. That's one of the reasons I'm running the sim.
I certainly look forward to the results of the sim. OK, you're sure that c+d is not always reasonable. Terrific. I like the concreteness of your statement. That's different from saying that c+d is meaningless. OK, are you saying that c+d is too high or too low? What formula might be better?
and I know a simple geometrical argument that yields this result, but it ignores visibility thresholds (Yeah, I might be making up a term here).
--
http://blog.kasson.com
--
Jerry Fusselman
 
D Cox wrote: Is there a distinction between circle of confusion, Airy disc, and point spread function?
Yes, the Circle of Confusion is the imaging system's response to a point source, so it is effectively the PSF on the sensing plane resulting from the combined effect of diffraction and lens blur (and its various causes).

I assume you know that when we are talking about the Circle of Confusion in geometrical optics we assume that its PSF on the sensing plane is of even intensity inside the circle and zero outside of it (a pillbox in 3D). On the other hand the PSF of defocus, diffraction and aberrations are all different and neither necessarily uniform inside the 'circle' nor zero outside of it - so neither is a CoC resulting from them.
Right. As an example of two kernels which, when convolved, form nothing like a Gaussian, see the example on page 103 of this:

http://www.cs.uu.nl/docs/vakken/ibv/reader/chapter5.pdf
Of course, you've got a better chance of getting something more Gaussian if the kernels are all low-pass.

Jim
 
1. Given two CoCs of an object, c caused by being out of focus, and d caused by diffraction, what is a simple, decent formula for their convolution?
By this question, I am asking for some simple-but-still-useful approximation. Sometimes I like to take pictures before I get the results of computer simulations. Is that why I like the object-field method so much?

For my original question, we are allowed to use the characteristics of defocus blurs and diffraction to get our answer. I imagine we're all at least 99% sure that the answer is at least max(c,d) (because two spreads we didn't want don't usually offset each other) and at most c+d (a simple geometric argument corresponding to the convolution of circles).

I'm hoping for something that is reasonable to use in the field. I see no reason to give up on finding a useful answer in terms of c and d.

When c or d is 0, we know what to do and all three formulas I suggested give the same, correct answer.

Maybe we can focus on the c = d case. Maybe someone can simulate the c = d case. Too vague? OK, how about the c = d = 10 um case?

It looks like my question is striking this group as more than a little odd. I admit that my perspective is Jaynesian, as in reasoning from Bayesian methods, maximum entropy, and transformation groups. The attitude is summarized in the title of the business book How to Measure Anything. One easy example of this approach is the Fermi decomposition. An approximate answer can be better than no answer.

Another example is, "Shall we open a store in Pittsburgh?" or "Shall I focus at infinity or 300 feet?" We want something to guide us right now even if our information is imperfect and a better answer is possible some day in the future.

It sounds like Jim is saying that my question involving c and d has such woefully limited information that no answer is possible, as if I asked "What is the CoC of a Canon lens?" with no other information. Can you at least agree that a reasonable answer is between max(c,d) and c+d?

Jim has discussed c often, including it in the title of this thread, but what I mean by d is perhaps unclear and undefined. It's not apples and oranges. By d, I am referring to the diameter of a blur caused by diffraction on an object that is in perfect focus, measured in the visual units of (it looks like) a blur caused by defocus of something photographed with essentially zero diffraction.

Jim, you're a classical guy. Maybe everyone in this thread is classical. I'm a Bayesian. Hope we can still learn from each other, or at least have a discussion. :)
In my modeling, I use different kernels for OOF and diffraction. One is an Airy solid, and the other is a pillbox. I compute the combined effect by successive convolutions.
Oh, I think I see what you mean. Have a look here:

http://www.cs.uu.nl/docs/vakken/ibv/reader/chapter5.pdf

Page down to 5.1.6

what you're talking about is this:

Distributive law: (g + h) ∗ f = g ∗ f + h ∗ f.

Right?
Not immediately clear to me.
Then you should probably read the whole chapter.
I was trying to answer your question, "Right?". I would have thought my issue had come up before, and indeed Image Clarity discusses this on p. 30 and in chapters 4 and 5 at least. (By the way, the online PDF copies look dangerous to me. I am using my hard copy.)

The book offers an "image degradation" formula of sqrt(c^2+d^2), which falls between my reasonable limits above, so I'm inclined to use it if nothing more comes along. At least as a first approximation until someone shows me better.
I was hoping for a formula involving the two numbers I have, c and d. I'm not sure how that fits into a distributive law. Distributive laws have three letters. I want a formula involving two letters.
Can't be done, since the kernels are different in other ways than their sizes. Even if focus blur can be approximated with a pillbox kernel, diffraction blur can't.
No, "you can measure anything." It can be done, though not perfectly. If a clever person figures out how, it can take the strange aspects of diffraction blur into account, but until anyone offers some other way to do that, I'll use Image Clarity's image-degradation formula.
If no one suggests better, I will assume that the composite CoC size is sqrt(c^2 + d^2), as in the book Image Clarity. I want a formula that has a c in it and a d in it.

I often see, even from you, Jim, arguments with an implicit assumption (it seems to me) that the formula is max(c,d), but I don't think that is reliable or reasonable.

There is a passage in Merklinger's simpler book, "The Ins and Outs of Focus," in which he considers c+d a reasonable approximation,
I don't. That's one of the reasons I'm running the sim.
I certainly look forward to the results of the sim. OK, you're sure that c+d is not always reasonable. Terrific. I like the concreteness of your statement. That's different from saying that c+d is meaningless. OK, are you saying that c+d is too high or too low? What formula might be better?
and I know a simple geometrical argument that yields this result, but it ignores visibility thresholds (Yeah, I might be making up a term here).
Jerry, for computational efficiency, I compute something that I call the combined kernel when running the sim, and it contains the effect of all of the optical processing prior to the sensor, including the AA filter. That kernel is used on the image and discarded. It would be a SMOP to save it, and examinations of many such kernels would answer your question. When I'm a bit further with the sim, I'll see if I can do that.

The simple dimensions of the combined kernel are not particularly interesting, since the repeated kernel convolutions inevitably increase the size of the kernel, but the periphery tends to be composed of small values. We would have to establish a threshold to get to the answer to your question, and I'm not quite sure how to do that now.

Jim
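This is not Jim's code, just a rough Python/SciPy sketch of the kind of successive-convolution pipeline he describes: a pillbox for defocus, a jinc-squared Airy pattern for diffraction, and a four-dot kernel standing in for the AA filter. The grid spacing, kernel sizes, and the AA dot offset are all assumptions of mine, and the threshold at the end is just one possible way to attach a "size" to the result:

```python
import numpy as np
from scipy.signal import fftconvolve
from scipy.special import j1

PITCH_UM = 0.5   # sample spacing in microns (assumed)

def radius_grid(half_width_um):
    n = int(np.ceil(half_width_um / PITCH_UM)) * 2 + 1
    ax = (np.arange(n) - n // 2) * PITCH_UM
    xx, yy = np.meshgrid(ax, ax)
    return np.hypot(xx, yy)

def pillbox(diameter_um):
    r = radius_grid(diameter_um)
    k = (r <= diameter_um / 2.0).astype(float)
    return k / k.sum()

def airy(first_zero_diameter_um):
    # Intensity PSF (2 J1(v)/v)^2 with its first zero at v = 3.8317
    r = radius_grid(2.0 * first_zero_diameter_um)
    v = 3.8317 * r / (first_zero_diameter_um / 2.0)
    v[v == 0] = 1e-12
    k = (2.0 * j1(v) / v) ** 2
    return k / k.sum()

def aa_four_dot(offset_um):
    # crude four-dot beam-splitter model of an AA filter
    o = int(round(offset_um / PITCH_UM))
    n = 2 * o + 1
    k = np.zeros((n, n))
    c = n // 2
    k[c - o, c] = k[c + o, c] = k[c, c - o] = k[c, c + o] = 0.25
    return k

combined = fftconvolve(fftconvolve(pillbox(30.0), airy(6.0)), aa_four_dot(2.5))
combined /= combined.sum()

# One possible size measure: extent of the region above a small threshold
ys, xs = np.nonzero(combined > 0.001 * combined.max())
print("thresholded support:", (xs.max() - xs.min()) * PITCH_UM, "um across")
```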
 
Nice paper, although slightly different. I don't remember where I saw it, but the one I am referring to was the 2D version of the one that says that when you add enough continuous random variables the sum becomes a Gaussian.
That's the Central Limit Theorem. It usually assumes that the random variables all have the same distribution, although as I understand it, it is still a good approximation as long as all the variables have a finite variance.
 
Nice paper, although slightly different. I don't remember where I saw it, but the one I am referring to was the 2D version of the one that says that when you add enough continuous random variables the sum becomes a Gaussian.
That's the Central Limit Theorem. It usually assumes that the random variables all have the same distribution, although as I understand it, it is still a good approximation as long as all the variables have a finite variance.
I don't think that applies here. There are only three kernels to convolve with no AA filter, and four with an AA filter. If lens aberrations are left out, there is one fewer in either case. So, consider a case with no AA filter, and a pillbox of diameter 100 um and an Airy disk with 3 um between the first zeros. My claim is that the convolution of those two is essentially a pillbox, not a Gaussian.

Jim
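The claim is easy to check numerically; here's a quick sketch along those lines (the grid spacing and the 51 µm test radius are my choices):

```python
import numpy as np
from scipy.signal import fftconvolve
from scipy.special import j1

STEP = 0.25                                    # um per sample (assumed)
ax = np.arange(-240, 241) * STEP               # grid spans +/- 60 um
xx, yy = np.meshgrid(ax, ax)
r = np.hypot(xx, yy)

pill = (r <= 50.0).astype(float)               # 100 um diameter pillbox
pill /= pill.sum()

v = 3.8317 * r / 1.5                           # Airy pattern, first zeros 3 um apart
v[v == 0] = 1e-12
airy = (2.0 * j1(v) / v) ** 2
airy /= airy.sum()

both = fftconvolve(pill, airy, mode="same")
both /= both.sum()

# How much energy still lies within a circle barely larger than the pillbox?
print("pillbox energy inside r = 51 um:", round(pill[r <= 51.0].sum(), 3))
print("combined energy inside r = 51 um:", round(both[r <= 51.0].sum(), 3))
```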
 
1. Given two CoCs of an object, c caused by being out of focus, and d caused by diffraction, what is a simple, decent formula for their convolution?
In my modeling, I use different kernels for OOF and diffraction. One is an Airy solid, and the other is a pillbox. I compute the combined effect by successive convolutions.
Oh, I think I see what you mean. Have a look here:

http://www.cs.uu.nl/docs/vakken/ibv/reader/chapter5.pdf

Page down to 5.1.6

what you're talking about is this:

Distributive law: (g + h) ∗ f = g ∗ f + h ∗ f.

Right?
Not immediately clear to me.
On page 103, there's a detailed example of how to take two kernels and convolve them to get one.
Yes, and note the quote, "(with σ equal to the square root of the sum of squares of the original two sigmas)."

This (it seems to me) supports the Image Clarity version with the square root of the sum of squares. Their formula is the one that also satisfies the Goldilocks principle: it is neither the floor of max(c,d) nor the ceiling of c+d. It is between them.

Yes, an approximation.


I was looking for some formula incorporating the special nature of diffraction CoCs, but that will have to wait. For today, I'm going with the square root of the sum of squares of c and d.

Offline, I got some other suggested reading on this issue. Looks interesting, but it will have to wait a day or so.
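That quoted identity is easy to verify numerically; a tiny Python sketch with two arbitrary sigmas:

```python
import numpy as np

step = 0.1
x = np.arange(-300, 301) * step            # 1-D grid wide enough for both Gaussians

def gauss(sigma):
    g = np.exp(-x ** 2 / (2.0 * sigma ** 2))
    return g / g.sum()

combined = np.convolve(gauss(3.0), gauss(4.0))
xc = (np.arange(combined.size) - (combined.size - 1) / 2.0) * step
sigma_measured = np.sqrt(np.sum(combined * xc ** 2))
print(sigma_measured)                      # ~5.0, i.e. sqrt(3^2 + 4^2)
```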
I was hoping for a formula involving the two numbers I have, c and d. I'm not sure how that fits into a distributive law. Distributive laws have three letters. I want a formula involving two letters.

If no one suggests better, I will assume that the composite CoC size is sqrt(c^2 + d^2), as in the book Image Clarity. I want a formula that has a c in it and a d in it.

I often see, even from you, Jim, arguments with an implicit assumption (it seems to me) that the formula is max(c,d), but I don't think that is reliable or reasonable.

There is a passage in Merklinger's simpler book, "The Ins and Outs of Focus," in which he considers c+d a reasonable approximation, and I know a simple geometrical argument that yields this result, but it ignores visibility thresholds (yeah, I might be making up a term here).

--
http://blog.kasson.com
--
Jerry Fusselman
 
Nice paper, although slightly different. I don't remember where I saw it, but the one I am referring to was the 2D version of the one that says that when you add enough continuous random variables the sum becomes a Gaussian.
That's the Central Limit Theorem. It usually assumes that the random variables all have the same distribution, although as I understand it, it is still a good approximation as long as all the variables have a finite variance.
I don't think that applies here. There are only three kernels to convolve with no AA filter, and four with an AA filter. If lens aberrations are left out, there is one fewer in either case. So, consider a case with no AA filter, and a pillbox of diameter 100 um and an Airy disk with 3 um between the first zeros. My claim is that the convolution of those two is essentially a pillbox, not a Gaussian.
Sure, I did not mean to imply that the CoC was a gaussian, merely that one had the option of thinking of it as such - with the described consequences. Pillbox or gaussian are both legitimate assumptions depending on what one wishes to do with them: the PSF is not going to look exactly like either. An Airy looks somewhat gaussian-like, and defocus goes from looking like a well-defined circle full of onion rings to a gaussian depending on front/back/how much. Pixel aperture is just a low pass filter over those two. Then you've got other aberrations that span the range. So for our purposes here, whether one decides to think of the CoC more like a pillbox or a gaussian has more to do with simplifying the model one has in mind than with how it actually looks in reality.

Jack
 
I was looking for some formula incorporating the special nature of diffraction CoCs, but that will have to wait. For today, I'm going with the square root of the sum of squares of c and d.

To me, a measure of the size of the combined kernel is of limited interest, since I think that extinction resolution is not a particularly useful measure, although it's relatively easy to calculate. We moved beyond extinction resolution for in-focus images decades ago with SFR, MTF, and SQF. Why do we still stick with it for DOF? Well, I know. It's because the math is easier. But I think that's looking for your keys under the lamppost (I'll be happy to tell you the joke if you haven't heard it before).

It's early days for me in this enterprise. I want to understand what's going on first, and then try to figure out ways to simplify it.

Jim
 
JimKasson wrote:

I don't think that applies here. There are only three kernels to convolve with no AA filter, and four with an AA filter. If lens aberrations are left out, there is one fewer in either case. So, consider a case with no AA filter, and a pillbox of diameter 100 um and an Airy disk with 3 um between the first zeros. My claim is that the convolution of those two is essentially a pillbox, not a Gaussian.
You are right, the convergence to Gaussian is slow, and I wouldn't expect that to work for a small number of variables.

As lenses tend to be either over- or under-corrected, a pure disk plus diffraction plus the AA filter seems reasonable, unless you also want to add a factor for over- or under-correction of spherical aberration.
 
Take a look at this MTF50 plot for a diffraction-limited 55 mm lens focused at 10m with MTF50 in cy/ph versus actual object distance:

[Plot: MTF50 (cy/ph) vs. object distance for a diffraction-limited 55 mm lens focused at 10 m, curves for f/4 through f/16]

The question is: how should we judge DOF if preserving most of the lens/camera maximum possible resolution is the criterion (for this exercise, forget output size and viewing distance)?

If we're very particular, we might say that MTF50 = 1200 cy/ph is our criterion, and there's almost a meter of DOF behind the object (and a little less than that in front of it, although that's not shown on the graph), and you should stop down to f/4 or f/5.6 to get all that DOF. If we're not as choosy, we'd say that 1000 cy/ph is good enough, and now there's almost 3.5 m of DOF behind the object, and we can get it at f/8 or f/11.

Note that in the 1200 cy/ph case, f/8 through f/16 have zero DOF. In the 1000 cy/ph case, f/16 has zero DOF.

I'm open to other ways to interpret this data.

Anyone?

Jim
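This is not Jim's simulation, but a crude Gaussian-equivalent model in Python shows the same kind of trade-off. It replaces the defocus pillbox and the Airy pattern with Gaussian stand-ins (per-axis sigmas of c/4 and roughly 0.44·λ·N, both approximations of mine), combines them in quadrature, and reads off where the Gaussian MTF falls to 50%. Pixel aperture, AA filter, and demosaicing are left out, so the absolute numbers sit well above the simulated curves; only the shape of the trade-off is meant to be illustrative:

```python
import math

LAMBDA_MM = 0.00055      # 550 nm light, assumed
F_MM      = 55.0         # focal length
FOCUS_MM  = 10000.0      # focused at 10 m
FRAME_H   = 24.0         # mm, to convert lp/mm into cy/ph

def mtf50_cy_ph(n_stop, subject_m):
    s = subject_m * 1000.0
    # geometric defocus blur diameter on the sensor (thin lens), in mm
    c = F_MM ** 2 * abs(s - FOCUS_MM) / (n_stop * s * (FOCUS_MM - F_MM))
    sigma_defocus = c / 4.0                      # per-axis sigma of a uniform disc
    sigma_diff = 0.44 * LAMBDA_MM * n_stop       # rough Gaussian stand-in for the Airy core
    sigma = math.hypot(sigma_defocus, sigma_diff)
    f50 = math.sqrt(math.log(2) / (2.0 * math.pi ** 2)) / sigma   # MTF(f) = exp(-2 pi^2 sigma^2 f^2)
    return f50 * FRAME_H

for n in (4, 5.6, 8, 11, 16):
    print(f"f/{n}:", [round(mtf50_cy_ph(n, d)) for d in (9, 10, 11, 13)])
```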

 
I was looking for some formula incorporating the special nature of diffraction CoCs, but that will have to wait. For today, I'm going with the square root of the sum of squares of c and d.
To me, a measure of the size of the combined kernel is of limited interest, since I think that extinction resolution is not a particularly useful measure, although it's relatively easy to calculate. We moved beyond extinction resolution for in-focus images decades ago with SFR, MTF, and SQF. Why do we still stick with it for DOF? Well, I know. It's because the math is easier. But I think that's looking for your keys under the lamppost (I'll be happy to tell you the joke if you haven't heard it before).
I know the joke, but I may lack the skill to tell it, except to recall the punchline <spoiler alert>, "because the light is better here." Yeah, we should keep the joke in mind.

If extinction resolution is of limited interest, terrific: Please offer your alternative.

It is like an axiomatic system with these rules:
  1. max(c,d) <= f(c,d) <= c+d.
  2. c' > c implies f(c',d) > f(c,d).
  3. d' > d implies f(c,d') > f(c,d).
Can you agree with these rules? Besides the extreme c+d, Image Clarity's image-degradation formula is the only formula yet put forward that I have seen that conforms to the three rules above.
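Those three rules are easy to check mechanically for the square-root-of-sum-of-squares candidate; a throwaway sketch:

```python
import math, random

def f_rss(c, d):
    return math.hypot(c, d)       # sqrt(c^2 + d^2)

random.seed(1)
ok = True
for _ in range(100000):
    c, d = random.uniform(0.1, 50.0), random.uniform(0.1, 50.0)
    eps = 0.01
    ok &= max(c, d) <= f_rss(c, d) <= c + d     # rule 1
    ok &= f_rss(c + eps, d) > f_rss(c, d)       # rule 2
    ok &= f_rss(c, d + eps) > f_rss(c, d)       # rule 3

print(ok)   # True: the RSS formula satisfies all three rules on these samples
```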
It's early days for me in this enterprise. I want to understand what's going on first, and then try to figure out ways to simplify it.

Jim
 
I was looking for some formula incorporating the special nature of diffraction CoCs, but that will have to wait. For today, I'm going with the square root of the sum of squares of c and d.
To me, a measure of the size of the combined kernel is of limited interest, since I think that extinction resolution is not a particularly useful measure, although it's relatively easy to calculate. We moved beyond extinction resolution for in-focus images decades ago with SFR, MTF, and SQF. Why do we still stick with it for DOF? Well, I know. It's because the math is easier. But I think that's looking for your keys under the lamppost (I'll be happy to tell you the joke if you haven't heard it before).
I know the joke, but I may lack the skill to tell it, except to recall the punchline <spoiler alert>, "because the light is better here." Yeah, we should keep the joke in mind.

If extinction resolution is of limited interest, terrific: Please offer your alternative.
MTF50.
It's early days for me in this enterprise. I want to understand what's going on first, and then try to figure out ways to simplify it.
Terrific. This seems at odds, though, with your several statements denigrating the object-field method, even before you fully understand it.
Jerry, if fully understanding the OF method is a requirement for commenting on it, I will have to shut up here and now. From my conversations with you, I believe that I will never fully understand it.

Jim
 
What distances is the horizontal axis referring to? Are we looking at distances from the center point of the image formed with a full-frame camera?

Maybe I just need a list of assumptions used to get this graph.
 
Take a look at this MTF50 plot for a diffraction-limited 55 mm lens focused at 10m with MTF50 in cy/ph versus actual object distance:

[Plot: MTF50 (cy/ph) vs. object distance for a diffraction-limited 55 mm lens focused at 10 m, curves for f/4 through f/16]

The question is: how should we judge DOF if preserving most of the lens/camera maximum possible resolution is the criterion (for this exercise, forget output size and viewing distance)?

If we're very particular, we might say that MTF50 = 1200 cy/ph is our criterion, and there's almost a meter of DOF behind the object (and a little less than that in front of it, although that's not shown on the graph), and you should stop down to f/4 or f/5.6 to get all that DOF. If we're not as choosy, we'd say that 1000 cy/ph is good enough, and now there's almost 3.5 m of DOF behind the object, and we can get it at f/8 or f/11.

Note that in the 1200 cy/ph case, f/8 through f/16 have zero DOF. In the 1000 cy/ph case, f/16 has zero DOF.

I'm open to other ways to interpret this data.

Anyone?
I find it difficult to make a choice without first determining a criterion for the CoC based on viewing distance and size. For instance, how do you know that 1000 cy/ph is good enough without knowing what those are? Good enough for what exactly? DOF depends on viewing distance and size.

On the other hand I have a pretty good idea how to choose the CoC (hence related near/far/DOF): acuity of viewer at set distance projected onto the sensing plane via the displayed photograph size. From there it's just a hop, skip and a jump to MTF50 and back ;-)

Jack
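To make the "hop, skip and a jump" concrete, here's a small sketch under assumed viewing conditions (1 arcminute of acuity, 40 cm viewing distance, and a 30 cm tall print from a 24 mm tall sensor are my numbers for the example, not a prescription):

```python
import math

ACUITY_RAD  = math.radians(1.0 / 60.0)   # ~1 arcminute visual acuity (assumed)
VIEW_MM     = 400.0                      # viewing distance
PRINT_H_MM  = 300.0                      # displayed photograph height
SENSOR_H_MM = 24.0                       # full-frame sensor height

detail_on_print = VIEW_MM * math.tan(ACUITY_RAD)       # smallest resolvable detail on the print
coc_mm = detail_on_print * SENSOR_H_MM / PRINT_H_MM    # projected back onto the sensing plane

# The hop to MTF50: invert CoC = sqrt(2 ln 2) / (pi * f50) for a Gaussian PSF
f50_lp_mm = math.sqrt(2.0 * math.log(2)) / (math.pi * coc_mm)
print(round(coc_mm * 1000, 1), "um CoC ->", round(f50_lp_mm * SENSOR_H_MM), "cy/ph at MTF50")
```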
 
I was looking for some formula incorporating the special nature of diffraction CoCs, but that will have to wait. For today, I'm going with the square root of the sum of squares of c and d.
To me, a measure of the size of the combined kernel is of limited interest, since I think that extinction resolution is not a particularly useful measure, although it's relatively easy to calculate. We moved beyond extinction resolution for in-focus images decades ago with SFR, MTF, and SQF. Why do we still stick with it for DOF? Well, I know. It's because the math is easier. But I think that's looking for your keys under the lamppost (I'll be happy to tell you the joke if you haven't heard it before).
I know the joke, but I may lack the skill to tell it, except to recall the punchline <spoiler alert>, "because the light is better here." Yeah, we should keep the joke in mind.

If extinction resolution is of limited interest, terrific: Please offer your alternative.
MTF50.
Yes, I recall that you are an MTF50 guy. One thing I don't understand about MTF50: Is the horizontal axis on these curves always the distance from the center of the imaging system? If there is some more general definition, I would like to see it. Call me a simpleton.
It's early days for me in this enterprise. I want to understand what's going on first, and then try to figure out ways to simplify it.
Terrific. This seems at odds, though, with your several statements denigrating the object-field method, even before you fully understand it.
Jerry, if fully understanding the OF method is a requirement for commenting on it, I will have to shut up here and now. From my conversations with you, I believe that I will never fully understand it.
OK, full understanding is rare. :)


I don't want your consideration of these issues to cease. I'm only hoping that you'll wait for a bit before stating your final conclusion. I'm still digesting things you wrote several days ago, hoping for a worthwhile reply.

There are a few areas where your summary of object-field methods is not yet quite accurate in my eyes. I have a two-step approach in mind, and we may yet get both steps today!
--
Jerry Fusselman
 
