CoC Management and the Object Field.

What distances is the horizontal axis referring to? Are we looking at distances from the center point of the image formed with a full-frame camera?

Maybe I just need a list of assumptions used to get this graph.
I guess when I said this it was too much in shorthand:
Take a look at this MTF50 plot for a diffraction-limited 55 mm lens focused at 10 m, with MTF50 in cy/ph versus actual object distance.
The horizontal axis is object distance from the center of the lens, measured along the lens axis. The lens is focused at 10 meters.

Does that help?

Yes, thanks.
What made the difference? Was it that part about measuring the distance along the lens axis?
I'll keep sharing my ignorance:

Still, a list of assumptions would be appreciated.
Diffraction-limited lens

Diffraction calculated at 450, 550, and 650 nm

Bayer CFA, FF 24 MP

No aberrations

AHD demosaicing

Zero photon noise

Zero read noise

Zero pattern noise

Zero PRNU
And a few others that I forgot:

No LoCA

No LaCA

No focus shift

14-bit ADC

Jim
 
I ran the sim overnight with a lens blur model that I originally created a couple of years ago to approximate the on-axis behavior of the Zeiss Otus 55/1.4. Now that I have improved my focusing accuracy with a motorized rail and my target with a razor blade edge, I realize that my Otus model is actually somewhat worse than the lens itself, particularly at wide apertures. Nevertheless, it can serve as a stand-in for very good, if not great, 55mm lenses.

As before, the lens is focused at ten meters. The horizontal axis of the graph is object distance from 10 meters (in focus) to 20 meters (well out of focus). The vertical axis is MTF50 measured in cycles per picture height. The simulated sensor is 24 MP, 14-bit, full-frame Bayer CFA with no AA filter, like the sensor in the Sony RX1R. I turned off all sources of noise (photon, read, PRNU) -- they don't affect slanted edge measurements much anyway, since the technique is intended to calibrate out noise. Diffraction is computed at 450, 550, and 650 nm for the respective blue, green, and red raw color planes. The CFA is Adobe RGB, and the illuminant is D65.
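As an aside, for anyone who wants to check the diffraction side of the sim: the MTF of an ideal diffraction-limited lens with a circular aperture has a closed form. A minimal MATLAB sketch (mine, not the sim code; wavelength and f-number are example values):

% Diffraction-limited MTF of a circular aperture (incoherent illumination).
lambda = 550e-9;                 % wavelength in meters (green channel)
N = 8;                           % f-number
fc = 1/(lambda*N);               % cutoff frequency, cycles per meter
f = linspace(0, fc, 500);        % spatial frequency axis
s = f/fc;                        % normalized frequency, 0..1
mtf = (2/pi)*(acos(s) - s.*sqrt(1 - s.^2));
plot(f/1e3, mtf); xlabel('cycles/mm'); ylabel('MTF');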



[Figure: simulated MTF50 (cy/ph) vs. object distance from 10 m to 20 m, at several f-stops, lens focused at 10 m]

If we pick 1000 cy/ph as our threshold for determining DOF, and only consider the DOF beyond the focused distance (since, to save about 12 hours of computer time, I didn't compute the MTF50 for objects in front of the focused distance), we can see that f/11 offers the largest DOF. If we raise our threshold to 1200 cy/ph, the greatest DOF is obtained at f/8 and f/11. At 1400 cy/ph, the greatest DOF is seen at f/5.6 and f/8. At 1600 cy/ph, we get the greatest DOF at f/5.6, and the DOF at all other f-stops is zero. If we go the other way and choose 800 cy/ph as the threshold, f/8 through f/16 all offer DOF as far out as I carried the computation.
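To make the thresholding procedure explicit, here is how one might read the far DOF limit off the simulated curves in MATLAB (a sketch only; dist and mtf50 are hypothetical stand-ins for the sim output vectors for one f-stop):

% Far DOF limit: farthest simulated distance at which MTF50
% still meets the chosen threshold.
threshold = 1000;               % cy/ph
inDof = mtf50 >= threshold;     % logical mask over the sampled distances
farLimit = max(dist(inDof));    % empty if the threshold is never met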

Note how looking at DOF this way gives quite different answers than conventional CoC calculations.

It's interesting to think about how we'd incorporate focus shift into our calculations. The lens as modeled has no focus shift, and we'd get these results in a lens with focus shift if we always focused at the taking aperture. But a lot of people don't do that. For them, stopping down introduces a new source of blur at the focused distance.



Jim

 
There are many photographers -- and I am one of them a lot of the time -- who don't have a final use (or at least not a complete list of possible final uses) in mind at the instant of exposure. Many of us want everything in a distance range to be pretty nearly as sharp as it can be given the camera and the lens. Then the issue of how far to stop down turns into an issue of how much loss in resolution is tolerable.
Now I understand what you are driving at (duh!)

I first started thinking about this when you produced the first LoCA plots a few months ago. If you take the peak of the luminance curve to be in acceptable focus, with minimal trig you can easily calculate depth of focus from the plot, hence depth of field, based on an acceptable loss of 'sharpness' - say 10% lower MTF50. The problem I had then, and that I have had in answering your question now, is that we do not know what peak focus represents to the viewer. It could mean anything from tack sharp to unacceptably blurry, depending on photograph size and viewing distance. So what does 10% less than that mean?

Now that I better understand your question, here is one way to start looking at it:
  1. calculate the minimum acceptable MTFxx criterion for sharpness given typical print size and viewing conditions (say 1000 lp/ph below).
  2. read off the DOF as a function of f-number based on the MTFxx plot of the lens obtained at the desired subject distance
[Figure: MTFxx plot from which DOF can be read off as a function of f-number]
Yes, indeed. The picking of the threshold MTF is the same kind of action as picking a CoC in classical DOF calculations.

And I don't know if I'd have thought of this way of presenting the sim results if I hadn't done the focus shift studies with real camera data.

By the way, with respect to print size and viewing distance, I note that they are subsumed into MTFxx plots, as long as MTFxx is calculated in cycles per picture height or lw/ph and the viewing distance is proportional to the print size.
3. Come up with a model so that one does not need to measure and produce such laborious MTFxx plots for every lens in one's possession at every distance - and hopefully develop a few rules of thumb :-)
Yes, I'm leaving that for after I get the sim work further along. Your frequency domain methods are probably better than my spatial methods for that. For that approach, deciding how to treat the Bayer CFA might be a little tricky, though.
As long as one is not too finicky, I think (3) can be done relatively easily, but only for well-corrected lenses near the center of the FOV. After that we need alanr0 or Brandon. Here is a start. If you think it might be helpful, I have a more complex version of it that includes terms for defocus, AA and generic aberrations as shown here.
I'll get to those eventually. Right now I have my hands full.

Jim

--
http://blog.kasson.com
 
I will look at speeding up the computations, but the last simulation run took 48985.548248 seconds (love those significant digits). That's about 14 hours. The last run had lens aberrations, and thus larger kernels, which is why it took so long. So it's worth thinking about what runs to do.

I've had a request for a run with the object closer to the camera than the focused distance. I'm not in a hurry to do that, because the result is going to look so much like the result with the object farther away than the focused distance except that the MTF50 will get smaller faster.

I've had another request from the same person (Thanks, Jerry!), to do a run working back from infinity focus. Well, I can't deal with infinity, but if I focus at 1 km, that should be good enough with my 55 mm lens. Then I could do a run with the lens focused at the hyperfocal distance given some f-stop and CoC diameter assumption. That's two nights' work.

I'm thinking about using the same lens blur model as for the curves I posted this morning, which were originally intended to model an Otus 55 on axis, but are more pessimistic than the measurements I am now getting from that lens with my latest computer-driven focusing rail and razor blade target. I'm thinking about upping the camera resolution to 8000x5333 to simulate the Sony a7RII, although that will make the runs take longer.

I'm going to hold off until tonight to start the next run so I can work on speeding up the code, so are there requests in the meantime?

Jim
 
Agreed. Regarding Jerry's CoC formula request, root sum of squares addition can work reasonably well to estimate the radius, provided you figure out what measure of radius is appropriate for distributions with completely different shapes.
Once the central limit theorem kicks in, and everything is Gaussian, RMS radius works nicely. For a slightly blurred pill-box, I would probably go for width at half maximum intensity, but I haven't checked how well that works with an Airy disk.
Terrific! For a formula involving c and d, is Merklinger's (pp. 31, 49--51) d = 5 um at f/8 reasonable? That is, I would use d = (5 um)*N/8, where N is the f-number.
Looks reasonable.

Root sum of squares should certainly give the correct asymptotic behaviour when either c or d dominate. Where the magnitudes are comparable I would expect the results to be useful, though not exact.

For me, a significant unknown is how the subjective appearance of 5 micron diffractive blur compares with 5 micron CoC dominated by defocus. There will also be differences in the MTF characteristics, and this is something that Jim Kasson's simulations should reveal.

Regarding diffracted Airy disk size, radius of the first null is (1.22 x wavelength x F#)

59% of the energy falls within half this radius, at which the intensity is 0.37 of peak.

At 550 nm, radius of the first null is (F# x 0.67 micron). Merklinger's value for the diffraction-limited resolution corresponds to (0.63 F#) which is close enough to my somewhat arbitrary choice. 54% of diffracted energy at 550 nm falls within 5 micron radius at f/8.
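Putting the pieces together, the root-sum-of-squares estimate with Merklinger's scaling might be sketched like this in MATLAB (c is an example defocus value, not a measured one):

% Combine defocus CoC and diffraction blur by root sum of squares.
N = 8;                  % f-number
c = 10;                 % defocus CoC diameter, microns (example value)
d = 5*N/8;              % Merklinger's 5 um at f/8, scaled linearly with N
total = hypot(c, d)     % sqrt(c^2 + d^2), microns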

Cheers,
 
I will look at speeding up the computations, but the last simulation run took 48985.548248 seconds (love those significant digits). That's about 14 hours. The last run had lens aberrations, and thus larger kernels, which is why it took so long. So it's worth thinking about what runs to do.

I've had a request for a run with the object closer to the camera than the focused distance. I'm not in a hurry to do that, because the result is going to look so much like the result with the object farther away than the focused distance except that the MTF50 will get smaller faster.

I've had another request from the same person (Thanks, Jerry!), to do a run working back from infinity focus. Well, I can't deal with infinity, but if I focus at 1 km, that should be good enough with my 55 mm lens. Then I could do a run with the lens focused at the hyperfocal distance given some f-stop and CoC diameter assumption. That's two night's work.

I'm thinking about using the same lens blur model as for the curves I posted this morning, which were originally intended to model an Otus 55 on axis, but are more pessimistic than the measurements than I am now getting from that lens with my latest computer-drive focusing rail and razor blade target. I'm thinking about upping the camera resolution to 8000x5333 to simulate the Sony a7RII, although that will make the runs take longer.

I'm going to hold off until tonight to start the next run so I can work on speeding up the code, so are there requests in the meantime?
Given what you've said, my preference order of results is
  1. 8000x5333.
  2. near infinity, and the hyperfocal followup.
  3. other side of 10 meters.
  4. less pessimistic model of the Otus 55.
But all four are quite interesting.
I just recoded the sim to be able to use frequency domain computations instead of convolutions, and I now have hopes that last night's 14-hour run will take about an hour.

For you Matlab lovers, the basic algorithm is

output = ifft2(fft2(image) .* fft2(kernel))

but it looks a lot uglier than that with all the kernel trimming, FFT padding, and housekeeping.
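For illustration, a padded version that approximates linear rather than circular convolution might look something like this (a sketch, not the actual sim code):

% Linear convolution via FFT: pad both arrays to the full output size,
% multiply in the frequency domain, then crop back to the image size.
[ih, iw] = size(image);
[kh, kw] = size(kernel);
out = real(ifft2(fft2(image, ih+kh-1, iw+kw-1) .* fft2(kernel, ih+kh-1, iw+kw-1)));
out = out(floor(kh/2) + (1:ih), floor(kw/2) + (1:iw));   % 'same'-size crop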

Jim
 
In response to Jerry's request, I'm working on a generalized kernel size metric. In order to work in general, it has to work on kernels that are not radially symmetric. In my current model, phase shift AA filters result in kernels that don't have radial symmetry, and introducing astigmatism and off-axis aberrations at some point in the future will also result in loss of radial symmetry, even if the model is set up to run without an AA filter.

So I now have a proposal for a way to measure kernel size that does not require radial symmetry. It assumes low-pass kernels, with all entries materially positive (it will tolerate small negative entries, however).
  1. Normalize the kernel so that it sums to unity.
  2. If it's not square already, pad it with zeros to make it square.
  3. Reduce the size in both width and height slowly until the kernel sums to x, where x is between zero and one.
  4. Report the size of the kernel.
One advantage is that the result is a scalar. One disadvantage is that, if the kernel is long and skinny, we get just the long dimension.

Choosing x is not obvious. If we choose a number close to one, a pillbox filter will measure out to its diameter, but an Airy solid will probably report a value much larger than the distance between first zeros. If we choose a number close to 0.67, a Gaussian will report a bit less than its standard deviation (did I do that right?), but a pillbox will report a number a bit less than 8/9 of its radius (did I do that right?).
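Here is my reading of the proposal as a MATLAB sketch (assuming a square, odd-sized kernel, trimming one border ring at a time; illustrative only):

% Proposed kernel-size metric: shrink a centered window until the
% enclosed sum drops to x, then report the window size.
k = kernel / sum(kernel(:));      % step 1: normalize to unit sum
n = size(k, 1);                   % step 2: assumed already square, n odd
x = 0.67;                         % enclosed-energy fraction (see above)
for sz = n:-2:1                   % step 3: shrink symmetrically
    lo = (n - sz)/2 + 1;
    hi = lo + sz - 1;
    if sum(sum(k(lo:hi, lo:hi))) <= x
        break
    end
end
sz                                % step 4: report the size, in pixels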

Anyway, what does everybody, particularly Alan O and Jack H, think of this?

Jim
 
One of the more challenging images in Merklinger's simpler book on the object-space method is on p. 48, which I call "Cannon overlooking Placentia, Newfoundland."

The challenge: What aperture and focus distance do you use?

I plan to discuss this example in some detail shortly, because I've simulated CoCs based on both defocus and diffraction for several different CoC management plans. The plans compare different methods, including Merklinger's object field method.

The challenge is to manage depth of field for the best image according to your taste. If the image does not please, you could imagine that the Placentia C of C is your client, and they want a large print.

Some information gleaned from the Internet and the lens equation:
  • The 12-foot (bore) cannon is 50 feet away.
  • The foreground pine trees are perhaps 75--150 feet away.
  • The buildings you can see in the town are 2160--5280 feet away.
  • The distant trees may well extend to five miles away.
Jim answered the challenge in the earlier thread, but I had not yet given this data about distances of key objects in the image.

I don't want to limit your creativity, but consider assuming the following for the sake of focusing on CoC-management issues:
  1. Lens, a superb 90mm
  2. Camera, maybe the best in FF for this image, a Sony a7RII 42 MP.
  3. with tripod; wind not terrible
  4. Camera position, time of day, season, weather, all match Merklinger's situation.
  5. Just one image; no panorama, no tilt-shift, no focus stacking
  6. You can plan deconvolution, sharpening, contrast enhancement, etc., as global changes in post.
Jim's answer is strong, of course, but it has some surprises, and we can still compare the object-space method to the image-space method to see which works better according to our tastes.
 
Regarding Jerry's CoC formula request, root sum of squares addition can work reasonably well to estimate the radius, provided you figure out what measure of radius is appropriate for distributions with completely different shapes.
Take the blur "radius" as equal to the std. deviation of a function (PSF) and the measure of resolution as 1 / (std. dev.), i.e., resolution^2 = 1 / var. Then 1/res^2= 1/res1^2 + 1/res2^2.

Please see below for a simple example:

http://www.dpreview.com/forums/post/50700374

--

Dj Joofa
 
One of the more challenging images in Merklinger's simpler book on the object-space method is on p. 48, which I call "Cannon overlooking Placentia, Newfoundland."

The challenge: What aperture and focus distance do you use?

I plan to discuss this example in some detail shortly, because I've simulated CoCs based on both defocus and diffraction for several different CoC management plans. The plans compare different methods, including Merklinger's object field method.

The challenge is to manage depth of field for the best image according to your taste. If the image does not please, you could imagine that the Placentia C of C is your client, and they want a large print.

Some information gleaned from the Internet and the lens equation:
  • The 12-foot (bore) cannon is 50 feet away.
  • The foreground pine trees are perhaps 75--150 feet away.
  • The buildings you can see in the town are 2160--5280 feet away.
  • The distant trees may well extend to five miles away.
Jim answered the challenge in the earlier thread, but I had not yet given this data about distances of key objects in the image.

I don't want to limit your creativity, but consider assuming the following for the sake of focusing on CoC-management issues:
  1. Lens, a superb 90mm
  2. Camera, maybe the best in FF for this image, a Sony a7RII 42 MP.
  3. with tripod; wind not terrible
  4. Camera position, time of day, season, weather, all match Merklinger's situation.
  5. Just one image; no panorama, no tilt-shift, no focus stacking
  6. You can plan deconvolution, sharpening, contrast enhancement, etc., as global changes in post.
Jim's answer is strong, of course, but it has some surprises, and we can still compare the object-space method to the image-space method to see which works better according to our tastes.
I like that photo, and it turns out you can use Street View in Google Maps to go to almost the exact camera position on Castle Hill, as far as I can tell, although I suspect the cannon has been subsequently turned to the left a bit, perhaps moved, and it is now placed on a concrete pad. At first I was dubious that the cannon was as far away as you suggest, but looking at the Street View, I suspect that you are correct.

Merklinger's method has two basic rules of thumb, at least as far as I understand it: either focus halfway between the near and far points of interest and use an aperture setting that adequately resolves the objects according to the lens's front pupil diameter; or, as a special case, if infinity needs to be sharp, focus at infinity and set the aperture to resolve the front object of interest.

Were I to photograph that scene today, I'd simply focus on the church (0.700 miles, or 3,696 feet, away according to Google Maps) and set the aperture at f/11, as is my custom in such situations. But then I'd estimate that the resolving power of the lens at the cannon would be no more than 8 mm, which might not be quite enough, assuming the image is greatly enlarged. I would roughly estimate that about 4 mm of resolution or better is needed on the cannon, but that would imply an aperture setting of f/22 or tighter, assuming the focus is in the far distance, roughly infinity. This would give noticeable diffraction, but not so much that it would look particularly bad with sharpening.
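For anyone checking these figures, the object-field arithmetic is simple; here is a thin-lens MATLAB sketch of the 8 mm estimate (my sketch of Merklinger's rule, not his formula verbatim):

% Merklinger-style object-space blur: the disk of confusion at an object
% scales with the entrance pupil and the object's offset from the focus plane.
f = 90; N = 11;                    % focal length (mm), f-number
pupil = f/N;                       % entrance pupil diameter, ~8.2 mm
s = 3696; dObj = 50;               % focus and object distances, feet
blur = pupil * abs(s - dObj)/s     % blur at the cannon, ~8.1 mm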

Because significant points of interest are in the near and far distance, I don't think it is quite possible to get superb sharpness throughout. For sure, I'd ignore the background forest as insignificant, and concentrate on the cannon and the town, and so we can bring the focus much closer.

Based on the simplification of modeling blur geometrically according to the thin lens approximation, and assuming a perfectly circular and uniform blur disk, and then handling diffraction after the fact, we can come up with a better focus distance: let's also assume that the lens has its maximum sharpness between f/4 and f/8, with f/11 being only slightly worse.

I estimate that approximately 296 mm of resolution is needed on the church to visually equal the 4 mm of resolution needed on the cannon, and so we need to find a focus point where the distance between it and the church is 74 times the distance from the point of focus to the cannon.

(Focus - 50 ft) * 74 = (3696 ft - Focus)

Focus ≈ 99 feet.
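The focus distance falls out of one line of algebra; as a sketch:

% Solve (Focus - dNear)*ratio = (dFar - Focus) for the focus distance.
dNear = 50; dFar = 3696; ratio = 74;       % feet; 74 = 296 mm / 4 mm
focus = (dFar + ratio*dNear)/(ratio + 1)   % ~98.6, rounded to 99 feet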

Since the near point of interest, the cannon, is almost exactly half the distance to the focus point, to get a 4 mm resolution at the cannon, we'd need an 8mm aperture width, which would be f/11 again. This assumes the focus distance is measured from the center of the lens, which isn't a bad approximation in this case. This aperture setting is probably close to optimal for this focus distance, even taking diffraction into account, if you want to resolve both the cannon and the town equally well.

Practically speaking, we could focus on the trees in this situation.

--
http://therefractedlight.blogspot.com
 
So I now have a proposal for a way to measure kernel size that does not require radial symmetry. It assumes low-pass kernels, with all entries materially positive (it will tolerate small negative entries, however0.
  1. Normalize the kernel so that it sums to unity.
  2. If it's not square already, pad it with zeros to make it square.
  3. Reduce the size in both width and height slowly until the kernel sums to x, where x is between zero and one.
Anyway, what does everybody, particularly Alan O and Jack H, think of this?

Jim

--
http://blog.kasson.com
Hmmm..... I think that step 3 might introduce a dependence on the orientation of the kernel. You mentioned a "long skinny kernel" --- if that kernel was oriented at 45 degrees it would give a very different answer than when it is oriented along one of the axes.

This would crop up frequently if you are dealing with astigmatism in the radial/tangential directions as opposed to horizontal/vertical (we have Imatest to thank for that ugly oversimplification...)

Running the kernel through a PCA (principal component analysis), i.e., calculating the eigenvectors of the covariance of the x and y coordinates weighted by the kernel amplitude (at each x,y coordinate), will give you a pair of orthogonal axes aligned with the two directions of maximal variance (in x and y, but weighted by the kernel).

The two eigenvalues will already give you the ML estimate of the standard deviations of the Gaussians along each direction (I think ...), but you can ignore them and simply use your step 3 above to reduce the width/height along the two eigenvectors to effectively make your kernel size metric invariant to kernel orientation.
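A minimal MATLAB sketch of that weighted PCA (my paraphrase of the idea; k is the kernel as a matrix):

% Weighted PCA of a 2D kernel: principal axes of the amplitude-weighted
% pixel coordinates, invariant to kernel orientation.
[X, Y] = meshgrid(1:size(k,2), 1:size(k,1));
w = k(:) / sum(k(:));                      % amplitudes as weights
mx = sum(w .* X(:)); my = sum(w .* Y(:));  % weighted centroid
dxy = [X(:) - mx, Y(:) - my];              % centered coordinates
C = dxy' * (dxy .* w);                     % 2x2 weighted covariance
[V, D] = eig(C);                           % columns of V: principal axes
sigmas = sqrt(diag(D))                     % spread along each axis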

Of course, if we are modeling only near the principal point, then I guess there would not be much astigmatism, but then I wonder what would cause a "long skinny kernel".

-F
 
I do have a rough model in the frequency domain but I have not given much thought to the spatial domain. I know Frans has, though, in designing the mtf-generate-rectangle.exe part of MTF Mapper.
I specify things in the spatial domain, but there's a set of methods that I haven't used in a while and probably are slightly broken by now that do the calcs in the frequency domain, which is faster if the image is not very sharp.

Jim

--
http://blog.kasson.com
The main reason I use spatial domain sampling to render synthetic images is to allow me to use "perfect" object geometry, i.e., target objects are simulated as polygons. The moment you move to the frequency domain you have to first discretize your target object geometry. I suppose one can calculate the impact of this discretization to see if it matters.

Although mtf-generate-rectangle does not support arbitrary 2D PSFs, it would be relatively straightforward to add code to apply the importance sampling method to a 2D discretized PSF. Is it better to discretize the target geometry, or the PSF? Even if you discretize the PSF, you can still perform some sub-sample jittering (within each PSF cell, assuming equal PSF intensity within the cell) to fully leverage the "infinite spatial resolution" of the target polygon geometry.

The importance sampling method can be fairly efficient: I have an early-stopping criterion that monitors the variance of the intensity of a rendered pixel as I add more samples. Once this variance drops below 2^-16 (for 16-bit output), I stop sampling. By ordering the importance samples so that many of the outer points (i.e., points further from the centre of the PSF) are sampled first, this strategy seems to work pretty well to focus your CPU cycles on only the interesting parts of the simulated image. This seems to me to be the main advantage over FFT-based convolution, which is forced to spend equal time on all samples/pixels.
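As a generic illustration of that early-stopping idea (not the actual mtf-generate-rectangle implementation; samplePsf, minSamples, and maxSamples are hypothetical), one might track the running variance with Welford's method:

% Monte Carlo pixel estimate with variance-based early stopping.
tol = 2^-16; n = 0; m = 0; M2 = 0;        % running mean m, sum of squares M2
while n < maxSamples
    v = samplePsf();                      % next importance sample (hypothetical)
    n = n + 1;
    d = v - m; m = m + d/n; M2 = M2 + d*(v - m);
    if n >= minSamples && M2/(n*(n-1)) < tol   % variance of the running mean
        break
    end
end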

I recently ran into some numerical rounding issues while using OpenCV to perform 2D FFTs on large matrices (around 16384x16384 samples), but I suspect that one can avoid these issues with clever implementation (e.g., splitting a large FFT into several smaller ones), or using a better FFT implementation.
 
JimKasson wrote: It's interesting to think about how we'd incorporate focus shift into our calculations. The lens as modeled has no focus shift, and, we'd get these results in a lens with focus shift if we always focused at the taking aperture. But a lot of people don't do that. For them, stopping down introduces a new source of blur at the focused distance.
Good point. Reminder to self: use live view focusing more often.

Jack
 
JimKasson wrote: By the way, with respect to print size and viewing distance, I note that they are subsumed into MTFxx plots, as long as MTFxx is calculated in cycles per picture height or lw/ph and the viewing distance is proportional to the print size.
Right, they don't change as long as the ratio of viewing distance to print size is kept constant. This ratio needs to be declared in order for the relative figures to be relevant.
 
In response to Jerry's request, I'm working on a generalized kernel size metric. In order to work in general, it has to work on kernels that are not radially symmetric. In my current model, phase shift AA filters result in kernels that don't have radial symmetry, and introducing astigmatism and off-axis aberrations at some point in the future will also result in loss of radial symmetry, even if the model is set up to run without an AA filter.

So I now have a proposal for a way to measure kernel size that does not require radial symmetry. It assumes low-pass kernels, with all entries materially positive (it will tolerate small negative entries, however).
  1. Normalize the kernel so that it sums to unity.
  2. If it's not square already, pad it with zeros to make it square.
  3. Reduce the size in both width and height slowly until the kernel sums to x, where x is between zero and one.
  4. Report the size of the kernel.
One advantage is that the result is a scalar. One disadvantage is that, if the kernel is long and skinny, we get just the long dimension.

Choosing x is not obvious. If we choose a number close to one, a pillbox filter will measure out to its diameter, but an Airy solid will probably report a value much larger than the distance between first zeros. If we choose a number close to 0.67, a Gaussian will report a bit less than its standard deviation (did I do that right?), but a pillbox will report a number a bit less than 8/9 of its radius (did I do that right?).

Anyway, what does everybody, particularly Alan O and Jack H, think of this?
Frans' approach sounds excellent.

Another way to reduce processing time substantially would be to work in 1D in just a few representative directions (see Radon Transform and Fourier Slice Theorem in Gonzalez & Woods*). You could run 1D convolutions and/or Fourier transforms of Radon projections of kernel and target at, say, 0, 45, 90, and 135 degrees. Chosen angles could be fine-tuned through Frans' suggestion.
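With the Image Processing Toolbox, that might look like this (a sketch; kernel is assumed to be a 2D PSF matrix):

% Radial MTF slices via the Fourier slice theorem: the 1D FFT of each
% Radon projection is a slice of the 2D MTF along that direction.
angles = [0 45 90 135];              % representative directions, degrees
proj = radon(kernel, angles);        % one projection per column
mtf1d = abs(fft(proj));              % columnwise FFT
mtf1d = mtf1d ./ mtf1d(1,:);         % normalize each slice so DC = 1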

Jack

* This is why the 1D slanted edge method results in a radial slice of the full 2D MTF, btw.
 
JimKasson wrote: It's interesting to think about how we'd incorporate focus shift into our calculations. The lens as modeled has no focus shift, and, we'd get these results in a lens with focus shift if we always focused at the taking aperture. But a lot of people don't do that. For them, stopping down introduces a new source of blur at the focused distance.
Good point. Reminder to self: use live view focusing more often.
Jack,

This raises another question. How many cameras offer Live view focusing stopped down?

On my Pentax K5ii, Live view appears to vary aperture to control brightness during viewing, but opens to full aperture for focus, and only stops down to the selected aperture when the shutter is tripped. Not something I use often, so I may have missed an option, and other cameras will differ.

Cheers,
 
One of the more challenging images in Merklinger's simpler book on the object-space method is on p. 48, which I call "Cannon overlooking Placentia, Newfoundland."

The challenge: What aperture and focus distance do you use?

The challenge is to manage depth of field for the best image according to your taste. If the image does not please, you could imagine that the Placentia C of C is your client, and they want a large print.

Some information gleaned from the Internet and the lens equation:
  • The 12-foot (bore) cannon is 50 feet away.
  • The foreground pine trees are perhaps 75--150 feet away.
  • The buildings you can see in the town are 2160--5280 feet away.
  • The distant trees may well extend to five miles away.
I haven't (yet) read all of Merklinger's book, but I shall assume Mark has provided an accurate executive summary. I reach pretty much the same conclusion as Mark, but find the image-plane perspective more useful for this particular case. Here are two alternative solutions.

Assumptions:
  • Resolution of camera sensor (effective blur diameter) is 5 micron.
  • 90 mm lens is diffraction-limited at f/4.
Subjects of interest:
  • Cannon at 15 m.
  • Trees at 20-50 m.
  • Buildings at 700-1600 m.
Diffraction-limited resolution is around 0.66 F#, so f/8 is a good match to the camera resolution. Root-sum-of-squares convolution gives 7.3 micron combined blur at f/8, compared with 5.7 micron at f/4.

If we focus on the building at 1100 m, 7 micron blur at the sensor gives 86 mm object plane blur - the building is effectively at infinity, where subject blur is much larger than the aperture diameter.

For equal image blur at distances d1, d2, we focus at distance 2/(1/d1 + 1/d2).
For close distances, this is mid-way between. When the ratio of distances is much greater than unity, we use twice the distance of the near subject.

At 30 m focus distance, axial defocus at the sensor for both 15 m and infinity subjects is 271 micron.
Geometric blur diameter is 271 micron / F#
Diffractive blur diameter is F# x 0.66 micron
F# for minimum net blur is sqrt(271/0.66) = f/20.

Geometric image plane blur = Diffractive blur = 0.0135 mm
Net blur at sensor = 0.02 mm
Object plane blur at cannon = 3.3 mm
Object plane blur at 1100 m = 244 mm
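These numbers check out under a thin-lens model; a quick MATLAB sanity check (my sketch, lengths in mm):

% Thin-lens check of the 30 m focus solution.
f = 90; s = 30e3;                  % focal length, focus distance (mm)
v = f*s/(s - f);                   % image distance: ~90.271 mm
dz = v - f;                        % axial defocus vs. infinity: ~0.271 mm
N = sqrt(dz*1e3/0.66);             % equate dz/N to 0.66e-3*N: ~f/20
geo = dz/N, diffr = 0.66e-3*N      % each ~0.0135 mm
net = hypot(geo, diffr)            % ~0.019 mm at the sensor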

Alternative solution:

If we decide the buildings need better sharpness, we can work at f/11 and aim for 0.012 mm blur at the sensor (2000 lines vertical resolution)
Diffractive blur at sensor = 0.0073 mm.
Tolerable geometric blur is 0.0081 mm, achieved with 90 m focus at f/11.

Geometric blur at 15 m with f/11 infinity focus = 0.049 mm (8.2 mm at subject)
Geometric blur at 15 m with f/11 focus at 90 m = 0.041 mm
Net blur at 15m, f/11 focus at 90 m = 0.042 mm at sensor, (7 mm at subject).

All rather subjective and subject-dependent. For a compromise favouring the buildings: f/11 and focus at 90 m

Regards,

--
Alan Robinson
 
JimKasson wrote: It's interesting to think about how we'd incorporate focus shift into our calculations. The lens as modeled has no focus shift, and, we'd get these results in a lens with focus shift if we always focused at the taking aperture. But a lot of people don't do that. For them, stopping down introduces a new source of blur at the focused distance.
Good point. Reminder to self: use live view focusing more often.
Jack,

This raises another question. How many cameras offer Live view focusing stopped down?

On my Pentax K5ii, Live view appears to vary aperture to control brightness during viewing, but opens to full aperture for focus, and only stops down to the selected aperture when the shutter is tripped. Not something I use often, so I may have missed an option, and other cameras will differ.
Ugh, you are right Alan. I need to investigate how mine works.

EDIT: It appears that my D610 stops down to the selected f-number when entering live view and stays there during focusing, so I should be ok.
 
JimKasson wrote: It's interesting to think about how we'd incorporate focus shift into our calculations. The lens as modeled has no focus shift, and, we'd get these results in a lens with focus shift if we always focused at the taking aperture. But a lot of people don't do that. For them, stopping down introduces a new source of blur at the focused distance.
Good point. Reminder to self: use live view focusing more often.
Jack,

This raises another question. How many cameras offer Live view focusing stopped down?

On my Pentax K5ii, Live view appears to vary aperture to control brightness during viewing, but opens to full aperture for focus, and only stops down to the selected aperture when the shutter is tripped. Not something I use often, so I may have missed an option, and other cameras will differ.

Cheers,
I can say for sure that the Nikon D3x, D4, D5, D800, and D810 all allow stopped-down LV focusing, as do the Sony a7, a7R, a7S, a7II, and a7RII if you choose "Setting Effect On".

I have been testing lenses for focus shift for the last few months, and I've been appalled at what I've found, even on lenses like the Zeiss Otus 85.

http://blog.kasson.com/?s=focus+shift

Jim

--
http://blog.kasson.com
 
