Depth of field - how best to explain it?

About "exact" formulas - all formulas are exact but none of them is an exact model. First, they could be very inaccurate away from the center.
It's valid for any rectilinear lens, on or off axis. Interested readers who have had plane geometry in school are invited to prove for themselves the corollary that it's valid off axis.

There are errors due to barrel or pincushion distortion, but these are minor. The magnification is still constant over the entire field within a few percent at most. The focal "plane" is actually a curved surface in general. Readers should be able to manage these complications. They apply equally to DOF calculations based on the exit pupil, and Tom Axford shouldn't have to complicate the argument with these extraneous considerations.
What is missing is the derivation of the formulas from the simple principle you mentioned; in other words, the whole proof is missing.
The whole derivation is extraordinarily simple, and that's the beauty of it. It is based on the properties of two similar triangles. Although it might be better if Tom Axford had spelled this out with a geometrical proof, any reader who has had plane geometry in school should be able to work this out.
 
About "exact" formulas - all formulas are exact but none of them is an exact model. First, they could be very inaccurate away from the center.
It's valid for any rectilinear lens, on or off axis. Interested readers who have had plane geometry in school are invited to prove for themselves the corollary that it's valid off axis.
Except that the (visible) aperture changes a lot, both in shape and size, toward the corners. The loss is 2+ stops for fast lenses. Then there is the curved field.

It is not an "academic" difference. I have posted examples of equivalent shots with similar BG blur in the center and a huge difference away from it. BG blur is easier to assess, but that would work similarly for DOF.
There are errors due to barrel or pincushion distortion, but these are minor. The magnification is still constant over the entire field within a few percent at most. The focal "plane" is actually a curved surface in general. Readers should be able to manage these complications. They apply equally to DOF calculations based on the exit pupil, and Tom Axford shouldn't have to complicate the argument with these extraneous considerations.
What is missing is the derivation of the formulas from the simple principle you mentioned; in other words, the whole proof is missing.
The whole derivation is extraordinarily simple, and that's the beauty of it. It is based on the properties of two similar triangles. Although it might be better if Tom Axford had spelled this out with a geometrical proof, any reader who has had plane geometry in school should be able to work this out.
I already acknowledged that I have not read the OP carefully. Since you mentioned similar triangles - the most important triangles there are QRS and QRP which are not similar. To move the whole thing from the image to the sensor plane and/or back, you can use similar triangles but even then you do not have to. So "based on" is a bit of a strong statement.
 
About "exact" formulas - all formulas are exact but none of them is an exact model. First, they could be very inaccurate away from the center.
<snip>
What is missing is the derivation of the formulas from the simple principle you mentioned; in other words, the whole proof is missing.
The whole derivation is extraordinarily simple, and that's the beauty of it. It is based on the properties of two similar triangles. Although it might be better if Tom Axford had spelled this out with a geometrical proof, any reader who has had plane geometry in school should be able to work this out.
I already acknowledged that I have not read the OP carefully. Since you mentioned similar triangles - the most important triangles there are QRS and QRP which are not similar. To move the whole thing from the image to the sensor plane and/or back, you can use similar triangles but even then you do not have to. So "based on" is a bit of a strong statement.
I think Tom considers PQR similar to PTU, and SQR similar to STU.

The ratio D = QR / TU is used to derive the denominators in equations (1, 2).
 
About "exact" formulas - all formulas are exact but none of them is an exact model. First, they could be very inaccurate away from the center.
It's valid for any rectilinear lens, on or off axis. Interested readers who have had plane geometry in school are invited to prove for themselves the corollary that it's valid off axis.
Except that the (visible) aperture changes a lot, both in shape and size, toward the corners. The loss is 2+ stops for fast lenses. Then there is the curved field.

It is not an "academic" difference. I have posted examples of equivalent shots with similar BG blur in the center and a huge difference away from it. BG blur is easier to assess, but that would work similarly for DOF.
Good point, and it's something for photographers to know about, but it's extraneous to Axford's analysis. It is an unstated assumption that the pupil is circular. If you want to consider noncircular pupils, there's the effect of pentagonal, hexagonal, etc. pupils to consider. All this also affects the image-side analysis, as well as other topics such as panoramic photography.
There are errors due to barrel or pincushion distortion, but these are minor. The magnification is still constant over the entire field within a few percent at most. The focal "plane" is actually a curved surface in general. Readers should be able to manage these complications. They apply equally to DOF calculations based on the exit pupil, and Tom Axford shouldn't have to complicate the argument with these extraneous considerations.
What is missing is the derivation of the formulas from the simple principle you mentioned; in other words, the whole proof is missing.
The whole derivation is extraordinarily simple, and that's the beauty of it. It is based on the properties of two similar triangles. Although it might be better if Tom Axford had spelled this out with a geometrical proof, any reader who has had plane geometry in school should be able to work this out.
I already acknowledged that I have not read the OP carefully.
Yes you did. That's why I aimed it at "readers" instead of you. But you should have. :)
Since you mentioned similar triangles - the most important triangles there are QRS and QRP which are not similar.
Triangle PQR is similar to triangle PTU, and triangle SQR is similar to triangle STU. Everyone who has had plane geometry in school should be able to work this out from here.
To move the whole thing from the image to the sensor plane and/or back, you can use similar triangles but even then you do not have to. So "based on" is a bit of a strong statement.
The next step in the derivation is based on those four triangles and nothing else. Actually, the derivation would be clearer if the triangles were bisected by the optic axis, so you would have right triangles instead of isosceles triangles. The distance to the subject plane or the distance to the entrance pupil would be the leg of a triangle instead of the height of a triangle. But please, do read Tom Axford's post carefully and work this out yourself before you reply.
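For readers who want the object-space argument spelled out, here is a sketch of the similar-triangle derivation. The symbols A, s, x, and m are my own, and the construction may not match Tom Axford's figure point-for-point:

```latex
% Entrance pupil of diameter A; plane of sharp focus at distance s
% from the pupil; defocused point source at distance x (x > s).
% The cone of rays from the source that just fills the pupil has its
% apex at the source and its base at the pupil, so by similar
% triangles its cross-section at the plane of sharp focus has diameter
\[
  k \;=\; A\,\frac{x - s}{x}.
\]
% That disk lies in the plane of sharp focus, so the lens images it
% sharply onto the sensor at the system magnification m, giving a
% blur circle of diameter
\[
  c \;=\; m\,k \;=\; m\,A\,\frac{x - s}{x},
\]
% with no thin-lens approximation anywhere: only the entrance pupil,
% the magnification, and two similar triangles.
```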
 
Tom Axford, I have to confess to not reading your derivation very carefully either, because I am already familiar with the arguments. (Confession: I still haven't read it completely, actually.)

However, I think the derivation would be easier if you label the intersections of the optic axis with the subject plane and the plane of the entrance pupil. That way, the pertinent distances can be the sides of the right triangles along the optic axis, instead of the heights of triangles. Then just say that triangle PQR is similar to triangle PTU, and triangle SQR is similar to triangle STU, so the ratio of __ to __ is __, etc. Apparently it is not obvious even to the readers of this forum. You might also want to state the assumptions clearly and use these assumptions to formally derive the geometrical consequences. Apparently, some of the readers, who are reading all this at the speed of the Internet, don't understand either the assumptions or the geometry.
 
About "exact" formulas - all formulas are exact but none of them is an exact model. First, they could be very inaccurate away from the center.
It's valid for any rectilinear lens, on or off axis. Interested readers who have had plane geometry in school are invited to prove for themselves the corollary that it's valid off axis.
Except that the (visible) aperture changes a lot, both in shape and size, toward the corners. The loss is 2+ stops for fast lenses. Then there is the curved field.

It is not an "academic" difference. I have posted examples of equivalent shots with similar BG blur in the center and a huge difference away from it. BG blur is easier to assess, but that would work similarly for DOF.
Good point, and it's something for photographers to know about, but it's extraneous to Axford's analysis. It is an unstated assumption that the pupil is circular. If you want to consider noncircular pupils, there's the effect of pentagonal, hexagonal, etc. pupils to consider. All this also affects the image-side analysis, as well as other topics such as panoramic photography.
I meant vignetting, and that remark was directed to other posts, not to the OP. You get cat-eyes, cut cat-eyes, etc.
There are errors due to barrel or pincushion distortion, but these are minor. The magnification is still constant over the entire field within a few percent at most. The focal "plane" is actually a curved surface in general. Readers should be able to manage these complications. They apply equally to DOF calculations based on the exit pupil, and Tom Axford shouldn't have to complicate the argument with these extraneous considerations.
What is missing is the derivation of the formulas from the simple principle you mentioned; in other words, the whole proof is missing.
The whole derivation is extraordinarily simple, and that's the beauty of it. It is based on the properties of two similar triangles. Although it might be better if Tom Axford had spelled this out with a geometrical proof, any reader who has had plane geometry in school should be able to work this out.
I already acknowledged that I have not read the OP carefully.
Yes you did. That's why I aimed it at "readers" instead of you. But you should have. :)
Guilty as charged.
Since you mentioned similar triangles - the most important triangles there are QRS and QRP which are not similar.
Triangle PQR is similar to triangle PTU, and triangle SQR is similar to triangle STU. Everyone who has had plane geometry in school should be able to work this out from here.
Yes, but that is a trivial observation (which does not make Tom's writeup trivial!); and I already mentioned that they play a role. In my opinion, not the crucial one.
To move the whole thing from the image to the sensor plane and/or back, you can use similar triangles but even then you do not have to. So "based on" is a bit of a strong statement.
The next step in the derivation is based on those four triangles and nothing else.
The first step is to come up with the whole concept (which I missed the first time); the rest is bookkeeping.
 
I meant vignetting, and that remark was directed to other posts, not to the OP. You get cat-eyes, cut cat-eyes, etc.
Yes. In that case the entrance pupil is occluded, and effectively noncircular.
Since you mentioned similar triangles - the most important triangles there are QRS and QRP which are not similar.
Triangle PQR is similar to triangle PTU, and triangle SQR is similar to triangle STU. Everyone who has had plane geometry in school should be able to work this out from here.
Yes, but that is a trivial observation (which does not make Tom's writeup trivial!); and I already mentioned that they play a role. In my opinion, not the crucial one.
If you mean that my statement is trivial, it's not trivial at all. That triangle PQR is similar to triangle PTU, and that triangle SQR is similar to triangle STU is the essence of the depth of field, and it is exactly where the derivation comes from.

If by "trivial" you mean "obvious", well, yes, I would have thought so.
 
[snip]

Good point, and it's something for photographers to know about, but it's extraneous to Axford's analysis. It is an unstated assumption that the pupil is circular. If you want to consider noncircular pupils, there's the effect of pentagonal, hexagonal, etc. pupils to consider. All this also affects the image-side analysis, as well as other topics such as panoramic photography.
I meant vignetting, and that remark was directed to other posts, not to the OP. You get cat-eyes, cut cat-eyes, etc.

[snip]
In a way, it is quite a subtle point. Assuming the physical stop is circular, then the lens system has circular symmetry, so it is easy to make the assumption that a plane diagram covers the whole problem. However, as soon as you take the point source off-axis, then the symmetry is broken, as pointed out by J A C S.

I still think the idea of describing OOF blur in terms of perspective is a nice, intuitive way of getting a feel for how large the blur is in front of and behind the focus "plane". But is there any benefit in going further than the simple thin lens approximation? That I'm not so sure of.

S.
 
[snip]

Good point, and it's something for photographers to know about, but it's extraneous to Axford's analysis. It is an unstated assumption that the pupil is circular. If you want to consider noncircular pupils, there's the effect of pentagonal, hexagonal, etc. pupils to consider. All this also affects the image-side analysis, as well as other topics such as panoramic photography.
I meant vignetting, and that remark was directed to other posts, not to the OP. You get cat-eyes, cut cat-eyes, etc.

[snip]
In a way, it is quite a subtle point. Assuming the physical stop is circular, then the lens system has circular symmetry, so it is easy to make the assumption that a plane diagram covers the whole problem. However, as soon as you take the point source off-axis, then the symmetry is broken, as pointed out by J A C S.
This is getting repetitious, and it's overstated. The vignetting applies only to some lenses at their widest apertures, and it does not apply over the whole image field. And, as has been mentioned at least twice now, this same problem applies to any method of calculating depth of field. It's not part of the classical DOF equations because it's highly lens-specific, and the effect varies across the image field.

Fun fact: You can see the entrance pupil, including vignetting, by looking backwards through your SLR.
I still think the idea of describing OOF blur in terms of perspective is a nice, intuitive way of getting a feel for how large the blur is in front of and behind the focus "plane". But is there any benefit in going further than the simple thin lens approximation? That I'm not so sure of.
How many times do we have to say it? Tom Axford did not use the thin-lens approximation. The derivation applies to any rectilinear lens.

The thick-lens equations are useful if you try to calculate the position of the entrance pupil. But I think I know what you are saying. It may not be worth doing the calculations for close distances because the parameters can be highly lens-specific and often highly variable. I think that's the reason that
 
This is getting repetitious, and it's overstated. The vignetting applies only to some lenses at their widest apertures, and it does not apply over the whole image field.
May I buy your lenses? :-)

This is wrong on all counts. First, it is understated. It (the mechanical vignetting) applies to the majority of the lenses, not just at the widest apertures, and in a large area over the image field.
 
This is getting repetitious, and it's overstated. The vignetting applies only to some lenses at their widest apertures, and it does not apply over the whole image field.
May I buy your lenses? :-)

This is wrong on all counts. First, it is understated. It (the mechanical vignetting) applies to the majority of the lenses, not just at the widest apertures, and in a large area over the image field.
The only lens I can think of that it doesn't apply to is a spherical Luneburg lens.

S.
 
This is getting repetitious, and it's overstated. The vignetting applies only to some lenses at their widest apertures, and it does not apply over the whole image field.
May I buy your lenses? :-)

This is wrong on all counts. First, it is understated. It (the mechanical vignetting) applies to the majority of the lenses, not just at the widest apertures, and in a large area over the image field.
OK, touché. Many lenses -- perhaps most, and especially fast lenses -- show this to some degree or other. Photographers need to know this, and it has been noted. But it's beyond the scope of normal DOF calculations, and it's off the subject of the thread.

For the record, for many lenses it has only a very minor effect on depth of field. Here are two pictures taken with a very ordinary kit lens, with aperture wide open. These are heavily defocused to show the entrance pupil. The circle of confusion is partly occluded in the corners, but the effect is minor and will not have a large effect on the depth of field.

18 mm, aperture wide open, f/3.5. Crop of upper right-hand corner.

55 mm, aperture wide open, f/5.6

I know you can find some examples of a much larger effect. Please, it's not necessary. Let's not hijack Axford's thread.
 
This is getting repetitious, and it's overstated. The vignetting applies only to some lenses at their widest apertures, and it does not apply over the whole image field.
May I buy your lenses? :-)

This is wrong on all counts. First, it is understated. It (the mechanical vignetting) applies to the majority of the lenses, not just at the widest apertures, and in a large area over the image field.
OK, touché. Many lenses -- perhaps most, and especially fast lenses -- show this to some degree or other. Photographers need to know this, and it has been noted. But it's beyond the scope of normal DOF calculations, and it's off the subject of the thread.
The only reason I mentioned it was because someone else mentioned exact or precise formulas, or something of this sort.

 
Thank you for that link.

Yes, there have been occasional discussions of the object-space approach going back a very long time. The earliest I know of is by Moritz von Rohr towards the end of the nineteenth century.

However, the object-space approach has remained remarkably little known compared to the thin-lens model. I think it deserves to be much better known and should be mentioned much more often in tutorials on depth of field. It seems to me to be conceptually simpler to understand and to give very good insight into the optics involved without the added complications of understanding how lenses work as well.
 
Thank you for that link.

Yes, there have been occasional discussions of the object-space approach going back a very long time. The earliest I know of is by Moritz von Rohr towards the end of the nineteenth century.

However, the object-space approach has remained remarkably little known compared to the thin-lens model. I think it deserves to be much better known and should be mentioned much more often in tutorials on depth of field. It seems to me to be conceptually simpler to understand and to give very good insight into the optics involved without the added complications of understanding how lenses work as well.
Agreed, and thanks for the link. The illustration looks quite familiar with its "eS" property ... :-D

I also like Merklinger's point of view which emphasizes the aperture diameter and focus distance as main factors:

http://www.trenholm.org/hmmerk/DOFR.html

See Figure 2.

You've probably read it or his other one, I imagine.

--
Pedantry is hard work, but someone's gotta do it ...
 
I also like Merklinger's point of view which emphasizes the aperture diameter and focus distance as main factors:

http://www.trenholm.org/hmmerk/DOFR.html
Good find, Ted; Lyon's is an excellent historical review. It is just missing some state-of-the-art references, like Jeff Conrad's 'Depth of Field in Depth' (2004) and Alan's 'DOF from Nw and Magnification' (2020), added here to bookmark for future use.
 
Perusing many posts on depth-of-field, I observe facts, data, and lore; I now add my 2¢.

A wearisome dialogue on Depth-of-Field:

Before the camera was the camera obscura, in use long before the Common Era. In other words, a pinhole can serve as a lens. Such a lash-up yields a dim, minimal image. However, the pinhole camera is remarkable in that it produces an image with infinite depth-of-field. You might not be content with the acuity of this image, so you can exchange the pinhole for a lens. The lens gains you image brilliance and increased sharpness. The downside: now you must address depth-of-field.

So let me start by defining depth-of-field. In geometric optics, a lens can deliver a sharp image for only a single subject distance. However, by observation we pronounce objects before and behind the distance focused upon to appear sharp. We call this span the depth-of-field. Furthermore, we can, by choice of lens focal length, subject distance, aperture setting, degree of enlargement, viewing distance, and viewing illumination, shrink or swell this span. How do all these factors intertwine?

First, the lens images by handling each minuscule point that constitutes the vista individually. Each of a googolplex of points is projected by the lens onto the surface of the film or digital sensor. Each individual point is thus replicated as a circle of light. The size, intensity, and color of this circle are variable. The image is thus composed of countless illuminated circles. These circles are the smallest fraction of an optical image that conveys intelligence. Each circle is juxtaposed with the others, and each has an indistinct center with scalloped boundaries. We call this circle the “circle of confusion”. The key point: all optical images consist of countless ill-defined circles, and it is the size of these circles that governs depth-of-field.

Now a person with good eyesight can observe a nearby coin as a disk. Suppose a friend recedes with this coin. At what distance will you fail to observe it as a disk? In a sunlit setting, when the coin’s distance is about 3,000 diameters away, the average person will observe a point of light, not a disk of light. Suppose it’s a wheel 3 feet in diameter. At 3 x 3,000 = 9,000 feet (1.7 miles) the wheel is seen as a point, not a disk. The resolving power of the human eye is reduced if the illumination is lower. The 1/3000 standard is too stringent for pictorial photography. This is because we typically view images in subdued light and because the contrast of pictorial images is greatly reduced. For this reason the photo industry has generally adopted 3.4 minutes of arc, which works out to a circle of confusion viewed from about 1,000 times its diameter: typically 0.5mm viewed from 500mm (20 inches), a typical reading distance. If the image being viewed is 1 meter (1 yard) distant, the allowable circle size is 1mm. If it’s a billboard viewed from 100 feet, the circle size can be 30mm (1.1 inches) in diameter.

How does all this square with the camera and the displayed image? Suppose we mount a 50mm lens on a 35mm full frame. The format size is 24mm height by 36mm length. Our desire is to make an 8x10 inch print for display. Now the 35mm format is tiny, so we must enlarge the camera image to make an 8x10. The degree of enlargement is 8.5X. If the print is to be viewed from standard reading distance, the maximum circle size is 0.5mm. To tolerate the 8.5X enlargement, the circle size at the focal plane of the camera must be 0.5 ÷ 8.5 ≈ 0.059mm. We have thus discovered the required circle size for this set-up at the focal plane of the camera.

All this is rather complicated. Typically, for depth-of-field calculations I adopt a circle size of 1/1000 of the focal length. Thus for a 50mm lens, the circle size allotted is 50 ÷ 1000 = 0.05mm.

This method is convenient, plus it allows for typical enlargement based on a “normal” focal length, which is the corner-to-corner measure of the format. To see how this works, consider a compact digital APS-C format, 16mm height by 24mm length. The diagonal measure is about 30mm, so the industry assigns 1/1000 of 30mm = 0.03mm as the circle size in camera. To make an 8x10 from this format, the degree of enlargement is 12.7X. The circle size on the 8x10 will be 12.7 x 0.03 = 0.38mm (within the tolerance of 0.5mm).

Let me add that the 1/1000-of-the-focal-length rule of thumb is not engraved in stone. Kodak often set this value at 1/1750, and Leica uses 1/1500 for critical work.
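The circle-of-confusion bookkeeping above can be sketched in a few lines of Python. The numbers follow the post's own examples; the function names and the 1/1000 divisor are the post's conventions, not a universal standard:

```python
# Allowable blur at the focal plane, given the blur tolerated on the
# print and the degree of enlargement (post's example: 0.5 mm print
# CoC, 8.5x enlargement from full frame to an 8x10).
def coc_on_sensor(print_coc_mm, enlargement):
    return print_coc_mm / enlargement

print(round(coc_on_sensor(0.5, 8.5), 3))   # ~0.059 mm

# Rule of thumb from the post: CoC = focal length / 1000.
# Kodak (1/1750) and Leica (1/1500) use stricter divisors.
def coc_rule_of_thumb(focal_mm, divisor=1000):
    return focal_mm / divisor

print(coc_rule_of_thumb(50))   # 0.05 mm for a 50 mm lens
print(coc_rule_of_thumb(30))   # 0.03 mm for a ~30 mm APS-C diagonal
```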

How do we calculate depth-of-field?

Hyperfocal distance: Let’s use 1/1000 of the focal length to obtain the hyperfocal distance. We mount a 200mm lens; 1/1000 of this focal length gives a circle size of 200 ÷ 1000 = 0.2mm. This day we set the aperture at f/11. The working diameter of the 200mm lens set to f/11 is 200 ÷ 11 ≈ 18.2mm. Now we multiply this value by 1000: 18.2mm x 1000 = 18,200mm. This is the hyperfocal distance in millimeters: about 18.2 meters, or roughly 60 feet.
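The multiplication above is the standard hyperfocal formula H = F²/(ND) in disguise: with D = F/1000, it reduces to 1000 x (F/N). A minimal check in Python (function name is my own):

```python
# Hyperfocal distance H = F^2 / (N * D), with F and D in mm.
# With D = F/1000 this is the same as 1000 * (F / N), i.e. the
# working aperture diameter multiplied by 1000, as in the post.
def hyperfocal_mm(focal_mm, f_number, coc_mm):
    return focal_mm ** 2 / (f_number * coc_mm)

H = hyperfocal_mm(200, 11, 0.2)
print(round(H))   # ~18182 mm, i.e. about 18.2 m
```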

Near point in focus:

P = point focused upon = 10 feet = 3,048mm

NP = near point sharply defined = unknown

FP = far point sharply defined = unknown

D = diameter of circle of confusion = 0.2mm

f-number = 11

F = focal length =200

NP = P / (1 + PDf/F^2)

NP ≈ 2610mm = 2.610 meters = 8 feet 7 inches

FP = P / (1 - PDf/F^2)

FP ≈ 3662mm = 3.662 meters = 12 feet
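As a check on the arithmetic, the near-point and far-point formulas above can be evaluated directly (the function name is my own; symbols follow the post's definitions):

```python
# Near and far points of acceptable sharpness:
#   NP = P / (1 + P*D*N/F^2),  FP = P / (1 - P*D*N/F^2)
# where P = focus distance (mm), D = CoC (mm), N = f-number,
# F = focal length (mm). Beyond the hyperfocal distance the
# denominator of FP goes non-positive and the far point is infinite.
def near_far_points(P_mm, coc_mm, f_number, focal_mm):
    t = P_mm * coc_mm * f_number / focal_mm ** 2
    near = P_mm / (1 + t)
    far = P_mm / (1 - t) if t < 1 else float("inf")
    return near, far

near, far = near_far_points(3048, 0.2, 11, 200)
print(round(near), round(far))   # ~2610 mm and ~3662 mm
```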
 
I think something has gone wrong here ;-)

Let’s stick with a focal length of 200mm.

At f/10, say, the hyperfocal distance using the rule of ten (CoC = f/1000) is 1000 x 200mm / 10 = 20m, BUT that is at a CoC of 200 microns (0.2mm).

Adjusting to a more sensible CoC of 20 microns scales this by a factor of ten, giving a hyperfocal distance of 20 x 200/20 = 200m for a 200mm lens, i.e. not 18.2m.
 
