# DOF And The 45 mm

Started 4 months ago | Discussion thread
 Relative Background Blur math explains what Depth of Field math does not. In reply to woof woof, 4 months ago

Hello woof woof,

It's been nearly one year since I elected to abandon our discourse surrounding the calculation of Depth of Field. For the sake of the understanding of other readers of this forum I will enter back into that discourse for this post - and I will address your reasoning itself, as well as your example images.

If I seem critical, such criticism relates to your reasoning and understanding only. Everybody has their own ways of thinking, and I do not think less of you for that. Nor do I have much in the way of expectations that you might think differently. As stated above, this post is published for the benefit of other readers, who may indeed consider outlooks other than yours.

The last time we conversed, you were making assertions such as:

The sensor size doesn't affect DoF ...

... refuted by Yohan Pamudji here:

http://forums.dpreview.com/forums/post/40361033

... as well as making assertions such as:

... The DoF is what it is regardless of how good or bad your eye sight is, no matter what sized print you produce, no matter what distance you view it.

... countered by me here (where I recommended you read the first few pages of Jeff Conrad's "Depth of Field in Depth"):

... which you evidently did not do, prompting me to quote directly from Conrad's text:

Circle of Confusion

A photograph is perceived as acceptably sharp when the blur spot is smaller than the acceptable circle of confusion. The size of the “acceptable” circle of confusion for the original image (i.e., film or electronic sensor) depends on three factors:

1. Visual acuity.
2. The distance at which the final image is viewed.
3. The enlargement of the final image from the original image.

... Common practice in photography has been to assume that the diameter of the smallest disk distinguishable from a point corresponds to the Snellen line-pair recognition criterion of 2 minutes of arc, although the line-resolution criterion arguably is more appropriate. At the normal viewing distance of 250 mm, the Snellen criterion is equivalent to a blur spot diameter of 0.145 mm. Visual acuity tests are done using high-contrast targets; under conditions of normal contrast, a more realistic blur spot diameter may be about 0.2 mm. The value of 0.2 mm is commonly cited for final-image CoC; in angular terms, this would subtend 2.75 minutes of arc, corresponding to a spatial frequency of approximately 22 cpd. Of course, some individuals have greater visual acuity than others.

http://forums.dpreview.com/forums/post/40361981
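As an aside (my own check, not part of Conrad's text), the angular figures in the quoted passage can be verified numerically:

```python
import math

# Check the angular figures quoted from Conrad's "Depth of Field in Depth".
VIEWING_DISTANCE_MM = 250.0

# Snellen criterion: 2 minutes of arc at the normal 250 mm viewing distance.
snellen_blur_mm = VIEWING_DISTANCE_MM * math.radians(2.0 / 60.0)
print(f"Snellen blur spot: {snellen_blur_mm:.3f} mm")   # ~0.145 mm

# A 0.2 mm final-image CoC viewed at 250 mm, in minutes of arc:
coc_mm = 0.2
coc_arcmin = math.degrees(coc_mm / VIEWING_DISTANCE_MM) * 60.0
print(f"0.2 mm CoC subtends: {coc_arcmin:.2f} arcmin")  # ~2.75 arcmin

# Spatial frequency if one cycle spans one blur-spot subtense:
cpd = 60.0 / coc_arcmin
print(f"Spatial frequency: {cpd:.0f} cpd")              # ~22 cpd
```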

... and here further quote (from Page 4 of Jeff Conrad's "Depth of Field in Depth"):

Image Enlargement

If the original image is smaller than 8″×10″, it must be enlarged to produce an 8″×10″ final image, and the CoC for the original image is reduced by the required enlargement. For example, if a full-frame 35 mm image is enlarged to fit the short dimension of an 8″×10″ final image, the enlargement is approximately 8×, and the CoC for the original image then is ⅛ of the CoC for the final image.
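Conrad's enlargement arithmetic can likewise be checked with a small sketch of my own (24 mm being the short dimension of a full-frame 35 mm negative):

```python
# Enlargement from a 35 mm frame (24 x 36 mm) to an 8"x10" print,
# fitting the short dimension, as in Conrad's example.
frame_short_mm = 24.0
print_short_mm = 8 * 25.4          # 8 inches = 203.2 mm
enlargement = print_short_mm / frame_short_mm
print(enlargement)                 # ~8.47, i.e. approximately 8x

# The original-image CoC is the final-image CoC divided by the enlargement:
final_coc_mm = 0.2
original_coc_mm = final_coc_mm / enlargement
print(original_coc_mm)             # ~0.024 mm, roughly 1/8 of 0.2 mm
```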

I see that you (in the case cited below) may have mellowed your argument from one stating that:

The sensor size doesn't affect DoF ...

... to statements such as this one:

... the relative insignificance of CoF ...

... which is apparently a reference to "CoC" from a post on 11-03-202 at 17:34 (and at 21:44) here:

It's clear that you consider Jeff Conrad's as well as other texts to be erroneous. The Wikipedia page:

http://en.wikipedia.org/wiki/Depth_of_field#Circle_of_confusion_criterion_for_depth_of_field

http://en.wikipedia.org/wiki/Depth_of_field#DOF_formulas

... awaits the sure hand of your editing expertise to rewrite the history of photography forthwith.

woof woof wrote:

Detail Man wrote:

... you are forgetting that the image-enlargement (by a factor of 2) required in order to scale the displayed image back to the same physical viewing-size (also) reduces the Circle of Confusion diameter by a factor of 2. The result of the above described effects combined is no net increase in DOF.

Detail Man,

After reading your posts I thought I'd simplify things and indeed bring them back to reality a little ...

That is very kind of you. I will be watching the above referenced Wikipedia page for your edits.

... and say... clearly... that increasing the camera to subject distance will increase the DoF. How much it will increase the DoF isn't really worth arguing at this point as we're talking about something which is mostly subjective ...

It would seem that if you think that this is a matter clouded in "subjectivity", you might not have bothered to author your post presenting your thoughts as something "objective" ?

Increasing Camera to Subject Distance does indeed increase Depth of Field (by the square of the ratio of the new to the old distance between the lens-system front nodal-plane and the plane-of-focus).

The rest of the story is that cropping (on the sensor, or in post-processing) reduces the DOF (which is directly proportional to the Circle of Confusion diameter) by an amount equal to the ratio of the (linear) dimensions of the original un-cropped image to those of the cropped image.

Finally, in the enlargement of the result of that cropping back to the same physical viewing-size, the DOF (which is directly proportional to the Circle of Confusion diameter) is (again) reduced by an amount equal to the ratio of the (linear) dimensions of the enlarged image to those of the cropped image.

The net result is no increase in the mathematical quantity known to the world as Depth of Field.
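The cancellation described above can be made concrete with a small sketch (my own illustration, using the common approximation DOF ~ 2 N c D^2 / f^2, valid for subject distances well inside the hyperfocal distance):

```python
# Sketch (not the poster's own code): total DOF approximation for
# subject distances well inside the hyperfocal distance is
#   DOF ~ 2 * N * c * D**2 / f**2
# Backing up by a factor k multiplies D**2 by k**2; cropping by k and
# re-enlarging back to the same viewing size each divide the
# allowable CoC (c) by k.
def dof_scale(k):
    distance_factor = k ** 2      # D -> k * D
    crop_factor = 1.0 / k         # CoC shrinks with the crop
    enlarge_factor = 1.0 / k      # CoC shrinks again on re-enlargement
    return distance_factor * crop_factor * enlarge_factor

print(dof_scale(13.0))  # ~1.0 -> no net change in DOF
```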

... and we have to remember that the OP was sat at the table with the young lady and therefore couldn't back up too much anyway, so the question is really, moot.

Who knows ? I will restrict my statement to addressing the situation relating to your test-shots.

When you use the phrase "Depth of Field", reasonable, intelligent people assume that you are talking about something that is objectively describable by understood and agreed upon mathematical identities. Yet, I will be surprised if I were to see you editing the Wikipedia page. If you were to do that, we can rest assured that such a situation would not last for long ...

Indeed, when you use the term "Depth of Field" in your writings, you are not describing something that is objectively defined as "Depth of Field". You are describing something that is (as you note yourself) subjectively perceived by human vision. A (generalized) perception of "blurriness".

Because what you see appears to (from your understanding) conflict with what accepted "Depth of Field" theory states, you assume that the "experts" must surely be wrong. As a result, you publish image samples that you consider to be evidence that the rest of the world is in some way(s) divorced from "reality", and invite others to eschew established "Depth of Field" theory in favor of your own thinking which claims to itself better understand reality.

Perhaps (since it is unlikely that you alone are going to rewrite the history and mathematics of photography where it comes to the meaning of the phrase "Depth of Field"), it makes more sense to wonder if (perhaps) you (and others) are seeing and noting an effect that is indeed explainable, is indeed understood, but has so far escaped the grasp of your own and others' conceptual understanding.

Yes, I myself do see what you are talking about, and I will set about to explain the effect below ...

As a photograph or two are worth a thousand links to on line articles I've just shot a little Cartman wind up toy in front of a clock at f1.4 ...

I estimate that the front nodal-plane of your lens system was around 0.5 Feet from the toy in the first shot. I have run calculations for two different possible distances between the toy and the clock (1.0 Foot and 1.5 Feet). While not exact, these distance estimates are adequate.

I then backed up about six feet or so and took the shot again.

I estimate that the front nodal-plane of your lens system was around 6.5 Feet from the toy in the second shot. I have run calculations for two different possible distances between the toy and the clock (1.0 Foot and 1.5 Feet). While not exact, these distance estimates are adequate.

I then cropped the shot taken at the greater distance after zooming in to 120% to get more or less the same framing.

My calculations assume that your cropping resulted in the same "subject magnification" of the toy. While this appears to not be exactly the case, that assumption is (reasonably) adequate.

Please ignore the noise, it's ISO 3200 and I'm not interested in anything other than the obvious DoF. It is obvious to me at least that the shot taken from further away has more DoF as much more detail can be seen in the clock face and the other things in view.

Here is where I think that it would more intellectually honest to state that background subject-matter looks (more or less) "blurry" - as opposed to continuing to use the phrase "Depth of Field".

This is a little experiment that anyone with a camera can do and is IMVHO worth more than many links to on line articles or pages of on line discussion that argue that increasing the camera to subject distance and then cropping the image will not have this effect... as it simply will, in any sized print or screen image.

And with that I say, Goodnight.

Good morning. Ready for some applied mathematics ? Even if you yourself are not, some readers may well be, and it will help whoever is ready to understand why the "blurriness" of background objects (calculated for the clock-face below) changes with the position of your camera in these examples that you have published.

I recently derived a sound method for calculating "relative background blur". Another poster tested its results against those of Bob Atkins' blur-calculator application, and states that they agree.

In order to evaluate the size of the "blur-disk" diameter (B) as a percentage proportion of the (diagonal) physical dimension in object-space that an image-frame represents at the location of background subject-matter of interest (S), the ratio of the two identities ( B / S ) is taken:

BP ~ ( (100) / ( (F) (H) ) ) x ( (L)^(2) ) x ( ( Df - D ) / ( (Df) (D) ) )

where:

BP is the percentage of the image-frame diagonal that the blur-disk represents;

F is F-Number;

H is the diagonal dimension of the image-sensor's active-area;

L is the Focal Length (when focused at infinity);

D is the Camera (front nodal-plane) to Subject (plane-of focus) Distance;

Df is the Camera (front nodal-plane) to Background Subject Matter Distance.

http://forums.dpreview.com/forums/post/50519706
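For readers who want to experiment, here is a minimal sketch of the quoted identity in Python (my own code; the numeric values in the example are hypothetical, with 21.6 mm standing in for a Four Thirds sensor diagonal, and all lengths must share the same unit so that the result is a dimensionless percentage):

```python
# A sketch of the relative background-blur identity quoted above:
#   BP ~ (100 / (F * H)) * L**2 * ((Df - D) / (Df * D))
# All symbols as defined in the post; H, L, D, and Df in the same units.
def blur_percent(F, H, L, D, Df):
    """Blur-disk diameter as a percentage of the frame diagonal for
    background matter at distance Df, with the lens focused at D."""
    return (100.0 / (F * H)) * (L ** 2) * ((Df - D) / (Df * D))

# Hypothetical example: F-Number 1.4, 21.6 mm sensor diagonal,
# 45 mm focal length, subject at 1 m, background at 2 m (all in mm):
print(blur_percent(1.4, 21.6, 45.0, 1000.0, 2000.0))  # ~3.35 (% of diagonal)
```

Pushing the background farther back (increasing Df) increases the result, as expected.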

The independent variables F, H, and L take the same form as in an approximation of DOF (valid for Camera to Subject Distances that are a small fraction of the Hyperfocal Distance), except that in the above identity they appear inverted (as reciprocals). The Camera to Subject Distance (D), however, enters the relative blur formula differently than it enters the DOF formulas (where D alone is squared in the numerator).

Instead, this particular ratio (from the above stated formula) applies:

( ( Df - D ) / ( (Df) (D) ) )

... which involves the difference of the Camera to Background Distance (Df) minus the Camera to Subject Distance (D) all divided by the product of the Camera to Background Distance (Df) multiplied by the Camera to Subject Distance (D).

.

Here is the formula for evaluating the expected ratio of the relative percentage background-blurring between your two example images displayed above (percentages are less than 100%):

BP2 / BP1 =

( ( F1 H1 ) / ( F2 H2 ) ) x ( ( L2 / L1 )^(2) )

x ( ( DF1 D1 ) / ( DF2 D2 ) ) x ( K2 ( DF2 - D2 ) / ( DF1 - D1 ) )

where:

BP is the percentage of the image-frame diagonal that the blur-disk represents;

F is F-Number;

H is diagonal dimension of the image-sensor's active-area;

L is Focal Length (when focused at infinity);

D is Camera (front nodal-plane) to Subject (plane-of focus) Distance;

Df is Camera (front nodal-plane) to Background Subject Matter Distance;

K2 is a constant equal to the magnification-factor caused by your enlarging the 2nd image.

.

In this case, the variables F2=F1, H2=H1, and L2=L1. As a result, the formula above simplifies to:

BP2 / BP1 = ( ( DF1 D1 ) / ( DF2 D2 ) ) x ( K2 ( DF2 - D2 ) / ( DF1 - D1 ) )

.

Calculation 1 (distances are in units of Feet):

DF1 = 1.5

D1 = 0.5

DF2 = 7.5

D2 = 6.5

K2 = 6.5 / 0.5 = 13

Result #1 predicts that the diameter of the (relative) background-blur disk would be 5 times smaller in your 2nd shot (from 6 Feet farther back) as compared to your 1st shot.

.

Calculation 2 (distances are in units of Feet):

DF1 = 2.0

D1 = 0.5

DF2 = 8.0

D2 = 6.5

K2 = 6.5 / 0.5 = 13

Result #2 predicts that the diameter of the (relative) background-blur disk would be 4 times smaller in your 2nd shot (from 6 Feet farther back) as compared to your 1st shot.
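The two calculations above can be reproduced with a short sketch (my own Python, not part of the original derivation); with F, H, and L identical between the two shots, only the distance terms and the enlargement factor K2 remain:

```python
# Reproducing Calculations 1 and 2 above (distances in Feet).  With
# F, H, and L identical between shots, the ratio reduces to:
#   BP2/BP1 = (DF1 * D1) / (DF2 * D2) * K2 * (DF2 - D2) / (DF1 - D1)
def blur_ratio(D1, DF1, D2, DF2, K2):
    return (DF1 * D1) / (DF2 * D2) * K2 * (DF2 - D2) / (DF1 - D1)

K2 = 6.5 / 0.5  # 13x enlargement of the cropped second shot

# Calculation 1: clock 1.0 Foot behind the toy in both shots.
r1 = blur_ratio(D1=0.5, DF1=1.5, D2=6.5, DF2=7.5, K2=K2)
print(1 / r1)   # ~5: blur disk 5 times smaller in the 2nd shot

# Calculation 2: clock 1.5 Feet behind the toy in both shots.
r2 = blur_ratio(D1=0.5, DF1=2.0, D2=6.5, DF2=8.0, K2=K2)
print(1 / r2)   # ~4: blur disk 4 times smaller in the 2nd shot
```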

.

So, there we are. What you report is indeed predictable and is coherently explainable (using the mathematics of calculating Relative Background Blur) - while not refuting or disproving the mathematics of Depth of Field. The mathematics used to analyze your observations was derived from Merklinger's "Object Field" mathematics, so "Detail Man" did not (by any means) invent it.

Nor did woof woof invent it. You have observed the effect, and your repeated "clarion calls" pointing out a seeming limitation in the mathematics of Depth of Field have led to a rational explanation being provided (using the mathematics of Relative Background Blur) which does not in itself refute the widely accepted and agreed upon mathematics surrounding Depth of Field.

This exercise in applied mathematics has clearly shown that the mathematics of Relative Background Blur (in the cases of your various published experiments, anyway) serve us much better than the mathematics of Depth of Field in providing an objective and an understandable explanation of what is being perceived (by my own eyes, as well as by yours and by others').

.

Notes: Both the relative background-blur and the DOF formulas are based upon a single symmetrical-lens model, and may not (in their relative mathematical simplicity) completely describe the optical performance of multiple-element lens-systems.

In particular, telephoto zoom lens-systems (I have read) typically have a Pupillary Magnification factor that is less than unity (which means that they are in effect unsymmetrical lens-systems).

Therefore, both of these relatively simple mathematical models (relative background-blur as well as DOF) may be less numerically accurate when applied to zoom lens-systems than when applied to fixed focal length lens-systems (which, as I have read, are more likely to have Pupillary Magnification factors closer to unity).

DM...

Edited 4 months ago by Detail Man