Depth of field in object space

Tom Axford

There have been recent discussions on whether or not it is necessary to know the sensor size to work out the depth of field.

An object-space analysis of depth of field shows that three basic independent variables can be used to obtain the depth of field. If those variables are known, nothing else is required.

Diagram for Depth of Field in Object Space


The camera is focussed on the object plane. Point P is in the background and point S is in the foreground. Seen from point T at the top of the entrance pupil, P appears in line with Q. Seen from point U at the bottom of the entrance pupil, P appears in line with R. The lens creates an image by using all of the light from P that enters the entrance pupil. If Q and R are in sharp focus, then the point P will be seen as a blur that extends from Q to R. Similarly, the point S will be seen as a blur that extends from R to Q.

In the diagram above, the diameter of the entrance pupil is denoted by a, the diameter of the circle of confusion in object space is denoted by b, and the distance between the object plane and the entrance pupil is denoted by x. The camera is assumed to be focussed on the object plane.

The nearside depth of field is the distance between the object plane and point S. The farside depth of field is the distance between the object plane and point P.

If a, b and x are all known, then the depth of field can be worked out by simple geometry. It can be expressed in the following form:

Nearside depth of field = E/(1+D)

Farside depth of field = E/(1-D) if D < 1 and = infinity if D ≥ 1

Total depth of field = 2E/(1-D^2) if D < 1 and = infinity if D ≥ 1

where D = b/a and E = xD
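For anyone who wants to play with the numbers, here is a minimal Python sketch of these formulas (the function name and example values are mine):

import math

def dof_object_space(a, b, x):
    # a: entrance pupil diameter, b: object-space circle of confusion,
    # x: distance from entrance pupil to object plane (all in the same units)
    D = b / a
    E = x * D
    near = E / (1 + D)
    far = E / (1 - D) if D < 1 else math.inf
    total = near + far
    return near, far, total

# Example: 6.25mm pupil (50mm lens at f/8), b = 2mm, focused at 3000mm
print(dof_object_space(6.25, 2.0, 3000.0))   # ≈ (727mm, 1412mm, 2139mm)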

Often b (the circle of confusion in object space) is not directly known, but it can be calculated from other variables that are known.

For example, b = c/m, where c is the circle of confusion in image space and m is the image magnification (i.e. ratio of the size of the image of an object to the size of that object).

Alternatively, b = d/R, where d is the diameter of the field of view in the object plane and R is the ratio of the image diameter to the circle of confusion (very often R is taken as 1500 or thereabouts).
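Continuing the sketch, either route to b is a one-liner (again, the names are mine; m ≈ f/x is the approximation used later in this thread):

def b_from_image_coc(c, m):
    # c: circle of confusion in image space, m: image magnification
    return c / m

def b_from_field_of_view(d, R):
    # d: field-of-view diameter in the object plane, R: often ~1500
    return d / R

# Example: FF with c = 0.030mm, 50mm lens focused at 3000mm, m ≈ 50/3000
print(b_from_image_coc(0.030, 50.0 / 3000.0))   # 1.8mm in object space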

Although the first object-space analysis of depth of field has been attributed to Moritz von Rohr sometime in the 1890s (see the Wikipedia entry for Moritz von Rohr), most modern treatments of depth of field use image-space analysis. Personally, I think that object-space analysis is much simpler and should be more widely known.
 
It can be simplified even further. Use only one independent variable: the DOF.
 
There have been recent discussions on whether or not it is necessary to know the sensor size to work out the depth of field.
Thank you for the clear exposition Tom, it is useful (though it would be nice to standardize symbols and notation for these discussions;-)

I believe the dependence on format size creeps in from the definition of Circle of Confusion, which can vary but is typically a function of human acuity.

For instance someone with 20/20 vision can resolve objects of 1 arc minute width repeating every two arc minutes, meaning that they can resolve no more than one of them every 30th of a degree on the retina or θ ≈ 1/30 degree. That's the CoC on the retina.

Project that to the displayed photo, a viewing distance v away:

CoC on displayed photo ≈ v * θ * (pi/180)

and project that onto the sensor, assuming enlargement M = dp/ds, with dp and ds linear dimensions of the displayed photo and the sensor respectively (for instance their diagonals):

CoC on sensor c ≈ v * θ * (pi/180) / M

Then project into object space to obtain your b:

CoC in object space b = c/m ≈ v * θ * (pi/180) * ds/dp / m

where m is the lens magnification, m ≈ f/x.

Therefore for a given display size (dp) and viewing distance (v) the CoC will vary depending on format size (ds).

If we assume θ ≈ 1/30 degree and standard distance (v = dp):

CoC in object space b = ds / m / 1720,

which is a version of your suggested b = d / R.
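In Python the whole chain is just a few lines (a sketch under the approximations above; the names are mine):

import math

theta = (1.0 / 30) * math.pi / 180        # acuity angle in radians (1/30 degree)

def object_space_coc(v, dp, ds, m):
    # v: viewing distance; dp, ds: displayed-photo and sensor diagonals;
    # m: lens magnification (≈ f/x); all lengths in the same units
    coc_display = v * theta               # CoC on the displayed photo
    coc_sensor = coc_display * ds / dp    # undo the enlargement M = dp/ds
    return coc_sensor / m                 # project into object space: b = c/m

# Standard viewing (v = dp): FF diagonal 43.3mm, 50mm lens at 3m (m ≈ 1/60)
print(object_space_coc(500.0, 500.0, 43.3, 1/60))   # ≈ 1.51mm
print(43.3 / (1/60) / 1720)                         # ≈ 1.51mm, same as ds/m/1720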

Jack
 
Thank you for the clear exposition Tom, it is useful (though it would be nice to standardize symbols and notation for these discussions;-)

I believe the dependence on format size creeps in from the definition of Circle of Confusion, which can vary but is typically a function of human acuity. ...

CoC in object space b = ds / m / 1720, ...
Jack,

Lyon has a lot to say about the CoC here:

http://kronometric.org/phot/iq/DepthOfField-Lyon.pdf
I like the part about halfway down where he states that the so-called "Zeiss Formula" (1730) is apocryphal - in addition to his many, many references to derivation of "the" CoC.
 
xpatUSA wrote: Lyon has a lot to say about the CoC here:

http://kronometric.org/phot/iq/DepthOfField-Lyon.pdf

I like the part about halfway down where he states that the so-called "Zeiss Formula" (1730) is apocryphal - in addition to his many, many references to derivation of "the" CoC.
Thanks Ted, that's quite the treatise on DOF, together with Jeff Conrad's and Alan Robinson's.

Lyon refers to the contributions to the CoC of Dallmeyer and Abney in the 1800s, who also used 1 minute of arc (i.e. 1/60 degree) as the smallest object resolvable by a person with normal vision. He (and the related Wikipedia page) attributes to them 'the same factor-of-two error' because they used two arc minutes for the diameter of the CoC.

However, there is no factor of two error and Dallmeyer and Abney were in fact correct, as explained in my previous post. That's why the Snellen chart for 20/20 vision corresponds to 30 cycles per degree and not 60 :-)

Jack
 
A key ingredient: what is the largest permissible diameter of the circle of confusion? This is a variable based on image content, image contrast, image brilliance and the acuity of the observer.

Basis of circle of confusion diameter: under sunlight conditions, the human eye can resolve lines which subtend an angle of 1/3000 of the viewing distance. Some examples: a wagon wheel 3 feet in diameter viewed from 9,000 feet (1.7 miles), or 1 meter in diameter viewed from 3 kilometers. Another example: a 1 inch diameter coin viewed from 3000 inches = 250 feet (25mm in diameter viewed from 75 meters).

The above values are too stringent for pictorial photography. Most depth of field charts are based on 1/1000 of the viewing distance. This works out to 1/100 of an inch viewed from 10 inches (1/4mm viewed from 250mm). However, the Kodak standard for critical work was 1/1750 of the focal length. Leica used 1/1500 of the focal length.

Using the full frame 35mm format as a benchmark, to make an 8x10 inch display image the minimum required magnification is about 8x. If the circle tolerance used is 1/100 of an inch, then the circle diameter at the image plane must not exceed 1/800 of an inch = 0.0318mm.
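Spelled out in Python (a quick check of the arithmetic above; units as stated):

# 8x10 inch print from a 24x36mm frame needs roughly 8x enlargement
enlargement = 8.0
coc_print_inches = 1.0 / 100                         # 1/100 inch allowed on the print
coc_sensor_inches = coc_print_inches / enlargement   # 1/800 inch at the image plane
print(coc_sensor_inches * 25.4)                      # ≈ 0.0318mm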

All this to tell you that the size of the format dictates the degree of magnification needed to make a final display image.

The key ingredients are the viewing distance and the degree of magnification, and the magnification needed depends on the format size (among other things).
 
There have been recent discussions on whether or not it is necessary to know the sensor size to work out the depth of field.
To be honest, there are many other topics where you can remove sensor size from the equation, and this usually makes more sense: what matters is the optics. The sensor size? Just use a focal reducer or teleconverter; you can adapt the optics to your sensor size.

I always prefer to consider that light, or shallow depth of field, comes from a large aperture rather than from sensor size (though sensor size may matter when you reach the limits).
An object-space analysis of depth of field shows that three basic independent variables can be used to obtain the depth of field. If those variables are known, nothing else is required. ...
There are ways to estimate DoF without DoF calculators; I have posted new methods that are fast and easy to use in practice for many use cases. They are also 100% independent of sensor size.

Personally, I like the way you can estimate the hyperfocal distance in object space with your method; this is a simple rule that can be used in practice.

Otherwise, as you say, this is nothing less than the equivalent of a DoF calculator, so we are back to the debates about the relevance of this method... until we see Merklinger again. The important point is that it gives the same results as the usual DoF calculators.
 
Personally, I like the way you can estimate the hyperfocal distance in object space with your method; this is a simple rule that can be used in practice.
Yes, the hyperfocal condition occurs when a=b, which is about as simple as you can get.
Except that b is not directly known as you said, and when you express it in terms of more commonly used quantities, you get the known formula for the DOF, as you should.
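For what it's worth, that step is quick to sketch with the standard relations a = f/N (entrance pupil diameter) and m ≈ f/x (as used elsewhere in this thread): b = c/m ≈ c*x/f, so a = b gives f/N = c*x/f, i.e. x = f^2/(N*c), which is the familiar hyperfocal distance formula (up to the usual small +f term).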
 
Thank you for the clear exposition Tom, it is useful (though it would be nice to standardize symbols and notation for these discussions;-)

I believe the dependence on format size creeps in from the definition of Circle of Confusion, which can vary but is typically a function of human acuity. ...

CoC in object space b = ds / m / 1720,

which is a version of your suggested b = d / R.

Jack
Thanks Jack.

I generally prefer to keep out of discussions on the acuity of human vision as it is a difficult subject which I have not studied in detail.

I prefer to choose a CoC for rather more pragmatic reasons. Common DoF calculators such as dofmaster.com use 0.030mm for FF, which corresponds to R = 1440. Personally, I usually choose R = 2000 as it makes the arithmetic simpler, but I often also use R = 1000 for images that will only be viewed at a small size, or R = 4000 if I want to ensure a higher level of resolution than usual.

It is convenient to be able to vary the CoC to suit the intended use of the image.
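To see how R maps onto the usual image-space numbers, a quick Python sketch (the FF diagonal of 43.3mm is my assumption):

# Image-space CoC implied by a given R on full frame
diagonal_ff = 43.3
for R in (1000, 1440, 1500, 2000, 4000):
    print(R, round(diagonal_ff / R, 4))   # R = 1440 gives ≈ 0.030mm, as above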
 
I believe the dependence on format size creeps in from the definition of Circle of Confusion, which can vary but is typically a function of human acuity.
We have to be clear: there is absolutely no dependence on sensor size (direct or indirect), and this cannot depend on how we define the CoC.

If we have the input variables given by Tom, then even if you add visual acuity to the equation, this does not change the fact that sensor size is left out.
For instance someone with 20/20 vision can resolve objects of 1 arc minute width repeating every two arc minutes ...

Therefore for a given display size (dp) and viewing distance (v) the CoC will vary depending on format size (ds).
The CoC in common DoF calculators also varies with sensor size.
If we assume θ ≈ 1/30 degree and standard distance (v = dp):

CoC in object space b = ds / m / 1720,

which is a version of your suggested b = d / R.

Jack
I don't know what your point is. The sensor size does not enter into the DoF equations given by Tom. Of course not.

The CoC varies with sensor size; nothing new.

But the DoF equations can leave out the sensor size when we take the input variables given by Tom.

I am just trying to understand your point; maybe you can explain.
 
But the DoF equations can leave out the sensor size when we take the input variables given by Tom.
How is that different from what I suggested earlier: DOF = DOF? Absolutely no dependence on the sensor size, and much simpler.

But that is a tautology… So is Tom's, in a way. Let us start with that mysterious b, the CoC in object space. What is that?
 
I don't know what your point is. The sensor size does not enter into the DoF equations given by Tom. Of course not.

The CoC varies with sensor size; nothing new.

But the DoF equations can leave out the sensor size when we take the input variables given by Tom.

I am just trying to understand your point; maybe you can explain.
The explanation is in the message you quoted, Chris:

Tom's DOF calculations depend on the size of the CoC. As you say above, the CoC varies with sensor size. Therefore Tom's DOF calculations depend on sensor size.

Jack
 
The explanation is in the message you quoted, Chris:

Tom's DOF calculations depend on the size of the CoC. As you say above, the CoC varies with sensor size. Therefore Tom's DOF calculations depend on sensor size.

Jack
I disagree, of course: Tom's DoF calculations do not depend at all on sensor size; it was completely left out.

This is a problem of logic. You have to choose your set of inputs, and these inputs are considered as root inputs.

With Tom's equations, he can estimate the DoF from these inputs alone, and there is absolutely zero dependence on sensor size.

By the way, because sensor size was completely left out, he cannot even know what sensor size was used (different combinations of focal length and sensor size would give the same result).

The camera body can be a black box; there is no need to know which sensor is inside.
 
Tom's DOF calculations depend on the size of the CoC. As you say above, the CoC varies with sensor size. Therefore Tom's DOF calculations depend on sensor size.
Jack, you are confusing the CoC in object space with the CoC in image space. I used the former and it does not depend on sensor size.

The usual CoC in image space does depend on sensor size, but that CoC does not appear in my formula.

chrisfisheye is correct.

A well-known example: if you take several photographs of the same scene using "equivalent" lens parameters on cameras with different sensor sizes, all the photos will have exactly the same depth of field despite the different sensor sizes, e.g. FF with a 50mm f/8 lens and MFT with a 25mm f/4 lens. The CoC in object space will be the same in all those cases.
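A quick numeric check of that example in Python (a sketch; the entrance pupil diameter is the focal length divided by the f-number):

# Same scene, same distance, same field of view:
# FF 50mm f/8 vs MFT 25mm f/4
print(50 / 8, 25 / 4)   # entrance pupil a = 6.25mm in both cases

# b = d/R depends only on the (identical) field of view, so D = b/a and
# E = x*D, and hence the depth of field, are identical for both cameras.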
 
Tom's DOF calculations depend on the size of the CoC. As you say above, the CoC varies with sensor size. Therefore Tom's DOF calculations depend on sensor size.
Jack, you are confusing the CoC in object space with the CoC in image space. I used the former and it does not depend on sensor size.
The usual CoC in image space does depend on sensor size,
but that CoC does not appear in my formula.
Yes, we are talking about the DoF in the end, not the CoC.

Even if you take into account visual acuity and viewing distance, it won't change the fact that the sensor size is completely left out of the equation with your set of inputs.
chrisfisheye is correct.
And I agree with your equations, because one of their main advantages is that they do not depend on sensor size.

Especially useful for hyperfocal work.

Even better: you can crop the image in post and the DoF calculations will still work!

Unlike the usual calculations made with the sensor size, where you may not know whether the image was cropped...
A well-known example: if you take several photographs of the same scene using "equivalent" lens parameters on cameras with different sensor sizes, all the photos will have exactly the same depth of field despite the different sensor sizes, e.g. FF with a 50mm f/8 lens and MFT with a 25mm f/4 lens. The CoC in object space will be the same in all those cases.
100% agree.
 
Jack, you are confusing the CoC in object space with the CoC in image space. I used the former and it does not depend on sensor size. ...
Yes, we are talking about the DoF in the end, not the CoC.
Still nobody can answer what the CoC in object space means...
 
Still nobody can answer what the CoC in object space means...
Tom did not refer to the CoC in his first post.

It would be b if I had to define a CoC (it is homogeneous with a distance), which would depend on the diameter of the field of view...

If we denote this diameter by d, the important equation for the "confusion part" is b = d/R, or b/d = 1/R.

In the image, of course, you will find the same ratio, because b and d are scaled by the same magnification.

Does that make sense ?
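A tiny numeric check of that ratio argument (my numbers; m ≈ f/x as before):

d, R = 3000.0, 1500         # field of view 3000mm wide in the object plane
b = d / R                   # object-space CoC: 2mm
m = 50.0 / 3000.0           # 50mm lens focused at 3m
print(b * m, (d * m) / R)   # 0.0333mm both ways: the ratio b/d = 1/R survives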
 
Still nobody can answer what the CoC in object space means...
Tom did not refer to the CoC in his first post.
He did, but only after the introduction, which makes his OP confusing:

In the diagram above, the diameter of the entrance pupil is denoted by a, the diameter of the circle of confusion in object space is denoted by b, and the distance between the object plane and the entrance pupil is denoted by x. The camera is assumed to be focussed on the object plane.
If we denote this diameter by d, the important equation for the "confusion part" is b = d/R, or b/d = 1/R. ...
But then that diameter depends on the equivalent FL, or on the AOV. With a fixed, it then depends directly on the sensor size.
 
