Confusion About DOF?

An image on a smaller sensor needs to be enlarged more, to give a print of the same size. This difference in the degree of enlargement will affect the depth of field.
This is incorrect, and my guess is it is based on film thinking. A 10MP image will need the same degree of "enlargement" to make a large print regardless of the size of the sensor it came from. The image consists only of pixels, as does the print you make from it.
I'm sorry you have lost me. What is a "Large" print, when the only unit of measurement is the pixel? How much wood and glass is required in order to frame such a print?
It is easy to test this, just strip the exif data from 2 images of the same pixel size from different sensors and see if your printing programme cares about sensor size.
In order to meet your requirements, it would be necessary to use a printing program which specifies the paper size in pixels too. Good luck with that :)

Regards,
Peter
 
Part of the reason there is a lack of agreement is that the examples given are not consistent from poster to poster. They are looking at different things and mixing examples like apples and oranges.
An image on a smaller sensor needs to be enlarged more, to give a print of the same size. This difference in the degree of enlargement will affect the depth of field.
This is incorrect, and my guess is it is based on film thinking. A 10MP image will need the same degree of "enlargement" to make a large print regardless of the size of the sensor it came from. The image consists only of pixels, as does the print you make from it.
I'm sorry you have lost me. What is a "Large" print, when the only unit of measurement is the pixel? How much wood and glass is required in order to frame such a print?
It is easy to test this, just strip the exif data from 2 images of the same pixel size from different sensors and see if your printing programme cares about sensor size.
In order to meet your requirements, it would be necessary to use a printing program which specifies the paper size in pixels too. Good luck with that :)
What Steephill may be referring to is the pixels per inch specification (ex: 360dpi).

So let's say there are 3600 pixels across the sensor. The size of the print at 360ppi would then be 10 inches on one side. If you play around with the Image Size dialogue box in Photoshop (with resampling turned off!), you can determine what size prints can be made with various dpi settings.

Now with that thinking, it means that at 360dpi printing, a small sensor with 3600 tiny pixels will produce a 10 in long print, and a large football-field-sized sensor, also 3600 pixels long, will be shrunk down to produce the same 10 in long print. With this logic, as long as the lens used projects the same field of view onto both sensors (that is, one tiny scene and one gigantic scene, both showing the same view), then the print would have the same depth of field in each case.
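As a rough sanity check, the ppi arithmetic above can be sketched in a few lines (using the hypothetical numbers from the example, which say nothing about the physical sensor size):

```python
# 3600 pixels printed at 360 pixels per inch give the same print length
# no matter how large or small the sensor that captured them was.
pixels_across = 3600
ppi = 360

print_inches = pixels_across / ppi
print(print_inches)  # 10.0
```

The point the sketch makes is exactly the one in the post: pixel count and ppi alone determine the print size; sensor size never enters the calculation.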

Keep that thought and let's look at this statement:
Look at it like this: Let's take the popular 0.03mm circle of confusion on a full-frame sensor. Let's say we have an image of a single circle with a diameter of 0.03mm projected onto two sensors: one is a full-frame sensor, the other is a 1/3" type. Now, wouldn't you think that the circle would take up more of the total space on a small 1/3" type sensor than on the full-frame sensor? And so if you have two 10MP images, one from the small sensor, one from the bigger sensor, in which would the circle appear larger?
Of course the .03mm circle projection would have the same .03mm diameter on both 10MP sensors. It may cover 9 pixels on the smaller sensor and only 4 pixels on the larger sensor but the projected circle would be the same size. So you could say then that the DOF is exactly the same. Except that we don't normally view the image on a sensor. We usually enlarge them to a certain size - like 24" diagonal screens or 8x10 sized prints. As a result, with the same degree of enlargement, the bigger sensor could fill up the 8x10 print but the smaller sensor would only get to 6x8 for example. Now someone can claim that the DOF is still the same in both since the .03 diameter circle got enlarged to the same degree.

That is correct and that is why Leica, Zeiss, Canon, etc. require another parameter in addition to enlargement before determining DOF. They stipulate that you need to be looking at the image from a specific distance, one close enough where the image can just be viewed in its entirety without panning around. Roughly, the distance is the diagonal of the image. So for the 8x10 you would be "examining" it for DOF from a distance of about 10". For the smaller 6x8 crop of the same image, you would have to move in closer to about 8" in order to keep the same angle of view. Once you do that, the blur circle will appear that much bigger and consequently - blurrier. The DOF just went down!
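To see the viewing-distance effect numerically, here is a small sketch; the 8x enlargement factor and the distances are illustrative assumptions, not figures from the Zeiss or Leica tables:

```python
import math

# The same 0.03 mm blur circle after an assumed 8x enlargement onto a print.
blur_on_print_mm = 0.03 * 8

def apparent_blur_degrees(viewing_distance_inches):
    # Angle the blur spot subtends at the eye from a given viewing distance.
    distance_mm = viewing_distance_inches * 25.4
    return math.degrees(math.atan(blur_on_print_mm / distance_mm))

from_10in = apparent_blur_degrees(10)  # 8x10 print viewed at about its diagonal
from_8in = apparent_blur_degrees(8)    # 6x8 crop viewed closer, same angle of view

# The same physical blur subtends a larger angle from the closer distance,
# so it looks blurrier: the DOF effectively went down.
assert from_8in > from_10in
```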

However, unlike the above example, the field of view around that circle shown on the smaller sensor is cropped. It's an unfair comparison. But if you change the lens so that the same field of view is projected onto both sensors, now the situation in the first example occurs. This is why acceptable circles of confusion are smaller by the same proportion as the smaller sensor is to the larger one. It accounts for the fact that you can't enlarge the smaller image by the same amount as the larger one and still not notice that an apparently sharp point is in fact a blurry circle.

So those are two different ways of looking at it. The dpi argument implies that you'll get the same image in either case. The circle of confusion/ degree of enlargement argument seems to imply otherwise.

I'll take a break and let others have a go at solving this paradox.

--
Robert
 
Here is a complete discussion of DOF by Paul van Walree:

http://toothwalker.org/optics/dof.html

If you scroll about 3/4 of the way down, you'll find, in the section entitled "Digital depth of field", the answer to the small-sensor dilemma that some members are having. For those not wanting to read the answer and understand the concepts involved: DOF is larger with a small sensor if the variables are consistent in both examples, or as van Walree explains it, "A fair comparison requires 'identical' pictures, i.e. the object distance (perspective) and field of view (FOV) must be the same in both cases." Under these conditions the small sensor will require a lens with a shorter focal length. This shorter lens will produce greater DOF and overcome the smaller CoC required for the smaller sensor.
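For those who want to check that conclusion without the full derivation, here is a rough sketch using the common far-field approximation DOF ≈ 2·N·c·d²/f² (valid when the subject distance is much larger than the focal length and both DOF limits are far from the hyperfocal distance); the crop factor and lens values are assumptions for illustration:

```python
def dof_approx(n_stop, coc_mm, distance_mm, focal_mm):
    # Far-field depth-of-field approximation: 2*N*c*d^2 / f^2.
    return 2 * n_stop * coc_mm * distance_mm ** 2 / focal_mm ** 2

k = 2.0  # assumed crop factor between the two sensors

# "Identical pictures": same subject distance and field of view, so the
# small sensor uses focal length f/k and a tighter CoC of c/k.
full_frame = dof_approx(2.8, 0.030, 3000, 50)
small_sensor = dof_approx(2.8, 0.030 / k, 3000, 50 / k)

# The shorter focal length (squared, in the denominator) outweighs the
# tighter CoC, so the small sensor gets k times the DOF:
assert abs(small_sensor / full_frame - k) < 1e-9
```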

For those that want a more complete understanding with all the math, van Walree has this discussion on the theory of DOF entitled "Derivations of the DOF Equations."

http://toothwalker.org/optics/dofderivation.html

This is one of the best articles I know of because it discusses symmetrical and asymmetrical lenses and geometry of image formation.
 
Have fun in your quest...

BTW, don't ever call me "Mr. Davis" again... ;-)
Mr. Chuxter, (OK?) Thanks for your additional insights. When I first posted this thread, I expected a lot of nasty wisecracks, but I have been impressed by the seriousness of the replies. Thanks.
We are, when appropriate, serious around here. Sadly, that doesn't always transpire. BUT, where did you come up with that "nasty wisecracks" $hit? ;-)

BTW, you are getting closer...next time leave off the "Mr."... ;-)

--
Charlie Davis
Nikon 5700, Sony R1, Nikon D50, Nikon D300
HomePage: http://www.1derful.info
"He had a photographic memory which was never developed."
 
While I agree with your summary (and thanks for posting the links), the problem I see is this: if someone makes a position statement about an idea being debated (the first link) and then tries to prove it correct by showing formulas containing half the letters of the alphabet (the second link, the derivation of the DOF equations), the folks who took the opposing position will either a) be so impressed that someone can describe a physical property with mathematical equations that they conclude the author really understands the concept, or b) decide the author is just throwing formulas around to confuse the masses, because someone who really understands a concept can describe it to a layman without long equations and jargon. A lot of people who can’t visualize what a formula represents would tend to choose b.

Besides, who is Toothwalker, and why should someone take this photographer's word (or mine, for that matter) as gospel?

So let me try a different way, basing the argument on what a famous lens manufacturer says about the subject:

When you change the focal length of a lens on the same camera, at least two things occur. The field of view changes and the actual dimension in mm of the aperture changes. Let’s say you go from a telephoto lens set at f/2.8 to a wide angle lens set at f/2.8. The field of view increases while at the same time the lens opening in mm gets smaller. Both are facts described in the Zeiss article on Depth of Field and Bokeh.

If you look at the geometric diagrams in that article, you would see that as a result, a cone of light with its apex at the subject and its base at the lens opening would get narrower. Narrower light cones result in smaller circles where the cone intersects the object plane, the only plane actually in focus.

According to Zeiss, the DOF is related to the ratio of the size of this intersection to the diameter of the field of view. Now, since the field of view got bigger and the cone got narrower in that example, the DOF increased on both counts.

Now let’s look at changing the size of the sensor. If we start with the same lens and only reduce the sensor, we would be left with a crop of the original field of view. To make a fair comparison, we would then be required to use a wider angle lens to recreate the same field of view but on the smaller sensor. So far, we think we have been able to recreate the exact image on the smaller sensor. However, when we changed lenses, we not only increased the field of view, we also got a smaller aperture – not in terms of f/number, but in actual diameter. We ended up with a smaller opening (entrance pupil) as seen from the object. Recalling the cones of light description above, that increases the DOF even though the f/number stayed constant. So the image on the smaller surface, while including the same field of view is actually not exactly the same as the image on the larger sensor. Instead it’s the image you would get if you made the f/number higher on the larger sensor by the ratio of the two sensor sizes.
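A quick sketch of the entrance-pupil point, assuming a 2x difference in sensor size and illustrative focal lengths:

```python
def pupil_diameter_mm(focal_mm, f_number):
    # At a fixed f-number, the entrance pupil diameter is focal length / N.
    return focal_mm / f_number

full_frame = pupil_diameter_mm(50, 2.8)    # roughly 17.9 mm opening
small_sensor = pupil_diameter_mm(25, 2.8)  # same FOV on a 2x-smaller sensor: roughly 8.9 mm

# Same f-number, but the shorter lens has a physically smaller opening,
# hence the narrower light cones and greater DOF described above.
assert small_sensor < full_frame
```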

Steephill correctly pointed out that a 10MP large sensor and a 10MP small sensor, each with say 3600 pixels across its length, would result in a 10 inch photograph when printed at 360dpi, in other words, the same enlargement. However, the image on each sensor is different if you look closely. The difference is the depth of field. This is as true for digital as it was for film.

--
Robert
 
While I agree with your summary (and thanks for posting the links), the problem I see is this: if someone makes a position statement about an idea being debated (the first link) and then tries to prove it correct by showing formulas containing half the letters of the alphabet (the second link, the derivation of the DOF equations), the folks who took the opposing position will either a) be so impressed that someone can describe a physical property with mathematical equations that they conclude the author really understands the concept, or b) decide the author is just throwing formulas around to confuse the masses, because someone who really understands a concept can describe it to a layman without long equations and jargon. A lot of people who can’t visualize what a formula represents would tend to choose b.

Besides, who is Toothwalker, and why should someone take this photographer's word (or mine, for that matter) as gospel?

So let me try a different way, basing the argument on what a famous lens manufacturer says about the subject:

When you change the focal length of a lens on the same camera, at least two things occur. The field of view changes and the actual dimension in mm of the aperture changes. Let’s say you go from a telephoto lens set at f/2.8 to a wide angle lens set at f/2.8. The field of view increases while at the same time the lens opening in mm gets smaller. Both are facts described in the Zeiss article on Depth of Field and Bokeh.

If you look at the geometric diagrams in that article, you would see that as a result, a cone of light with its apex at the subject and its base at the lens opening would get narrower. Narrower light cones result in smaller circles where the cone intersects the object plane, the only plane actually in focus.

According to Zeiss, the DOF is related to the ratio of the size of this intersection to the diameter of the field of view. Now, since the field of view got bigger and the cone got narrower in that example, the DOF increased on both counts.

Now let’s look at changing the size of the sensor. If we start with the same lens and only reduce the sensor, we would be left with a crop of the original field of view. To make a fair comparison, we would then be required to use a wider angle lens to recreate the same field of view but on the smaller sensor. So far, we think we have been able to recreate the exact image on the smaller sensor. However, when we changed lenses, we not only increased the field of view, we also got a smaller aperture – not in terms of f/number, but in actual diameter. We ended up with a smaller opening (entrance pupil) as seen from the object. Recalling the cones of light description above, that increases the DOF even though the f/number stayed constant. So the image on the smaller surface, while including the same field of view is actually not exactly the same as the image on the larger sensor. Instead it’s the image you would get if you made the f/number higher on the larger sensor by the ratio of the two sensor sizes.
There have been a lot of good posts in this thread, thanks for another one.

However, at this point I feel uncomfortable again:
Steephill correctly pointed out that a 10MP large sensor and a 10MP small sensor, each with say 3600 pixels across its length, would result in a 10 inch photograph when printed at 360dpi, in other words, the same enlargement.
It's the word "enlargement" here. I think it was stated previously that apples and oranges were being compared. I think that's the case here.

Unless the pixel in the image is related to the dimensions of the sensor from which it was captured, it is a dimensionless number.

A pixel is just a dot of light, having a particular brightness and red/green/blue composition. It is an abstract quantity, with no physical height or width. The idea that printing an image at some specific number of pixels per inch amounts to a specific enlargement is not true. I really think a different word or term needs to be used in this context, rather than "enlargement".

In fact I would say that introducing pixels into this thread is an unnecessary complication. Depth of field can be dealt with without the use of pixels at all, since the behaviour of the imaging system is the same regardless of whether we use film, a digital sensor, or any other method of capture.

The one exception that I can think of is when the size of the circle of confusion is comparable with, or smaller than, the pixel spacing on the sensor. Making due allowance for the anti-aliasing and Bayer demosaicing, the smallest detail resolvable by the sensor may be a bit larger than a pixel.

The limit placed on the detail captured by the sensor can effectively set a limit on the size of the circle of confusion. Certainly in the earliest digital cameras of 1 megapixel or less, this was a very real and significant factor.

Nowadays, though this principle is still true, it is often left out of the discussion. For example, a depth of field calculator will not mention the megapixel count of the sensor. That's because the subject is dealt with purely on the basis of the geometry of the lens and the light rays forming the image. Similarly, in the film days, the grain size of the film emulsion was not included in the calculation either, though it too could place a limit on the size of fine detail which could be captured.
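If one did want to fold the pixel limit into a calculation, a crude sketch might look like this; the factor of two standing in for anti-aliasing and demosaicing losses, and the numbers below, are my assumptions rather than figures from the thread:

```python
def effective_coc_mm(geometric_coc_mm, pixel_pitch_mm):
    # The effective blur limit is whichever is larger: the geometric CoC
    # from the lens, or roughly two pixel pitches on the sensor.
    return max(geometric_coc_mm, 2 * pixel_pitch_mm)

# Modern fine-pitch sensor: the geometric CoC dominates, so DoF
# calculators can safely ignore the pixel count.
assert effective_coc_mm(0.030, 0.005) == 0.030

# Coarse early sensor (around 1 MP): the pixel pitch sets the floor.
assert effective_coc_mm(0.030, 0.020) == 0.040
```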
However, the image on each sensor is different if you look closely. The difference is the depth of field. This is as true for digital as it was for film.
Well, there I agree again: treating the sensor as a whole is an appropriate way to consider the matter in the context of depth of field.

I think one of the reasons for the meaninglessness of pixels is the reluctance of camera makers to include a correct and meaningful pixels-per-inch figure in the EXIF data of digital camera images. Beginners frequently ask in these forums how it is that their camera is producing images of 72 dpi or 360 dpi or whatever figure. In fact these figures are all arbitrary and of no real meaning. What would be meaningful would be to use the actual pixels-per-inch figure of the imaging sensor, and place that into the EXIF. Then we would actually be able to correctly and easily derive the actual degree of enlargement when making a print of a particular size.

Regards,
Peter
 
While I agree with your summary (and thanks for posting the links), the problem I see is this: if someone makes a position statement about an idea being debated (the first link) and then tries to prove it correct by showing formulas containing half the letters of the alphabet (the second link, the derivation of the DOF equations), the folks who took the opposing position will either a) be so impressed that someone can describe a physical property with mathematical equations that they conclude the author really understands the concept, or b) decide the author is just throwing formulas around to confuse the masses, because someone who really understands a concept can describe it to a layman without long equations and jargon. A lot of people who can’t visualize what a formula represents would tend to choose b.
This thread has gone way beyond a "Beginners Question". Many people are reading this thread and getting misinformation. van Walree answers the original poster's questions, and does so in an easy-to-read format that is proofread, free of the numerous errors found in many of the posts here, and illustrated to make the major concepts easy to see (something that no one here has done).
Besides, who is Toothwalker, and why should someone take this photographer's word (or mine, for that matter) as gospel?
If you took the time to read the site and his About page, you would know.

http://toothwalker.org/about.html

If you read his Optics page:
http://toothwalker.org/optics.html

You would have read his credits and learned that Carl Zeiss reviewed and supplied material for it.

Who is "The Sage Knows"? At least I can "Google" Paul van Walree and find out he knows what he is talking about.

I don't mean to be rude or offensive in any of my comments. If anyone has taken offense to anything I've written, I apologize in advance. You can "Google" my name and find my background and learn that I have 30-plus years of teaching photography at various levels, including several universities. It's just that understanding DOF is not rocket science; it's really simple to understand if you read the links in my post above.

My post is just meant for those that want an answer and not a multi-page thread that reaches no conclusion or agreement and leaves most readers more confused than when they started. The links I posted above answer all questions related to DOF.
So let me try a different way, basing the argument on what a famous lens manufacturer says about the subject:

When you change the focal length of a lens on the same camera, at least two things occur. The field of view changes and the actual dimension in mm of the aperture changes. Let’s say you go from a telephoto lens set at f/2.8 to a wide angle lens set at f/2.8. The field of view increases while at the same time the lens opening in mm gets smaller. Both are facts described in the Zeiss article on Depth of Field and Bokeh.

If you look at the geometric diagrams in that article, you would see that as a result, a cone of light with its apex at the subject and its base at the lens opening would get narrower. Narrower light cones result in smaller circles where the cone intersects the object plane, the only plane actually in focus.

According to Zeiss, the DOF is related to the ratio of the size of this intersection to the diameter of the field of view. Now, since the field of view got bigger and the cone got narrower in that example, the DOF increased on both counts.

Now let’s look at changing the size of the sensor. If we start with the same lens and only reduce the sensor, we would be left with a crop of the original field of view. To make a fair comparison, we would then be required to use a wider angle lens to recreate the same field of view but on the smaller sensor. So far, we think we have been able to recreate the exact image on the smaller sensor. However, when we changed lenses, we not only increased the field of view, we also got a smaller aperture – not in terms of f/number, but in actual diameter. We ended up with a smaller opening (entrance pupil) as seen from the object. Recalling the cones of light description above, that increases the DOF even though the f/number stayed constant. So the image on the smaller surface, while including the same field of view is actually not exactly the same as the image on the larger sensor. Instead it’s the image you would get if you made the f/number higher on the larger sensor by the ratio of the two sensor sizes.

Steephill correctly pointed out that a 10MP large sensor and a 10MP small sensor, each with say 3600 pixels across its length, would result in a 10 inch photograph when printed at 360dpi, in other words, the same enlargement. However, the image on each sensor is different if you look closely. The difference is the depth of field. This is as true for digital as it was for film.

--
Robert
 
I really think a different word or term needs to be used in this context, rather than "enlargement".
I agree we may not always be making an “enlargement”. If someone should obtain an 8x10 size 10MP digital field camera and then print a 4x5 from it, the print could be called a “reduction” I suppose.

The point was that when you create a 360dpi print from any 10MP sensor, you get the same 8”x10” print. And I believe this fact is used in the argument that depth of field doesn’t depend on enlargement at all. So by acknowledging that yes, you can create the same-size print from different sensors with the same field of view projected on them, while explaining why the images actually look slightly different if you inspect them closely, you stand a better chance of proving your case.

--
Robert
 
This thread has gone way beyond a "Beginners Question". Many people are reading this thread and getting misinformation. van Walree answers the original poster's questions, and does so in an easy-to-read format that is proofread, free of the numerous errors found in many of the posts here, and illustrated to make the major concepts easy to see (something that no one here has done).
Besides, who is Toothwalker, and why should someone take this photographer's word (or mine, for that matter) as gospel?
If you took the time to read the site and his About page, you would know.

http://toothwalker.org/about.html
In fact, I did look at this before posting and read that he is "an underwater acoustician." To his credit he states that he is "not professionally involved in photography, optics or web design." So what confidence can we put into a photography presentation by someone with those credentials? Unless you already know the subject and agree with their position, not much. You've already stated (correctly) that there has been a lot of misinformation put out on this thread.
If you read his Optics page:
http://toothwalker.org/optics.html

You would have read his credits and learned that Carl Zeiss reviewed and supplied material for it.
That's good. I've used material from Carl Zeiss as well. I didn't say I disagreed with him anywhere in my post, did I?
Who is "The Sage Knows"? At least I can "Google" Paul van Walree and find out he knows what he is talking about.
Did you happen to catch the part in the parentheses:
Besides, who is Toothwalker, and why should someone take this photographer's word (or mine, for that matter) as gospel?
I'm saying that whenever I can, I prefer to cite a reputable authority (notwithstanding my tongue in cheek remark about bookmarking my own post) not some blogger, and not some Wikipedia page that many others have used to support their assertions.
It's just that understanding DOF is not rocket science, it's really simple to understand if you read the links in my post above.
Yes, it is simple to understand. That's why I prefer not to show how "easy" it is by presenting a pageful of formulas and equations. I don't know whether you actually read the Zeiss article, but if you had, you might have concluded that H.H. Nasse did a pretty good job of explaining the concepts without resorting to the kinds of formulas that your second link showed. Our OP appears to be quite comfortable with math, so in this case you may actually have answered their question better. I'm not trying to compete here, just providing access to another correct way to visualize the concepts.
My post is just meant for those that want an answer and not a multi-page thread that reaches no conclusion or agreement and leaves most readers more confused than when they started. The links I posted above answer all questions related to DOF.
You realize that nearly everyone who contributed a response feels exactly the same way about their post? :)
I don't mean to be rude or offensive in any of my comments. If anyone has taken offense to anything I've written, I apologize in advance. You can "Google" my name and find my background and learn that I have 30-plus years of teaching photography at various levels, including several universities.
No offense taken. Likewise for my comments.

--
Robert
 
The books are correct and people here have twisted ideas from lack of proper training :-)
Sensor size is shorthand for 'degree of enlargement' :-)
So read and inwardly digest what the books say and you will have the 'gen' :-)
 
In your first para you are a little off in saying that sensor size has nothing to do with DoF, because the degree of enlargement is an oft-ignored parameter in the matter. Since the degree of enlargement varies with sensor size for a given print, the required circle of confusion needs to vary as well.

In my text book, a rather ancient Ilford Manual of Photography (1948 edition) [I started my training in 1952 and passed my City and Guilds photographic exam in 1953], the article starts by talking about 10x8 cameras, where I suspect a much bigger CoC is permissible because one can also assume that contact prints are often made. To make an acceptable 10x8 inch print from 35mm, the CoC is calculated as f/1000 (the focal length divided by 1000), but one would need a smaller CoC for an APS-C camera and an even smaller one for a P&S camera.
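Reading that rule as CoC equals the (normal) focal length divided by 1000, it scales down with format roughly like this; the specific focal lengths are illustrative assumptions, not values from the Ilford Manual:

```python
def coc_mm(normal_focal_mm):
    # Old rule of thumb: permissible circle of confusion is about
    # one thousandth of the focal length.
    return normal_focal_mm / 1000

film_35mm = coc_mm(50)  # 35mm film with a 50mm normal lens: 0.05 mm
aps_c = coc_mm(28)      # a shorter APS-C-style normal lens: 0.028 mm

# The smaller format, with its shorter normal lens, demands a tighter CoC.
assert aps_c < film_35mm
```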

So I suggest that you trust the books rather than forum threads, where knowledge often comes from reading other material and perhaps not fully understanding what we have read. I expect I am as fallible as anyone in this, as indeed you have not appreciated the connection between sensor/film gate size, the resulting enlargement requirements, and DoF.

This is not the only thread on the subject in various forums, and I went back to my reference book because, frankly, it is not a subject that needs my attention, since I passed my C&G with effectively a credit/distinction-level pass. A long time ago :-)
 
The books are correct and people here have twisted ideas from lack of proper training :-)
Sensor size is shorthand for 'degree of enlargement' :-)
So read and inwardly digest what the books say and you will have the 'gen' :-)
In the light of your points made above, would you care to explain to me how it is that small sensors yield greater DoF, when they require greater enlargement to make any one size of print, which would seem to have the opposite effect?

Now, I know that small sensors DO give greater DoF than large ones, despite our intuition suggesting the opposite....

..... I just wonder how people are currently explaining it. :-)
--
Regards,
Baz

"Ahh... But the thing is, they were not just ORDINARY time travellers!"
 
In the light of your points made above, would you care to explain to me how it is that small sensors yield greater DoF, when they require greater enlargement to make any one size of print, which would seem to have the opposite effect?
That's a superb question and points to one reason that things are so confusing.

It is true that enlarging an image decreases apparent DoF; this is easy to see just by moving closer to a photo you are looking at.

So why does a small sensor (which requires more enlargement = display.size/sensor.size) have a greater DoF?

The reason is that there is also a kind of reverse enlargement that takes place when the image is optically squished to fit on the sensor in the first place. This reverse enlargement is sometimes called "magnification" or "reproduction ratio" = sensor.size/scene.size.

The key thing that happens physically is that when the lens is moved away from the correct focus position, the reproduction ratio is applied twice: once for the lens moving with respect to the subject, and again for the lens moving with respect to the sensor. It is a consequence of how lenses work.

The increase in enlargement required by the small sensor cancels ONE of the reproduction ratio effects but not the other. This results in DoF being inversely proportional to sensor size.

Here's the simplified math if you want to see what's happening:

DoF = 2[display.pixel.size/sensor.enlargement]Fstop/[reproduction.ratio]^2

sensor.enlargement = display.width/sensor.width
reproduction.ratio = sensor.width/scene.width

substituting these relationships into the original results in:

DoF = 2[display.pixel.size/display.width]Fstop[scene.width^2/sensor.width]

Notice that for the same Fstop and scene width DoF is inversely proportional to sensor width.
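The substitution can be checked numerically; all of the sizes below are made-up illustrative values, chosen only to confirm the algebra:

```python
# Made-up sizes (mm) for a numeric check of the substitution above.
display_pixel_size = 0.25
display_width = 250.0
sensor_width = 24.0
scene_width = 2000.0
f_stop = 2.8

sensor_enlargement = display_width / sensor_width
reproduction_ratio = sensor_width / scene_width

# Original form: DoF = 2[display.pixel.size/sensor.enlargement]Fstop/[reproduction.ratio]^2
original = 2 * (display_pixel_size / sensor_enlargement) * f_stop / reproduction_ratio ** 2

# Substituted form: DoF = 2[display.pixel.size/display.width]Fstop[scene.width^2/sensor.width]
substituted = 2 * (display_pixel_size / display_width) * f_stop * (scene_width ** 2 / sensor_width)

# Both forms agree, and the second makes the 1/sensor.width dependence explicit.
assert abs(original - substituted) < 1e-6
```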

Check out Wikipedia's DoF entry if you want more math details...

I apologize for the math here, but it is necessary, I think; these relations are not opinions, they are physics, and the language of physics is math.
 
