DOF and Cropping/Enlargement continued...

…oh, heaven forgive me… bless you all for your patience.

The argument is basically one of semantics anyway.

• Here is one point I want to make. Now consider this for a moment. You have two sensors with the same pixel density. One is APS-C one is FF. You shoot the same scene with the same lens set at the same aperture and focal length… got that firmly in your mind. Now explain to a simpleton like me how they are going to render a different DOF?
We've been through this before, but we are trying to be patient. Simply because one will be enlarged more than the other to make the same sized print. And whatever blur is present in the original image will be expanded more; it will be bigger. And when viewed from the same distance, the bigger blur in the print from the smaller sensor will be more apparent to the viewer. If the size of the blur crosses the threshold of his acuity (his ability to detect detail), it will be deemed "out of focus" by the viewer. In the print from the smaller sensor, more of the area closer to the plane of best focus will be visibly blurred, and deemed out of focus; thus the range of object distances considered in focus will be narrower.
I have been through this before, but I am being patient. Yes like you say "whatever blur is present in the original image will be expanded more". I agree and have said the same thing if you would read my posts.

Now listen carefully here… because this is the last time. The sensor captures the light from the lens. Ok? Do we agree? The Lens projects this light. Agreed? The size of the sensor (everything else being equal) will only affect how much of the image is recorded by the camera.

This is a fact. And with that I am done on that point.
• Here is another. Enlarging images will multiply aspects of them that are already there, but it will not change them.

The picture of the boy by the cannon is a good example to illustrate my first point (which ties in with the second): http://www.dpreview.com/forums/post/54739267

Here are the images again. Try this experiment with this image in your mind once, and then please explain how sensor size has anything to do with the DOF.
Not sure of your point here. Do you disagree that the knob of the cannon is visibly blurrier in the enlarged image? If not, then you agree that enlargement affects depth of field. If you don't see a difference, then I agree, you will have trouble relating to this conversation.
YES it IS blurrier. And those are the original properties of the image. If you would follow the link to the post, it is better explained there, and there is a little experiment that you can run dealing with this image.
I am a professional photographer and graphic designer with a degree in this field. Not that any of that matters; this debate is pointless. It's been a few hours, isn't it about time someone visited my website and then sent me some personal insults? www.craigpilecky.com Someone want to tell me how stupid all my artwork is? I'm sure that will help prove your point. But forgive me, I didn't mean to offend anyone for their patience.
No one has any desire to insult you, and I'm sure your photos are great. But the concepts of depth of field were worked out years ago, and they really are dependent on perception and enlargement. You don't need to believe us; do your own research.

Dave
 
DOF is a 100% perceptual concept. There is only one in-focus plane, and that plane is infinitely thin.

"Depth of field refers to the range of distance that appears acceptably sharp. It varies depending on camera type, aperture and focusing distance, although print size and viewing distance can also influence our perception of depth of field." Link

"In optics, particularly as it relates to film and photography, depth of field (DOF) is the distance between the nearest and farthest objects in a scene that appear acceptably sharp in an image. Although a lens can precisely focus at only one distance at a time, the decrease in sharpness is gradual on each side of the focused distance, so that within the DOF, the unsharpness is imperceptible under normal viewing conditions." Link

"A camera can only focus its lens at a single point, but there will be an area that stretches in front of and behind this focus point that still appears sharp." Link

"Depth of field is the amount of distance between the nearest and farthest objects that appear in acceptably sharp focus in a photograph." Link

"When a lens focuses on a subject at a distance, all subjects at that distance are sharply focused. Subjects that are not at the same distance are out of focus and theoretically are not sharp. However, since human eyes cannot distinguish very small degree of unsharpness, some subjects that are in front of and behind the sharply focused subjects can still appear sharp. The zone of acceptable sharpness is referred to as the depth of field." Link

As I said, DOF is a 100% perceptual concept.
Of course, there are other forms of blur that are not related to DOF, such as large pixels. In any case, another excellent explanation of the matter. I see there is still some confusion over both your explanation above and your photos from the previous thread. When you're done here, let's talk about Equivalence. ;-)
 
Like I said in my other reply enlarging images will multiply aspects of them that are already there, but it will not change them.
But your perception of them and their sharpness will change depending on the size of the resulting image and your viewing distance. And that's what's determining the DoF. That's the point you're missing.
I GET WHAT YOU ARE SAYING. I got it this morning when I first opened the original thread on this topic. You don't seem to get my point though. The characteristics that make the image what it is (which is what determines DOF) are captured at the moment of shutter release.
The interesting thing about you is that you are arguing vehemently about DoF and it is very clear that you have no real understanding of its definition. It is NOT the characteristics of the capture that determine DoF; it is how those characteristics are perceived in the final image. And the perceptions of those characteristics can change depending on enlargement, viewing distance, and visual acuity.

DoF is determined backwards from the viewed image, not forwards from the sensor. You would do yourself (and all of us) a favor if you would actually go back and learn what DoF actually is, not just what you want it to be.

Assuming a person with average visual acuity viewing separate lines on an 8 x 10 print from a distance of 1 foot, it is determined by experiment that separations of roughly 1/100 inch cannot be distinguished. Such a "blur" is therefore considered sharp. The 8x10 image would correspond to a blow-up of a FF sensor of about 7.5 fold. Thus the 1/100 inch (or 0.254 mm) distance on the print would correspond to roughly a .254/7.5 = .03 mm separation on a FF sensor.

This is what defines the CoC (circle of confusion) on the sensor. It is derived backwards from visual perceptions, not forwards from sensor technology. The sensor and its pixels have nothing to do with it.

Items in the camera's field of view that impinge on the sensor within that CoC, when blown up to an 8 x 10 image and viewed at a distance of 1 foot will appear effectively as one, and hence blurs of this magnitude will not be noticed (will appear sharp). Differences from the focal plane creating blurs of this magnitude or smaller determine the DoF.

However, if you blow up the image further, or you view it more closely, those items that appear together under the standard conditions (blowup and viewing distance) will now, under closer scrutiny, appear separate (blurry). And hence the DoF changes.
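The backwards derivation above can be sketched numerically. A minimal sketch in Python, using the values assumed in the explanation (1/100 inch acuity threshold on the print, roughly 7.5x enlargement from a full-frame sensor to an 8x10):

```python
# Backwards derivation of the CoC from viewing conditions, as described
# above: start from what the eye can resolve on the print, then divide
# by the enlargement from sensor to print.
resolvable_on_print_mm = 0.254     # ~1/100 inch at 1 ft, average acuity
enlargement_ff_to_8x10 = 7.5       # 24x36 mm frame -> 8x10 print, roughly

coc_mm = resolvable_on_print_mm / enlargement_ff_to_8x10
print(round(coc_mm, 2))            # -> 0.03 mm, the conventional FF CoC
```

Note that nothing about the sensor's pixels appears anywhere in the calculation; only the enlargement factor and the viewer's acuity do.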
This is the moment that determines everything else.

Every other argument is just semantics… I am really trying to be patient here.
It's not just semantics. When a concept has a specific definition, it is no longer just a matter of semantics as to what it means. When you apply notions that are in conflict with the definition, you are corrupting the notion, not massaging it.

--
gollywop

 
…oh, heaven forgive me… bless you all for your patience.

The argument is basically one of semantics anyway.

• Here is one point I want to make. Now consider this for a moment. You have two sensors with the same pixel density. One is APS-C one is FF. You shoot the same scene with the same lens set at the same aperture and focal length… got that firmly in your mind. Now explain to a simpleton like me how they are going to render a different DOF?
We've been through this before, but we are trying to be patient. Simply because one will be enlarged more than the other to make the same sized print. And whatever blur is present in the original image will be expanded more; it will be bigger. And when viewed from the same distance, the bigger blur in the print from the smaller sensor will be more apparent to the viewer. If the size of the blur crosses the threshold of his acuity (his ability to detect detail), it will be deemed "out of focus" by the viewer. In the print from the smaller sensor, more of the area closer to the plane of best focus will be visibly blurred, and deemed out of focus; thus the range of object distances considered in focus will be narrower.
I have been through this before, but I am being patient. Yes like you say "whatever blur is present in the original image will be expanded more". I agree and have said the same thing if you would read my posts.

Now listen carefully here… because this is the last time. The sensor captures the light from the lens. Ok? Do we agree? The Lens projects this light. Agreed? The size of the sensor (everything else being equal) will only affect how much of the image is recorded by the camera.

This is a fact. And with that I am done on that point.
Congratulations, you've now reasserted the obvious.
• Here is another. Enlarging images will multiply aspects of them that are already there, but it will not change them.

The picture of the boy by the cannon is a good example to illustrate my first point (which ties in with the second): http://www.dpreview.com/forums/post/54739267

Here are the images again. Try this experiment with this image in your mind once, and then please explain how sensor size has anything to do with the DOF.
Not sure of your point here. Do you disagree that the knob of the cannon is visibly blurrier in the enlarged image? If not, then you agree that enlargement affects depth of field. If you don't see a difference, then I agree, you will have trouble relating to this conversation.
YES it IS blurrier. And those are the original properties of the image.
So is the lack of blur in the smaller version also among the original properties of the image? Remarkable -- an original image can hold both sharpness and blur of the same object at the same time! And indeed, in a sense, it does. The original properties of the image CAN be perceived as either sharp or blurry, depending on the extent to which they are enlarged. But that once again shows that the properties recorded in the file are only the starting point for an evaluation of depth of field. The knob of the cannon can't be considered either sharp or blurry until it is known how much it will be enlarged.
If you would follow the link to the post, it is better explained there, and there is a little experiment that you can run dealing with this image.
The point made there was immediately debunked by the following responses. If you enlarge both prints so the subject matter is the same size (that is, apply the same percentage enlargement to both), you don't end up with comparable prints. The one from the full frame will be larger than the other. And if you crop it to the same size you have just created another crop -- so you have simply asserted the circular argument that if you crop two images to the same framing and then enlarge them to the same degree, they have the same depth of field. Duh.

The other problem with that approach is that it means that, for the depth of field to be the same, images from crop sensors would always have to be printed smaller than those from full frame sensors. Not only that, they would have to be printed in exactly the same proportion to their crop factor, compared to full frame. If you accidentally -- or willfully -- slip up, then you have to acknowledge that the depth of field will be different. And if you really screw up and print them to the same final size -- then you will come face to face with the fact that the crop sensor provides less depth of field when all other factors are held constant -- just as predicted by all the DoF calculators.

Incidentally, you never have responded about why you think all those calculators give different results when only the sensor size has changed. Why do you suppose that is?
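The calculator behavior in question is easy to reproduce. A minimal sketch using the standard thin-lens DoF formulas; the 0.030 mm and 0.020 mm CoC values are the conventional full-frame and APS-C assumptions, and the 50 mm / f/4 / 10 ft scenario is taken from this thread:

```python
def dof_mm(f_mm, N, s_mm, coc_mm):
    """Total depth of field in mm (thin-lens approximation)."""
    H = f_mm ** 2 / (N * coc_mm) + f_mm        # hyperfocal distance
    near = H * s_mm / (H + (s_mm - f_mm))      # near limit of sharpness
    if s_mm >= H:                              # focused at/past hyperfocal:
        return float("inf")                    # far limit is at infinity
    far = H * s_mm / (H - (s_mm - f_mm))       # far limit of sharpness
    return far - near

# Same lens, same settings: 50 mm, f/4, subject at 10 ft (~3048 mm).
# The ONLY input that changes is the CoC, i.e. the assumed enlargement
# of each format to a standard-sized print.
print(round(dof_mm(50, 4, 3048, 0.030)))   # full frame: 894 mm
print(round(dof_mm(50, 4, 3048, 0.020)))   # APS-C: 589 mm, narrower
```

The projected image is identical in both cases; the calculators differ only because the smaller format's capture must be enlarged more, which is exactly what the smaller CoC encodes.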
I have done my research.
Well, with all due respect, you haven't explored the reason that depth of field is defined the way it is, and why all conventions, from calculators to depth of field scales on lenses, are based on values derived from human perception. If you had, you would at least be coming back with technical arguments about how the established methodology is flawed, or how equivalent results can be achieved with a different approach, or something. Instead you've simply reasserted the same obvious point over and over, without ever providing a compelling reason why perception can be disregarded.

Dave
 
Like I said in my other reply enlarging images will multiply aspects of them that are already there, but it will not change them.

I GET WHAT YOU ARE SAYING. I got it this morning when I first opened the original thread on this topic. You don't seem to get my point though. The characteristics that make the image what it is (which is what determines DOF) are captured at the moment of shutter release. This is the moment that determines everything else.
Obviously everybody knows that the characteristics of the capture are locked into place (in the file) at the time of capture. I'm not sure why you think that is news, or even a particularly remarkable insight.
I didn't. I thought it was a moronically obvious one.
But the key point is that the story doesn't -- can't -- end there. The whole point of taking a picture is for it to be seen, right? And the impression that the picture gives matters -- really matters -- right? Well, there are a whole lot of things that happen between capture and exhibition that affect how the final result will be regarded. With respect to depth of field, the key issue is that enlargement happens. Enlargement unavoidably enlarges blur. And the degree of enlargement unavoidably affects what degree of defocus in the original capture will be detectable, and thus regarded as blurry, or "not in focus", by the viewer.
Exactly. You are just putting your nose closer to the image (as I said in my first post on the topic this morning). Apparently some people want to call this "changing the DOF". Ok, whatever… it's just semantics.

We do not fundamentally disagree here. Never did. http://www.dpreview.com/forums/post/54746833
So the original capture is just the beginning. Nothing is either in focus or out of focus in the original capture; there is just a gradient scale of degrees of blur. It isn't possible to determine where that degree of blur becomes unacceptable until the image is enlarged to its viewable state. That's why everyone is telling you the perception is 100% of depth of field -- depth of field just doesn't exist without factoring in perception.
Why, when I said it's a matter of perception, was I shouted down numerous times, only to be told that… DOF is being changed by perception?

• My point, which should be plainly obvious even to the most simple, is that a camera with a 50mm lens, focused on a stump 10 feet in front of it, with an aperture of f/4, will project an image with the same DOF characteristics. All other things being equal (pixel density), the sensor's size has nothing to do with this. The size of the sensor will merely determine how much of the image is recorded.

• Will your visual perception be different? Yep!
Every other argument is just semantics… I am really trying to be patient here.
You should probably look up the word "semantics".
Linguistic semantics is the study of meaning that is used for understanding human expression through language. But I am trying to be patient here….
 
Like I said in my other reply enlarging images will multiply aspects of them that are already there, but it will not change them.
But your perception of them and their sharpness will change depending on the size of the resulting image and your viewing distance. And that's what's determining the DoF. That's the point you're missing.
I GET WHAT YOU ARE SAYING. I got it this morning when I first opened the original thread on this topic. You don't seem to get my point though. The characteristics that make the image what it is (which is what determines DOF) are captured at the moment of shutter release.
The interesting thing about you is that you are arguing vehemently about DoF and it is very clear that you have no real understanding of its definition. It is NOT the characteristics of the capture that determine DoF; it is how those characteristics are perceived in the final image. And the perceptions of those characteristics can change depending on enlargement, viewing distance, and visual acuity.

DoF is determined backwards from the viewed image, not forwards from the sensor. You would do yourself (and all of us) a favor if you would actually go back and learn what DoF actually is, not just what you want it to be.

Assuming a person with average visual acuity viewing separate lines on an 8 x 10 print from a distance of 1 foot, it is determined by experiment that separations of roughly 1/100 inch cannot be distinguished. Such a "blur" is therefore considered sharp. The 8x10 image would correspond to a blow-up of a FF sensor of about 7.5 fold. Thus the 1/100 inch (or 0.254 mm) distance on the print would correspond to roughly a .254/7.5 = .03 mm separation on a FF sensor.

This is what defines the CoC (circle of confusion) on the sensor. It is derived backwards from visual perceptions, not forwards from sensor technology. The sensor and its pixels have nothing to do with it.

Items in the camera's field of view that impinge on the sensor within that CoC, when blown up to an 8 x 10 image and viewed at a distance of 1 foot will appear effectively as one, and hence blurs of this magnitude will not be noticed (will appear sharp). Differences from the focal plane creating blurs of this magnitude or smaller determine the DoF.

However, if you blow up the image further, or you view it more closely, those items that appear together under the standard conditions (blowup and viewing distance) will now, under closer scrutiny, appear separate (blurry). And hence the DoF changes.
This is the moment that determines everything else.

Every other argument is just semantics… I am really trying to be patient here.
It's not just semantics. When a concept has a specific definition, it is no longer just a matter of semantics as to what it means. When you apply notions that are in conflict with the definition, you are corrupting the notion, not massaging it.
Craig, you've just been given a masterful summation of the technical derivation of depth of field, for free. You didn't even have to go out and do any research yourself; you've just had it handed to you. I heartily recommend that you put this to good use.

Dave
 
Exactly. You are just putting your nose closer to the image (as I said in my first post on the topic this morning). Apparently some people want to call this "changing the DOF". Ok, whatever… it's just semantics.
Not just "some people", everyone but you as I posted above:

 
Of course, there are other forms of blur that are not related to DOF, such as large pixels. In any case, another excellent explanation of the matter. I see there is still some confusion over both your explanation above and your photos from the previous thread. When you're done here, let's talk about Equivalence. ;-)
The whole thing is just weird.

DOF is a pretty horribly confusing topic, but this isn't the part that's confusing!

Equivalence is just flat simple and has no confusing parts. I don't see why everyone doesn't get that one immediately. They do when it comes to teleconverters, but not when it comes to cropping, even though they are the same. Even weirder!
 
Of course, there are other forms of blur that are not related to DOF, such as large pixels. In any case, another excellent explanation of the matter. I see there is still some confusion over both your explanation above and your photos from the previous thread. When you're done here, let's talk about Equivalence. ;-)
The whole thing is just weird.

DOF is a pretty horribly confusing topic, but this isn't the part that's confusing!
The whole problem is that people don't understand that DOF is a matter of perception, and that the subjective elements of perception are accounted for with the CoC, which is why the proper choice of the CoC is critical if one wants to compute either the absolute DOF or the relative differences in DOF between settings and/or systems.

I think what's confusing many is that there comes a point when the blur due to DOF is subsumed by the blur of large pixels -- blur is blur at that point, as it were. However, what they fail to understand is that if you took a photo of a scene from the same position and focal point with the same settings using, say, a 12 MP D700 and 36 MP D800, the DOFs would be identical for the same display size, viewing distance, and visual acuity.

The more you enlarge the photos, the more you can see the blur of the larger pixels of the D700, but this is entirely different from the blur associated with DOF. Would they claim that photos taken with a sharper lens have a different DOF than photos taken with the same camera and settings as a softer lens? I should hope not.
Equivalence is just flat simple and has no confusing parts.
It has no confusing parts, but there are a few key points that need to be understood first. The first point is the difference and significance of the virtual aperture (entrance pupil), which is the virtual image of the physical aperture (iris) when viewed through the front element of the lens, and the relative aperture (f-ratio), which is the quotient of the focal length and the diameter of the virtual aperture.

The next key point is the difference, and its significance, between the total amount of light that falls on the sensor and the density of light that falls on the sensor during the exposure, and that noise is largely an inherent property of the amount of light that makes up the photo.

After that, the relative contribution of noise that comes from the light itself and the noise that comes from the sensor and supporting hardware needs to be understood, in addition to the (non-)role that ISO plays in the noise in the photo.

Lastly, people need to understand the connection between DOF and the diameter of the [virtual] aperture, and thus how noise and DOF are linked for a given exposure time.

After that, it's not only simple, but natural and intuitive.
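The first of those key points reduces to a one-line relation, sketched here (the 50 mm f/4 vs 75 mm f/6 pairing below assumes a 1.5x crop factor, by way of illustration):

```python
# Entrance-pupil (virtual aperture) diameter from the relative
# aperture, as described above: D = f / N.
def entrance_pupil_mm(focal_length_mm, f_number):
    return focal_length_mm / f_number

# Equivalent lenses on different formats share the same pupil diameter
# for the same angle of view, e.g. 50 mm f/4 on a 1.5x crop body and
# 75 mm f/6 on full frame:
print(entrance_pupil_mm(50, 4))   # 12.5 mm
print(entrance_pupil_mm(75, 6))   # 12.5 mm
```

The same pupil diameter means the same total light gathered from the same scene over the same exposure time, which is where the noise and DOF links in the later points come from.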
I don't see why everyone doesn't get that one immediately. They do when it comes to teleconverters, but not when it comes to cropping, even though they are the same. Even weirder!
I think they accept it for TCs and FRs (focal reducers), as those don't involve comparisons with different formats. But for those that do understand how TCs and FRs work, as opposed to merely accepting that they do work the way they do, I think they likely also understand Equivalence.
 
Like I said in my other reply enlarging images will multiply aspects of them that are already there, but it will not change them.
But it will. A clear example is that a photo will appear more noisy when viewed at greater enlargement.
I GET WHAT YOU ARE SAYING. I got it this morning when I first opened the original thread on this topic. You don't seem to get my point though. The characteristics that make the image what it is (which is what determines DOF) are captured at the moment of shutter release. This is the moment that determines everything else.
I don't think you do get what people are saying, 'cause if you did, you'd not have said what you just said. The DOF is not determined at the moment of capture. The viewing conditions of the photo are all *critical* components of the DOF, and are accounted for with the CoC, and this point is explained in detail here:

Every other argument is just semantics… I am really trying to be patient here.
"Patient" isn't the word I would use. In any event, are you really trying to say that if we took a photo of the same scene from the same position with the same focal point, framing, and aperture using a 12 MP D700 and a 36 MP D800, displayed the photos at the same size, and viewed from the same distance, that the photos would have a different DOF?
 
DOF is locked in at the moment of shutter release. Cropping DOES NOT CHANGE THAT!

Your PERCEPTION MAY CHANGE, but that is all…….
Craig, Lee Jay is being quite patient with you. To avoid further embarrassment later, let me suggest that, rather than simply reasserting your (mistaken) belief in a louder and louder voice, you not only consider his points, but do some independent research into the basis of depth of field. Explanations for why perception is an inherent factor in depth of field are readily available on the internet.

Start by pondering why all depth of field calculators give you a different result for different cameras that have different film or sensor sizes, when all factors (focal length, focus distance, aperture), except for the sensor size, are held constant. Since the image projected by the lens onto the recording surface is the same, the reason the results are different must have something to do with the size of the sensor, right? Now, why would the size of the sensor matter? Could it be that it's because the image from the smaller sensor needs to be enlarged more to make a standard sized print? And that since enlargement makes blur bigger and more noticeable, it will be more perceptible to a viewer? And that some of the bits that are perceived as sharp in one print may be perceived as blurred in the other? And that this difference would translate directly into a different perception of what is in focus, and what is out of focus?

You know, if that were true it would explain why depth of field calculators give different results for different sensor sizes, AND it would explain why Lee Jay keeps telling you that depth of field is based entirely on what is perceived as in or out of focus. But Craig, you'll never know until you look into it yourself, instead of just repeating your assumptions.

Dave
 
Like I said in my other reply enlarging images will multiply aspects of them that are already there, but it will not change them.
But your perception of them and their sharpness will change depending on the size of the resulting image and your viewing distance. And that's what's determining the DoF. That's the point you're missing.
I GET WHAT YOU ARE SAYING. I got it this morning when I first opened the original thread on this topic. You don't seem to get my point though. The characteristics that make the image what it is (which is what determines DOF) are captured at the moment of shutter release.
The interesting thing about you is that you are arguing vehemently about DoF and it is very clear that you have no real understanding of its definition. It is NOT the characteristics of the capture that determine DoF; it is how those characteristics are perceived in the final image.
I believe it's printed to 8x10, viewed at 12", with 20/20 vision. Not your personal final image. If your final image is smaller or larger than 8x10, you may need less or more DoF than the calculated DoF to get acceptable sharpness (per the definition of DoF).
And the perceptions of those characteristics can change depending on enlargement, viewing distance, and visual acuity.
This is true. So, to standardize, I believe the DoF is defined at specific print size, viewing distance and the visual acuity.
DoF is determined backwards from the viewed image, not forwards from the sensor. You would do yourself (and all of us) a favor if you would actually go back and learn what DoF actually is, not just what you want it to be.

Assuming a person with average visual acuity viewing separate lines on an 8 x 10 print from a distance of 1 foot, it is determined by experiment that separations of roughly 1/100 inch cannot be distinguished. Such a "blur" is therefore considered sharp. The 8x10 image would correspond to a blow-up of a FF sensor of about 7.5 fold. Thus the 1/100 inch (or 0.254 mm) distance on the print would correspond to roughly a .254/7.5 = .03 mm separation on a FF sensor.

This is what defines the CoC (circle of confusion) on the sensor. It is derived backwards from visual perceptions, not forwards from sensor technology. The sensor and its pixels have nothing to do with it.

Items in the camera's field of view that impinge on the sensor within that CoC, when blown up to an 8 x 10 image and viewed at a distance of 1 foot will appear effectively as one, and hence blurs of this magnitude will not be noticed (will appear sharp). Differences from the focal plane creating blurs of this magnitude or smaller determine the DoF.
All good stuff; we agree that now we have "the DoF".
However, if you blow up the image further, or you view it more closely, those items that appear together under the standard conditions (blowup and viewing distance) will now, under closer scrutiny, appear separate (blurry). And hence the DoF changes.
No, because DoF is defined being sharp at 8x10, 12", 20/20. If you blow up, you blew up your print but did not change DoF. Go to any DoF calculator and there is no input for the final print size. Now if you wanted sharper print at size greater than 8x10, indeed, you would need to account for the deeper DoF than the calculated value. The landscape photographers knew this and thus went for f/64 even though the calculation would have given you that f/16 is sharp enough. Yep, sharp enough for 8x10 print.
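The print-size dependence being debated here can be made concrete with a little arithmetic (a sketch, assuming the viewing distance is held fixed as the print grows; the 0.030 mm value is the conventional full-frame CoC):

```python
coc_8x10_mm = 0.030        # conventional FF CoC, tied to an 8x10 print

# Doubling the print dimensions (16x20) at the same viewing distance
# doubles the enlargement, so only half as much on-sensor blur now
# stays below the viewer's acuity threshold:
coc_16x20_mm = coc_8x10_mm / 2
print(coc_16x20_mm)        # 0.015 mm
```

Whether one calls the result "a different DoF" or "the same DoF evaluated for a non-standard print" is exactly the terminological split in this exchange; the arithmetic is the same either way.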
This is the moment that determines everything else.

Every other argument is just semantics… I am really trying to be patient here.
It's not just semantics. When a concept has a specific definition, it is no longer just a matter of semantics as to what it means. When you apply notions that are in conflict with the definition, you are corrupting the notion, not massaging it.
Now, you can start giving out the equivalent DoF for the different print sizes, and it will have as much usefulness as the equivalent aperture, IMHO.
Craig, you've just been given a masterful summation of the technical derivation of depth of field, for free. You didn't even have to go out and do any research yourself; you've just had it handed to you. I heartily recommend that you put this to good use.

Dave

--
http://www.pbase.com/dsjtecserv
 
Of course, there are other forms of blur that are not related to DOF, such as large pixels. In any case, another excellent explanation of the matter. I see there is still some confusion over both your explanation above and your photos from the previous thread. When you're done here, let's talk about Equivalence. ;-)
The whole thing is just weird.

DOF is a pretty horribly confusing topic, but this isn't the part that's confusing!

Equivalence is just flat simple and has no confusing parts. I don't see why everyone doesn't get that one immediately. They do when it comes to teleconverters, but not when it comes to cropping, even though they are the same. Even weirder!

--
Lee Jay
If I may: at DPR there seems to be a tendency to take a well-established term and re-explain something that is well understood by learned photographers but not by new photographers. Rather than going back to basics, there seems to be a tendency to invent "equivalents" to make the explanation simpler, which sounds odd to old-timers and traditionally trained photographers.

Take DoF as an example: it has a definition and worked-out math to calculate it. Sensor/film size (via the CoC) is a required input, but the final print size is not, because 8x10 is assumed per the definition.

Now we need to teach the new photographer that if the final print is 16x20, then the DoF calculated for 8x10 is not going to be adequate to get a sharp print. You seem to say that, therefore, the DoF is different for the larger print, to explain why the larger print shows more OOF blur. My take is, again, that the DoF is not different for the larger print; the larger print does show more OOF blur, but the definition of DoF assumes an 8x10 print.

The way human perception works is the reason we can detect more OOF blur on a larger print, not because the larger print has shallower DoF. That may be an equivalent statement, but my point is that it makes for a more learned understanding to explain visual acuity relative to the larger print than to say "oh, now it has shallower DoF."

BTW, do you see some parallel with the idea that same f-stop is different for FF lens vs. crop lens? ;-)
 
Before I get to my questions, I should note that this DOF calculator does allow one to enter a circle of confusion (which could be done to account for print size?) etc.:


This also says the circle of confusion varies


That is not to say there isn't a standard (default) circle of confusion, such as for viewing an 8 x 10 a foot away, though some people can see blur even at the standard circle of confusion that resulted from this standard.

(no link but I could probably find where I read this if needed)

============

This link explains equivalence:


It starts off with the situation where one uses a FF lens on a 4/3 sensor. Since the sensor crops the picture, is it much different than if one took the same picture with a FF camera, cropped it to the same framing when editing, and then printed both at 8 x 10? Cropping before or after makes no difference that I can see?

The part I don't see (and maybe someone can point me to something that explains it) is why things closer or further away are out of focus. I read somewhere it was the total light, but wouldn't that mean shutter speed would also make an impact?

Pictures which deal with just a change in aperture size, rather than cropping by the sensor, show the larger angles at which light is focused with the larger aperture.

So I was sort of assuming that was what the problem was: any point that reflects light to all parts of the aperture would not focus as well the larger the aperture is (and the further it is from the focus point). If that is true, then I don't understand the equivalence (DOF) as shown. The total light is cropped, but the aperture is not cropped, so these angles don't change? The only other thing I can think of is the (sensor) area over which this image is focused. And somehow this works out in the end the same as a smaller (cropped?) aperture focused on the larger FF sensor? Even if the aperture is cropped, it is a little leap of faith to assume this would work out the same as putting the equivalent telephoto on the FF at the cropped aperture.

In fact, another link said these DOF calculations are a bit approximate with different lens constructions etc.
 
Like I said in my other reply, enlarging images will multiply aspects of them that are already there, but it will not change them.
But your perception of them and of their sharpness will change depending on the size of the resulting image and your viewing distance. And that's what determines the DoF. That's the point you're missing.
I GET WHAT YOU ARE SAYING. I got it this morning when I first opened the original thread on this topic. You don't seem to get my point, though. The characteristics that make the image what it is (which is what determines DOF) are captured at the moment of shutter release.
The interesting thing about you is that you are arguing vehemently about DoF and it is very clear that you have no real understanding of its definition. It is NOT the characteristics of the capture that determine DoF; it is how those characteristics are perceived in the final image.
I believe it's printed to 8x10 viewed at 12" with 20/20 vision.
Those are the viewing conditions used to set the CoC for most DOF calculators/tables.
Not your personal final image. If your final image is smaller or larger than 8x10, you may need less or more DoF than the calculated value to get acceptable sharpness (per the definition of DoF).
You calculate the appropriate CoC for the viewing conditions.
And the perceptions of those characteristics can change depending on enlargement, viewing distance, and visual acuity.
This is true. So, to standardize, I believe the DoF is defined at specific print size, viewing distance and the visual acuity.
No, the DOF is most definitely *not* "defined at specific print size, viewing distance and the visual acuity" but the CoC used by default for most DOF calculators/tables is, as the link above explains, for an 8x10 inch print viewed from 10 in away with 20-20 vision.
DoF is determined backwards from the viewed image, not forwards from the sensor. You would do yourself (and all of us) a favor if you would actually go back and learn what DoF actually is, not just what you want it to be.

Assuming a person with average visual acuity viewing separate lines on an 8 x 10 print from a distance of 1 foot, it is determined by experiment that separations of roughly 1/100 inch cannot be distinguished. Such a "blur" is therefore considered sharp. The 8x10 image would correspond to a blow-up of a FF sensor of about 7.5 fold. Thus the 1/100 inch (or 0.254 mm) distance on the print would correspond to roughly a .254/7.5 = .03 mm separation on a FF sensor.

This is what defines the CoC (circle of confusion) on the sensor. It is derived backwards from visual perceptions, not forwards from sensor technology. The sensor and its pixels have nothing to do with it.

Items in the camera's field of view that impinge on the sensor within that CoC, when blown up to an 8 x 10 image and viewed at a distance of 1 foot will appear effectively as one, and hence blurs of this magnitude will not be noticed (will appear sharp). Differences from the focal plane creating blurs of this magnitude or smaller determine the DoF.
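The derivation above can be sketched numerically. This is a hypothetical illustration, not anyone's actual calculator code; the 1/100 inch acuity figure, 8x10 print, 1-foot viewing distance, and 36 mm FF long edge are the assumptions stated in the post.

```python
# Hypothetical sketch: derive the on-sensor CoC backwards from viewing
# conditions, following the reasoning in the post above.

def coc_on_sensor(print_long_edge_in=10.0, viewing_distance_in=12.0,
                  resolvable_in_at_1ft=0.01, sensor_long_edge_mm=36.0):
    """Work backwards from what the eye can resolve on the print
    to the largest blur on the sensor that still looks sharp."""
    # Eye resolution scales linearly with viewing distance.
    resolvable_on_print_in = resolvable_in_at_1ft * (viewing_distance_in / 12.0)
    # Enlargement factor from sensor to print (long edges).
    # The post rounds this to ~7.5; the exact long-edge ratio is 254/36, ~7.1.
    enlargement = (print_long_edge_in * 25.4) / sensor_long_edge_mm
    # Blur on the sensor that maps to the just-resolvable blur on the print.
    return resolvable_on_print_in * 25.4 / enlargement  # in mm

print(round(coc_on_sensor(), 4))  # 0.036, in line with the ~0.03 mm FF standard
```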
All good stuff; we agree that now we have "the DoF".
There is no "the DoF". The DOF is a calculated value that *requires* assumptions about the viewing conditions which are accounted for with the CoC:

http://en.wikipedia.org/wiki/Circle_of_confusion

In photography, the circle of confusion (CoC) is used to determine the depth of field, the part of an image that is acceptably sharp. A standard value of CoC is often associated with each image format, but the most appropriate value depends on visual acuity, viewing conditions, and the amount of enlargement.
However, if you blow up the image further, or you view it more closely, those items that appear together under the standard conditions (blowup and viewing distance) will now, under closer scrutiny, appear separate (blurry). And hence the DoF changes.
No, because DoF is defined as being sharp at 8x10, 12", 20/20.
No, it is not:

http://en.wikipedia.org/wiki/Depth_of_field

In optics, particularly as it relates to film and photography, depth of field (DOF) is the distance between the nearest and farthest objects in a scene that appear acceptably sharp in an image.

Note that what *appears* to be acceptably sharp in a photo depends strongly on the viewing conditions, and DOF is defined accordingly.
If you blow up the image, you blew up your print but did not change the DoF.
Wrong again. But don't just take my word for it:


In other words, the more critical we look at our photographs the more apparent it becomes that there is only one plane that is really sharp. Nonetheless the whole concept of DOF in terms of a region of acceptable sharpness is perfectly valid. As we have just seen, the print size and viewing distance are of considerable importance in any DOF assessment; in a calculation these parameters must be taken into account via a suitable COC choice.

Doesn't leave a lot to the imagination, does it?
Go to any DoF calculator and there is no input for the final print size.
Wrong again. For example, click on the "show advanced" tab for this DOF calculator:

http://www.cambridgeincolour.com/tutorials/dof-calculator.htm
Now if you wanted a sharper print at a size greater than 8x10, indeed, you would need to account for a deeper DoF than the calculated value. Landscape photographers knew this and thus went for f/64 even though the calculation would have told you that f/16 is sharp enough. Yep, sharp enough for an 8x10 print.
Again, use the appropriate CoC for the DOF calculation, or use an advanced calculator such as the one linked above.
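As a sketch of how a print-size-appropriate CoC feeds the calculation, here are the standard thin-lens DoF approximations; this is illustrative code, not any particular calculator's, and the focal length, f-number, and focus distance are made-up numbers.

```python
# Hypothetical sketch: the same DoF formula with a CoC scaled for print size.
# Standard thin-lens approximations (all distances in mm).

def dof_total(f_mm, N, s_mm, coc_mm):
    """Approximate total depth of field for focal length f, f-number N,
    focus distance s, and circle of confusion coc."""
    H = f_mm**2 / (N * coc_mm) + f_mm               # hyperfocal distance
    near = s_mm * (H - f_mm) / (H + s_mm - 2 * f_mm)
    far = s_mm * (H - f_mm) / (H - s_mm)            # valid only when s < H
    return far - near

# Doubling the print's linear size halves the permissible CoC:
standard = dof_total(50, 8, 3000, 0.030)  # CoC for an 8x10 print
larger = dof_total(50, 8, 3000, 0.015)    # 16x20 print, same viewing distance
assert larger < standard  # less of the scene counts as acceptably sharp
```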
This is the moment that determines everything else.

Every other argument is just semantics… I am really trying to be patient here.
It's not just semantics. When a concept has a specific definition, it is no longer just a matter of semantics what it means. When you apply notions that are in conflict with the definition, you are corrupting the notion, not massaging it.
Now, you can start giving out an equivalent DoF for each different print size, and it will have as much usefulness as the equivalent aperture, IMHO.
Or, you can calculate the correct DOF for the final viewing conditions, which, as you might guess, does have the same utility as understanding the equivalent f-ratio.
Craig, you've just been given a masterful summation of the technical derivation of depth of field, for free. You didn't even have to go out and do any research yourself; you've just had it handed to you. I heartily recommend that you put this to good use.
And now it's been spelled out in even more detail, not that it will matter, of course.
 
Like I said in my other reply, enlarging images will multiply aspects of them that are already there, but it will not change them.
But it will. A clear example is that a photo will appear more noisy when viewed at greater enlargement.
lol :)
I missed the answer to my question. Here's the question again:

In any event, are you really trying to say that if we took a photo of the same scene from the same position with the same focal point, framing, and aperture using a 12 MP D700 and a 36 MP D800, displayed the photos at the same size, and viewed from the same distance, that the photos would have a different DOF?

At your leisure, of course.
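For what it's worth, the standard calculation itself suggests the answer: its inputs are focal length, f-number, focus distance, and a CoC set by sensor size and viewing conditions. Pixel count never appears. A minimal sketch, with hypothetical shooting numbers:

```python
# Hypothetical sketch: pixel count is not an input to the DoF calculation,
# so a 12 MP D700 and a 36 MP D800 (same FF sensor size, hence the same CoC,
# same shot and same display conditions) yield the same calculated DoF.

def hyperfocal_mm(f_mm, N, coc_mm):
    """Hyperfocal distance from the standard approximation (all in mm)."""
    return f_mm**2 / (N * coc_mm) + f_mm

# Identical settings and the identical FF CoC for both bodies:
h_d700 = hyperfocal_mm(50, 2.8, 0.030)
h_d800 = hyperfocal_mm(50, 2.8, 0.030)
assert h_d700 == h_d800  # nothing about megapixels enters the formula
```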
 
Before I get to my questions, I should note that this DOF calculator does allow one to enter a circle of confusion (which could be done to account for print size?) etc.:

http://www.dofmaster.com/dofjs.html

This also says the circle of confusion varies


That is not to say there isn't a standard (default) circle of confusion, such as for viewing an 8 x 10 a foot away, though some people can see blur even at the standard circle of confusion that resulted from this standard.

(no link but I could probably find where I read this if needed)
All this is explained, in detail, along with a link to a DOF calculator that uses the appropriate CoC for user input viewing conditions.
============

This link explains equivalence:

http://www.dpreview.com/articles/2666934640/what-is-equivalence-and-why-should-i-care/2

It starts off with the situation where one uses a FF lens on a 4/3 sensor. Since the sensor crops the picture, is it much different than if one took the same picture with a FF camera, cropped it to the same framing when editing, and then printed both at 8 x 10? Cropping before or after makes no difference that I can see?
In terms of DOF, yes. In terms of resolution and noise, there are conditions that would have to be met.
The part I don't see (and maybe someone can point me to something that explains it) is why things closer or further away are out of focus. I read somewhere it was the total light, but wouldn't that mean shutter speed would also make an impact?
It will cost you just over two minutes of your life to get a very good explanation:

Pictures which deal with just a change in aperture size, rather than cropping by the sensor, show the larger angles at which light is focused with the larger aperture.

So I was sort of assuming that was what the problem was: any point that reflects light to all parts of the aperture would not focus as well the larger the aperture is (and the further it is from the focus point). If that is true, then I don't understand the equivalence (DOF) as shown. The total light is cropped, but the aperture is not cropped, so these angles don't change?

The only other thing I can think of is the (sensor) area over which this image is focused. And somehow this works out in the end the same as a smaller (cropped?) aperture focused on the larger FF sensor? Even if the aperture is cropped, it is a little leap of faith to assume this would work out the same as putting the equivalent telephoto on the FF at the cropped aperture.
Not exactly sure what you are saying here. Consider 25mm f/1.4 on mFT and 50mm f/1.4 on FF. The aperture diameter of 25mm f/1.4 on mFT (25mm / 1.4 = 18mm) is half the aperture diameter of 50mm f/1.4 on FF (50mm / 1.4 = 36mm), so it is exactly as if the aperture were blocked (cropped).

That's why 25mm f/1.4 on mFT has the same DOF and puts the same amount of light on the sensor as 50mm f/2.8 on FF -- the apertures have the same diameter (25mm / 1.4 = 50mm / 2.8 = 18mm).
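The arithmetic in that comparison can be written out directly. This is just a sketch of the aperture-diameter calculation, using the lens choices named above:

```python
# Sketch of the arithmetic above: equivalence via physical aperture
# diameter, which is focal length divided by f-number.

def aperture_diameter_mm(focal_length_mm, f_number):
    return focal_length_mm / f_number

mft = aperture_diameter_mm(25, 1.4)           # ~17.9 mm on mFT
ff_same_fnum = aperture_diameter_mm(50, 1.4)  # ~35.7 mm on FF
ff_equiv = aperture_diameter_mm(50, 2.8)      # ~17.9 mm on FF

# Same entrance-pupil diameter means the same DoF and the same total light
# for the same framing: 25/1.4 on mFT matches 50/2.8 on FF.
assert abs(mft - ff_equiv) < 1e-9
assert abs(ff_same_fnum - 2 * mft) < 1e-9
```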
In fact, another link said these DOF calculations are a bit approximate with different lens constructions etc.
Indeed, the DOF calculations are approximations, just as the f-ratio is an approximation to the numerical aperture, and you can see different formulas for different situations here:


However, the DOF formula for moderate to large distances works for most situations.
 
Most of what is being written here is correct. DoF is not real or measurable without knowing the observer's viewing distance, visual acuity, and print size, in addition to the focal length, aperture, and focus distance. If a series of points were printed with growing separations from left to right, the depth of field would be the equivalent of where the eye sees grey tone rather than black dots. Obviously, the larger the image is printed, the more the dots will spread apart and the further left the line between grey and dots will move. Get closer to the print, and the same thing happens.

In any case, here is a wrench in the works.

If a blur circle on the sensor is as small as a photosensitive site (or maybe it is 4; I am still not clear on how a single pixel with tone and color is formed), then no amount of enlarging will change DoF. The entire image might blur from insufficient resolution, but a section of the image (or the entire image) will appear sharp regardless of the viewing conditions. In other words, the blur circles exist, but there is insufficient resolution to record them.
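That caveat can be sketched as a comparison of blur-circle size to pixel pitch. The 6000-pixel long edge is an assumed, illustrative figure, not a specific camera's:

```python
# Hypothetical sketch of the point above: if the lens's blur circle is no
# bigger than one photosite, the sensor cannot record the blur at all.

def blur_is_recorded(blur_circle_mm, pixel_pitch_mm):
    """The sensor only resolves blur larger than (roughly) one photosite."""
    return blur_circle_mm > pixel_pitch_mm

pitch = 36.0 / 6000  # 0.006 mm per pixel on an assumed FF sensor
print(blur_is_recorded(0.030, pitch))  # True: a CoC-sized blur spans ~5 pixels
print(blur_is_recorded(0.004, pitch))  # False: blur smaller than a photosite
```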
 
