DOF and Cropping/Enlargement continued...

In any case, here is a wrench in the works.

If a blur circle on the sensor is as small as a single photosensitive site (or maybe four sites; I am still not clear on how a single pixel with tone and color is formed), then no amount of enlarging will change DoF. The entire image might blur from insufficient resolution, but a section of the image (or the entire image) will appear sharp regardless of the viewing conditions. In other words, the blur circles exist, but there is insufficient resolution to record them.
This doesn't change anything. The situation you are describing is one in which the resolution of the sensor is too low to record the necessary detail. So instead of a slightly blurred area being rendered clearly by measurements from several pixels, all that fine detail is lost.

You simply have a poor quality, low-resolution image whose sharpness and detail are limited by the recording medium - like an image from a 2MP camera made into a big print, or an image taken with ISO 1600 film that has grain the size of golf balls. None of that changes the definition of depth of field.

Of course you could make a small enough print - or stand far enough back - that you can't see the individual pixels / grains: at this point the lack of resolution wouldn't be a problem and the in-focus parts of the image would appear sharp to you; the smaller magnification of the image has increased the depth of field - as it always does.
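To put rough numbers on that "insufficient resolution" scenario, here is a minimal Python sketch. The pixel counts, the 30-micron blur circle, and the two-pixels-to-register threshold are all illustrative assumptions, not measurements:

```python
def pixel_pitch_mm(sensor_width_mm, horizontal_pixels):
    """Approximate center-to-center pixel spacing."""
    return sensor_width_mm / horizontal_pixels

# A 36 mm wide full-frame sensor at two illustrative resolutions.
for label, h_pixels in [("2.1 MP", 1775), ("16 MP", 4899)]:
    pitch = pixel_pitch_mm(36.0, h_pixels)
    blur_mm = 0.030  # a blur circle the size of the standard FF CoC
    # Crude test: a disk must span roughly two pixels to be recorded
    # as a disk rather than collapsing into a single sample.
    recorded = blur_mm > 2 * pitch
    print(f"{label}: pitch {pitch * 1000:.1f} um, "
          f"30 um blur recorded as a disk: {recorded}")
```

On the coarse sensor the blur circle vanishes into a single sample; on the dense one it is recorded. Either way, as argued above, that is a resolution limit, not a change in the definition of depth of field.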

Best wishes
 
So you think that a print from a 50mm lens set at f/4 on an 8x10 view camera, focused at the same place as the same lens on a FF camera, will produce equal DoF?

Uh....OK....

We have nothing left to talk about.
 
Got a question I haven't seen in all these days of this discussion.
This question has actually been discussed a lot.
With the theory of the CoC, and how different sensor sizes change things, does this not also have to do with the size of the photosites on the sensor? If so, and we need to factor in sensor size, wouldn't we also have to factor in the size of the photosites that are recording the image projected by the lens? So when we add more pixels to the same size sensor, would this then, under these theories, change DoF also? I have an APS-C camera with 2.1 MP resolution and one with 16 MP resolution, so it looks like if we have to tie sensor size into the DoF factor, this should have an effect also. It certainly takes more defocus to spread the effect over 4 pixels in the 2.1 MP sensor than in the 16 MP one. Yet in all these charts and explanations no one mentions this. I'm sure the proponents of this DoF change will have lots of highly scientific explanations of why this doesn't matter, but it might be interesting to hear them. I'm having a ball reading all this - better than the cartoons in the paper.
The definition of DOF only takes unsharpness caused by misfocus into consideration. Other sources of unsharpness are not considered.

Otherwise you would also need different DOF calculations for a Zeiss lens and a cheap kit lens at the same focal length and aperture since the kit lens will probably add more blur in addition to the OOF blur.
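For anyone curious what the standard calculation actually takes as inputs, here is a minimal sketch of the usual thin-lens DoF formulas (the focus distance and CoC values are illustrative). Note that nothing about lens quality or pixels appears:

```python
def dof_limits(f_mm, N, s_mm, coc_mm):
    """Near/far limits of acceptable sharpness (thin-lens approximation)."""
    H = f_mm ** 2 / (N * coc_mm) + f_mm  # hyperfocal distance
    near = s_mm * (H - f_mm) / (H + s_mm - 2 * f_mm)
    far = s_mm * (H - f_mm) / (H - s_mm) if s_mm < H else float("inf")
    return near, far

# 50mm lens at f/4 focused at 10 ft (3048 mm), full-frame CoC of 0.03 mm:
near, far = dof_limits(50, 4, 3048, 0.03)
print(f"DoF: {near / 1000:.2f} m to {far / 1000:.2f} m")
```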
 
Got a question I haven't seen in all these days of this discussion. With the theory of the CoC, and how different sensor sizes change things, does this not also have to do with the size of the photosites on the sensor? If so, and we need to factor in sensor size, wouldn't we also have to factor in the size of the photosites that are recording the image projected by the lens? So when we add more pixels to the same size sensor, would this then, under these theories, change DoF also? I have an APS-C camera with 2.1 MP resolution and one with 16 MP resolution, so it looks like if we have to tie sensor size into the DoF factor, this should have an effect also. It certainly takes more defocus to spread the effect over 4 pixels in the 2.1 MP sensor than in the 16 MP one. Yet in all these charts and explanations no one mentions this. I'm sure the proponents of this DoF change will have lots of highly scientific explanations of why this doesn't matter, but it might be interesting to hear them. I'm having a ball reading all this - better than the cartoons in the paper.
This is called pixel density, and I mentioned it at the start of the thread.

"I believe part of the problem here is a confusion between sensor size, and pixel density."

Some people want to call this a different DOF, but the image characteristics from the lens when it is cast onto the sensor are the same. If shot with a wide angle and a very small aperture, the 16 MP sensor will retain sharpness better because that sharpness was there to begin with (from the lens casting the image on the sensor). The 2.1 MP one will be softer. Some are calling this DOF alteration. In college 12 years ago we called it diffraction.
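For what it's worth, the diffraction side of that is easy to sketch. The wavelength and the comparison pixel pitch below are assumed values; the 2.44 * lambda * N expression is the standard Airy-disk diameter:

```python
# Airy-disk diameter vs. f-number, compared against an assumed pixel
# pitch of ~4.8 um (roughly a 16 MP APS-C sensor).
WAVELENGTH_NM = 550  # green light, a common assumption

def airy_disk_um(f_number, wavelength_nm=WAVELENGTH_NM):
    """Approximate Airy-disk diameter: 2.44 * lambda * N."""
    return 2.44 * wavelength_nm * f_number / 1000.0

for N in (4, 8, 16, 22):
    print(f"f/{N}: Airy disk ~{airy_disk_um(N):.1f} um vs ~4.8 um pixels")
```

The diameter depends only on the f-number, so the denser sensor simply runs into the limit sooner.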

People can call it hippos jumping through hoops if they want. I don't care.

I don't think anyone reads my posts :/
 
Like I said in my other reply, enlarging images will magnify aspects of them that are already there, but it will not change them.
But it will. A clear example is that a photo will appear more noisy when viewed at greater enlargement.
lol :)
I missed the answer to my question. Here's the question again:

In any event, are you really trying to say that if we took a photo of the same scene from the same position with the same focal point, framing, and aperture using a 12 MP D700 and a 36 MP D800, displayed the photos at the same size, and viewed from the same distance, that the photos would have a different DOF?

At your leisure, of course.
I thought that was a joke. Really, that's why I replied as I did and went to bed.

Here is a quote from a previous post I made earlier in this same thread, which I think answers it just fine. Also, I don't think you read the original post - the one that started this thread. I have even put it in bold this time.

• My point, which should be plainly obvious even to the most simple, is that a camera with a 50mm lens, focused on a stump 10 feet in front of it, with an aperture of f/4 will project an image with the same DOF characteristics. All other things being equal (pixel density), the sensor's size has nothing to do with this. The size of the sensor will merely determine how much of the image is recorded.

"http://www.dpreview.com/forums/post/54750631"

Here is my original post quote

"I believe part of the problem here is a confusion between sensor size, and pixel density."

It really is funny, because I was then told by Lee Jay that pixel density has nothing to do with it… now you're arguing that it does and that I just don't get it….. lol
Pixel density does have nothing to do with depth of field. That is why GB asked you to answer the question. The only difference between the two cameras cited is their pixel density, and yet any calculator will tell you the depth of field they produce is the same.
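To make that concrete, run the same thin-lens calculation for both bodies. This is only a sketch (the dof_near_far helper repeats the standard formula with illustrative numbers), but notice that the megapixel count never enters:

```python
def dof_near_far(f_mm, N, s_mm, coc_mm):
    """Standard thin-lens near/far DoF limits."""
    H = f_mm ** 2 / (N * coc_mm) + f_mm
    return (s_mm * (H - f_mm) / (H + s_mm - 2 * f_mm),
            s_mm * (H - f_mm) / (H - s_mm))

for body, megapixels in [("D700", 12), ("D800", 36)]:
    # megapixels is never used: the calculation has no pixel-density input
    print(body, dof_near_far(50, 4, 3048, 0.03))
```

Both lines print identical limits, because both cameras share the same sensor size and hence the same CoC.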

Honestly, every argument you tried to make is immediately debunked, and yet you still come back reasserting the same misconceptions. What we evidently have here is a true resistance to learning.

Dave
 
Craig76 wrote:

• My point, which should be plainly obvious even to the most simple, is that a camera with a 50mm lens, focused on a stump 10 feet in front of it, with an aperture of f/4 will project an image with the same DOF characteristics. All other things being equal (pixel density), the sensor's size has nothing to do with this. The size of the sensor will merely determine how much of the image is recorded.

Pretty simple, isn't it? - Don't know why folks can't see it. Just keep repeating this; that's all that is required. It's the lens, the aperture and the distance that control DOF. Always has been and always will be.
 
It's time for you to start reading my answers/original post and thinking about what I said (see the reply I made about a minute ago to GreatBustard's question above).
Your misconceptions get addressed immediately after you post them, backed up with references, demonstrations and logic. You just keep posting the same tired assertions, devoid of any evidence. You hide when asked to answer questions that are incompatible with what you've locked yourself into.

We actually have been quite patient with you, but I think that's running out.

Dave
 
• My point, which should be plainly obvious even to the most simple, is that a camera with a 50mm lens, focused on a stump 10 feet in front of it, with an aperture of f/4 will project an image with the same DOF characteristics. All other things being equal (pixel density), the sensor's size has nothing to do with this. The size of the sensor will merely determine how much of the image is recorded.
Pretty simple, isn't it? - Don't know why folks can't see it. Just keep repeating this; that's all that is required. It's the lens, the aperture and the distance that control DOF. Always has been and always will be.
The size of the sensor has to do with the allowable circle of confusion when images are enlarged to a given viewing size and distance, and this is also a part of depth of field.

"It's important to note that DOF isn't a lens characteristic like focal length or aperture. It takes into account some subjective factors like print size and viewing distance. That's the reason different values for the CoC are used for different formats. Larger formats need to be enlarged less than smaller formats, and so a larger CoC can be used. For example to get an 8x10 print from an 8x10 negative, no enlargement is required, wheras to get the same print from a 35mm negative, an 8x enlargement is needed. So to get the same sharpness in a print, the 35mm negative must be 8x as sharp, or in terms of DOF and CoC, the CoC value used for DOF calculation must be 8x smaller.

From this I think you can see that if you're concerned about an 8x10 print which will be viewed from a distance of 3 ft, rather than 1 ft, you could use a different CoC (one 3x as large, in fact), whereas if you're concerned about a 24x30 print viewed from a distance of 1 ft, the CoC value you need to use is 3x smaller than the "standard" value." http://bobatkins.com/photography/technical/dofcalc.html
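That scaling is simple enough to sketch in a few lines. The 0.2 mm baseline blur tolerance in the print is an illustrative figure, roughly in line with the values quoted above:

```python
PRINT_COC_AT_1FT_MM = 0.2  # acceptable blur in the print at 1 ft viewing

def sensor_coc_mm(enlargement, viewing_distance_ft=1.0):
    """CoC referred back to the sensor: shrinks with enlargement,
    grows with viewing distance."""
    return PRINT_COC_AT_1FT_MM * viewing_distance_ft / enlargement

print(sensor_coc_mm(enlargement=1))                         # 8x10 contact print: 0.2 mm
print(sensor_coc_mm(enlargement=8))                         # 35mm to 8x10: 0.025 mm
print(sensor_coc_mm(enlargement=8, viewing_distance_ft=3))  # viewed at 3 ft: 0.075 mm
```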
 
Craig76 wrote:

• My point, which should be plainly obvious even to the most simple, is that a camera with a 50mm lens, focused on a stump 10 feet in front of it, with an aperture of f/4 will project an image with the same DOF characteristics. All other things being equal (pixel density), the sensor's size has nothing to do with this. The size of the sensor will merely determine how much of the image is recorded.

Pretty simple, isn't it? - Don't know why folks can't see it. Just keep repeating this; that's all that is required. It's the lens, the aperture and the distance that control DOF. Always has been and always will be.
It seems like every time these debates come up there is always a faction that wants to measure DOF at the film plane or sensor. This is appropriate in ONE situation: when each image is displayed at the same size as its originating sensor and both are viewed from the same distance. If that unusual and uncommon scenario is what you are stipulating, then the DOF of both images is the same.

There are other ways that we can modify Craig76's "point" to make it correct, with respect to the same shot with differently-sized sensors.

Here is one:

A camera with a 50mm lens, focused on a stump 10 feet in front of it, with an aperture of f/4 will project an image with some of the same DOF variables (rather than "the same DOF characteristics"). Since we are talking here about the image being projected onto the sensor, some of the DOF variables are still to be defined; we do not yet know if they are the same or not.

Here is another:

A camera with a 50mm lens, focused on a stump 10 feet in front of it, with an aperture of f/4 will produce (rather than "project") an image with the same DOF characteristics, IF the final images are viewed at the same magnification (not the same size).

If you want to insist on saying that sensor size does not impact DOF, then you have to stipulate what other DOF variables you are fixing to make that hold true. As explained above, you must either say that you are viewing the images at the same size as the sensor or at the same magnification from sensor to final image. That is NOT the common scenario, however.

The COMMON scenario is where the final image size is fixed. If the final image size is fixed, then magnification from sensor to final image will change and resulting DOF will change DEPENDING ON SENSOR SIZE.
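Here is that common scenario as a sketch (the CoC values are the commonly quoted format conventions, and the formula is the usual thin-lens approximation): same 50mm lens, same f/4, same 10 ft focus distance, but because the final image size is fixed, the smaller format carries a smaller CoC and ends up with less depth of field:

```python
def dof_span_m(f_mm, N, s_mm, coc_mm):
    """Total depth of field (far limit minus near limit), in meters."""
    H = f_mm ** 2 / (N * coc_mm) + f_mm
    near = s_mm * (H - f_mm) / (H + s_mm - 2 * f_mm)
    far = s_mm * (H - f_mm) / (H - s_mm)
    return (far - near) / 1000.0

for fmt, coc in [("full frame", 0.030), ("APS-C", 0.020)]:
    print(f"{fmt}: total DoF ~{dof_span_m(50, 4, 3048, coc):.2f} m")
```

Full frame comes out around 0.90 m and APS-C around 0.59 m, so the sensor size really does change the result once the viewing conditions are equalized.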
 
Like I said in my other reply, enlarging images will magnify aspects of them that are already there, but it will not change them.
But your perception of them and their sharpness will change depending on the size of the resulting image and your viewing distance. And that's what's determining the DoF. That's the point you're missing.
I GET WHAT YOU ARE SAYING. I got it this morning when I first opened the original thread on this topic. You don't seem to get my point though. The characteristics that make the image what it is (which is what determines DOF) are captured at the moment of shutter release.
The interesting thing about you is that you are arguing vehemently about DoF and it is very clear that you have no real understanding of its definition. It is NOT the characteristics of the capture that determine DoF; it is how those characteristics are perceived in the final image.
I believe the standard is an 8x10 print viewed at 12" with 20/20 vision - not your personal final image. If your final image is smaller or larger than 8x10, you may need less or more DoF than the calculated DoF to get acceptable sharpness (per the definition of DoF).
And the perceptions of those characteristics can change depending on enlargement, viewing distance, and visual acuity.
This is true. So, to standardize, I believe the DoF is defined at a specific print size, viewing distance, and visual acuity.
DoF is determined backwards from the viewed image, not forwards from the sensor. You would do yourself (and all of us) a favor if you would actually go back and learn what DoF actually is, not just what you want it to be.

Assuming a person with average visual acuity viewing separate lines on an 8 x 10 print from a distance of 1 foot, it is determined by experiment that separations of roughly 1/100 inch cannot be distinguished. Such a "blur" is therefore considered sharp. The 8x10 image would correspond to a blow-up of a FF sensor of about 7.5 fold. Thus the 1/100 inch (or 0.254 mm) distance on the print would correspond to roughly a .254/7.5 = .03 mm separation on a FF sensor.

This is what defines the CoC (circle of confusion) on the sensor. It is derived backwards from visual perceptions, not forwards from sensor technology. The sensor and its pixels have nothing to do with it.

Items in the camera's field of view that impinge on the sensor within that CoC, when blown up to an 8 x 10 image and viewed at a distance of 1 foot will appear effectively as one, and hence blurs of this magnitude will not be noticed (will appear sharp). Differences from the focal plane creating blurs of this magnitude or smaller determine the DoF.
All good stuff; we agree that now we have "the DoF".
However, if you blow up the image further, or you view it more closely, those items that appear together under the standard conditions (blowup and viewing distance) will now, under closer scrutiny, appear separate (blurry). And hence the DoF changes.
No, because DoF is defined as being sharp at 8x10, 12", 20/20. If you blow the image up further, you have enlarged your print but not changed the DoF. Go to any DoF calculator and there is no input for the final print size. Now, if you wanted a sharper print at a size greater than 8x10, you would indeed need to account for needing more DoF than the calculated value. The landscape photographers knew this and thus went for f/64 even though the calculation would have told you that f/16 was sharp enough. Yep, sharp enough for an 8x10 print.
This is the moment that determines everything else.

Every other argument is just semantics… I am really trying to be patient here.
It's not just semantics. When a concept has a specific definition, it is no longer just a matter of semantics as to what it means. When you apply notions that are in conflict with the definition, you are corrupting the notion, not massaging it.
Now, you can start giving out the equivalent DoF for the different print sizes, and it will have as much usefulness as the equivalent aperture, IMHO.
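For what it's worth, the backwards derivation quoted earlier in this exchange works out in a few lines (the numbers are taken straight from that post):

```python
print_blur_in = 1 / 100                 # smallest distinguishable blur on the print
print_blur_mm = print_blur_in * 25.4    # = 0.254 mm
enlargement = 7.5                       # 8x10 print from a full-frame sensor
sensor_coc_mm = print_blur_mm / enlargement
print(f"CoC on the sensor: {sensor_coc_mm:.3f} mm")  # ~0.034, i.e. the familiar ~0.03 mm
```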
Sorry for not replying to your comments earlier, but it was my fortune to go to bed before they were posted. I'm not sure I would have replied anyway, but now, thanks to GB's reply just above, I have no need. I couldn't have said what he did any better.

--
gollywop



 
But the CoC in an image depends on viewing conditions: it isn't fixed at the image capture stage. The size of the CoC is the diameter of the smallest dot that you can perceive to be slightly blurred rather than just a dimensionless point - and that depends how far away you look at it from.
Mike,

I think you are limiting your definition of CoC to the printed size of a blur circle.
Yes, basically. I'm limiting it to the maximum size of the blur circle (whether printed or viewed in any other way) that can be perceived to be a disk rather than a point. That is the definition of a CoC - it is all about perception and the limits of human vision, and nothing else.

It doesn't help that some of the definitions of CoC out there on the internet are misleading or plain wrong. This is a good one (from Cambridge in Color):

"...a more rigorous term called the circle of confusion is used to define how much a point needs to be blurred in order to be perceived as unsharp. When the circle of confusion becomes perceptible to our eyes, this region is said to be outside the depth of field and thus no longer acceptably sharp."

- and here is one from 'The Luminous Landscape':

"You can't understand Depth of Field until you understand COF (Circle of Confusion). The human eye has a finite ability to see fine detail. This is generally accepted as being 1' (minute) of arc. Translating this to the practical world, this means that at a normal reading distance the smallest object that a person with perfect eyesight, under ideal conditions can see is 1/16mm in size. If you place two dots smaller than this next to each other they will appear to be just one dot.

The photographic industry has generally found though that this is too fine a parameter, and long ago settled on 1/6th of a millimeter as the smallest point that can be clearly discerned by the average person under normal conditions. Expressed as a decimal, 1/6th of a millimeter equals 0.1667mm."

[note, this value of 1/6 mm translates to around 1/150 inch, which agrees pretty well with the value I quoted earlier from a different source].
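As a quick check on those figures (the 250 mm normal reading distance is my assumption):

```python
import math

# 1 arcminute of visual acuity projected onto a page at an assumed
# normal reading distance of 250 mm:
reading_distance_mm = 250.0
spot_mm = math.tan(math.radians(1 / 60)) * reading_distance_mm
print(f"1' of arc at {reading_distance_mm:.0f} mm: {spot_mm:.3f} mm")  # ~0.073 mm

# The industry's more forgiving 1/6 mm figure, expressed in inches:
print(f"1/6 mm = {(1 / 6) / 25.4:.4f} in (about 1/{25.4 * 6:.0f} in)")
```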
But CoC happens at two times. First, the size of the blur circle in the image projected onto the sensor. This is usually measured in micrometers (millionths of a meter).
This is not part of the definition of a CoC - what you describe is a quite separate phenomenon and is simply out-of-focus blur. The size of this blur circle will vary depending on the distance from the focus plane, the aperture and so on. For an f/1.2 lens this blur circle may be huge, and at small apertures with a good lens it may be very small - but this is not a circle of confusion.

Let's assume for simplicity that you have a sharp lens so we can forget about the effects of lens softness etc. So an infinitely small, bright 'point source' of light will be rendered - if it is in focus - as a point on the sensor. Let's ignore pixel size too (which is a red herring) and assume that you have a perfect, very high-resolution sensor or very fine-grain large-format film.

If that point source is however slightly out of focus (OOF) it will of course be rendered as a disk whose size depends on how far away from the focal plane it is. So the question - the only question - is: when the final image is made, is that circle larger or smaller than the CoC (i.e. the size at which it looks blurred to the viewer)? That determines whether it will be perceived as a point (in focus) or a disk (out of focus).
An image from a FF sensor needs to be enlarged about 60 times when printed 8x10. What that means is that a blur circle at the capture stage (on the sensor) of 10 microns will print at 600 microns, quite a bit larger than the 254 microns (0.01 in) you set as the minimum perceptible size of a blur circle in the print.

This is why a blur circle of 250 microns would be totally acceptable (and diffraction is a non-issue) when shooting large format 8x10 film if the output is to be an 8x10 print.
Let's go with these numbers and say that the slightly OOF image of the point source ends up being a 10-micron disk on the sensor. If it is enlarged to a 10 x 8 print, the enlargement is 60x in area (not diameter): the diameter would increase by around a factor of 8 (the short edge of a FF sensor is around 1 inch and you are expanding it to 8 inches in the print). So the diameter would be around 80 microns - less than the CoC - and that part of the image would still appear sharp. Of course, in any smaller print that point would also appear sharp.

To get that point larger than the CoC you would have to expand its length (diameter) to > 250 microns: a 25-fold increase in length which means a 625-fold increase in area. That corresponds to an image that is around 36 x 24 inches, or 3 feet by 2 feet. Now that point would start to appear blurred... but only if you are still looking from one foot away. If you stand back and look at it from 3 feet, which would be a sensible viewing distance, the CoC has now expanded to 750 microns and the point still appears sharp. The more you stand back, the bigger the disk can be and still appear sharp, because of the resolution limits of your eye.

So the perception of what is in focus and what is out of focus, and hence depth of field, is inextricably bound up with viewing conditions.
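The arithmetic in this post checks out in a few lines (all values as given above):

```python
blur_on_sensor_um = 10.0   # slightly OOF point source on the sensor
coc_at_1ft_um = 250.0      # smallest disk perceptible at 1 ft viewing

# 10x8 print: roughly 8x linear enlargement from a full-frame sensor.
print(blur_on_sensor_um * 8)               # 80 um, well under 250 um: looks sharp

# Linear enlargement needed for the blur to reach the CoC:
needed = coc_at_1ft_um / blur_on_sensor_um
print(needed, needed ** 2)                 # 25x linear, i.e. 625x in area

# Viewing that 36x24-inch print from 3 ft scales the CoC threshold by 3,
# so the same disk drops back below the threshold and looks sharp again:
print(coc_at_1ft_um * 3)                   # 750 um
```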

Best wishes
 
Craig76 wrote:

• My point, which should be plainly obvious even to the most simple, is that a camera with a 50mm lens, focused on a stump 10 feet in front of it, with an aperture of f/4 will project an image with the same DOF characteristics. All other things being equal (pixel density), the sensor's size has nothing to do with this. The size of the sensor will merely determine how much of the image is recorded.

Pretty simple, isn't it? - Don't know why folks can't see it. Just keep repeating this; that's all that is required. It's the lens, the aperture and the distance that control DOF. Always has been and always will be.
It seems like every time these debates come up there is always a faction that wants to measure DOF at the film plane or sensor. This is appropriate in ONE situation: when each image is displayed at the same size as its originating sensor and both are viewed from the same distance. If that unusual and uncommon scenario is what you are stipulating, then the DOF of both images is the same.

There are other ways that we can modify Craig76's "point" to make it correct, with respect to the same shot with differently-sized sensors.

Here is one:

A camera with a 50mm lens, focused on a stump 10 feet in front of it, with an aperture of f/4 will project an image with some of the same DOF variables (rather than "the same DOF characteristics"). Since we are talking here about the image being projected onto the sensor, some of the DOF variables are still to be defined; we do not yet know if they are the same or not.

Here is another:

A camera with a 50mm lens, focused on a stump 10 feet in front of it, with an aperture of f/4 will produce (rather than "project") an image with the same DOF characteristics, IF the final images are viewed at the same magnification (not the same size).

If you want to insist on saying that sensor size does not impact DOF, then you have to stipulate what other DOF variables you are fixing to make that hold true. As explained above, you must either say that you are viewing the images at the same size as the sensor or at the same magnification from sensor to final image. That is NOT the common scenario, however.

The COMMON scenario is where the final image size is fixed. If the final image size is fixed, then magnification from sensor to final image will change and resulting DOF will change DEPENDING ON SENSOR SIZE.
Yep, everything is summarized in the last paragraph. Besides calling it the COMMON scenario, I would also call it the FAIR scenario.
 
It's time for you to start reading my answers/original post and thinking about what I said (see the reply I made about a minute ago to GreatBustard's question above).
Your misconceptions get addressed immediately after you post them, backed up with references, demonstrations and logic. You just keep posting the same tired assertions, devoid of any evidence. You hide when asked to answer questions that are incompatible with what you've locked yourself into.

We actually have been quite patient with you, but I think that's running out.
It's OK to blow your stack - most of us are adults if not downright old ;-0

BTW, was the DoF scale on the manual focus ring on SLR lenses wrong?

 
