This doesn't change anything. The situation you are describing is one in which the resolution of the sensor is too low to record the necessary detail. So instead of a slightly blurred area being rendered clearly by measurements from several pixels, all that fine detail is lost.

In any case, here is a wrench in the works. If a blur circle on the sensor is as small as a photosensitive site (or maybe it is four; I am still not clear on how a single pixel with tone and color is formed), then no amount of enlarging will change DoF. The entire image might blur from insufficient resolution, but a section of the image (or the entire image) will appear sharp regardless of the viewing conditions. In other words, the blur circles exist, but there is insufficient resolution to record them.
You simply have a poor quality, low-resolution image whose sharpness and detail are limited by the recording medium - like an image from a 2MP camera made into a big print, or an image taken with ISO 1600 film that has grain the size of golf balls. None of that changes the definition of depth of field.
Of course you could make a small enough print - or stand far enough back - that you can't see the individual pixels / grains: at this point the lack of resolution wouldn't be a problem and the in-focus parts of the image would appear sharp to you; the smaller magnification of the image has increased the depth of field - as it always does.
Best wishes
So you think that a print from a 50mm lens set at f4 on an 8x10 view camera focused at the same place as the same lens on a FF camera will produce equal DoF?
I guess if you define words differently than everyone else in the world, you can't possibly be wrong.
This question has actually been discussed a lot. The definition of DOF only takes unsharpness caused by misfocus into consideration. Other sources of unsharpness are not considered.

Got a question I haven't seen in all these days of this discussion. With the theory of the CoC, and how different sensor sizes change things, does this not also have to do with the size of the photosites on the sensor? If so, and we need to factor in sensor size, wouldn't we also have to factor in the size of the photosites that are recording the image projected by the lens? So when we add more pixels to the same size sensor, would this then, under these theories, change DoF also? I have an APS-C camera with 2.1 MP resolution and one with 16 MP resolution, so it looks like if we have to tie sensor size into the DoF factor, this should have an effect also. It certainly takes more out-of-focus spread to cover 4 pixels in the 2.1 MP sensor than in the 16 MP one. Yet in all these charts and explanations no one mentions this. I'm sure the proponents of this DoF change will have lots of highly scientific explanations of why this doesn't matter, but it might be interesting to hear them. I'm having a ball reading all this - better than the cartoons in the paper.
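One way to see why photosite size drops out of the standard DoF calculation is to compare pixel pitch to the CoC directly. A minimal sketch, assuming typical APS-C dimensions of 23.6 x 15.6 mm and the conventional APS-C CoC of about 0.019 mm (both assumed values, not taken from the thread):

```python
import math

# Assumed typical APS-C sensor dimensions and CoC (illustrative numbers).
SENSOR_W_MM, SENSOR_H_MM = 23.6, 15.6
COC_APSC_MM = 0.019  # derived from viewing conditions, not from pixels

def pixel_pitch_mm(megapixels):
    """Approximate pitch of one photosite, assuming square pixels."""
    area_mm2 = SENSOR_W_MM * SENSOR_H_MM
    return math.sqrt(area_mm2 / (megapixels * 1e6))

for mp in (2.1, 16):
    pitch = pixel_pitch_mm(mp)
    print(f"{mp:>4} MP: pitch = {pitch * 1000:.1f} um, "
          f"CoC / pitch = {COC_APSC_MM / pitch:.1f}")
```

Even the coarse 2.1 MP sensor has a pitch of roughly 13 um, comfortably smaller than the ~19 um CoC, which is one way of seeing why pixel density is not an input to DoF calculators.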
This is called pixel density, and I mentioned it at the start of the thread.
Pixel density has nothing to do with depth of field. That is why GB asked you to answer the question. The only difference between the two cameras cited is their pixel density, and yet any calculator will tell you the depth of field they produce is the same.

I thought that was a joke. Really, that's why I replied as I did and went to bed. I missed the answer to my question. Here's the question again: lol

But it will. A clear example is that a photo will appear more noisy when viewed at greater enlargement.

Like I said in my other reply, enlarging images will multiply aspects of them that are already there, but it will not change them.
In any event, are you really trying to say that if we took a photo of the same scene from the same position with the same focal point, framing, and aperture using a 12 MP D700 and a 36 MP D800, displayed the photos at the same size, and viewed from the same distance, that the photos would have a different DOF?
At your leisure, of course.
Here is a quote from a post I made earlier in this same thread, which I think answers it just fine. Also, I don't think you read the original post - the one that started this thread. I have even put it in bold this time.
• My point, which should be vainly obvious even to the most simple, is that a camera with a 50mm lens, focused on a stump 10 feet in front of it, with an aperture of f/4, will project an image with the same DOF characteristics. All other things being equal (pixel density), the sensor's size has nothing to do with this. The size of the sensor will merely determine how much of the image is recorded.
"http://www.dpreview.com/forums/post/54750631"
Here is a quote from my original post:
"I believe part of the problem here is a confusion between sensor size, and pixel density."
It really is funny, because I was then told by Lee Jay that pixel density has nothing to do with it… now you're arguing that it does and that I just don't get it… lol
Your misconceptions get addressed immediately after you post them, backed up with references, demonstrations, and logic. You just keep posting the same tired assertions, devoid of any evidence. You hide when asked to answer questions that are incompatible with what you've locked yourself into.

It's time for you to start reading my answers/original post and thinking about what I said. (See the reply I just made above, about 1 minute ago, to GreatBustard's question.)
The size of the sensor has to do with the allowable circle of confusion when images are enlarged to a given viewing size and distance, and this is also a part of depth of field.

Pretty simple isn't it? - Don't know why folks can't see it. Just keep repeating this; that's all that is required. It's the lens, the aperture and the distance that control DoF. Always has been and always will be.

• My point, which should be vainly obvious even to the most simple, is that a camera with a 50mm lens, focused on a stump 10 feet in front of it, with an aperture of f/4, will project an image with the same DOF characteristics. All other things being equal (pixel density), the sensor's size has nothing to do with this. The size of the sensor will merely determine how much of the image is recorded.
It seems like every time these debates come up, there is always a faction that wants to measure DOF at the film plane or sensor. This is appropriate in ONE situation: when you are viewing two images where the size of the image from each sensor is the same size as the originating sensor, and you are viewing each image from the same viewing distance. If that is the very unique and uncommon scenario that you are stipulating, then the DOF of both images is the same.

Craig76 wrote:
• My point, which should be vainly obvious even to the most simple, is that a camera with a 50mm lens, focused on a stump 10 feet in front of it, with an aperture of f/4, will project an image with the same DOF characteristics. All other things being equal (pixel density), the sensor's size has nothing to do with this. The size of the sensor will merely determine how much of the image is recorded.

Pretty simple isn't it? - Don't know why folks can't see it. Just keep repeating this; that's all that is required. It's the lens, the aperture and the distance that control DoF. Always has been and always will be.
Sorry for not replying to your comments earlier, but it was my fortune to go to bed before they were posted. I'm not sure I would have replied anyway, but now, thanks to GB's reply just above, I have no need. I couldn't have said what he did any better.

I believe it's printed to 8x10, viewed at 12", with 20/20 vision - not your personal final image. If your final image is smaller or larger than 8x10, you may need less or more DoF than the calculated DoF to get acceptable sharpness (per the definition of DoF).

But your perception of them and their sharpness will change depending on the size of the resulting image and your viewing distance. And that's what's determining the DoF. That's the point you're missing.

Like I said in my other reply, enlarging images will multiply aspects of them that are already there, but it will not change them.
The interesting thing about you is that you are arguing vehemently about DoF, and it is very clear that you have no real understanding of its definition. It is NOT the characteristics of the capture that determine DoF; it is how those characteristics are perceived in the final image.

I GET WHAT YOU ARE SAYING. I got it this morning when I first opened the original thread on this topic. You don't seem to get my point though. The characteristics that make the image what it is (which is what determines DOF) are captured at the moment of shutter release.
This is true. So, to standardize, I believe the DoF is defined at a specific print size, viewing distance, and visual acuity.

And the perceptions of those characteristics can change depending on enlargement, viewing distance, and visual acuity.
All good stuff; we agree that now we have "the DoF".

DoF is determined backwards from the viewed image, not forwards from the sensor. You would do yourself (and all of us) a favor if you would actually go back and learn what DoF actually is, not just what you want it to be.
Assuming a person with average visual acuity viewing separate lines on an 8x10 print from a distance of 1 foot, it is determined by experiment that separations of roughly 1/100 inch cannot be distinguished. Such a "blur" is therefore considered sharp. The 8x10 image would correspond to a blow-up of a FF sensor of about 7.5-fold. Thus the 1/100 inch (or 0.254 mm) distance on the print would correspond to roughly a 0.254/7.5 ≈ 0.034 mm separation on a FF sensor (commonly rounded to 0.03 mm).
This is what defines the CoC (circle of confusion) on the sensor. It is derived backwards from visual perceptions, not forwards from sensor technology. The sensor and its pixels have nothing to do with it.
Items in the camera's field of view that impinge on the sensor within that CoC, when blown up to an 8 x 10 image and viewed at a distance of 1 foot will appear effectively as one, and hence blurs of this magnitude will not be noticed (will appear sharp). Differences from the focal plane creating blurs of this magnitude or smaller determine the DoF.
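The backwards derivation above can be sketched in a few lines. The numbers are the ones assumed in the post (1/100 inch visible on an 8x10 print at 1 foot; roughly a 7.5x linear blow-up from full frame), not independently measured:

```python
# Derive the on-sensor CoC backwards from print viewing conditions.
MM_PER_INCH = 25.4

visible_on_print_mm = MM_PER_INCH / 100  # 0.254 mm: smallest visible separation
linear_enlargement = 7.5                 # assumed full frame -> 8x10 print blow-up

coc_on_sensor_mm = visible_on_print_mm / linear_enlargement
print(f"CoC on a full-frame sensor ~= {coc_on_sensor_mm:.3f} mm")  # ~0.034 mm
```

Note that nothing about the sensor's pixel count appears anywhere in the calculation; only viewing conditions and enlargement do.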
No, because DoF is defined as being sharp at 8x10, 12", 20/20. If you blow up, you blew up your print but did not change DoF. Go to any DoF calculator and there is no input for final print size. Now, if you wanted a sharper print at a size greater than 8x10, indeed, you would need to account for a deeper DoF than the calculated value. The landscape photographers knew this and thus went for f/64 even though the calculation would have told you that f/16 is sharp enough. Yep, sharp enough for an 8x10 print.

However, if you blow up the image further, or you view it more closely, those items that appear together under the standard conditions (blowup and viewing distance) will now, under closer scrutiny, appear separate (blurry). And hence the DoF changes.
Now, you can start giving out the equivalent DoF for different print sizes, and it will have as much usefulness as the equivalent aperture, IMHO.

It's not just semantics. When a concept has a specific definition, it is no longer just a matter of semantics as to what it means. When you apply notions that are in conflict with the definition, you are corrupting the notion, not massaging it.

This is the moment that determines everything else. Every other argument is just semantics… I am really trying to be patient here.
Yes, basically. I'm limiting it to the maximum size of the blur circle (whether printed or viewed in any other way) that can be perceived to be a disk rather than a point. That is the definition of a CoC - it is all about perception and the limits of human vision, and nothing else.

Mike,

But the CoC in an image depends on viewing conditions: it isn't fixed at the image capture stage. The size of the CoC is the diameter of the smallest dot that you can perceive to be slightly blurred rather than just a dimensionless point - and that depends on how far away you look at it from.
I think you are limiting your definition of CoC to the printed size of a blur circle.
This is not part of the definition of a CoC - what you describe is a quite separate phenomenon and is simply out-of-focus blur. The size of this blur circle will vary depending on the distance from the focus plane, the aperture, and so on. For an f/1.2 lens this blur circle may be huge, and at small apertures with a good lens it may be very small - but this is not a circle of confusion.

But CoC happens at two times. First, the size of the blur circle in the image projected onto the sensor. This is usually measured in micrometers (millionths of a meter).
Let's go with these numbers and say that the slightly OOF image of the point source ends up being a 10 micron disk on the sensor. If it is enlarged to an 8x10 print, the enlargement is 60x in area (not diameter): the diameter would increase by around a factor of 8 (the short edge of a FF sensor is around 1 inch and you are expanding it to 8 inches in the print). So the diameter would be around 80 microns - less than the CoC - and that part of the image would still appear sharp. Of course, in any smaller print that point would also appear sharp.

An image from a FF sensor needs to be enlarged about 60 times when printed 8x10. What that means is that a blur circle at the capture stage (on the sensor) of 10 microns will print at 600 microns, quite a bit larger than the 254 microns you set as the minimum perceptible size of a blur circle at the printed stage of 0.01 in.
This is why a blur circle of 250 microns would be totally acceptable (and diffraction is a non-issue) when shooting large format 8x10 film if the output is to be an 8x10 print.
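The enlargement argument can be made concrete with a small sketch. The enlargement factors are assumptions for illustration (full frame to an 8x10 print is taken as ~8x linear, per the correction above; 8x10 film to an 8x10 print is a 1:1 contact print):

```python
# How the same print size implies different enlargement per format,
# and hence a different acceptable blur on the capture medium.
PERCEPTIBLE_ON_PRINT_UM = 254  # ~1/100 inch, from the viewing-condition argument

def printed_blur_um(blur_on_capture_um, linear_enlargement):
    """Diameter of a capture-stage blur circle once enlarged to the print."""
    return blur_on_capture_um * linear_enlargement

cases = [
    ("full frame, 10 um blur, ~8x linear enlargement", 10, 8),
    ("8x10 film, 250 um blur, contact print (1x)", 250, 1),
]
for label, blur_um, enlargement in cases:
    printed = printed_blur_um(blur_um, enlargement)
    verdict = "appears sharp" if printed <= PERCEPTIBLE_ON_PRINT_UM else "visibly blurred"
    print(f"{label}: prints at {printed} um -> {verdict}")
```

Both cases land at or below the ~254 um perceptibility threshold on the print, which is exactly why large format tolerates such a big on-film blur circle.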
Yep, everything is summarized in the last paragraph. Besides the word "common" scenario, I would add the word FAIR scenario.

It seems like every time these debates come up, there is always a faction that wants to measure DOF at the film plane or sensor. This is appropriate in ONE situation: when you are viewing two images where the size of the image from each sensor is the same size as the originating sensor, and you are viewing each image from the same viewing distance. If that is the very unique and uncommon scenario that you are stipulating, then the DOF of both images is the same.

Craig76 wrote:
• My point, which should be vainly obvious even to the most simple, is that a camera with a 50mm lens, focused on a stump 10 feet in front of it, with an aperture of f/4, will project an image with the same DOF characteristics. All other things being equal (pixel density), the sensor's size has nothing to do with this. The size of the sensor will merely determine how much of the image is recorded.

Pretty simple isn't it? - Don't know why folks can't see it. Just keep repeating this; that's all that is required. It's the lens, the aperture and the distance that control DoF. Always has been and always will be.
There are other ways that we can modify Craig76's "point" to make it correct, with respect to the same shot with differently-sized sensors.
Here is one:
A camera with a 50mm lens, focused on a stump 10 feet in front of it, with an aperture of f/4 will project an image with some of the same DOF variables (replacing "the same DOF characteristics"). Since we are talking here about the image being projected onto the sensor, some of the DOF variables are still to be defined. We do not yet know if they are the same or not.
Here is another:
A camera with a 50mm lens, focused on a stump 10 feet in front of it, with an aperture of f/4 will produce (replacing "project") an image with the same DOF characteristics, IF the final images are viewed at the same magnification (not the same size).
If you want to insist on saying that sensor size does not impact DOF, then you have to stipulate what other DOF variables you are fixing to make that hold true. As explained above, you must either say that you are viewing the images at the same size as the sensor or at the same magnification from sensor to final image. That is NOT the common scenario, however.
The COMMON scenario is where the final image size is fixed. If the final image size is fixed, then magnification from sensor to final image will change and resulting DOF will change DEPENDING ON SENSOR SIZE.
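The fixed-final-image-size scenario can be sketched with the standard thin-lens DoF formulas, where the only per-format input is the CoC. The 50mm, f/4, 10-foot numbers come from the thread; the assumption that CoC scales as a full-frame 0.030 mm divided by crop factor (i.e., fixed print size) is illustrative:

```python
# Standard hyperfocal-distance DoF calculation; all distances in mm.
def dof_mm(focal_mm, f_number, focus_dist_mm, coc_mm):
    """Return (near limit, far limit, total DoF) for a thin-lens model."""
    hyperfocal = focal_mm ** 2 / (f_number * coc_mm) + focal_mm
    near = hyperfocal * focus_dist_mm / (hyperfocal + focus_dist_mm - focal_mm)
    far = hyperfocal * focus_dist_mm / (hyperfocal - focus_dist_mm + focal_mm)
    return near, far, far - near

FOCUS_MM = 10 * 304.8  # 10 feet
for label, crop in (("full frame", 1.0), ("APS-C", 1.5)):
    coc = 0.030 / crop  # fixed print size: CoC shrinks with the sensor
    near, far, total = dof_mm(50, 4, FOCUS_MM, coc)
    print(f"{label}: total DoF = {total:.0f} mm (near {near:.0f}, far {far:.0f})")
```

With the same lens, aperture, and distance, the smaller sensor's smaller CoC yields a shallower computed DoF, which is exactly the "sensor size enters through the CoC" point being argued.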
It's OK to blow your stack - most of us are adults, if not downright old ;-0
We actually have been quite patient with you, but I think that's running out.