Part of the reason there is a lack of agreement is that the examples given are not consistent from poster to poster. They are looking at different things and comparing apples to oranges.
An image on a smaller sensor needs to be enlarged more, to give a print of the same size. This difference in the degree of enlargement will affect the depth of field.
This is incorrect, and my guess is it's based on film thinking. A 10MP image will need the same degree of "enlargement" to make a large print regardless of the size of the sensor it came from. The image consists only of pixels, as does the print you make from it.
I'm sorry, you have lost me. What is a "large" print, when the only unit of measurement is the pixel? How much wood and glass is required in order to frame such a print?
It is easy to test this: just strip the EXIF data from two images of the same pixel size from different sensors and see if your printing programme cares about sensor size.
In order to meet your requirements, it would be necessary to use a printing program which specifies the paper size in pixels too. Good luck with that.
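For anyone who actually wants to run the stripping half of that test, here is a minimal sketch, assuming Python with the Pillow library installed; the filenames are hypothetical:

```python
# Strip all metadata (including EXIF) from a JPEG by copying only the
# raw pixel data into a fresh image. Requires Pillow: pip install Pillow
from PIL import Image

def strip_metadata(src_path, dst_path):
    img = Image.open(src_path)
    clean = Image.new(img.mode, img.size)  # same pixels, no metadata
    clean.putdata(list(img.getdata()))
    clean.save(dst_path)

# Hypothetical files from two different cameras, same pixel dimensions:
strip_metadata("small_sensor.jpg", "clean_a.jpg")
strip_metadata("large_sensor.jpg", "clean_b.jpg")
# With no EXIF left, all a printing programme can see in clean_a.jpg
# and clean_b.jpg is identical pixel dimensions.
```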
What Steephill may be referring to is the pixels per inch specification (e.g. 360 ppi).
So let's say there are 3600 pixels across the sensor. The size of the print at 360 ppi would then be 10 inches on one side. If you play around with the Image Size dialogue box in Photoshop (with resampling turned off!), you can determine what size prints can be made at various ppi settings.
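A tiny sketch of that arithmetic, using just the numbers from the example; note that nothing in it depends on sensor size:

```python
# Print dimension in inches = pixel count / pixels per inch (no resampling).
def print_size_inches(pixels, ppi):
    return pixels / ppi

print(print_size_inches(3600, 360))  # 10.0 in, as in the example
print(print_size_inches(3600, 300))  # 12.0 in at a coarser 300 ppi
```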
Now with that thinking, at 360 ppi a small sensor with 3600 tiny pixels will produce a 10 in long print, and a large, football-field-sized sensor that is also 3600 pixels long will be shrunk down to produce the same 10 in long print. With this logic, as long as the lens used projects the same field of view onto both sensors, one a tiny scene and one a gigantic scene, both showing the same view, then the print would have the same depth of field in each case.
Keep that thought and let's look at this statement:
Look at it like this: Let's take the popular 0.03mm circle of confusion on a full-frame sensor. Let's say we have an image of a single circle with a diameter of 0.03mm projected onto two sensors: one is a full-frame sensor, the other is a 1/3" type. Now, wouldn't you think that the circle would take up more of the total space on the small 1/3" type sensor than on the full-frame sensor? And so if you have two 10MP images, one from the small sensor, one from the bigger sensor, in which would the circle appear larger?
Of course the 0.03mm circle projection would have the same 0.03mm diameter on both 10MP sensors. It may cover 9 pixels on the smaller sensor and only 4 pixels on the larger sensor, but the projected circle would be the same size. So you could say that the DOF is exactly the same. Except that we don't normally view the image on a sensor; we usually enlarge it to a certain size, like a 24" diagonal screen or an 8x10 print. As a result, with the same degree of enlargement, the bigger sensor could fill up the 8x10 print but the smaller sensor would only get to 6x8, for example. Now someone can claim that the DOF is still the same in both, since the 0.03mm diameter circle got enlarged to the same degree.
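In numbers, that claim looks like the sketch below; the 8x enlargement factor and the sensor widths are illustrative assumptions, chosen so the same enlargement yields roughly a 10 in wide and an 8 in wide print:

```python
# Same physical blur circle, same enlargement factor: the blur on the
# print is identical; only the print size differs.
coc_mm = 0.03
enlargement = 8.0                            # assumed common enlargement factor

print(coc_mm * enlargement)                  # 0.24 mm blur on BOTH prints

# Hypothetical sensor widths (mm), picked to match the example print sizes:
big_sensor_mm, small_sensor_mm = 32.0, 25.4
print(big_sensor_mm * enlargement / 25.4)    # ~10.1 in wide print
print(small_sensor_mm * enlargement / 25.4)  # 8.0 in wide print
```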
That is correct, and that is why Leica, Zeiss, Canon, etc. require another parameter in addition to enlargement before determining DOF. They stipulate that you need to be looking at the image from a specific distance, one close enough that the image can just be viewed in its entirety without panning around. Roughly, that distance is the diagonal of the image. So for the 8x10 you would be "examining" it for DOF from a distance of about 13". For the smaller 6x8 crop of the same image, you would have to move in closer, to about 10", in order to keep the same angle of view. Once you do that, the blur circle will appear that much bigger and consequently blurrier. The DOF just went down!
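Here is the geometry of that step as a sketch; the 0.24mm print blur assumes the 0.03mm circle and the 8x enlargement from the previous sketch, and the viewing distances are the approximate print diagonals:

```python
import math

def blur_angle_arcmin(blur_mm, viewing_distance_in):
    # Angle the enlarged blur circle subtends at the eye, in arc-minutes.
    distance_mm = viewing_distance_in * 25.4
    return math.degrees(math.atan2(blur_mm, distance_mm)) * 60

blur_mm = 0.24                         # 0.03 mm CoC after assumed 8x enlargement
print(blur_angle_arcmin(blur_mm, 13))  # ~2.5' for the 8x10 viewed from ~13 in
print(blur_angle_arcmin(blur_mm, 10))  # ~3.2' for the 6x8 viewed from ~10 in
# The same blur circle subtends a larger angle from the closer viewing
# distance, so it looks blurrier: the perceived DOF has gone down.
```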
However, unlike the first example, the field of view around that circle shown on the smaller sensor is cropped, so it's an unfair comparison. But if you change the lens so that the same field of view is projected onto both sensors, the situation in the first example occurs. This is why acceptable circles of confusion are smaller in the same proportion as the smaller sensor is to the larger one. It accounts for the fact that you can't enlarge the smaller image to the same print size as the larger one without noticing that an apparently sharp point is in fact a blurry circle.
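A sketch of that proportionality; the crop factors are the usual nominal diagonal ratios, given here as assumptions:

```python
# Acceptable CoC shrinks in the same proportion as the sensor, because
# the smaller image must be enlarged more to reach the same print size.
full_frame_coc_mm = 0.030   # the popular full-frame value from above

# Nominal crop factors (full-frame diagonal / sensor diagonal), assumed:
crop_factors = {"full frame": 1.0, "APS-C": 1.5, "Four Thirds": 2.0, '1/3" type': 7.2}

for name, crop in crop_factors.items():
    print(f"{name}: acceptable CoC ~ {full_frame_coc_mm / crop:.4f} mm")
```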
So those are two different ways of looking at it. The ppi argument implies that you'll get the same image in either case. The circle of confusion / degree of enlargement argument seems to imply otherwise.
I'll take a break and let others have a go at solving this paradox.
--
Robert