Is this a Full Frame Myth?

The short answer is that the DOF is the same, assuming you control all factors.

This means setting the FF camera to a higher ISO and a numerically larger f-stop. This way the image noise is the same, and the DOF is the same. Both cameras hit diffraction at the same time, although at different f-stops. Both cameras have the same shutter speed.

The only big exception is when the equivalent f-stop or ISO doesn't exist on one of the cameras--for example, you'd (roughly) need a 16/1.0 to be the equivalent of a 24/1.4 wide open.
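To make the bookkeeping concrete, here is a toy Python sketch of that crop-factor arithmetic; the function name and numbers are my own illustration of the simple model (same framing, same total light, same diffraction as a fraction of the frame), not anyone's exact recipe:

```python
# A toy version of the crop-factor "equivalence" bookkeeping described above.
# Only the simple model is captured here; all numbers are illustrative.

def equivalent_ff_settings(focal_mm, f_number, iso, crop_factor):
    """Map crop-sensor settings to (roughly) equivalent full-frame settings."""
    return {
        "focal_mm": focal_mm * crop_factor,   # same angle of view
        "f_number": f_number * crop_factor,   # same DOF, same diffraction blur
        "iso": iso * crop_factor ** 2,        # same total light -> similar photon noise
        # shutter speed is unchanged
    }

# e.g. a 16mm f/1.0 at ISO 800 on a 1.5x body maps to ~24mm f/1.5 at ISO 1800,
# close to the 24/1.4 wide open mentioned above.
print(equivalent_ff_settings(16, 1.0, 800, 1.5))
```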

Yeah, there are a bunch of fudge factors: this only takes photon noise into account, and electrical noise doesn't really vary directly with sensor size; the MTF of the system at the CoC resolution can raise or lower the DoF a little; as you get to numerically small f-stops, lens aberrations get exponentially, not geometrically, worse; some lenses have corner sharpness issues on FF, but crop is in general more demanding of lens quality; etc.

But equivalence is still a very good approximation.
 
(Subject is opening lyrics to NY Dolls "Personality Crisis")
but reducing the number of pixels past a certain point will result in a visibly pixelated image rather than it being blurred. This is just the digital equivalent of blur and not some special advantage.
We have to be sure we're not confusing and conflating two different
sources of "lack of detail." Just because we reduce detail by some
additional means doesn't mean the depth of field changed, nor does
digital sharpening (which doesn't increase detail anyway) change the
depth of field.

For instance, if I had a conventional film negative in the enlarger
and threw it out of focus to make a print, that did NOT decrease the
depth of field in that print--it's still there and in fact still
measurable, even though nothing is sharp.
Sure it reduces the DOF--to nothing pretty quickly. DoF can be defined as the ability to resolve detail at a certain frequency at a certain print size. Resolve can be restated as "has some minimum MTF"--2%, 5%, whatever. So if the enlarger-defocus blur is such that it yields an MTF at the prescribed frequency that's not extremely close to 100%, it will indeed change what does and does not meet the DoF criteria.
 
The short answer is that the DOF is the same, assuming you control
all factors.
I'm a Joe Mama fan and all, but let's keep it simple for now. I'd recommend the OP read the following article:

http://www.normankoren.com/Tutorials/MTF6.html

Note, in particular, this statement:

"Depth of field is constant when the f-stop is proportional to the format size, i.e., DOF is the same for a 35mm image taken at f/11, a 6x7 image at f/22, a 4x5 image at f/45 or an 8x10 image at f/90."

I'll put it another way: if you keep the f/ratio the same, then the larger sensor will produce a shallower DOF, assuming the same framing and perspective. Playing around with a DOF calculator will also help (well, it certainly helped me understand DOF years ago).
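For anyone who wants to see the arithmetic behind that quote, here is a minimal Python sketch using the standard thin-lens DOF formulas; the focal lengths, CoC values and 3 m subject distance are my own illustrative assumptions, treating each format as roughly twice the linear size of the previous one:

```python
# A minimal sketch of the arithmetic behind the Koren quote, using the standard
# thin-lens DOF approximations. Focal lengths, f-stops and CoC values scale
# with the format and are illustrative only.

def dof_limits_mm(focal_mm, f_number, coc_mm, subject_mm):
    """Near and far limits of acceptable sharpness (far may be infinite)."""
    h = focal_mm ** 2 / (f_number * coc_mm) + focal_mm          # hyperfocal distance
    near = subject_mm * (h - focal_mm) / (h + subject_mm - 2 * focal_mm)
    far = (subject_mm * (h - focal_mm) / (h - subject_mm)
           if subject_mm < h else float("inf"))
    return near, far

subject_mm = 3000.0  # 3 m
for label, focal, stop, coc in [("35mm @ f/11", 50, 11, 0.030),
                                ("6x7  @ f/22", 100, 22, 0.060),
                                ("4x5  @ f/45", 200, 45, 0.120)]:
    near, far = dof_limits_mm(focal, stop, coc, subject_mm)
    print(f"{label}: total DOF ~{(far - near) / 1000:.2f} m")
# The totals come out within a few percent of one another, which is Koren's point.
```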

All the best,

Victor
 
Let's say you want to print a life-size print of someone's head. Using the magnification logic, all portraits printed life size at, say, 10x12 should have the same DOF at the same f-stop, regardless of sensor size.
We knew that long before digital imaging was invented.

--
RDKirk
'TANSTAAFL: The only unbreakable rule in photography.'
 
As a D3 owner and
former D300 owner, I have observed there is no consistent 3D effect
with these cameras as often seen in comparing 35mm film to medium
format film.
I think the factor that people fail to take into account when comparing the DOF of digital cameras is the size of the photosites on the sensor. As the photosites get smaller, the DOF is reduced. So, if you compare a 35mm FF and a 4/3, the increase in DOF is not as great as you would assume by just comparing lens focal lengths and apertures.

In the film days (which I also remember), the resolution at the film plane was the same for all formats, hence we all developed our understandings based on focal length and aperture. I believe that in the digital realm these no longer tell the full story.

And yes, I have done calculations on this.

Cheers, Baddboy.
 
Let's say you want to print a life-size print of someone's head. Using
the magnification logic, all portraits printed life size at, say,
10x12 should have the same DOF at the same f-stop, regardless of
sensor size.
No, because the part about "the same F-stop" screws up this line of thinking.

Bigger sensors demand longer focal lengths for the same overall magnification. Longer focal lengths need larger apertures for the same f-number. Larger apertures mean reduced DoF. Thus, bigger sensors mean reduced DoF for the same overall magnification (same image composition).

This end result should be blindingly obvious, folks. A small-sensor digicam has far greater DoF than a DSLR does for the same image at the same f-number.
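In rough numbers (the focal lengths and CoC values below are assumptions scaled with the sensor, and the formula is the usual far-field approximation), it looks like this:

```python
# Same framing, same f-number, different sensor sizes, in rough numbers.
# Uses the far-field approximation DOF ~ 2*N*c*s^2/f^2; all figures assumed.

def approx_dof_m(focal_mm, f_number, coc_mm, subject_m):
    s_mm = subject_m * 1000.0
    return 2 * f_number * coc_mm * s_mm ** 2 / focal_mm ** 2 / 1000.0

SUBJECT_M = 3.0
cameras = [("small-sensor digicam (~4.5x crop)", 11, 0.0066),
           ("APS-C DSLR (1.5x crop)",            33, 0.020),
           ("full-frame DSLR",                   50, 0.030)]
for label, focal_mm, coc_mm in cameras:
    dof = approx_dof_m(focal_mm, 2.8, coc_mm, SUBJECT_M)
    print(f"{label}: ~{dof:.2f} m of DOF at f/2.8")
# The digicam ends up with several times the DOF of the full-frame body,
# even though the f-number is identical.
```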
 
Depth of field is the distance from the plane of focus that is in focus (to a certain degree; strictly, only the focus plane is "perfectly" in focus).

In fact, the depth of field is always 1/3 towards the camera, 2/3 away from the camera.

Now, with a 100mm lens on a D700, you will get less depth of field than with a D300 showing the same area in the photo. Why? Because with a 100mm lens on the D300, to get the same crop you have to move back. Moving back gives you more depth of field on either side of the plane of focus (closer to and further from the camera).

A point and shoot, having a tiny sensor requires a much shorter focal length to get the same crop from the same distance. So the shorter focal length gives you more depth of field. That's a big problem with such cameras. Everything ends up in focus, even at wide apertures.

Most of my professional career was shooting with full frame (film) cameras. It was tough getting used to the much greater depth of field for a given angle of view in the camera. My favorite focal length with film was 35mm. That's close to a 24mm lens on a Nikon DX body (which frames like a 36mm, to be exact). That 24mm has a much deeper depth of field when shooting from a distance on a D300 to get the same crop as a 35mm lens on a D3 or D700.

It's that simple. It's plain to those of us who have shot full frame most of our careers. Some website isn't going to show you diddly compared to what a professional is going to see in their photos.

I just shot an assignment with a 1Ds Mark II last week after shooting with my D200 for a few weeks. And it just seems more natural to me. And the depth of field is much shallower.

--
Eric

All cats are mortal.
Socrates died.
Therefore, Socrates was a cat.
 
then maybe these people are describing the ability to create a larger range for DOF.

if small sensor cameras automatically create a longer DOF no matter what, then the ability to work inside that limit may be of value, giving you mathematically a greater range of achievable depths, from the very shallow to the very deep

checked out the calculator, it is pretty informative and easy to use. thanks for pointing that out. I plugged in a 28mm lens set at f2.8 for full frame and for aps, and the full frame starts earlier and extinguishes at infinity. the aps starts later at the near limit and extinguishes at a given far distance of acceptable sharpness (28mm, f2.8, subject distance 10 meters)

the aps had a DOF of about 85 meters
the full frame was infinity
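a quick sanity check of that result with the usual hyperfocal formula (the CoC values below are common defaults I assumed, not the calculator's exact internals):

```python
# Sanity-checking the calculator result with the usual hyperfocal formula.
# CoC values are common defaults (0.030 mm FF, 0.020 mm APS-C), assumed here.

def far_limit_m(focal_mm, f_number, coc_mm, subject_m):
    s = subject_m * 1000.0
    h = focal_mm ** 2 / (f_number * coc_mm) + focal_mm   # hyperfocal distance, mm
    return float("inf") if s >= h else s * (h - focal_mm) / (h - s) / 1000.0

for label, coc in [("full frame", 0.030), ("APS-C", 0.020)]:
    print(f"{label}: far limit {far_limit_m(28, 2.8, coc, 10.0):.1f} m")
# Full frame: the 10 m subject sits beyond the ~9.4 m hyperfocal distance, so the
# far limit is infinite. APS-C: the hyperfocal distance is ~14 m, so the far limit
# stays finite (a few tens of meters with these assumptions).
```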

thanks again for suggesting playing with the learn glossary section of this site.

if my calculations are right (which they probably aren't) then the full frame has a shallower depth of field at telephoto than an aps and a deeper depth of field at wide angle settings.

and since the focal length numbering on a lens is not described by a specific number of millimeters from the sensor but appears to actually describe field of view math based off the 35mm format ratios, the only thing that is really confusing about the calculator is that it asks for actual lens lengths, which makes me uncertain as to what it is calculating.

anyway i know I ramble but this is a fun subject and it is increasing my understanding of my camera's abilities. thanks guys. hopefully the OP is gaining an understanding
work is money,
time is irreplaceable
 
Depth of field is the distance from the plane of focus that is in
focus (to a certain degree; strictly, only the focus plane is
"perfectly" in focus).

In fact, the depth of field is always 1/3 towards the camera, 2/3
away from the camera.
Always? Not really. That's a very rough rule of thumb that sometimes is useful.

Take for example focus at the hyperfocal distance (useful for landscapes). The DOF extends a specific distance in front of the focus point, and an infinite distance behind it. This is a ratio of 1:infinity.

Another example, for macro work, the DOF is more or less symmetrical, extending equal distances in front of and behind the focus point. A ratio of 1:1.

So the distribution varies from 1:1 for close subjects to 1:infinity for distant subjects. The variation is gradual, and the only useful generalisation is that for non-macro subjects, the DOF extends further behind the subject than in front.
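A small Python sketch of how the split drifts with distance makes the point; the 50mm lens, f/8 and 0.030 mm CoC are assumed values:

```python
# How the front/back split of the DOF drifts with subject distance, using the
# standard formulas. 50mm, f/8 and a 0.030 mm CoC are assumed; the hyperfocal
# distance for those values is about 10.5 m.

def dof_split_mm(focal_mm, f_number, coc_mm, subject_mm):
    h = focal_mm ** 2 / (f_number * coc_mm) + focal_mm
    near = subject_mm * (h - focal_mm) / (h + subject_mm - 2 * focal_mm)
    far = (subject_mm * (h - focal_mm) / (h - subject_mm)
           if subject_mm < h else float("inf"))
    return subject_mm - near, far - subject_mm   # (in front of subject, behind it)

for s_m in (0.5, 2, 5, 10):
    front, behind = dof_split_mm(50, 8, 0.030, s_m * 1000)
    behind_txt = "inf" if behind == float("inf") else f"{behind / 1000:.2f}"
    print(f"subject at {s_m:>4} m: {front / 1000:.2f} m in front, {behind_txt} m behind")
# Close up the split is nearly 50/50; near the hyperfocal distance virtually all
# of the DOF lies behind the subject. "1/3 : 2/3" is only a mid-range rule of thumb.
```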
Regards,
Peter
 
I understand what you are trying to say... but I don't understand how
you think your figures say it. Do want to give it another shot??
Happy to oblige, squire. I appreciate you are an old dog, but I'll try to go slow.

The term "Depth of Field" is one of worst invented by human ingenuity, if clarity is our goal. There is, of course, no such thing. What there is, is a zone of "acceptable sharpness", in front of and behind the plane of focus. But what counts as acceptable sharpness in a print depends on many factors - viewing distance, paper, subject and composition, etc. Of course, one key factor is the size of the printed circles of confusion; traditionally, 0.25mm has been taken as the critical size for a print viewed at reading distance.

The key point, in this context, is that, given stable subjective factors and viewing distance, the size of the printed circle of confusion that is perceived as unsharp is absolute. Given all the factors at play it may be more or less than 0.25mm, but whatever it is, it is a number of mm, not (eg) a proportion of the image. This is the take-home message.

In the olden days, prints were made from negatives. If you had a 36 x 24 mm negative and wanted to make a 20 x 25 cm print you had to enlarge the negative 7 times (25/3.6 = 6.9). You enlarged everything on the negative 7 times, so in order for the final printed circles of confusion to be less than 0.25mm they had to be 0.25/7 = 0.036mm on the negative. If you had an 8 x 10 negative you did not have to enlarge it at all to make an 8 x 10 print, so the circles of confusion could be 0.25mm on the negative and they would still be 0.25mm on the final print. There was a fixed and immutable relationship between the size of circles of confusion on the negative and their size on the print.
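Here is the same arithmetic written out as a tiny sketch; the 0.25 mm criterion and 20 x 25 cm print are from the paragraph above, the long-side figures are my additions:

```python
# The enlargement arithmetic above, written out.

PRINT_COC_MM = 0.25
PRINT_LONG_SIDE_MM = 250.0   # 25 cm

for fmt, long_side_mm in [("35mm negative (36 x 24)", 36.0), ("8x10 in negative", 254.0)]:
    enlargement = PRINT_LONG_SIDE_MM / long_side_mm
    print(f"{fmt}: enlarge ~{enlargement:.1f}x, "
          f"CoC on the negative must be <= {PRINT_COC_MM / enlargement:.3f} mm")
# 35mm: ~6.9x, so the negative's CoC must stay under ~0.036 mm.
# 8x10: ~1x, so the negative's CoC can be essentially the full 0.25 mm.
```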

But, if there is a worse term than DOF, it is "enlargement of a digital image". There is no such thing as a digital image: there is an image cast by the lens on the sensor, then there is a string of numbers (they encode an image, but they are just a string of numbers). It is not a film negative and you can't talk about it as if it was. In particular, there is not a fixed relationship between the size of the circles of confusion cast on the sensor and their size on the print.

On a printed digital image the size of a circle of confusion is related to the number of dots that make it up, and to the size of those dots. This is why it is important that the size of the printed circle of confusion perceived as sharp is absolute. Now, the number of dots making up the printed circle of confusion is set by the number of pixels covered by the circle of confusion in the real optical image cast on the sensor by the lens, but the size of the dots can be varied independently.

Capisc'?
--

'Some of the money I spent on booze, women and fast cars, but the rest I squandered' - George Best
 
On a printed digital image the size of a circle of confusion is
related to the number of dots that make it up, and to the size of
those dots. This is why it is important that the size of the
printed circle of confusion perceived as sharp is absolute. Now,
the number of dots making up the printed circle of confusion is set
by the number of pixels covered by the circle of confusion in the
real optical image cast on the sensor by the lens, but the size of
the dots can be varied independently.

Capisc'?
You went into more depth than was necessary, but yes, I understand what you are saying.

However, now that I understand where you are coming from, I can be perfectly clear that you are quite wrong.

You can prove it for yourself....

-- Make a series of small sample prints with your inkjet, and make them at progressively lower and lower resolutions.

-- Examine the prints critically side-by-side.

-- Admit to yourself that even the lowest rezzed one does NOT actually look less sharp anywhere, just more pixelated and jaggy, and less detailed as a consequence.

-- Note the interesting fact that the 'missing' detail can apparently be restored by moving away from that print and viewing from a distance sufficient to hide the pixelation/jaggies. This is true, as you will find out if you carry out this test.

I leave it to you whether you get back here and admit it to the forum, or not. ;-)

Best wishes, Les.
--
Regards,
Baz
 
Are you doing this deliberately?

Les Olson wrote:
...
In the olden days, prints were made from negatives. If you had a 36
x 24 mm negative and wanted to make a 20 x 25 cm print you had to
enlarge the negative 7 times (25/3.6 = 6.9). You enlarged everything
on the negative 7 times, so in order for the final printed circles of
confusion to be less than 0.25mm they had to be 0.25/7 = 0.036mm on
the negative. If you had an 8 x 10 negative you did not have to
enlarge it at all to make an 8 x 10 print, so the circles of
confusion could be 0.25mm on the negative and they would still be
0.25mm on the final print. There was a fixed and immutable
relationship between the size of circles of confusion on the negative
and their size on the print.

But, if there is a worse term than DOF, it is "enlargement of a
digital image". There is no such thing as a digital image: there is
an image cast by the lens on the sensor, then there is a string of
numbers (they encode an image, but they are just a string of
numbers). It is not a film negative and you can't talk about it as
if it was. In particular, there is not a fixed relationship between
the size of the circles of confusion cast on the sensor and their
size on the print.
This is one of the major fallacies that the digital crowd seem to like to put about, that somehow the rules of optics and physics don't apply to their world. When you digitise an image projected by your lens onto your sensor there are dimensions involved (the size of each pixel and the number of pixels in each linear dimension). Your digital file may not explicitly say so, but those dimensions were a property of the capture, and you get the same scaling effects by printing pixels at a size that is a multiple of the original. With a 35mm sensor an 8x10" print needs pixels that are 7 times larger than the size of the sensor pixels. The fact that you don't use analog scaling is an irrelevance.
On a printed digital image the size of a circle of confusion is
related to the number of dots that make it up, and to the size of
those dots. This is why it is important that the size of the
printed circle of confusion perceived as sharp is absolute. Now,
the number of dots making up the printed circle of confusion is set
by the number of pixels covered by the circle of confusion in the
real optical image cast on the sensor by the lens, but the size of
the dots can be varied independently.
NO. You've still not worked this out. The size of your pixels creates a hard limit to the resolution of your shot. You cannot resolve anything smaller than one pixel, so if a feature in a digital image is resolved to exactly one pixel wide then it is said to be critically sharp. Features that are resolved to points larger than one pixel start to affect adjacent pixels (changing their colour or brightness). Providing you have enough pixels to avoid pixelation in your target print size, the resulting print will still follow the DOF rules and circles of confusion will still apply.

If you haven't got enough pixels then the whole image is effectively blurred by the pixelation, not just the range outside of the DOF.
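A rough way to put numbers on that "enough pixels" condition; the print size, CoC criterion and pixel counts below are illustrative assumptions, not measurements:

```python
# When does the printed pixel, rather than the optical circle of confusion,
# become the visible limit? All figures are illustrative assumptions.

PRINT_COC_MM = 0.25
PRINT_LONG_SIDE_MM = 254.0   # 10 inches

print(f"need at least ~{PRINT_LONG_SIDE_MM / PRINT_COC_MM:.0f} pixels across the long side")

for label, width_px in [("800 px web-sized file", 800),
                        ("12 MP frame (4256 px wide)", 4256),
                        ("24 MP frame (6000 px wide)", 6000)]:
    printed_pixel_mm = PRINT_LONG_SIDE_MM / width_px
    limit = "pixelation" if printed_pixel_mm > PRINT_COC_MM else "optics / DOF"
    print(f"{label}: printed pixel ~{printed_pixel_mm:.3f} mm -> limited by {limit}")
# With comfortably more than ~1000 pixels across, the printed pixels are smaller
# than the CoC criterion and the ordinary DOF rules decide what looks sharp.
```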
 
You will have plenty of time to patronise people later, you can spare a couple of minutes.

Did you not say (correctly) in an earlier post that perceived DOF is reduced by enlargement, however you do it? And is not reducing dots/cm a form of enlargement?

Of course, it does not have to be. I can reduce dots/cm and print at the same size, and then you are right, and the reason you are right is exactly the point I was trying to help you grasp in relation to your post about displaying images at 100%: it is the absolute size of the printed (or displayed) circles of confusion that matters, not their relative size.

And if reducing dpi is a form of enlargement, is not increasing dpi a form of reduction, and will that not increase perceived DOF (at the same viewing distance)?

Suppose I have a camera with a 25 x 17 mm sensor and 7.2MP, and another camera with a 36 x 24 sensor and 14.6MP, so the pixels are the same size. I take images with each, side by side, using identical prime lenses with the same aperture. I make a 10 x 8 print at 300 dpi with the 7.2MP camera, and a 10 x 8 print at 425 dpi with the 14.6 MP camera.

Which print has the greater perceived DOF (assume viewing distance can be as short as needed to see a difference)? Give reasons for your answer if you don't think it is the 36 x 24.

I know the images are not identical, and if I made them identical by changing focal length I would change DOF. The point - the one the OP was making - is that it's not the sensor that changes the DOF, it is the lens.
--

'Some of the money I spent on booze, women and fast cars, but the rest I squandered' - George Best
 
Are you doing this deliberately?
Just hoping to learn. I am sorry to be a nuisance, but people keep confusing me.
With a 35mm sensor an 8x10" print needs pixels that are 7 times larger than
the size of the sensor pixels.
There you go again, confusing me. So, a 1DsIII has 21.1MP on a 36 x 24 sensor, and a 5D has 12.7MP on the same size sensor. If I make an 8 x 10 print from each, which one has pixels in the print seven times larger than the sensor pixels?
You cannot resolve
anything smaller than one pixel,
Doesn't the sampling frequency have to be at least twice the highest frequency component in the signal? You see why I am struggling? Just when you think you have it under control the Shannon-Nyquist theorem turns out to be wrong as well.
--

'Some of the money I spent on booze, women and fast cars, but the rest I squandered' - George Best
 
Suppose I have a camera with a 25 x 17 mm sensor and 7.2MP, and
another camera with a 36 x 24 sensor and 14.6MP, so the pixels are
the same size. I take images with each, side by side, using
identical prime lenses with the same aperture. I make a 10 x 8 print
at 300 dpi with the 7.2MP camera, and a 10 x 8 print at 425 dpi with
the 14.6 MP camera.

Which print has the greater perceived DOF (assume viewing distance
can be as short as needed to see a difference)? Give reasons for
your answer if you don't think it is the 36 x 24.
I've just run your example through the calculator at http://www.dofmaster.com/dofjs.html

I needed to add some specifics, so I chose a 50mm lens at f/8. Subject distance 10 feet.
For the 25 x 17 mm sensor:
Near limit 8.46 ft
Far limit 12.2 ft
Total 3.77 ft

For the 36 x 24mm sensor:
Near limit 7.77 ft
Far limit 14 ft
Total 6.28 ft
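The same figures fall out of the standard DOF formulas. The sketch below assumes CoC values of roughly 0.019 mm (APS-C) and 0.030 mm (full frame); it is not DOFMaster's actual code:

```python
# Reproducing the calculator's numbers: 50mm, f/8, subject at 10 ft, with
# assumed CoC values of ~0.019 mm (APS-C) and ~0.030 mm (full frame).

def near_far_ft(focal_mm, f_number, coc_mm, subject_ft):
    s = subject_ft * 304.8                                   # feet -> mm
    h = focal_mm ** 2 / (f_number * coc_mm) + focal_mm       # hyperfocal distance
    near = s * (h - focal_mm) / (h + s - 2 * focal_mm)
    far = s * (h - focal_mm) / (h - s) if s < h else float("inf")
    return near / 304.8, far / 304.8

for label, coc in [("APS-C sensor", 0.019), ("36 x 24 mm sensor", 0.030)]:
    near, far = near_far_ft(50, 8, coc, 10)
    print(f"{label}: near {near:.2f} ft, far {far:.2f} ft, total {far - near:.2f} ft")
# -> roughly 8.46 / 12.2 / 3.8 ft and 7.77 / 14.0 / 6.3 ft, matching the calculator.
```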
I know the images are not identical, and if I made them identical
by changing focal length I would change DOF. The point - the one the
OP was making - is that it's not the sensor that changes the DOF, it
is the lens.
I don't follow your statement above. The lens was identical, at the same aperture. So it can't be as simple as suggested, that the lens is the thing that makes the difference? Or were you thinking one thing while writing another?
Regards,
Peter
 
Are you doing this deliberately?
Just hoping to learn. I am sorry to be a nuisance, but people keep
confusing me.
If you have problems with what we're saying then fair enough, ask. You seem however to ignore what we (i.e. multiple people, not just myself) say and state your theories as fact.
With a 35mm sensor an 8x10" print needs pixels that are 7 times larger than
the size of the sensor pixels.
There you go again, confusing me. So, a 1DsIII has 21.1MP on a 36 x
24 sensor, and a 5D has 12.7MP on the same size sensor. If I make an
8 x 10 print from each, which one has pixels in the print seven times
larger than the sensor pixels?
BOTH prints have pixels that are 7 times larger than the pixels captured by the respective sensors. The 1DsIII captures MORE and smaller pixels than the 5D, but you need to enlarge them by exactly the same proportions to get the final prints. You ONLY change this magnification factor by changing the size of the total frame, not the size of a pixel within the frame.
You cannot resolve
anything smaller than one pixel,
Doesn't the sampling frequency have to be at least twice the highest
frequency component in the signal? You see why I am struggling?
Just when you think you have it under control the Shannon-Nyquist
theorem turns out to be wrong as well.
To resolve a PAIR of lines, one black, one white, which is the typical measure used for resolution charts (and normally measured in Line Pairs Per Millimeter, or LPPM), you need a minimum of two pixels, but each distinct feature is only one pixel wide. In practice, using a Bayer-type sensor with an anti-alias filter, you'll struggle to resolve features that small.
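A quick worked example of that line-pair arithmetic; the ~6 micron pixel pitch is an assumed, typical value, not a measurement of any particular body:

```python
# The line-pair arithmetic in one worked example, with an assumed pixel pitch.

pixel_pitch_mm = 0.006                   # ~6 microns, typical 12 MP full frame
lp_per_mm = 1.0 / (2 * pixel_pitch_mm)   # one black + one white pixel per line pair
print(f"theoretical limit: ~{lp_per_mm:.0f} line pairs per millimeter")
# ~83 lp/mm on paper; the Bayer mosaic and anti-alias filter push the practical
# figure noticeably lower, as noted above.
```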
 
DoF can be defined as the ability to resolve detail at a certain frequency at a certain print size.
No, it can't.

Now, if you want to talk about measuring arcs of view, I'd agree.

But "certain print sizes," no.

--
RDKirk
'TANSTAAFL: The only unbreakable rule in photography.'
 
If you take images with 36 x 24 and 25 x 17 sensor cameras with the same lens at the same position, the smaller sensor does not "see" the edges of the image cast on the larger sensor (the image is "cropped"). To include all the image cast on the larger sensor on the smaller sensor there would have to be a shorter focal length lens on the smaller-sensor camera, or you could walk further away from the subject. But then the DOF of the image cast on the smaller sensor would be greater.

So people say, zipping from A to B straight across the flower beds instead of staying on the path, "36 x 24 sensors have shallower DOF", instead of being more careful and saying "The adjustments to focal length or distance from the subject that are needed to make the field of view from 36 x 24 and 25 x 17 sensors the same predictably result in shallower DOF in the images from the 36 x 24 sensor". If you are not careful about the flower beds you reinforce the puzzlement the OP expressed.
--

'Some of the money I spent on booze, women and fast cars, but the rest I squandered' - George Best
 
BOTH prints have pixels that are 7 times larger than the pixels
captured by the respective sensors. The 1DsIII captures MORE and
smaller pixels than the 5D, but you need to enlarge them by exactly
the same proportions to get the final prints. You ONLY change this
magnification factor by changing the size of the total frame, not the
size of a pixel within the frame.
I never meant them Laws of Optics to come to no harm, Sir, I didn't think they would break, Sir, I was only being familiar like. You are quite right, Sir, of course, though I do think you may just have misunderstood my point, begging your worship's indulgence for what might be thought in a less humble man to be redolent of lese-majesty.

May I take the liberty, Sir, of pointing out that the two prints you had the goodness to refer to will not have used the same printer resolution: it will be 470-odd dpi in the case of the 1DsIII and 360-odd dpi in the case of the 5D. Now I do realise I have no business asking you to explain your printer settings, and you will do as you will, Sir, being a gentleman and all, but why should I use those printer resolutions, given that what I want to do is keep the printed circles of confusion below an absolute size of (give or take) 0.25mm and not below a constant proportion of the image?
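For the record, Sir, those figures follow straight from the published pixel counts once the 24 mm side of the frame is fitted to the 8-inch side of the print; a two-line sketch, if your worship will indulge it:

```python
# 1Ds Mark III: 5616 x 3744 px; 5D: 4368 x 2912 px (published specs).
# Fit the 3744/2912-pixel side to the 8-inch side of an 8x10 print.

PRINT_SHORT_SIDE_IN = 8.0
for body, short_side_px in [("1Ds Mark III", 3744), ("5D", 2912)]:
    print(f"{body}: {short_side_px / PRINT_SHORT_SIDE_IN:.0f} dpi on an 8x10 print")
# -> about 468 dpi and 364 dpi, i.e. the "470-odd" and "360-odd" above.
```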

My point, which I advance with an heart as humble as could be, was intended to bring to the notice of the quality here, such as your self, a point that seemed unaccountably to have avoided their notice, that in digital imaging there are consequences of perceived DOF being determined by the absolute size of the printed CoC, that did not apply to film. Just a theory, to be sure, and if you tell me it is wrong I will be careful to explain that to Mr Adams, should I have the opportunity in due course.

Ever your worship's humble servant.

--

'Some of the money I spent on booze, women and fast cars, but the rest I squandered' - George Best
 
until morale improves.

Les Olson wrote:
...
May I take the liberty, Sir, of pointing out that the two prints you
had the goodness to refer to will not have used the same printer
resolution: it will be 470-odd dpi in the case of the 1DsIII and
360-odd dpi in the case of the 5D.
And your point is? Both are beyond the limits that the human eye can distinguish at your standard 8x10" print size and viewing distance. They will appear to have identical resolution and DoF. The 1DsIII file can be enlarged more before pixels become apparent, but DoF comparisons assume prints of the same size.
Now I do realise I have no
business asking you to explain your printer settings, and you will do
as you will, Sir, being a gentleman and all, but why should I use
those printer resolutions, given that what I want to do is keep the
printed circles of confusion below an absolute size of (give or take)
0.25mm and not below a constant proportion of the image?
The point of having CoC correction factors is that you want images from whatever source to be comparable in a standard sized print. An 8x10" plate has a CoC of 0.22mm (not 0.25mm BTW). A 35mm frame has a CoC of 0.029mm, or approximately 1/7th of the full sized plate. The CoC for any given frame size will likewise tell you how much the image needs to be enlarged to reach 8x10" size. For example 4/3rds has a CoC of 0.015. 0.22/0.015 = 14.67, so an 8x10" print from 4/3rds is slightly under 15 times larger than the original frame.
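Spelled out with the CoC figures you quote, the enlargement factors are just ratios:

```python
# CoC-based enlargement factors, using the figures quoted above.

REFERENCE_COC_MM = 0.22   # 8x10 contact print
for fmt, coc_mm in [("35mm", 0.029), ("Four Thirds", 0.015)]:
    print(f"{fmt}: CoC {coc_mm} mm -> ~{REFERENCE_COC_MM / coc_mm:.1f}x enlargement to 8x10")
# 35mm works out to ~7.6x and Four Thirds to ~14.7x, matching the post.
```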

Now if you want to b*gg*r about with changing printer resolutions then you're changing print sizes and are no longer comparing the same thing.
My point, which I advance with an heart as humble as could be, was
intended to bring to the notice of the quality here, such as your
self, a point that seemed unaccountably to have avoided their notice,
that in digital imaging there are consequences of perceived DOF being
determined by the absolute size of the printed CoC, that did not
apply to film. Just a theory, to be sure,
And a wrong and broken theory, as we (note that I'm not alone in telling you this) keep telling you. Film and digital work identically here. You can choose higher-resolution films and enlarge the results more, but the change in DoF is not a feature of the film. You ALWAYS compare DoF at a standard print size. If you change the print size for one then you change it for all. Only when one or more systems run out of resolution does comparison become impossible.
 
