Cropping for larger depth of field

Frank_GH

Hi,

Moving away from a subject and then cropping the picture is a way of taking advantage of a wide aperture (in dark situations) while keeping the DOF a little larger. In a 24-megapixel photo (6000x4000 pixels), a 1/3 crop of the picture is 2000x1333, which is okay for a 1920x1080 screen. But from my experiments, the quality of the image is much worse than if I crop half of a 3936x2624 picture (getting 1968x1312 pixels, which is also fine for a 1080p screen).

Is there a limit to how much you can crop (other than the number of pixels, which is not a problem in either example above)?

In case it's relevant, I'm using the Sony a7iii.
 
Thank you all for the comments. Below is a demonstration. Let's say I want to make a picture my desktop background. Whether it's a 6MP or a 10MP photo, it will be shrunk down, so I did just that in Photoshop (I created a 1920x1080 image file, pasted a 10MP picture and sized it down to fit the frame). Going back to my camera, I stepped back to take a picture 2X as wide, and cropped the central 1920x1080 pixels in Photoshop. Then I did the same for a picture 3X as wide (but that one was 24MP). Next, I tried to find how much I would need to stop down, from my original position closer to the subject, to be able to read the buttons in the background. I tested f/10, f/11 and f/13, and as you can see, the ISO goes up too much and the result is not as good as the 1/3 cropped picture.

f/4.5

Half the width and half the height, so it should read "1/4". f/4.5

1/3 the width and 1/3 the height, so it should read "1/9". f/4.5


I appreciate all the explanations, though I didn't understand everything. The bottom line is I will probably keep using this method, as it works great for my needs. In my OP, I was asking why it worked okay when cropping a 10MP photo by 1/4 but not a 24MP photo by 1/9. In this test, the 1/9 cropped picture looks fine, so I guess I just did something wrong the first time.
I think your 'method' isn't gaining you anything. The basic mistake is thinking you gain quality by using a smaller f-number and lower ISO for a cropped image, compared with using a larger f-number and higher ISO for an uncropped image. In fact, both end up using just the same amount of light, so the quality is just the same, except that the cropped image has the disadvantage of being made with fewer pixels. The problem with your experiment is that you're changing more than one variable at the same time, so you're mixing up what is the consequence of what.

You'll get the best results by:

1. Shoot from as far away as you can, and use a lens that gives you the required field of view from that distance using the full frame. The point of this is that the difference in depth of your subject is a smaller proportion of the shooting distance, so you need less proportional depth of field to cover the subject.

2. Use as long a shutter speed as you can. A camera support such as a tripod will help you use a longer shutter speed.

3. Focus and set the f-number to get just the DOF that you need. A tool like DOFMaster can help you work out what these are. You will need to focus a little bit 'into' the subject, not on the closest bit.

4. When you have the longest shutter speed that you can and have set the aperture for the DOF, you have the largest exposure that the circumstances allow. Set the ISO so that whatever that is centres the meter, or just use auto-ISO. One of your mistakes was thinking that high ISO causes a drop in image quality. It doesn't; it's the low amount of light energy captured in the image that causes the drop. The amount of light energy is given by the exposure times the effective sensor size (after cropping), which is why you don't want to crop: you're reducing the amount of light energy in the image as a whole.
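The light-energy point in 4 can be sketched numerically. This is a toy model in relative units (the function name and the numbers are illustrative, not any camera's actual figures):

```python
# Toy model: total light energy ~ exposure (light per unit area) x sensor
# area actually used in the final image. Relative units, not calibrated.

def relative_light(exposure, sensor_width_mm, sensor_height_mm, crop_factor=1.0):
    """Relative light energy in the (possibly cropped) image.

    crop_factor is linear: 2.0 means keeping half the width and half
    the height, i.e. 1/4 of the sensor area.
    """
    area = (sensor_width_mm / crop_factor) * (sensor_height_mm / crop_factor)
    return exposure * area

full = relative_light(exposure=1.0, sensor_width_mm=36, sensor_height_mm=24)
crop = relative_light(exposure=1.0, sensor_width_mm=36, sensor_height_mm=24,
                      crop_factor=2.0)

print(crop / full)  # 0.25: same exposure, but a 2x linear crop keeps 1/4 of the light
```

In other words, cropping with the exposure unchanged throws away light energy exactly the way stopping down two stops would, which is Bob's point.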

--

Things became much easier since I stopped confusing profundity and profanity.
 
You'll get the best results by:

1. Shoot from as far away as you can, and use a lens that gives you the required field of view from that distance using the full frame.
Oops, Bob, I don't know what you intended, but:

50mm lens, 10m focus, f/4: -3.1m to +9.1m depth of field (12.2m total depth)

100mm lens, 20m focus, f/4: -3.9 to +6.3m depth of field (10.2m total depth)

I don't know why the results are different between the two cases. I think they should be the same, but there must be some assumption about the aperture. But never mind that. They are approximately the same, but the DOF calculator says you get slightly worse results by stepping back.
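For what it's worth, the standard hyperfocal-distance formulas reproduce numbers very close to those above, assuming the common full-frame circle of confusion of 0.03 mm (DOFMaster presumably uses a slightly different value, which would explain the small discrepancies):

```python
# Near/far limits of acceptable focus from the usual hyperfocal formulas.
# c (circle of confusion) = 0.03 mm is a common full-frame assumption;
# online calculators vary slightly in the value they use.

def dof_limits(f_mm, N, s_mm, c_mm=0.03):
    """Return (near, far) distances of acceptable focus, in mm."""
    H = f_mm * f_mm / (N * c_mm) + f_mm               # hyperfocal distance
    near = s_mm * (H - f_mm) / (H + s_mm - 2 * f_mm)
    far = s_mm * (H - f_mm) / (H - s_mm) if s_mm < H else float("inf")
    return near, far

for f_mm, s_mm in [(50, 10_000), (100, 20_000)]:
    near, far = dof_limits(f_mm, N=4, s_mm=s_mm)
    print(f"{f_mm}mm at {s_mm/1000:.0f}m, f/4: "
          f"-{(s_mm - near)/1000:.1f}m to +{(far - s_mm)/1000:.1f}m "
          f"({(far - near)/1000:.1f}m total)")
```

With c = 0.03 mm this gives roughly -3.2/+9.1 m (12.4 m total) and -3.9/+6.3 m (10.1 m total), matching the pattern above: the longer lens from farther back ends up with slightly *less* total DOF.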
 
You'll get the best results by:

1. Shoot from as far away as you can, and use a lens that gives you the required field of view from that distance using the full frame.
Oops, Bob, I don't know what you intended, but:

50mm lens, 10m focus, f/4: -3.1m to +9.1m depth of field (12.2m total depth)

100mm lens, 20m focus, f/4: -3.9 to +6.3m depth of field (10.2m total depth)

I don't know why the results are different between the two cases. I think they should be the same, but there must be some assumption about the aperture. But never mind that. They are approximately the same, but the DOF calculator says you get slightly worse results by stepping back.
Amazing how often things you've memorised and not questioned turn out to be wrong. You're absolutely right, and when I think about it properly, you must be, because all I'm saying is to scale everything up, in which case all the angles stay the same.

To the OP: ignore my first piece of advice, it's wrong.
 
Bob, here is a simple trick, just for you. This little diagram immediately and simply solves all kinds of depth of field problems. It helps me to think in subject space.

The top diagram represents lens L (actually the aperture of lens L) focused on subject plane S. The thick line in plane S is the circle of confusion corresponding to point P.

As an example, the bottom diagram represents the current problem: double the focal length, same f/ stop.

 

Christof21, Thrilla and Bob,

Just to clarify, I'm not trying to say that you're wrong and I'm right. You obviously are way more experienced in photography.

But practically speaking, I tried to adjust the brightness/contrast of the f/13 photo in my example, and I can't make it look as good as the "24MP 1/3 crop" (in addition to the buttons at the back being less in focus). I understand that if I were to print these pictures, the cropped one would look worse than the "f/13" one. But what about a desktop background? If my screen is a 1080p, having more than 1920x1080 pixels is useless, right?

If I'm doing something wrong, please tell me!

Thanks
 
Christof21, Thrilla and Bob,

Just to clarify, I'm not trying to say that you're wrong and I'm right. You obviously are way more experienced in photography.

But practically speaking, I tried to adjust the brightness/contrast of the f/13 photo in my example, and I can't make it look as good as the "24MP 1/3 crop" (in addition to the buttons at the back being less in focus). I understand that if I were to print these pictures, the cropped one would look worse than the "f/13" one. But what about a desktop background? If my screen is a 1080p, having more than 1920x1080 pixels is useless, right?

If I'm doing something wrong, please tell me!
With your cropping and other manipulations, you probably don't have as many surviving pixels as you imagine. Your experimental cropping was a poorly conceived exercise right from the start.

Windows does a reasonable job of fitting whatever you throw at it to the background, but I usually crop and resize to 16:9 using a graphics program, so that unexpected cropping doesn't occur.

One of my favourites is 3840x2160 and it looks good on all my screens. I guess there's about 8 MPix in that scene.

Here's a low res. version...

Snowy Mountains, Australia.
 
Christof21, Thrilla and Bob,

Just to clarify, I'm not trying to say that you're wrong and I'm right. You obviously are way more experienced in photography.

But practically speaking, I tried to adjust the brightness/contrast of the f/13 photo in my example, and I can't make it look as good as the "24MP 1/3 crop" (in addition to the buttons at the back being less in focus). I understand that if I were to print these pictures, the cropped one would look worse than the "f/13" one. But what about a desktop background? If my screen is a 1080p, having more than 1920x1080 pixels is useless, right?

If I'm doing something wrong, please tell me!

Thanks
Hello,

The metering is different because it does not apply to the same part of the picture: it applies to the whole picture. So I would suggest using manual exposure to make a fair comparison.

Same for the JPEG engine: it applies to the whole picture, so I would shoot in raw to make a fair comparison.

Also, we need the complete EXIF to see how the exposure and lightness compare.
 
Bob, here is a simple trick, just for you. This little diagram immediately and simply solves all kinds of depth of field problems. It helps me to think in subject space.

The top diagram represents lens L (actually the aperture of lens L) focused on subject plane S. The thick line in plane S is the circle of confusion corresponding to point P.

As an example, the bottom diagram represents the current problem: double the focal length, same f/ stop.

Thanks, that was the diagram I had in my mind. My brain just messed it up. I find it does that sometimes, I must give it a stern talking to.

--
Things became much easier since I stopped confusing profundity and profanity.
 
Christof21, Thrilla and Bob,

Just to clarify, I'm not trying to say that you're wrong and I'm right. You obviously are way more experienced in photography.

But practically speaking, I tried to adjust the brightness/contrast of the f/13 photo in my example, and I can't make it look as good as the "24MP 1/3 crop" (in addition to the buttons at the back being less in focus). I understand that if I were to print these pictures, the cropped one would look worse than the "f/13" one. But what about a desktop background? If my screen is a 1080p, having more than 1920x1080 pixels is useless, right?

If I'm doing something wrong, please tell me!
As I said, your problem was doing tests which involve changing multiple parameters at the same time. Almost inevitably, the outcomes of such tests are messed up. To discover these things you need to do careful and systematic tests where you change just one variable at a time. What you have done is adjust the f-stop, meanwhile (I'm guessing) changing the exposure (you haven't said anything about the exposure settings or how you set them), then you changed the subject distance, then you changed the crop and resampling ratio, then you changed the processing. In the end, you don't know what you're comparing with what.

Maybe you could give some more details of what you actually did, or leave the EXIF on your images, which would give a lot more information.
 
Bob, here is a simple trick, just for you. This little diagram immediately and simply solves all kinds of depth of field problems. It helps me to think in subject space.

The top diagram represents lens L (actually the aperture of lens L) focused on subject plane S. The thick line in plane S is the circle of confusion corresponding to point P.

As an example, the bottom diagram represents the current problem: double the focal length, same f/ stop.
You can also consider it from the perspective of the subject. The size of the entrance pupil diameter, as seen as an angle, from the subject's perspective, determines how focus falls off in front of and behind the focal plane through the subject.

Spending most of my photography time shooting birds so small that they need to be cropped 95% of the time, even with 560mm or 800mm with APS-C, I have come to think purely in terms of subject. DOF of the entire frame, noise of the entire frame, diffraction blur size relative to the frame, etc, are totally meaningless to me in that context. Entrance pupil and proximity, subject size, environmental illumination, and shutter speed (chosen more for subject motion than anything else with lens IS), are my world. The frame is only a factor when the angle of view is too narrow, in which case I remove a TC, zoom out (if using a zoom), or as a last resort, step back.

I watch all kinds of people I know make non-optimal gear and setting choices for the same type of photography, because they do not have these simple truths in mind. Once you start thinking this way, there is no point in looking at things the conventional way, and even in situations where frame quality is more relevant, the model still works, if you consider the rectangle of the frame's composition as the subject, which just happens to fill a frame.
 
You can also consider it from the perspective of the subject. The size of the entrance pupil diameter, as seen as an angle, from the subject's perspective, determines how focus falls off in front of and behind the focal plane through the subject.

Spending most of my photography time shooting birds so small that they need to be cropped 95% of the time, even with 560mm or 800mm with APS-C, I have come to think purely in terms of subject. DOF of the entire frame, noise of the entire frame, diffraction blur size relative to the frame, etc, are totally meaningless to me in that context. Entrance pupil and proximity, subject size, environmental illumination, and shutter speed (chosen more for subject motion than anything else with lens IS), are my world. The frame is only a factor when the angle of view is too narrow, in which case I remove a TC, zoom out (if using a zoom), or as a last resort, step back.

I watch all kinds of people I know make non-optimal gear and setting choices for the same type of photography, because they do not have these simple truths in mind. Once you start thinking this way, there is no point in looking at things the conventional way, and even in situations where frame quality is more relevant, the model still works, if you consider the rectangle of the frame's composition as the subject, which just happens to fill a frame.
Yes, you can work out the depth of field without knowing much about the camera and lens.

If you know:

1. The diameter of the field of view (after any cropping) in the plane of the subject (d),

2. The diameter of the aperture (entrance pupil) (a),

3. The subject distance (s),

then you have everything you need to work out the depth of field, assuming a circle of confusion for acceptable sharpness (which is normally taken as 1/1500th of the diameter of the field of view if measured in the subject plane).

A handy approximate formula is:

DoF = 2Ds, where D = d/(1500a)

A more accurate formula is:

DoF = 2Ds/(1 – D^2)

Sorry about the maths, but I find these formulae very useful and others may do so too.

These formulae work even if you take a shot with a wider field of view and then crop the image (to the field of view used in the formula).
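Those two formulae drop straight into code. The sketch below uses my own illustrative numbers (a 50mm f/4 lens, 12.5mm entrance pupil, at 10 m on full frame, taking the field-of-view diameter as roughly 8.66 m for the frame diagonal at that distance); any consistent units work:

```python
def dof_subject_space(fov_diameter, pupil_diameter, subject_distance):
    """Approximate DOF from subject-space quantities (consistent units).

    fov_diameter: diameter of the field of view in the subject plane
                  (after any cropping).
    pupil_diameter: entrance pupil (aperture) diameter.
    Uses the 1/1500-of-field-of-view circle of confusion convention.
    """
    D = fov_diameter / (1500 * pupil_diameter)
    simple = 2 * D * subject_distance            # DoF = 2Ds
    accurate = simple / (1 - D ** 2)             # DoF = 2Ds/(1 - D^2)
    return simple, accurate

# 50mm f/4 (12.5mm pupil) at 10m: full-frame diagonal field is about 8.66m.
simple, accurate = dof_subject_space(8.66, 0.0125, 10.0)
print(simple, accurate)
```

The "accurate" version always comes out a bit larger than the simple one because the far limit extends further than the near limit pulls in.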
 
Hi,

Moving away from a subject and then cropping the picture is a way of taking advantage of a wide aperture (in dark situations) while keeping the DOF a little larger. In a 24-megapixel photo (6000x4000 pixels), a 1/3 crop of the picture is 2000x1333, which is okay for a 1920x1080 screen. But from my experiments, the quality of the image is much worse than if I crop half of a 3936x2624 picture (getting 1968x1312 pixels, which is also fine for a 1080p screen).

Is there a limit to how much you can crop (other than the number of pixels, which is not a problem in either example above)?

In case it's relevant, I'm using the Sony a7iii.
You decide which is best from the following three photos, all resized to the same dimensions as the crop (last of the three photos):

50mm f/4 1/640 ISO 100 (resized)

100mm f/8 1/640 ISO 400 (resized), taken from twice as far back as the 50mm f/4 photo above.

Crop of 50mm f/4 1/640 ISO 100 taken from the same position as the 100mm f/8 photo above.

Once again, the first two photos above have been resized to the same size as the crop of the 50mm f/4 photo (last photo). Note that the last two have the same perspective (since they were taken from the same position), which is different from the first (which was taken at half the distance of the latter two).

Here are the original fullsize photos for the first two photos above:

50mm f/4 1/640 ISO 100 (original)

100mm f/8 1/640 ISO 400 (original) taken from twice as far back as the 50mm f/4 photo above.
 
My brain just messed it up. I find it does that sometimes, I must give it a stern talking to.
What?! That happens all the time to most of us. You have the advantage that you comprehend instantly instead of having to be browbeaten. I had to unlearn a bunch of stuff about imaging.

Depth of field calculations are much simpler in subject space. I'm sure you know, to convert to image space, just divide the CoC by the magnification.
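That conversion is a one-liner. A sketch, with my own illustrative 50mm-at-10m numbers (here magnification is taken as m = f/(s − f), image size over subject size, so the conversion multiplies by m; dividing by the subject-over-image magnification is the same thing):

```python
def image_space_coc(subject_coc_mm, focal_mm, subject_dist_mm):
    """Convert a subject-plane circle of confusion to the image plane.

    Thin-lens magnification m = f / (s - f), image over subject;
    image-space CoC = subject-space CoC * m.
    """
    m = focal_mm / (subject_dist_mm - focal_mm)
    return subject_coc_mm * m

# 50mm lens at 10m: a 5.77mm blur circle in the subject plane (1/1500 of
# an 8.66m field diagonal) maps to roughly the usual 0.03mm on the sensor.
print(image_space_coc(5.77, 50, 10_000))
```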
 
Bob, here is a simple trick, just for you. This little diagram immediately and simply solves all kinds of depth of field problems. It helps me to think in subject space.

The top diagram represents lens L (actually the aperture of lens L) focused on subject plane S. The thick line in plane S is the circle of confusion corresponding to point P.

As an example, the bottom diagram represents the current problem: double the focal length, same f/ stop.
You can also consider it from the perspective of the subject. The size of the entrance pupil diameter, as seen as an angle, from the subject's perspective, determines how focus falls off in front of and behind the focal plane through the subject.
Interesting, yes. I would say that my diagram is essentially equivalent.
Spending most of my photography time shooting birds so small that they need to be cropped 95% of the time, even with 560mm or 800mm with APS-C, I have come to think purely in terms of subject. DOF of the entire frame, noise of the entire frame, diffraction blur size relative to the frame, etc, are totally meaningless to me in that context.
Yes, indeed.
Entrance pupil and proximity, subject size, environmental illumination, and shutter speed (chosen more for subject motion than anything else with lens IS), are my world. The frame is only a factor when the angle of view is too narrow, in which case I remove a TC, zoom out (if using a zoom), or as a last resort, step back.

I watch all kinds of people I know make non-optimal gear and setting choices for the same type of photography, because they do not have these simple truths in mind. Once you start thinking this way, there is no point in looking at things the conventional way, and even in situations where frame quality is more relevant, the model still works, if you consider the rectangle of the frame's composition as the subject, which just happens to fill a frame.
Amen. If I were teaching photography, that's how I would teach it. I think students would understand much more easily. I would mention the conventional approach only in passing. While the conventional approach is easy to understand, it's quite limited, and not very extensible without error-prone problem solving.
 
My brain just messed it up. I find it does that sometimes, I must give it a stern talking to.
What?! That happens all the time to most of us. You have the advantage that you comprehend instantly instead of having to be browbeaten. I had to unlearn a bunch of stuff about imaging.

Depth of field calculations are much simpler in subject space. I'm sure you know, to convert to image space, just divide the CoC by the magnification.
Yes, I do my DOF in subject (object) space. I even worked out all the formulae. My brain just misbehaved on this occasion.
 
EDIT: This is nonsense. Ignore it.

Great Bustard's pictures are perfect examples. #1 and #3 show that increasing the distance and cropping does work to sharpen the background. The penalties are that the number of pixels on the subject is reduced, and lens aberrations become more important. And of course the perspective is changed. As the OP suggested, this can be a useful tool.

If the exposure time were increased and the perspective were not important, it might be better to use the same distance as #1, with 50 mm at f/8. If the exposure time could not be increased, then f/8 would reduce the exposure, and the ISO setting could be increased to compensate. You would have to choose between reduced exposure or a reduced number of pixels. Photography often requires tradeoffs and compromises.
 
Hi,

Moving away from a subject and then cropping the picture is a way of taking advantage of a wide aperture (in dark situations) while keeping the DOF a little larger. In a 24-megapixel photo (6000x4000 pixels), a 1/3 crop of the picture is 2000x1333, which is okay for a 1920x1080 screen. But from my experiments, the quality of the image is much worse than if I crop half of a 3936x2624 picture (getting 1968x1312 pixels, which is also fine for a 1080p screen).

Is there a limit to how much you can crop (other than the number of pixels, which is not a problem in either example above)?

In case it's relevant, I'm using the Sony a7iii.
Below is the original post with the original set:
You decide which is best from the following three photos, all resized to the same dimensions as the crop (last of the three photos):

50mm f/4 1/640 ISO 100 (resized)

100mm f/8 1/640 ISO 400 (resized), taken from twice as far back as the 50mm f/4 photo above.

Crop of 50mm f/4 1/640 ISO 100 taken from the same position as the 100mm f/8 photo above.

Once again, the first two photos above have been resized to the same size as the crop of the 50mm f/4 photo (last photo). Note that the last two have the same perspective (since they were taken from the same position), which is different from the first (which was taken at half the distance of the latter two).

Here are the original fullsize photos for the first two photos above:

50mm f/4 1/640 ISO 100 (original)

100mm f/8 1/640 ISO 400 (original) taken from twice as far back as the 50mm f/4 photo above.
The following is a repeat of the above set, but instead of f/4 ISO 100 vs f/8 ISO 400, it's f/4 ISO 1600 vs f/8 ISO 6400. The reason is that some might say the difference in noise is imperceptible at the lower ISO settings, so I repeated the photos later in the day with four stops less light so that noise would be more of an issue:

50mm f/4 1/640 ISO 1600 (resized)

100mm f/8 1/640 ISO 6400 (resized), taken from twice as far back as the 50mm f/4 photo above.

Crop of 50mm f/4 1/640 ISO 1600 taken from the same position as the 100mm f/8 photo above.

Once again, the first two photos above have been resized to the same size as the crop of the 50mm f/4 photo (last photo). Note that the last two have the same perspective (since they were taken from the same position), which is different from the first (which was taken at half the distance of the latter two).

Here are the original fullsize photos for the first two photos above:

50mm f/4 1/640 ISO 1600 (original)

100mm f/8 1/640 ISO 6400 (original)

For photos resized for web viewing, there is negligible difference in IQ. However, it seems more than a little clear to me that, rather than stepping back and cropping, the better way to go about it is to shoot from the same position with a narrower aperture. Even if the exposure time needs to be the same, it is no worse for wear with regard to noise and, in fact, delivers superior IQ. Furthermore, the perspective is unchanged, and that can matter a lot more than the difference in IQ.
 
...However, it seems more than a little clear to me that, rather than stepping back and cropping, the better way to go about it is to shoot from the same position with a narrower aperture. Even if the exposure time needs to be the same, it is no worse for wear with regard to noise and, in fact, delivers superior IQ. Furthermore, the perspective is unchanged, and that can matter a lot more than the difference in IQ.
Right you are. I goofed when I relented and agreed partly with the OP. Frank_GH, I was right the first time. Your trick only works on April 1.

Increasing the distance does decrease the background blur, but so does stopping down. And that's the method of choice. Except for specialized methods (e.g., tilting the lens or camera back or image stacking), all other methods carry a penalty in image quality.

Stopping down: reduces total light per unit subject area and decreases background blur.

Increasing the distance: reduces total light per unit subject area and decreases background blur. Explanation: although you get to keep the same f-stop and therefore the same light per area, the image of the subject is smaller, so the total amount of light in the cropped image is smaller. Reduced amount of light = increased noise when viewed at the same size.

I have edited my previous posts to make this correction.

Frank_GH, you can believe us or not. You don't have to understand the explanation, but the simple thing to do is just to forget any funny tricks and realize that if you hold the field of view constant and view the image at a constant size (and that's what you are trying to do), your depth of field is determined by the f-stop, and only the f-stop. Nothing else affects it substantially.
 
Thank you Great Bustard and ThrillaMozilla. Great example. No more cropping for me. I believe you are right, though I don't understand all the explanations in this thread, but that's another story. Maybe someday. I've got to read more on photography...
 
Thank you Great Bustard and ThrillaMozilla.
El gusto es mío (the pleasure is mine).
Great example. No more cropping for me. I believe you are right, though I don't understand all the explanations in this thread, but that's another story. Maybe someday. I've got to read more on photography...
In simple terms, it goes like this:

If you frame twice as wide by using half the focal length or stand back twice as far (although the latter, as demonstrated, will change the perspective) and crop out the middle 25% of the photo (half the width and height), the crop will have half the resolution and be made with 1/4 as much light as the photo as a whole, and thus twice as noisy.

If instead, you frame as desired and use twice the f-number without cropping, this will result in the same DOF as the crop. If you also use the same exposure time, the photo is still made with 1/4 as much light, and thus twice as noisy, just as with cropping but you keep the full resolution.

Thus, it's preferable to frame as desired and use the narrower aperture than to frame wider and crop, since cropping results in less resolution and a noisier photo, whereas stopping down only results in a noisier photo (and, even then, only if the exposure time has to be the same -- if a longer exposure time can be used, then it will be less noisy).
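The bookkeeping above can be sketched with a shot-noise-limited toy model (SNR taken as the square root of total light captured; relative units only, not any camera's actual numbers):

```python
import math

# Shot-noise-limited model: SNR ~ sqrt(total light captured in the image).
# Baseline: desired framing at f-number N, full sensor, relative light = 1.

baseline_light = 1.0

# Option A: frame twice as wide (half the focal length) and crop the middle
# 25% of the frame. Same exposure per area, but only 1/4 of the area survives.
crop_light = baseline_light * 0.25
crop_resolution = 0.5          # half the linear resolution survives

# Option B: frame as desired and double the f-number (same DOF as the crop),
# with the same exposure time: 1/4 the light over the full sensor.
stopped_light = baseline_light * 0.25
stopped_resolution = 1.0       # full resolution kept

def relative_snr(light):
    return math.sqrt(light)

print(relative_snr(crop_light), crop_resolution)        # 0.5 0.5
print(relative_snr(stopped_light), stopped_resolution)  # 0.5 1.0
# Same noise penalty (half the SNR), but stopping down keeps full resolution.
```

Both options halve the SNR relative to the baseline, but only the crop also halves the resolution, which is the whole argument in two lines of output.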

One advantage of greater resolution is that noise filtering is more effective. So, if you start with equally noisy photos, but one has greater resolution than the other, then you can use more aggressive noise filtering with the higher resolution photo, resulting in either a less noisy photo, or an equally noisy photo with a bit more resolution left over.

As an aside, this is also exactly the same reason that using a TC is better than using the bare lens and cropping (unless you're using a really bad TC).
 