Do crop sensor cameras lose sharpness in the same frame?

Tim Deen

I understand the basics of a crop sensor vs. a full-frame sensor. The crop sensor captures a narrower field of view at the same focal length. The crop factor is the ratio between the size of a full-frame sensor and a crop sensor. All clear.

What I fail to understand, though, is what happens when you start comparing images. An often-praised feature of crop sensor cameras is that they enlarge your subject. But what actually happens? Does the image get enlarged to the same frame as a full-frame image (if you compare FF to crop), resulting in a more zoomed-in image?

And if that is the case, do crop sensor images lose sharpness due to being enlarged within the same frame?
 
But the reality is different. Most modern crop sensor cameras have a higher pixel density than most FF cameras (a 26 MP APS-C sensor has an FF-equivalent pixel density of 70 MP!). Therefore, in many cases, a modern crop sensor can give you nearly identical or even sharper pictures with more detail than many FF cameras, especially after post-processing (denoising in particular).
Actually, the a7R IV and V are 61 MP with 26 MP crop areas. The a6700 is 26 MP, as is the Pentax K-3 III; all of these can be compared at parity of pixel density.

The a7R is 36 MP and its crop is just over 15 MP, easily compared to the 16 MP APS-C gear of that era.
In many equivalent shooting situations, identically framed, one would have a hard time telling which shot was made with a FF and which one with a (modern) crop sensor.

Comparing cameras with identical pixel density would be hard: I would have to compare the A7 III FF sensor with (for example) my 10-year-old Nikon D300S (I haven't touched it in many years), since 24 MP FF ≈ 10.6 MP APS-C in pixel density. But there's no point doing such a test.
There are other options. 45 MP Nikons produce 19 MP crop files, which is almost identical to the resolution of a 20 MP APS-C sensor.

However, as you pointed out, the key question was not this specific thing. The question is too broad to answer with a yes or no. If you frame properly and get good shots, with enough pixels on the subject and enough light on the subject for the intended purpose, it really isn't that important what size the sensor is. At the margins of the exposure envelope, larger sensors benefit from more exposed area on the subject (assuming identical framing via a different focal length). There's no magic bullet, however: composition, subject selection, and skill will still win out over sensor size.
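For anyone who wants to check the megapixel arithmetic in this sub-thread: it's just the crop factor squared. A quick sketch (the 1.53x crop factor is nominal for Nikon DX and Sony APS-C; actual sensors vary slightly, which is also why the "70 MP" figure quoted above comes out lower here):

```python
def mp_in_crop(mp_ff: float, crop: float) -> float:
    """Megapixels left after cropping a full-frame image to a crop-sensor frame.
    Area, and therefore pixel count, scales with the square of the crop factor."""
    return mp_ff / crop ** 2

def ff_equivalent_density(mp_crop: float, crop: float) -> float:
    """Full-frame megapixel count that would match a crop sensor's pixel density."""
    return mp_crop * crop ** 2

# A 45 MP Nikon yields ~19 MP DX crops.
print(round(mp_in_crop(45, 1.53), 1))             # ~19.2
# A 61 MP a7R IV/V cropped to APS-C lands near 26 MP.
print(round(mp_in_crop(61, 1.53), 1))             # ~26.1
# A 26 MP APS-C sensor matches the density of a ~61 MP full-frame sensor.
print(round(ff_equivalent_density(26, 1.53), 1))  # ~60.9
```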
 
No. Auto ISO. SS at 1/80 for all cameras in both tests.

Test 1:

X-T5, APS-C 40 MP, same framing, f/2.8, SS 1/80, Auto ISO (4000), Auto WB

A7 III, FF 24 MP, same framing, f/4, SS 1/80, Auto ISO (1600), Auto WB
You might have to explain how the full-frame image, with one stop less exposure and 1 1/3 stops lower ISO, would give equivalent brightness for comparison. Were the lighting conditions the same for both? It seems unlikely. Pushing it in raw would not be equivalent.
Look, I don't try to understand how Sony and Fuji adjust the auto settings (ISO and WB).

When I'm sitting in my armchair, my kids on the sofa opposite me, with both cameras on my lap, and I make all the shots within one minute, I expect the conditions to be as close to identical as possible.
Then something is amiss. If the light and framing did not change, then both cameras would use essentially the same exposure. A 2 1/3 stop difference would not be due to auto algorithms.

I don't know why, but for a useful comparison, equivalent shooting conditions would need to be used.
Maybe something was indeed off. But in that case, the FF at ISO 1600 should have been at an even greater advantage vs. the APS-C at ISO 4000.
Not really. Noise is not a function of ISO but rather the amount of light used to make the image. So the combination of the same shutter speed and one stop difference in f-number are actually roughly equivalent, assuming that the light did not change. But the fact that the cameras gave such dramatically different settings for ostensibly the same scene calls the whole process into question.
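The "roughly equivalent" claim can be sanity-checked with back-of-envelope numbers. A sketch (sensor dimensions are nominal published values; this is an illustration of the total-light argument, not a radiometric model):

```python
# Rough total-light comparison for the two test shots (same scene, same 1/80 s).
# Light per unit area on the sensor scales as 1/N^2 (N = f-number);
# total light is light-per-area times sensor area. Areas in mm^2.
APSC_AREA = 23.5 * 15.6   # ~367 mm^2 (X-T5, nominal)
FF_AREA   = 35.9 * 23.9   # ~858 mm^2 (A7 III, nominal)

def relative_total_light(f_number: float, area: float) -> float:
    return area / f_number ** 2

apsc = relative_total_light(2.8, APSC_AREA)  # X-T5 at f/2.8
ff   = relative_total_light(4.0, FF_AREA)    # A7 III at f/4
print(round(ff / apsc, 2))  # ~1.15: near parity, nowhere near a 2 1/3 stop gap
```

So with the same light and shutter speed, the two setups collect almost the same total light, which is why the very different Auto ISO picks look suspicious.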
Demosaicing and noise reduction: PureRaw 4 (note: XD2s does not work with X-Trans sensors)

More detail and better sharpness from the X-T5 at 100% magnification and beyond.
Assuming there is an explanation for the exposure discrepancy, and you achieved the same framing by focal length rather than by altering the distance, it is not at all surprising that the camera with far greater pixel density delivered more detail. I am not sure a test is needed to reach that conclusion. But that is not a function of sensor size per se, just the pixel density of the individual cameras, which is independent of the size of their sensors.

In terms of sharpness, that is a function of a number of factors, not least of which are the lenses involved, but an image with more detail may appear sharper.
You're right: I shot at 70mm on the Sony and 50mm (80mm FF-equivalent) on the APS-C. That would be the only "marginal" difference.

My goal was not to make a precise comparison between the two, but to find out if I'd lose a lot in skipping the Sony FF system for my trusted APSC one when I do corporate events.

And to point out that FF will not have a clear edge in every use case.
Especially since many photographers (wildlife, landscape, or street, to mention only those) usually choose the same shooting spot regardless of FF or APS-C, and use the focal length to adjust the framing.
The point is that a sensor with greater pixel density will always have an advantage over one with less density, and the potential to produce more detail. This isn't a function of sensor size. So you simply demonstrated the advantage of pixel density, which is axiomatic. You didn't demonstrate anything about sensor size itself.
The question asked here was:
Do crop sensors lose sharpness (compared to FF) in the same frame?
Your comparison did not address that. To address it, the pixel density would need to be held constant while the sensor size was varied. And ideally the same lens would be used for both.

Your interest is appreciated, but the test didn't quite get at the issue.
This is where I will disagree, as this thread was never about identical pixel density only.

At least, that's not how I understood it. To me, it's more: "Do I lose sharpness using a crop sensor camera vs. a FF one, for a given scene?"

What I demonstrated here is that there is no general rule and no yes-or-no answer to the question asked by the OP.
Well, THAT observation has already been aptly explained by many of the first commenters a year and a half ago!

Yes, there is a general rule, though I stated it incorrectly in my previous post. For the same focal length, the image of an object projected onto the same sensor will be the same size. In that case, only pixel density determines the potential for detail, regardless of sensor size.

If the framing is kept the same by using an equivalent focal length, then the potential for detail is determined by the pixel density times the focal length. In your test you used nearly equivalent, and thus different, focal lengths, which means that the image of the subject was projected larger on the larger sensor. That would be a good thing for detail on the larger sensor. But since the pixel density was VERY different, the greater density overwhelmed the focal length advantage, and produced more detail, as would be expected.

In addition, note that comparisons like this also need to account for the final viewing size, on screen or in print. To create a viewable image of the same size and framing, the image from the smaller sensor must be enlarged more. For the same viewing distance and conditions, greater enlargement always reduces detail and sharpness. That works against the advantage of the smaller, denser sensor, but probably does not reverse the result in this case.

But different combinations of cameras and focal lengths would give different results, independent of the sensor size. That is the key takeaway here.
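The density-vs-focal-length trade described above can be put in rough numbers for the two test cameras. A sketch, with assumptions labeled: pixel pitches are derived from nominal sensor widths and horizontal pixel counts, and the 3 m subject distance is a made-up stand-in for the armchair test:

```python
# Linear pixels landed on the subject scale as magnification / pixel_pitch.
def pixels_per_subject_mm(focal_mm: float, sensor_width_mm: float,
                          horizontal_px: int, subject_dist_m: float = 3.0) -> float:
    pitch_mm = sensor_width_mm / horizontal_px           # size of one pixel
    magnification = focal_mm / (subject_dist_m * 1000 - focal_mm)  # thin-lens approx
    return magnification / pitch_mm

# X-T5: 40 MP APS-C (7728 px across ~23.5 mm), shot at 50 mm.
xt5 = pixels_per_subject_mm(50, 23.5, 7728)
# A7 III: 24 MP FF (6000 px across ~35.9 mm), shot at 70 mm.
a7iii = pixels_per_subject_mm(70, 35.9, 6000)
print(round(xt5 / a7iii, 2))  # ~1.4: the denser sensor wins despite the shorter lens
```

This is exactly the point about density overwhelming the longer focal length: the X-T5 puts roughly 40% more linear pixels on the subject in this pairing.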
Especially since I used both Sony's and Fuji's red-badge lenses, and compared only the focus point and the area around it, because that is what serves my purpose when I shoot corporate events.

Indeed, if you take the same pixel density, FF will have a clear advantage. This is obvious.
But the reality is different. Most modern crop sensor cameras have a higher pixel density than most FF cameras (a 26 MP APS-C sensor has an FF-equivalent pixel density of 70 MP!). Therefore, in many cases, a modern crop sensor can give you nearly identical or even sharper pictures with more detail than many FF cameras, especially after post-processing (denoising in particular).
No. This is often, but not necessarily, true. There are a number of high-resolution full-frame cameras that exceed the pixel density (and thus the potential for detail) of many smaller-sensor cameras. By using sensor size as a proxy for pixel density, you run the risk of missing the true reason for an effect and making non-optimal decisions as a result.
In many equivalent shooting situations, identically framed, one would have a hard time telling which shot was made with a FF and which one with a (modern) crop sensor.
If ALL of the parameters used were equivalent, this would be quite true, and even when they are not, the differences may not be noticeable to most people. But that doesn't mean there aren't differences, and accounting for them is necessary to answer the original question.
Comparing cameras with identical pixel density would be hard: I would have to compare the A7 III FF sensor with (for example) my 10-year-old Nikon D300S (I haven't touched it in many years), since 24 MP FF ≈ 10.6 MP APS-C in pixel density. But there's no point doing such a test.
Well, yes, you'd have to go out and buy new cameras. We aren't asking you to do that, just pointing out some of the things that your test did, and didn't, reveal as it was actually done.
Dave
 
But the reality is different. Most modern crop sensor cameras have a higher pixel density than most FF cameras (a 26 MP APS-C sensor has an FF-equivalent pixel density of 70 MP!). Therefore, in many cases, a modern crop sensor can give you nearly identical or even sharper pictures with more detail than many FF cameras, especially after post-processing (denoising in particular).
Actually, the a7R IV and V are 61 MP with 26 MP crop areas. The a6700 is 26 MP, as is the Pentax K-3 III; all of these can be compared at parity of pixel density.

The a7R is 36 MP and its crop is just over 15 MP, easily compared to the 16 MP APS-C gear of that era.
Yes. Actually, some of the FF cameras seem to have the same sensors as their APS-C counterparts, except that the FF sensors have the same kind of pixels, just more of them.

For example, the D850 sensor essentially has a D500 inside it.

https://photonstophotos.net/Charts/RN_e.htm#Sony ILCE-6700_14,Sony ILCE-7RM5_14

https://photonstophotos.net/Charts/RN_e.htm#Nikon D500_14,Nikon D850_14

There's no magic to the difference between FF and APS-C. Many of the sensors are the same except that they are bigger. People should forget all the mystical stuff that they don't understand, and realize that what's fundamental is the size difference.
 
In terms of noise, the biggest factor is usually shot noise. This is the noise inherent in the quantum nature of light. This noise is dependent on the total light captured. That's the exposure (light per unit area) multiplied by the area of the sensor.

As crop bodies have smaller sensors, they need higher exposures to capture the same amount of light. A full-frame sensor has about twice the area of a 1.4x crop body. Hence, at the same exposure, a full frame captures twice the total light of a 1.4x crop body. This is why people think full frame has a one-stop noise advantage: at the same exposure, it captures twice the light.

The fallacy is that there is no reason to use the same exposure on a crop body that you would use on a full frame.

.

The advantage of a full frame body is that it generally will have a larger shooting envelope than a crop body. Full frame bodies generally can tolerate capturing more total light than a crop body. Full frame bodies generally offer lenses with larger aperture diameters, thus giving the option of shallower depth of field (along with the associated low light performance).

.

Of course the quality of an image is often limited by the weakest link in the system. With enough light, and a high quality lens, that limitation may be the pixel count. Thus a 50 megapixel camera is likely capable of sharper images than a camera with a lower pixel count.

When you crop an image, you reduce the pixel count, and the area used of the sensor.

A 50 megapixel full frame cropped to match a 1.6X body yields a 19 megapixel image. Thus an entry level 24 megapixel crop body can give a sharper result than a 50 megapixel camera cropped to match.
 
Have you ever cropped a photo? That's all it is when using a 'crop' sensor.
Most lenses are sharper in the center than towards the edge of the frame.

Consider a lens that is very sharp in the center, and not at all sharp near the edges of the frame. Cropping an image from such a lens can actually increase the average sharpness of the image.
Yes, I 100% agree. As a matter of fact, I quite liked my FX 28-200 zoom on a DX body, but when I upgraded to an FX Nikon I was astonished at how awful that lens really was.

Thanks for the clarification. I had a different interpretation of the OP's question.
 
I totally agree with you.

The Sony "R" series and other high-resolution FF cameras are real beasts, and no APS-C will come close in terms of IQ.

Someone using APS-C rather than such a camera in order to go lighter on 7-hour hikes in the Swiss mountains (like me) is definitely trading off IQ.

But compared to a 24 MP FF? There, the gap in final output is very small with the gear I'm using.

And yes, I agree also that in the end, "It's the Indian, not the arrow!"
 
My layman's understanding: imagine a square as the sensor; the lens gathers the light to project the scene onto that square. You can think of a crop sensor as a smaller square inside that original square. The parts of the image outside the smaller square but inside the larger one are missing; that is why FF sensors have a larger field of view, while crop sensors have a "zoomed in" view. This might not be exactly correct, since there are other factors, like the distance between the lens and the sensor plane, which controls how large the image is projected onto the sensor. This is based on my use of both DX and FX cameras and what I vaguely remember from a physics 101 lab long ago, where we did the experiment with a candle and a dark box.

Sharpness depends on pixel density: the more pixels that fit inside the square, the more resolution the resulting image will have. There is an inverse relation, though: the smaller the pixels, the less light each can collect. DX cameras try to fit more pixels without making them so small that they can't collect enough light. Theoretically, a 24 MP DX sensor can have the same resolution as a 24 MP FX sensor, but the second will produce a brighter image or have a wider dynamic range.

So sensors control how detailed the digital image is: since digital data is discrete 1s and 0s, not analogue, the more pixels, the better the continuous analogue scene can be converted to discrete digital. The lens controls how well the scene is focused and collected on the sensor plane so that the sensor can produce the digital representation; a good lens on a small sensor can produce a better image than a bad lens on a big sensor.
 
Thanks for taking the time to reply, btw.

Here, I agree with you on all points.

Had I someone carrying my gear on my hikes, I would certainly spring for a higher-pixel FF camera, as the rules of optics always prevail in the end.
(In case you were wondering, I'm definitely not on a "Team Crop" vs. "Team FF" trip.)

In my specific case, though, the IQ of my pictures won't take a hit if I decide to use the 40 MP crop sensor instead of the A7 III (despite the flaws in my tests).
 
Sharpness depends on pixel density: the more pixels that fit inside the square, the more resolution the resulting image will have. There is an inverse relation, though: the smaller the pixels, the less light each can collect. DX cameras try to fit more pixels without making them so small that they can't collect enough light. Theoretically, a 24 MP DX sensor can have the same resolution as a 24 MP FX sensor, but the second will produce a brighter image or have a wider dynamic range.
FF can do that, but not necessarily will. In fact, if someone is using an APS-C camera with an 18-50 f/2.8 zoom, in practice it will capture the same types of images as a full-frame camera with a 28-70 f/4 zoom. The only time the FF will have a DR advantage is when you saturate its larger pixels at base ISO by using a slower shutter speed than the APS-C setup.

To rephrase: there should be no theoretical advantage or disadvantage to a specific sensor size when equivalent photos are taken (same framing, same DoF, same shutter speed). By definition, these equivalent photos are made with the same amount of light, i.e. the same number of photons hits the sensor. It doesn't matter much whether those photons are counted by 24 million small pixels or 24 million larger pixels, as long as the smaller pixels can count them (are not overexposed).
so sensors control how detailed the digital image is, since digital data is discrete 1 and 0 and not analogue, the more pixels the more it can be converted from continuous analogue
Arguably, digital cameras convert from digital to digital by simply counting how many photons hit each pixel. That's why we see "noise" at low exposures: light is not continuous by nature but comes in little portions.
to discrete digital, lens control how well the scene is reflected focused and collected on the sensor plane so that the sensor can produce the digital image representation, good lens on a small sensor can produce a better image than a bad lens on a big sensor.
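The equivalence rule above (scale focal length and f-number by the crop factor, keep the shutter speed) can be sketched as a small helper. Assumptions: a nominal 1.5x crop factor, and the ISO 800 starting point is just an illustrative value:

```python
def ff_equivalent(focal_mm: float, f_number: float, iso: float,
                  crop: float = 1.5) -> tuple:
    """Full-frame settings with the same framing, depth of field and total
    light as the given crop-sensor settings (shutter speed stays the same)."""
    return (focal_mm * crop,       # longer focal length -> same field of view
            f_number * crop,       # same physical aperture diameter -> same DoF
            iso * crop ** 2)       # same output brightness from the same total light

# The APS-C 18-50 f/2.8 vs FF 28-70 f/4 pairing mentioned above:
focal, f_num, iso = ff_equivalent(18, 2.8, 800)
print(round(focal, 1), round(f_num, 1), round(iso))  # 27.0 4.2 1800
```

So an 18mm f/2.8 shot on APS-C maps to roughly 27mm f/4.2 on full frame, which is why those two zooms give access to much the same images.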
 
I might be wrong (or taking too simplistic an approach), but wouldn't the larger pixels have a larger surface area for light detection (before it gets translated to RGB), so they can collect more light?
Arguably digital cameras convert from digital to digital by simply counting how many photons hit each pixel. That's why we see "noise" at low exposures, because light is not continuous by nature but comes in little portions.
eh this is getting too quantum for me :) but then how does ISO affect the transformed signal, isn't it supposed to amplify the signal before digital conversion?
to discrete digital. Lenses control how well the scene is focused and collected on the sensor plane so that the sensor can produce the digital image representation; a good lens on a small sensor can produce a better image than a bad lens on a big sensor.
 
I might be wrong (or taking too simplistic an approach), but wouldn't the larger pixels have a larger surface area for light detection (before it gets translated to RGB), so they can collect more light?
Individual pixels will collect more light, but combined they all collect the same amount of light for any sensors of the same area, given that the scene and exposure are also the same.
 
I might be wrong (or taking too simplistic an approach), but wouldn't the larger pixels have a larger surface area for light detection (before it gets translated to RGB), so they can collect more light?
...

Let's back up a minute. Do we care whether the pixels are noisier or the image looks noisier?

If you care about the pixels, then yes, at the same exposure larger pixels will be less noisy.

Personally, I care how the resulting image looks. If that's your concern, it's only total light captured.


Consider a one pixel section from a 5 megapixel camera, compared to that same area from a 20 megapixel camera (same sensor size, same subject, same camera settings).

The pixels from the 20 megapixel camera will be noisier. However, they are likely too small for the eye to individually discern (or for the printer to render). Therefore, the eye will see the average of those four pixels. Averaging the pixels reduces the noise, and you end up with the same visible noise as from the 5 megapixel camera.

The bottom line is that in terms of how noisy the result looks, light per unit area is usually the largest factor. Pixel density generally doesn't play a noticeable role.
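That averaging argument can be sketched with a small shot-noise simulation (a Gaussian approximation to Poisson photon noise; all numbers are illustrative):

```python
import math
import random

random.seed(42)

# For large photon counts, Poisson shot noise is well approximated by a
# Gaussian whose variance equals the mean count.
def noisy_count(mean_photons: float) -> float:
    return random.gauss(mean_photons, math.sqrt(mean_photons))

def snr(samples):
    mean = sum(samples) / len(samples)
    var = sum((s - mean) ** 2 for s in samples) / len(samples)
    return mean / math.sqrt(var)

trials = 50_000
# One large pixel catching ~400 photons per exposure...
big = [noisy_count(400) for _ in range(trials)]
# ...versus four small pixels catching ~100 each, summed when the eye
# (or a print) averages them into one visible spot.
binned = [sum(noisy_count(100) for _ in range(4)) for _ in range(trials)]

# Both come out near SNR = 20, i.e. sqrt(400): the same visible noise.
print(round(snr(big)), round(snr(binned)))
```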
 
My layman understanding: imagine a square as a sensor, the lens will gather the light to focus [corrected from "reflect"] the scene on that square, you can think of crop sensor is a smaller square inside that original square, the parts of the image that are outside the smaller square but inside of the larger one will be missing, that is why FF sensors have a larger field of view, but crop have "zoomed in" view, this might not be exactly correct since there is also other facts like focal length [corrected from "distance between the lens and the sensor plane"] which control how large the image is focused [corrected from "reflected"] on the sensor, this is based on my usage of both dx and fx cameras and stuff I vaguely remember from a physics 101 lab I took long ago where we did the experiment of a candle and a dark box
I made some corrections.
Sharpness depends on pixel density: the more pixels that fit inside the square, the more resolution the resulting image will have. There is an inverse relation, though: the smaller the pixels, the less light each one can collect.
Pixel density is only one factor. Resolution may be limited by diffraction or lens quality, so smaller pixels will not always produce better resolution.
 
I might be wrong (or taking too simplistic an approach), but wouldn't the larger pixels have a larger surface area for light detection (before it gets translated to RGB), so they can collect more light?
Yeah, but that doesn't help because there aren't as many of them.
So sensors control how detailed the digital image is: since digital data is discrete 1s and 0s rather than analogue, the more pixels there are, the more finely the continuous analogue scene can be converted
Arguably digital cameras convert from digital to digital by simply counting how many photons hit each pixel. That's why we see "noise" at low exposures, because light is not continuous by nature but comes in little portions.
That's part of why we see noise. There's also electronic noise.
eh this is getting too quantum for me :) but then how does ISO affect the transformed signal, isn't it supposed to amplify the signal before digital conversion?
to discrete digital. Lenses control how well the scene is focused and collected on the sensor plane so that the sensor can produce the digital image representation; a good lens on a small sensor can produce a better image than a bad lens on a big sensor.
 
eh this is getting too quantum for me :) but then how does ISO affect the transformed signal, isn't it supposed to amplify the signal before digital conversion?
Very roughly speaking, pixels count photons, and produce a signal based on the count.

Conceptually, the ISO setting provides a context for interpreting those counts. At high ISO settings, a low photon count might result in a bright pixel in the image. At low ISO settings, you need a higher photon count to get a bright pixel.

How this gets implemented can vary by camera. A few cameras actually produce the same raw data, independent of the ISO setting. The ISO setting only affects how the raw data is interpreted. Some cameras do some analog processing that depends on the ISO setting. However, this is an implementation detail.

I think one is usually best off not worrying about how a particular camera implements ISO.

It's true that some cameras reconfigure themselves to match the expected ISO. Some cameras have two different modes, one that can tolerate high exposures (used at low ISO settings) and one that adds a little noise to the image (used at high ISO settings). The best way to deal with this is to use an ISO setting that's reasonable for your chosen exposure. Auto-ISO is often a reasonable choice.

In low light situations choose the largest aperture that yields sufficient depth of field, and the slowest shutter that doesn't yield unwanted motion blur. That's the highest exposure you can get under those conditions. Auto-ISO will then give you a reasonable result, even if the light changes a bit.
 
I might be wrong (or taking too simplistic an approach), but wouldn't the larger pixels have a larger surface area for light detection (before it gets translated to RGB), so they can collect more light?
While it is not very important how much light is collected per pixel (it is the total light per image that defines IQ), they certainly can. This is quantified by the Full Well Capacity: https://www.photonstophotos.net/Charts/Sensor_Characteristics.htm

Typically about 40,000 photoelectrons per pixel for a 24 Mpx APS-C camera and about 95,000-100,000 photoelectrons per pixel for a 24 Mpx full frame camera (e.g. Canon EOS R8, Leica SL2-S, Lumix DC-S1).

I am saying that if you want to take advantage of that extra capacity of larger pixels on a 24 Mpx ff camera and see some benefits to DR compared to a 24 Mpx APS-C camera, you will have to ETTR the full frame sensor at its base ISO. If maximum signal is say 30,000 photoelectrons per pixel, then either APS-C or FF 24 Mpx camera will be able to record it, with larger sensor offering no benefit in terms of DR (in fact, in practice, smaller pixels will have an advantage as they have lower read noise).
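Those full-well numbers translate into engineering dynamic range as log2(full well / read noise). A rough sketch with illustrative read-noise values (not measurements of any specific camera):

```python
import math

# Engineering dynamic range in stops: log2(full well capacity / read noise),
# both expressed in photoelectrons. The numbers below are illustrative only.
def dynamic_range_stops(full_well_e: float, read_noise_e: float) -> float:
    return math.log2(full_well_e / read_noise_e)

print(round(dynamic_range_stops(40_000, 2.0), 1))   # APS-C-like pixel: ~14.3 stops
print(round(dynamic_range_stops(100_000, 3.0), 1))  # FF-like pixel: ~15.0 stops
```

If the scene never pushes past the smaller pixel's capacity, the extra headroom of the larger pixel buys nothing, which is the point being made above.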
So sensors control how detailed the digital image is: since digital data is discrete 1s and 0s rather than analogue, the more pixels there are, the more finely the continuous analogue scene can be converted
Arguably digital cameras convert from digital to digital by simply counting how many photons hit each pixel. That's why we see "noise" at low exposures, because light is not continuous by nature but comes in little portions.
eh this is getting too quantum for me :) but then how does ISO affect the transformed signal, isn't it supposed to amplify the signal before digital conversion?
It typically is, but ISO only affects the way the signal is recorded. If pixel A is hit by 8 photons and pixel B by 12, then that is the signal that is going to be stored in the raw file, converted into "Digital Numbers" (DNs) at a conversion rate set by the ISO. So the DN content of pixels A and B for a 12-bit 24 MPx APS-C sensor might look something like

ISO 1600: A=8, B=13

ISO 3200: A=17, B=23

ISO 6400: A=30, B=50

Not exactly an 8-to-12 ratio, due to read noise, but since the input-referred read noise in modern cameras is typically under 1 electron at high ISOs, I like thinking of our cameras as quantum devices that are not far off counting individual photons.
to discrete digital. Lenses control how well the scene is focused and collected on the sensor plane so that the sensor can produce the digital image representation; a good lens on a small sensor can produce a better image than a bad lens on a big sensor.
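The photon-count-to-DN idea above can be sketched like this (the gain figure and read noise are made up for illustration; real cameras differ):

```python
import random

random.seed(1)

def to_dn(electrons: float, iso: int, base_iso: int = 100,
          read_noise_e: float = 1.5, max_dn: int = 4095) -> int:
    # Hypothetical conversion: gain doubles with each ISO doubling,
    # with 0.25 DN per electron at base ISO (a made-up figure). Read
    # noise is added before the 12-bit quantisation.
    gain = iso / base_iso * 0.25
    signal = electrons + random.gauss(0, read_noise_e)
    return max(0, min(max_dn, round(signal * gain)))

# Pixels A (8 photons) and B (12 photons) recorded at three ISOs:
for iso in (1600, 3200, 6400):
    print(iso, to_dn(8, iso), to_dn(12, iso))
```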
 
In many equivalent shooting situations, identically framed, one would have a hard time telling which shot was made with a FF and which one with a (modern) crop sensor.
This, of course, is to be expected. If making equivalent photos, they're the same. The fact that we can make equivalent photos with different format cameras does not, however, contradict the fact that a larger format will use more total light energy at the same exposure to make an image. That additional light produces predictable, measurable advantages in dynamic range, noise, color fidelity and other areas of image quality.

Whether or not those potential advantages are of significance to the photography one does is up to the photographer to determine.
 
In many equivalent shooting situations, identically framed, one would have a hard time telling which shot was made with a FF and which one with a (modern) crop sensor.
This, of course, is to be expected. If making equivalent photos, they're the same. The fact that we can make equivalent photos with different format cameras does not, however, contradict the fact that a larger format will use more total light energy at the same exposure to make an image. That additional light produces predictable, measurable advantages in dynamic range, noise, color fidelity and other areas of image quality.
Yes, at the same exposure, the larger format captures more total light, and typically less depth of field.

However, at the same aperture diameter, shutter speed and angle of view, they both capture the same total light.

So if you choose to shoot with shallower depth of field, you will capture more light. However, unless you are in a situation where you can shoot the maximum exposure the camera can tolerate, why would you shoot a crop body at the same exposure as a full frame?
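The "same aperture diameter" point is easy to check numerically; here is a sketch using a hypothetical 50mm f/2.8 on a 1.5x crop body against its 75mm full frame equivalent:

```python
# Entrance pupil diameter = focal length / f-number. At the same angle of
# view and shutter speed, equal pupil diameters gather the same total light.
def entrance_pupil_mm(focal_length_mm: float, f_number: float) -> float:
    return focal_length_mm / f_number

apsc = entrance_pupil_mm(50, 2.8)  # APS-C, 50mm f/2.8
ff = entrance_pupil_mm(75, 4.2)    # FF, 75mm f/4.2 gives the same framing on 1.5x crop

print(round(apsc, 2), round(ff, 2))  # both ~17.86 mm
```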

Whether or not those potential advantages are of significance to the photography one does is up to the photographer to determine.
Larger format sensors typically can tolerate a higher maximum total light captured. If you are in a situation where you can reach that maximum, you will get measurably better quality (lower noise) with the larger sensor.

Whether or not that makes a visible difference is the question here.

If you are not seeing noise in your ISO 100 images on a crop body, then you will not see the improvement from capturing more total light. Once the noise is low enough not to be visible to the human eye, there is little, if any, advantage to further reductions.

Now, if you are seeing noise in your ISO 100 crop body images, then you may see the improvement from moving to a larger sensor and capturing more total light.
 
In many equivalent shooting situations, identically framed, one would have a hard time telling which shot was made with a FF and which one with a (modern) crop sensor.
This, of course, is to be expected. If making equivalent photos, they're the same. The fact that we can make equivalent photos with different format cameras does not, however, contradict the fact that a larger format will use more total light energy at the same exposure to make an image. That additional light produces predictable, measurable advantages in dynamic range, noise, color fidelity and other areas of image quality.
Yes, at the same exposure, the larger format captures more total light, and typically less depth of field.

However, at the same aperture diameter, shutter speed and angle of view, they both capture the same total light.
As already mentioned...
So if you choose to shoot with shallower depth of field, you will capture more light.
Or with a slower shutter speed...
However, unless you are in a situation where you can shoot the maximum exposure the camera can tolerate, why would you shoot a crop body at the same exposure as a full frame?
Wildlife, bird, sports, and action photography come immediately to mind. It's common to shoot with a lens wide open at the same shutter speed with different format systems.

These genres also illustrate the limitation of relying on exposure, alone, when discussing relative performance. You really should be focusing on entrance pupil diameter as that's the spec - not exposure - determining light-gathering.
Whether or not those potential advantages are of significance to the photography one does is up to the photographer to determine.
Larger format sensors typically can tolerate a higher maximum total light captured. If you are in a situation where you can reach that maximum, you will get measurably better quality (lower noise) with the larger sensor.
Again, as already mentioned...
Whether or not that makes a visible difference is the question here.
Yup, already mentioned...
If you are not seeing noise in your ISO 100 images on a crop body, then you will not see the improvement from capturing more total light. Once the noise is low enough not to be visible to the human eye, there is little, if any, advantage to further reductions.

Now, if you are seeing noise in your ISO 100 crop body images, then you may see the improvement from moving to a larger sensor and capturing more total light.
Let's add image processing to the mix of factors. AI noise reduction extends the low-light conditions in which a photographer will be confident in making a quality image by at least a full stop.

Here's a simple guideline that can be applied in these situations: A photographer choosing a camera system will be well served by the format allowing them to consistently achieve, within their budget and usability needs, full image quality potential.

To illustrate, if a photographer routinely uses an internal crop mode or crops images in post to match the angle of view a smaller format system could deliver using the same lens and settings, there's a good chance they've overspent for a format that's larger than they need. The image quality they're getting may not be discernibly worse but the more expensive, possibly larger system also isn't delivering better image quality. Unless the usability of a smaller format system would be a significant downgrade, that probably would've been a better choice.

--
Bill Ferris Photography
Flagstaff, AZ
 
