Why is DR smaller when FF cameras are used in cropped mode?

Originals: look into the dark shadows, there's no noise; the smaller image pixelates before any noise differences show. At 100% screen view the A7R2 would equal a print 20 feet high, and we are looking at 700%.
Not quite. On a 24" HD monitor, an A7R2 image at 100% equals a 7.4 x 4.9 foot print, or 3.7 x 2.5 foot on a 24" 4K monitor.
 
Photons to Photos uses a measure of DR that purists don't find satisfying. The upper limit is just enough not to blow out highlights; the lower is the point at which shadow noise becomes intrusive. (I've no idea how this was set, but believe it is a decent real-life measure for images in normal viewing.) Since a smaller sensor area means more enlargement, that lower point rises a little as sensors become smaller: enlargement means, in effect, that you get to look more closely for the shadow noise.

I asked this a while ago. Mr Claff kindly answered. Doubtless I've oversimplified.

Take heart, though, that DR is now pretty similar and fairly adequate in most cameras, so you can simply enjoy taking photos.
Or the closer you get to a noisy photo, the noisier it will look? :)
 
Let me thank all participants for the vivid discussion; I really enjoyed it and understand things better now...

My mistake was not understanding the term PDR exactly. I confused PDR (intuitively) with the DR of a single pixel. This DR of a single pixel (is there a technical term for it?) is what I was trying to read from Bill's graphs. I see now that the sensor size has to be taken into account as an additional factor when one wants to compare the DR of single pixels between different sensor sizes...

When one compares PDR between an MFT camera (Oly EM1II) and a FF camera (Sony A7R3), the outcome is that at base ISO, PDR is almost 2 EV better with the larger sensor: https://www.photonstophotos.net/Charts/PDR.htm#Olympus OM-D E-M1 Mark II,Sony ILCE-7RM3

Since the Sony sensor has 384% of the area of the Oly's, a 2 EV difference is approximately what would be expected from the sheer size difference alone, if the single pixels have the same DR...
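As a quick sanity check of that arithmetic, here is a minimal Python sketch (the sensor areas are the usual nominal figures, assumed rather than measured):

import math

ff_area_mm2 = 864.0    # ~36 x 24 mm full frame (nominal)
mft_area_mm2 = 225.0   # ~17.3 x 13 mm Four Thirds (nominal)
ratio = ff_area_mm2 / mft_area_mm2
print(f"area ratio: {ratio:.2f}x -> {math.log2(ratio):.2f} EV")
# ~3.8x, i.e. just under 2 EV, consistent with the PDR gap in the chart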

But how can the single pixel DR of the Oly (with much smaller pixels and lower FWC) be identical (if not slightly better, as implied by the PDR graph) to the Sony with larger pixels (and BSI)?

Wolfgang
 
You can also think of this in terms of how many photons the sensor can absorb before it saturates. The OP posted a link to full well capacity; the full sensor has four times as much as the crop. 4x as much area means it can be hit with 4x as many photons before saturation.
Surely the saturation is in each pixel? If none of the pixels is saturated, increasing their number won't magically make some of them saturated.
Think of it this way: I need a 9MP photo:
  • Crop: To get proper exposure for the noise floor I want, I can do ISO1600, but my highlights are blown.
  • Full sensor: I can drop down to ISO400 and add 4 adjacent pixels to make a "superpixel" where I have 4x as much headroom.
So you can think of it in terms of noise floor (keeping ISO constant and having less noise by averaging), or in terms of saturation (e.g. adjusting ISO to keep noise constant, and having more headroom -- although for simplicity's sake I didn't try to keep noise constant in the example above).
Or consider the dots of ink in a print or a printed book.
That's a good example. Without dithering, I have black-and-white (1 bit dynamic range). Dithering expands the dynamic range at the cost of resolution. With classic algorithms (dots), I had a trade-off, where a large dot size gave low resolution / high DR, and vice-versa. With modern ones, the further back I stand, the more area I average over, and the more dynamic range I have, and the less resolution.

It's less technical than sigma-deltas but makes the same point. Thank you!

Nice images of dither patterns. You can step back to see a small, high-DR image, or step in to see a large, low-DR image.
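To make the trade-off concrete, here is a small, purely illustrative Python sketch (it dithers an assumed grayscale ramp, nothing from this thread): quantize to 1 bit, then average over larger and larger blocks, and watch the number of tone levels go up as resolution goes down.

import numpy as np

rng = np.random.default_rng(0)
gradient = np.tile(np.linspace(0, 1, 256), (256, 1))               # smooth 0..1 ramp
dithered = (gradient > rng.random(gradient.shape)).astype(float)   # 1-bit "print"

for block in (1, 4, 16):
    h, w = dithered.shape
    averaged = dithered.reshape(h // block, block, w // block, block).mean(axis=(1, 3))
    print(f"{block:2d}x{block:<2d} averaging -> {averaged.shape[1]:3d} px wide, "
          f"{len(np.unique(averaged)):3d} distinct tone levels")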
 
Why is DR smaller when FF cameras are used in cropped mode?
The crop is made with less light than the whole photo.
So why isn't it darker?

The total amount of light required for a given exposure is directly proportional to the area of the sensor. So the variable "total light" tells us nothing more than the variable "area".

When I look at the photonstophotos site and compare the DR of different cameras, I see that the DR is smaller when a camera is used in cropped mode, e.g.: https://www.photonstophotos.net/Cha...5(APS-C),Sony ILCE-7RM4,Sony ILCE-7RM4(APS-C)

I thought the sensor and the pixels remain the same and just a smaller area is used. How can DR be different?
The DR/pixel is the same, but the DR/photo lessens because the crop is made with less light.
Let's work a simple example. The DR is the number of stops from the noise floor to the saturation limit. Let's say a pixel for a particular camera has an electronic noise (the noise from the sensor and supporting hardware) of 15 e- and a saturation limit of 80K e-. Then, using the electronic noise for the noise floor, the DR/pixel is log2 (80000 / 15) = 12.4 stops.

Now let's compute the DR for four pixels. The combined saturation limit is 80K + 80K + 80K + 80K = 320K e-. The combined electronic noise is sqrt (15² + 15² + 15² + 15²) = 30 e- (noise, being a standard deviation, adds in quadrature, unlike saturation limit which is not a standard deviation). So, the DR for the four combined pixels is log2 (320000 / 30) = 13.4 stops -- one stop more than the single pixel.
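For anyone who wants to replay that arithmetic, here is a minimal Python sketch using the same example numbers (15 e- read noise, 80K e- saturation; illustrative values only):

import math

read_noise_e = 15.0      # electronic noise per pixel (e-), example value from above
full_well_e = 80_000.0   # saturation limit per pixel (e-), example value from above

def dr_stops(saturation, noise):
    return math.log2(saturation / noise)

print(f"1 pixel : {dr_stops(full_well_e, read_noise_e):.1f} stops")

n = 4                                          # combine four pixels
combined_sat = n * full_well_e                 # signal capacity adds linearly
combined_noise = math.sqrt(n) * read_noise_e   # noise adds in quadrature
print(f"{n} pixels: {dr_stops(combined_sat, combined_noise):.1f} stops")
# -> 12.4 and 13.4 stops, one stop apart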
That's more pixels, not more light. Increasing the number of measurements reduces the error. Of course you need a certain amount of light to make each measurement, but in the case of cropping, where the measuring instruments (pixels) are all the same, it's the number of data points that matters.
So, what we see is that the greater amount of light matters more than the greater amount of noise, so the net effect is greater DR.
I don't think that is what we see at all.

Don Cox
 
We are discussing "dynamic range", but here we go again changing the subject.
No. You are changing the subject. Read the initial post and you'll see the topic is Photographic Dynamic Range (PDR). So a broader topic might include other normalized measures like the DxOMark Landscape score; but it does not include pixel measures like Engineering Dynamic Range (EDR, pixel-level dynamic range).
quote:

So FWC is exactly the same for a given sensor, no matter how much you crop. And the intrinsic noise of a given pixel is also not dependent on cropping, no matter whether you crop in post or by the camera setting...
There is no such thing as FWC for a sensor; FWC is a property of a pixel.
You persist in talking about pixels when the topic is normalized dynamic range, e.g. PDR.

In any case, I'd be happy to discuss it further with anyone other than you who remains confused.


You know, an explanation of what your photos represent would be more than welcome in a technical discussion. So, since you leave us to guess, my guess is the following:
  • The two photos are taken of the same scene from the same distance with the same focal length.
  • The bottom photo is the whole frame from the EM1.2 (an mFT camera).
  • The top photo is a crop from the A7R2 (a FF camera) to match the framing of the mFT photo.
OK, with those assumptions in place, and the additional assumption that the lighting was the same for both photos, what we have is that the top photo (f/6.3 1/160) was made with about one stop more light than the bottom photo (f/8 1/200): 2/3 stop from the aperture and 1/3 stop from the shutter speed. Therefore, assuming similar sensor tech, and thus similar electronic noise, as well as similar processing (that is, one didn't have more noise filtering applied than the other), the DR for the crop would be greater than for the mFT photo.
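A quick check of that light ratio, as a sketch under the assumption above that the crop area matches the mFT sensor area (so total light scales with exposure):

import math

def relative_exposure(f_number, shutter_s):
    return shutter_s / f_number ** 2        # exposure is proportional to t / N^2

ff_crop = relative_exposure(6.3, 1 / 160)   # top photo: f/6.3, 1/160 s
mft = relative_exposure(8.0, 1 / 200)       # bottom photo: f/8, 1/200 s
ratio = ff_crop / mft
print(f"light ratio: {ratio:.2f}x = {math.log2(ratio):.2f} stops")
# -> about 2x, i.e. roughly one stop more light for the FF crop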

Is there something else you wanted to add?
I agree. This thread is about cropping from one camera, not about the multiple effects of using different cameras with very different sensors and lenses. So these doll shots are irrelevant.

Don Cox
 
Why is DR smaller when FF cameras are used in cropped mode?
The crop is made with less light than the whole photo.
When I look at the photonstophotos site and compare the DR of different cameras, I see that the DR is smaller when a camera is used in cropped mode, e.g.: https://www.photonstophotos.net/Cha...5(APS-C),Sony ILCE-7RM4,Sony ILCE-7RM4(APS-C)

I thought the sensor and the pixels remain the same and just a smaller area is used. How can DR be different?
The DR/pixel is the same, but the DR/photo lessens because the crop is made with less light.

Let's work a simple example. The DR is the number of stops from the noise floor to the saturation limit. Let's say a pixel for a particular camera has an electronic noise (the noise from the sensor and supporting hardware) of 15 e- and a saturation limit of 80K e-. Then, using the electronic noise for the noise floor, the DR/pixel is log2 (80000 / 15) = 12.4 stops.

Now let's compute the DR for four pixels. The combined saturation limit is 80K + 80K + 80K + 80K = 320K e-. The combined electronic noise is sqrt (15² + 15² + 15² + 15²) = 30 e- (noise, being a standard deviation, adds in quadrature, unlike saturation limit which is not a standard deviation). So, the DR for the four combined pixels is log2 (320000 / 30) = 13.4 stops -- one stop more than the single pixel.

So, what we see is that the greater amount of light matters more than the greater amount of noise, so the net effect is greater DR.
You're going to have a hard time proving that.
Wasn't the proof right from the start with the OP's link?
Considering the histogram is identical at the highlights and the shadows, there's no room to push either way, on a cropped image or a full-size subject.
Let's go about this in an equivalent way. Take two photos of a scene that exceeds the DR of the sensor. One photo at, say, 100mm f/5.6 1/200 and the other at 50mm f/5.6 1/200. Crop the 50mm photo to the same framing as the 100mm photo. Display both photos at the same size. Compare the noisiness of the shadows for each and compare the blown highlights for each.

What you'll find is that the highlights are the same for both, but the 100mm photo, which is uncropped, and made with 4x as much light as the cropped photo, will have less noisy shadows (it will be less noisy everywhere, but most noticeable in the shadows).
Now view them both at 100% -- or any other magnification that shows the noise clearly.

In the original post, one image was shown at a higher magnification than the other. This makes the high spatial frequency noise more visible. So does viewing a print from a closer distance.
And there you are!
 
This is the correct answer. I'll mark it, since some of the other ones have subtle errors.

I'll provide a little bit more detail. Let's say you have two images: one is 36 MP and the other is a 9 MP crop of the same image (the middle quarter).

If you display both at the same resolution, let's say 1 megapixel, then for the original image you're averaging 36 pixels together for each output pixel, while for the cropped image you're only averaging 9 pixels together.
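As a rough sketch of why that matters for noise, assuming the same per-pixel noise in both images and plain averaging on downscale:

import math

sigma_pixel = 1.0                              # per-pixel noise, arbitrary units
output_mp = 1
for name, source_mp in [("36 MP full image", 36), ("9 MP crop", 9)]:
    pixels_averaged = source_mp / output_mp    # source pixels per output pixel
    sigma_out = sigma_pixel / math.sqrt(pixels_averaged)
    print(f"{name}: averages {pixels_averaged:.0f} px -> output noise {sigma_out:.2f}")
# 0.17 vs 0.33: the crop ends up with 2x the noise (about one stop) at equal display size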
So the greater the enlargement from the original image size, the more visible the noise is.
Yes. The noise is there and unchanged from the start. But the appearance of the noise depends on many factors, not the least of which are the lightness of the photo, the size it's displayed at, the distance it's viewed from, and the medium it's displayed on.
You can also think of this in terms of how many photons the sensor can absorb before it saturates. The OP posted a link to full well capacity; the full sensor has four times as much as the crop. 4x as much area means it can be hit with 4x as many photons before saturation.
Surely the saturation is in each pixel? If none of the pixels is saturated, increasing their number won't magically make some of them saturated.
Consider two photos of a scene with the same perspective, framing, and exposure using an mFT camera and FF camera. For example, f/5.6 1/400 for both cameras. Any given proportion of the FF photo will be made with 4x as much light as the mFT photo, regardless of the pixel count of either sensor (pixel count will figure into resolution, however).
No. I will not consider that, because this thread is not about comparing cameras. It's about using different lenses (or zoom settings) on one camera.

Comparing cameras involves far too many variables, many of them proprietary information, to be practical.
Don Cox
 


Better colour recovery comes from a more accurate pixel voltage reading (larger pixel).
Pixels don't record color, they record light. The more light the pixel records, the more accurate the reading is *for that pixel*.

However, if, for example, you have four smaller pixels covering the same area as one larger pixel, then those four smaller pixels, while each recording less light than the one larger pixel, will record the same amount of light in total. You will also have better spatial information as well as better color information, because the smaller pixels sit under different colors of the Bayer filter array.


Nyquist.

The smaller number of pixels in a crop reduces the maximum spatial frequency in line pairs per image height.

More pixels -- more detail.

More pixels -- higher frequency noise.

If you double the number of pixels per image height, you add an octave of detail and an octave of noise. But you have to enlarge the image more if you want to see them.
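In numbers, a trivial sketch (pixel counts picked arbitrarily):

for pixels_per_image_height in (2000, 4000, 8000):
    nyquist_lp_ih = pixels_per_image_height // 2   # line pairs per image height
    print(f"{pixels_per_image_height} px high -> Nyquist limit {nyquist_lp_ih} lp/ih")
# each doubling of pixels per image height adds one octave of spatial frequency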

Now what about doubling the focal length of the lens? You add an octave of detail on the subject (Christmas ornaments), but do you add an octave of noise?

Don Cox
 
Let me thank all participants for the vivid discussion; I really enjoyed it and understand things better now...

My mistake was not understanding the term PDR exactly. I confused PDR (intuitively) with the DR of a single pixel. This DR of a single pixel (is there a technical term for it?)
I've never seen a term for the DR of a single pixel.
DR is a term used for the entire sensor, but for practical reasons it's always measured over some small subarray of pixels, as the average DR of multiple pixels. There is a reasonable expectation that pixels behave similarly across the entire sensor.
DR is given as FWC (Full Well Capacity) / RN (Read Noise), but FWC here is for the average pixel in the measured subarray, not some specific pixel, and the same is true for RN.

But how can the single pixel DR of the Oly (with much smaller pixels and lower FWC) be identical (if not slightly better, as implied by the PDR graph) to the Sony with larger pixels (and BSI)?
The difference in pixel size isn't so large: about 11 µm² (Oly) vs 14 µm² (Sony).
Also, at ISO 400 the Oly saturates at 16K e- and the Sony at 8.6K e-. Because PDR works with shot noise, which depends on saturation, I'm not so surprised that the difference isn't so big.
 
Let me thank all participants for the vivid discussion; I really enjoyed it and understand things better now...

My mistake was not understanding the term PDR exactly. I confused PDR (intuitively) with the DR of a single pixel. This DR of a single pixel (is there a technical term for it?)
I've never seen a term for the DR of a single pixel.
DR is a term used for the entire sensor, but for practical reasons it's always measured over some small subarray of pixels, as the average DR of multiple pixels. There is a reasonable expectation that pixels behave similarly across the entire sensor.
DR is given as FWC (Full Well Capacity) / RN (Read Noise), but FWC here is for the average pixel in the measured subarray, not some specific pixel, and the same is true for RN.
But how can the single pixel DR of the Oly (with much smaller pixels and lower FWC) be identical (if not slightly better, as implied by the PDR graph) to the Sony with larger pixels (and BSI)?
The difference in pixel size isn't so large: about 11 µm² (Oly) vs 14 µm² (Sony).
Also, at ISO 400 the Oly saturates at 16K e- and the Sony at 8.6K e-. Because PDR works with shot noise, which depends on saturation, I'm not so surprised that the difference isn't so big.
Hi DiMachi,

I wonder how you arrived at the pixel areas you cite. Calculated from the pixel pitch, the pixels of the A7RIII have approx. 2x the area (and FWC) of the Oly's, and the A7III approx. 3.5x:

[attached chart: pixel pitch and pixel area comparison]

Not all of this area will be used for light gathering, since there is also some "dead" border around a pixel; the circumference-to-area ratio will, however, decrease the bigger the pixel becomes. BSI will also increase the area used for light gathering...



Edited after posting: Sorry DiMachi, I just saw that the original link compares the EM1II with the A7RIV - I originally had the RIV, RIII and III in the graph, decided to leave only the RIII, but erroneously left the RIV...

My mistake - I apologize!

Still, there is the (apparent) discrepancy that a larger pixel area does not help to improve DR...



Wolfgang
 
I think DR is calculated by dividing the FWC by the noise floor.
We don't have access to FWC in the vast majority of cameras because close to FWC the characteristic curve is non-linear and fixed-pattern noise becomes problematically high (easily 4..5% of FWC, which is very visible), so the manufacturer clips the signal, making the ADC saturation level lower than the sensor saturation level by about 1/2 to 1 stop.

What you can access is limited by design, and it is the max level in raw data (clipping in raw).

Using this number as the ceiling is, strictly speaking, also wrong without additional verification. This value may be set too high, to accommodate some of the photon shot noise. That's why Canon explicitly specify, in metadata, the specular white level and the linearity upper margin, both being lower than the value for clipping in raw.
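A small illustration of how the choice of ceiling changes the computed (engineering) DR; every number below is assumed purely for the example:

import math

read_noise_e = 3.0                       # assumed read noise (e-)
sensor_saturation_e = 60_000             # assumed physical pixel saturation (e-)
raw_clip_e = sensor_saturation_e / 2     # assumed clip point ~1 stop below saturation

for label, ceiling in [("sensor saturation ", sensor_saturation_e),
                       ("raw clipping point", raw_clip_e)]:
    print(f"{label}: {math.log2(ceiling / read_noise_e):.1f} stops")
# the stated DR depends directly on which ceiling you take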
The noise floor stays the same between APS-C/FF but the FWC drops when you crop.
It doesn't; FWC is a property of a single pixel.
 
Let me thank all participants for the vivid discussion; I really enjoyed it and understand things better now...

My mistake was not understanding the term PDR exactly. I confused PDR (intuitively) with the DR of a single pixel. This DR of a single pixel (is there a technical term for it?)
I've never seen a term for the DR of a single pixel.
DR is a term used for the entire sensor, but for practical reasons it's always measured over some small subarray of pixels, as the average DR of multiple pixels. There is a reasonable expectation that pixels behave similarly across the entire sensor.
DR is given as FWC (Full Well Capacity) / RN (Read Noise), but FWC here is for the average pixel in the measured subarray, not some specific pixel, and the same is true for RN.
But how can the single pixel DR of the Oly (with much smaller pixels and lower FWC) be identical (if not slightly better, as implied by the PDR graph) to the Sony with larger pixels (and BSI)?
The difference in pixel size isn't so large: about 11 µm² (Oly) vs 14 µm² (Sony).
Also, at ISO 400 the Oly saturates at 16K e- and the Sony at 8.6K e-. Because PDR works with shot noise, which depends on saturation, I'm not so surprised that the difference isn't so big.
Hi DiMachi,

I wonder how you arrived at the pixel areas you cite. Calculated from the pixel pitch, the pixels of the A7RIII have approx. 2x the area (and FWC) of the Oly's, and the A7III approx. 3.5x:

[attached chart: pixel pitch and pixel area comparison]

Not all of this area will be used for light gathering, since there is also some "dead" border around a pixel; the circumference-to-area ratio will, however, decrease the bigger the pixel becomes. BSI will also increase the area used for light gathering...

Edited after posting: Sorry DiMachi, I just saw that the original link compares the EM1II with the A7RIV - I originally had the RIV, RIII and III in the graph, decided to leave only the RIII, but erroneously left the RIV...

My mistake - I apologize!

Still, there is the (apparent) discrepancy that a larger pixel area does not help to improve DR...

Wolfgang
Funny, I wanted to apologize to you, Wolfgang, as I had used data for the A7RIV and not the A7RIII, and I already had a message prepared to post, but DPR logged me out before I could do that.
Now I see your apology! :)
So the apology is mutual! :)

Now to the discussion.
At ISO 400 the saturation for the Sony A7RIII is 13K e- and not 8.6K e- (that's the value for the A7RIV). So the difference between the Oly and the Sony isn't that big, but considering all the facts, I still think the PhotonsToPhotos results seem reasonable.
One reason is that ISO 100 (or any specific ISO) means slightly different sensitivity for one manufacturer than for another. That means that graphs for different cameras can have some offset along the horizontal axis.

BSI also doesn't necessarily mean that a sensor has better parameters. A well-designed and well-manufactured FSI sensor with microlenses can be better than a not-so-well-designed and manufactured BSI sensor.
 
Edited after posting: Sorry DiMachi, I just saw that the original link compares the EM1II with the A7RIV - I originally had the RIV, RIII and III in the graph, decided to leave only the RIII, but erroneously left the RIV...

My mistake - I apologize!

Still, there is the (apparent) discrepancy that a larger pixel area does not help to improve DR...
They do not, in general. A larger pixel has to count more photons, and as such it usually has a higher input-referred read noise (in e-). See here (ISO 100). As an analogy, imagine a preamp and a power amp. In absolute units, the power amp would add more noise, but that would be OK because its output would be much stronger. With typical signals, the SNR could be the same.
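A rough numerical illustration of that point (all values assumed for the sake of the example, not taken from any datasheet):

import math

big = {"fwc": 60_000, "read_noise": 4.0}     # one large pixel (assumed values)
small = {"fwc": 15_000, "read_noise": 2.0}   # one of four small pixels covering the same area

dr_big = math.log2(big["fwc"] / big["read_noise"])
dr_small = math.log2(4 * small["fwc"] / (math.sqrt(4) * small["read_noise"]))
print(f"one large pixel      : {dr_big:.1f} stops")
print(f"four small, combined : {dr_small:.1f} stops")
# with the higher input-referred read noise of the big pixel, the area-normalized DR comes out similar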
 
Photons to Photos uses a measure of DR that purists don't find satisfying. The upper limit is just enough not to blow out highlights; the lower is the point at which shadow noise becomes intrusive. (I've no idea how this was set, but believe it is a decent real-life measure for images in normal viewing.) Since a smaller sensor area means more enlargement, that lower point rises a little as sensors become smaller: enlargement means, in effect, that you get to look more closely for the shadow noise.

I asked this a while ago. Mr Claff kindly answered. Doubtless I've oversimplified.

Take heart, though, that DR is now pretty similar and fairly adequate in most cameras, so you can simply enjoy taking photos.
Or the closer you get to a noisy photo, the noisier it will look? :)
Yes. (And DOF will appear more shallow too.)
 
Let me thank all participants for the vivid discussion; I really enjoyed it and understand things better now...

My mistake was not understanding the term PDR exactly. I confused PDR (intuitively) with the DR of a single pixel.
Glad you sorted that out.
This DR of a single pixel (is there a technical term for it?)
I call single-pixel dynamic range Engineering Dynamic Range (EDR), since this is what you will often find on the actual specification sheet for a sensor.
I was trying to read it from Bill's graphs. I see now that the sensor size has to be taken into account as an additional factor when one wants to compare the DR of single pixels between different sensor sizes...

When one compares PDR between an MFT camera (Oly EM1II) and a FF camera (Sony A7R3), the outcome is that at base ISO, PDR is almost 2 EV better with the larger sensor: https://www.photonstophotos.net/Charts/PDR.htm#Olympus OM-D E-M1 Mark II,Sony ILCE-7RM3

Since the Sony sensor has 384% of the area of the Oly's, a 2 EV difference is approximately what would be expected from the sheer size difference alone, if the single pixels have the same DR...
Not strictly sensor size but that's an OK first approximation.
Technically, what matters is the shape of something called the Photon Transfer Curve (PTC). Here's the A7R3, for example:

[chart: Photon Transfer Curve for the Sony A7R3]

The quick area approximation essentially assumes a straight line with a slope of 1/2 like in the middle of the curve.
But how can the single pixel DR of the Oly (with much smaller pixels and lower FWC) be identical (if not slightly better, as implied by the PDR graph) to the Sony with larger pixels (and BSI)?
Because pixel size is largely irrelevant; what matters is the total area collected (for something normalized like PDR).

--
Bill ( Your trusted source for independent sensor data at PhotonsToPhotos )
 
A better analogy is raindrops falling into buckets to measure rainfall. If you put out a 10cm x 10cm bucket, and you have roughly 200 raindrops per 100 square centimeters over the unit time you're measuring, you might get somewhat more or somewhat fewer than 200 raindrops, based on luck.

If you put out four 5cm x 5cm buckets, you'll still capture the same total number of raindrops, and your noise will be roughly the same once you add them up. There might be a little more noise with the smaller buckets -- it's easier to measure a bucket with many raindrops accurately than one with few, so whatever measurement errors your tools introduce might change things -- but at the end of the day, it's more or less the same thing if you can measure well.

We can measure increasingly well.
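Here is a tiny Monte Carlo version of the bucket analogy (assuming 200 drops per 100 cm² per unit time, as above):

import numpy as np

rng = np.random.default_rng(1)
mean_drops_per_100cm2 = 200
trials = 100_000

one_big = rng.poisson(mean_drops_per_100cm2, trials)                          # one 10x10 cm bucket
four_small = rng.poisson(mean_drops_per_100cm2 / 4, (trials, 4)).sum(axis=1)  # four 5x5 cm buckets, summed

for name, counts in [("one 10x10 cm bucket", one_big),
                     ("four 5x5 cm buckets", four_small)]:
    print(f"{name}: mean {counts.mean():.1f}, std {counts.std():.1f}")
# both come out around 200 +/- 14: same total area, same shot noise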
 
A better analogy is raindrops falling into buckets to measure rainfall. If you put out a 10cm x 10cm bucket, and you have roughly 200 raindrops per 100 square centimeters over the unit time you're measuring, you might get somewhat more or somewhat fewer than 200 raindrops, based on luck.

If you put out four 5cm x 5cm buckets, you'll still capture the same total number of raindrops, and your noise will be roughly the same once you add them up. There might be a little more noise with the smaller buckets -- it's easier to measure a bucket with many raindrops accurately than one with few, so whatever measurement errors your tools introduce might change things -- but at the end of the day, it's more or less the same thing if you can measure well.
With sensors, measurement accuracy (in raindrops) is typically better for the small buckets. In the end, the result is more or less the same.
 
This is the correct answer. I'll mark it, since some of the other ones have subtle errors.

I'll provide a little bit more detail. Let's say you have two images: one is 36 MP and the other is a 9 MP crop of the same image (the middle quarter).

If you display both at the same resolution, let's say 1 megapixel, then for the original image you're averaging 36 pixels together for each output pixel, while for the cropped image you're only averaging 9 pixels together.
So the greater the enlargement from the original image size, the more visible the noise is.
Yes. The noise is there and unchanged from the start. But the appearance of the noise depends on many factors, not the least of which are the lightness of the photo, the size it's displayed at, the distance it's viewed from, and the medium it's displayed on.
You can also think of this in terms of how many photons the sensor can absorb before it saturates. The OP posted a link to full well capacity; the full sensor has four times as much as the crop. 4x as much area means it can be hit with 4x as many photons before saturation.
Surely the saturation is in each pixel? If none of the pixels is saturated, increasing their number won't magically make some of them saturated.
Consider two photos of a scene with the same perspective, framing, and exposure using an mFT camera and FF camera. For example, f/5.6 1/400 for both cameras. Any given proportion of the FF photo will be made with 4x as much light as the mFT photo, regardless of the pixel count of either sensor (pixel count will figure into resolution, however).
No. I will not consider that, because this thread is not about comparing cameras. It's about using different lenses (or zoom settings) on one camera.
Same difference.
Comparing cameras involves far too many variables, many of them proprietary information, to be practical.
OK -- I'll rewrite it:

Consider two photos of a scene from the same position, one at 100mm f/5.6 1/400 and the other at 50mm f/5.6 1/400, with the latter cropped to the same framing as the 100mm photo. Any given proportion of the 100mm photo will be made with 4x as much light as the crop from the 50mm photo, and will thus be less noisy, with more DR. It will also have higher resolution, since it is made with 4x the number of pixels.
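In shot-noise terms, a minimal sketch (arbitrary signal level, read noise ignored):

import math

signal_100mm = 4000                   # photons collected in some patch of the 100mm photo (arbitrary)
signal_50mm_crop = signal_100mm / 4   # the same patch in the 50mm crop gets 1/4 of the light

for name, s in [("100mm photo", signal_100mm), ("50mm crop  ", signal_50mm_crop)]:
    snr = s / math.sqrt(s)            # photon shot noise = sqrt(signal)
    print(f"{name}: SNR {snr:.0f}  ({math.log2(snr):.1f} stops above the shot-noise floor)")
# the uncropped photo has 2x the SNR, i.e. one extra stop of usable range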
 
