Dynamic Range: Does Sensor Size Really Matter?

Hey fellow photogs!

I've been diving deep into the technical aspects of dynamic range lately, and I've come across some conflicting information regarding the impact of sensor size. Some argue that larger sensors inherently have better dynamic range,
Yes, that's true
while others believe that advancements in technology are closing the gap.
That's also true, but those advancements improve large sensors as well.
The recent BSI / partial sensor designs also play a role here.

What's your take on this? Any personal experiences or scientific insights you'd like to share?
IMHO, the concern about dynamic range is a little overrated. I say this because, by exposing to the right and then lifting the shadows in Lightroom, I rarely find dynamic range to be an issue.

A camera, regardless of how amazing its dynamic range may be, cannot overcome the basic limitations of not having enough light in a high-contrast scene.
 
Hey fellow photogs!

I've been diving deep into the technical aspects of dynamic range lately, and I've come across some conflicting information regarding the impact of sensor size. Some argue that larger sensors inherently have better dynamic range, while others believe that advancements in technology are closing the gap. The recent BSI / partial sensor designs also play a role here.

What's your take on this? Any personal experiences or scientific insights you'd like to share?
Check out these recent DPR articles:-

https://www.dpreview.com/articles/9...-number-a-closer-look-at-dynamic-range-part-1

https://www.dpreview.com/articles/7...mislead-a-closer-look-at-dynamic-range-part-2

https://www.dpreview.com/articles/6...need-dr-a-closer-look-at-dynamic-range-part-3

In a nutshell, bigger sensors have an advantage in DR if they have the same sensor technology as a smaller sensor.

Mr Butler is at pains in the above articles to point out that you can have two cameras with the same DR, meaning the same signal-to-noise in the deepest of shadows, yet one has significantly less photon shot noise (eg via a larger sensor allowing more photons to be captured) throughout all the other tones of the image, hence they don't actually have the same image quality at all. So DPR prefer to use other means to assess image quality with not much emphasis on dynamic range.

cheers

--
There’s some sort of something out yonder
Can’t see it; it’s too far to wander.
“Instead,” said his friends
“Use a much longer lens
And then you’ll no longer need ponder.”
© Gregory Simpson
 
My real life experience

My Canon 6D takes better photos than my Fuji X-H1, especially in contrasty light and above ISO 1600, where I struggle with the Fuji JPEGs (the raw converter won't install on my computer, for some reason).

The Canon 5Ds takes better photos than the 6D, especially if you print big, but even 11x14 prints look sharper.

The Canon 5Ds is still usable at ISO 12,800 and will equal the 6D if reduced to 20 MP.

The Canon 5D Mark IV is the king of low light and faster than the standard Nikon F3 of my youth.

I like Canon's color science better than Fuji's.

My 100 Riels (two cents)
 
This is a great perspective, thanks for sharing!

If I'm thinking this through correctly, does this also mean that bigger sensors with a smaller pixel pitch, from about the same generation of sensor design, have lower dynamic range than smaller sensors with a bigger pixel pitch?
How did you reach that conclusion? A FF sensor with a given level of tech captures the same number of photons, regardless of whether the sensor is divided into 20MP or 50MP.
I think there might have been a misunderstanding about my previous point. What I was trying to highlight is that stacked sensor designs, while offering benefits like faster readouts and reduced rolling shutter, can also introduce additional noise due to their complex architecture.

To clarify my earlier question, I was curious if the additional noise introduced by stacked sensors could potentially reduce the dynamic range of larger sensors with a smaller pixel pitch when compared to smaller sensors from the same generation but with fewer pixels. Essentially, can the benefits of a larger sensor be somewhat negated by the noise introduced in stacked designs?
 
Hey fellow photogs!

I've been diving deep into the technical aspects of dynamic range lately, and I've come across some conflicting information regarding the impact of sensor size. Some argue that larger sensors inherently have better dynamic range, while others believe that advancements in technology are closing the gap. The recent BSI / partial sensor designs also play a role here.

What's your take on this? Any personal experiences or scientific insights you'd like to share?
Check out these recent DPR articles:-

https://www.dpreview.com/articles/9...-number-a-closer-look-at-dynamic-range-part-1

https://www.dpreview.com/articles/7...mislead-a-closer-look-at-dynamic-range-part-2

https://www.dpreview.com/articles/6...need-dr-a-closer-look-at-dynamic-range-part-3

In a nutshell, bigger sensors have an advantage in DR if they have the same sensor technology as a smaller sensor.

Mr Butler is at pains in the above articles to point out that you can have two cameras with the same DR, meaning the same signal-to-noise in the deepest of shadows, yet one has significantly less photon shot noise (eg via a larger sensor allowing more photons to be captured) throughout all the other tones of the image, hence they don't actually have the same image quality at all. So DPR prefer to use other means to assess image quality with not much emphasis on dynamic range.

cheers
Thanks for sharing those articles from DPReview! They offer some great insights into the complexities of dynamic range and how it relates to sensor size and technology.

I want to add that there are significant differences in dynamic range performance between full-frame sensors from different manufacturers, such as Canon and Pentax, which doesn't make things easier. Even within the same generation of sensor technology, variations in design, implementation, and processing can lead to notable differences in image quality and dynamic range.

For instance, Pentax cameras have been known for their excellent dynamic range, partly due to their sensor design and in-camera processing algorithms. On the other hand, Canon's full-frame sensors, while excellent in many aspects, may exhibit different performance characteristics, particularly in shadow detail and noise levels.

The added noise of stacked sensors may impact the dynamic range, especially in challenging lighting conditions.
 
Hey fellow photogs!

I've been diving deep into the technical aspects of dynamic range lately, and I've come across some conflicting information regarding the impact of sensor size. Some argue that larger sensors inherently have better dynamic range, while others believe that advancements in technology are closing the gap. The recent BSI / partial sensor designs also play a role here.
But the latest sensor designs, like BSI or that partially stacked design on the recent mid-range Nikon, deliver less dynamic range in favor of more speed, so I wouldn't say the last few years have given us as many benefits as the first two decades of digital photography.
And if you're a landscape photographer who shoots a lot of scenes with high dynamic range, then a large sensor DOES make a fairly big difference, especially if you try to recover a little shadow detail.
Yes, that’s true, especially if you print big.
That being said, photographers who actually take advantage of this are not plentiful. Personally, I take landscapes with both my full frame gear and micro four thirds gear, and like both.
Exactly, what kind of difference does the viewer see in the end result on a phone or tablet between MFT and MF?
What is at issue is the number of photons captured in a pixel (or photodiode) well.
Really? Why is that the only thing at issue? Do you imagine that the DR of a whole image is the same as the DR of one of its constituent pixels?
This (more pixel well depth) will be reflected in higher dynamic range, meaning fewer blown highlights and more detail in the darkest areas.
If this theory of yours was correct, we'd expect the DR of cameras with different sensor sizes but similar pixel sizes to be similar, and we'd expect the DR of cameras with same sensor sizes but different pixel sizes to be different by the difference in pixel size.

Let's look at the PDR performance of some actual cameras with similar pixel technology to test your theory.

Consider the Z6, Z7 and Z50. All have similar sensor technology.

The Z50 and Z7 have very similar pixel sizes but the Z7's sensor size is more than twice as big. The Z6 has pixels nearly twice as large as the Z7, but a sensor of the same size.

[Chart: PDR vs. ISO for the Z6, Z7 and Z50]

It seems that the data do not support your theory. The Z6's pixels are nearly a whole stop larger than those of the Z7, yet the difference in PDR performance is practically non-existent at ISO 100, and typically about 1/3 of a stop on most of the graph. The Z7 and Z50 have pixels of similar size, yet the difference in PDR between them is twice as much as the difference between the Z6 and Z7.
BSI sensors close to double the amount of photon capture available per unit area of a sensor.
This may be true of tiny pixels on phone-camera sensors, but it is not true of pixels on MFT, APS-C, FF, or MF sensors. On these larger pixels, the switch to BSI makes only a small fraction of a stop of difference, because the non-sensing components on the front side make up a much smaller portion of the pixel surface area. These components do not scale in size with pixel size.
Increasing the MP count for a given unit area of a sensor will, however, lessen the possible dynamic range.
The increasing number of pixels does reduce DR, but not in proportion to the increase in the number of pixels. The decrease in PDR is due to the increased read noise associated with reading out a larger number of pixels in the same time.
Now, this is my understanding and having seen examples of this through the years of sensor development, I think is true.
Yet I show you data that indicates it is not actually true.

Let me introduce you to a different theory.

DR of an image depends mostly on the SNR of the image. The SNR of an image depends primarily on the amount of light captured in the image. For a given technology, sensors of the same size capture about the same amount of light at the same exposure, regardless of pixel size/count. Sensors of different sizes capture light in proportion to the differences in surface area of the whole sensor.

SNR is secondarily affected by read noise, with the effect of read noise becoming more significant at lower exposures. Major factors affecting read noise are the number and speed of read operations.

A factor of two difference in sensor area has a larger effect on SNR than a factor of 2 difference in read operation counts and/or speed.

Applying this to the Z6, Z7 and Z50 data, we start with the Z6 and Z7 capturing about the same amount of light at the same exposure. This means they will have similar DR. The Z50, in contrast, captures about half as much light at a given exposure, because its sensor has about half the surface area. This gives it about one stop lower DR. Then we have to account for the fact that the Z7 has about twice as many pixels as the Z6 and Z50. This results in a reduction of its DR by about a third of a stop. This explanation corresponds to the actual data.
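
To make that model concrete, here's a minimal Python sketch of it. Everything in it is an assumption for illustration: image-level noise is taken to be photon shot noise (scaling with total light captured, hence with sensor area) plus read noise added in quadrature over all pixels, with an invented clipping level per mm^2 and an assumed 3 e- read noise per pixel. The absolute outputs are whole-image figures, not PDR, so only the relative differences between the three cameras mean anything.

import math

def image_dr_stops(sensor_area_mm2, megapixels, read_noise_e_per_px,
                   electrons_per_mm2_at_clipping=3.0e9):
    # Total signal at clipping scales with sensor area (illustrative constant).
    full_signal = sensor_area_mm2 * electrons_per_mm2_at_clipping
    # Read noise adds in quadrature across all pixels.
    total_read_noise = read_noise_e_per_px * math.sqrt(megapixels * 1e6)
    # Signal level where whole-image SNR = S / sqrt(S + R^2) falls to 1.
    floor = 0.5 * (1 + math.sqrt(1 + 4 * total_read_noise ** 2))
    return math.log2(full_signal / floor)

# Nominal areas: FF ~864 mm^2, APS-C ~369 mm^2; read noise assumed equal.
print("Z6-like  (FF, 24 MP):   ", round(image_dr_stops(864, 24, 3.0), 2))
print("Z7-like  (FF, 46 MP):   ", round(image_dr_stops(864, 46, 3.0), 2))
print("Z50-like (APS-C, 21 MP):", round(image_dr_stops(369, 21, 3.0), 2))

Under those assumptions, the Z6-like and Z7-like cases land within about half a stop of each other, while the Z50-like case trails the Z6-like one by roughly a stop, which is the same pattern as the chart above.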
 
This is a great perspective, thanks for sharing!

If I'm thinking this through correctly, does this also mean that bigger sensors with a smaller pixel pitch, from about the same generation of sensor design, have lower dynamic range than smaller sensors with a bigger pixel pitch?
How did you reach that conclusion? A FF sensor with a given level of tech captures the same number of photons, regardless of whether the sensor is divided into 20MP or 50MP.
I think there might have been a misunderstanding about my previous point. What I was trying to highlight is that stacked sensor designs, while offering benefits like faster readouts and reduced rolling shutter, can also introduce additional noise due to their complex architecture.
OK, but what does this have to do with "bigger sensors" or "smaller pixel pitch", which are the factors you actually mention in your question? Stacked sensor design does not dictate sensor size or pixel pitch, yet those factors both affect SNR and DR, and they will have an effect independent of whether the sensor is partly or completely stacked. By adding these other factors into your question, you are obscuring the effect of stacking on DR.
To clarify my earlier question, I was curious if the additional noise introduced by stacked sensors could potentially reduce the dynamic range of larger sensors with a smaller pixel pitch
Yes, just as they could reduce the dynamic range of smaller sensors with a larger pixel pitch.
when compared to smaller sensors from the same generation but with fewer pixels.
No, the effects of stacking are mostly independent of the effects of sensor size and pixel pitch.
Essentially, can the benefits of a larger sensor be somewhat negated by the noise introduced in stacked designs?
Yes, somewhat, but not completely. The effects of sensor size are on the order of one full stop of difference in PDR per sensor size category. So MFT sensors have roughly 1 stop less DR than APS-C sensors, which have about 1 stop less DR than FF sensors, which have about one stop less DR than MF sensors. (Actual differences vary a bit from this because sensor sizes are not an exact full stop apart in surface area: MFT to APS-C and FF to MF are less than a stop, while APS-C to FF is more than a stop; see the quick check below.)

The effects of stacking, frame rate, FSI vs. BSI, or pixel count are all on the scale of a small fraction of a stop.

We would not expect that a stacked high pixel-count high framerate FF sensor would have as low a DR as a non-stacked, low pixel count, slow framerate APS-C sensor. The FF sensor in question would just have less of a DR advantage over the APS-C than would a non-stacked, low pixel count, slow framerate FF sensor of a similar vintage.
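
As a quick check of the parenthetical above, here's a small Python snippet that treats the light-gathering difference between sensor formats as log2 of the area ratio, using typical published dimensions (exact sizes vary a little by model):

import math

# Nominal sensor dimensions in mm.
sensors = {
    "MFT":   (17.3, 13.0),
    "APS-C": (23.5, 15.6),
    "FF":    (36.0, 24.0),
    "MF":    (43.8, 32.9),   # 44x33 "crop" medium format
}

areas = {name: w * h for name, (w, h) in sensors.items()}
order = ["MFT", "APS-C", "FF", "MF"]
for small, big in zip(order, order[1:]):
    print(f"{small} -> {big}: {math.log2(areas[big] / areas[small]):.2f} stops")

That works out to roughly 0.7 stops from MFT to APS-C, 1.2 stops from APS-C to FF, and 0.7 stops from FF to 44x33 MF.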
 
I don't think your example proves very much without context or a comparison.

Here is an example I just did, between a Panasonic GX9 + 20mm (40mm equivalent FL) and a Sony A7RIII + 45mm, shot with the same ISO / shutter / aperture. Both were processed from RAW, and I did nothing more than raise the exposure and lift the shadows.

GX9:

[Image: SOOC JPEG]

[Image: processed from RAW]

[Image: 160% crop]

Sony A7RIII:

[Image: SOOC JPEG]

[Image: processed from RAW]

[Image: cropped to same FOV as the GX9 shot]

I think it's quite clear the FF shot is significantly better in both noise and tonal quality.
 
Hey fellow photogs!

I've been diving deep into the technical aspects of dynamic range lately, and I've come across some conflicting information regarding the impact of sensor size. Some argue that larger sensors inherently have better dynamic range, while others believe that advancements in technology are closing the gap. The recent BSI / partial sensor designs also play a role here.

What's your take on this? Any personal experiences or scientific insights you'd like to share?
Other things being equal, a bigger pixel can capture more photons, giving you a bigger DR between empty and full for each pixel. If you just add more pixels without changing the pixel size, the DR of each pixel does not change and thus the DR of the image does not change.
 
Other things being equal, a bigger pixel can capture more photons, giving you a bigger DR between empty and full for each pixel. If you just add more pixels without changing the pixel size, the DR of each pixel does not change and thus the DR of the image does not change.
There are:

1 Pixel DR,

2 DR per unit of sensor area, and

3 DR of the total sensor area.

You are talking about #1. Most people are talking about #3.
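
To put rough numbers on the gap between #1 and #3: summing N pixels multiplies the full-scale signal by N but an uncorrelated noise floor only by sqrt(N), so whole-sensor DR exceeds per-pixel DR by roughly 0.5 * log2(N). A toy Python illustration follows, with invented per-pixel figures (real metrics like PDR also normalize to a fixed output size, so published numbers are much lower):

import math

full_well_e = 60_000    # electrons per pixel at clipping (assumed)
read_noise_e = 3.0      # electrons per pixel (assumed)
n_pixels = 24e6

pixel_dr = math.log2(full_well_e / read_noise_e)
image_dr = math.log2((full_well_e * n_pixels) /
                     (read_noise_e * math.sqrt(n_pixels)))

print(f"#1 per-pixel DR:    {pixel_dr:.1f} stops")
print(f"#3 whole-sensor DR: {image_dr:.1f} stops")
print(f"difference:         {0.5 * math.log2(n_pixels):.1f} stops")

The exact values don't matter; the point is that #1 and #3 differ by many stops, which is why arguing from pixel DR alone is misleading.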
 
I don't think your example proves very much without context or a comparison.

Here is an example I just did, between a Panasonic GX9 + 20mm (40mm equivalent FL) and a Sony A7RIII + 45mm, shot with the same ISO / shutter / aperture. Both were processed from RAW, and I did nothing more than raise the exposure and lift the shadows.

GX9:

[Image: SOOC JPEG]

[Image: processed from RAW]

[Image: 160% crop]

Sony A7RIII:

[Image: SOOC JPEG]

[Image: processed from RAW]

[Image: cropped to same FOV as the GX9 shot]

I think it's quite clear the FF shot is significantly better in both noise and tonal quality.
@Foggy @Eric you guys are comparing noise in the various tones, which does not correspond to dynamic range.

https://www.dpreview.com/articles/7687479702/how-numbers-can-mislead-a-closer-look-at-dynamic-range-part-2

Check out the article. Two cameras can show quite different noisiness throughout the various tonal levels, yet have the same dynamic range.

cheers
 
Hey fellow photogs!

I've been diving deep into the technical aspects of dynamic range lately, and I've come across some conflicting information regarding the impact of sensor size. Some argue that larger sensors inherently have better dynamic range, while others believe that advancements in technology are closing the gap. The recent BSI / partial sensor designs also play a role here.

What's your take on this? Any personal experiences or scientific insights you'd like to share?
Other things being equal, a bigger pixel can capture more photons, giving you a bigger DR between empty and full for each pixel. If you just add more pixels without changing the pixel size, the DR of each pixel does not change and thus the DR of the image does not change.
The pixels on an a7RIV are about 3/4 the size of the pixels on a Zfc, but PhotonsToPhotos lists the PDR of the a7RIV as about 1 EV better than the Zfc. According to your theory, the Zfc should be about 1/3 stop better. How do you account for the 4/3 stop discrepancy?

It isn't due to better or newer sensor tech in the a7RIV - that sensor has worse PDR than the a7III.

The P2P data is well-explained by shot noise depending on whole sensor size and read noise depending on pixel count and frame rate.
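
For reference, here's the arithmetic behind that comparison as a short Python snippet, using approximate pixel pitches (~3.8 um for the a7RIV, ~4.2 um for the Zfc) and nominal sensor areas; these are rough figures, not exact specifications:

import math

a7riv_pitch, zfc_pitch = 3.8, 4.2        # micrometres, approximate
pixel_area_ratio = (a7riv_pitch / zfc_pitch) ** 2

ff_area = 36.0 * 24.0      # a7RIV, full frame (mm^2, nominal)
apsc_area = 23.5 * 15.7    # Zfc, Nikon DX (mm^2, nominal)

print(f"pixel-size theory: Zfc ahead by {math.log2(1 / pixel_area_ratio):.2f} stops")
print(f"sensor-area view:  a7RIV ahead by {math.log2(ff_area / apsc_area):.2f} stops")

The pixel-size view predicts the Zfc should be roughly a third of a stop ahead; the sensor-area view predicts the a7RIV should be about 1.2 stops ahead, and the measured gap of about 1 EV fits that once you subtract a small read-noise penalty for the higher pixel count.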
 
Other things being equal, a bigger pixel can capture more photons, giving you a bigger DR between empty and full for each pixel. If you just add more pixels without changing the pixel size, the DR of each pixel does not change and thus the DR of the image does not change.
If you're saying that a single pixel of x area captures more photons than several smaller pixels whose combined area is also x, please explain how that works.
 
@Foggy @Eric you guys are comparing noise in the various tones, which is not corresponding to dynamic range.
That is correct - I was comparing IQ at the extremes of DR (in particular the low-light extreme, where SNR is at its worst), which is also part of this discussion.
 
Check out the article. Two cameras can show quite different noisiness throughout the various tonal levels, yet have the same dynamic range.
cheers
Yes, indeed, that was also what I was demonstrating, and also what the article summarises: it is the IQ that is meaningful between the sensor sizes, not necessarily the DR.
 
What matters is low light noise performance.

Looking at the shadows only, a larger pixel size offers a better signal to noise ratio because it gets more light at the same exposure (i.e. same aperture and shutter speed). This extends the dynamic range at the low end. That counts, especially in low light situations.

Things are not so clear if we compare actual cameras with different pixel counts or more modern sensor designs, e.g. with dual readouts. Moreover, there are equivalent settings for DOF, bokeh, etc. But in practice, everybody found the FF pictures slightly better than APS-C. Whether that matters is another story.
 
Other things being equal, a bigger pixel can capture more photons, giving you a bigger DR between empty and full for each pixel. If you just add more pixels without changing the pixel size, the DR of each pixel does not change and thus the DR of the image does not change.
If you're saying that a single pixel of x area captures more photons than several smaller pixels whose combined area is also x, please explain how that works.
Compare the Sony A7S II (12 MP) versus the A7S III (48 MP, binned); go to DXO and you will find the answer.
 
Other things being equal, a bigger pixel can capture more photons, giving you a bigger DR between empty and full for each pixel. If you just add more pixels without changing the pixel size, the DR of each pixel does not change and thus the DR of the image does not change.
If you're saying that a single pixel of x area captures more photons than several smaller pixels whose combined area is also x, please explain how that works.
Compare the Sony A7S II (12 MP) versus the A7S III (48 MP, binned); go to DXO and you will find the answer.
Old but interesting article -

 
Other things being equal, a bigger pixel can capture more photons, giving you a bigger DR between empty and full for each pixel. If you just add more pixels without changing the pixel size, the DR of each pixel does not change and thus the DR of the image does not change.
If you're saying that a single pixel of x area captures more photons than several smaller pixels whose combined area is also x, please explain how that works.
It can't work for photons and photon noise. This simple Black Swan thought experiment lays the "bigger buckets" theory to rest: if you had a 10x10 grid of square bins upon which you drizzled a bucket of marbles, and each of those large bins had cardboard dividers splitting it into 4 smaller ones, what ACTUALLY happens when you pull out the dividers? You don't increase the total number of marbles; you just lose resolution information. The 20x20 configuration takes the marble positions from analog and puts them in a discrete grid, losing resolution, and the 10x10 configuration loses even more, but the number of marbles is still exactly the same.

Imagine a street where the mailbox for each house is removed, and every 4 houses in a row got a new mailbox 4x as large as the old ones; would more mail be delivered to the people on that street? No, that only depends on how many pieces of mail were mailed to that street, by people who don't decide whether or not to send mail based on mail box knowledge.

Any real-world differences would have to do with read noise, and even there, the pixel readout rates and bit depths seem to be more of a factor than the actual size of the pixels. Post-gain read noise is a major bottleneck to base-ISO DR, both "as measured" and visibly. At higher ISOs, the post-gain read noise does not affect the simple noise-deviation measurements, but if it has any spatial correlation, it may still be visible.
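
The marbles-and-dividers point is easy to simulate. In this Python sketch, Poisson draws stand in for photon arrivals over a fixed area; the 10x10 "big pixel" version is produced simply by summing 2x2 blocks of the 20x20 version (pulling out the dividers). Read noise is deliberately left out, since that is where the real-world differences live, as noted above; the mean count and seed are arbitrary.

import numpy as np

rng = np.random.default_rng(0)
mean_photons_per_small_bin = 100
trials = 10_000

# 20x20 grid of small pixels over a fixed area...
small = rng.poisson(mean_photons_per_small_bin, size=(trials, 20, 20))
# ...versus the same area as a 10x10 grid of pixels 4x as large,
# obtained by summing each 2x2 block (i.e. pulling out the dividers).
large = small.reshape(trials, 10, 2, 10, 2).sum(axis=(2, 4))

for name, grid in [("20x20", small), ("10x10", large)]:
    totals = grid.sum(axis=(1, 2))
    print(f"{name}: mean photons over the area = {totals.mean():.0f}, "
          f"SNR of the area total = {totals.mean() / totals.std():.1f}")

Both configurations report the same total and the same shot-noise SNR for the area, exactly as the thought experiment says they must.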
 
@Foggy @Eric you guys are comparing noise in the various tones, which does not correspond to dynamic range.
https://www.dpreview.com/articles/7687479702/how-numbers-can-mislead-a-closer-look-at-dynamic-range-part-2


Check out the article. Two cameras can show quite different noisiness throughout the various tonal levels, yet have the same dynamic range.

cheers
It is unfortunate that people have come to think of DR or PDR as some kind of general noise metric. As you demonstrate, the DR tells nothing directly about what goes on above and below the "bottom" of the DR. These are two "normal" sensors, with the blue trend having more capacity for total photons, and the red trend having less read noise relative to photon input. What if a sensor had mixed pixels with different characteristics, and blended the results together? Then one of those trends could be a curve with extra bends in it. You could have a sensor that crossed the SNR=1 line not once, but 3 times! Which crossing would determine the DR? DR is a fragile concept, once you start using a mix of pixel characteristics.

In your example above, the data is straightforward if the read noise is Gaussian in distribution without spatial correlation. In the real world, photon noise always has the same level of spatial correlation: only that which comes from pure chance in independent random samples, and it has the same character in every sensor. Read noise, however, can have extra spatial correlation beyond pure chance, and it varies per sensor. So, from a visual perspective, if Camera 1 (the blue trend) had extra spatially correlated read noise, then visually the blue trend would drop faster going to the left, the curves might cross at SNR = 1.5 or 2, and Camera 1 would show a much greater visual noise increase more than 14 stops below clipping. It might even start to look worse only 8 or 9 stops below clipping, despite a higher single-value "SNR" that does not take spectral SNR into account. 8 or 9 stops down from clipping is not very deep in the shadows when the ambient light is heavily colored, like deep shade lacking in red, or warm incandescent light lacking in blue.
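
To illustrate the "same DR, different noise through the tones" idea numerically, here's a small Python sketch with two invented cameras: one with a larger full well but a noisier readout (loosely like the blue trend) and one with a smaller full well but a cleaner readout (loosely like the red trend). Both are tuned so that SNR falls to 1 about 13 stops below clipping, yet their midtone SNR differs by about a stop. All figures are made up for illustration, and read noise is treated as plain Gaussian with no spatial correlation.

import math

def snr(signal_e, read_noise_e):
    # Photon shot noise and Gaussian read noise, added in quadrature.
    return signal_e / math.sqrt(signal_e + read_noise_e ** 2)

cameras = {
    "big well, noisier readout":   {"full_well": 120_000, "read_noise": 14.1},
    "small well, cleaner readout": {"full_well": 30_000, "read_noise": 3.1},
}

for name, c in cameras.items():
    # Find the signal level where SNR crosses 1, by bisection.
    lo, hi = 0.0, float(c["full_well"])
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if snr(mid, c["read_noise"]) < 1 else (lo, mid)
    dr = math.log2(c["full_well"] / hi)
    mid_snr = snr(c["full_well"] / 32, c["read_noise"])  # 5 stops below clipping
    print(f"{name}: DR = {dr:.1f} stops, SNR 5 stops below clipping = {mid_snr:.0f}")

Both come out at about 13 stops of DR, but the first camera has roughly twice the SNR (one stop better) five stops below clipping, which is the kind of difference that shows up in the midtones and shadows long before you hit the DR limit.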



--
Beware of correct answers to wrong questions.
John
 
