Dynamic Range: Does Sensor Size Really Matter?

What matters is low light noise performance.
What matters is low light performance over a given portion of the sensor, not at the pixel level, because photos are not viewed one pixel at a time.
Looking at the shadows only, a larger pixel size offers a better signal to noise ratio because it gets more light at the same exposure (i.e. same aperture and shutter speed). This extends the dynamic range at the low end. That counts, especially in low light situations.
Imagine two cameras. They have sensors of the same size, but camera B has four times as many pixels as camera A, so camera A's pixels have four times the surface area of camera B's pixels.

As you say, camera A's pixels get a better SNR than camera B's pixels. The SNR for an A pixel will be about twice the SNR of a B pixel. If a camera A pixel captured 1,600 photons at a specific exposure, then a camera B pixel will only capture 400 photons. Photon shot noise is the square root of the photons captured, so the photon shot noise of pixel A will be 40 and it will be 20 for pixel B. This gives an SNR of 40 for pixel A and 20 for pixel B.

However, what happens when you compare equal areas of sensor A and Sensor B? Let's take the simplest example. What happens when you compare the SNR of a part of Sensor B that has the same surface area of a pixel on sensor A with the SNR of a pixel on Sensor A? The SNR at that same exposure for that pixel on A will still be 40. What about the SNR for the equal area on sensor B?

The total signal will be 4 x 400 = 1,600. Noise adds in quadrature, so the total noise will be the square root of the sum of the squares of the noises on the four pixels comprising the same area, so

N = sqrt(20^2 + 20^2 + 20^2 + 20^2)
N = sqrt(1,600)
N = 40

SNR = 1,600/40
SNR = 40

The SNR for shot noise over the same area is the same, regardless of pixel size.
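A quick back-of-the-envelope sketch in Python, using just the hypothetical 1,600/400 photon counts from the example above (not measurements from any real sensor):

```python
import math

# Hypothetical photon counts from the example above
photons_pixel_A = 1600      # one large pixel on sensor A
photons_pixel_B = 400       # one small pixel on sensor B (1/4 the area)

# Per-pixel shot-noise SNR: signal / sqrt(signal)
snr_pixel_A = photons_pixel_A / math.sqrt(photons_pixel_A)   # 40.0
snr_pixel_B = photons_pixel_B / math.sqrt(photons_pixel_B)   # 20.0

# Equal-area comparison: four B pixels cover the area of one A pixel.
signal_area_B = 4 * photons_pixel_B                # 1,600 photons in total
noise_area_B = math.sqrt(4 * 20**2)                # shot noise adds in quadrature: 40.0
snr_area_B = signal_area_B / noise_area_B          # 40.0, same as the single A pixel

print(snr_pixel_A, snr_pixel_B, snr_area_B)        # 40.0 20.0 40.0
```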
Things are not so clear if we compare actual cameras with different pixel counts or more modern sensor designs, e.g. with dual readouts.
Things remain simple enough if you look at what is actually happening over an area of the sensor.

These days the main difference in SNR performance, and thus in DR, is in read noise. Cameras with higher pixel counts generate more read noise, either because they simply have more pixels to read, or because, to achieve a given frame rate, they have to read those pixels faster.

An example of these effects can be seen in the difference in PDR between the approximately 42MP Sony a7R III and the 61MP Sony a7R IV. Despite having a newer sensor, the a7R IV has slightly lower PDR. That's because it has higher read noise from the higher pixel count.
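For illustration only, here is a hedged sketch of that read-noise effect with made-up full-well and read-noise figures (these are not measured a7R III or a7R IV values); it just shows the direction things move when per-pixel read noise stays the same while the pixel count over a given area quadruples:

```python
import math

def dr_per_area_ev(pixels, full_well_e, read_noise_e):
    """Engineering-style DR, in EV, over a fixed patch of sensor.

    Assumes total capacity scales with the number of pixels covering the
    patch, while their read noise adds in quadrature.
    """
    capacity = pixels * full_well_e
    noise = math.sqrt(pixels) * read_noise_e
    return math.log2(capacity / noise)

# One big pixel vs. four small ones with the same total area and capacity,
# each carrying the same hypothetical 3 e- of read noise:
print(dr_per_area_ev(pixels=1, full_well_e=60_000, read_noise_e=3.0))  # ~14.3 EV
print(dr_per_area_ev(pixels=4, full_well_e=15_000, read_noise_e=3.0))  # ~13.3 EV
```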
 
Other things being equal, a bigger pixel can capture more photons, giving you a bigger DR between empty and full for each pixel. If you just add more pixels without changing the pixel size, the DR of each pixel does not change and thus the DR of the image does not change.
If you're saying that a single pixel of x area captures more photons than several smaller pixels whose combined area is also x, please explain how that works.
Compare the Sony a7S II (12MP) versus the a7S III (48MP binned); go to DxO and you will find the answer.
That's not an answer as to the number of photons captured. I want to hear about why the number of photons captured would be different. But let's look anyway:

[DxOMark score comparison: Sony a7S II vs a7S III]

Golly. The a7S III scores slightly greater dynamic range and slightly less low-light capability. So? DxOMark says this about testing:

DR: A value of 12 EV is excellent, with differences below 0.5 EV usually not noticeable.

Low Light: A difference in low-light ISO of 25% equals 1/3 EV and is only slightly noticeable.


Besides, an example of one specific camera compared to another specific camera does not constitute an answer to the general question of small pixels vs. large ones. If I show you examples where the higher pixel density makes no difference in the DR, what are you going to say then?
 
Other things being equal, a bigger pixel can capture more photons, giving you a bigger DR between empty and full for each pixel. If you just add more pixels without changing the pixel size, the DR of each pixel does not change and thus the DR of the image does not change.
If you're saying that a single pixel of x area captures more photons than several smaller pixels whose combined area is also x, please explain how that works.
Compare the Sony a7S II (12MP) versus the a7S III (48MP binned); go to DxO and you will find the answer.
That's not an answer as to the number of photons captured. I want to hear about why the number of photons captured would be different. But let's look anyway:

[DxOMark score comparison: Sony a7S II vs a7S III]

Golly. The a7S III scores slightly greater dynamic range and slightly less low-light capability. So? DxOMark says this about testing:

DR: A value of 12 EV is excellent, with differences below 0.5 EV usually not noticeable.

Low Light: A difference in low-light ISO of 25% equals 1/3 EV and is only slightly noticeable.


Besides, an example of one specific camera compared to another specific camera does not constitute an answer to the general question of small pixels vs. large ones. If I show you examples where the higher pixel density makes no difference in the DR, what are you going to say then?
Look at the sports performance figure. Someone has been fiddling the books with the a7S II 😑 and forgot about the original a7S 😉



[DxOMark score comparison including the original a7S]
 
Other things being equal, a bigger pixel can capture more photons, giving you a bigger DR between empty and full for each pixel. If you just add more pixels without changing the pixel size, the DR of each pixel does not change and thus the DR of the image does not change.
If you're saying that a single pixel of x area captures more photons than several smaller pixels whose combined area is also x, please explain how that works.
It can't work for photons and photon noise. This simple Black Swan thought experiment lays the "bigger buckets" theory to rest: if you had a 10x10 grid of square bins upon which you drizzled a bucket of marbles, and each of those bins had cardboard dividers that split each large bin into 4 smaller ones, what ACTUALLY happens when you pull out the dividers? You don't increase the total number of marbles; you just lose resolution information. The 20x20 configuration takes the marble positions from analog and puts them in a discrete grid, losing resolution, and the 10x10 configuration loses even more, but the number of marbles is still exactly the same.
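A rough simulation of the marble experiment (the random counts are purely illustrative): pulling out the dividers, i.e. binning the 20x20 grid down to 10x10, changes the positional information but never the total.

```python
import random

random.seed(0)

# 20x20 grid of small bins, each holding some random number of marbles
fine = [[random.randint(0, 20) for _ in range(20)] for _ in range(20)]

# "Pull out the dividers": merge each 2x2 group of small bins into one large bin
coarse = [[sum(fine[2 * r + dr][2 * c + dc] for dr in range(2) for dc in range(2))
           for c in range(10)] for r in range(10)]

total_fine = sum(map(sum, fine))
total_coarse = sum(map(sum, coarse))
print(total_fine, total_coarse, total_fine == total_coarse)   # same total, True
```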

Imagine a street where the mailbox for each house is removed, and every 4 houses in a row get a new shared mailbox 4x as large as the old ones; would more mail be delivered to the people on that street? No, that depends only on how many pieces of mail were sent to that street, by people who don't decide whether or not to send mail based on knowledge of the mailboxes.

Any real-world differences would have to do with read noise, and even there, pixel readout rates and bit depths seem to be more of a factor than the actual size of the pixels. Post-gain read noise is a major bottleneck to base-ISO DR, both "as measured" and visibly. At higher ISOs, post-gain read noise does not affect the simple noise deviation measurements, but if it has any spatial correlation to it, it may still be visible.
I can only go off the facts.

[DxOMark scores screenshot]
 
Other things being equal, a bigger pixel can capture more photons, giving you a bigger DR between empty and full for each pixel. If you just add more pixels without changing the pixel size, the DR of each pixel does not change and thus the DR of the image does not change.
If you're saying that a single pixel of x area captures more photons than several smaller pixels whose combined area is also x, please explain how that works.
Compare the Sony a7S II (12MP) versus the a7S III (48MP binned); go to DxO and you will find the answer.
That's not an answer as to the number of photons captured. I want to hear about why the number of photons captured would be different. But let's look anyway:

[DxOMark score comparison: Sony a7S II vs a7S III]

Golly. The a7S III scores slightly greater dynamic range and slightly less low-light capability. So? DxOMark says this about testing:

DR: A value of 12 EV is excellent, with differences below 0.5 EV usually not noticeable.

Low Light: A difference in low-light ISO of 25% equals 1/3 EV and is only slightly noticeable.


Besides, an example of one specific camera compared to another specific camera does not constitute an answer to the general question of small pixels vs. large ones. If I show you examples where the higher pixel density makes no difference in the DR, what are you going to say then?
Look at the sports performance figure.
I already quoted it, and pointed out how little it means. It doesn't help in terms of DR, which is the topic.
Someone has been fiddling the books with the a7S II 😑 and forgot about the original a7S 😉
[DxOMark score comparison including the original a7S]

Ah, yes. The fat pixel model is worse in DR again.

Here's the other part you ignored: an example of one specific camera compared to another specific camera does not constitute an answer to the general question of small pixels vs. large ones. If I show you examples where fatter pixels make no improvement in DR - or low light performance - what are you going to say then?

And you still haven't explained how an equal area captures more photons. Why not?
 
Other things being equal, a bigger pixel can capture more photons, giving you a bigger DR between empty and full for each pixel. If you just add more pixels without changing the pixel size, the DR of each pixel does not change and thus the DR of the image does not change.
There is:

1. Pixel DR,

2. DR per unit of sensor area, and

3. DR of the total sensor area.

You are talking about #1. Most people are talking about #3.
OK, I've thought it through. Pixel size and sensor size are measures of area, and the DR of a light-capturing element is related (other things being equal) to its capacity (well depth). We're talking light intensity per unit area, not total light captured by the entire surface.

Maybe bigger pixels have a slight advantage, as there is less total unused sensor space (the isolation rows and columns between the pixels).
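For what it's worth, a minimal per-pixel sketch of that capacity/depth relation, with hypothetical electron counts: DR in stops is log2 of well capacity over the noise floor, so a deeper well only raises the per-pixel ceiling if the floor doesn't rise with it.

```python
import math

def pixel_dr_ev(full_well_e, read_noise_e):
    # Per-pixel engineering DR in stops (EV)
    return math.log2(full_well_e / read_noise_e)

# Hypothetical numbers: a large pixel vs. one with 1/4 the area and 1/4 the
# well depth, assuming the same read-noise floor for both
print(pixel_dr_ev(full_well_e=60_000, read_noise_e=3.0))   # ~14.3 EV
print(pixel_dr_ev(full_well_e=15_000, read_noise_e=3.0))   # ~12.3 EV
```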
 
What matters is low light noise performance.
What matters is low light performance over a given portion of the sensor, not at the pixel level, because photos are not viewed one pixel at a time.
Looking at the shadows only, a larger pixel size offers a better signal to noise ratio because it gets more light at the same exposure (i.e. same aperture and shutter speed). This extends the dynamic range at the low end. That counts, especially in low light situations.
Imagine two cameras. They have sensors of the same size, but camera B has four times as many pixels as camera A, so camera A's pixels have four times the surface area of camera B's pixels.

As you say, camera A's pixels get a better SNR than camera B's pixels. The SNR for an A pixel will be about twice the SNR of a B pixel. If a camera A pixel captured 1,600 photons at a specific exposure, then a camera B pixel will only capture 400 photons. Photon shot noise is the square root of the photons captured, so the photon shot noise of pixel A will be 40 and it will be 20 for pixel B. This gives an SNR of 40 for pixel A and 20 for pixel B.

However, what happens when you compare equal areas of sensor A and Sensor B? Let's take the simplest example. What happens when you compare the SNR of a part of Sensor B that has the same surface area of a pixel on sensor A with the SNR of a pixel on Sensor A? The SNR at that same exposure for that pixel on A will still be 40. What about the SNR for the equal area on sensor B?

The total signal will be 4 x 400 = 1,600. Noise adds in quadrature, so the total noise will be the square root of the sum of the squares of the noises on the four pixels comprising the same area, so

N = sqrt(20^2 + 20^2 + 20^2 + 20^2)
N = sqrt(1,600)
N = 40

SNR = 1,600/40
SNR = 40

The SNR for shot noise over the same area is the same, regardless of pixel size.
Things are not so clear if we compare actual cameras with different pixel counts or more modern sensor designs, e.g. with dual readouts.
Things remain simple enough if you look at what is actually happening over an area of the sensor.

These days the main difference in SNR performance, and thus in DR, is in read noise. Cameras with higher pixel counts generate more read noise, either because they simply have more pixels to read, or because, to achieve a given frame rate, they have to read those pixels faster.

An example of these effects can be seen in the difference in PDR between the approximately 42MP Sony a7R III and the 61MP Sony a7R IV. Despite having a newer sensor, the a7R IV has slightly lower PDR. That's because it has higher read noise from the higher pixel count.
In reading through this thread and additional materials, I can see there are several factors that carry more weight than others. Read noise per pixel seems to be the most constant (predictable) source of noise at first glance, which implies pixel density will increase electronic read noise. But advances in circuit manufacturing can result in less read noise at the pixel level, depending on the quality of the sensor. Thermal noise is, or can be, a factor. Shot noise is going to be the same regardless of sensor size or well depth. (?)

If I am not mistaken, at the end of looking through this, read noise is the big magilla, but with a diminishing effect as pixel size (or photon count) increases.
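If it helps, a minimal sketch of how those sources combine (the electron counts are made up): independent noise sources add in quadrature, so read noise dominates deep shadows and all but vanishes once the photon count is large.

```python
import math

def total_noise_e(signal_e, read_noise_e, thermal_noise_e=0.0):
    shot_noise = math.sqrt(signal_e)   # shot noise depends only on the photons captured
    return math.sqrt(shot_noise**2 + read_noise_e**2 + thermal_noise_e**2)

# Deep shadow: read noise is the big contributor
print(total_noise_e(signal_e=25, read_noise_e=5.0))      # ~7.1 e-
# Midtone: shot noise swamps the same read noise
print(total_noise_e(signal_e=10_000, read_noise_e=5.0))  # ~100.1 e-
```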

Added to this are the differences in red, blue and green frequencies. It's not easy to correctly assess which of these in combination will result in the best usable dynamic range. Expose to the right, I guess, is a good rule of thumb . . .

It is a more complex subject than I first thought so if I misunderstood some basics, correct me.
 
Look at the sports performance figure. Someone has been fiddling the books with the a7S II 😑 and forgot about the original a7S 😉

[DxOMark score comparison including the original a7S]
DxOMark scores are useless. I always click on the Measurements tab, which separates everything into easily readable graphs. These two are what the Sports/Low-Light ISO score is primarily based on.

[Two DxOMark measurement graphs]

--
Tom
 
Other things being equal, a bigger pixel can capture more photons, giving you a bigger DR between empty and full for each pixel. If you just add more pixels without changing the pixel size, the DR of each pixel does not change and thus the DR of the image does not change.
There is:

1. Pixel DR,

2. DR per unit of sensor area, and

3. DR of the total sensor area.

You are talking about #1. Most people are talking about #3.
OK, I've thought it through. Pixel size and sensor size are measures of area, and the DR of a light-capturing element is related (other things being equal) to its capacity (well depth). We're talking light intensity per unit area, not total light captured by the entire surface.

Maybe bigger pixels have a slight advantage, as there is less total unused sensor space (the isolation rows and columns between the pixels).
That has often been stated as a concern, but has not happened in any big way for DR. Decreasing max charge per unit of area most directly affects what base ISO can be. Any small difference in total charge capacity per unit of area is not going to affect DR so much, as long as there is significant post-gain read noise. Of course, if there were only photon noise, then SNR-based DR would vary directly with the square root of total charge capacity.

Post-gain read noise is a worse phenomenon with larger-capacity pixels, because it is totally independent of pixel capacity and is purely downstream electronic noise, so a large per-pixel charge is still going to have an uphill DR battle with it. Tiny pixels, on the other hand, have so much pre-gain read noise at the pixel level that the post-gain read noise is more heavily lost in quadrature; another way of looking at it is that higher pixel density reduces post-gain read noise per unit of sensor area, in aggregate.
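A rough input-referred sketch of that pre-gain/post-gain distinction, with purely hypothetical numbers: the downstream (post-gain) noise is divided by the analog gain when referred back to the pixel, which is why it bites hardest at base ISO and fades at high ISO.

```python
import math

def input_referred_read_noise_e(pre_gain_e, downstream_e, analog_gain):
    # Pre-gain noise originates at the pixel; downstream noise is added after
    # amplification, so referred back to the pixel it is divided by the gain.
    return math.sqrt(pre_gain_e**2 + (downstream_e / analog_gain)**2)

# Base ISO (gain 1): downstream noise dominates the floor
print(input_referred_read_noise_e(pre_gain_e=1.5, downstream_e=8.0, analog_gain=1))   # ~8.1 e-
# High ISO (gain 16): the same downstream noise nearly disappears
print(input_referred_read_noise_e(pre_gain_e=1.5, downstream_e=8.0, analog_gain=16))  # ~1.6 e-
```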
 


Any real-world differences would have to do with read noise, and even there, pixel readout rates and bit depths seem to be more of a factor than the actual size of the pixels. Post-gain read noise is a major bottleneck to base-ISO DR, both "as measured" and visibly. At higher ISOs, post-gain read noise does not affect the simple noise deviation measurements, but if it has any spatial correlation to it, it may still be visible.
I can only go off the facts.

[DxOMark scores screenshot]


"Sports" is a pretty worthless metric, IMO, because it concerns mainly with midtones and highlights at medium ISOs and tells nothing about shadows at truly high ISOs, which is where the real challenge is.

I take the "Landscape" a bit more seriously for DR, but I never said that higher pixel counts never give more noise. What I have said is that smaller pixels do not necessarily result in higher noise or lower DR per unit of sensor area, and that the real challenge is the compromise between rolling shutter speed and low noise, when the pixel *count* is high (not because the density is high!).

--
Beware of correct answers to wrong questions.
John
 
Any real-world differences would have to do with read noise, and even there, pixel readout rates and bit depths seem to be more of a factor than the actual size of the pixels. Post-gain read noise is a major bottleneck to base-ISO DR, both "as measured" and visibly. At higher ISOs, post-gain read noise does not affect the simple noise deviation measurements, but if it has any spatial correlation to it, it may still be visible.
I can only go off the facts.

[DxOMark scores screenshot]
"Sports" is a pretty worthless metric, IMO, because it concerns mainly with midtones and highlights at medium ISOs and tells nothing about shadows at truly high ISOs, which is where the real challenge is.

I take the "Landscape" a bit more seriously for DR, but I never said that higher pixel counts never give more noise. What I have said is that smaller pixels do not necessarily result in higher noise or lower DR per unit of sensor area, and that the real challenge is the compromise between rolling shutter speed and low noise, when the pixel *count* is high (not because the density is high!).
You hit the nail on the head; sensor design is based around compromises between cleaner images and rolling shutter. Rolling shutter has never been a problem for me, as my cameras have mechanical shutters for shooting sports, and I always shoot dance concerts at the full dress rehearsals, so no silent shutter is needed. We have so many choices with the latest cameras; it's just a matter of choosing which one suits your genre best.
 
These days the main difference in SNR performance, and thus in DR, is in read noise.
DR and SNR are not the same.
Cameras with higher pixel counts generate more read noise, either because they simply have more pixels to read, or because, to achieve a given frame rate, they have to read those pixels faster.
True, except this is an artifact of the electronic signal processing, not of the sensor, and does not affect the DR.
 
Hey fellow photogs!

I've been diving deep into the technical aspects of dynamic range lately, and I've come across some conflicting information regarding the impact of sensor size. Some argue that larger sensors inherently have better dynamic range, while others believe that advancements in technology are closing the gap. The recent BSI / partial sensor designs also play a role here.

What's your take on this? Any personal experiences or scientific insights you'd like to share?
Personally I think the whole DR discussion between sensor sizes is massively overblown.

3 cameras

1 Lumix GX9 mft

2 Sony A6100 APS C

3 Lumix S1

Some would have you believe that the improvement in DR (and everything else) would/should double, as the sensor size doubles from each camera to the next.

But the DR difference from 1 to 2 is 5%, and from 1 to 3 it is 11%, at ISO 200.

Use the most modern MFT sensor, in the G9 II, and the difference from that to the S1 is 8.5%.
 
These days the main difference in SNR performance, and thus in DR, is in read noise.
DR and SNR are not the same.
I didn't say they were the same. DR of an image is partly dependent on SNR. It isn't possible to have high DR without high SNR.
Cameras with higher pixel counts generate more read noise, either because they simply have more pixels to read, or because, to achieve a given frame rate, they have to read those pixels faster.
True, except this is an artifact of the electronic signal processing, not of the sensor,
That depends on whether you consider the readout circuitry from the sensor to be part of the sensor.
and does not affect the DR.
It most certainly does affect the DR of the image.
 
Personally I think the whole DR discussion between sensor sizes is massively overblown.

3 cameras

1 Lumix GX9 mft

2 Sony A6100 APS C

3 Lumix S1

Some would have you believe that the improvement in DR (and everything else) would/should double, as the sensor size doubles from each camera to the next.

But the DR difference from 1 to 2 is 5%, and from 1 to 3 it is 11%, at ISO 200.
Where did you get those numbers? And what units are they in?

If camera A has a DR of 11 EV and camera B has a DR of 10 EV, 11 may be 10% more than 10, but camera A's DR is double that of camera B's.
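To spell that out: EV is a log2 scale, so a 1 EV difference in DR means a factor of two in the linear ratio between the brightest and darkest recordable tones.

```python
dr_a_ev, dr_b_ev = 11, 10
linear_ratio = 2 ** (dr_a_ev - dr_b_ev)
print(linear_ratio)   # 2 -- "10% more EV" here means double the linear range
```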
Use the most modern MFT sensor, in the G9 II, and the difference from that to the S1 is 8.5%.
 
If camera A has a DR of 11 EV and camera B has a DR of 10 EV, 11 may be 10% more than 10, but camera A's DR is double that of camera B's.
How do you define double in this specific case? For example, if you measure camera A's dynamic range with a back-lit Xyla Dynamic Range Test chart...



[Back-lit Xyla dynamic range test chart image]




... then measure B's, the result doesn't appear to double in practical IQ.



--
My opinions are my own and not those of DPR or its administration. They carry no 'special' value (except to me and Lacie of course)
 
Personally I think the whole DR discussion between sensor sizes is massively overblown.

3 cameras

1 Lumix GX9 mft

2 Sony A6100 APS C

3 Lumix S1

Some would have you believe that the improvement in DR (and everything else) would/should double, as the sensor size doubles from each camera to the next.

But the DR difference from 1 to 2 is 5%, and from 1 to 3 it is 11%, at ISO 200.
Where did you get those numbers? And what units are they in?
Photons to photos https://www.photonstophotos.net/Charts/PDR.htm
If camera A has a DR of 11 EV and camera B has a DR of 10 EV, 11 may be 10% more than 10, but camera A's DR is double that of camera B's.
Uhhh ?
Use the most modern MFT sensor, in the G9 II, and the difference from that to the S1 is 8.5%.
 
Where did you get those numbers? And what units are they in?
Photons to photos https://www.photonstophotos.net/Charts/PDR.htm
Photographic Dynamic Range (PDR) is not Engineering Dynamic Range (DxO), nor is it the DR measurement you get using a back-lit dynamic range chart. All have their place and are related in some ways, but they are not all the same. Best to define which you are using/discussing.

Best way to measure the actual DR of the camera?:

Dynamic Range

Personally, I like to use the DxO EDR numbers when speaking of "dynamic range", as they are almost the same results one gets when actually photographing a back-lit dynamic range chart, and they give me a better idea of what I'll see when really pushing a specific camera with a high-DR scene. I use PDR for more general quick A vs B comparisons. Bill does great work with a huge database. Pretty handy.
 
Where did you get those numbers? And what units are they in?
Photons to photos https://www.photonstophotos.net/Charts/PDR.htm
Photographic Dynamic Range (PDR) is not Engineering Dynamic Range (DxO), nor is it the DR measurement you get using a back-lit dynamic range chart. All have their place and are related in some ways, but they are not all the same. Best to define which you are using/discussing.

Best way to measure the actual DR of the camera?:

Dynamic Range

Personally, I like to use the DxO EDR numbers when speaking of "dynamic range", as they are almost the same results one gets when actually photographing a back-lit dynamic range chart, and they give me a better idea of what I'll see when really pushing a specific camera with a high-DR scene. I use PDR for more general quick A vs B comparisons. Bill does great work with a huge database. Pretty handy.
Fascinating, and as with all these discussions, slippery as an eel... clearly the difference when looking at a photo from the cameras I mentioned is certainly not 'double', so what next?
 
