Dynamic range and sensor size

Yes, that's why PhotonsToPhotos has its own definition of "photographic dynamic range". I just don't think it's very useful for sensor comparison, and the DX vs FX measurements on the same sensor are proof by reductio ad absurdum as far as I'm concerned.
People have tried to use that type of argument on me to "prove" that noise and DOF don't change when you crop ("if you take scissors to the negative, does the noise/DOF change?"), and the problem is, they do!
They do or don't depending on your definition of each. For DoF to change as a result of cropping, a very exotic definition would be required;
Nope...just the standard one found on every DOF calculator.
for noise it's easier, but again it requires conflating capture with reproduction, which I'd rather not do.
Again, the same definition everyone uses.
K, I'll take your word for it.
Don't have to.
Oh but I insist.
 
Photographic Dynamic Range (PDR) is normalized to a particular viewing condition as if you're viewing a print of consistent size at a consistent distance.
Can you provide a basis for that assertion?
Normalization to print size and viewing distance is one of the traditional photographic conventions, going back at least to pre-World War II (see Zeiss CLN No. 1, 1997, for a bit of the discussion).
It makes sense for analog film, where dynamic range is a property of local particle density; it doesn't make sense for digital, where dynamic range is a property of an individual photosite.
Some thoughts on comparing noise and DR for film and digital can be found in https://corp.dxomark.com/wp-content/uploads/2017/11/2012-Film_vs_Digital_final_copyright.pdf

To discuss dynamic range, I think we first need to define what the object is, what the signal is, and what the application is.

In practical terms, I like to plot resolution loss against exposure.

On a normally exposed real-life shot, where green pixels may come very close to clipping, blue and red often do not, and that's different for different sensors and different light SPDs. Reducing sensor performance in terms of DR to one number may not be all that's needed.

What to make of pixel DR if we are trying to apply it to sensors with different pixel densities, sizes, and counts?
Even for film, print size implies different enlargement of a same-sized negative; is anyone claiming that a given film emulsion has different dynamic range between 35mm and 120 film?
Not sure what the question is. Is there a fundamental difference in the effect of photon shot noise depending on the medium?

How do we define dynamic range for a stochastic raster? Suppose I use a very contrasty film, and all I have on it after development of a step wedge is Dmax and Dmin, no intermediate Ds. If I print it small, I can have many perceivable shades of grey, essentially a halftone print. If I print it large, I will have only two perceivable shades (at some viewing distance).
We define the dynamic range of each reproduction device (such as an exposed negative, a monitor, a printer, or even a given printed photograph) as distinct from the dynamic range of the capturing system (such as a sensor or film), thereby avoiding a lot of pain and confusion.
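A toy sketch of that capture-versus-reproduction distinction, using made-up binary "grains" rather than any real film data; it only illustrates how averaging a strictly two-level capture over many grains per print dot yields intermediate tones:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "negative": every grain ends up either Dmin (0) or Dmax (1); the
# local exposure only sets the probability of a grain turning dark.
exposure = np.linspace(0.0, 1.0, 8)                      # an 8-step wedge
grains = rng.random((8, 100_000)) < exposure[:, None]    # strictly binary capture

# "Small print": many grains land inside each perceivable dot, so their
# average produces intermediate greys -- a de facto halftone.
small_print = grains.mean(axis=1)
print(np.round(small_print, 2))      # roughly 0.00, 0.14, 0.29, ... 1.00

# "Large print": roughly one grain per perceivable dot, so only the two
# captured levels survive at the chosen viewing distance.
large_print = grains[:, 0].astype(float)
print(large_print)                   # only 0.0 or 1.0
```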
 
The upper end of the dynamic range, however, is a property of each photosite (assume a monochrome sensor without a CFA for simplicity) and is determined solely by the number of photons it can collect before any additional photons no longer result in a higher numerical value being read out at the end of the exposure.
How we view the upper end depends on the final size of the image as well as on the pixel density.
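For what it's worth, the per-pixel ("engineering") figure being argued over here is just a ratio; a minimal sketch with made-up numbers (nothing below is measured from any real camera):

```python
import math

# Hypothetical per-pixel values, not taken from any real camera's specs.
full_well_e = 60000   # electrons a photosite can hold before it clips
read_noise_e = 3.0    # electrons of read noise, taken here as the floor

# Engineering DR of a single photosite, in stops:
dr_stops = math.log2(full_well_e / read_noise_e)
print(f"per-pixel DR ~ {dr_stops:.1f} stops")   # ~14.3 stops

# The ratio knows nothing about how many such photosites sit around it,
# which is why the per-pixel figure is identical for the FX frame and
# the DX crop of the same sensor.
```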

The gazillion-MP sensor Eric Fossum is developing would have a saturation level of 1 electron.
 
Photographic Dynamic Range (PDR) is normalized to a particular viewing condition as if you're viewing a print of consistent size at a consistent distance.
Can you provide a basis for that assertion?
Normalization to print size and viewing distance is one of the traditional photographic conventions, going back at least to pre-World War II (see Zeiss CLN No. 1, 1997, for a bit of the discussion).
It makes sense for analog film, where dynamic range is a property of local particle density; it doesn't make sense for digital, where dynamic range is a property of an individual photosite.
Some thoughts on comparing noise and DR for film and digital can be found in https://corp.dxomark.com/wp-content/uploads/2017/11/2012-Film_vs_Digital_final_copyright.pdf
You haven't had time to read the article, yet you are already replying.
To discuss dynamic range, I think we first need to define what the object is, what the signal is, and what the application is.

In practical terms, I like to plot resolution loss against exposure.

On a normally exposed real-life shot, where green pixels may come very close to clipping, blue and red often do not, and that's different for different sensors and different light SPDs. Reducing sensor performance in terms of DR to one number may not be all that's needed.

What to make of pixel DR if we are trying to apply it to sensors with different pixel densities, sizes, and counts?
Even for film, print size implies different enlargement of a same-sized negative; is anyone claiming that a given film emulsion has different dynamic range between 35mm and 120 film?
Not sure what the question is. Is there a fundamental difference in the effect of photon shot noise depending on the medium?

How do we define dynamic range for a stochastic raster? Suppose I use a very contrasty film, and all I have on it after development of a step wedge is Dmax and Dmin, no intermediate Ds. If I print it small, I can have many perceivable shades of grey, essentially a halftone print. If I print it large, I will have only two perceivable shades (at some viewing distance).
We define the dynamic range of each reproduction device (such as a monitor or a printer, or even a given printed photograph) as distinct from the dynamic range of the capturing system (such as a sensor or film), thereby avoiding a lot of pain and confusion.
That is, you don't have an answer. Doesn't even look like you've really understood the question.

Ciao.
 
Can someone give me a logically persuasive explanation of the fact that smaller sensors have lower DR?

This is the comparison of the D800 in FX and DX modes. Same sensor, same pixels. I cannot make the logical leap that is needed to grasp why trimming off the outside of the frame reduces the DR of what is left. Surely cropping in processing won't do the same (rationally it cannot), but I'm at a loss to understand why.
Photographic Dynamic Range (PDR) is normalized to a particular viewing condition as if you're viewing a print of consistent size at a consistent distance.

So, a smaller sensor (or a cropped portion of a sensor) needs to be enlarged more to make a print of the same size; therefore noise is more apparent and the PDR is lower.
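A minimal sketch of that normalization, assuming (as an approximation, not PhotonsToPhotos' exact pipeline) that apparent print noise scales with the linear enlargement factor; the electron counts are invented:

```python
import math

# Hypothetical per-pixel figures; only the direction of the effect matters.
full_well_e = 60000
read_noise_e = 3.0
pixel_dr = math.log2(full_well_e / read_noise_e)

def print_referred_dr(pixel_dr_stops: float, crop_factor: float) -> float:
    """Assume apparent noise in a same-sized print grows with the linear
    enlargement factor, so every 2x of extra enlargement costs one stop."""
    return pixel_dr_stops - math.log2(crop_factor)

print(f"FX: {print_referred_dr(pixel_dr, 1.0):.2f} stops")   # ~14.29
print(f"DX: {print_referred_dr(pixel_dr, 1.5):.2f} stops")   # ~13.70
# Same pixels, same per-pixel DR; the DX crop gives up ~0.58 stops only
# because it must be enlarged 1.5x more to reach the same print size.
```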
OK, so here is a related issue. On another thread, people were positing that taking multiple shots with a longer lens and stitching them would increase the DR over using a single shot with a wide-angle lens. The contention being that the higher-res image, when viewed at the same size, would have greater SNR and therefore greater DR. Whilst I can accept the former, I balk at the latter. The sensor gathers the same light, and stitching is irrelevant to that. If it reduces effective noise, there is an effective benefit to the DR, but it is still not gathering more light, and thus not changing the true DR.
The answer depends on whether the stitched photo is the same physical size as the single wide-angle photo. If so, then yes, the "dynamic range" is increased, because the un-stitched images would be enlarged less.

Ultimately it's light per unit area as seen in the final image that matters.
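A back-of-the-envelope sketch of that, under the simplifying assumption that stitching N equally exposed frames into the same output size behaves like averaging N samples per unit of final-image area; the electron counts are invented:

```python
import math

# Hypothetical numbers; the point is only the sqrt(N) behaviour.
clip_e = 60000        # highlight clipping level per pixel (electrons)
read_noise_e = 3.0    # random noise floor per pixel (electrons)

for n_frames in (1, 2, 4, 9):
    # Averaging n samples per unit of final-image area cuts the random
    # floor by sqrt(n); the clipping point stays exactly where it was.
    floor = read_noise_e / math.sqrt(n_frames)
    dr = math.log2(clip_e / floor)
    print(f"{n_frames} frame(s): ~{dr:.2f} stops")
# Every 4x more stitched area buys roughly one extra stop, all of it at
# the shadow end -- clipped highlights stay clipped.
```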
How can you reconcile this definition with the fact that if any of the original photos contained a bright spot with luminosity that clips (in RAW), it would not be recoverable regardless of the number of exposures stitched?
As with any photograph where highlights that matter are clipped, that's no good. Stitching isn't meant to fix that, but it would get you more dynamic range in the shadows.
I understand that, but shadow dynamic range is not convertible to highlight dynamic range the way it is in the other direction (via ETTR).

So an intuitive dynamic range measure for me would be the "bucket depth" of the sensor photosites constrained by a fixed-neighborhood noise floor; anything that takes into account the overall sensor size is a perceptual, hybrid DR+noise+resolution metric.
Dynamic range is dynamic range regardless of where you place middle gray.
 
How can you reconcile this definition with the fact that if any of the original photos contained a bright spot with luminosity that clips (in RAW), it would not be recoverable regardless of the number of exposures stitched?
As with any photograph where highlights that matter are clipped, that's no good. Stitching isn't meant to fix that, but it would get you more dynamic range in the shadows.
If there is no information, in other words if the pixel is black, how do you get more dynamic range in the shadows?
Shadows aren't pure black. Pure black is pretty hard to find once you've removed the lens cap :-)
 
The upper end of the dynamic range, however, is a property of each photosite (assume a monochrome sensor without a CFA for simplicity) and is determined solely by the number of photons it can collect before any additional photons no longer result in a higher numerical value being read out at the end of the exposure.
How we view the upper end depends on the final size of the image as well as on the pixel density.
There is no final image, no image has been made. No image can ever be made because the data only exists as a RAW file on an SD card in a universe without any sentient beings left. Like many before them the last of those succumbed to hunger and diseases that followed a terrifying thermonuclear war brought about by a faction of rogue dpreview readers who banded together to finally free their planet from the scourge of a species whose members are apparently unable to grasp really simple concepts.

Even in such a cold and barren universe, the DR of the sensor in the camera that took this image should be definable as distinct from that of any picture it ever took.
The gazillion-MP sensor Eric Fossum is developing would have a saturation level of 1 electron.
No idea who or what that is, but unless it's a really crappy sensor it wouldn't; it would just have a very low base ISO rating and require long exposures. Yes, you can create higher reproductive dynamic range from a lower underlying capture dynamic range by techniques such as dithering, halftoning, and other spatial tradeoffs, but that doesn't change the fact that those are still different ranges.
 
Can someone give me a logically persuasive explanation of the fact that smaller sensors have lower DR?

This is the comparison of the D800 in FX and DX modes. Same sensor, same pixels. I cannot make the logical leap that is needed to grasp why trimming off the outside of the frame reduces the DR of what is left. Surely cropping in processing won't do the same (rationally it cannot), but I'm at a loss to understand why.
Photographic Dynamic Range (PDR) is normalized to a particular viewing condition as if you're viewing a print of consistent size at a consistent distance.

So, a smaller sensor (or a cropped portion of a sensor) needs to be enlarged more to make a print of the same size; therefore noise is more apparent and the PDR is lower.
OK, so here is a related issue. On another thread, people were positing that taking multiple shots with a longer lens and stitching them would increase the DR over using a single shot with a wide-angle lens. The contention being that the higher-res image, when viewed at the same size, would have greater SNR and therefore greater DR. Whilst I can accept the former, I balk at the latter. The sensor gathers the same light, and stitching is irrelevant to that. If it reduces effective noise, there is an effective benefit to the DR, but it is still not gathering more light, and thus not changing the true DR.
The answer depends on whether the stitched photo is the same physical size as the single wide-angle photo. If so, then yes, the "dynamic range" is increased, because the un-stitched images would be enlarged less.

Ultimately it's light per unit area as seen in the final image that matters.
How can you reconcile this definition with the fact that if any of the original photos contained a bright spot with luminosity that clips (in RAW), it would not be recoverable regardless of the number of exposures stitched?
As with any photograph where highlights that matter are clipped, that's no good. Stitching isn't meant to fix that, but it would get you more dynamic range in the shadows.
I understand that, but shadow dynamic range is not convertible to highlight dynamic range the way it is in the other direction (via ETTR).

So an intuitive dynamic range measure for me would be the "bucket depth" of the sensor photosites constrained by a fixed-neighborhood noise floor; anything that takes into account the overall sensor size is a perceptual, hybrid DR+noise+resolution metric.
Dynamic range is dynamic range regardless of where you place middle gray.
You can define dynamic range in a way that makes this true, but it will not be very useful as a characteristic of camera sensors.
 
The upper end of the dynamic range, however, is a property of each photosite (assume a monochrome sensor without a CFA for simplicity) and is determined solely by the number of photons it can collect before any additional photons no longer result in a higher numerical value being read out at the end of the exposure.
How we view the upper end depends on the final size of the image as well as on the pixel density.
There is no final image, no image has been made.
When the conventional DR is computed, nobody really makes images.
Even in such a cold and barren universe, the DR of the sensor in the camera that took this image should be definable as distinct from that of any picture it ever took.
The gazillion-MP sensor Eric Fossum is developing would have a saturation level of 1 electron.
No idea who or what that is
Google him
but unless it's a really crappy sensor it wouldn't; it would just have a very low base ISO rating and require long exposures.
Wrong.
Yes, you can create higher reproductive dynamic range from a lower underlying capture dynamic range by techniques such as dithering, halftoning, and other spatial tradeoffs, but that doesn't change the fact that those are still different ranges.
What the DR of such a sensor would be, either at a pixel or an image level, would not depend on the saturation level, even if that level were 1.
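A toy illustration of that point, assuming idealized single-photon "jots" (saturation of 1 electron, no read noise) binned in blocks to form output pixels; the block size and light levels below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)

jots_per_pixel = 4096   # hypothetical binning block of single-photon "jots"

# Sweep a few scene brightnesses (mean photons arriving per jot):
for mean_photons in (0.001, 0.01, 0.1, 0.5):
    photons = rng.poisson(mean_photons, jots_per_pixel)
    jots = np.minimum(photons, 1)      # each jot saturates at 1 electron
    value = int(jots.sum())            # binned output "pixel" value
    print(f"mean {mean_photons:>5}: binned value {value:4d} of {jots_per_pixel}")

# The binned value runs from ~0 up to thousands, so the usable range at
# the image level is set by the block size and the photon statistics,
# not by the 1-electron cap of an individual jot.
```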
 
The upper end of the dynamic range, however, is a property of each photosite (assume a monochrome sensor without a CFA for simplicity) and is determined solely by the number of photons it can collect before any additional photons no longer result in a higher numerical value being read out at the end of the exposure.
How we view the upper end depends on the final size of the image as well as on the pixel density.
There is no final image, no image has been made. No image can ever be made because the data only exists as a RAW file on an SD card in a universe without any sentient beings left. Like many before them the last of those succumbed to hunger and diseases that followed a terrifying thermonuclear war brought about by a faction of rogue dpreview readers who banded together to finally free their planet from the scourge of a species whose members are apparently unable to grasp really simple concepts.

Even in such a cold and barren universe, the DR of the sensor in the camera that took this image should be definable as distinct from that of any picture it ever took.
The gazillion-MP sensor Eric Fossum is developing would have a saturation level of 1 electron.
No idea who or what that is, but unless it's a really crappy sensor it wouldn't; it would just have a very low base ISO rating and require long exposures. Yes, you can create higher reproductive dynamic range from a lower underlying capture dynamic range by techniques such as dithering, halftoning, and other spatial tradeoffs, but that doesn't change the fact that those are still different ranges.
You really don't understand much of what is discussed here.
 
Can someone give me a logically persuasive explanation of the fact that smaller sensors have lower DR?

This is the comparison of the D800 in FX and DX modes. Same sensor, same pixels. I cannot make the logical leap that is needed to grasp why trimming off the outside of the frame reduces the DR of what is left. Surely cropping in processing won't do the same (rationally it cannot), but I'm at a loss to understand why.
Photographic Dynamic Range (PDR) is normalized to a particular viewing condition as if you're viewing a print of consistent size at a consistent distance.

So, a smaller sensor (or a cropped portion of a sensor) needs to be enlarged more to make a print of the same size; therefore noise is more apparent and the PDR is lower.
OK, so here is a related issue. On another thread, people were positing that taking multiple shots with a longer lens and stitching them would increase the DR over using a single shot with a wide-angle lens. The contention being that the higher-res image, when viewed at the same size, would have greater SNR and therefore greater DR. Whilst I can accept the former, I balk at the latter. The sensor gathers the same light, and stitching is irrelevant to that. If it reduces effective noise, there is an effective benefit to the DR, but it is still not gathering more light, and thus not changing the true DR.
The answer depends on whether the stitched photo is the same physical size as the single wide-angle photo. If so, then yes, the "dynamic range" is increased, because the un-stitched images would be enlarged less.

Ultimately it's light per unit area as seen in the final image that matters.
SirPeepsalot illustrates the issue I am trying to present. Black is black and white is white regardless of the resolution.
Not really. "Black" begins at the noise floor. Signals lower than the noise floor are considered to be indistinguishable and simply "black". Less noisy sensors with a lower noise floor can record a "darker black".

Likewise, "white" is the saturation limit of the sensor. Signals above the saturation limit are also "white". Sensors that can record more light can see "whiter whites".
A sensor gathers a certain range from black to white. Stitching is not going to change this. It cannot bring information that is not there to begin with.
Stitching lowers the noise floor and raises the saturation limit (for a given proportion of the photo compared to a single exposure), thus increasing the DR.
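A sketch of that bookkeeping for a fixed proportion of the final photo, with invented per-pixel numbers; it assumes N captured pixels feed that region instead of one:

```python
import math

# Hypothetical per-pixel values, for illustration only.
full_well_e = 60000    # per-pixel saturation (electrons)
read_noise_e = 3.0     # per-pixel read noise (electrons)

def region_dr_stops(pixels_per_region: int) -> float:
    # For a fixed proportion of the final photo: summed capacity grows
    # linearly with the number of pixels feeding it, while their combined
    # read noise only adds in quadrature.
    saturation = pixels_per_region * full_well_e
    noise_floor = math.sqrt(pixels_per_region) * read_noise_e
    return math.log2(saturation / noise_floor)

for n in (1, 4, 16):
    print(f"{n:2d} px per region: ~{region_dr_stops(n):.2f} stops")
# Every 4x increase in pixels behind the same proportion of the photo
# gains about one stop: a higher effective ceiling and a relatively
# lower floor.
```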
 
