What affects the "Dynamic range"?

Alireza S

Hi,

I'm interested to know:

1. What is the Dynamic range? and...

2. What really affects the Dynamic range? (ISO or any other things...?)

Thx,

Alireza
 
Hi,

What affects the "Dynamic range"? I'm interested to know:

1. What is the Dynamic range? and...
DR (Dynamic Range) is the range of light levels over which detail can be recorded in a photo. The low end is where detail is drowned out by noise, and the high end is where detail is blown out by saturation.
The low end is actually the lowest light level that can be distinguished from a total absence of light (i.e. black) by the system. It may or may not be determined by the presence of noise. It is more usually determined by the sensitivity of the sensor to very low levels of light, resulting in no meaningful output.
Can you flesh that out? Let's say I have a 24 MP sensor with a QE (in the green channel) of 51%, a per-pixel saturation limit of 82K photoelectrons, and a per-pixel read noise of 3.3 electrons. What would the DR of this sensor be? If you need more information, what information would you require?
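For what it's worth, one common back-of-the-envelope answer uses the "engineering" per-pixel definition: DR is the ratio of full-well capacity to read noise. Under that definition the QE and pixel count given above don't enter at all; other definitions (e.g. photographic DR with a higher SNR threshold, or per-area normalization) give different numbers. A minimal sketch using only the figures from the post:

```python
import math

full_well = 82_000   # per-pixel saturation limit, electrons (from the post)
read_noise = 3.3     # per-pixel read noise, electrons RMS (from the post)

# "Engineering" dynamic range: largest recordable signal divided by the
# dark noise floor (here taken as read noise alone).
ratio = full_well / read_noise
dr_stops = math.log2(ratio)        # photographic stops (EV)
dr_db = 20 * math.log10(ratio)     # decibels

print(f"{dr_stops:.1f} stops, {dr_db:.1f} dB")  # ~14.6 stops, ~87.9 dB
```

Note this is per pixel and ignores dark current, quantization, and downstream electronics, all of which shave off a bit in practice.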
 
Hi,

What affects the "Dynamic range"? I'm interested to know:

1. What is the Dynamic range? and...
DR (Dynamic Range) is the range of light levels over which detail can be recorded in a photo. The low end is where detail is drowned out by noise, and the high end is where detail is blown out by saturation.
The low end is actually the lowest light level that can be distinguished from a total absence of light (i.e. black) by the system. It may or may not be determined by the presence of noise. It is more usually determined by the sensitivity of the sensor to very low levels of light, resulting in no meaningful output.
Can you flesh that out? Let's say I have a 24 MP sensor with a QE (in the green channel) of 51%, a per-pixel saturation limit of 82K photoelectrons, and a per-pixel read noise of 3.3 electrons. What would the DR of this sensor be? If you need more information, what information would you require?
"It is more usually determined by the sensitivity of the sensor to very low levels of light, resulting in no meaningful output." Sensitivity to levels resulting in (no) output. Let it sink.

--
http://www.libraw.org/
 
Hi,

What affects the "Dynamic range"? I'm interested to know:

1. What is the Dynamic range? and...
DR (Dynamic Range) is the range of light levels over which detail can be recorded in a photo. The low end is where detail is drowned out by noise, and the high end is where detail is blown out by saturation.
The low end is actually the lowest light level that can be distinguished from a total absence of light (i.e. black) by the system. It may or may not be determined by the presence of noise. It is more usually determined by the sensitivity of the sensor to very low levels of light, resulting in no meaningful output.
That is wrong. The 'lowest meaningful output' is determined by the level of noise. That is a basic tenet of communications theory.
 
And as a result, the lowest discernible level will always be determined (at least partially) by noise--especially since noise requires an area and not a single pixel. Even in an area of 2 pixels, you will have noise unless you have the exact same number of photons hit each.
Not entirely correct. Rather think of photon shot noise in terms of repeated measurements at the same pixel, since this more accurately reflects the physical process (random arrival time of photons).

We often take a shortcut and perform a space-for-time substitution by measuring noise spatially over a number of pixels, rather than temporally at the same pixel, since this is more convenient (only requires one captured image).

If we do the space-for-time substitution carefully enough (uniform illumination, etc.), then the spatial and temporal noise measurements should agree fairly well. But keep in mind that some of what we could call noise is purely a spatial phenomenon, like PRNU or other fixed-pattern sources of variability.
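The space-for-time point can be illustrated with a toy simulation. This is only a sketch under assumed numbers (10,000 e- mean signal, 1% PRNU, and a Gaussian approximation of Poisson shot noise, which is reasonable at large counts):

```python
import random
import statistics

random.seed(42)

MEAN_E = 10_000.0   # mean signal, electrons (illustrative)
PRNU = 0.01         # 1% pixel response non-uniformity (illustrative)
N = 20_000          # number of samples

# Temporal noise: one pixel, many exposures. Only shot noise varies.
temporal = [random.gauss(MEAN_E, MEAN_E ** 0.5) for _ in range(N)]

# Spatial noise: one exposure, many pixels. Each pixel also carries a
# fixed gain error (PRNU), which inflates the spatial measurement.
gains = [random.gauss(1.0, PRNU) for _ in range(N)]
spatial = [random.gauss(MEAN_E * g, (MEAN_E * g) ** 0.5) for g in gains]

shot = MEAN_E ** 0.5                      # 100 e- of shot noise
prnu = PRNU * MEAN_E                      # 100 e- of PRNU variability
expected = (shot ** 2 + prnu ** 2) ** 0.5 # ~141 e- combined

print(f"temporal std: {statistics.stdev(temporal):.0f} e-")  # ~100
print(f"spatial std:  {statistics.stdev(spatial):.0f} e-")   # ~141
```

So the two estimates agree only once the fixed-pattern terms are corrected for, which is exactly the caveat above.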

-F
 
And as a result, the lowest discernible level will always be determined (at least partially) by noise--especially since noise requires an area and not a single pixel. Even in an area of 2 pixels, you will have noise unless you have the exact same number of photons hit each.
Not entirely correct. Rather think of photon shot noise in terms of repeated measurements at the same pixel, since this more accurately reflects the physical process (random arrival time of photons).

We often take a shortcut and perform a space-for-time substitution by measuring noise spatially over a number of pixels, rather than temporally at the same pixel, since this is more convenient (only requires one captured image).

If we do the space-for-time substitution carefully enough (uniform illumination, etc.), then the spatial and temporal noise measurements should agree fairly well. But keep in mind that some of what we could call noise is purely a spatial phenomenon, like PRNU or other fixed-pattern sources of variability.

-F
I see what you are saying, but my point was that a single pixel on a single exposure cannot identify shot noise. It has only one value, even when different photons (inevitably) hit different parts of its surface area. In digital photography, shot noise manifests itself over a minimum of 2 pixels through the process we correctly described. And to tie to the response:

The variance between 2 or more pixels when they "should" ideally be uniform (as under uniform lighting) is noise. In reality and in theory, this "ideally uniform lighting" is practically impossible due to photons' inherent randomness.

To put it another way: A bunch of photons hitting a single pixel results in a signal. By itself, however, it cannot be distinguished as signal vs. noise, because it is itself the very measurement of the signal. The noise is only identified by variance, which by definition requires at least 2 values. Alternatively, you could sample "the same pixel twice," but that's a bit out of scope for a topic that discusses single-exposure dynamic range. :)
 
And as a result, the lowest discernible level will always be determined (at least partially) by noise--especially since noise requires an area and not a single pixel. Even in an area of 2 pixels, you will have noise unless you have the exact same number of photons hit each.
Not entirely correct. Rather think of photon shot noise in terms of repeated measurements at the same pixel, since this more accurately reflects the physical process (random arrival time of photons).

We often take a shortcut and perform a space-for-time substitution by measuring noise spatially over a number of pixels, rather than temporally at the same pixel, since this is more convenient (only requires one captured image).

If we do the space-for-time substitution carefully enough (uniform illumination, etc.), then the spatial and temporal noise measurements should agree fairly well. But keep in mind that some of what we could call noise is purely a spatial phenomenon, like PRNU or other fixed-pattern sources of variability.

-F
I see what you are saying, but my point was that a single pixel on a single exposure cannot identify shot noise.
No argument there.
It has only one value, even when different photons (inevitably) hit different parts of its surface area. In digital photography, shot noise manifests itself over a minimum of 2 pixels through the process we correctly described. And to tie to the response:
Agreed. My point (if a little pedantic) was just that if you use multiple pixels, you are conflating photon shot noise with other spatial sources of noise (PRNU, other fixed pattern noise, and spatial variability in the light source itself), unless you correct for those. And you can only correct for PRNU if you take repeat measurements. Without correcting for PRNU your spatial "noise" measurement will underestimate DR.
To put it another way: A bunch of photons hitting a single pixel results in a signal. By itself, however, it cannot be distinguished as signal vs. noise, because it is itself the very measurement of the signal. The noise is only identified by variance, which by definition requires at least 2 values. Alternatively, you could sample "the same pixel twice," but that's a bit out of scope for a topic that discusses single-exposure dynamic range. :)
Fair enough. But the value of a single pixel (from a single image) is not "the signal"; it is a sample drawn from the distribution of "the signal" (at that location on the sensor). By taking a spatial average, you are assuming that "the signal" is constant over all the pixels you are considering in your noise measurement, which is only true under controlled conditions.
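That "constant signal" assumption matters in practice. A toy sketch (assumed numbers: 10,000 e- mean, a hypothetical 10% illumination gradient across the patch, Gaussian approximation of shot noise): under uniform light the spatial std recovers the shot noise, but under a gradient the naive spatial "noise" measurement also absorbs the signal variation:

```python
import random
import statistics

random.seed(1)

N = 10_000
MEAN = 10_000.0  # mean signal, electrons (illustrative)

# Uniform scene: spatial std reflects shot noise only (~sqrt(MEAN) = 100 e-).
uniform = [random.gauss(MEAN, MEAN ** 0.5) for _ in range(N)]

# Scene with a 10% illumination gradient across the patch: the signal
# itself varies, and a naive spatial std conflates that with noise.
gradient = [
    random.gauss(MEAN * (1 + 0.1 * i / N), (MEAN * (1 + 0.1 * i / N)) ** 0.5)
    for i in range(N)
]

print(f"uniform patch std:  {statistics.stdev(uniform):.0f} e-")   # ~100
print(f"gradient patch std: {statistics.stdev(gradient):.0f} e-")  # ~3x larger
```

Hence the usual advice to measure spatial noise only on flat-field captures, or to high-pass the patch first to remove the low-frequency signal component.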
 
And as a result, the lowest discernible level will always be determined (at least partially) by noise--especially since noise requires an area and not a single pixel. Even in an area of 2 pixels, you will have noise unless you have the exact same number of photons hit each.
Not entirely correct. Rather think of photon shot noise in terms of repeated measurements at the same pixel, since this more accurately reflects the physical process (random arrival time of photons).
Not entirely correct. Time is simply a dimension of space-time. Spatially random arrival has the same general properties as temporally random arrival. We don't generally construct a photograph using sequential observations of the same pixel (unless you have a Nipkow disc camera).

--
Tinkety tonk old fruit, & down with the Nazis!
Bob
 
To put it another way: A bunch of photons hitting a single pixel results in a signal.
More accurately, it results in a sample or observation, which by itself is neither 'signal' nor 'noise'.
By itself, however, it cannot be distinguished as signal vs. noise, because it is itself the very measurement of the signal. The noise is only identified by variance, which by definition requires at least 2 values. Alternatively, you could sample "the same pixel twice," but that's a bit out of scope for a topic that discusses single-exposure dynamic range. :)
I think that there is some issue here with the use of the term 'signal'. In communications theory, 'signal' is the intended message. In the context of photography, trying to work out what is the 'intended message' is a little complicated, especially when your receiver is sensitive enough that quantum fluctuations in the carrier are significant.
 
To put it another way: A bunch of photons hitting a single pixel results in a signal.
More accurately, it results in a sample or observation, which by itself is neither 'signal' nor 'noise'.
By itself, however, it cannot be distinguished as signal vs. noise, because it is itself the very measurement of the signal. The noise is only identified by variance, which by definition requires at least 2 values. Alternatively, you could sample "the same pixel twice," but that's a bit out of scope for a topic that discusses single-exposure dynamic range. :)
I think that there is some issue here with the use of the term 'signal'. In communications theory, 'signal' is the intended message. In the context of photography, trying to work out what is the 'intended message' is a little complicated, especially when your receiver is sensitive enough that quantum fluctuations in the carrier are significant.
We are back to the existential question of defining noise and signal. I prefer your post from yesterday: photon noise or not, that is the signal. Zero noise means no added/modulated noise to that signal, in other words, a perfect capture of those photons wherever they happened to hit. Unless they hit exactly when the shutter opened or closed, if "exactly" even makes sense. :-)

BTW, if we think of the photons as small points moving along lines, then the image consists of a bunch of deltas. The sampled image is an aliased representation of that signal.
 
To put it another way: A bunch of photons hitting a single pixel results in a signal.
More accurately, it results in a sample or observation, which by itself is neither 'signal' nor 'noise'.
By itself, however, it cannot be distinguished as signal vs. noise, because it is itself the very measurement of the signal. The noise is only identified by variance, which by definition requires at least 2 values. Alternatively, you could sample "the same pixel twice," but that's a bit out of scope for a topic that discusses single-exposure dynamic range. :)
I think that there is some issue here with the use of the term 'signal'. In communications theory, 'signal' is the intended message. In the context of photography, trying to work out what is the 'intended message' is a little complicated, especially when your receiver is sensitive enough that quantum fluctuations in the carrier are significant.
We are back to the existential question of defining noise and signal. I prefer your post from yesterday: photon noise or not, that is the signal.
Yes, that's easier. One can get into quantum mechanics as information theory, but it tends to diverge from practicality quite quickly. Unfortunately, some people have this idea that there is this concretely observable thing called a 'scene' which we can take as 'reality' and/or 'signal'.
Zero noise means no added/modulated noise to that signal, in other words, a perfect capture of those photons wherever they happened to hit. Unless they hit exactly when the shutter opened or closed, if "exactly" even makes sense. :-)
Exactly
BTW, if we think of the photons as small points moving along lines, then the image consists of a bunch of deltas. The sampled image is an aliased representation of that signal.
That's the consequence of a finite sampling rate (where, since 'time' is just a dimension, we refer to a spatial 'rate')
 
And as a result, the lowest discernible level will always be determined (at least partially) by noise--especially since noise requires an area and not a single pixel. Even in an area of 2 pixels, you will have noise unless you have the exact same number of photons hit each.
Not entirely correct. Rather think of photon shot noise in terms of repeated measurements at the same pixel, since this more accurately reflects the physical process (random arrival time of photons).
Not entirely correct. Time is simply a dimension of space-time. Spatially random arrival has the same general properties as temporally random arrival. We don't generally construct a photograph using sequential observations of the same pixel (unless you have a Nipkow disc camera).
As Hillel taught us, “If not now, where? If not here, when?"
 
And as a result, the lowest discernible level will always be determined (at least partially) by noise--especially since noise requires an area and not a single pixel. Even in an area of 2 pixels, you will have noise unless you have the exact same number of photons hit each.
Not entirely correct. Rather think of photon shot noise in terms of repeated measurements at the same pixel, since this more accurately reflects the physical process (random arrival time of photons).
Not entirely correct. Time is simply a dimension of space-time. Spatially random arrival has the same general properties as temporally random arrival. We don't generally construct a photograph using sequential observations of the same pixel (unless you have a Nipkow disc camera).
Not entirely correct. A typical image, as in a photograph, will not be uniformly lit, so the statistics of a darker part of the image will differ from those of a lighter part. Hence, spatial sampling can't replace temporal sampling in general.
 
And as a result, the lowest discernible level will always be determined (at least partially) by noise--especially since noise requires an area and not a single pixel. Even in an area of 2 pixels, you will have noise unless you have the exact same number of photons hit each.
Not entirely correct. Rather think of photon shot noise in terms of repeated measurements at the same pixel, since this more accurately reflects the physical process (random arrival time of photons).
Not entirely correct. Time is simply a dimension of space-time. Spatially random arrival has the same general properties as temporally random arrival. We don't generally construct a photograph using sequential observations of the same pixel (unless you have a Nipkow disc camera).
Not entirely correct. A typical image, as in a photograph, will not be uniformly lit, so the statistics of a darker part of the image will differ from those of a lighter part. Hence, spatial sampling can't replace temporal sampling in general.
Not entirely correct. Sampling is sampling. If a phenomenon is sampled in the spatial dimensions and varies over the range of sampling, it is no different from a phenomenon being sampled in the temporal dimension and varying over that. If the phenomenon didn't vary over the dimension(s) over which it is being sampled, there would be no point sampling it. Or put it another way: if the statistics of the brighter part of an image weren't different from those of a darker part, you wouldn't have an image. If the statistics of (say) the louder part of a sound didn't differ from those of the quieter part, you wouldn't have any sound.

--
Tinkety tonk old fruit, & down with the Nazis!
Bob
 
And as a result, the lowest discernible level will always be determined (at least partially) by noise--especially since noise requires an area and not a single pixel. Even in an area of 2 pixels, you will have noise unless you have the exact same number of photons hit each.
Not entirely correct. Rather think of photon shot noise in terms of repeated measurements at the same pixel, since this more accurately reflects the physical process (random arrival time of photons).
Not entirely correct. Time is simply a dimension of space-time. Spatially random arrival has the same general properties as temporally random arrival. We don't generally construct a photograph using sequential observations of the same pixel (unless you have a Nipkow disc camera).
Not entirely correct. A typical image, as in a photograph, will not be uniformly lit, so the statistics of a darker part of the image will differ from those of a lighter part. Hence, spatial sampling can't replace temporal sampling in general.
Not entirely correct. Sampling is sampling. If a phenomenon is sampled in the spatial dimensions and varies over the range of sampling, it is no different from a phenomenon being sampled in the temporal dimension and varying over that. If the phenomenon didn't vary over the dimension(s) over which it is being sampled, there would be no point sampling it. Or put it another way: if the statistics of the brighter part of an image weren't different from those of a darker part, you wouldn't have an image. If the statistics of (say) the louder part of a sound didn't differ from those of the quieter part, you wouldn't have any sound.

--
Tinkety tonk old fruit, & down with the Nazis!
Bob
Entirely correct. We cannot separate the sampling concepts of space and time for the purposes of imaging. For one, we're talking about sampling a finite area (i.e. light entering the area of the aperture and redirected to the "2D" sensor) and a finite time (i.e. only the light reflected or transmitted during the exposure)--just like how the universe works. We cannot have one without the other for the scope of this discussion.

If everything existed everywhere and at all times (i.e. "0 sampling" / the separation of the above), we'd essentially be frozen inside a single pixel in an absolute 100% signal / 0 noise situation... :)
 
This reminds me of what Bob wrote a year ago:


And actually, I don't post in the PST forum very much; I find the discussions there very interesting on occasion, but mostly pretty arcane and esoteric.

:-)
 
