> > > Hi,
> > > What affects the "Dynamic range"? I'm interested to know:
> > > 1. What is the Dynamic range? and...
> >
> > DR (Dynamic Range) is the range of light levels where detail can be recorded by a photo. The low end is where detail is drowned out by noise and the high end where detail is blown out by oversaturation.
>
> The low end is actually the lowest light level that can be distinguished from a total absence of light (i.e. black) by the system. It may or may not be determined by the presence of noise. It is more usually determined by the sensitivity of the sensor to very low levels of light, resulting in no meaningful output.

Can you flesh that out? Let's say I have a 24 MP sensor with a QE (in the green channel) of 51%, a per-pixel saturation limit of 82K photoelectrons, and a per-pixel read noise of 3.3 electrons. What would the DR of this sensor be? If you need more information, what information would you require?
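For reference, one common engineering definition takes per-pixel DR to be full-well capacity over read noise. Treating read noise as the only floor, the numbers above give

$$ \mathrm{DR} = \log_2\!\frac{N_{\mathrm{sat}}}{\sigma_{\mathrm{read}}} = \log_2\!\frac{82000}{3.3} \approx 14.6\ \text{stops} \quad (\approx 87.9\ \mathrm{dB}). $$

On that definition the QE and the pixel count drop out: QE matters only if you reference the range to photons at the sensor rather than to photoelectrons, and pixel count matters only if you normalise DR to a fixed output size. Whether read noise alone is the right floor is precisely what the rest of the thread argues about.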
"It is more usually determined by the sensitivity of the sensor to very low levels of light, resulting in no meaningful output." Sensitivity to levels resulting in (no) output. Let it sink.Can you flesh that out? Let's say I have a 24 MP sensor with a QE (in the green channel) of 51%, a per-pixel saturation limit of 82K photoelectrons, and a per-pixel read noise of 3.3 electrons. What would the DR of this sensor be? If you need more information, what information would you require?The low end is actually the lowest light level that can be distinguished from a total absence of light (i.e. black) by the system. It may or may not be determined by the presence of noise. It is more usually determined by the sensitivity of the sensor to very low levels of light, resulting in no meaningful output.DR (Dynamic Range) is the range of light levels where detail can be recorded by a photo. The low end is where detail is drowned out by noise and the high end where detail is blown out by oversaturation.Hi,
What affects the "Dynamic range"? I'm interested to know:
1. What is the Dynamic range? and...
> The low end is actually the lowest light level that can be distinguished from a total absence of light (i.e. black) by the system. It may or may not be determined by the presence of noise. It is more usually determined by the sensitivity of the sensor to very low levels of light, resulting in no meaningful output.

That is wrong. What counts as the 'lowest meaningful output' is determined by the level of noise. That is a basic tenet of communications theory.
> What counts as the 'lowest meaningful output' is determined by the level of noise.

But can it be clamped in a camera?
> But can it be clamped in a camera?

I'm not sure what you mean by 'clamped'. But the encoding can choose to set a 'zero' above the noise floor - otherwise known as the 'black level'.
> I'm not sure what you mean by 'clamped'. But the encoding can choose to set a 'zero' above the noise floor - otherwise known as the 'black level'.

Yes, that's what I meant.
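To make that 'black level' concrete, here is a minimal sketch (the pedestal value is hypothetical, not any particular camera's):

```python
import numpy as np

def apply_black_level(raw: np.ndarray, black_level: int) -> np.ndarray:
    """Re-zero the encoding at the black level; values below the
    pedestal (mostly read noise fluctuating around it) clip to 0."""
    return np.clip(raw.astype(np.int32) - black_level, 0, None)

# Hypothetical 12-bit raw samples around a pedestal of 256.
raw = np.array([250, 256, 300, 4095])
print(apply_black_level(raw, 256))  # [0 0 44 3839]
```

Clipping at zero discards the negative-going half of the noise around black, which is one reason many raw formats record the pedestal rather than clamping in-camera.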
> And as a result, the lowest discernible level will always be determined (at least partially) by noise--especially since noise requires an area and not a single pixel. Even in an area of 2 pixels, you will have noise unless you have the exact same number of photons hit each.

Not entirely correct. Rather think of photon shot noise in terms of repeated measurements at the same pixel, since this more accurately reflects the physical process (random arrival time of photons).
I see what you are saying, but my point was that a single pixel on a single exposure cannot identify shot noise. It has only one value, even when different photons (inevitably) hit different parts of its surface area. In digital photography, shot noise manifests itself over a minimum of 2 pixels through the process we correctly described. And to tie to the response:

> Not entirely correct. Rather think of photon shot noise in terms of repeated measurements at the same pixel, since this more accurately reflects the physical process (random arrival time of photons).
> I see what you are saying, but my point was that a single pixel on a single exposure cannot identify shot noise.

No argument there.

We often take a shortcut and perform a space-for-time substitution by measuring noise spatially over a number of pixels, rather than temporally at the same pixel, since this is more convenient (it only requires one captured image).

If we do the space-for-time substitution carefully enough (uniform illumination, etc.) then the spatial and temporal noise measurements should agree fairly well. But keep in mind that some of what we could call noise is purely a spatial phenomenon, like PRNU or other fixed-pattern sources of variability.

-F
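A minimal simulation of the space-for-time substitution under ideal conditions (all numbers made up): with identical, uniformly lit pixels, the spatial standard deviation across many pixels in one exposure matches the temporal standard deviation of one pixel across many exposures, both at the Poisson (shot noise) prediction.

```python
import numpy as np

rng = np.random.default_rng(0)
level, n = 1000.0, 100_000  # mean photoelectrons per pixel; sample count

# Temporal: one ideal pixel exposed n times under constant light.
temporal = rng.poisson(level, n)

# Spatial: n identical pixels in a single, perfectly uniform exposure.
spatial = rng.poisson(level, n)

print(f"shot-noise prediction: {np.sqrt(level):.2f} e-")  # sqrt(1000) ~ 31.6
print(f"temporal std:          {temporal.std():.2f} e-")
print(f"spatial std:           {spatial.std():.2f} e-")
```

Both measurements sample the same Poisson process, which is why the substitution works when, and only when, the pixels really are interchangeable.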
> It has only one value, even when different photons (inevitably) hit different parts of its surface area. In digital photography, shot noise manifests itself over a minimum of 2 pixels through the process we correctly described.

Agreed. My point (if a little pedantic) was just that if you use multiple pixels, you are conflating photon shot noise with other spatial sources of noise (PRNU, other fixed-pattern noise, and spatial variability in the light source itself), unless you correct for those. And you can only correct for PRNU if you take repeat measurements. Without correcting for PRNU, your spatial "noise" measurement will underestimate DR.
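And a sketch of the PRNU point, again with hypothetical numbers: a 1% PRNU inflates the single-frame spatial "noise" above the shot-noise level, and dividing out a gain map estimated from repeated frames brings it back down.

```python
import numpy as np

rng = np.random.default_rng(1)
n_pixels, n_frames, level = 10_000, 256, 1000.0
prnu = rng.normal(1.0, 0.01, n_pixels)  # hypothetical 1% per-pixel gain spread

# Repeat measurements: many uniformly lit frames of the same pixels.
frames = rng.poisson(level * prnu, (n_frames, n_pixels)).astype(float)

gain_map = frames.mean(axis=0) / frames.mean()  # per-pixel relative gain
corrected = frames[0] / gain_map                # flat-field a single frame

print(f"shot noise alone:      {np.sqrt(level):.1f}")   # ~31.6
print(f"raw spatial std:       {frames[0].std():.1f}")  # ~33.2 (shot + PRNU)
print(f"corrected spatial std: {corrected.std():.1f}")  # back near shot noise
```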
> To put it another way: A bunch of photons hitting a single pixel results in a signal. By itself, however, it cannot be distinguished as signal vs. noise, because it is itself the very measurement of the signal. The noise is only identified by variances, which by definition requires 2. Alternatively, you could measure "the same pixel twice," but that's a bit out of scope for a topic that discusses single-exposure dynamic range.

Fair enough. But the value of a single pixel (from a single image) is not "the signal"; it is a sample drawn from the distribution of "the signal" (at that location on the sensor). By taking a spatial average, you are assuming that "the signal" is constant over all the pixels you are considering in your noise measurement, which is only true under controlled conditions.
> Not entirely correct. Rather think of photon shot noise in terms of repeated measurements at the same pixel, since this more accurately reflects the physical process (random arrival time of photons).

Not entirely correct. Time is simply a dimension of space-time. Spatially random arrival has the same general properties as temporally random arrival. We don't generally construct a photograph using sequential observations of the same pixel (unless you have a Nipkow disc camera).
> To put it another way: A bunch of photons hitting a single pixel results in a signal.

More accurately, it results in a sample or observation, which by itself is neither 'signal' nor 'noise'.
> By itself, however, it cannot be distinguished as signal vs. noise, because it is itself the very measurement of the signal. The noise is only identified by variances, which by definition requires 2. Alternatively, you could measure "the same pixel twice," but that's a bit out of scope for a topic that discusses single-exposure dynamic range.

I think that there is some issue here with the use of the term 'signal'. In communications theory, the 'signal' is the intended message. In the context of photography, trying to work out what the 'intended message' is gets a little complicated, especially when your receiver is sensitive enough that quantum fluctuations in the carrier are significant.
> More accurately, it results in a sample or observation, which by itself is neither 'signal' nor 'noise'.

We are back to the existential question of defining noise and signal. I prefer your post from yesterday: photon noise or not, that is the signal. Zero noise means no added/modulated noise to that signal; in other words, a perfect capture of those photons wherever they happened to hit. Unless they hit exactly when the shutter opened or closed, if "exactly" even makes sense.
> We are back to the existential question of defining noise and signal. I prefer your post from yesterday: photon noise or not, that is the signal.

Yes, that's easier. One can get into quantum mechanics as information theory, but it tends to diverge from practicality quite quickly. Unfortunately, some people have this idea that there is this concretely observable thing called a 'scene' which we can take as 'reality' and/or 'signal'.
> Zero noise means no added/modulated noise to that signal; in other words, a perfect capture of those photons wherever they happened to hit. Unless they hit exactly when the shutter opened or closed, if "exactly" even makes sense.

Exactly.
> BTW, if we think of the photons as small points moving along lines, then the image consists of a bunch of deltas. The sampled image is an aliased representation of that signal.

That's the consequence of a finite sampling rate (where, since 'time' is just a dimension, we refer to a spatial 'rate').
> Not entirely correct. Time is simply a dimension of space-time. Spatially random arrival has the same general properties as temporally random arrival.

As Hillel taught us, "If not now, where? If not here, when?"
> Not entirely correct. Time is simply a dimension of space-time. Spatially random arrival has the same general properties as temporally random arrival. We don't generally construct a photograph using sequential observations of the same pixel (unless you have a Nipkow disc camera).

Not entirely correct. A typical image, as in a photograph, will not be uniformly lit, so the statistics of a darker part of the image will differ from those of a lighter part. Hence, spatial sampling can't be replaced by temporal sampling, in general.
> Not entirely correct. A typical image, as in a photograph, will not be uniformly lit, so the statistics of a darker part of the image will differ from those of a lighter part. Hence, spatial sampling can't be replaced by temporal sampling, in general.

Not entirely correct. Sampling is sampling. A phenomenon sampled in the spatial dimensions and varying over the range of sampling is no different from a phenomenon sampled in the temporal dimension and varying over that. If the phenomenon didn't vary over the dimension(s) over which it is being sampled, there would be no point sampling it. Or, to put it another way: if the statistics of the brighter part of an image weren't different from those of a darker part, you wouldn't have an image. If the statistics of (say) the louder part of a sound didn't vary from those of the quieter part, you wouldn't have any sound.
> Not entirely correct. Sampling is sampling. A phenomenon sampled in the spatial dimensions and varying over the range of sampling is no different from a phenomenon sampled in the temporal dimension and varying over that.

Entirely correct. We cannot separate the sampling concepts of space and time for the purposes of imaging. For one, we're talking about sampling both a finite area (i.e. light entering the aperture and redirected to the "2D" sensor) and a finite time (i.e. only the light reflected or transmitted during the exposure) -- just like how the universe works. We cannot have one without the other for the scope of this discussion.
--
Tinkety tonk old fruit, & down with the Nazis!
Bob