Perception, reality and a signal below the noise...

No. Exposure is signal per unit area. The same exposure will saturate all sensors with the same QE at the same point.
I have never denied that. But exposure is not what I meant by “light”, photons are.
But that's not what you see in the image, that's only what's recorded. The image is just a number code that's reproduced by a printer or display.
Yes, and it just so happens that the convention that we often use to quantify how the recorded information maps to a reproduced tone (ISO 12232) is defined in terms of focal plane exposure. But it wouldn’t have to be the case. Neither is really what you see in the image.
Pure white is the brightest you are going to get, whatever the camera records.
What is white?
It seems logical that looking at light per unit area would appear to nullify a linear advantage of a larger area.
No, because with a larger area, you record more light overall, so the signal to noise ratio is higher over the total area of the image.
Yes, that’s exactly my point. It would appear to nullify the advantage, when in fact “it generally is the case that larger sensors can hold more light before clipping, for the image as a whole or for any fixed fraction of it”.
My point is precisely that exposure is probably not the relevant thing to look at for such comparisons, if only because equal exposures on differently-sized sensors do not produce equivalent images.
Equivalent in what terms? And what do you mean by 'exposure' ?
“Equivalent” in terms of angle of view, DOF, photon shot noise, diffraction, motion blur. “Exposure” as in “focal plane exposure”, generally expressed in lx·s, which I believe is the same definition that you are using.

So, if you have the same exposure on a larger sensor, you have at least one of: a wider angle of view, a shallower DOF, a longer exposure time. If you equalize those parameters, you have similar noise and a lower exposure on the larger sensor (thus more room for highlights).
The clipping point is the same. On a 14-bit sensor, it's 16383 units, which translates to RGB 255.
On most current sensors, it’s 2000-3000 electrons per µm². Does it really mean much that they all use most of their 14-bit range to represent the number of electrons from an individual photosite? Of course they do, why use so many bits otherwise?
They use more bits to reduce quantisation error.
Which would be pointless if they didn’t use the whole range, right? Hence why they use approximately the same numerical range for a given bit depth, not because they clip at the same point.
But that's bits per pixel, not bits per square micrometer. And some FF sensors have 100,000 electrons per photosite.
Yes, photosites of 35 µm². What is your point?
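Just to sanity-check those figures against the 2000-3000 e⁻/µm² number above (rough arithmetic, illustrative values only):

```python
# Rough sanity check of the figures above (illustrative, not measured values):
full_well_e = 100_000        # electrons quoted for a large FF photosite
photosite_area_um2 = 35.0    # roughly a 5.9 µm pixel pitch
print(full_well_e / photosite_area_um2)   # ~2860 e-/µm², within the 2000-3000 range
```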
(I say “most” of the range because it turns out that the RAWs from my G1 X III only go up to 15871, and those from my K-70, to 16313.)
Black level offset varies between cameras, and Canon's are generally higher, but the difference is not significant. The question is what does the maximum number represent when converted to image RGB?
That would depend on how you convert it.
A larger signal does not make the exposure brighter, but the larger signal from a larger sensor DOES have less noise. With less noise you can underexpose more to preserve highlights and adjust in processing.
“Underexpose” is such a loaded term. If a given exposure gives you the same noise as an equivalent (higher) exposure on a smaller sensor, and that exposure was fine on the small sensor, then what makes the lower (but still equivalent) exposure on the larger sensor “underexposed”? “Under” compared to what?

If the answer is “compared to what the sensor could have held”, then I believe that my point is made.
But that isn't the answer, so what's your point?
What is the answer, then?
 
Again, noise/grain is a factor. Smaller sensors have more noise...
I don't know of any evidence to support that generalization.

https://www.photonstophotos.net/Cha...n D850_14,Sony ILCE-6500_14,Sony ILCE-7RM3_14
This only measures READ NOISE not total signal to noise ratio, which includes shot noise.

It has very little bearing on image noise except in dark shadows.

Try a proper SNR chart...

https://www.dxomark.com/Cameras/Compare/Side-by-side/Sony-A6500-versus-Sony-A7R-III___1127_1187
That's on an equal-exposure basis. On an equal-aperture or equal-area basis larger sensors have no noise advantage except higher saturation levels. As always, it's a matter of choosing the appropriate measure for the photographic problem at hand. Your link applies where depth of field or exposure time is not important.

But now we have another equivalence thread. Sorry, Erik.
 
I guess that the discussion relates to how human perception works. Dunning–Kruger obviously comes to mind.
Returning to the subject, overcoming that requires an open mind. Minds tend to be more open if the situation is non-confrontational and the bearer has nothing to lose. There's not much else you can do to suppress noise.
 
I guess that the discussion relates to how human perception works. Dunning–Kruger obviously comes to mind.
This reminds me of a quote I saw recently, something like: smart people are not right more often; they are just wrong for more sophisticated reasons.
Returning to the subject, overcoming that requires an open mind. Minds tend to be more open if the situation is non-confrontational and the bearer has nothing to lose. There's not much else you can do to suppress noise.
 
No. Exposure is signal per unit area. The same exposure will saturate all sensors with the same QE at the same point.
I have never denied that. But exposure is not what I meant by “light”, photons are.
But that's not what you see in the image, that's only what's recorded. The image is just a number code that's reproduced by a printer or display.
Yes, and it just so happens that the convention that we often use to quantify how the recorded information maps to a reproduced tone (ISO 12232) is defined in terms of focal plane exposure. But it wouldn’t have to be the case. Neither is really what you see in the image.
So your point is...?
Pure white is the brightest you are going to get, whatever the camera records.
What is white?
Whatever the viewing device says it is.
It seems logical that looking at light per unit area would appear to nullify a linear advantage of a larger area.
No, because with a larger area, you record more light overall, so the signal to noise ratio is higher over the total area of the image.
Yes, that’s exactly my point. It would appear to nullify the advantage, when in fact “it generally is the case that larger sensors can hold more light before clipping, for the image as a whole or for any fixed fraction of it”.
If you mean they convert more photons into electrons for the same level of saturation, then yes. However, the voltage output from the pixel is roughly the same.

It's only the variance in the voltage (noise) that changes.

If the variance is large, we don't need so many ADC bits because the quantisation error is swamped by the noise.
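A rough illustration of that trade-off (hypothetical full-well and read-noise values, using the standard LSB/√12 quantisation-noise model):

```python
import math

# Rough comparison of quantisation noise vs. analog noise (illustrative numbers):
bit_depth = 14
full_well_e = 50_000                      # hypothetical saturation in electrons
lsb_e = full_well_e / (2**bit_depth - 1)  # electrons per ADC step
quant_noise_e = lsb_e / math.sqrt(12)     # standard quantisation-noise model
read_noise_e = 3.0                        # hypothetical read noise

print(f"1 LSB ~ {lsb_e:.2f} e-, quantisation noise ~ {quant_noise_e:.2f} e- rms, "
      f"vs read noise ~ {read_noise_e:.1f} e- rms")
# The quantisation error is small next to the analog noise, which is why
# extra bits only help when the noise itself is low.
```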
My point is precisely that exposure is probably not the relevant thing to look at for such comparisons, if only because equal exposures on differently-sized sensors do not produce equivalent images.
Equivalent in what terms? And what do you mean by 'exposure' ?
“Equivalent” in terms of angle of view, DOF, photon shot noise, diffraction, motion blur.
Why does any of that matter? It's just complicating things.
“Exposure” as in “focal plane exposure”, generally expressed in lx·s, which I believe is the same definition that you are using.
Well, since exposure in lx·s is proportional to t/N² for a given scene luminance (N being the f-number), the same t/N² should give you the same photometric exposure, assuming quantum efficiency is the same.
So, if you have the same exposure on a larger sensor, you have at least one of: a wider angle of view, a shallower DOF, a longer exposure time.
That's not remotely at the back of my mind when I go and take a picture.
If you equalize those parameters, you have similar noise and a lower exposure on the larger sensor (thus more room for highlights).
That is an exposure choice. But we already know larger sensors have more latitude.

Difference in SNR between two sensors is (roughly) n2/n1 = sqrt(a2/a1) for the same photometric exposure (average photons captured/unit area).

Where n = SNR and a = sensor area.

If you reduce exposure by 1 stop, you reduce SNR by a factor of about √2.

So if I have a minimum SNR threshold, then I have less exposure latitude with a smaller sensor.

Why bring equivalence into the equation? If I am not near the threshold, then equivalence is irrelevant. If noise is not visible in the image, it's not relevant either.

Secondly, it isn't necessarily true that you have more room for highlights. This will depend on how far you have to lift the shadows to restore the midtones to the same place, and since shadow SNR is increasingly affected by read noise in the lower stops, this is not predictable based on sensor area.

But in general, large sensors give better SNR and DR for the same shooting conditions than smaller sensors. So you have more exposure latitude with larger sensors. But the reason is simply noise - either shot or read noise.

We know this. It's not a point that needs debating or complicating.
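As a numerical sketch of the relationships above (shot-noise-limited case only, approximate sensor areas):

```python
import math

def snr_ratio_for_equal_exposure(a1_mm2, a2_mm2):
    """Approximate SNR ratio n2/n1 at equal photometric exposure, assuming
    purely shot-noise-limited sensors with the same QE (per the formula above)."""
    return math.sqrt(a2_mm2 / a1_mm2)

# Example: APS-C (~368 mm^2) vs full frame (~864 mm^2)
print(snr_ratio_for_equal_exposure(368, 864))   # ~1.53

# Reducing exposure by one stop halves the photon count,
# so shot-noise-limited SNR drops by a factor of sqrt(2):
print(1 / math.sqrt(2))                         # ~0.71
```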
The clipping point is the same. On a 14-bit sensor, it's 16383 units, which translates to RGB 255.
On most current sensors, it’s 2000-3000 electrons per µm². Does it really mean much that they all use most of their 14-bit range to represent the number of electrons from an individual photosite? Of course they do, why use so many bits otherwise?
They use more bits to reduce quantisation error.
Which would be pointless if they didn’t use the whole range, right?
What do you mean by 'use the whole range'? Whole range of what? Full well capacity of the photosite? They don't even use that, they usually use only the linear part of the response curve.
Hence why they use approximately the same numerical range for a given bit depth, not because they clip at the same point.
They use the same numerical range for a given bit depth because 14 bits go from 0 to 16383. That's the only reason. It's also the reason why 'white' is 255 255 255 in RGB.
But that's bits per pixel, not bits per square micrometer. And some FF sensors have 100,000 electrons per photosite.
Yes, photosites of 35 µm². What is your point?
You don't need 14 bits for smaller pixels. They are too noisy to benefit.
(I say “most” of the range because it turns out that the RAWs from my G1 X III only go up to 15871, and those from my K-70, to 16313.)
Black level offset varies between cameras, and Canon's are generally higher, but the difference is not significant. The question is what does the maximum number represent when converted to image RGB?
That would depend on how you convert it.
Not really. Clipped data in all three channels will be white, although some raw converters clip earlier to allow you to 'recover' some highlights.
A larger signal does not make the exposure brighter, but the larger signal from a larger sensor DOES have less noise. With less noise you can underexpose more to preserve highlights and adjust in processing.
“Underexpose” is such a loaded term. If a given exposure gives you the same noise as an equivalent (higher) exposure on a smaller sensor, and that exposure was fine on the small sensor, then what makes the lower (but still equivalent) exposure on the larger sensor “underexposed”? “Under” compared to what?

If the answer is “compared to what the sensor could have held”, then I believe that my point is made.
But that isn't the answer, so what's your point?
What is the answer, then?
Underexposed compared to a standard reference exposure. As long as it's the same for both cameras, it hardly matters what it is.
 
Hi,

There had been a lot of good discussion on DR.

Now, I have learned that 'all other things being equal' is a pretty big bin.

If we consider comparable silicon technology, it is quite reasonable that a larger sensor will offer a wider dynamic range.

We can of course utilize that increase in DR to allow more headroom for highlights.

But having a larger sensor will not change the clipping characteristics of the sensor.

It may also be thought that a competent photographer would choose exposure to avoid clipping in significant highlights.

Now, it may be that the camera does the job for the photographer and uses an exposure that allows more highlights before clipping. But that would not be a property of medium format.

On the other hand, it could be that, say, the workflow on Phase One is biased to protect highlights. With my P45+ I have definitely seen some of that.
  • Both histograms and blinkies (on image review) are 'conservative', so relying on those will not fully utilize the sensor.
  • The 'film curve' normally used in Capture One makes the image far too bright. So it looks like highlights are clipped. But, it is just that default processing makes it too bright.
Best regards

Erik
 
Pure white is the brightest you are going to get, whatever the camera records.
What is white?
Whatever the viewing device says it is.
If you mean the maximum output, then no, not really. If my camera clips, say, 5 stops above my mid-tones, then I might be happy to map that clipping level to the maximum level of a DisplayHDR 1000 monitor. But if it clips 3 stops above them, then likely not. I would probably prefer to map it to half a stop above whatever the reference white is. And that's assuming that the output even has a maximum level; OpenEXR essentially doesn't.

Even in the case where one tone-maps to SDR: in the following hypothetical scenario, just because cameras A and B both map their clipping point to “white”, surely you wouldn’t deny that camera B has more highlight headroom:

[attached chart: hypothetical tone curves for cameras A and B, both mapping their clipping point to white]

If you do, you might as well claim that all cameras with a wider dynamic range than an SDR display have the same dynamic range.
If you mean they convert more photons into electrons for the same level of saturation, then yes. However, the voltage output from the pixel is roughly the same.

It's only the variance in the voltage (noise) that changes.
To a large extent, that’s specifically because the voltage doesn’t represent the same number of photoelectrons, i.e. the conversion gains are different.
My point is precisely that exposure is probably not the relevant thing to look at for such comparisons, if only because equal exposures on differently-sized sensors do not produce equivalent images.
Equivalent in what terms? And what do you mean by 'exposure' ?
“Equivalent” in terms of angle of view, DOF, photon shot noise, diffraction, motion blur.
Why does any of that matter? It's just complicating things.
So, if you have the same exposure on a larger sensor, you have at least one of: a wider angle of view, a shallower DOF, a longer exposure time.
That's not remotely at the back of my mind when I go and take a picture.
I must be misunderstanding the question. They obviously matter since they are mostly tangible properties of the recorded image, unlike focal plane exposure or its relation to a reference exposure level.
If you equalize those parameters, you have similar noise and a lower exposure on the larger sensor (thus more room for highlights).
That is an exposure choice. But we already know larger sensors have more latitude.
Yes, the debate here is whether that extra latitude is at the top or the bottom of the dynamic range. I argue for looking at it in terms of photons because an equal number of photons is obtained with the same entrance pupil size and exposure time, and produces an image with similar characteristics, even if the photons are spread in such a way that the exposure is different, or the image is represented by different voltages.

When looking at it in this way, then the additional latitude is in the highlights. What would be the justification for using equal exposure?

If you can maintain the same photometric exposure on the larger sensor, it means that you are fine with either exposing longer for a given DOF, or sacrificing some DOF for a given exposure time. Not only does it not always hold, but even if you are fine with exposing for longer on the larger sensor, then arguably, you would also have been fine with it on the smaller sensor if you hadn’t been limited by saturation capacity. (Likewise for DOF if a lens with a sufficiently-large entrance pupil exists for that angle of view on the smaller sensor.)
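To put some (illustrative, approximate) numbers on that, here is a small sketch of how the same entrance pupil and exposure time put roughly the same total light on both formats even though the focal-plane exposure differs:

```python
# Illustrative sketch only; areas and lens choices are approximate assumptions.
SENSOR_AREA_MM2 = {"MFT": 225.0, "FF": 864.0}

def entrance_pupil_mm(focal_length_mm, f_number):
    return focal_length_mm / f_number

def relative_total_light(sensor, f_number, shutter_s):
    """Total light ∝ focal-plane exposure (∝ t / N^2) × sensor area,
    up to a scene-dependent constant that cancels in comparisons."""
    return (shutter_s / f_number**2) * SENSOR_AREA_MM2[sensor]

# Same exposure settings on both formats: the FF sensor collects ~4x the photons.
print(relative_total_light("FF", 2.8, 1/100) / relative_total_light("MFT", 2.8, 1/100))

# Equivalent settings: 25 mm f/1.8 on MFT vs 50 mm f/3.6 on FF give the same
# angle of view, the same ~14 mm entrance pupil, and (at the same shutter speed)
# roughly the same total light, but two stops less focal-plane exposure on FF.
print(entrance_pupil_mm(25, 1.8), entrance_pupil_mm(50, 3.6))
print(relative_total_light("MFT", 1.8, 1/100), relative_total_light("FF", 3.6, 1/100))
```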
Difference in SNR between two sensors is (roughly) n2/n1 = sqrt(a2/a1) for the same photometric exposure (average photons captured/unit area).

Where n = SNR and a = sensor area.

If you reduce exposure by 1 stop, you reduce SNR by a factor of about √2.

So if I have a minimum SNR threshold, then I have less exposure latitude with a smaller sensor.

Why bring equivalence into the equation? If I am not near the threshold, then equivalence is irrelevant. If noise is not visible in the image, it's not relevant either.
Well, exactly, if noise is already not a problem on the smaller sensor, then why would you not use the additional latitude of the larger sensor for the highlights by shooting an equivalent image? (Assuming that there are highlights that would justify it, of course.)
Secondly, it isn't necessarily true that you have more room for highlights. This will depend on how far you have to lift the shadows to restore the midtones to the same place, and since shadow SNR is increasingly affected by read noise in the lower stops, this is not predictable based on sensor area.
Sure, dynamic range is not only determined by sensor area, but speaking in input-referred units, when larger sensors do have more dynamic range, it’s generally thanks to a higher saturation point, not lower read noise.
But in general, large sensors give better SNR and DR for the same shooting conditions than smaller sensors. So you have more exposure latitude with larger sensors. But the reason is simply noise - either shot or read noise.
We will have to disagree on that reason.
The clipping point is the same. On a 14-bit sensor, it's 16383 units, which translates to RGB 255.
On most current sensors, it’s 2000-3000 electrons per µm². Does it really mean much that they all use most of their 14-bit range to represent the number of electrons from an individual photosite? Of course they do, why use so many bits otherwise?
They use more bits to reduce quantisation error.
Which would be pointless if they didn’t use the whole range, right?
What do you mean by 'use the whole range'? Whole range of what? Full well capacity of the photosite? They don't even use that, they usually use only the linear part of the response curve.
Whole numerical range, just like you below.
Hence why they use approximately the same numerical range for a given bit depth, not because they clip at the same point.
They use the same numerical range for a given bit depth because 14 bits go from 0 to 16383. That's the only reason.
This subdiscussion took a weird turn. You first said that all 14-bit cameras clipped at the same point of 16383, I tried to say that they were merely making (almost) full use of the numerical range of 14 bits regardless of their actual clipping point, and now you are making it look as if you were explaining that to me. As a software engineer working on image compression, I am somewhat aware of how bits work, thank you very much.
The question is what does the maximum number represent when converted to image RGB?
That would depend on how you convert it.
Not really. Clipped data in all three channels will be white, although some raw converters clip earlier to allow you to 'recover' some highlights.
I stand by my response.
 
Pure white is the brightest you are going to get, whatever the camera records.
What is white?
Whatever the viewing device says it is.
If you mean the maximum output, then no, not really. If my camera clips, say, 5 stops above my mid-tones, then I might be happy to map that clipping level to the maximum level of a DisplayHDR 1000 monitor. But if it clips 3 stops above them, then likely not. I would probably prefer to map it to half a stop above whatever the reference white is. And that's assuming that the output even has a maximum level; OpenEXR essentially doesn't.
If all cameras calibrated middle grey at 18% of saturation, or 12.5% if we use the ISO (SAT) standard, then they would clip the same number of EV above that point, irrespective of the sensor.
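Spelling out the arithmetic behind that:

```python
import math

# Headroom above metered middle grey, in EV, if middle grey is always placed at
# a fixed fraction of sensor saturation (the same convention for every camera):
for grey_fraction in (0.18, 0.125):
    print(f"grey at {grey_fraction:g} of saturation -> "
          f"{math.log2(1 / grey_fraction):.2f} EV of highlight headroom")
# ~2.47 EV and 3.00 EV respectively, regardless of sensor size.
```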
Even in the case where one tone-maps to SDR: in the following hypothetical scenario, just because cameras A and B both map their clipping point to “white”, surely you wouldn’t deny that camera B has more highlight headroom:
A curved line assumes an adjustment to the data, which could be applied to any camera (and frequently is). But it's not intrinsic to a sensor.
[attached chart: hypothetical tone curves for cameras A and B, both mapping their clipping point to white]

If you do, you might as well claim that all cameras with a wider dynamic range than an SDR display have the same dynamic range.
Doesn't mean that at all.
If you mean they convert more photons into electrons for the same level of saturation, then yes. However, the voltage output from the pixel is roughly the same.

It's only the variance in the voltage (noise) that changes.
To a large extent, that’s specifically because the voltage doesn’t represent the same number of photoelectrons, i.e. the conversion gains are different.
Quite. The difference is not the 'size' of the output signal, but the amount of noise it contains.
My point is precisely that exposure is probably not the relevant thing to look at for such comparisons, if only because equal exposures on differently-sized sensors do not produce equivalent images.
Equivalent in what terms? And what do you mean by 'exposure' ?
“Equivalent” in terms of angle of view, DOF, photon shot noise, diffraction, motion blur.
Why does any of that matter? It's just complicating things.
So, if you have the same exposure on a larger sensor, you have at least one of: a wider angle of view, a shallower DOF, a longer exposure time.
That's not remotely at the back of my mind when I go and take a picture.
I must be misunderstanding the question. They obviously matter since they are mostly tangible properties of the recorded image, unlike focal plane exposure or its relation to a reference exposure level.
Except that it's focal plane exposure that determines noise.
If you equalize those parameters, you have similar noise and a lower exposure on the larger sensor (thus more room for highlights).
That is an exposure choice. But we already know larger sensors have more latitude.
Yes, the debate here is whether that extra latitude is at the top or the bottom of the dynamic range. I argue for looking at it in terms of photons because an equal number of photons is obtained with the same entrance pupil size and exposure time, and produces an image with similar characteristics, even if the photons are spread in such a way that the exposure is different, or the image is represented by different voltages.

When looking at it in this way, then the additional latitude is in the highlights. What would be the justification for using equal exposure?
Raw data is linear. Highlight extension is directly a result of internal processing and meter calibration which is different for every camera.

I can also create any tone curve I want and expose however I want to create as much headroom as I want. The only thing that limits me is how noisy the midtones and shadows will be when I alter the tone curve. So what limits me is shadow noise, not highlight noise.
If you can maintain the same photometric exposure on the larger sensor, it means that you are fine with either exposing longer for a given DOF, or sacrificing some DOF for a given exposure time.
Not only does it not always hold, but even if you are fine with exposing for longer on the larger sensor, then arguably, you would also have been fine with it on the smaller sensor if you hadn’t been limited by saturation capacity. (Likewise for DOF if a lens with a sufficiently-large entrance pupil exists for that angle of view on the smaller sensor.)
This is only of any remote concern to me if it reaches the edge of some threshold where I cannot get less DOF because my lens is not fast enough, or more because I didn't bring a tripod and the exposure time is too long. The rest of the time, it is of no consequence.
Difference in SNR between two sensors is (roughly) n2/n1 = sqrt(a2/a1) for the same photometric exposure (average photons captured/unit area).

Where n = SNR and a = sensor area.

If you reduce exposure by 1 stop, you reduce SNR by a factor of about √2.

So if I have a minimum SNR threshold, then I have less exposure latitude with a smaller sensor.

Why bring equivalence into the equation? If I am not near the threshold, then equivalence is irrelevant. If noise is not visible in the image, it's not relevant either.
Well, exactly, if noise is already not a problem on the smaller sensor, then why would you not use the additional latitude of the larger sensor for the highlights by shooting an equivalent image? (Assuming that there are highlights that would justify it, of course.)
Because for the kinds of photography I do, it is seldom an issue, and I prefer a lighter, smaller camera. I accept that occasionally I might have recovered an extra 1/2 EV from a larger sensor, but there are workarounds like exposure bracketing and tone-mapping.
Secondly, it isn't necessarily true that you have more room for highlights. This will depend on how far you have to lift the shadows to restore the midtones to the same place, and since shadow SNR is increasingly affected by read noise in the lower stops, this is not predictable based on sensor area.
Sure, dynamic range is not only determined by sensor area, but speaking in input-referred units, when larger sensors do have more dynamic range, it’s generally thanks to a higher saturation point, not lower read noise.
Mostly, yes. But in a practical sense, DR is noise limited by the lowest midtone exposure you can get away with and still pull useful shadow detail.

There is seldom any noticeable noise in the highlights, but the SNR below the midtones is quite poor (on any camera). If you meter 1 EV below 18% grey, the signal is only 9% of saturation and you still have several EV to go before you hit the noise floor, with a very sharp spike at around -5 to -6 EV as read noise starts to kick in.
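To put very rough numbers on that (a shot-noise-plus-read-noise sketch with made-up pixel values, middle grey placed at 18% of saturation):

```python
import math

FULL_WELL_E = 30_000    # hypothetical saturation, electrons
READ_NOISE_E = 3.0      # hypothetical read noise, electrons RMS

def snr(fraction_of_saturation):
    signal = FULL_WELL_E * fraction_of_saturation
    return signal / math.sqrt(signal + READ_NOISE_E**2)   # shot + read noise

for ev_below_grey in range(0, 9):
    frac = 0.18 / 2**ev_below_grey
    print(f"{ev_below_grey} EV below grey: {100*frac:6.3f}% of saturation, "
          f"SNR ~ {snr(frac):6.1f}")
# SNR falls by roughly sqrt(2) per stop until the deepest shadows,
# where the read-noise term starts to dominate.
```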
But in general, large sensors give better SNR and DR for the same shooting conditions than smaller sensors. So you have more exposure latitude with larger sensors. But the reason is simply noise - either shot or read noise.
We will have to disagree on that reason.
Whatever.
The clipping point is the same. On a 14-bit sensor, it's 16383 units, which translates to RGB 255.
On most current sensors, it’s 2000-3000 electrons per µm². Does it really mean much that they all use most of their 14-bit range to represent the number of electrons from an individual photosite? Of course they do, why use so many bits otherwise?
They use more bits to reduce quantisation error.
Which would be pointless if they didn’t use the whole range, right?
What do you mean by 'use the whole range'? Whole range of what? Full well capacity of the photosite? They don't even use that, they usually use only the linear part of the response curve.
Whole numerical range, just like you below.
Sorry, I apologise for my lack of psychic ability. But an APSC sensor and FF sensor have the same numerical range, so I was struggling to understand your point.
Hence why they use approximately the same numerical range for a given bit depth, not because they clip at the same point.
They use the same numerical range for a given bit depth because 14 bits go from 0 to 16383. That's the only reason.
This subdiscussion took a weird turn. You first said that all 14-bit cameras clipped at the same point of 16383, I tried to say that they were merely making (almost) full use of the numerical range of 14 bits regardless of their actual clipping point, and now you are making it look as if you were explaining that to me.
As a software engineer working on image compression, I am somewhat aware of how bits work, thank you very much.
Should I send you my CV so you know who you are patronising?
The question is what does the maximum number represent when converted to image RGB?
That would depend on how you convert it.
Not really. Clipped data in all three channels will be white, although some raw converters clip earlier to allow you to 'recover' some highlights.
I stand by my response.
Good for you.

--
"A designer knows he has achieved perfection not when there is nothing left to add, but when there is nothing left to take away." Antoine de Saint-Exupery
 
Again, noise/grain is a factor. Smaller sensors have more noise...
I don't know of any evidence to support that generalization.

https://www.photonstophotos.net/Cha...n D850_14,Sony ILCE-6500_14,Sony ILCE-7RM3_14
This only measures READ NOISE not total signal to noise ratio, which includes shot noise.

It has very little bearing on image noise except in dark shadows.

Try a proper SNR chart...

https://www.dxomark.com/Cameras/Compare/Side-by-side/Sony-A6500-versus-Sony-A7R-III___1127_1187
That's on an equal-exposure basis. On an equal-aperture or equal-area basis larger sensors have no noise advantage except higher saturation levels. As always, it's a matter of choosing the appropriate measure for the photographic problem at hand. Your link applies where depth of field or exposure time is not important.
So what did your link represent?

Surely it's simply easier to start from an equal exposure (which we have data for) and then work out how much exposure adjustment you require to get the DOF or shutter speed you want?

When, or if, it happens to be an issue and depending on what you plan to achieve.

The trouble with equivalence is that it creates a non-existent universe where everyone is trying to take the same photo and print it at the same size.
 
Again, noise/grain is a factor. Smaller sensors have more noise...
I don't know of any evidence to support that generalization.

https://www.photonstophotos.net/Cha...n D850_14,Sony ILCE-6500_14,Sony ILCE-7RM3_14
This only measures READ NOISE not total signal to noise ratio, which includes shot noise.

It has very little bearing on image noise except in dark shadows.

Try a proper SNR chart...

https://www.dxomark.com/Cameras/Compare/Side-by-side/Sony-A6500-versus-Sony-A7R-III___1127_1187
That's on an equal-exposure basis. On an equal-aperture or equal-area basis larger sensors have no noise advantage except higher saturation levels. As always, it's a matter of choosing the appropriate measure for the photographic problem at hand. Your link applies where depth of field or exposure time is not important.
So what did your link represent?
Noise, in (log) units of electrons. It's labeled.

Sorry, I don't want to discuss it further. I have some work to do.
 
Even in the case where one tone-maps to SDR: in the following hypothetical scenario, just because cameras A and B both map their clipping point to “white”, surely you wouldn’t deny that camera B has more highlight headroom:
A curved line assumes an adjustment to the data, which could be applied to any camera (and frequently is). But it's not intrinsic to a sensor.
If you are talking about white, then you are logically talking about a processed image. I mostly meant that just because whatever processor we use might map saturation to white by default doesn’t really mean much about the underlying saturation levels.
I must be misunderstanding the question. They obviously matter since they are mostly tangible properties of the recorded image, unlike focal plane exposure or its relation to a reference exposure level.
Except that it's focal plane exposure that determines noise.
For a given sensor format, sure. But we are now comparing across formats, and then exposure without mention of the area doesn’t say much about photon noise per unit area of the final image.
So if I have a minimum SNR threshold, then I have less exposure latitude with a smaller sensor.

Why bring equivalence into the equation? If I am not near the threshold, then equivalence is irrelevant. If noise is not visible in the image, it's not relevant either.
Well, exactly, if noise is already not a problem on the smaller sensor, then why would you not use the additional latitude of the larger sensor for the highlights by shooting an equivalent image? (Assuming that there are highlights that would justify it, of course.)
Because for the kinds of photography I do, it is seldom an issue, and I prefer a lighter, smaller camera. I accept that occasionally I might have recovered an extra 1/2 EV from a larger sensor, but there are workarounds like exposure bracketing and tone-mapping.
I meant if you were to use a larger-sensor camera, sorry if I didn’t make that clear.
Secondly, it isn't necessarily true that you have more room for highlights. This will depend on how far you have to lift the shadows to restore the midtones to the same place, and since shadow SNR is increasingly affected by read noise in the lower stops, this is not predictable based on sensor area.
Sure, dynamic range is not only determined by sensor area, but speaking in input-referred units, when larger sensors do have more dynamic range, it’s generally thanks to a higher saturation point, not lower read noise.
Mostly, yes. But in a practical sense, DR is noise limited by the lowest midtone exposure you can get away with and still pull useful shadow detail.

There is seldom any noticeable noise in the highlights, but the SNR below the midtones is quite poor (on any camera). If you meter 1 EV below 18% grey, the signal is only 9% of saturation and you still have several EV to go before you hit the noise floor, with a very sharp spike at around -5 to -6 EV as read noise starts to kick in.
I don’t think the situation is that dire. As an example, here is an attempt at an SNR curve for the E-M1 II and the α7 III, both at base ISO, derived from data from photonstophotos.net and DxOMark, as a function of the number of incident photons on the sensor, and normalized to 8 MP (so it should correspond to DxOMark’s “print SNR”):

[attached chart: modeled SNR vs. total incident photons for the E-M1 II and α7 III at base ISO, normalized to 8 MP]

The E-M1 II curve is slightly more to the left because it appears to have slightly better quantum efficiency. If we discount that, then most of the curves practically overlap:

[attached chart: the same SNR curves with the quantum-efficiency difference factored out; they largely overlap]

At any rate, it’s quite clear from this that if a picture is fine on an E-M1 II at base ISO, then it should also be fine on an α7 III at base ISO with the same number of photons, even if those photons being spread onto a larger area means a lower focal plane exposure for all tones.

(The chart might look different for an EOS RP, sure.)
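Something along these lines can reproduce the general shape (a simplified shot-noise-plus-read-noise model, not necessarily the exact derivation; the per-pixel parameters below are illustrative stand-ins, not the measured values):

```python
import math

def normalized_snr(total_photons, n_pixels, sat_e, read_e, qe):
    """Rough '8 MP print' SNR for a uniform patch: shot noise + read noise only,
    with the signal treated as clipped once a pixel exceeds its saturation level."""
    e_per_pixel = qe * total_photons / n_pixels
    if e_per_pixel > sat_e:
        return float("inf")                       # clipped: no meaningful SNR
    per_pixel = e_per_pixel / math.sqrt(e_per_pixel + read_e**2)
    return per_pixel * math.sqrt(n_pixels / 8e6)  # downsampling to 8 MP averages noise

# Illustrative parameters, roughly the right ballpark but NOT measured values:
em1ii = dict(n_pixels=20e6, sat_e=20_000, read_e=3.0, qe=0.60)
a7iii = dict(n_pixels=24e6, sat_e=80_000, read_e=3.5, qe=0.55)

for stop in range(12):
    photons = 1e12 / 2**stop                      # total photons incident on the sensor
    print(f"{photons:9.2e} photons: "
          f"E-M1 II {normalized_snr(photons, **em1ii):7.1f}, "
          f"a7 III {normalized_snr(photons, **a7iii):7.1f}")
```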
The clipping point is the same. On a 14-bit sensor, it's 16383 units, which translates to RGB 255.
On most current sensors, it’s 2000-3000 electrons per µm². Does it really mean much that they all use most of their 14-bit range to represent the number of electrons from an individual photosite? Of course they do, why use so many bits otherwise?
They use more bits to reduce quantisation error.
Which would be pointless if they didn’t use the whole range, right?
What do you mean by 'use the whole range'? Whole range of what? Full well capacity of the photosite? They don't even use that, they usually use only the linear part of the response curve.
Whole numerical range, just like you below.
Sorry, I apologise for my lack of psychic ability. But an APSC sensor and FF sensor have the same numerical range, so I was struggling to understand your point.
It was still the same range as in my previous comment, there was no new point introduced. My “right?” was to make sure that we were on the same page. It turns out that we were, but that we had some trouble communicating that.
Hence why they use approximately the same numerical range for a given bit depth, not because they clip at the same point.
They use the same numerical range for a given bit depth because 14 bits go from 0 to 16383. That's the only reason.
This subdiscussion took a weird turn. You first said that all 14-bit cameras clipped at the same point of 16383, I tried to say that they were merely making (almost) full use of the numerical range of 14 bits regardless of their actual clipping point, and now you are making it look as if you were explaining that to me. As a software engineer working on image compression, I am somewhat aware of how bits work, thank you very much.
Should I send you my CV so you know who you are patronising?
Sorry, my intent was not to patronise, only to express my own dissatisfaction at feeling patronised. But regardless, feel free to.
 
At any rate, it’s quite clear from this that if a picture is fine on an E-M1 II at base ISO, then it should also be fine on an α7 III at base ISO with the same number of photons, even if those photons being spread onto a larger area means a lower focal plane exposure for all tones.
The number of photons is more or less the same per unit area but not over the whole sensor. At base ISO, the FF sensor forms an image with 4x the photons with the same exposure.
 
At any rate, it’s quite clear from this that if a picture is fine on an E-M1 II at base ISO, then it should also be fine on an α7 III at base ISO with the same number of photons, even if those photons being spread onto a larger area means a lower focal plane exposure for all tones.
The number of photons is more or less the same per unit area but not over the whole sensor. At base ISO, the FF sensor forms an image with 4x the photons with the same exposure.
I didn’t mean to imply that the FF camera would meter to the same number of photons, sorry for the confusion. I was describing a situation in which the camera would be set to its base ISO setting but with a dialed exposure that would result in the same number of photons hitting the sensor.

From a traditional, ISO-centered perspective, such an image would indeed be considered “underexposed” for ISO 100. But in truth, the resulting RAW file should be perfectly usable noise-wise, if the equivalent (×4) exposure on the E-M1 II is. So then, why should it matter that it’s “underexposed”, if it made room for more highlights that are welcome?
 
Are you seeing the signal yet? ;)
 
Well, of course, if one meters relative to the sensor’s saturation, then it naturally follows that no amount of increasing saturation capacity would ever give more highlight headroom. But I would like to question whether it is more relevant to look at things like that rather than in terms of number of photons.

Per “Image Sensors and Signal Processing for Digital Still Cameras”, page 308:
Because the capacitance of a pixel depends on its cross-sectional structure and the area of the pixel, pixel capacitance, and thus full-well capacity, generally scales linearly with area for a given technology […]. The maximum number of photoelectrons that a 1 × 1 μm pixel can store is approximately 1/25th the number that a 5 × 5 μm pixel can store.
Therefore, it generally is the case that larger sensors can hold more light before clipping, for the image as a whole or for any fixed fraction of it.
No. Exposure is signal per unit area. The same exposure will saturate all sensors with the same QE at the same point.
I have never denied that. But exposure is not what I meant by “light”, photons are.
But that's not what you see in the image, that's only what's recorded. The image is just a number code that's reproduced by a printer or display. Pure white is the brightest you are going to get, whatever the camera records.
It seems logical that looking at light per unit area would appear to nullify a linear advantage of a larger area.
No, because with a larger area, you record more light overall, so the signal to noise ratio is higher over the total area of the image.
However, this applies only to signals and noise with spatial periods comparable to the width or height of the sensor. For smaller details, what matters is the exposure in each local patch of the image. You can have a small shadow area with a very low S/N ratio next to a highlight area with a very good S/N ratio. These ratios are not affected by the exposures in other parts of the image.
 
No, because with a larger area, you record more light overall, so the signal to noise ratio is higher over the total area of the image.
However, this applies only to signals and noise with spatial periods comparable to the width or height of the sensor. For smaller details, what matters is the exposure in each local patch of the image. You can have a small shadow area with a very low S/N ratio next to a highlight area with a very good S/N ratio. These ratios are not affected by the exposures in other parts of the image.
Sure, but I believe that this discussion assumes the use of equivalent focal lengths, so each patch of the image is correspondingly larger on the larger sensor. So, we can talk more or less equivalently about the whole image, half of it, or a millionth of it.
 
Well, of course, if one meters relative to the sensor’s saturation, then it naturally follows that no amount of increasing saturation capacity would ever give more highlight headroom. But I would like to question whether it is more relevant to look at things like that rather than in terms of number of photons.

Per “Image Sensors and Signal Processing for Digital Still Cameras”, page 308:
Because the capacitance of a pixel depends on its cross-sectional structure and the area of the pixel, pixel capacitance, and thus full-well capacity, generally scales linearly with area for a given technology […]. The maximum number of photoelectrons that a 1 × 1 μm pixel can store is approximately 1/25th the number that a 5 × 5 μm pixel can store.
Therefore, it generally is the case that larger sensors can hold more light before clipping, for the image as a whole or for any fixed fraction of it.
No. Exposure is signal per unit area. The same exposure will saturate all sensors with the same QE at the same point.
I have never denied that. But exposure is not what I meant by “light”, photons are.
But that's not what you see in the image, that's only what's recorded. The image is just a number code that's reproduced by a printer or display. Pure white is the brightest you are going to get, whatever the camera records.
It seems logical that looking at light per unit area would appear to nullify a linear advantage of a larger area.
No, because with a larger area, you record more light overall, so the signal to noise ratio is higher over the total area of the image.
However, this applies only to signals and noise with spatial periods comparable to the width or height of the sensor. For smaller details, what matters is the exposure in each local patch of the image. You can have a small shadow area with a very low S/N ratio next to a highlight area with a very good S/N ratio. These ratios are not affected by the exposures in other parts of the image.
Hi,

Any area would be built from pixels. Each of the pixels would have an SNR which is the square root of the photon count. Making the pixels smaller, there would be more pixels defining that area, but the SNR over that area would be nearly the same.

If we have a larger sensor, it will have a higher SNR if the silicon design is identical, the exposure relative to saturation is the same, and the image is viewed at the same size.
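A tiny simulation of that point (arbitrary numbers, shot noise only): splitting the same patch into more, smaller pixels leaves the SNR over the patch essentially unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)
total_mean_photons = 1_000_000   # expected photons over the same patch of sensor

def patch_snr(n_pixels, trials=2000):
    """SNR of the summed signal over the patch, estimated by Poisson simulation."""
    per_pixel_mean = total_mean_photons / n_pixels
    sums = rng.poisson(per_pixel_mean, size=(trials, n_pixels)).sum(axis=1)
    return sums.mean() / sums.std()

print(patch_snr(16))      # a few big pixels
print(patch_snr(1024))    # many small pixels
# Both come out near sqrt(1_000_000) = 1000: the patch SNR depends on the
# photons collected over the patch, not on how many pixels divide it up.
```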

Best regards

Erik
 
Well, of course, if one meters relative to the sensor’s saturation, then it naturally follows that no amount of increasing saturation capacity would ever give more highlight headroom. But I would like to question whether it is more relevant to look at things like that rather than in terms of number of photons.

Per “Image Sensors and Signal Processing for Digital Still Cameras”, page 308:
Because the capacitance of a pixel depends on its cross-sectional structure and the area of the pixel, pixel capacitance, and thus full-well capacity, generally scales linearly with area for a given technology […]. The maximum number of photoelectrons that a 1 × 1 μm pixel can store is approximately 1/25th the number that a 5 × 5 μm pixel can store.
Therefore, it generally is the case that larger sensors can hold more light before clipping, for the image as a whole or for any fixed fraction of it.
No. Exposure is signal per unit area. The same exposure will saturate all sensors with the same QE at the same point.
I have never denied that. But exposure is not what I meant by “light”, photons are.
But that's not what you see in the image, that's only what's recorded. The image is just a number code that's reproduced by a printer or display. Pure white is the brightest you are going to get, whatever the camera records.
It seems logical that looking at light per unit area would appear to nullify a linear advantage of a larger area.
No, because with a larger area, you record more light overall, so the signal to noise ratio is higher over the total area of the image.
However, this applies only to signals and noise with spatial periods comparable to the width or height of the sensor. For smaller details, what matters is the exposure in each local patch of the image. You can have a small shadow area with a very low S/N ratio next to a highlight area with a very good S/N ratio. These ratios are not affected by the exposures in other parts of the image.
Hi,

Any area would be built from pixels. Each of the pixels would have an SNR which is the square root of the photon count. Making the pixels smaller, there would be more pixels defining that area, but the SNR over that area would be nearly the same.
A single pixel doesn't have a S/N ratio in a still image. It makes a measurement, like a hand-held light meter or a thermometer. That measurement has an uncertainty, an error bar if you like, the width of which depends on the measurement. There's no way to split that single number into "signal" and "noise".

It's only when you have a array or a sequence of measurements that you can talk about a signal.
If we have a larger sensor, it will have a higher SNR if the silicon design is identical, the exposure relative to saturation is the same, and the image is viewed at the same size.
But not if it is viewed at the same degree of enlargement.
 
A single pixel doesn't have a S/N ratio in a still image. It makes a measurement, like a hand-held light meter or a thermometer. That measurement has an uncertainty, an error bar if you like, the width of which depends on the measurement. There's no way to split that single number into "signal" and "noise".

It's only when you have a array or a sequence of measurements that you can talk about a signal.
This might be tangential to the discussion but... a single pixel does have noise characteristics. The sequence of measurements is in time.
 
