Dynamic range and RAW file "bit depth"

Started Jan 25, 2014 | Questions
Jack Hogan Veteran Member • Posts: 6,646
Re: Dynamic range and RAW file "bit depth"

The_Suede wrote:

szhorvat wrote:

xpatUSA wrote:

In other words, if I present my sensor output with its theoretical 16.23 EV range to an 8-bit ADC, is the sensor dynamic range any different than if it is presented to a 14-bit ADC?

Disclaimer: I don't go to DXOMark very often. And I'm not sure that I've answered the question.

I'll say: For practical purposes, yes, the dynamic range will be smaller in that case, provided that the digitization is done in a linear way. The smallest number of electrons we could measure (by looking at the digitized signal) will be 77000/2^8 = 300. It won't be possible to see the difference between 200 or 400 electrons, as all will be rounded to the same 300.

Unless ... we take the "dithering" done by natural noise into account. Which I didn't think of.

Or unless the digitization is done in a nonlinear way, i.e. out of the 2^8 = 256 values, 1 will correspond to something smaller than 300 to give higher resolution in the dim range. So another possibility is that the sensor itself is linear but the digitization is done in a non-linear way.

here you ask the correct question:

Or maybe there was something else I didn't think of. Which is why I asked the question

Thanks for taking the time to reply.

DR is a statistical result or value. When you have integer data only, like if you count the average number of perfect marbles present in 1000 bowls, you can still get fractional results, like "14.8467 marbles per bowl", even though the base unit is by necessity only possible in whole numbers. A perfect marble is either there or not, one or zero.

This also means that if only one in maybe around five bowls contains a perfect marble, you get an average amount that is a fraction of one, like 0.195 or something. Which is both possible from an average point of view, and impossible from a practical point of view for each individual bowl. 0.195 marbles is a broken marble, which is contradictory to the counting rule we set up.

the same is true for bits and photons. If only one in five positions contains an error of "one", the average is 0.2. Lower than the lowest possible quantum of the measurement unit.

and when the average measurement error is smaller than the lowest definable step in your metric, you get an average measurement resolution that is higher than what your metric defines. Then the value tells you how often the error occurs instead of how many errors per instance you will get.

Well put, The_Suede. I will only add that, therefore, with a large enough sample one could display quite a large dynamic range with just one bit (as in B&W images in newspapers or 1-bit ADCs in music). In photography sensors the necessary oversampling is not quite there yet (i.e. not enough pixels) for typical applications. The result of an incorrectly chosen bit depth is visible posterization (shadow blocking) and color artifacts. At current pixel densities, therefore, the compromise sweet spot appears to be to have one ADU equal to about 0.5-1.5x the read noise in electrons at base ISO.  John Sheehy did some tests a while back that suggest that read noise should optimally be about 1.3 ADUs.  I'd be interested to see the tests redone on current sensors.

Jack
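[Editor's note] The dithering-and-averaging effect described above lends itself to a quick numerical sketch. This is a toy Python illustration with made-up numbers (a constant 0.2 ADU signal, 1 ADU of Gaussian read noise), not a model of any particular sensor:

```python
import numpy as np

rng = np.random.default_rng(0)

# A constant "scene" signal of 0.2 ADU: below one quantization step,
# so a noiseless quantizer rounds every sample to the same integer.
signal = 0.2
n = 100_000

# Without dither: every sample quantizes to 0 and the signal is lost.
undithered = np.round(np.full(n, signal))

# With ~1 ADU of Gaussian read noise acting as natural dither, the
# *average* of many quantized samples recovers the sub-ADU signal,
# just like the fractional marbles-per-bowl average.
dithered = np.round(signal + rng.normal(0.0, 1.0, n))

print(undithered.mean())  # exactly 0.0: the signal is invisible
print(dithered.mean())    # close to 0.2: recovered on average
```

The price, as noted above, is that a single sample tells you almost nothing; only the statistics over many samples do.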

bobn2
bobn2 Forum Pro • Posts: 58,093
Re: Dynamic range and RAW file "bit depth"

Jack Hogan wrote:


Well put, The_Suede. I will only add that, therefore, with a large enough sample one could display quite a large dynamic range with just one bit (as in B&W images in newspapers or 1-bit ADCs in music).

Not just B&W images in newspapers - essentially all printed images, including the output of inkjet and dye-sub printers. These devices cannot change the intensity of the dye/pigment, only the size of the blob deposited. So they are making an image with a large dynamic range using essentially a 1 bit output.


Bob
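[Editor's note] Bob's printer point can be sketched with classic 1-bit halftoning; Floyd-Steinberg error diffusion below is just one illustrative algorithm, not a claim about how any particular printer or newspaper screen works:

```python
import numpy as np

def floyd_steinberg(gray):
    """1-bit halftone of a grayscale image in [0, 1] via error diffusion."""
    img = gray.astype(float).copy()
    h, w = img.shape
    out = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = 1.0 if old >= 0.5 else 0.0   # the only two "ink" levels
            out[y, x] = new
            err = old - new
            # Push the quantization error onto unprocessed neighbours.
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    img[y + 1, x - 1] += err * 3 / 16
                img[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    img[y + 1, x + 1] += err * 1 / 16
    return out

# A smooth left-to-right ramp: the output is strictly 1-bit, yet local
# dot density tracks the original tone.
ramp = np.tile(np.linspace(0, 1, 64), (64, 1))
half = floyd_steinberg(ramp)
print(np.unique(half))                 # only 0.0 and 1.0
print(abs(half.mean() - ramp.mean()))  # average tone is preserved
```

Dot density stands in for intensity, so the 1-bit output reproduces a wide tonal range once the eye (or an average) integrates over an area.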

John Sheehy Forum Pro • Posts: 20,562
Re: Dynamic range and RAW file "bit depth"

szhorvat wrote:

Most cameras can record the sensor data with at most 14 bits of resolution. This means that the ratio of the lowest and highest representable values is 2^14.

That's an over-simplification. 0 is an important value, and many cameras have pixels "darker" than 0; in fact, black is actually recorded at something like 2048, so values of 2047, 2041, etc, are possible due to the bi-directional effect of read noise. Many cameras start out with data like this, and clip any value 2048 or lower to zero and subtract 2048 from everything else, and some cameras will stretch the RAW histogram out so that the highest number doesn't lose anything. So, your 0 to 16535 range is either non-existent, or it is actually scaled from something else.
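[Editor's note] A small simulation of that black-level behaviour (the 2048 offset is from the post; the noise figure is made up):

```python
import numpy as np

rng = np.random.default_rng(1)

BLACK = 2048        # black-level offset recorded in the raw data
read_noise = 5.0    # ADU, illustrative

# Dark pixels: true signal is 0, but read noise swings both ways,
# so raw values land both above AND below the black point.
raw = np.round(BLACK + rng.normal(0, read_noise, 100_000)).astype(int)
print(raw.min() < BLACK < raw.max())  # True: values "darker than black"

# The clip-then-subtract handling described above:
clipped = np.clip(raw - BLACK, 0, None)

# Clipping discards the negative half of the noise, which biases
# the mean of deep shadows upward:
print(raw.mean() - BLACK)  # ~0 before clipping
print(clipped.mean())      # noticeably > 0 after clipping
```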

The dynamic range of a sensor is the ratio between the brightest and dimmest recordable light intensity (or clipping and noise floor).

DR in a system bound by noise is just an abstraction, not a concrete reality, as it would be if the black end were simply clipped well above the black level. Signal is detectable in digital sensors far below what their DxOMark DR would suggest; it is merely not distinguishable in fine detail at the level of individual pixels, because the variance of the noise is much greater than these small signals. Taken as an average over many local pixels, however, signals can become more distinct from noise. You can take any modern sensor and photograph a checkerboard or chessboard with it at signal levels where the brightest squares are stops below the so-called "noise floor", and if you know where the squares are in the RAW and average over those squares, you will recover the checks. The same goes if you shoot a large white letter against a black background; the white could be exposed so that it is well below the "noise floor", and filtering of noise and/or image reduction can easily result in the letter being identified.
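[Editor's note] That checkerboard thought experiment is easy to reproduce in simulation (all numbers illustrative): the squares differ by a signal two stops below the read noise, invisible per pixel, yet averaging within each known square brings them back:

```python
import numpy as np

rng = np.random.default_rng(2)

# Checkerboard target: "white" squares sit 0.25 units above "black"
# ones, two stops below a read noise of 1.0 -- per-pixel SNR is 0.25.
squares = np.indices((8, 8)).sum(axis=0) % 2        # 8x8 board pattern
scene = np.kron(squares, np.ones((64, 64))) * 0.25  # 64x64 px per square
raw = scene + rng.normal(0, 1.0, scene.shape)       # add read noise

# Averaging the 4096 pixels inside each known square shrinks the
# noise by sqrt(4096) = 64x, and the checks reappear.
recovered = raw.reshape(8, 64, 8, 64).mean(axis=(1, 3))
print((np.round(recovered / 0.25) == squares).all())  # True
```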

I notice that DxOMark lists several sensors as having a dynamic range larger than 14 EV, e.g. the Nikon D800 has 14.4 EV, which corresponds to a ratio of 2^14.4. How can they measure a dynamic range higher than the resolution of the sensor readout? If the sensor response is strictly linear, this shouldn't be possible.

This is because they are averaging neighboring pixels on sensors with more than 8MP to make virtual 8MP sensors, in "Print" mode. In "Screen" mode, they base their DR figure on the pure pixel statistics. In screen mode, however, there is no problem getting a DR greater than the bit depth, even with a linear RAW, by a mathematical process, because black-frame noise can be lower than 1 RAW value. In fact, the D300 and D60 Nikon cameras (and the Pentax K10D, IIRC) have less than 1 RAW level of read noise at base ISO, but of course this leads to posterization of deep shadows. It is true, however, that a camera with linear RAW values that had enough levels in its output to avoid posterization could not have a pixel-level DR greater than slightly less than the number of bits it has, because to avoid posterization the read noise would have to be at least 1 RAW level, preferably greater than or equal to 1.3.
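[Editor's note] The arithmetic behind a pixel-level DR figure that exceeds the bit depth, with an assumed 0.8 ADU black-frame read noise (read noise is a standard deviation, so it need not be a whole raw level):

```python
import math

full_scale = 2**14 - 1   # 14-bit linear raw clips at 16383 ADU
read_noise_adu = 0.8     # assumed sub-ADU read noise

# Engineering DR = clipping point / noise floor, in stops:
dr_stops = math.log2(full_scale / read_noise_adu)
print(round(dr_stops, 2))  # 14.32: more stops than the 14-bit container
```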

xpatUSA
xpatUSA Forum Pro • Posts: 13,304
Re: Dynamic range and RAW file "bit depth"

completelyrandomstuff wrote:

I misread; I remembered something about 'if the digitization is linear' in that text. Sorry if I offended you.

Thanks, and no offense taken. Just wanted to keep the record straight.


"Engage Brain before operating Keyboard!"
Ted

 xpatUSA's gear list:
Sigma DP2 Sigma DP1 Merrill Panasonic Lumix DMC-G1 Sigma SD14 Sigma SD15 +16 more
Iliah Borg Forum Pro • Posts: 24,742
Re: Dynamic range and RAW file "bit depth"

szhorvat wrote:

Most cameras can record the sensor data with at most 14 bits of resolution.

What is 14-bit resolution? Raw file being 14-bit?

RussellInCincinnati Veteran Member • Posts: 3,201
bit depth affects PRECISION, not dynamic range so much

szhorvat wrote: Most cameras can record the sensor data with at most 14 bits of resolution. This means that the ratio of the lowest and highest representable values is 2^14.

No, it just means that whatever the scene dynamic range might be, that range of brightnesses will be recorded in the raw file using no more than 8192 different numbers. Perhaps the stored values record scene brightnesses with a higher degree of precision at higher brightness levels, see below.

The dynamic range of a sensor is the ratio between the brightest, and dimmest recordable light intensity (or clipping and noise floor). I notice that DxOMark lists several sensors as having a dynamic range larger than 14 EV, e.g. the Nikon D800 has 14.4 EV, which corresponds to a ratio of 2^14.4.

Then hopefully the Nikon D800 raw file format lavishes many more of the 8192 different numbers on representing the upper brightness range with precision than the format dedicates to representing the darkest parts of the scene. The firmware designer could declare, for example, that the camera will store scene exposure values ranging from 13.5 to 14.5 EV in the raw files using all the numbers between 4096 and 8191, obviously with great precision. And reserve the lower raw format values to represent all the scene brightnesses below 13.5 with far less precision, etc.

One could say that a 17-bit raw format (stored numbers each ranging from 0 to 131,071) is needed to "perfectly" record the 80,000 or whatever different possible photon counts that can emerge from a commercial portable camera sensor. But realistically, the cameras can do a nice job of representing the scene with a mere 8192 different values, judiciously representing the bright, relatively noise-free parts of the scene with more closely-spaced stored values than are allotted to representing the dark parts of a scene.

xpatUSA
xpatUSA Forum Pro • Posts: 13,304
Re: bit depth affects PRECISION, not dynamic range so much

RussellInCincinnati wrote:

szhorvat wrote: Most cameras can record the sensor data with at most 14 bits of resolution. This means that the ratio of the lowest and highest representable values is 2^14.

No, it just means that whatever the scene dynamic range might be, that range of brightnesses will be recorded in the raw file using no more than 8192 different numbers.

2^14 = 16,384d unsigned?

Your point was well made, though


"Engage Brain before operating Keyboard!"
Ted

olliess Senior Member • Posts: 1,349
Re: bit depth affects PRECISION, not dynamic range so much

RussellInCincinnati wrote:

Then hopefully the Nikon D800 raw file format lavishes many more of the 8192 different numbers on representing the upper brightness range with precision, than the format dedicates to representing the darkest parts of the scene. The firmware designer could declare that the camera will store scene exposure values ranging from 13.5 to 14.5 EV in the raw files using all the numbers between 4096 and 8191, obviously with great precision, for example. And reserve the lower raw format values to represent all the scene brightnesses below 13.5 with far less precision, etc.

The brightest parts of the scene have more noise (but better signal/noise) than the darkest parts. More values would just go into "encoding the noise." It would make more sense to have wider spacing between values as the values increase, e.g., something close to logarithmic spacing. (I believe this is what the old Compressed NEF format did, with a lookup table.)
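[Editor's note] A sketch of that shot-noise-matched spacing using a square-root companding curve (the 683-level count and 65,536 e- full well are illustrative choices, and this is not the actual NEF lookup table):

```python
import numpy as np

FULL_WELL = 65_536  # electrons at clipping, illustrative

# Shot noise grows as sqrt(signal), so square-root companding keeps the
# quantization step a roughly constant fraction of the local noise:
# wide steps in the highlights, fine steps in the shadows.
def encode(electrons, levels=683):
    return np.round(np.sqrt(electrons / FULL_WELL) * (levels - 1)).astype(int)

def decode(code, levels=683):
    return (code / (levels - 1)) ** 2 * FULL_WELL

# Step size in electrons vs. shot noise at a few signal levels:
for e in (100, 1_000, 10_000, 60_000):
    c = int(encode(np.array([e]))[0])
    step = decode(c + 1) - decode(c)
    print(e, round(step, 1), round(float(np.sqrt(e)), 1))
```

Each step stays below the shot noise at that signal level, so the companding discards almost nothing visible while using far fewer codes than a linear encoding would.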

xpatUSA
xpatUSA Forum Pro • Posts: 13,304
Re: Dynamic range and RAW file "bit depth"

Iliah Borg wrote:

szhorvat wrote:

Most cameras can record the sensor data with at most 14 bits of resolution.

What is 14-bit resolution? Raw file being 14-bit?

Nice trap, Iliah

OT, but I love the word 'resolution', especially when used by itself. One of those words that means "all things to all men" **.


** 1 Corinthians ix. 22
Ted

Iliah Borg Forum Pro • Posts: 24,742
Re: Dynamic range and RAW file "bit depth"

xpatUSA wrote:

Iliah Borg wrote:

szhorvat wrote:

Most cameras can record the sensor data with at most 14 bits of resolution.

What is 14-bit resolution? Raw file being 14-bit?

Nice trap, Iliah

Heh. There is more to it. Even if one starts counting levels populated in a raw, some levels are artificially added noise. Instead of doing a simple Photon Transfer Noise experiment (it takes 20 mins max, and all that's needed is an LED light source), folks spend hours interpreting some pretty random numbers.

OT, but I love the word 'resolution', especially when used by itself. One of those words that means "all things to all men" **.

I understand the DR floor as the point at which my camera stops resolving.

Jack Hogan Veteran Member • Posts: 6,646
Re: This is a really clear explanation

completelyrandomstuff wrote:

http://www.earthboundlight.com/phototips/nikon-d300-d3-14-bit-versus-12-bit.html

It may be clear but it is quite old and misleading:

1) the D300 actually lowers read noise in 14-bit mode vs 12-bit by slowing the readout of the sensor (that's why FPS drops to 2.5 at 14-bit vs 6 at 12-bit), so if he took those images with it we really cannot conclude anything from them
2) The physics of light wrt photography is 100% linear (that is more Luminance = proportionately more photons = proportionately more electrons = proportionately higher ADUs), and linear light is what the human visual system expects to 'see'. The fact that our eyes and brains respond logarithmically to a linear stimulus has absolutely nothing to do with whether scene information is fully encoded by a 12-bit linear ADC or whether a 14 bit linear ADC would be preferable.
3) He used ACR and CNX ADL to 'recover' shadows. And we all know the amount of non-linear processing those algorithms use.

So it would be useful to see a more scientifically done test with today's cleaner cameras.

Jack

PS Plus let's not forget that DR requires defining a threshold for the denominator, which in this discussion has been implicitly assumed to be either the read noise or the signal at which SNR=1. Many cameras today show information well into 15 stops down from saturation (the D7k for instance). Very poor quality, but it's still there and part of your picture.

John Sheehy Forum Pro • Posts: 20,562
Re: Dynamic range and RAW file "bit depth"

I, John Sheehy hastily wrote:
So, your 0 to 16535 range is either non-existent, or it is actually scaled from something else.

I'm sorry; that should be 16383. The "535" was spliced from 65,535.

Iliah Borg Forum Pro • Posts: 24,742
Re: Dynamic range and RAW file "bit depth"

John Sheehy wrote:

I, John Sheehy hastily wrote:
So, your 0 to 16535 range is either non-existent, or it is actually scaled from something else.

I'm sorry; that should be 16383. The "535" was spliced from 65,535.

I'm always making the same mistake

Now I feel better.

xpatUSA
xpatUSA Forum Pro • Posts: 13,304
Re: bit depth affects PRECISION, not dynamic range so much

olliess wrote:

It would make more sense to have wider spacing values as the values increase, e.g., something close to logarithmic spacing. (I believe this is what the old Compressed NEF format did with a lookup table).

Correct, see Bill Claff's article here:

http://home.comcast.net/~NikonD70/NikonInfo/NEF_Compression.htm


"Engage Brain before operating Keyboard!"
Ted

John Sheehy Forum Pro • Posts: 20,562
Re: bit depth affects PRECISION, not dynamic range so much

xpatUSA wrote:

olliess wrote:

It would make more sense to have wider spacing values as the values increase, e.g., something close to logarithmic spacing. (I believe this is what the old Compressed NEF format did with a lookup table).

Correct, see Bill Claff's article here:

I did the math a while back, and I found that a camera that has a RAW saturation of 65K electrons at base ISO needs no more than 300 levels for the top stop, 213 levels for the next stop down, 150 levels for the next stop down, etc., until you are down to where the read noise is statistically dominant and at least 1.3 of your output ADUs, at which point the encoding becomes linear down to the lowest original RAW values.

You really need to avoid histogram shifts in the mapping, though, as they can result in color shifts after white balance.
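[Editor's note] John's per-stop counts can be ballparked by letting the step size track the shot noise at the dim end of each stop. The 1.3 divisor below is an assumed constant (a different constant rescales the counts), but whatever the constant, successive stops shrink by about sqrt(2):

```python
import numpy as np

FULL_WELL = 65_536    # electrons at raw clipping, as in the post
NOISE_PER_STEP = 1.3  # assumed: keep noise >= 1.3 quantization steps

counts = []
for stop in range(5):  # top 5 stops below clipping
    lo, hi = FULL_WELL / 2 ** (stop + 1), FULL_WELL / 2 ** stop
    step = np.sqrt(lo) / NOISE_PER_STEP  # shot noise at the stop's dim end
    counts.append(int(np.ceil((hi - lo) / step)))

print(counts)                 # levels needed per stop, brightest first
print(counts[0] / counts[1])  # ~sqrt(2) ratio between adjacent stops
```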

Allan Olesen Veteran Member • Posts: 3,391
Re: Dynamic range and RAW file "bit depth"

completelyrandomstuff wrote:

This is an interesting thought. The sensors do not record linearly. There is more data represented in highlights than in the shadows.

If you mean more unique values per stop of light, that is actually what should happen in a linear representation.

Mark Scott Abeln
Mark Scott Abeln Forum Pro • Posts: 12,626
Signal to noise — illustrations

Someone mentioned that 1-bit-depth images can be adequate, and here is an example:

(click ‘Original size’ to see full resolution)

I think this is illustrative of one of the controversies in digital photography — are more, smaller sensels better, or are fewer but larger capacity sensels more desirable? I think the trade-off depends on how much post-processing is to be done to an image.

It would be difficult to do any kind of basic editing on the 1-bit-depth image, but it is certainly adequate for display.

Here is another image:

This uses only a handful of colors. It looks fine if you don’t look too close, but would also be difficult to edit.

I’ve seen some complaints lately regarding the Nikon D800 camera, that its files are somewhat difficult to edit well, particularly when it comes to skin tones, due to noise.

A camera with larger and less noisy pixels would be easier to edit well — albeit with less resolution.

And here is an illustration of signal-to-noise ratio:

(click ‘Original size’ to see full resolution)

There is a considerable amount of signal that can be visually picked out of the noise, and using a SNR of 1:1 is pretty conservative when estimating a usable dynamic range. But it is difficult to edit such a noisy image well, without doing lots of averaging of pixels. So do we cut off the most noisy part of an image, to allow good editing, or do we keep it for the detail? I see that as a difficult trade-off in some respects.

I suspect this might be a good area for more research, which might lead to a way to broaden the dynamic range of an image without having too much objectionable noise. One unit of noise in a particular part of an image might not be as bad as a unit of noise in another part of the image — as others have mentioned, highlights have a larger absolute amount of noise compared to shadows. But the visual effects of noise will vary depending on the color of the object being photographed.

The places where the typical eye is more sensitive to shifts in hues will likely be more important than less sensitive regions — the range of human skin hues is actually quite narrow, and noise there can look quite objectionable. Also, the range of hues around blue are quite sensitive to noise, but sadly this is noise-prone in Bayer sensors. Doing some sort of variable cut-off of dark pixels depending on color might be quite useful, as well as targeted noise reduction.

But this again is a trade-off between keeping good visual detail versus making an image easily editable.

 Mark Scott Abeln's gear list:
Nikon D200 Nikon D7000 Nikon D750 Nikon AF-S DX Nikkor 35mm F1.8G Nikon AF Nikkor 50mm f/1.8D +2 more
RussellInCincinnati Veteran Member • Posts: 3,201
oops 14 bits encodes 16384 different numbers, not 8192

szhorvat wrote: Most cameras can record the sensor data with at most 14 bits of resolution. This means that the ratio of the lowest and highest representable values is 2^14.

RussellInCincinnati wrote:

No, it just means that whatever the scene dynamic range might be, that range of brightnesses will be recorded in the raw file using no more than 8192 different numbers.

xpatUSA wrote:

2^14 = 16,384d unsigned?

Oops, I keep thinking in terms of APS-C, where everything is halved.

RussellInCincinnati Veteran Member • Posts: 3,201
having pondered this, why not have 17 bit raw file option?

John Sheehy wrote:

xpatUSA wrote:

olliess wrote:

It would make more sense to have wider spacing values as the values increase, e.g., something close to logarithmic spacing. (I believe this is what the old Compressed NEF format did with a lookup table).

Correct, see Bill Claff's article here:

I did the math a while back, and I found that a camera that has a RAW saturation of 65K electrons at base ISO needs no more than 300 levels for the top stop, 213 levels for the next stop down, 150 levels for the next stop down, etc, until you are down to where the read noise is statistically dominant and at least 1.3 ADU of your output levels, where you become linear down to the lowest original RAW values.

You really need to avoid histogram shifts in the mapping, though, as they can result in color shifts after white balance.

Come to think of it, it wouldn't be skin off of anyone's back to just have a menu option for 17 bit-depth raw files. Totally linear recording of produced-electron counts. Yes you'd be encoding a ton of noise in the least significant bits, slowing down file write times since the numbers written would be less-losslessly-compressible, etc. But certain folks might want it, just like certain folks might want to encode their JPEGs in some other colorspace besides sRGB (an option my cameras have happily let me ignore for many a year now). It's not like any of us care much about raw file size any more. And the raw files would be even raw-er.

RussellInCincinnati Veteran Member • Posts: 3,201
fun to see shallow bit depth images

Mark Scott Abeln wrote:

Someone mentioned that 1 bit-depth images can be adequate, and here is an example:


Thanks for this post.
