OP
szhorvat

Regular Member
Posts: 292

Re: Dynamic range and RAW file "bit depth"

The_Suede wrote:

szhorvat wrote:

xpatUSA wrote:

In other words, if I present my sensor output with its theoretical 16.23 EV range to an 8-bit ADC, is the sensor dynamic range any different than if it is presented to a 14-bit ADC?

Disclaimer: I don't go to DXOMark very often. And I'm not sure that I've answered the question.

I'll say: For practical purposes, yes, the dynamic range will be smaller in that case, provided that the digitization is done in a linear way. The smallest number of electrons we could measure (by looking at the digitized signal) will be 77000/2^8 ≈ 300. It won't be possible to see the difference between 200 and 400 electrons, as both will be rounded to the same ~300.

Unless ... we take the "dithering" done by natural noise into account. Which I didn't think of.

Or unless the digitization is done in a nonlinear way, i.e. out of the 2^8 = 256 values, a step of 1 will correspond to fewer than 300 electrons in the dim range, giving higher resolution there. So another possibility is that the sensor itself is linear but the digitization is done in a non-linear way.
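The linear case can be sketched in a few lines (the numbers are the ones from the post; the `quantize` helper is mine, purely for illustration):

```python
# Linear quantization sketch: 8-bit ADC over a 77,000 e- full well.
# Rounding is to the nearest code.
FULL_WELL = 77_000           # electrons at saturation
LEVELS = 2 ** 8              # 256 codes from an 8-bit ADC
STEP = FULL_WELL / LEVELS    # ~300.8 electrons per code

def quantize(electrons: float) -> int:
    """Map an electron count to the nearest 8-bit ADC code (illustrative helper)."""
    return min(round(electrons / STEP), LEVELS - 1)

# 200 e- and 400 e- land on the same code, i.e. the same ~300 e- level:
print(quantize(200) * STEP, quantize(400) * STEP)
```

Both reads print as the same ~300.8 electrons: without noise, any count between roughly 150 and 450 electrons is indistinguishable after linear 8-bit digitization.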

Here you ask the correct question:

Or maybe there was something else I didn't think of. Which is why I asked the question.

Thanks for taking the time to reply.

DR is a statistical result or value. When you have integer data only, as when you count the average number of perfect marbles present in 1000 bowls, you can still get fractional results, like "14.8467 marbles per bowl", even though the base unit is by necessity only possible in whole numbers. A perfect marble is either there or not, one or zero.

This also means that if only about one in five bowls contains a perfect marble, you get an average that is a fraction of one, like 0.195 or so. Which is both possible from an average point of view, and impossible from a practical point of view for each individual bowl. 0.195 marbles is a broken marble, which contradicts the counting rule we set up.
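The marble average is easy to check with a toy simulation (my numbers: one marble in roughly one bowl in five):

```python
# Toy check of the fractional average: every bowl holds a whole number of
# marbles (0 or 1 here), yet the mean over 1000 bowls is a fraction.
import random

random.seed(0)
bowls = [1 if random.random() < 0.2 else 0 for _ in range(1000)]
average = sum(bowls) / len(bowls)

assert all(isinstance(n, int) for n in bowls)   # only whole marbles per bowl
print(average)   # close to 0.2, a value no single bowl can hold
```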

The same is true for bits and photons. If only one in five positions contains an error of "one", the average is 0.2, lower than the lowest possible quantum of the measurement unit.

And when the average measurement error is smaller than the lowest definable step in your metric, you get an average measurement resolution that is higher than what your metric defines. Then the value tells you how often the error occurs instead of how many errors per instance you will get.
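Here is a small simulation of that dithering effect (my parameters: Gaussian noise of about half an ADC step, and an assumed black-level offset so the ADC can represent small negative excursions):

```python
# Dithering sketch: a 60 e- signal (0.2 of a 300 e- step) is invisible to a
# single quantized read, but recoverable by averaging many noisy reads.
# Assumes Gaussian noise; negative codes allowed (black-level offset).
import random

random.seed(1)
STEP = 300.0     # electrons per ADC code, as in the 8-bit example
SIGNAL = 60.0    # true signal: only 0.2 of one step
NOISE = 150.0    # read-noise sigma in electrons (my assumption, ~0.5 step)

def read_once() -> float:
    """One quantized read of signal plus noise, converted back to electrons."""
    electrons = SIGNAL + random.gauss(0.0, NOISE)
    return round(electrons / STEP) * STEP

n = 100_000
mean = sum(read_once() for _ in range(n)) / n
print(mean)   # close to 60: the sub-step signal shows up in the average
```

A single read is almost always 0 or 300, never 60; the noise spreads the reads across the neighboring codes in just the right proportion, so the average converges to the true value. That is exactly the "how often the error occurs" behavior described above.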

In fact this is what I asked previously (quoted above): "Unless ... we take the "dithering" done by natural noise into account."