Dynamic range and RAW file "bit depth"

Started Jan 25, 2014 | Questions
bobn2 Forum Pro • Posts: 58,565
Re: Dynamic range and RAW file "bit depth"

D Cox wrote:

bobn2 wrote:

Not just B&W images in newspapers - essentially all printed images, including the output of inkjet and dye-sub printers. These devices cannot change the intensity of the dye/pigment, only the size of the blob deposited. So they are making an image with a large dynamic range using essentially a 1-bit output.

However, the dot size for half-tone is specified with a number of bits.

Many inkjet printers work on a fixed droplet size. Anyway, you can think of a variable droplet size as just subdividing the pixel: it's still either black or white.
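To make the 1-bit point concrete, here is a minimal sketch of error-diffusion halftoning (Floyd-Steinberg; illustrative only, not any particular printer's actual algorithm), rendering a smooth tonal ramp with nothing but black-or-white dots:

```python
import numpy as np

def floyd_steinberg(img):
    """Render a grayscale image (values 0..1) using only black/white
    dots, diffusing the quantization error onto unprocessed pixels."""
    img = img.astype(float).copy()
    h, w = img.shape
    out = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            new = 1.0 if img[y, x] >= 0.5 else 0.0
            err = img[y, x] - new
            out[y, x] = new
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    img[y + 1, x - 1] += err * 3 / 16
                img[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    img[y + 1, x + 1] += err * 1 / 16
    return out

# A smooth ramp rendered as pure 1-bit dots: local dot density tracks
# intensity, so the binary output still spans the full tonal range.
ramp = np.tile(np.linspace(0, 1, 64), (16, 1))
print(floyd_steinberg(ramp).mean(axis=0)[::8])  # column means follow the ramp
```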


Bob

OP szhorvat Regular Member • Posts: 297
Re: Dynamic range and RAW file "bit depth"

The_Suede wrote:

szhorvat wrote:

xpatUSA wrote:

In other words, if I present my sensor output with its theoretical 16.23 EV range to an 8-bit ADC, is the sensor dynamic range any different than if it is presented to a 14-bit ADC?

Disclaimer: I don't go to DXOMark very often. And I'm not sure that I've answered the question.

I'll say: For practical purposes, yes, the dynamic range will be smaller in that case, provided that the digitization is done in a linear way. The smallest number of electrons we could measure (by looking at the digitized signal) will be 77000/2^8 ≈ 300. It won't be possible to see the difference between 200 and 400 electrons, as both will be rounded to the same 300.

Unless ... we take the "dithering" done by natural noise into account. Which I didn't think of.

Or unless the digitization is done in a nonlinear way, i.e. out of the 2^8 = 256 values, the first will correspond to something smaller than 300 electrons, to give higher resolution in the dim range. So another possibility is that the sensor itself is linear but the digitization is done in a non-linear way.

Here you ask the correct question:

Or maybe there was something else I didn't think of. Which is why I asked the question.

Thanks for taking the time to reply.

DR is a statistical result or value. When you have integer data only, as when you count the average number of perfect marbles present in 1000 bowls, you can still get fractional results, like "14.8467 marbles per bowl", even though the base unit by necessity comes only in whole numbers. A perfect marble is either there or not, one or zero.

This also means that if only about one in five bowls contains a perfect marble, you get an average that is a fraction of one, like 0.195 or so. Which is possible from an average point of view, and impossible from a practical point of view for each individual bowl: 0.195 of a marble is a broken marble, which contradicts the counting rule we set up.

The same is true for bits and photons. If only one in five positions contains an error of "one", the average is 0.2: lower than the lowest possible quantum of the measurement unit.

And when the average measurement error is smaller than the lowest definable step in your metric, you get an average measurement resolution that is higher than what your metric defines. The value then tells you how often the error occurs instead of how many errors per instance you will get.

In fact this is what I asked previously (quoted above): "Unless ... we take the "dithering" done by natural noise into account."
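For what it's worth, the dithering effect is easy to demonstrate numerically. A minimal sketch (Gaussian noise as a stand-in for the real noise sources; the 77,000 e- full well comes from the 16.23 EV example above):

```python
import numpy as np

rng = np.random.default_rng(0)
FWC = 77_000              # full well from the 16.23 EV example above
step = FWC / 2**8         # one 8-bit code is ~301 electrons

def mean_after_8bit_adc(signal_e, noise_sigma_e, n=200_000):
    """Quantize noisy samples with a linear 8-bit ADC, then average."""
    samples = signal_e + rng.normal(0.0, noise_sigma_e, n)
    codes = np.round(samples / step)   # idealized; clipping to 0..255 omitted
    return codes.mean() * step

# Noise far below one step: 200 e- and 400 e- land on the same code,
# so the difference is lost, as described above.
print(mean_after_8bit_adc(200, 15), mean_after_8bit_adc(400, 15))

# Noise on the order of one step dithers the quantizer: the averaged
# output now tracks the true signal well below one LSB.
print(mean_after_8bit_adc(200, 300), mean_after_8bit_adc(400, 300))
```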

Jack Hogan Veteran Member • Posts: 6,681
Re: having pondered this, why not have 17 bit raw file option?

DSPographer wrote:

This isn't just wasteful of file bits, it also could significantly slow down the raw converter. In column parallel ramp converters like Sony's Exmor sensors use, having a nonlinear ramp dramatically speeds up the conversion process. So, on many cameras that have "14-bit" raw capability the conversion step size is not really uniform, instead it increases approximately as the square root of the level, so that the time to perform the ramp conversion is not excessive.

Hi DSPographer,

Interesting. How does having a non-linear ramp affect the accuracy of the linear ADC output throughout the range?

Jack Hogan Veteran Member • Posts: 6,681
Re: Magic Lantern & analog clipping

Iliah Borg wrote:

what work needs to be done in the raw converter to be able to correctly deal with the less well defined saturation level?

Linearization, normally, based on photon transfer curve. Pretty heavy equipment needed if doing it precisely for that small highlight region; but in practice just three points, the start, the saturation, and some midpoint, using a spline. The other thing necessary is to deal with pattern noise, also not a huge problem.

If the curve near actual FWC is easily characterized, why doesn't everybody do it?

Iliah Borg Forum Pro • Posts: 24,783
Re: Magic Lantern & analog clipping

Jack Hogan wrote:

Iliah Borg wrote:

what work needs to be done in the raw converter to be able to correctly deal with the less well defined saturation level?

Linearization, normally, based on photon transfer curve. Pretty heavy equipment needed if doing it precisely for that small highlight region; but in practice just three points, the start, the saturation, and some midpoint, using a spline. The other thing necessary is to deal with pattern noise, also not a huge problem.

If the curve near actual FWC is easily characterized, why doesn't everybody do it?

It is a per-camera characterization. And, sorry, Jack, but "everybody" type questions are of a sort that makes little sense to me.

Jack Hogan Veteran Member • Posts: 6,681
Re: Magic Lantern & analog clipping

Iliah Borg wrote:

Jack Hogan wrote:

Iliah Borg wrote:

what work needs to be done in the raw converter to be able to correctly deal with the less well defined saturation level?

Linearization, normally, based on photon transfer curve. Pretty heavy equipment needed if doing it precisely for that small highlight region; but in practice just three points, the start, the saturation, and some midpoint, using a spline. The other thing necessary is to deal with pattern noise, also not a huge problem.

If the curve near actual FWC is easily characterized, why doesn't everybody do it?

It is a per-camera characterization.

I see, so not very practical for mass-production houses like Canon and Nikon, I guess.

And, sorry, Jack, but "everybody" type questions are of a sort that makes little sense to me.

Touché

RussellInCincinnati Veteran Member • Posts: 3,201
of course there are reasons not to record 17 bits

RussellInCincinnati wrote: Come to think of it, it wouldn't be skin off of anyone's back to just have a menu option for 17 bit-depth raw files. Totally linear recording of produced-electron counts. Yes you'd be encoding a ton of noise in the least significant bits, slowing down file write times since the numbers written would be less-losslessly-compressible, etc.

DSPographer wrote: This isn't just wasteful of file bits, it also could significantly slow down the raw converter. In column parallel ramp converters like Sony's Exmor sensors use, having a nonlinear ramp dramatically speeds up the conversion process. So, on many cameras that have "14-bit" raw capability the conversion step size is not really uniform, instead it increases approximately as the square root of the level, so that the time to perform the ramp conversion is not excessive.

OK, so there are at least 4 reasons why people wouldn't want 17-bit-deep raw file data, instead of merely 3 drawbacks. It just wouldn't hurt anyone to offer the option. Just like my browser offers me a really tiny default font-size on its menu of typeface sizes, even though we can think of many reasons not to use such a tiny font.

DSPographer Senior Member • Posts: 2,464
Re: having pondered this, why not have 17 bit raw file option?

Jack Hogan wrote:

DSPographer wrote:

This isn't just wasteful of file bits, it also could significantly slow down the raw converter. In column parallel ramp converters like Sony's Exmor sensors use, having a nonlinear ramp dramatically speeds up the conversion process. So, on many cameras that have "14-bit" raw capability the conversion step size is not really uniform, instead it increases approximately as the square root of the level, so that the time to perform the ramp conversion is not excessive.

Hi DSPographer,

Interesting. How does having a non-linear ramp affect the accuracy of the linear ADC output throughout the range?

Here is a paper which discusses non-linear ramp converters of the sort Sony uses in their Exmor sensors:

http://www.imagesensors.org/Past%20Workshops/2005%20Workshop/2005%20Papers/44%20Otaka%20et%20al.pdf
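In rough numbers, the appeal of a shot-noise-matched ramp is easy to see. A sketch with illustrative constants (not Sony's actual parameters), where the step size grows as the square root of the level so quantization error stays below shot noise:

```python
import numpy as np

FWC = 77_000        # assumed full-well capacity, electrons
k = 0.5             # step = k*sqrt(level), i.e. half the shot noise

levels = [0.0]
while levels[-1] < FWC:
    levels.append(levels[-1] + max(1.0, k * np.sqrt(levels[-1])))

# ~1100 ramp steps cover the range a 17-bit linear ramp would need
# 131072 steps for, roughly two orders of magnitude fewer comparisons.
print(len(levels), np.log2(len(levels)))
```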

DSPographer Senior Member • Posts: 2,464
Re: of course there are reasons not to record 17 bits

RussellInCincinnati wrote:

RussellInCincinnati wrote: Come to think of it, it wouldn't be skin off of anyone's back to just have a menu option for 17 bit-depth raw files. Totally linear recording of produced-electron counts. Yes you'd be encoding a ton of noise in the least significant bits, slowing down file write times since the numbers written would be less-losslessly-compressible, etc.

DSPographer wrote: This isn't just wasteful of file bits, it also could significantly slow down the raw converter. In column parallel ramp converters like Sony's Exmor sensors use, having a nonlinear ramp dramatically speeds up the conversion process. So, on many cameras that have "14-bit" raw capability the conversion step size is not really uniform, instead it increases approximately as the square root of the level, so that the time to perform the ramp conversion is not excessive.

OK, so there are at least 4 reasons why people wouldn't want 17-bit-deep raw file data, instead of merely 3 drawbacks. It just wouldn't hurt anyone to offer the option. Just like my browser offers me a really tiny default font-size on its menu of typeface sizes, even though we can think of many reasons not to use such a tiny font.

Well, 17 bits is a really odd size. If I had to choose one number format that would exceed any sensor's capability, it would probably be a 16-bit floating point like IEEE 754 binary16. I would also at least losslessly compress the raw files.
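A quick numerical check of that suggestion (a sketch; the 77,000 e- full well is carried over from the example upthread, and counts are normalized to full scale since binary16 tops out at 65504):

```python
import numpy as np

FWC = 77_000   # assumed full-well capacity, from the example upthread

# binary16 carries an 11-bit significand, so its relative step is about
# 2**-11, while shot noise has relative size 1/sqrt(N): always larger here.
for electrons in (10, 300, 5_000, 77_000):
    stored = np.float16(electrons / FWC)          # normalize to full scale
    rounding = abs(float(stored) * FWC - electrons)
    print(f"{electrons:>6} e-: rounding error {rounding:.3f} e-,"
          f" shot noise {np.sqrt(electrons):.1f} e-")
```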

Iliah Borg Forum Pro • Posts: 24,783
Re: Magic Lantern & analog clipping

Jack Hogan wrote:

Iliah Borg wrote:

Jack Hogan wrote:

Iliah Borg wrote:

what work needs to be done in the raw converter to be able to correctly deal with the less well defined saturation level?

Linearization, normally, based on photon transfer curve. Pretty heavy equipment needed if doing it precisely for that small highlight region; but in practice just three points, the start, the saturation, and some midpoint, using a spline. The other thing necessary is to deal with pattern noise, also not a huge problem.

If the curve near actual FWC is easily characterized, why doesn't everybody do it?

It is a per-camera characterization.

I see, so not very practical for mass-production houses like Canon and Nikon, I guess.

Would be an awful burden indeed. Kodak used to have those curves in their data sheets, generic type. But they were AFAIK never used in actual digital backs.

xpatUSA Forum Pro • Posts: 13,460
Re: Magic Lantern & analog clipping

Jack Hogan wrote:

Iliah Borg wrote:

Jack Hogan wrote:

Iliah Borg wrote:

what work needs to be done in the raw converter to be able to correctly deal with the less well defined saturation level?

Linearization, normally, based on photon transfer curve. Pretty heavy equipment needed if doing it precisely for that small highlight region; but in practice just three points, the start, the saturation, and some midpoint, using a spline. The other thing necessary is to deal with pattern noise, also not a huge problem.

If the curve near actual FWC is easily characterized, why doesn't everybody do it?

It is a per-camera characterization.

I see, so not very practical for mass-production houses like Canon and Nikon, I guess.

Looks like Sigma does it, or used to:

"CMbM:LinLUTS
Type=0 (long), Dimensions=2 (Channel, InputVal) (3x4096)
<big matrix skipped>"

Extracted from an SD9 X3F file.


"Engage Brain before operating Keyboard!"
Ted

Iliah Borg Forum Pro • Posts: 24,783
Re: Magic Lantern & analog clipping

xpatUSA wrote:

Jack Hogan wrote:

Iliah Borg wrote:

Jack Hogan wrote:

Iliah Borg wrote:

what work needs to be done in the raw converter to be able to correctly deal with the less well defined saturation level?

Linearization, normally, based on photon transfer curve. Pretty heavy equipment needed if doing it precisely for that small highlight region; but in practice just three points, the start, the saturation, and some midpoint, using a spline. The other thing necessary is to deal with pattern noise, also not a huge problem.

If the curve near actual FWC is easily characterized, why doesn't everybody do it?

It is a per-camera characterization.

I see, so not very practical for mass-production houses like Canon and Nikon, I guess.

Looks like Sigma does it, or used to:

"CMbM:LinLUTS
Type=0 (long), Dimensions=2 (Channel, InputVal) (3x4096)
<big matrix skipped>"

Yes, they use look-up tables for linearization.
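For illustration, here is a minimal sketch of how the three-point spline linearization described upthread could be baked into a 4096-entry LUT of the kind seen in the X3F tag. The calibration numbers are invented, and scipy's PCHIP is used as one convenient monotone spline:

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

# Invented photon-transfer calibration points: raw response is linear
# at first and compresses approaching true saturation (FWC).
exposure = np.array([0.00, 0.70, 1.00])   # start, midpoint, true FWC
raw      = np.array([0.00, 0.68, 0.88])   # what the sensor reports

linearize = PchipInterpolator(raw, exposure)   # monotone raw -> linear map

codes = np.arange(4096) / 4095.0               # 12-bit input axis
lut = linearize(np.clip(codes, 0.0, raw[-1]))  # clamp above measured saturation
print(lut[[0, 2048, 3600, 4095]])              # highlights stretched back out
```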

hjulenissen Senior Member • Posts: 2,173
Re: Magic Lantern & analog clipping

Iliah Borg wrote:

Yes, they use look-up tables for linearization.


If it can be corrected using a smooth 3-parameter spline, how hard can it be to estimate those parameters automatically/manually using exposure bracketing? What is the consequence of recording data in the non-linear region and doing nothing to invert it (is that not sort of what film cameras used to do?)

Anything that is "per camera", I would suspect, also (possibly) depends on temperature, exposure time, moon phase and whatnot?

Is this simply from the behaviour of a diode near saturation, or is it some more intricate physical thing?

-h

Jack Hogan Veteran Member • Posts: 6,681
Ramping up the ADC - step by step or skipping?

DSPographer wrote:

Jack Hogan wrote:

DSPographer wrote:

This isn't just wasteful of file bits, it also could significantly slow down the raw converter. In column parallel ramp converters like Sony's Exmor sensors use, having a nonlinear ramp dramatically speeds up the conversion process. So, on many cameras that have "14-bit" raw capability the conversion step size is not really uniform, instead it increases approximately as the square root of the level, so that the time to perform the ramp conversion is not excessive.

Hi DSPographer,

Interesting. How does having a non-linear ramp affect the accuracy of the linear ADC output throughout the range?

Here is a paper which discusses non-linear ramp converters of the sort Sony uses in their Exmor sensors:

http://www.imagesensors.org/Past%20Workshops/2005%20Workshop/2005%20Papers/44%20Otaka%20et%20al.pdf

All Exmors (even those co-'designed' for Nikon DSLRs)?

I am asking because if I understand the article correctly the referenced ADC starts skipping levels as signal (therefore shot noise) increases - which makes sense from an information theory/physical standpoint and would explain why many current Sony ILCs (including a7s, RX1s etc.) encode all linear data from the sensor non-linearly before writing it to the Raw file without giving the user the option to record the full linear data. The benefit to the designers in terms of faster operation and lower power consumption (and possibly even less 1/f noise?) seem intuitive.

Sony's 'most faithful' raw data mode appears therefore to be similar to Nikon's Lossy Compression mode and its data encoding step could potentially closely correspond to the levels actually sampled by Sony's ADC.

Sample Sony look-up table embedded in Raw file. Ignore captions: they pertain to a different topic

On the other hand Nikon DOES give the user the option to save data in the Raw file linearly and 'uncompressed'. Below are the histograms of the entire raw files resulting from DPR's Studio Scene captures for the A7 and D610 at ISO 100. In the D610's most significant bit there is sparse but apparently full data covering virtually every linear level - while the A7's is clearly skipping levels per the non-linear table above. Tellingly, in the green channel the D610 uses 12074 unique values to encode the image versus the A7's 1774 values.
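(For anyone wanting to replicate the unique-value count: a sketch using the third-party rawpy/LibRaw bindings, with hypothetical file names.)

```python
import numpy as np
import rawpy   # third-party LibRaw bindings

def unique_levels_per_channel(path):
    """Count distinct raw codes per CFA channel. A nonlinearly encoded
    (level-skipping) file shows far fewer unique values than a linear one."""
    with rawpy.imread(path) as raw:
        data = raw.raw_image_visible
        cfa = raw.raw_colors_visible          # 0..3 channel index per pixel
        return [np.unique(data[cfa == c]).size for c in range(4)]

# Hypothetical file names, for illustration only:
# print(unique_levels_per_channel("D610_ISO100.NEF"))
# print(unique_levels_per_channel("A7_ISO100.ARW"))
```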

So unless Nikon is injecting noise after the ADC (are they? It would not look that way by the shape of the histograms) my guess would be that Nikon ADCs are programmed to ramp up value by value and produce a 'faithful' digital representation of the analog signal+noise, no matter what the source or level of noise. What do you think?

Jack

DSPographer Senior Member • Posts: 2,464
Re: Ramping up the ADC - step by step or skipping?

Jack Hogan wrote:

All Exmors (even those co-'designed' for Nikon DSLRs)?

I am asking because if I understand the article correctly the referenced ADC starts skipping levels as signal (therefore shot noise) increases - which makes sense from an information theory/physical standpoint and would explain why many current Sony ILCs (including a7s, RX1s etc.) encode all linear data from the sensor non-linearly before writing it to the Raw file without giving the user the option to record the full linear data. The benefit to the designers in terms of faster operation and lower power consumption (and possibly even less 1/f noise?) seem intuitive.

Sony's 'most faithful' raw data mode appears therefore to be similar to Nikon's Lossy Compression mode and its data encoding step could potentially closely correspond to the levels actually sampled by Sony's ADC.

Sample Sony look-up table embedded in Raw file. Ignore captions: they pertain to a different topic

On the other hand Nikon DOES give the user the option to save data in the Raw file linearly and 'uncompressed'. Below are the histograms of the entire raw files resulting from DPR's Studio Scene captures for the A7 and D610 at ISO 100. In the D610's most significant bit there is sparse but apparently full data covering virtually every linear level - while the A7's is clearly skipping levels per the non-linear table above. Tellingly, in the green channel the D610 uses 12074 unique values to encode the image versus the A7's 1774 values.

So unless Nikon is injecting noise after the ADC (are they? It would not look that way by the shape of the histograms) my guess would be that Nikon ADCs are programmed to ramp up value by value and produce a 'faithful' digital representation of the analog signal+noise, no matter what the source or level of noise. What do you think?

Jack

If I recall correctly, the D800 was using an accelerated ramp much like you show for the A7. I haven't seen an analysis for the D600 or D610 yet, so I don't know what they do (do we know for sure it is a Sony Exmor?). But just because the histogram doesn't show skipped levels doesn't mean the A-D is using a single-slope ramp. I don't think Nikon would be injecting noise, but they are known to process raw data before the file is written. So there could be something like an analog or digital column-amplifier pattern noise (PRNU) compensation being performed, which could fill in missing levels much like a raw vignetting compensation would. Nikon may also have invented some way to get a full-precision number latched even while the ramp slope is accelerated. In that case the digitization error might increase for the higher levels, but without missing codes being produced. Of course, Nikon might have just had Sony put in a mode where the slow ramp is used for the entire conversion range, but that would normally make the digitization so slow that the frame rate would need to be dramatically reduced.
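That fill-in mechanism is easy to sanity-check numerically. A sketch with illustrative numbers only: quantize with every other code skipped in the upper range, then apply small per-column digital gain corrections and re-round:

```python
import numpy as np

rng = np.random.default_rng(1)

h, w = 200, 300
signal = rng.uniform(0, 8000, (h, w))
hi = signal > 4000                       # "accelerated" upper range
coded = np.where(hi, 2 * np.round(signal / 2), np.round(signal))

gains = 1 + rng.normal(0, 0.002, w)      # small per-column gain errors
corrected = np.round(coded * gains)      # digital PRNU-style compensation

print(np.unique(coded[hi]).size)         # only even codes up high: ~2000
print(np.unique(corrected[hi]).size)     # odd codes reappear: ~4000
```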

Iliah Borg Forum Pro • Posts: 24,783
Re: Magic Lantern & analog clipping

If it can be corrected using a smooth 3-parameter spline, how hard can it be to estimate those parameters automatically/manually using exposure bracketing?

Not too hard, but why would the camera maker opt to take the blame for user errors?

What is the consequence of recording data in the non-linear region and doing nothing to invert it (is that not sort of what film cameras used to do?)

Film cameras never did it. The shoulder is born in development; the latent image is fairly linear. The problem is that white balance can't be applied in a "traditional" way to non-linear portions of the curve. Non-neutral highlights are a serious problem (see the sketch at the end of this post).

Anything that is "per camera", I would suspect also to (possibly) depend on temperature, exposure time, moon phase and what not?

In those cameras where they do it, temperature is one of the parameters. Exposure time should not affect it.

Is this simply from the behaviour of a diode near saturation,

Yes it is.
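To make the white-balance point concrete, a small sketch with an invented shoulder curve: a bright neutral patch stays neutral under per-channel gains only if the response is linear first.

```python
import numpy as np

def shoulder(x, knee=0.8):
    # Invented response: linear below the knee, compressing softly above it.
    return np.where(x < knee, x,
                    knee + (1 - knee) * np.tanh((x - knee) / (1 - knee)))

wb = np.array([2.0, 1.0, 1.5])   # per-channel WB gains for the illuminant
L = 0.95                         # bright neutral patch, near saturation
raw = L / wb                     # raw RGB triplet the sensor records

print(shoulder(raw) * wb)   # "traditional" WB on the shoulder: channels
                            # diverge, the neutral highlight picks up a cast
print(raw * wb)             # WB on linear data: perfectly neutral
```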

Jack Hogan Veteran Member • Posts: 6,681
Ramping up the ADC - Does Nikon Accelerate its Ramps?

DSPographer wrote: If I recall correctly, the D800 was using an accelerated ramp much like you show for the A7. I haven't seen an analysis for the D600 or D610 yet, so I don't know what they do (do we know for sure it is a Sony Exmor?).

Yes, according to Chipworks it's a Sony IMX128. Compare its part number to these.

But, just because the histogram doesn't show skipped levels doesn't mean the A-D is using a single slope ramp. I don't think Nikon would be injecting noise, but they are known to process raw data before the file is written. So, there could be something like either an analog or digital column amplifier pattern noise (PRNU) compensation being performed: which could fill in missing levels much like a raw vignetting compensation would.

It would seem rather wasteful to do that before writing data to the raw file - as opposed to doing it after having pocketed the space/time saving... In other words, if the data rolling off the ADC is 'accelerated' to start with, why not follow Sony's lead? Unless it's a marketing gimmick (we've got an 'uncompressed' mode and you don't).

Nikon may also have invented some way to get a full precision number latched even while the ramp slope is accelerated. In that case the digitization error might increase for the higher levels- but without missing codes being produced. Of course, Nikon might have just had Sony put in a mode where the slow ramp is used for the entire conversion range, but that would normally make the digitization so slow that the frame rate would need to be dramatically reduced.

I see (said the blind man). The allegedly ramp-accelerating A7 is actually a bit slower in peak FPS than the D610 (5 vs 6). Maybe the A7 simply has a slower DSP, though. I know that whether the ramp is accelerated or not would make virtually no difference in practice to a photographer, although the potential astrophotographer in me would be slightly disappointed.

But now I am really curious. Does Nikon accelerate its ramps or not? Does Canon? Anybody know?

bobn2 Forum Pro • Posts: 58,565
Re: Ramping up the ADC - Does Nikon Accelerate its Ramps?

Jack Hogan wrote:

DSPographer wrote: If I recall correctly, the D800 was using an accelerated ramp much like you show for the A7. I haven't seen an analysis for the D600 or D610 yet, so I don't know what they do (do we know for sure it is a Sony Exmor?).

Yes, according to Chipworks it's a Sony IMX128. Compare its part number to these.

But, just because the histogram doesn't show skipped levels doesn't mean the A-D is using a single slope ramp. I don't think Nikon would be injecting noise, but they are known to process raw data before the file is written. So, there could be something like either an analog or digital column amplifier pattern noise (PRNU) compensation being performed: which could fill in missing levels much like a raw vignetting compensation would.

It would seem rather wasteful to do that before writing data to the raw file - as opposed to doing it after having pocketed the space/time saving... In other words, if the data rolling off the ADC is 'accelerated' to start with, why not follow Sony's lead? Unless it's a marketing gimmick (we've got an 'uncompressed' mode and you don't).

Nikon may also have invented some way to get a full precision number latched even while the ramp slope is accelerated. In that case the digitization error might increase for the higher levels- but without missing codes being produced. Of course, Nikon might have just had Sony put in a mode where the slow ramp is used for the entire conversion range, but that would normally make the digitization so slow that the frame rate would need to be dramatically reduced.

I see (said the blind man). The allegedly ramp-accelerating A7 is actually a bit slower in peak FPS than the D610 (5 vs 6). Maybe the A7 simply has a slower DSP, though. I know that whether the ramp is accelerated or not would make virtually no difference in practice to a photographer, although the potential astrophotographer in me would be slightly disappointed.

But now I am really curious. Does Nikon accelerate its ramps or not? Does Canon? Anybody know?

I'd be interested to know more about the evidence that Sony is using this accelerated (non-linear) ramp technique. I haven't seen any evidence in the D800 files, but perhaps I haven't been looking in the right place.

Knowing how the Exmor ADCs work, where one DAC feeds the comparison signal to a whole line of comparators, and the ADCs are simply latches which latch the value of the counter feeding the DAC when the comparator strobes, what you'd expect is that at the top end of the scale there would be a lot of missing codes, since each ADC would skip exactly the same codes. I've never observed that.
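For reference, a sketch of that argument (illustrative ramp, not any real sensor's): every column latches the same shared counter, so an accelerated ramp leaves identical missing codes in every column.

```python
import numpy as np

rng = np.random.default_rng(2)

# Shared single-slope ramp: fine steps low, counts skipped up high.
ramp = np.concatenate([np.arange(0, 4096),
                       np.arange(4096, 16384, 4)])

# Each column comparator latches the last ramp value not above its sample.
samples = rng.uniform(5000, 16000, 100_000)
codes = ramp[np.searchsorted(ramp, samples, side="right") - 1]

print(np.unique(codes % 4))   # -> [0]: every column skips the same
                              # three of each four codes at the top end
```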


Bob

The_Suede Contributing Member • Posts: 651
Re: Dynamic range and RAW file "bit depth"

szhorvat wrote:

The_Suede wrote:

szhorvat wrote:

xpatUSA wrote:

In other words, if I present my sensor output with its theoretical 16.23 EV range to an 8-bit ADC, is the sensor dynamic range any different than if it is presented to a 14-bit ADC?

Disclaimer: I don't go to DXOMark very often. And I'm not sure that I've answered the question.

I'll say: For practical purposes, yes, the dynamic range will be smaller in that case, provided that the digitization is done in a linear way. The smallest number of electrons we could measure (by looking at the digitized signal) will be 77000/2^8 ≈ 300. It won't be possible to see the difference between 200 and 400 electrons, as both will be rounded to the same 300.

Unless ... we take the "dithering" done by natural noise into account. Which I didn't think of.

Or unless the digitization is done in a nonlinear way, i.e. out of the 2^8 = 256 values, the first will correspond to something smaller than 300 electrons, to give higher resolution in the dim range. So another possibility is that the sensor itself is linear but the digitization is done in a non-linear way.

Here you ask the correct question:

Or maybe there was something else I didn't think of. Which is why I asked the question.

Thanks for taking the time to reply.

DR is a statistical result or value. When you have integer data only, as when you count the average number of perfect marbles present in 1000 bowls, you can still get fractional results, like "14.8467 marbles per bowl", even though the base unit by necessity comes only in whole numbers. A perfect marble is either there or not, one or zero.

This also means that if only about one in five bowls contains a perfect marble, you get an average that is a fraction of one, like 0.195 or so. Which is possible from an average point of view, and impossible from a practical point of view for each individual bowl: 0.195 of a marble is a broken marble, which contradicts the counting rule we set up.

The same is true for bits and photons. If only one in five positions contains an error of "one", the average is 0.2: lower than the lowest possible quantum of the measurement unit.

And when the average measurement error is smaller than the lowest definable step in your metric, you get an average measurement resolution that is higher than what your metric defines. The value then tells you how often the error occurs instead of how many errors per instance you will get.

In fact this is what I asked previously (quoted above): "Unless ... we take the "dithering" done by natural noise into account."

Not really, since this is an effect that isn't dependent on noise. Even if the underlying material is PERFECTLY reproduced, averaging effects are present. There's only one exception, and that's when you have very large continuous surfaces without any surface detail present at all - like for instance a clear blue sky.

You can still work out exact error margins (DR) from a severely posterized flat surface, though: a posterization with clean, straight edges between levels has very little noise present, while a posterization whose boundary between two quantization levels breaks up into a jagged, randomly fluctuating line indicates stronger noise.
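A quick simulation of that last point (illustrative numbers): quantize a smooth gradient at two noise levels and measure the jitter of the boundary between two codes.

```python
import numpy as np

rng = np.random.default_rng(3)

grad = np.tile(np.linspace(0, 4, 400), (100, 1))   # smooth ramp, 4 "levels"

for sigma in (0.02, 0.5):
    codes = np.floor(grad + rng.normal(0, sigma, grad.shape))
    edge = (codes >= 2).argmax(axis=1)   # first column reaching code 2, per row
    print(sigma, edge.std())             # boundary jitter grows with the noise
```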
