14 Bit Advantage

But I often play with my RAW files trying to get the best possible
sunset. Very often I get ugly light and color gradations, far more
noticeable than in Canon's example. And this problem shows up on many
other occasions where you play with the levels and amplify rays of
light or luminous spots.

If 14-bit sampling can help avoid that, I will bite the bullet
immediately.
The behavior you are describing sounds like it has much more to do with the nonlinear response of the sensor near saturation, which may be quite different for the 1D3 (or 40D or whatever you are using) than for other cameras. It has nothing to do with bit depth.

I see all sorts of posts claiming 14-bit being the wonder cure for all sorts of image problems without any quantitative analysis to back them up. Perhaps you could post an example of the color issues you are referring to? I'd also be happy to hack into the raw file and zero out the extra two bits for you to see whether they are in fact the source of whatever advantage you are seeing.
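For anyone who wants to try that test themselves, here is a minimal sketch in Python using the third-party rawpy package (the filename is a placeholder; rawpy documents raw_image as a writable view of the sensor data, which is what makes this possible):

    import numpy as np
    import rawpy  # third-party LibRaw wrapper: pip install rawpy

    # Placeholder filename; any 14-bit raw file will do.
    with rawpy.imread("IMG_0001.CR2") as raw:
        data = raw.raw_image           # writable view of the raw sensor data
        data &= np.uint16(0xFFFC)      # zero the two least significant bits
        rgb = raw.postprocess()        # demosaic the now effectively 12-bit data

Render the file with and without the masking line and compare; any visible difference is what the extra two bits are buying.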

--
emil
--



http://theory.uchicago.edu/~ejm/pix/20d/
 
No, it gives you no advantage whatsoever. The main idea of the
"expose to the right" technique is to get a better S/N ratio. This is
completely unrelated to how many bits you use to digitize the data.
A secondary, significantly less important goal of this technique is
to cover more discrete ADC levels. But for that secondary part,
there's no point in trying to cover more than 2^12 levels, since,
once again, even if exposed for its best possible S/N ratio, no
modern sensor can produce more than 12 bits of information.
Digitizing it with a 14-bit ADC achieves nothing.
Actually, it achieves a tiny little bit. Noise sources add together, and the 'quantisation noise' of the ADC is a noise source (the value given by the ADC has an error equal to the difference between the actual analog value and the nearest level the ADC can register). Noise from a 12-stop DR sensor and a 12-bit ADC will add to give an 11-stop DR. However, the quantisation noise of a 14-bit ADC is 1/4 of the noise level from a 12-stop sensor, so you end up with an 11.75-stop DR. The extra 2 bits buy 3/4 stop.

I suspect the real reason that 14-bit ADCs are commonplace is that Analog Devices (who are the major supplier of the chips) upped the resolution to 14 bits, since they knew that sensors with enough DR to use it would be available within the product life of the new chips. Now it's become a marketing feature.
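(Putting rough numbers on that bookkeeping, taking the linear addition of noise sources at face value; a sketch in Python, and note this addition model is debated further down the thread:)

    import math

    full_scale = 2 ** 12       # sensor range, in 12-bit LSB units
    sensor_noise = 1.0         # a 12-stop sensor: noise floor of 1 LSB

    for adc_bits, quant_noise in ((12, 1.0), (14, 0.25)):
        total = sensor_noise + quant_noise   # linear addition, as argued above
        print(f"{adc_bits}-bit ADC: DR = {math.log2(full_scale / total):.2f} stops")

    # 12-bit ADC: DR = 11.00 stops
    # 14-bit ADC: DR = 11.68 stops (the extra 2 bits buy about 3/4 stop)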
--
Bob
 
Try cutting a loaf of bread into slices a millimeter thick using a handsaw.

This analogy covers bit depth and various kinds of noise, including photon noise (the size of the bread crumbs). There is, in fact, a minimum slice thickness that is possible for a given bread type/blade type combination.
--
http://www.pbase.com/victorengel/

 
But that assumes the underlying data supports the extra bits. Given your crude posterization example, I challenge you to find one example of an image captured at 14 bits where, if you chop off the last two bits, you wind up with a posterized image that was not posterized in the 14-bit case. I have yet to see such an example.
--
http://www.pbase.com/victorengel/

 
No, it gives you no advantage whatsoever. The main idea of the
"expose to the right" technique is to get a better S/N ratio. This is
completely unrelated to how many bits you use to digitize the data.
A secondary, significantly less important goal of this technique is
to cover more discrete ADC levels. But for that secondary part,
there's no point in trying to cover more than 2^12 levels, since,
once again, even if exposed for its best possible S/N ratio, no
modern sensor can produce more than 12 bits of information.
Digitizing it with a 14-bit ADC achieves nothing.
Actually, it achieves a tiny little bit. Noise sources add together,
and the 'quantisation noise' of the ADC is a noise source (the value
given by the ADC has an error equal to the difference between the
actual analog value and the nearest level the ADC can register).
Noise from a 12-stop DR sensor and a 12-bit ADC will add to give an
11-stop DR. However, the quantisation noise of a 14-bit ADC is 1/4 of
the noise level from a 12-stop sensor, so you end up with an
11.75-stop DR. The extra 2 bits buy 3/4 stop. I suspect the real
reason that 14-bit ADCs are commonplace is that Analog Devices (who
are the major supplier of the chips) upped the resolution to 14 bits,
since they knew that sensors with enough DR to use it would be
available within the product life of the new chips. Now it's become a
marketing feature.
Why would a 12-stop DR sensor and a 12-bit ADC yield an 11-stop DR? An ideal ADC fed a uniformly distributed signal would have a quantization error of only 1/sqrt(12), about 0.3 of the quantization step, much less than the analog noise of the signal if the bit depth equals the DR.
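(For reference, that 1/sqrt(12) is just the RMS of an error uniformly distributed over one quantization step; in LaTeX notation:

    \sigma_q = \sqrt{\int_{-1/2}^{1/2} x^2 \, dx} = \sqrt{1/12} \approx 0.29

in units of the step.)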

--
emil
--



http://theory.uchicago.edu/~ejm/pix/20d/
 
No, the amplitude of the quantisation error is plus or minus 1/2 of the least significant bit, i.e. 1 LSB peak-to-peak. Think of it this way: both the ADC and the sensor effectively have 1 LSB of noise. This will randomly be digitised, sample by sample, as 0+0, 0+1, 1+0 or 1+1, giving 2 LSB of noise.
--
Bob
 
No, the amplitude of the quantisation error is plus or minus 1/2 of
the least significant bit, i.e. 1 LSB peak-to-peak. Think of it this
way: both the ADC and the sensor effectively have 1 LSB of noise.
This will randomly be digitised, sample by sample, as 0+0, 0+1, 1+0
or 1+1, giving 2 LSB of noise.
--
No, that is the worst-case scenario. If you don't know the exact value you are trying to quantize, the quantized value could be spot on, with zero error, if the value happens to fall exactly on a level; the worst you can be off is 0.5 step in either direction. For a signal uniformly distributed over the range of errors from -0.5 to 0.5, you compute the mean square error averaged over that range and then take the square root to get the typical error. That works out to 1/sqrt(12), or about 0.3. Now combine this quantization error of an ideal ADC whose bit depth equals the DR with the noise in the signal: the noise of the signal is one and the quantization error is 0.3; adding them in quadrature gives a total noise of about 1.04, a rather negligible correction to the noise of the signal itself.

You have to remember that the input to the ADC is a signal with a continuous range of values, not the discrete values you assumed in your analysis.
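That quadrature claim is easy to check numerically; a minimal sketch in Python (the mean and sample count are arbitrary):

    import numpy as np

    rng = np.random.default_rng(1)
    sigma = 1.0                                    # signal noise, in ADC steps
    signal = rng.normal(1000.0, sigma, 1_000_000)  # continuous analog input

    quantized = np.round(signal)                   # ideal ADC with step = 1
    print(np.std(quantized - signal))   # ~0.29, i.e. 1/sqrt(12)
    print(np.std(quantized))            # ~1.04, i.e. sqrt(1 + 1/12) in quadrature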

--
emil
--



http://theory.uchicago.edu/~ejm/pix/20d/
 
Yes, I'm talking peak-to-peak noise. We don't know the distribution of the sensor noise, and the contribution of the quantisation noise changes according to the signal (strictly, it's not noise but a non-linearity (distortion) applied to the signal), so it's difficult to do a precise calculation. The real case will be different, but in any case, when we're down to the least significant bit for both quantisation and sensor noise, the contribution of each is similar.

Yet another way of looking at it: imagine the sensor is yielding a signal that should be exactly x. If the sensor is giving analogue noise equivalent to +- 1/2 LSB, then the actual signal fed to the ADC could range from x+1/2 to x-1/2. If it's x+1/2 it will read as x+1; if it's x-1/2 it will read as x-1. So an error of 1 bit has been translated into an error of 2 bits, effectively doubling the noise, hence the loss of 1 stop of DR.
--
Bob
 
But I often play with my RAW files trying to get the best possible
sunset. Very often I get ugly light and color gradations, far more
noticeable than in Canon's example. And this problem shows up on many
other occasions where you play with the levels and amplify rays of
light or luminous spots.

If 14-bit sampling can help avoid that, I will bite the bullet
immediately.
The effects you see have to do with sensor saturation, not the ADC process. Since the photosites have Bayer filters over them, given a bright light of a particular colour, different sites will saturate at different points. When the demosaicing algorithm is applied, this produces colour variations, which can have sharp boundaries, since the brightness gradient can be high. As Emil suggests, increasing the bit depth works at the wrong end. A 14-bit ADC will give a fraction of a bit more DR, which you will only be able to use if you adjust exposure downwards by a fraction of a stop.
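(You can see the mechanism with a toy example in Python; the light-source colour and the white-balance gains below are made-up illustrative numbers:)

    import numpy as np

    intensity = np.linspace(0.5, 3.0, 6)     # a highlight ramping up in brightness
    scene_rgb = np.array([1.0, 0.6, 0.3])    # a warm light source (made up)
    wb_gain = np.array([2.0, 1.0, 1.6])      # assumed white-balance multipliers

    # Raw channels clip independently at full well (1.0); WB is applied after.
    raw = np.clip(np.outer(intensity, scene_rgb), 0.0, 1.0)
    for i, (r, g, b) in zip(intensity, raw * wb_gain):
        print(f"I={i:.1f}  R={r:.2f}  G={g:.2f}  B={b:.2f}")

    # As intensity rises, R saturates first, then G: the recorded hue drifts
    # instead of staying constant, hence the abrupt colour variations.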

The best solution to your problem is to use exposure bracketing and overlay the images in PS (there are several tutorials on how to do this); this can extend DR by several stops.
--
Bob
 
...and practically noiseless all the way to the very bottom of the tonal scale, where only minor "dirt particles" are left.

This means you can now compress/crush the shadows a bit more and accommodate quite a good amount at the top. In other words, you can now redistribute the actual sensor data in ways that were not possible with 12-bit scales.

Bottom line:

Cleaner shadows and an effortless, no-side-effect extra ~0.5 EV of dynamic range, based on subjective evaluations of my 1D3 output vs. my previous 1D2/N.

Samples (under the mid-day sun in Florida, uncomfortable even for the naked eye; I could not get this type of exposure with my previous 1D2N at ISO 200!!!):

http://www.pbase.com/feharmat/image/89083560/original
http://www.pbase.com/feharmat/image/89083562/original

Enjoy!

--

TIP: If you do not like this post, simply press the 'COMPLAINT' button. Mommy/Daddy are just one click away.
 
You can't attribute the additional DR solely to the 14-bit ADC. The MkIII sensor has seen many improvements over the MkII/IIn sensor, which should result in lower noise, which means a larger DR. Most of the DR improvement you notice will be due to this, not the ADC, which contributes a fraction of a stop.
--
Bob
 
Hello,

I am looking at either getting a 1DSIII or, for the same money,
upgrading an old 1D to a 1DIIn and getting a 1DSII.

Ignoring the extra pixels, can anyone comment on the advantage of
14 bits?

I know the 1DIII has 14 bits; can anyone comment on how obvious this
is compared to the 1DIIn?

I will mainly be using the camera for weddings, and landscape work,
with some portraits.

I have just purchased the 85 1.2 II, and want to be able to use it as
it was meant to be used (on full frame).

I am not really interested in the 5D (unless they do a Nikon and
include the AF from the 1 series in the 5DII; then it could be
interesting).

Sorry if this has been asked a thousand times, but the search is down.

If anyone can post some raw samples to show the differences, I would
be eternally grateful.

thanks
--
Recent work:
http://www.pbase.com/jmb_56/cris2_40d
http://www.pbase.com/jmb_56/kim_m_40d
http://www.pbase.com/jmb_56/michelle_iii
Galleries:
http://www.pbase.com/jmb_56/canon_1dmk2n
http://www.pbase.com/jmb_56/40d_30d_and_20d_portraiture
 
You won't get much more than 8 bits from a print, whatever the input bit depth. But extended dynamic range on the input side gives headroom for postprocessing. Hence movie CGI is done in 16-bit, simply to stop the multiple stages of manipulation hitting the top or the bottom of the range. It's like rounding error: if you do something that causes the dark pixels to go below 0, that data is lost to all subsequent stages.
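A trivial illustration of that loss (the pixel values and the 8-level shift are made-up numbers; Python):

    import numpy as np

    shadows = np.array([2.0, 5.0, 10.0, 40.0])    # dark pixel values

    # Low-headroom pipeline: darken by 8 levels, clip at the floor, brighten back.
    clipped = np.clip(shadows - 8, 0, 255) + 8
    # High-headroom pipeline: the same operations without the intermediate clip.
    intact = (shadows - 8) + 8

    print(clipped)   # [ 8.  8. 10. 40.]  the two darkest tones are gone for good
    print(intact)    # [ 2.  5. 10. 40.]  recoverable, given enough headroom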
--
Bob
 
Sorry, I beg to differ. Go into Photoshop and make a smooth gradient spanning, say, 16 levels on the 0-255 scale (work in 8-bit mode); this represents a smooth background signal of continuous tonal gradient. Add gaussian noise amounting to a width of two units on the 0-255 scale; this will be the noise of the test image. Duplicate this noisy gradient image. On the duplicate, apply a levels adjustment to reduce the scale to 0-64 (this quarters all the levels), followed by a second levels adjustment to quadruple the scale back to 0-255. This truncates the last two bits of the image; the noise is now comparable to the quantization step. You can verify this in the histogram, which should be combed. Now measure the std dev of a vertical slice of uniform average color in the original image and in the bit-truncated image. When I do this, the std dev of the truncated file goes up by about 15%; it certainly does not double.

Quantization error is insignificant until it far exceeds the noise level.
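For anyone without Photoshop handy, here is the same experiment as a numpy sketch (gradient span, noise width and truncation step chosen to match the recipe above):

    import numpy as np

    rng = np.random.default_rng(0)
    gradient = np.tile(np.linspace(120, 136, 512), (512, 1))  # 16-level ramp
    noisy = gradient + rng.normal(0, 2, gradient.shape)       # noise width ~2 units

    truncated = np.round(noisy / 4) * 4    # chop the last two bits (steps of 4)

    # Compare vertical slices of near-uniform tone:
    print(noisy[:, 200:204].std())         # ~2.0
    print(truncated[:, 200:204].std())     # ~2.3, about 15% higher, not double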
--
emil
--



http://theory.uchicago.edu/~ejm/pix/20d/
 
Sorry, I beg to differ. Go into Photoshop and make a smooth gradient
spanning, say, 16 levels on the 0-255 scale (work in 8-bit mode);
this represents a smooth background signal of continuous tonal
gradient. Add gaussian noise amounting to a width of two units on the
0-255 scale; this will be the noise of the test image. Duplicate this
noisy gradient image. On the duplicate, apply a levels adjustment to
reduce the scale to 0-64 (this quarters all the levels), followed by
a second levels adjustment to quadruple the scale back to 0-255.
This truncates the last two bits of the image; the noise is now
comparable to the quantization step. You can verify this in the
histogram, which should be combed. Now measure the std dev of a
vertical slice of uniform average color in the original image and in
the bit-truncated image. When I do this, the std dev of the truncated
file goes up by about 15%; it certainly does not double.
OK, Emil. I think we're both guilty of simplifications. Quantisation error isn't actually noise, but it can be modelled as noise for the purpose of analysis. Unless you know the nature of the signal being quantised, you don't know the distribution of the 'noise' caused by quantisation. My assumption about the distribution was naive, and so was yours (you assumed Gaussian, and there's no reason it should be for a photographic image input, which can't be assumed to have the same frequency characteristics as the majority of electronic signals). I've also just added the amplitude values for illustration, rather than doing the RMS calculation. However, the argument stands despite that.
Quantization error is insignificant until it far exceeds the noise
level.
This statement is certainly wrong. In the end, it's quantisation error, treated as noise, that sets the DR limit of an ADC. It's significant when it is of the same order of magnitude as the signal noise, and effectively insignificant when about 1/3 of it. Your experiment confirms this: the noise on the truncated image is indeed increased (not doubled, as my naive analysis suggested, but also by more than the 4% increase predicted by your analysis).

Since, by your own measurements, we now have cameras approaching a 12-stop output dynamic range (including all noise sources), the raw DR of the sensors must be close to 12 stops (bits), and quantisation error with a 12-bit ADC is certainly significant, particularly since real ADCs are not perfect (irregular steps and missing codes) and contribute more than the minimum theoretical noise. I doubt the manufacturers would have bothered to move to 14 bits if it were not. However, those extra 2 bits certainly do not buy 2 stops of DR, but a fraction of a stop, and they provide headroom for sensor enhancements.
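For concreteness, the quadrature model predicts the following noise increases as a function of the step-to-noise ratio (a sketch; real, imperfect ADCs will be somewhat worse):

    import math

    # Noise increase when quantisation (step q, RMS q/sqrt(12)) is added
    # in quadrature to signal noise sigma:
    for ratio in (1 / 3, 1.0, 2.0, 3.0):           # q / sigma
        increase = math.sqrt(1 + ratio ** 2 / 12) - 1
        print(f"q = {ratio:.2f} sigma: +{increase * 100:.1f}% noise")

    # q = 0.33 sigma: +0.5%    q = 1.00 sigma: +4.1%
    # q = 2.00 sigma: +15.5%   q = 3.00 sigma: +32.3%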

--
Bob
 
Moreover, I think the big mistake people make about bit depth and DR is thinking that they are the same; they are not.

The sensor determines the DR, and DR is an analog range, not a digital one. So for a given sensor, cutting that range into 12, 14, 16 or more bits per channel does not extend or reduce the DR.

--
Lanned For Bife
 
Moreover, I think the big mistake people make about bit depth and DR
is thinking that they are the same; they are not.
...Well, this is in fact a widespread misconception (which I would agree with), BUT...
The sensor determines the DR, and DR is an analog range, not a
digital one. So for a given sensor, cutting that range into 12, 14,
16 or more bits per channel does not extend or reduce the DR.
...Here's where the sky falls: I used to think this way too, until I learned about the importance and effect of sampling/quantization noise.

There is a good old and very knowledgeable folk here (Peter Carmichael) who could shed a quantum of light on this topic. I hope he is around.

:-)

--

TIP: If you do not like this post, simply press the 'COMPLAINT' button. Mommy/Daddy are just one click away.
 
None of the modern DSLR sensors have the dynamic range to produce 14
bits of usable image data. Moreover, none of them can even fill 12
bits of data. For this reason there's no advantage in using a 14-bit
ADC over a 12-bit ADC. It is purely a marketing feature.
You have clearly not understood what a 14-bit ADC does. Regardless of the dynamic range, analog-to-digital conversion transforms analog data into bits. The more bits used in this conversion, the smoother the gradation of color and tone. Whether that is visible to the naked eye is another matter. But it is simply wrong to say the 14-bit thingy is a marketing ploy.
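To put rough numbers on "smoother gradation" (a sketch, assuming a linear raw encoding, where each stop down holds half the remaining codes):

    for bits in (12, 14):
        codes = 2 ** bits
        print(f"{bits}-bit: {codes:>5} codes; {codes // 2} in the brightest stop, "
              f"{max(codes // 2 ** 12, 1)} twelve stops down")

    # 12-bit:  4096 codes; 2048 in the brightest stop, 1 twelve stops down
    # 14-bit: 16384 codes; 8192 in the brightest stop, 4 twelve stops down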

--
http://www.pbase.com/pradipta
 
You can't attribute the additional DR solely to the 14-bit ADC.
...Lower sampling/quantization noise + naturally cleaner shadows at capture + more granularity to represent them without transforming them into worm-holes of tonality splattered all over the image at the bottom end of the tonal scale.

With shadows like these, you simply crush them to the left and accommodate more on the right, and yet you still get very good shadow quality plus a notch more highlights.

It is not trivial, and it is the result of multiple moving parts, but that is roughly what really happens.

This whole thread is LOST in space, though, as it does not address the actual dynamics observed with these images (e.g. my 1D2N vs. my current 1D3).

Enjoy!

--

TIP: If you do not like this post, simply press the 'COMPLAINT' button. Mommy/Daddy are just one click away.
 
