The truth about 14-bit

How can you compare images with noise-level differences when you use JPEG compression on the files?
 
The current sensors have more dynamic range than the system following the variable-gain amplifier, i.e. the A/D converter. If the reverse were true, you wouldn't need variable gain at all: you could just adjust exposure according to the desired ISO and then compensate digitally in the raw converter (which is what is done on the 30D to go from ISO 1600 to ISO 3200). But from ISO 100 to 800, digitally boosting exposure during conversion is much worse than upping the ISO and thus increasing the pre-amp gain before capture. So the A/D converter is the limiting factor for DR at ISO 100 and 200, and improving A/D dynamic range is the way to improve the low-ISO finished-image DR.

One limit to the A/D DR in stops is the number of bits in the converter, but of course just adding more bits doesn't mean they are any good. A 12-bit converter could have less noise than a 14-bit one if, for instance, only 11 bits from the 14-bit converter are any good. One thing that adds noise at the A/D converter stage is how fast the A/D is operating. Sony has attacked this problem in the sensor they just announced by using a separate A/D for each image column, so each converter can be run very slowly; but they are only using 12-bit converters, so quantization noise will be an issue.

The real question for a new camera is therefore: how many stops of DR are captured in a single pixel of the raw file? It must be less than the number of bits in the A/D converter, but just how much less?
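To make the quantization point concrete, here's a minimal sketch (all numbers are invented for illustration, not measured from any camera): when the analog noise floor is already comparable to a 12-bit step, two extra bits buy very little; when the analog chain is cleaner, they help.

import numpy as np

full_scale = 1.0                  # normalized full-well signal
read_noise = full_scale / 6000    # hypothetical analog noise floor; try 1/1000 too

def quantize(signal, bits):
    levels = 2 ** bits - 1
    return np.round(signal * levels) / levels

rng = np.random.default_rng(0)
signal = rng.uniform(0, full_scale, 1_000_000)
noisy = signal + rng.normal(0, read_noise, signal.size)

for bits in (12, 14):
    err = quantize(noisy, bits) - signal
    print(bits, "bits: total RMS error =", err.std())
# With read_noise = 1/6000 the 14-bit result is modestly cleaner; with a
# noisier analog chain (say 1/1000) the two bit depths come out the same.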
 
What if you do lots of PP and reapply levels and contrast and so on many times?
Jak,

Do you have a monitor and video card capable of displaying most of
the AdobeRGB gamut? If not, it noticeably changes the way the images
appear.
Video card, no problem.

LCD monitor, I seriously doubt the gamut is large enough for the
entire AdobeRGB gamut. I spent a LOT of time testing monitors to get
one that actually displayed the lower bits after calibration. FWIW
my LCD monitor is a Viewsonic VP930. BTW, my CRT display is a
Viewsonic G790.

Both my LCD and CRT monitors clearly show Mach banding for both the
1DmkII and 1DmkIII files (that is why we used a CPL, to create every
possible opportunity for banding). Problem is, and this was a REAL
surprise to me, there is no improvement (i.e. reduction) in banding
when you look at the 14-bit 1DmkIII file (note, IMO not even a slight
teeny tiny hint of improvement).

Regards,

Joe Kurkjian, Pbase Supporter

http://www.pbase.com/jkurkjia/original



SEARCHING FOR A BETTER SELF PORTRAIT
 
Just an interesting thought, not advanced to promote the purported "value" suggested by Canon's 40D writeup:

Maybe our monitors, or even our printer ink, cannot demonstrate to the eye the 2-bit difference between 12 and 14 bits, but if they could, or can, or someday will, isn't it better to have 2 bits of random values from 0 to 3 than to have them set unconditionally to zero, as would be true in the 12-bit case? For typical images with little pure-white space, random bits would be dead right 25% of the time, whereas zero values would be correct in fewer cases. Also (splitting hairs) a random value has a better chance than zero of being closer to the correct value in all but white areas.
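A quick toy check of that hairsplitting (assuming, unrealistically, that the true low two bits are uniform on 0..3): a random guess and a constant zero are exactly right equally often, but the random guess is closer on average.

import numpy as np

rng = np.random.default_rng(1)
truth = rng.integers(0, 4, 1_000_000)   # the "real" two low bits
guess = rng.integers(0, 4, 1_000_000)   # random low bits

print("random exactly right:", (guess == truth).mean())      # ~0.25
print("zero exactly right:  ", (truth == 0).mean())          # ~0.25
print("mean error, random:", np.abs(guess - truth).mean())   # ~1.25
print("mean error, zero:  ", truth.mean())                   # ~1.5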
 
Both the 14-bit image
and the 12-bit image are written to a 16-bit format during
raw conversion by padding out the extra needed bits.
That's misleading. A transform is applied to convert the linear 12-bit or 14-bit RAW data to nonlinear 16-bit. There's a lot more involved than just 'padding'.
Subsequent manipulations are identical regardless of the
initial bit depth at capture.
There are two questions that I'm not in a position to answer - whether there really is 14-bits-worth of data, and if there is, whether the difference is visible to the human eye. But what I can say is that 14-bit RAW converted to TIFF with a gamma of (say) 2.2 will show much less banding at the dark end of the range as the relatively few pixel levels in the RAW file are 'stretched', so to speak, across a wider range in the TIFF.
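A sketch of that stretching effect, assuming linear RAW normalized to 0..1 and a plain 2.2 gamma to 8-bit output: the lowest 14-bit codes land on nearly every output value, while the lowest 12-bit codes skip output values, which is where visible steps come from.

levels_12 = [n / 4095 for n in range(8)]    # lowest eight 12-bit codes
levels_14 = [n / 16383 for n in range(8)]   # lowest eight 14-bit codes

def to_8bit(x, gamma=2.2):
    return round(255 * x ** (1 / gamma))

print([to_8bit(x) for x in levels_12])  # e.g. 0, 6, 8, 10, ... : gaps between output values
print([to_8bit(x) for x in levels_14])  # e.g. 0, 3, 4, 5, ...  : finer steps in the same range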

If you take correctly exposed 12-bit and 14-bit RAW files, process them conventionally and view them on screen or in print, they will be indistinguishable. This is because all the extra data is discarded at the point of displaying or printing. But anything which digs deep into the shadows, either by pushing the exposure or by heavy image manipulation, will show a difference - and I believe, but can't demonstrate, that it can be visible.
The whole issue is whether there is extra information
recorded in those extra digits
This is the important point; luckily, it can be tested easily. I
analyzed images from the 1DMkIII and it is clear that the extra bits
DO carry information: the gradation is smoother than with the 1DMkII
(it's another issue how it can be made useful in the final product).
That is not scientific. Just because 1d3 images have smoother
tonal gradations than 1d2 images does not mean that this
feature is due to the extra bit depth; there are many differences
between the two cameras -- different sensors, different processors,
and so on. You cannot isolate which of these (or what combination)
is most responsible for the final result just by looking at the
final result.
I agree you can't determine that by simple inspection. But an understanding of the maths will lead you to the conclusion that the 14-bit ADC is likely to be playing a part.
If the camera has 12 stops of dynamic range (and actually with the
one series it's more like 10-11) then 12 bits of data are sufficient
to represent the image. Adding extra bits to the ADC simply quantizes
the noise. Maybe one more bit beyond the dynamic range would
be useful for dithering, but that's about it.
Hmmm.

Useful resource: http://www.normankoren.com/digital_tonality.html
 
Has anyone else noticed that the latest versions of DPP have a RAW adjustment tab that looks like this:

[screenshot of the DPP RAW adjustment tab not preserved]

This is a 20D shot being viewed in DPP. Notice that the histogram now has some unused space "grayed out" at the left and right ends? Could that be to make room for what will be there when I shoot with a 40D or 1DMkIII? Sure looks like extra dynamic range as opposed to just finer gradations, so I'm confused by it all.

The older versions of DPP never had this.

And what about the "highlight priority" mode? Might it not be handy to have an extra bit or two available to digitize things if we turn the gain of the PGAs down a stop before doing the A/D conversion when shooting in HP mode? Of course, the noise of the entire system has to be lower to make this all yield what we'd like it to yield, but it might be so.
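If that guess about HP mode is right, the arithmetic would look something like this sketch (the half-gain behavior is my assumption about the mode, not anything Canon has documented):

import numpy as np

# Scene values normalized so 1.0 clips the ADC at normal gain;
# 1.8 is a highlight that would normally blow out.
scene = np.array([0.001, 0.01, 0.1, 0.5, 1.0, 1.8])

def capture(values, bits, gain):
    levels = 2 ** bits - 1
    return np.round(np.clip(values * gain, 0, 1) * levels) / levels

normal = capture(scene, 14, 1.0)        # the 1.8 highlight clips to 1.0
hp     = capture(scene, 14, 0.5) * 2    # half the PGA gain, push back one stop

print(normal)
print(hp)   # highlight survives; shadows lose one bit, which 14 bits can absorb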

I think Canon put those grayed-out bars on the left and right of the histogram in DPP to tantalize us 20D shooters :)

--
Jim H.
 
That's misleading. A transform is applied to convert the linear
12-bit or 14-bit RAW data to nonlinear 16-bit. There's a lot more
involved than just 'padding'.
True.
whether the difference is visible to the human eye. But what I can
say is that 14-bit RAW converted to TIFF with a gamma of (say) 2.2
will show much less banding at the dark end of the range as the
relatively few pixel levels in the RAW file are 'stretched', so to
speak, across a wider range in the TIFF.
By "as" I assume you meant "because". I would challenge you to demonstrate this. My counterclaim is that what you would actually see is noise, not banding, which, more accurately, would be called posterization.
If you take correctly exposed 12-bit and 14-bit RAW files, process
them conventionally and view them on screen or in print, they will be
indistinguishable. This is because all the extra data is discarded at
the point of displaying or printing. But anything which digs deep
into the shadows, either by pushing the exposure or by heavy image
manipulation, will show a difference - and I believe, but can't
demonstrate, that it can be visible.
It will be visible: similar noise in the 14-bit and 12-bit versions will be accentuated.

--
http://www.pbase.com/victorengel/

 
whether the difference is visible to the human eye. But what I can
say is that 14-bit RAW converted to TIFF with a gamma of (say) 2.2
will show much less banding at the dark end of the range as the
relatively few pixel levels in the RAW file are 'stretched', so to
speak, across a wider range in the TIFF.
By "as" I assume you meant "because".
Actually no, I meant 'when'.
I would challenge you to
demonstrate this. My counterclaim is that what you would actually see
is noise,
Well, that is indeed the $64,000 question, I fully understand that. But whatever the sensor records, a 14-bit ADC will convert it more faithfully - that was my point.
not banding, which, more accurately, would be called
posterization.
Yes, I should have said posterisation.
 
First off, I'm a fan of 14-bit resolution (assuming the DR and noise of the sensor support 14 bits and the A/D conversion is accurate to ±1/2 LSB). I got REALLY REALLY REALLY excited about the 1DmkIII because of the 14 bits and the high-ISO noise improvement; the noise improvement was "real", but the advantage of the extra two bits, for some reason, can't be seen in the area (clear sky transitioning from light to dark) where I had hoped to see an "obvious" improvement. IMO the extra two bits "should be" a big help as you map from RAW to RGB; however, if you can't see the difference you have to question the utility of the feature.
What if you do lots of PP and reapply levels and contrast and so on
many times?
The smart-@ss answer is that you shouldn't have screwed up the picture that badly in the first place. :-) Okay, it's a fair question, and with the kidding out of the way: clearly the extra resolution would be a benefit if multiple rounds of "significant" adjustments are made, especially if you have to go between RGB and Lab color a couple of times (AFAIC once is bad enough).
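A toy illustration of that benefit (an invented curve-and-undo round trip, quantized at each working depth; not any real editor's pipeline): the deeper the working precision, the more distinct shadow levels survive repeated adjustments.

import numpy as np

def round_trip(img, bits, passes=5):
    levels = 2 ** bits - 1
    q = lambda a: np.round(a * levels) / levels
    for _ in range(passes):
        img = q(img ** 2.2)        # apply a strong curve, store at this depth
        img = q(img ** (1 / 2.2))  # undo it, store again
    return img

gradient = np.linspace(0, 1, 10001)
for bits in (12, 14, 16):
    shadows = round_trip(gradient, bits)[gradient < 0.1]
    print(bits, "bits:", len(np.unique(shadows)), "distinct shadow levels of", shadows.size)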

Regards,

Joe Kurkjian, Pbase Supporter

http://www.pbase.com/jkurkjia/original



SEARCHING FOR A BETTER SELF PORTRAIT
 
I've applied the same procedure to both your pairs of images, sandwiched in the same order. I added 5% to the upper layer, then took a difference and expanded it with levels. The first result below comes from applying this procedure to your JPEG pair; the second, to the TIFF pair. The difference between these two is JPEG artifacts.

[the two difference images are not preserved]
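For anyone who wants to reproduce it, the procedure boils down to roughly this (assuming Photoshop-style "difference" and "levels"; the filenames are placeholders):

import numpy as np
from PIL import Image

a = np.asarray(Image.open("upper_layer.tif"), dtype=np.float32)
b = np.asarray(Image.open("lower_layer.tif"), dtype=np.float32)

diff = np.abs(a * 1.05 - b)             # add 5% to the upper layer, then difference
scale = 255.0 / max(diff.max(), 1.0)    # "levels": expand the result to full range
Image.fromarray(np.clip(diff * scale, 0, 255).astype(np.uint8)).save("diff.png")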





--
http://www.pbase.com/victorengel/

 
Both the 14-bit image
and the 12-bit image are written to a 16-bit format during
raw conversion by padding out the extra needed bits.
That's misleading. A transform is applied to convert the linear
12-bit or 14-bit RAW data to nonlinear 16-bit. There's a lot more
involved than just 'padding'.
Fair enough. I don't know where in typical raw converters the
data is padded out to 16 bits. I would have thought that the
sensible thing to do is to pad it out before applying any
nonlinear gamma correction, etc., but that's just me,
maybe the raw conversion programmers see an advantage
to working with less numerical precision when manipulating
the raw data.
Subsequent manipulations are identical regardless of the
initial bit depth at capture.
There are two questions that I'm not in a position to answer -
whether there really is 14-bits-worth of data, and if there is,
whether the difference is visible to the human eye. But what I can
say is that 14-bit RAW converted to TIFF with a gamma of (say) 2.2
will show much less banding at the dark end of the range as the
relatively few pixel levels in the RAW file are 'stretched', so to
speak, across a wider range in the TIFF.
That need not be the case. Take a 12-bit image (raw numbers
going up to 4095 maximum, though Canon's don't quite
reach that), multiply the raw values by 4, and add a random
integer from zero to three. The result will be virtually indistinguishable
from a 14-bit image whose last two bits are pure noise.

Stretch them all you want and they still are going to look
the same. That is what my little example above was trying
to demonstrate.
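That little example is easy to run (synthetic stand-in data here; substitute real raw values if you have them):

import numpy as np

rng = np.random.default_rng(2)
raw12 = rng.integers(0, 4096, 1_000_000)              # stand-in for 12-bit raw values
fake14 = raw12 * 4 + rng.integers(0, 4, raw12.size)   # pad and fill the low bits with noise

# Push the shadows hard (+4 stops) and compare against the plain padded version:
push = lambda a: np.clip(a * 16, 0, 16383)
print(np.abs(push(raw12 * 4) - push(fake14)).max())
# The two never differ by more than the amplified random bits (3 * 16 = 48):
# noise, never new gradation.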
If you take correctly exposed 12-bit and 14-bit RAW files, process
them conventionally and view them on screen or in print, they will be
indistinguishable. This is because all the extra data is discarded at
the point of displaying or printing. But anything which digs deep
into the shadows, either by pushing the exposure or by heavy image
manipulation, will show a difference - and I believe, but can't
demonstrate, that it can be visible.
[snip]
If the camera has 12 stops of dynamic range (and actually with the
one series it's more like 10-11) then 12 bits of data are sufficient
to represent the image. Adding extra bits to the ADC simply quantizes
the noise. Maybe one more bit beyond the dynamic range would
be useful for dithering, but that's about it.
Hmmm.

Useful resource: http://www.normankoren.com/digital_tonality.html
What specifically are you pointing to there? There are
many aspects discussed.

%%%%%%%%%%

One thing that is useful to distinguish in all this discussion
is the 12 or 14 bits of raw data versus what you get coming
out of the raw converter, which is a substantial massaging
of the raw data -- one which is specific to each camera
model, making comparison of TIFF files output from raw
conversion of different camera models' images murky at best.

I would think the utility of the last two bits is best diagnosed
by taking a 14-bit raw image, replacing the least significant
bits with random 1s and 0s, and seeing whether the resulting image
yields a difference after any manipulation (extreme pushing of the
image is probably the most revealing, since it expands the shadow
gradations into the visual range).
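One way to run that diagnosis, assuming rawpy for decoding (the x16 push is a stand-in for whatever extreme manipulation you prefer; the filename is a placeholder):

import numpy as np
import rawpy

raw = rawpy.imread("shot.CR2")      # a 14-bit raw file
data = raw.raw_image.copy()         # Bayer data as a uint16 array

rng = np.random.default_rng(3)
noise = rng.integers(0, 4, data.shape).astype(np.uint16)
scrambled = (data & np.uint16(0xFFFC)) | noise   # randomize the two low bits

push = lambda a: np.clip(a.astype(np.float64) * 16, 0, 16383)
print(np.abs(push(data) - push(scrambled)).max())
# If, after pushing, the two versions differ only by amplified noise,
# the last two bits carried no tonal information.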

--
emil
--



http://theory.uchicago.edu/~ejm/pix/20d/
 
Since most A/D converters are not perfect, maybe the current one is a bit dodgy already in the 12th bit; so couldn't using a 14-bit one at least assure a perfect A/D conversion of 12 bits?
 
