Both the 14-bit image
and the 12-bit image are written to a 16-bit format during
raw conversion by padding with the extra bits needed.
That's misleading. A transform is applied to convert the linear
12-bit or 14-bit RAW data to nonlinear 16-bit. There's a lot more
involved than just 'padding'.
Fair enough. I don't know at what point typical raw converters
pad the data out to 16 bits. I would have thought that the
sensible thing to do is to pad it out before applying any
nonlinear gamma correction, etc., but that's just me;
maybe the raw-conversion programmers see an advantage
to working with less numerical precision when manipulating
the raw data.
Subsequent manipulations are identical regardless of the
initial bit depth at capture.
There are two questions that I'm not in a position to answer -
whether there really is 14-bits-worth of data, and if there is,
whether the difference is visible to the human eye. But what I can
say is that 14-bit RAW converted to TIFF with a gamma of (say) 2.2
will show much less banding at the dark end of the range as the
relatively few pixel levels in the RAW file are 'stretched', so to
speak, across a wider range in the TIFF.
That need not be the case. Take a 12-bit image (raw numbers
going up to a maximum of 4095, though Canon's don't quite
reach that), multiply the raw values by 4, and add a random
integer from zero to three. The result will be virtually indistinguishable
from a 14-bit image whose last two bits are pure noise.
Stretch them all you want and they still are going to look
the same. That is what my little example above was trying
to demonstrate.
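The little example above can be sketched in a few lines of NumPy. Everything here is a stand-in (synthetic data rather than a real raw file), but it shows the construction exactly: shift the 12-bit values up two bits and fill the new low bits with noise.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for 12-bit raw data (values 0..4095).
x12 = rng.integers(0, 4096, size=100_000, dtype=np.uint16)

# Promote to a fake "14-bit" file: multiply by 4 (a two-bit shift)
# and fill the two new least-significant bits with pure noise.
fake14 = x12 * 4 + rng.integers(0, 4, size=x12.shape, dtype=np.uint16)

# The top 12 bits are untouched, so any tone curve or "stretch"
# applied to fake14 behaves like one applied to a genuine 14-bit
# capture whose last two bits carry no signal.
assert fake14.max() <= 16383
assert np.array_equal(fake14 >> 2, x12)
```

Any subsequent stretching operates identically on both, since the two files differ only below the noise floor.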
If you take correctly exposed 12-bit and 14-bit RAW files, process
them conventionally and view them on screen or in print, they will be
indistinguishable. This is because all the extra data is discarded at
the point of displaying or printing. But anything which digs deep
into the shadows, either by pushing the exposure or by heavy image
manipulation, will show a difference - and I believe, but can't
demonstrate, that it can be visible.
[snip]
If the camera has 12 stops of dynamic range (and actually with the
1-series it's more like 10-11) then 12 bits of data are sufficient
to represent the image. Adding extra bits to the ADC simply quantizes
the noise. Maybe one more bit beyond the dynamic range would
be useful for dithering, but that's about it.
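The "extra bits just quantize the noise" point can be illustrated numerically. A minimal sketch, with made-up numbers: assume 2 DN of read noise on the 12-bit scale, then compare the total error (noise plus quantization, added in quadrature) at the 12-bit and 14-bit step sizes.

```python
import numpy as np

rng = np.random.default_rng(1)

# Noisy signal in 12-bit DN: a flat patch plus Gaussian read noise
# of 2 DN (an assumed, illustrative figure).
signal = 100.0 + rng.normal(0.0, 2.0, size=1_000_000)

q12 = np.round(signal)          # quantize at the 12-bit step (1 DN)
q14 = np.round(signal * 4) / 4  # quantize at the 14-bit step (1/4 DN)

rms12 = np.sqrt(np.mean((q12 - signal) ** 2))  # ~ 1/sqrt(12) ~ 0.29 DN
rms14 = np.sqrt(np.mean((q14 - signal) ** 2))  # ~ 0.07 DN

# Total error with 2 DN of noise: the 14-bit advantage all but vanishes.
total12 = np.sqrt(2.0**2 + rms12**2)
total14 = np.sqrt(2.0**2 + rms14**2)
print(total12 / total14)  # barely above 1 (under a 1% difference)
```

When the noise is several times the quantization step, the finer ADC is measuring the noise, not the scene.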
Hmmm.
Useful resource:
http://www.normankoren.com/digital_tonality.html
What specifically are you pointing to there? There are
many aspects discussed.
%%%%%%%%%%
One thing that is useful to distinguish in all this discussion
is the 12 or 14 bits of raw data versus what you get coming
out of the raw converter, which is a substantial massaging
of the raw data -- one which is specific to each camera
model, making comparison of tiff files output from raw
conversion of different camera models' images murky at best.
I would think the utility of the last two bits is best diagnosed
by taking a 14-bit raw image, replacing the least significant
bits with random 1's and 0's, and seeing whether the resulting image
after any manipulation (extreme pushing of the image
is probably the most revealing, by expanding the shadow
gradations into the visual range) yields a difference.
--
emil
--
http://theory.uchicago.edu/~ejm/pix/20d/