The truth about 14-bit

Well, at the very least, better gradation with 14-bit (more "steps", more "linear") will give you less moiré than a 12-bit converter. I'd say less moiré is better...
 
I can say that the images I have taken with the 1D3 do seem to have better color (more lifelike) than what I am used to seeing from my other cameras. This isn’t always the case but it seems to be the trend. I haven’t seen any earth-shattering difference but overall I think I am seeing an improvement so maybe 14-bit is a benefit. The difference could also be related to the tone-curve or other processing as well so… In any case I welcome the 14-bit and look forward to using the 40D.

Greg

--



http://www.pbase.com/dadas115/
 
produce equal intensity levels throughout the picture. Don't take my
word for it, download the two picture files and see (or not as the
case may be) for yourself - there is no visible difference when you
compare 16-bit TIFF files.
I wonder what kind of monitor you are using while looking for a 12/14-bit difference...?
 
Jak,

Do you have a monitor and video card capable of displaying most of the AdobeRGB gamut? If not, it noticeably changes the way the images appear.

My professional-series monitor bit the dust, and I made the mistake of buying an LCD with a smaller gamut; I really notice the difference in the images.

I just checked the other thread you referenced and looked at the images using my old backup professional-series CRT, and the banding was much less noticeable on it than on the new LCD.
Similarly, the 14-bit A/D converter might not seem to affect the image
much at first glance, but the ability to get greater tonality when
you push into the shadows is a definite bonus.
Your statement above is 14-bit hype. The problem is you can't see
the "greater tonality" in the shadows when the 1DmkII and 1DmkIII
pictures are identical and the converter tone curves are normalized to
produce equal intensity levels throughout the picture. Don't take my
word for it, download the two picture files and see (or not as the
case may be) for yourself - there is no visible difference when you
compare 16-bit TIFF files.

Regards,

Joe Kurkjian, Pbase Supporter

http://www.pbase.com/jkurkjia/original



SEARCHING FOR A BETTER SELF PORTRAIT
 
Whether the greater bit depth helps eliminate banding depends on the actual cause of that banding; this is a complex issue.

However, greater bit depth is useful when converting the raw image to 16-bit TIFF, which then undergoes stages of approximation/interpolation.

Interpolation is already necessary in the raw processing (conversion) phase. In specialized applications like HDR, and particularly panorama stitching and blending, several stages of interpolation have to be applied. Add the post-processing to this, like fine color adjustment, resizing and sharpening, and the errors of interpolation can accumulate.
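
To put a toy number on that accumulation: the sketch below (my own made-up pipeline, not any converter's actual processing) applies a mild tone curve and its exact inverse repeatedly in a 16-bit working space, rounding after every step, and compares the drift to the size of one 12-bit and one 14-bit step.

import numpy as np

# Toy model of accumulating round-off -- an arbitrary curve and step
# count, not a real raw pipeline.
WORK = 2 ** 16 - 1

def quant(v):
    return np.round(v * WORK) / WORK

x = np.linspace(0.0, 1.0, 4096)               # an ideal smooth ramp
y = x.copy()
for i in range(1, 51):
    y = quant(quant(y ** 0.95) ** (1 / 0.95))  # edit + undo, both rounded
    if i in (1, 10, 50):
        print(f"{i:2d} round trips: max drift = {np.abs(y - x).max():.1e}")

print(f"one 12-bit step = {1 / (2**12 - 1):.1e}")
print(f"one 14-bit step = {1 / (2**14 - 1):.1e}")

Whether the drift ever grows past the capture step size is what decides if the extra two bits bought you anything.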

--
Gabor

http://www.panopeeper.com/panorama/pano.htm
 
Someone said that some of the A/D converters can be a bit dodgy and not always get the last bit correct, so maybe using 14-bit ones would at least give us a perfect 12 bits, whereas now perhaps we are only getting 11 bits without any mangling?
Anyway, just speculation.
If you read this:
"Adding to the improved virtuosity of the images captured by the EOS
40D SLR is the camera's 14-bit Analog-to-Digital (A/D) conversion
process. Able to recognize 16,384 colors per channel (four times the
number of colors recognized by the EOS 30D SLR's 12-bit conversion
capability), the EOS 40D camera is able to produce images with finer
and more accurate gradations of tones and colors."
The obvious impact of not having it is fewer gradations, which
means bigger jumps between gradations. This will be perceived as
banding.
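
For what it's worth, the arithmetic in that press release is easy to check; a trivial sketch of the step sizes involved (nothing camera-specific, just the numbers):

for bits in (12, 14):
    levels = 2 ** bits
    print(f"{bits}-bit: {levels:5d} levels per channel, "
          f"step = 1/{levels - 1} = {1 / (levels - 1):.2e} of full scale")

Whether a jump of 1/4095 versus 1/16383 of full scale is ever visible is the whole argument.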

In case I am not being obvious enough: I think this is pure marketing
all the way. Like when the Sony Cybershots with micro sensors had
14-bit ADCs, or when Pentax went to a 22-bit ADC. Or even when the
Canon G series went from 10-bit to 12-bit, when those tiny sensors
don't really have the S/N to support even 12 bits, let alone the
14 bits that Sony was using on the Cybershots.

There is a slight possibility that this will make an infinitesimal
difference on the 1Dmk3 with its fat, sensitive, low-noise pixels.
None at all that it will make a difference with the 40D.

But it is certainly paying off in increased tongue-wagging for Canon.
 
However, greater bit depth is useful when converting the raw image to
16-bit TIFF, which then undergoes stages of
approximation/interpolation.

Interpolation is already necessary in the raw processing (conversion)
phase. In specialized applications like HDR, and particularly panorama
stitching and blending, several stages of interpolation have to be
applied. Add the post-processing to this, like fine color
adjustment, resizing and sharpening, and the errors of interpolation
can accumulate.
Huh? No interpolation necessary, just adding extra digits.
A $1 bill is worth 100 cents. I have not interpolated the
value of the dollar to render it in pennies. There is no
extra "information" expressed in adding two extra digits
to the representation of the monetary amount.

Both 14-bit and 12-bit image data are written to 16-bit
data formats for further manipulation by raw converters
and Photoshop, in the same way as the above analogy:
pad out the extra digits with zeros. No info gained or lost.
Further editing takes place in the same 16-bit format.
The whole issue is whether there is extra information
recorded in those extra digits, and for that to be the
case the camera would have to have far more dynamic
range (equivalently, far less noise) than it does.
Without true image information beyond random noise
in those extra two bits, there is nothing for the raw converter
or Photoshop to work with that would distinguish a
14-bit file from a 12-bit one.
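
Both halves of that argument can be sketched with toy numbers (mine, not measured camera data):

import numpy as np

# Padding: a 12-bit sample goes into a 16-bit container by shifting in
# zeros -- exactly like writing $1 as 100 cents.
v12 = 2345
print(v12, "->", v12 << 4)                 # same value, wider container

# Noise: assume (my assumption, not a measurement) per-pixel noise a
# couple of 12-bit steps wide.  Quantizing the same noisy signal at 12
# or 14 bits then recovers the same underlying value.
rng = np.random.default_rng(0)
signal = 0.3712                            # "true" scene value in 0..1
samples = signal + rng.normal(0, 2 / 4096, 100_000)
for bits in (12, 14):
    q = np.round(samples * (2 ** bits - 1)) / (2 ** bits - 1)
    print(f"{bits}-bit: recovered mean {q.mean():.6f} vs true {signal}")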

--
emil
--



http://theory.uchicago.edu/~ejm/pix/20d/
 
Huh? No interpolation necessary, just adding extra digits
The interpolation is not coming from the greater bit depth but from the manipulations, typically from calculating the value of a "new pixel" from "existing pixels". This occurs, for example, in transformations of the image, in resizing, in stitching, in sharpening, and in enhancing the contrast. In fact there are not many changes to the image which do not involve calculating a new pixel value.

Another such occasion is changing between color spaces; Lab offers some advantages over RGB, but converting back and forth causes small errors.
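
To make "calculating a new pixel from existing pixels" concrete, this is plain bilinear interpolation, the textbook formula hiding inside every resize, rotation, or stitch (no particular converter's code):

def bilinear(img, x, y):
    # Blend the four pixels surrounding the fractional position (x, y).
    x0, y0 = int(x), int(y)
    fx, fy = x - x0, y - y0
    return ((1 - fx) * (1 - fy) * img[y0][x0]
            + fx * (1 - fy) * img[y0][x0 + 1]
            + (1 - fx) * fy * img[y0 + 1][x0]
            + fx * fy * img[y0 + 1][x0 + 1])

grid = [[10, 20],
        [30, 40]]
print(bilinear(grid, 0.5, 0.5))   # 25.0 -- a value no original pixel had

The blended result then has to be rounded back to the working bit depth, and those roundings are exactly the small errors that accumulate.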
Both 14-bit and 12-bit image data are written to 16-bit
data formats for further manipulation by raw converters
and Photoshop, in the same way as the above analogy
This has nothing to do with the subject; even two valuable bits can be stored in 16 bits. It plays no role, as long as the storage is not smaller than the actual value requires.
The whole issue is whether there is extra information
recorded in those extra digits
This is the important point; luckily, it can be tested easily. I analyzed images from the 1DMkIII, and it is clear that the extra bits DO carry information; the gradation is smoother than with the 1DMkII (it's another issue how it can be made useful in the final product).
and for that to be the
case the camera would have to have far more dynamic
range (equivalently, far less noise) than it does
There is no such connection. Dynamic range and bit depth are almost totally independent.
Without true image information beyond random noise
in those extra two bits, there is nothing for the raw converter
or photoshop to work with that would distinguish a
14-bit file from a 12-bit one.
I wonder on what you base the assumption that there is only "random noise" in the extra bits?

--
Gabor

http://www.panopeeper.com/panorama/pano.htm
 
It's not whether you can tell the difference between unmodified images; it's
the extra tones available for editing. The 14-bit option gives us 16K
tones while 12-bit gives us 4K. It is hard to believe that the
entire extra 12K tones of 14-bit will get lost in noise. I expect the
histogram to hold up much better under editing with 14-bit data.
Believe it. You don't get extra tones of any substance. It's just noise. The problem is that at 14 bits, the precision exceeds the accuracy of the data. Precision is meaningful only when the data is accurate to that precision.

All you have to do to verify this is to do some pixel peeping. If using 14 bits were significant, then truncating at 12 bits would result in posterization. Until someone can show me an example of such posterization, I'll continue to believe there is no value to 14 bits. I haven't seen it yet, but that doesn't mean it doesn't exist.
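
A dry run of that truncation test on a synthetic ramp (not a real raw file) shows how small the per-code change actually is:

import numpy as np

# Drop the bottom two bits of every 14-bit code and see how far any
# value moves.  Synthetic data, not an actual raw.
v14 = np.arange(2 ** 14, dtype=np.int32)   # every 14-bit code value
v12 = (v14 >> 2) << 2                      # truncated to 12-bit steps
print("max change:", (v14 - v12).max(), "of 16383")   # 3 codes, ~0.02%

Whether a 3-code shift survives heavy curves as visible posterization is exactly what the pixel peeping has to show.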
--
http://www.pbase.com/victorengel/

 
Jak,

Do you have a monitor and video card capable of displaying most of
the AdobeRGB gamut? If not, it noticeably changes the way the images
appear.
Video card, no problem.

LCD monitor, I seriously doubt its gamut is large enough to cover the entire AdobeRGB gamut. I spent a LOT of time testing monitors to get one that actually displayed the lower bits after calibration. FWIW my LCD monitor is a Viewsonic VP930. BTW, my CRT display is a Viewsonic G790.

Both my LCD and CRT monitors clearly show Mach banding for both the 1DmkII and 1DmkIII files (that is why we used a CPL, to create every possible opportunity for banding). Problem is, and this was a REAL surprise to me, there is no improvement (i.e. reduction) in banding when you look at the 14-bit 1DmkIII file (note, IMO not even a slight teeny tiny hint of improvement).

Regards,

Joe Kurkjian, Pbase Supporter

http://www.pbase.com/jkurkjia/original



SEARCHING FOR A BETTER SELF PORTRAIT
 
So I can't comment, but if you have a complex range of tones within the scene, that is when you are most likely to notice any differences.

Another factor to consider: while you can study a shot and not see the differences, your eye does more than just look at colour; it uses the range of shadow and highlight tones to create a 3D perspective image, and I think this is where it will be noticeable - and where it will close the gap with film.

The Highlight Tone Priority mode is also 14-bit.
Please refer to my post above:
http://forums.dpreview.com/forums/read.asp?forum=1019&message=24454904

Regards,

Joe Kurkjian, Pbase Supporter

http://www.pbase.com/jkurkjia/original



SEARCHING FOR A BETTER SELF PORTRAIT
--

 
Huh? No interpolation necessary, just adding extra digits
The interpolation is not coming from the greater bit depth but from
the manipulations, typically from calculating the value of a "new
pixel" from "existing pixels". This occurs, for example, in
transformations of the image, in resizing, in stitching, in
sharpening, and in enhancing the contrast. In fact there are not many
changes to the image which do not involve calculating a new pixel
value.

Another such occasion is changing between color spaces; Lab offers
some advantages over RGB, but converting back and forth causes small
errors.
Cumulative errors in image manipulation post-capture have
nothing to do with whether the image has 12-bit depth
or 14-bit depth at the time of capture. Both the 14-bit image
and the 12-bit image are written to a 16-bit format during
raw conversion by padding out the extra needed bits.
Subsequent manipulations are identical regardless of the
initial bit depth at capture.
Both 14-bit and 12-bit image data are written to 16-bit
data formats for further manipulation by raw converters
and Photoshop, in the same way as the above analogy
This has nothing to do with the subject; even two valuable bits can
be stored in 16 bits. It plays no role, as long as the storage is
not smaller than the actual value requires.
The whole issue is whether there is extra information
recorded in those extra digits
This is the important point; luckily, it can be tested easily. I
analyzed images from the 1DMkIII, and it is clear that the extra bits
DO carry information; the gradation is smoother than with the 1DMkII
(it's another issue how it can be made useful in the final product).
That is not scientific. Just because 1d3 images have smoother
tonal gradations than 1d2 images does not mean that this
feature is due to the extra bit depth; there are many differences
between the two cameras -- different sensors, different processors,
and so on. You cannot isolate which of these (or what combination)
is most responsible for the final result just by looking at the
final result.
and for that to be the
case the camera would have to have far more dynamic
range (equivalently, far less noise) than it does
There is no such connection. Dynamic range and bit depth are almost
totally independent.
If the camera has 12 stops of dynamic range (and actually with the
1-series it's more like 10-11), then 12 bits of data are sufficient
to represent the image. Adding extra bits to the ADC simply quantizes
the noise. Maybe one more bit beyond the dynamic range would
be useful for dithering, but that's about it.
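The rule of thumb behind that: one bit covers one stop, since DR in stops is log2(full well / read noise). A back-of-the-envelope sketch with placeholder electron counts (assumed, not measured 1DmkIII figures):

import math

full_well = 60_000.0   # electrons (assumed)
read_noise = 20.0      # electrons (assumed)
stops = math.log2(full_well / read_noise)
print(f"DR ~ {stops:.1f} stops -> ~{math.ceil(stops)} bits suffice")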
Without true image information beyond random noise
in those extra two bits, there is nothing for the raw converter
or photoshop to work with that would distinguish a
14-bit file from a 12-bit one.
I wonder on what you base the assumption that there is only "random
noise" in the extra bits?
http://www.openphotographyforums.com/forums/showpost.php?p=31580&postcount=8

and the rest of the thread from which this post originates. BTW the
discussion there concerns the 1d3, whose larger photosites
will surely have more DR than those of the 40D, for which 14-bit
makes even less sense.

--
emil
--



http://theory.uchicago.edu/~ejm/pix/20d/
 
This is the important point; luckily, it can be tested easily. I
analyzed images from the 1DMkIII, and it is clear that the extra bits
DO carry information; the gradation is smoother than with the 1DMkII
(it's another issue how it can be made useful in the final product).
Really? Care to share images from this analysis, and what 12-bit display were you using to note that 4096 shades were inadequate? Certainly you wouldn't use a display capable of only displaying 256 shades (8-bit) to determine that 4096 was inadequate?
 
Yeah, I'd like a 12-bit-per-channel monitor too :)

Currently, top-end graphics cards only offer 10 bits per channel through the DVI port.

The 32-bit colour mode is really 24 bits of colour (8 per channel) for working with a large tone palette; the other 8 bits are for alpha.
 
At one time 16-bit audio ruled, and everyone thought extra bits were not needed.

Then eventually everyone wanted more bits; the next big step was 20-bit audio with a theoretical dynamic range of 120 dB. Professional studios only get about 110 dB S/N using high-end sources, so 120 dB was as much as would ever be needed. 110 dB is like being in the front row of a rock concert, and 0 dB is the threshold of human hearing.

But when 24-bit systems came out, they sounded better than 20-bit systems. It turned out that our ears can pull out information that is buried in the noise by several bits. Could you hear it on every recording? No. Most "rock" music is virtually indistinguishable between 16-bit and 24-bit, but on other music it was obvious. And now 24-bit audio can represent every level of dynamics the human ear can detect, yet we are still moving toward 32-bit.
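
For reference, those dB figures come straight from the textbook ~6.02 dB (20 * log10(2)) of theoretical dynamic range per bit:

import math

# ~6.02 dB of quantization dynamic range per bit (20 * log10(2)).
for bits in (16, 20, 24, 32):
    print(f"{bits}-bit: ~{20 * math.log10(2 ** bits):.0f} dB theoretical")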

Sample rates for audio have gone from 48 kHz to 96 kHz, and now 192 kHz is the professional norm. That is 4x what is theoretically necessary to capture the full frequency range of perfect human hearing.

Until we have the 40D in our hands and the RAW software has been optimized to handle the 14-bit data, we can't make any determinations.

I think Canon feels there is a value to it. It is such an obscure spec that there isn't any real marketing value in it, except amongst us geeks. But maybe they are just paving the way for future sensors that have a high DR.

mike
--
http://www.pbase.com/chibimike
 
