A 2-bit (14 vs 12) challenge

After 404'ing the original link, I hunted for the original JPEG and
CR2 on Imaging Resource, and it's nothing like the TIFF I downloaded.
Mine is rotated 90 degrees, and the white area under the child's chin
is a medium to dark green.

I've downloaded the TIFF from the above link 3 times and get the same
result. Anyone else?
You have it right. As I mentioned in my original post, the only thing that has been done to this file is debayering (apart from bit truncation in one of the two files). To reproduce the Imaging Resource appearance, you have to do manually all the other things that are done under the hood by a raw converter -- set the black and white points, white balance, gamma correction, etc.

The reason the white area looks green is that, as in most raw data, the red and blue channels are quite underexposed relative to the green channel. The raw converter applies a levels correction separately to each color channel based on the WB setting written into the metadata of the raw file. Since this involves further manipulation of the image, I thought it best left up to the individual to carry it out; the sample files are the least manipulated they could possibly be, and hence the most unadulterated source for the challenge.

There is a nice tutorial on correcting white balance in Photoshop on Ron Bigelow's site
http://ronbigelow.com/articles/eyedroppers/eydroppers.htm

Or, if you don't want to know the details, do an Auto Color correction in Photoshop.

--
emil
--



http://theory.uchicago.edu/~ejm/pix/20d/
 
...I'll have to try it out, but some PS image tweaks can make particular aspects of an image much more noticeable.
 
I don't think a face is an appropriate image for DR comparison. It doesn't matter whether the DR change is due to a new A-D converter or a better pre-amp; we should compare the results using an image that stresses the DR capability of the sensor. To do that I would suggest a scene with one half a well-exposed bright view out a window and the other half a very dark interior with lots of subtle shadow detail, e.g. a dark bookcase full of books. Make two files, one with two bits truncated, then compare the results for the dark indoor portion after pushing the exposure to try to reveal the shadow details. The shadows should have subtle, identifiable details that are barely perceptible, like book titles, which would be damaged by quantization noise -- not just smooth regions that would look fine if rendered flat.
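Not having such a scene to hand, here is a minimal, purely synthetic sketch of that kind of comparison (Python with numpy; the signal levels, the 5-stop push, and the assumption of roughly one electron per count are all invented for illustration, not taken from any of the posts):

import numpy as np
rng = np.random.default_rng(0)
signal = np.linspace(2.0, 10.0, 4096)        # faint shadow gradient, in 14-bit counts
noisy = rng.poisson(signal).astype(float)    # shot noise, assuming ~1 electron per count
q14 = np.round(noisy)                        # 14-bit quantization (1-count steps)
q12 = np.round(noisy / 4.0) * 4.0            # 12-bit quantization (4-count steps)
push = 2 ** 5                                # push the shadows 5 stops
print("RMS error after push, 14-bit:", np.std((q14 - signal) * push))
print("RMS error after push, 12-bit:", np.std((q12 - signal) * push))

Whether the extra error from the coarser steps would actually be visible against the shot noise is exactly what a real-scene test would have to show.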
 
Hi, DSP,
I don't think a face is an appropriate image for DR comparison. It
doesn't matter if the DR change is due to a new A-D converter or a
better pre-amp we should compare the results using an image that
stresses the DR capability of the sensor.
Does this mean that you feel that the only (significant) potential advantage of greater bit depth at the ADC is improved dynamic range?

By the way, I'd be interested in what definition of dynamic range you hold to. Is it the ISO definition (the ratio of the photometric exposure corresponding to the highest digital output to the photometric exposure at which the signal-to-noise ratio would be one)? Is it the ratio of the photometric exposure corresponding to the highest digital output from the sensor to the photometric exposure corresponding to the lowest non-zero output (or the lowest "above black" output)?

Thanks.

Best regards,

Doug
 
Take a look at the difference layer. Add a Curves adjustment to it and drag the top-right corner of the curve toward the top left, to an input of about 8 or so. Now the differences are hugely amplified. Draw a selection around some "distinctly visible" area on the difference and then look at the two originals, even at 200%.

The difference is 1 RGB unit at random places. If noise pleases you, then good for you.
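For anyone who would rather do this outside Photoshop, a rough numeric analogue of the same recipe (Python; the tifffile package is an assumption, and the script assumes the two samples have been saved locally under their original names):

import numpy as np
import tifffile  # assumed TIFF reader; any 16-bit-capable reader would do
a = tifffile.imread("sample-A.tif").astype(np.int32)
b = tifffile.imread("sample-B.tif").astype(np.int32)
diff = np.abs(a - b)
print("max difference:", diff.max())
print("fraction of pixels that differ:", np.mean(diff > 0))
# analogue of pulling the top-right curve point to ~8: map 0..8 onto 0..255
amplified = np.clip(diff * 32, 0, 255).astype(np.uint8)
tifffile.imwrite("difference-amplified.tif", amplified)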

--
Gabor

http://www.panopeeper.com/panorama/pano.htm
 
Hi, Emil,
But is this an issue? 14-bit is supposed to help ease tonal
transitions across the board, for instance to decrease posterization
of skies etc. These are not in the lowest two EV of exposure.
I certainly agree with your concept.

But my question remains: what exactly would be meant by the "lowest 2 Ev [I assume that "2 stops" would be a better notation] of exposure".

If the nominal black point in a 12-bit raw file is 1024 raw units, then the "lowest 2 stops of exposure" would extend from what value through what value? I'm not able to develop any concept of the meaning of that term.

If the photometric exposure were considered to be proportional to r-1024, then one could argue that the "lowest two stops" (that is, the lowest 1:4 range) would extend from r=1025 through r=1028. Is that what you mean?

It's a dangerous notation, a little like asking what range of frequencies, from zero up, constitutes two octaves.

A safer thing would be to speak of a certain fraction of the range. We could for example call the range from 1024 through 1030 "the lowest 0.002 of the range" (the range running from 1024 to 4096).

But to force this into "stops" notation, where a ratio isn't really involved (one limit being zero) just doesn't make sense to me.

For example, how many stops below saturation is the black point? Answer: Infinity. So it's not clear where 2 stops higher than that would be.

Thanks.

Best regards,

Doug
 
Hi, DSP,

Does this mean that you feel that the only (significant) potential
advantage of greater bit depth at the ADC is improved dynamic range?

By the way, I'd be interested in what definition of dynamic range you
hold to. Is it the ISO definition (the ratio of the photometric
exposure corresponding to the highest digital output to the
photometric exposure at which the signal-to-noise ratio would be
one)? Is it the ratio of the photometric exposure corresponding to
the highest digital output from the sensor to the photometric
exposure corresponding to the lowest non-zero output (or the lowest
"above black" output)?

Thanks.

Best regards,

Doug
Yes, the only significant impact of greater bit depth is improved DR for each color component. Consider a smoothly varying gray region of the image with an A-D count above the black level of about 64 for all three color components. Shot noise alone for that region would cause a standard deviation in the pixel values of 8 counts, so those pixels would not suffer from posterization at the sensor, and adding 2 more bits would have no effect. Only for those color components or regions where the count above the black level is low, i.e. where the sensor DR is stressed, would the added bits have any effect.

The ISO definition of DR is what I would use, but you need to define noise as any deviation from a low-level reference signal so that quantization effects are included. In addition, the resolution of the reference signal must be defined, since noise suppression can be used to increase dynamic range by reducing noise while still preserving large-scale signal structure, even though fine structure would be removed.
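A hedged back-of-the-envelope version of that argument (Python; it assumes roughly one electron per count so that shot noise is sqrt(signal) counts, and treats quantization with step q as adding q/sqrt(12) counts of noise in quadrature):

import numpy as np
for signal in (64.0, 4.0):                    # counts above the black level
    shot = np.sqrt(signal)                    # shot noise, assuming ~1 e-/count
    for step, label in ((1, "14-bit"), (4, "12-bit")):
        quant = step / np.sqrt(12.0)          # quantization noise for this step size
        total = np.hypot(shot, quant)
        print(f"{signal:4.0f} counts above black, {label}: total noise {total:.2f} counts")

At 64 counts above black the 4-count step adds almost nothing to the 8-count shot noise; only down near a few counts above black does the coarser step start to matter, which is the point made above.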
 
Oddly, they both have pixel values which are multiples of 2, and are
double the DCRAW undebayered (-D) values. Does IRIS do this in the
debayering? I would have thought it might do that to reduce
digitization noise a little on conversion, but since all pixel values
in your samples are multiples of 2 (even), really nothing was gained
at all...
I'm not sure what you're referring to here; the RGB values of both
samples take on odd values when I look at the two sample tiffs in
IRIS. I also do not know exactly what is being done by IRIS during
the bayer interpolation (that is, what algorithm is being
implemented).
Strange... when I look at both samples in MaximDL, the values are all even for R, G, and B.

With Sample A, the max pixel value goes up to 32768 (mainly the specular highlight in the eyes) and the min pixel is about 1440, with the main part of the histogram in the 1900-28000 range.

When I run through DCRAW with no interpolation:
C:\cygwin\dcraw> dcraw -T -D -4 -v Y0B0B0494.CR2

I get a min of 1031 and a max of 15280, with the main part of the histogram in the 1000 to 14000 range.
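If anyone wants to check the even-values observation without MaximDL, something like this counts how the low bits of the two samples are populated (Python; tifffile is assumed, and the script assumes the samples are saved locally under their original names):

import numpy as np
import tifffile  # assumed; not one of the tools used in the posts
for name in ("sample-A.tif", "sample-B.tif"):
    img = tifffile.imread(name)
    print(name, "odd values:", int(np.count_nonzero(img & 1)),
          "| not divisible by 4:", int(np.count_nonzero(img & 3)))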

None of this invalidates anything as long as you truncated before conversion (which you did), so I guess further investigation is not adding to the thread.

Rick
 
Thanks. Nice discussion.

I believe the full ISO definition of dynamic range of a digital camera embraces most of the considerations you mention (directly or indirectly).

Of course, there are those who feel that increased resolution of luminance has benefits not related to dynamic range.

I myself haven't considered the matter enough to have a refined opinion.

In addition, I am still a little confused as to the effect of different quantizing step sizes on the impact (in the digital domain) of random noise on a continuous (and essentially random) variable of interest (here, photometric exposure) prior to quantization. I will have to tune up my quantizing theory in that regard!

Thanks again for your insights.

Best regards,

Doug
 
In your test you assume that 14-bit is for providing more tonality detail. But let's say for a second that the 40D DR is 2 stops better; then you need the 2 extra bits to preserve the same tonality detail found in the 30D's 12-bit RAW. To evaluate the results if the 40D with 2 stops better DR were writing 12-bit raw files, you would need to compare the tonality detail between 12 bit and 10 bit.

Of course the reality is that the 40D may have, say, 2/3 stop better DR than the 30D. Still, you can't make 12-2/3-bit chips! And you can't really make them 13-bit either, because an even number of bits is much easier. It also provides room for future DR improvements: no need to redesign that part of the chip for future dSLR generations.

The reality may be that 14-bit is for some DR improvement, maybe a little more tonality detail, and, of course, manufacturers are not going to complain if some people upgrade their camera just because of the 14-bit feature. Just as they don't mind if people upgrade to get 2 more MP, say, even if the true impact on image quality is less than the numbers suggest.

I applaud the idea behind the test. Very clever!
--
Thierry
 
I know GPS is totally different. Noise in CCD/CMOS is thermal and shot noise, and there is also leakage current, cross talk, etc., samples of which are uncorrelated. You can of course interpolate the extra 2 bits in between the original 12-bit values and produce a 14-bit file, but those will be fake tones. In fact, if you did this and the result is the same as the 14-bit file, then it tells you the extra 2 bits don't do anything.
Nice tests,
14-bit ADC is more of a marketing gimmick for commercial DSC sensors;
the noise margin for typical CMOS/CCD image sensors is higher than
such bit depth.
True. But if the noise is not totally white, you may be able to dig
out some information to help, e.g. to reduce some noise components. In
case you do not happen to know: the GPS signal is entirely below the
noise level, and is detected by correlation. With images you cannot use
long-time correlation, but if you have an accurate enough model of,
e.g., the noise creation mechanisms, there may be ways to tackle that.

So before we know how Canon is using the 14-bit data, there is no
justified reason to say this is just a gimmick.
Now if you fill in 4 values by even linear interpolation
between two adjacent 12-bit values, you will not be able to visually
differentiate that from the original 14-bit file. 14-bit ADCs have been
around for a while and have been used for special-application CCDs
which have over 20 EV of dynamic range, but until recently the prices
were very high. Now that prices have come down, it is becoming a trend
to have a 14-bit ADC. I have not yet seen a single sample where it
makes a visually measurable difference; it is more a waste of flash
memory and resources.

Arash
For those of you who think the 14 bit color depth on the new
generation of camera bodies is the cat's meow, here's a challenge:
show us that the extra two bits are worthwhile, in terms of smoother
tonal gradients, ability to withstand editing manipulations, etc.

Here's the setup: I downloaded one of the 1D3 raw files to be found at
http://www.imaging-resource.com/PROD...K3/E1DMK3A.HTM
specifically the sample raw file
http://www.imaging-resource.com/PROD.../Y0B0B0494.CR2
will be used here. You can find a full resolution jpeg on the above
webpage (just click on the thumbnail) if you want to see what it
would look like after a typical raw conversion.

I took the raw file into IRIS, a freeware image analysis tool used in
astrophotography. One file was simply bayer interpolated from the
raw using IRIS (under the hood I believe the engine is dcraw), and so
has full 14-bit resolution. Another copy of the file had its raw
values divided by 4 and then multiplied by 4 to truncate the last two
bits (since in integer arithmetic the fractional part is dropped when
dividing by four, the last two bits are set to zero when scaling back
up again by four). While this is
not the optimal way to truncate the two least significant bits, it
will do for the present purpose. Here are the two files:

http://theory.uchicago.edu/~ejm/pix/...s/sample-A.tif
http://theory.uchicago.edu/~ejm/pix/...s/sample-B.tif

Please note that these are 60MB, 16-bit tiff files, so be sure you
want them before trying to download (I hope my server is up to it).
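For reference, a minimal numeric sketch of the truncation step described above (Python; the array values are arbitrary examples, and the actual processing was done in IRIS, not with this code):

import numpy as np
raw14 = np.array([1031, 1032, 1033, 1034, 1035, 15280], dtype=np.uint16)
raw12 = (raw14 // 4) * 4      # integer division drops the low two bits
print(raw14 & 3)              # low two bits of the originals: varied
print(raw12 & 3)              # low two bits after truncation: all zero
print(raw12)                  # values now step in multiples of 4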


Post-process as much as you like -- stretch the histogram wildly to
push shadows six stops, for instance, so that any issues will show up
at the 8-bit resolution of our monitors. Or stretch highlights so
that banding of lighter tones should be apparent on our monitors.
There are any number of ways of making the distinction between 12-bit
and 14-bit color depth apparent on a standard computer monitor using
Photoshop, if you are sufficiently creative, AND there is something
to be seen. It's not that you would necessarily take such extreme
measures during an ordinary editing session; rather, the issue is
whether there is anything in principle to be gained by adding the two
extra bits.
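As one concrete example of such a stretch, applied identically to both samples (Python; tifffile, the rough 1440 black point, and the straight linear push are my assumptions, not a prescription, and the script assumes the samples are saved locally under their original names):

import numpy as np
import tifffile  # assumed 16-bit TIFF reader/writer
for name in ("sample-A.tif", "sample-B.tif"):
    img = tifffile.imread(name).astype(np.float64)
    img = np.clip((img - 1440.0) * 2 ** 6, 0, 65535)   # subtract a rough black point, push 6 stops
    out = (img / 65535.0 * 255.0).astype(np.uint8)     # reduce to the monitor's 8 bits
    tifffile.imwrite(name.replace(".tif", "-pushed.tif"), out)

Any banding or extra grain should then show up in the deep shadows of the two pushed files.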

Your task is to discern which file is the 12-bit file and which is
the 14-bit file, by post-processing each in exactly the same way, and
demonstrating that one stands up better to manipulation. If you can
find a difference, show us the proof and tell us what you did so that
we can reproduce your methods. If no one can find a meaningful
difference, then the extra two bits are in practice superfluous (and
wasteful).

Please note that no setting of black/white points, gamma correction,
white balance, curves correction, etc., has been done to these tiff
files -- only the bayer interpolation has been performed. You'll have
to do manually in Photoshop all these other corrections that raw
converters do. I think that's fairer, so that you have available the
nearly raw data without prior manipulation other than bayer
interpolation (and in one case bit truncation).

--
emil
--



http://theory.uchicago.edu/~ejm/pix/20d/
 
14-bit has been around for a decade or more and has been used in military equipment... Yeah, I remember that Sony camera; I actually had one, a DSC-S75 or something. The funny thing was it didn't even have RAW, so in that case it was a complete marketing gimmick.
14-bit ADCs have been
around for a while and have been used for special-application CCDs
which have over 20 EV of dynamic range, but until recently the prices
were very high. Now that prices have come down, it is becoming a trend
to have a 14-bit ADC.
Sony (from 2001) uses 14bit ADC (DSC 707, 717, 828, R1): "14-Bit DXP
A/D Conversion
Sony's 14-bit Digital EXtended Processor captures the range between
highlight and shadow with up to 16,384 values, for extended dynamic
contrast and detail."

So agree: 14Bit ADC has been around for a while

http://www.pbase.com/arra
 
to implement in silicon.
--
Thierry
 
DR is limited by the image sensor, not the ADC chip; using a 14-bit ADC doesn't mean you get more DR. The 40D has not been shown to have any more DR than the 30D, and practically it would be very difficult for it to be so due to the smaller pixel pitch.

Arash
In your test you assume that 14-bit is for providing more tonality
detail. But let's say for a second that the 40D DR is 2 stops
better; then you need the 2 extra bits to preserve the same
tonality detail found in the 30D's 12-bit RAW. To evaluate the
results if the 40D with 2 stops better DR were writing 12-bit raw
files, you would need to compare the tonality detail between 12 bit
and 10 bit.

Of course the reality is that the 40D may have, say, 2/3 stop better
DR than the 30D. Still, you can't make 12-2/3-bit chips! And you
can't really make them 13-bit either, because an even number of bits
is much easier. It also provides room for future DR improvements: no
need to redesign that part of the chip for future dSLR generations.

The reality may be that 14-bit is for some DR improvement, maybe a
little more tonality detail, and, of course, manufacturers are not
going to complain if some people upgrade their camera just because of
the 14-bit feature. Just as they don't mind if people upgrade to get
2 more MP, say, even if the true impact on image quality is less than
the numbers suggest.

I applaud the idea behind the test. Very clever!
--
Thierry
 
