12bit versus 14bit - another sample test

David314

I took some software code that I had written several years ago to decode the raw files from a Fuji F700 and altered it to read the D300's uncompressed 14-bit raw files.

With this I was able to read the linear sensor data with no interpolation - this is truly raw.

I will post some snippets of this code in the following post.

This code is an outgrowth of dcraw of that vintage. With a uniformly exposed towel I can see no difference between 12 bits and 14 bits in detail or noise, but.....

Take this picture, shot raw 14-bit uncompressed on the D300, shown here full size:



Now things get interesting.

Question 1 - can you see a difference between 12-bit and 14-bit? Remember, the "12-bit" version here is the 14-bit file with just the 2 LSBs truncated. The answer is yes.

Question 2 - can you see any detail in the window with just the LSBs? The answer is no.

Here is a screen shot. Compare the lower left hand corner (14-bit, keeping the 6 LSBs) with the upper right hand corner (simulated 12-bit, keeping its lowest 4 bits).

This is a uniform white region just below the windowsill - notice the 14-bit's less coarse noise.

Also note the lower right hand corner, which shows simply noise in the bright region.



And the final question - is there any image data in the 2 LSBs of the 14-bit data file?

The answer is yes, in the very darkest parts of the picture -

This is the 2 LSBs of the 14-bit data - nothing else - shifted in the program into a 16-bit value for import into Photoshop.
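The shift into a 16-bit value can be sketched like this (a minimal sketch, assuming the straightforward approach; the function name is mine, not from David's program):

```c
#include <stdint.h>

/* Isolate the 2 LSBs of a 14-bit sample and shift them to the top
   of a 16-bit value so the four possible states (0..3) become
   visible tones when the result is imported into Photoshop. */
uint16_t two_lsb_to_16bit(uint16_t usSensor)
{
    uint16_t lsb2 = usSensor & 0x0003;   /* keep only bits 0 and 1 */
    return (uint16_t)(lsb2 << 14);       /* 0..3 -> 0x0000, 0x4000, 0x8000, 0xC000 */
}
```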



Note these are all the same image, with the output manipulated by software.

So to sum it up - it appears that 14-bit does capture some detail and does produce a different noise pattern (less grain), but only in the darkest parts of the image.

Good shooting!

David
 
If you really want to know how to decode raw files, get dcraw.c.

I downloaded the files several years ago; it is a stupendous effort but very difficult to modify. I wish to acknowledge the wonderful contribution of Dave Coffin.

If you wish to reproduce this experiment, here is a start in the right direction - and in case I made a mistake in the coding, it can be reviewed.

Note: for the D300, RAW_WIDTH = 4352 and RAW_HEIGHT = 2868.

This is the code that masked the LSBs for the noise comparison.

Basically it masks all but the 6 LSBs of the 14-bit file; in the 12-bit case it also masks out the 2 LSBs:

if(BitMask)  usSensor = (usSensor & 0x003c);   /* simulated 12-bit: drop the 2 LSBs, keep the next 4 */
if(BitMask1) usSensor = (usSensor & 0x003f);   /* 14-bit: keep the 6 LSBs */

If you substitute

usSensor = (usSensor & 0x0003);

you create a 16-bit quantity from the 2 LSBs of the 14-bit raw data.

Code to follow - note this works for uncompressed raw:

fseek(fInput, -RAW_WIDTH*RAW_HEIGHT*2, SEEK_END);  /* pixel data sits at the end of the NEF */

printf("Reading file\n");

for (iRow = 0; iRow < RAW_HEIGHT; iRow++)
{
    fread(usDiode, 2, RAW_WIDTH, fInput);   /* load one row of pixels */

    for (i = 0; i < RAW_WIDTH; i++)
    {
        usSensor = htons(usDiode[i]);       /* swap, since the file is big endian */

        if (BitMask)  usSensor = (usSensor & 0x003c);
        if (BitMask1) usSensor = (usSensor & 0x003f);

        if (((iRow & 1) && !(i & 1)) || (!(iRow & 1) && (i & 1)))  /* green sites */
        {
            rgbS[iRow][i].fGreen = usSensor;
            SHistogram.iGreen[usSensor >> 3]++;
        }
        else if (!(iRow & 1))               /* even row, not green: red */
        {
            rgbS[iRow][i].fRed = usSensor;
            SHistogram.iRed[usSensor >> 3]++;
        }
        else                                /* odd row, not green: blue */
        {
            rgbS[iRow][i].fBlue = usSensor;
            SHistogram.iBlue[usSensor >> 3]++;
        }
    }
}
 
This is interesting; thanks for the test, as well as the open and informative manner in which you presented it.

One thing that immediately comes to mind is the fact that Nikon clips blacks. I don't know if it explains everything you are seeing, but it may be part of it. On the D300 I tested (and I don't think it is out of the ordinary), raw level zero is actually about 5 raw levels above true black at ISO 200 (I didn't look at other ISO's but I think it's qualitatively similar until you get to high ISO).

This means that raw level zero encodes not only all sensor data for which electronic noise fluctuations sent the analog data below true black, but also up to about one standard deviation above true black. The data has to be more than one standard deviation of noise above true black before it registers in the lowest nonzero raw level. To summarize, at ISO 200 on the camera I tested:

raw level 0: holds all data up to 5 ADU above true black
raw level 1: 6 ADU above true black
raw level 2: 7 ADU above true black
raw level 3: 8 ADU above true black

So you see, the difference between raw level zero and raw level one in 14-bit raw data on the D300 is actually MORE than one 12-bit ADU. Raw values are nonlinearly distorted at black on Nikons, breaking the notion that a change of one raw level means a fixed change in tonality (this is of course only true between level zero and level one, not between any adjacent pair of higher levels).
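The level table above can be expressed as a toy model (entirely my own sketch; BLACK_OFFSET is the roughly 5-ADU offset Emil measured at ISO 200 on one camera, not a documented Nikon constant):

```c
/* Toy model of Nikon black clipping: the camera effectively
   subtracts the black offset and clips negative results to zero,
   so raw level 0 absorbs everything up to BLACK_OFFSET ADU above
   true black, and level 1 begins one ADU beyond that. */
#define BLACK_OFFSET 5

int clipped_raw_level(int adu_above_true_black)
{
    int level = adu_above_true_black - BLACK_OFFSET;
    return level < 0 ? 0 : level;
}
```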

For this to be of any consequence for your observations, a substantial percentage of the pixels in your image must be at raw level zero (not just in the last two bits, but in all 14 bits). Are there many pixels with raw level zero (ie are they a substantial percentage) in the raw data of the crops you are showing, for instance in the last image? If yes, I can explain further.

--
emil
--



http://theory.uchicago.edu/~ejm/pix/20d/
 
For this to be of any consequence for your observations, a
substantial percentage of the pixels in your image must be at raw
level zero (not just in the last two bits, but in all 14 bits). Are
there many pixels with raw level zero (ie are they a substantial
percentage) in the raw data of the crops you are showing, for
instance in the last image? If yes, I can explain further.
just to be clear these are all the same image

There are slightly less than 5% (4.8%) of sensels with a level of zero in the whole image - most are in the lower left corner.

Interestingly enough, these raw images contain the masked-off area on the right side of the sensor that Iliah Borg has mentioned a number of times.

If you take a black frame picture everything goes to zero, but this masked-off area retains values.

The current dcraw software skips over this area.

David
 
Would you mind sending me a .tif or the .NEF of the image (or at
least a .tif of the lower left corner showing the same crop as you
posted)?
Actually, it would be easiest to work directly with the .NEF; I can always run it through dcraw myself, but I can't run backwards from the output to determine the raw data.

--
emil
--



http://theory.uchicago.edu/~ejm/pix/20d/
 
Would you mind sending me a .tif or the .NEF of the image (or at
least a .tif of the lower left corner showing the same crop as you
posted)?
Actually, it would be easiest to work directly with the .NEF; I can
always run it through dcraw myself, but I can't run backwards from
the output to determine the raw data.
I can send a TIF that is the raw file - that is exactly what you see here.

The problem with dcraw is that it is very convoluted, and it is hard to figure out what is really coming out of it.

For instance, it doesn't give you the whole raw file.

I basically had to throw it all away - but it is a great resource.

I sent you email.

David
 
OK, rather than a lot of words I thought a picture might prove illuminating. Here's the cityscape one again, taken very underexposed so that much of the bottom half of the image is black. On the 1D3 which I used, black is not clipped; rather, it is offset to 1024 in the raw data. If one artificially steps in and clips the green1 channel data, resetting all raw values less than 1024 to the value 1024, then truncates to the last two bits, the image on the left below results. If one does not clip blacks first, the truncation to the last two bits looks like the image on the lower right (only about 5% of the pixel values are clipped here):



Clipping blacks artificially sets a substantial portion of the underexposed raw values to zero modulo four, ie the last two bits will be 00. As a result, the image features get outlined by the clipping of black and near-black raw values. If there is no clipping, there are no features. Note that one doesn't need to have the tonality pure black to get clipping, here is an example of D300 raw data histograms as the exposure is reduced toward zero:
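The clip-then-truncate comparison can be sketched as follows (my own sketch of the two variants being compared; the function name and flag are mine):

```c
#include <stdint.h>

/* 1D3 raw data carries a 1024 black offset rather than clipping.
   Two variants of extracting the last two bits:
   - clip_blacks != 0: values below 1024 are reset to 1024 first,
     so all clipped pixels land on a value that is 0 modulo 4
   - clip_blacks == 0: values pass through untouched. */
uint16_t last_two_bits(uint16_t raw, int clip_blacks)
{
    if (clip_blacks && raw < 1024)
        raw = 1024;          /* 1024 & 3 == 0, hence the 00 outlines */
    return raw & 0x0003;
}
```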



So even near-blacks will get a dusting of extra values set to 00 in the last two bits which will show up in images analogous to the last one in the OP.

As for the increased "coarse noise" observed, it may be coming from additional pixels with raw values between zero and four getting clipped to zero. These are not necessarily issues of coarser quantization in 12-bit vs 14-bit, if indeed all the structure noted in the OP is due to clipping of blacks in Nikon raw data.

--
emil
--



http://theory.uchicago.edu/~ejm/pix/20d/
 
If one artificially steps in and clips the
green1 channel data, resetting all raw values less than 1024 to the
value 1024, then truncates to the last two bits, the image on the
left below results. If one does not clip blacks first, the
truncation to the last two bits looks like the image on the lower
right (only about 5% of the pixel values are clipped here):
Sorry, the parenthetic comment refers to the lower left image -- only about 5% of the pixels are clipped to black on the lower left image; no pixels are of course clipped in the lower right image in the previous post.

--
emil
--



http://theory.uchicago.edu/~ejm/pix/20d/
 
the current dcraw software skips over this area
} else if (!strcmp(model,"D300")) {
    width -= 32;
Exactly - the rightmost 32 sensor values in each row are not read in by dcraw.

David
Maybe, just maybe, if you remove the statement then dcraw will not
skip the data and handle it instead?
But you do not want dcraw to handle it; it does not know what to do with it. If you comment out all dcraw statements trimming the masked frame out and dump the full raw data (careful with dcraw modes; in some cases it is still processing data even when doing a dump), then you can use several methods to improve SNR, implementing appropriate algorithms. That full unprocessed dump will be in libraw as an option.
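One simple use of the masked border might look like this (purely my own sketch of one possible SNR-improving method, not libraw's actual algorithm; names are mine):

```c
/* Estimate a per-row black level from the 32 masked columns at the
   right edge of the row, which record dark current and offset but
   no light; this estimate could then be subtracted from the
   active pixels of the same row. */
#define MASKED_COLS 32

unsigned row_black_level(const unsigned short *row, int raw_width)
{
    unsigned sum = 0;
    for (int i = raw_width - MASKED_COLS; i < raw_width; i++)
        sum += row[i];               /* accumulate masked samples */
    return sum / MASKED_COLS;        /* mean of the masked region */
}
```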
--
Be seeing you,
François
--
http://www.libraw.org/
 
Maybe, just maybe, if you remove the statement then dcraw will not
skip the data and handle it instead?
But you do not want dcraw to handle it; it does not know what to do
with it. If you comment out all dcraw statements trimming masked
frame out and dump the full raw data (careful with dcraw modes, in
some cases it is still processing data even doing a dump) - then you
can use several methods to improve SNR, implementing appropriate
algorithms. That full unprocessed dump will be in libraw as an option.
You are right, I was unclear, by "handle it" I really meant "let it do what it is you want done".

I haven't followed the libraw effort closely and I thought the aim was to provide dcraw based libraries. It seems instead it is a complete rewrite too...
--
Be seeing you,
François
 
Lots of additional information is buried in EXIF (Maker Notes). Much of that is absolutely essential to a good raw conversion. We would be happy to help a project of, say, librarising the excellent ExifTool (by Phil Harvey, http://www.sno.phy.queensu.ca/~phil/exiftool/ )

As far as libraw, yes, we will be adding some code to dcraw to allow better conversions, based on our experience with 2 actual raw converter projects.

--
http://www.libraw.org/
 
Same image as before, showing the 2 LSBs of the 14-bit raw file.

This is only green, and the missing green has been interpolated with a simple average of the nearest four neighbors.

Any 0 has been replaced with a random number 0-3, which because of the algorithm tends to produce 1 or 2 but averages out to 1.5.
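The zero-replacement step can be sketched like this (a minimal sketch assuming a plain uniform draw; the function name is mine):

```c
#include <stdlib.h>

/* Replace a 2-LSB value of 0 with a random value 0-3, so that
   pixels clipped to zero look like the random static expected
   from genuine noise; a uniform 0..3 draw averages out to 1.5. */
unsigned fill_zero_with_random(unsigned lsb2)
{
    if (lsb2 == 0)
        return (unsigned)(rand() & 3);   /* uniform over 0,1,2,3 */
    return lsb2;
}
```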



This is not a crop - this is the whole image.

There is a couch and a black jacket in the lower left hand corner; the couch is green, and you can see with a histogram that its values are lower than those of the wall above it.

So I guess I still have to say - there is some information in the 2 LSBs of the 14-bit quantity from a D300.

David
 
You have proven that the last two bits of the 14-bit values do contain image-relevant data. This in no way proves that the 14 bits are more valuable than the 12 bits.

Here is the surprise: the last two bits of the 12-bit values, too, reproduce the coarse outline of the image. Emil has explained why the very low values represent more than their simple numerical weight. The proof is simple: those bits outline the image only in the very dark parts.

In other words, it is not the last two bits that reproduce the image, but the very low values.

I am an advocate of 14-bit recording (uncompressed), but I base that on different considerations.

--
Gabor

http://www.panopeeper.com/panorama/pano.htm
 
And in layman's language, what does it all mean?
It means that one cannot conclude that the last two bits contain image information by an examination of images with light levels near to black, when black is clipped as it is in Nikon cameras. The clipping of the raw histogram as black is approached



means the following: Before the illumination level gets to black (for instance patch 16 in the above image) the histogram is smooth and the last two bits of the raw values are pretty much evenly distributed among the four possible values they can take on. The image in such regions looks like random static. As the illumination level approaches black, a bigger and bigger percentage of the pixels are zero (and in particular the last two bits are zero). So there is a shading of the images that David and I are presenting, where the percentage of pixels that are black is a reflection of the average illumination level. This is another example of artificial correlations of the last two bits with the rest of the data that imprints on the last two bits image information.
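This mechanism can be illustrated with a toy Monte Carlo (entirely my own sketch; the uniform +/-8 ADU "noise" is a crude stand-in for real read-noise statistics, chosen only to show the effect of clipping):

```c
#include <stdlib.h>

/* Without clipping, the last two bits of noisy raw values are
   roughly uniform over 0..3 (random static). With black clipping,
   a falling mean pushes more and more pixels to raw level 0, so
   the 00 state becomes over-represented and the image outline
   imprints on the last two bits. */
double fraction_last2_zero(int mean, int clip, int trials)
{
    int zeros = 0;
    for (int t = 0; t < trials; t++) {
        int v = mean + (rand() % 17) - 8;  /* mean +/- 8 ADU of "noise" */
        if (clip && v < 0) v = 0;          /* black clipping at zero */
        if ((v & 3) == 0) zeros++;         /* last two bits are 00 */
    }
    return (double)zeros / trials;
}
```

Well above black the fraction stays near the uniform 1/4; at a mean of zero, clipping drives it past one half.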

Bottom line: The last image in the original post of this thread has little to do with the merits of 12-bits vs 14-bits.

--
emil
--



http://theory.uchicago.edu/~ejm/pix/20d/
 
