12-bit versus 14-bit - another sample test

The increased "coarse noise" observed may be coming from additional pixels with raw values between zero and four getting clipped to zero. That is not necessarily an issue of the coarser quantization of 12-bit vs. 14-bit, if indeed all the structure noted in the OP is due to clipping of blacks in Nikon raw data.
After looking at this more than is healthy, I would say the difference in the noise is due to the truncation of the bits, which makes the 12-bit data look a bit darker.

If you round the 12-bit value instead - that is, add 2 to the 14-bit value before truncating - it appears brighter.

I've stared at a myriad of gray-card images where the exposure was boosted by 8-10 stops, and even though the histogram shows as few as 4 levels in the 12-bit image and 4 times that many in the 14-bit image, I cannot see a difference, even with that difference spread out over the entire histogram.

That may not make a lot of sense, so in plain words:

I see no difference in the noise of the darkest shadows between a 14-bit raw file and the 12-bit data from the same file, if the conversion from 14 bits is rounded off to the nearest half bit.
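The truncate-vs.-round difference described above can be sketched as follows (a sketch, not the actual program used in the test; the +2 is half of the four 14-bit levels that map onto one 12-bit level):

```python
# Sketch of 14-bit -> 12-bit conversion (not the thread's program):
# truncation drops the two LSBs and darkens slightly on average;
# rounding adds half a quantization step (2 of the 4 levels) first,
# which is why the rounded version looks a touch brighter.

def truncate_to_12bit(v14):
    # Simply discard the two least significant bits.
    return v14 >> 2

def round_to_12bit(v14):
    # Add 2 (half of the 4 discarded levels) before shifting, and
    # clamp to the 12-bit maximum so top values don't overflow.
    return min((v14 + 2) >> 2, 4095)

print(truncate_to_12bit(7))   # 1
print(round_to_12bit(7))      # 2
print(round_to_12bit(16383))  # 4095
```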

David
 
This is the 2 LSBs of the 14-bit data - nothing else - shifted in the program into a 16-bit value for import into Photoshop.
I was interested to see whether it is possible to extract a few more contours and textures from the above image, and used a quick method:



Please take into account that working from a JPEG might not be the best approach. Unless I'm missing something, the original image having been downsized 2.1 times should also affect the resulting reconstruction of the data negatively.

--
http://www.libraw.org/
 
I get something similar when I do the exercise (my random number is really evenly split among 0-3):

http://theory.uchicago.edu/~ejm/pix/20d/tests/noise/dpr/0to3_randomize0s-2.jpg

The shading is happening because if you take a histogram of a small patch of the couch before randomizing the zero values, it looks like this:

0 1750
1 437
2 379
3 286

One clearly sees the effect of the clipping in the excess of pixels having the value zero modulo four. And after adding a random number in the range 0-3 to each pixel having the value 0 mod 4, a similar (but not identical) patch from the same area has a histogram

0 459
1 959
2 926
3 805

In other words, you have taken the excess of zero values and distributed it evenly among all four values, leaving a deficit of zero values relative to the other three. Now one sees the negative of the original clipped image. The image one sees after this attempt to remove clipping is still affected by the clipping.
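This bookkeeping is easy to reproduce. The sketch below starts from the patch histogram quoted above (0:1750, 1:437, 2:379, 3:286) and applies the same replace-zeros-with-random-0-3 procedure; the seed is arbitrary, chosen only for repeatability:

```python
# Reproduce the bookkeeping above: start from the patch histogram
# (0:1750, 1:437, 2:379, 3:286) and give every zero-valued pixel a
# uniform random value 0-3. The old excess in bin 0 gets spread over
# all four bins, leaving bin 0 in deficit - the "negative" image.
import random

random.seed(0)
values = [0] * 1750 + [1] * 437 + [2] * 379 + [3] * 286

randomized = [random.randint(0, 3) if v == 0 else v for v in values]
counts = [randomized.count(i) for i in range(4)]
print(counts)  # bin 0 now smaller than bins 1-3
```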

Once the image is clipped, the distribution of raw values is skewed and it is going to be very difficult if not impossible to "unclip" the image data in a way that restores the approximately random distribution of the last two bits that pertains before clipping, as exhibited for example in the pair of 1D3 images before and after clipping that I posted before.

--
emil
--



http://theory.uchicago.edu/~ejm/pix/20d/
 
I got it through the usendit clone you sent the link to; results posted yesterday below. The problem is that your method for trying to remove the effects of clipping will not actually remove them; all it will do is redistribute the skewed pattern of the last two bits that was artificially introduced by the clipping.
--
emil
--



http://theory.uchicago.edu/~ejm/pix/20d/
 
I get something similar when I do the exercise (my random number is
really evenly split among 0-3):

http://theory.uchicago.edu/~ejm/pix/20d/tests/noise/dpr/0to3_randomize0s-2.jpg

The shading is happening because if you take a histogram of a small
patch of the couch before randomizing the zero values, it looks like
this:

0 1750
1 437
2 379
3 286

One clearly sees the effect of the clipping in the excess of pixels
having the value zero modulo four. And after adding a random number
in the range 0-3 to each pixel having the value 0 mod 4, a similar
(but not identical) patch from the same area has a histogram

0 459
1 959
2 926
3 805
Wouldn't it be better to do the following mapping, assuming 0 is the clipping of ADC values 0 to k-1:
if pixel = 0 then pixel = random(0, k-1)
else pixel = pixel + k

i.e., shift the non-zero pixels by k too. Then clip some low order bits (it would help if k were a power of 2) and see what it looks like?
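As a sketch, the proposed mapping with k assumed to be 4 (a power of 2, so the later bit-clipping step stays simple) might look like:

```python
# Sketch of the proposed mapping with k = 4: zeros become a random
# value in 0..k-1, and every nonzero pixel is shifted up by k so the
# two populations no longer overlap.
import random

K = 4

def remap(pixel):
    if pixel == 0:
        return random.randint(0, K - 1)  # spread the clipped mass
    return pixel + K                     # shift survivors out of the way

random.seed(1)
print([remap(p) for p in [0, 0, 1, 2, 3]])  # zeros randomized, rest shifted by 4
```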

David Gay
 
Wouldn't it be better to do the following mapping, assuming 0 is the
clipping of ADC values 0 to k-1:
if pixel = 0 then pixel = random(0, k-1)
else pixel = pixel + k
i.e., shift the non-zero pixels by k too. Then clip some low order
bits (it would help with k was a power of 2) and see what it looks
like?
I think I showed in a different example using a 1D3 (which does not clip blacks in the raw data), that if the data is not clipped, there is no structure in the last two bits; and that if one clips the blacks, a correlation between the average brightness and the last two bits is introduced. So in that example, any procedure that removes the effect of clipping the blacks must effectively randomize the last two bits entirely, and the remnants of the image must disappear.

Anyone claiming to see an effect must explain how they have removed the effects of clipped blacks in the raw data without totally randomizing the last two bits. That will be rather difficult to prove; it is much easier, from the direction I chose, to show that unclipped raw data develops a pattern once the blacks are clipped. The better place to look is the deep shadows of Canon raw data; I have yet to see an effect equivalent to the OP's result arising there (without clipping the data by hand), and I think the example I used is rather generic in this regard.

--
emil
--



http://theory.uchicago.edu/~ejm/pix/20d/
 
Wouldn't it be better to do the following mapping, assuming 0 is the
clipping of ADC values 0 to k-1:
if pixel = 0 then pixel = random(0, k-1)
else pixel = pixel + k
i.e., shift the non-zero pixels by k too. Then clip some low order
bits (it would help with k was a power of 2) and see what it looks
like?
I think I showed in a different example using a 1D3 (which does not
clip blacks in the raw data), that if the data is not clipped, there
is no structure in the last two bits;
Why not make a thorough practical check of this? You need a night scene with lamps and deep shadows, and a little dcraw or DNG converter hack. Next, you need a 1D MkII and a 1D MkIII to compare - and, of course, to take it all to the Canon forum.

--
http://www.libraw.org/
 
0 459
1 959
2 926
3 805

In other words you have pushed the excess of zero values and evenly
distributed the zeros among all four values, leaving a deficit of
zero values relative to the other three. Now one sees the negative
of the original clipped image.
I am not seeing a negative in your example - I see dark areas that are dark in the original picture still being dark. To me, a negative would have light where it was dark before.

The original assertion was that they are dark because of the excess of zeros.

Since there is now a deficiency of zeros and they are still dark, there must be a predominance of 1 or 2 versus 3 in these dark areas - and that existed even before the zeros were randomized.

However, it still could be that the zeros are skewing the data; perhaps the solution is just to make the zeros a 2 and scale the LSB from 1-3.

Or just look at a dark area and see if there are more 1 and 2 values versus 3.

I can't do that right now, but will give it a try in the near future.

regards,

David
 
however it still could be the zeros are skewing the data but perhaps
the solution is just to make the zeros a 2 and scale the LSB from 1-3
Better would be to add a random number from 1-3 to the values that are zero; otherwise the regions that were zero will have less noise grain and might still be visible to the eye.
or just look at a dark area and see if there are more 1 and 2 values
vice 3
That would do as well, if you are talking about the original raw file vs. any interpolated or processed version of it. Unless I'm overlooking another effect, it should be valid to throw out the corrupted data and examine the rest, at least statistically in patches. However, don't look at the interpolated data, since there the zeros will have interacted with nonzero data through the interpolation. Better to look at the individual green channels, e.g. using IRIS.
i can't do that right now but will give it a try in the near future
OK.
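The fix suggested above - giving each zero-valued pixel a random 1-3 rather than a fixed 2, so those regions keep some grain - might look like this sketch:

```python
# Sketch of the suggestion above: give each zero-valued pixel a random
# value 1-3 instead of a constant, so the formerly-zero regions carry
# noise grain like their neighbours instead of reading as flat patches.
import random

def fill_zeros(pixels):
    return [random.randint(1, 3) if p == 0 else p for p in pixels]

random.seed(2)
print(fill_zeros([0, 0, 1, 3, 0, 2]))  # no zeros remain; others unchanged
```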

--
emil
--



http://theory.uchicago.edu/~ejm/pix/20d/
 
Well I haven't tried your code out (plus I don't have a D300 lol) but thanks for doing all this work. It's nice to see people doing scientific experiments with these things.

I'm not familiar with NEF because I'm on Canon, and Canon raw files (CR2) seem to be compressed (losslessly, of course), because the CR2 files come in various sizes (or could I be overlooking something?). I can imagine this could be a headache when decoding the individual pixels. Are all NEF files the same size?

GTW
--
http://www.flickr.com/genotypewriter
 
after looking at this more than is healthy - i would say the
difference in the noise iis due to the truncation of the bits making
12 bits look a bit darker

if you round up the 12 bit vaue - that is add 2 to the 14 bit before
truncating it appears brighter
Well, it makes sense in theory, because those additional 2 bits in the 14-bit files also contribute to the intensity of a particular pixel. The 12-bit files don't have them, so a 14-bit pixel could be up to (2^2)/(2^14), or 4/16384, or 0.0244140625% brighter than a 12-bit pixel.
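The arithmetic behind that figure is a one-liner - the two extra bits subdivide each 12-bit step into 4 of the 16384 14-bit levels:

```python
# Quick check of the percentage above: the two extra bits cover 4 of
# the 16384 levels of a 14-bit value.
extra_levels = 2 ** 2   # the 2 additional low-order bits
full_scale = 2 ** 14    # total 14-bit levels
print(extra_levels / full_scale * 100)  # 0.0244140625 (percent)
```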

GTW
--
http://www.flickr.com/genotypewriter
 
however it still could be the zeros are skewing the data but perhaps
the solution is just to make the zeros a 2 and scale the LSB from 1-3
Better would be to add a random number from 1-3 to the values that
are zero; otherwise the regions that were zero will have less noise
grain and might still be visible to the eye.
or just look at a dark area and see if there are more 1 and 2 values
vice 3
that would do as well, if you are talking about the original raw file
vs any interpolated or processed version of it. Unless I'm
overlooking another effect, it should be valid to throw out the
corrupted data and examine the rest, at least statistically in
patches. However, don't look at the interpolated data since that
will have data with zeros interacting with nonzero data through the
interpolation. Better to look at the individual green channels, eg
using IRIS.
I went back and did a more careful job analyzing the crease of the couch, both in the last two bits of the original raw green1 channel data and in the version I posted with the zero values replaced by random numbers from 0 to 3. This time I overlaid the two so I could draw a crop from the same area and look at the histogram. In the original data one has, for this crop from the couch corner,

0 4224
1 1369
2 1205
3 1014

The roughly threefold excess of values in the first bin is the result of clipping the sensor data. But note also that there is not really an even distribution of the other three; we'll come back to that in a moment. After randomizing the values of the pixels with value zero, the identical crop has the level population

0 1063
1 2406
2 2289
3 2054

So yes, there is naively a negative of the original image, in that the zero value is now underpopulated; but the relative excess of 1 over 2 over 3 also remains, as one would expect, and it is this which leads to some residual subtle shading of the image.

Why is this occurring and is it independent of the clipping of the raw data? The answer goes back to the histogram sequence I posted before from D300 raw data near black:



Consider the first patch, patch 16; it is relatively free of clipping, and the bell curve of the histogram is roughly equally distributed among integers modulo four (there is no preference for any particular value of the last two bits). In part this is because both the part of the histogram which is rising from left to right and the part which is falling from left to right is present. On the rising part, there is a gradation in level populations 01> 2> 3. When both are present, the total population evens out.

Now consider the last patch, patch 22; it is strongly clipped, and only the falling part of the histogram is present. This means that not only is the value 0 for the last two bits vastly over-represented, but also there will be a gradation in the population of the other levels 1> 2> 3 just as is seen in the histogram of the crop of the couch above. The part of the sensor data that constitutes the rising part of the histogram, where the level population is 3> 2> 1 for the nonzero values of the last two bits, has been lumped into the pixels with the last two bits equal to zero. And it is this part of the histogram that is clipped which is responsible for evening out the populations of the last two bits, not just the zero value but the other three as well.

The upshot is that the populations of ALL the values of the last two bits are affected by the clipping, and no simple procedure is going to undo that. One really has to start with unclipped data to arrive at a valid conclusion from this sort of test.
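The effect is easy to reproduce with synthetic data. The sketch below (a hypothetical simulation, not the thread's actual test) clips a zero-mean Gaussian at zero and histograms the last two bits:

```python
# Toy simulation of the argument above: draw zero-mean Gaussian
# "sensor" values, then histogram the last two bits with and without
# clipping negatives to zero. Unclipped data spreads roughly evenly
# over 0-3; clipped data piles up in bin 0 AND keeps the 1 > 2 > 3
# gradation, because only the falling side of the bell curve survives.
import random

random.seed(3)
samples = [random.gauss(0, 3) for _ in range(100_000)]

u_counts = [0, 0, 0, 0]  # unclipped: mod-4 of the signed values
c_counts = [0, 0, 0, 0]  # clipped: negatives forced to zero first
for s in samples:
    v = int(round(s))
    u_counts[v % 4] += 1
    c_counts[max(v, 0) % 4] += 1

print(u_counts)  # roughly even
print(c_counts)  # bin 0 dominates, then 1 > 2 > 3
```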

--
emil
--



http://theory.uchicago.edu/~ejm/pix/20d/
 
In part this is because
both the part of the histogram which is rising from left to right and
the part which is falling from left to right is present. On the
rising part, there is a gradation in level populations 01> 2> 3. When
both are present, the total population evens out.
Sorry this was garbled -- when the histogram is rising, larger numbers are more populated and the gradation of the populations of the last two bits are 3> 2> 1. When the histogram is falling, the smaller numbers are more populated and the gradation of the populations is 1> 2> 3.

The conclusions were correct.

--
emil
--



http://theory.uchicago.edu/~ejm/pix/20d/
 
Sorry this was garbled -- when the histogram is rising, larger
numbers are more populated and the gradation of the populations of
the last two bits are 3> 2> 1. When the histogram is falling, the
smaller numbers are more populated and the gradation of the
populations is 1> 2> 3.
And that little slope is information that is absent in the 12-bit data.

David
 
Why not to look at the blue channel? That may give you some more
information on how exactly and why the data looks clipped.
So blue is going to be less sensitive, and the blue pixels fewer in number.

Any clues about the right side of the image - in those 32 pixels?

David
 
