NEF Compression

You guys are looking at a jpeg in midtones and highlights; the data is 8 bits (though with a tone curve that compresses highlights even more). There is no way you are going to see any effects from the absence of a 13th or 14th bit in the raw data.

Consider two midtone pixel values 116 and 117. In a color space such as sRGB having gamma=2.2, these come from raw values 724 and 738 in 12-bit data (whose range is 0-4095) ignoring any additional tone curve, which would only separate them more. The point is that two tonal values separated by one unit in 8-bit sRGB color are already separated by over ten raw levels in the raw data in 12-bit encoding. Twelve bits are already overkill in finely grading the tonal transitions that can be displayed on an 8-bit jpeg, since the grading is over ten times finer than what your monitor can display.
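
Anyone who wants to check that arithmetic can do so in a few lines of
Python (this sketch assumes the same pure gamma-2.2 power law as the
post does; real sRGB's piecewise curve is ignored here too):

    # Linear raw value that encodes to a given 8-bit sRGB value,
    # assuming a pure gamma-2.2 transfer and no extra tone curve.
    def raw_level(srgb, bits=12, gamma=2.2):
        return (srgb / 255.0) ** gamma * (2 ** bits - 1)

    for v in (116, 117):
        print(v, round(raw_level(v)))   # -> 116 724, 117 738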

A consequence is that any posterization of sky that may be present is due to that truncation from 12 to 8 bits when the jpeg was output.
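
The scale of that truncation is easy to see with a toy sky gradient
(hypothetical numbers: linear light in, pure gamma-2.2 encode for the
8-bit output):

    import numpy as np

    # A smooth quarter-stop sky gradient: hundreds of distinct levels
    # survive 12-bit quantization, only a dozen-odd survive 8-bit.
    sky = np.linspace(0.5, 0.5 * 2 ** 0.25, 2000)        # linear intensity
    print(len(np.unique(np.round(sky * 4095))),          # ~388 levels
          len(np.unique(np.round(sky ** (1 / 2.2) * 255))))  # ~16 levels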

Edit: In the above, I am ignoring the effects of white balance in determining the raw levels corresponding to output colors. This won't affect the conclusions materially so long as the white balance multiplier for the R and B channels isn't huge (usually they are around 1-2 for outdoor daylight shots, which is not huge).
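
To put a number on the white-balance caveat, extend the same sketch
(m here is a hypothetical WB gain, not a measured camera value): a
pre-encode multiplier of m shrinks the underlying raw separation per
8-bit output step by a factor of m, so even m = 2 leaves several raw
levels per step.

    # Raw-level separation behind one 8-bit output step, before a
    # white-balance multiplier m was applied to the channel.
    def raw_separation(v, m=1.0, bits=12, gamma=2.2):
        scale = (2 ** bits - 1) / m
        return (((v + 1) / 255.0) ** gamma - (v / 255.0) ** gamma) * scale

    print(raw_separation(116))          # ~13.7 raw levels
    print(raw_separation(116, m=2.0))   # ~6.8 -- still well above 1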
--
emil
--



http://theory.uchicago.edu/~ejm/pix/20d/
 
I'm sure it's not intentional, but there is about 1/10 of a stop
difference -- probably attributable to the camera's aperture lever.

It appears in the shadows under the roof eave by the corner, but
since the difference is so small and the EV there is so low, it is
inconsequential in this image.
Bearing in mind ejmartin's comment in the post above ("You guys are looking at a jpeg in midtones and highlights; the data is 8 bits (though with a tone curve that compresses highlights even more). There is no way you are going to see any effects from the absence of a 13th or 14th bit in the raw data."), I think I will leave this one to the physicists and software developers to debate, particularly as even the scientists seem to disagree on the basic point.
And by the way, the opinion of my clients is of far more importance
to me than any histogram, mathematical formula or forum post.
Still irrelevant to the discussion. Would your clients object if you
were getting the shots they wanted in 14-bit mode?
Not irrelevant to me - read it again.

--
Lightbox Photography : http://www.the-lightbox.com
Aerial Photography : http://www.aerial-photographer.co.uk
 
You guys are looking at a jpeg in midtones and highlights; the data
is 8 bits (though with a tone curve that compresses highlights even
more). There is no way you are going to see any effects from the
absence of a 13th or 14th bit in the raw data.
Still, despite that, I did see a small difference in the eave under the near corner of the roof.
A consequence is that any posterization of sky that may be present is
due to that truncation from 12 to 8 bits when the jpeg was output.
Not all output has to be JPEG though; some printers take 16-bit TIFF.
Edit: In the above, I am ignoring the effects of white balance in
determining the raw levels corresponding to output colors. This
won't affect the conclusions materially so long as the white balance
multiplier for the R and B channels isn't huge (usually they are
around 1-2 for outdoor daylight shots, which is not huge).
The more the data is multiplied through WB gain and increased ISO, the more relevant this becomes. In that regard it is not relevant to Lightbox, so I have no issue with that; but that doesn't mean it has no relevance to me.
And by the way, the opinion of my clients is of far more importance
to me than any histogram, mathematical formula or forum post.
Still irrelevant to the discussion. Would your clients object if you
were getting the shots they wanted in 14-bit mode?
Not irrelevant to me - read it again.
I didn't say it was irrelevant to you, I said it was irrelevant to the discussion.
 
You guys are looking at a jpeg in midtones and highlights; the data
is 8 bits (though with a tone curve that compresses highlights even
more). There is no way you are going to see any effects from the
absence of a 13th or 14th bit in the raw data.
Exactly what I was thinking! Show me a straight print and you won't see a difference. But let me push it a little in PS CS3 (Shadow/Highlight) and make a pair of prints, and I will show you a difference.
Consider two midtone pixel values 116 and 117. In a color space such
as sRGB having gamma=2.2, these come from raw values 724 and 738 in
12-bit data (whose range is 0-4095) ignoring any additional tone
curve, which would only separate them more. The point is that two
tonal values separated by one unit in 8-bit sRGB color are already
separated by over ten raw levels in the raw data in 12-bit encoding.
Twelve bits are already overkill in finely grading the tonal
transitions that can be displayed on an 8-bit jpeg, since the grading
is over ten times finer than what your monitor can display.

A consequence is that any posterization of sky that may be present is
due to that truncation from 12 to 8 bits when the jpeg was output.

Edit: In the above, I am ignoring the effects of white balance in
determining the raw levels corresponding to output colors. This
won't affect the conclusions materially so long as the white balance
multiplier for the R and B channels isn't huge (usually they are
around 1-2 for outdoor daylight shots, which is not huge).
--
emil
--



http://theory.uchicago.edu/~ejm/pix/20d/
--
Steve Bingham
http://www.dustylens.com
http://www.ghost-town-photography.com
 
I posed the question to Nikon and this is their reply.

Lossless compression uses a visually lossless compression which has no visible quality loss, but in some extreme conditions or tests a potential loss could happen. In general it is the best option, but to guarantee no loss at all, uncompressed would be the best option.

--
Philip
 
I posed the question to Nikon and this is their reply.

Lossless compression uses a visually lossless compression which has
no visible quality loss, but in some extreme conditions or tests a
potential loss could happen. In general it is the best option, but
to guarantee no loss at all, uncompressed would be the best option.
That's a bummer. It's a misnomer to call it lossless compression when it's not lossless. I wonder if this is what the D70 used to have, where they compressed out some highlight information that they thought couldn't normally be seen.
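
For what it's worth, the D70-era lossy ("visually lossless") NEF
scheme is usually described as a curve that quantizes highlights more
coarsely, on the argument that photon shot noise up there dwarfs the
quantization step. A toy Python sketch of the idea (square-root
companding into an illustrative 683-entry code space; Nikon's actual
lookup table differs):

    import numpy as np

    FULL = 4095      # 12-bit raw full scale
    LEVELS = 683     # illustrative code-space size, not Nikon's table

    def compress(raw):                  # raw value -> compact code
        return np.round(np.sqrt(raw / FULL) * (LEVELS - 1)).astype(int)

    def expand(code):                   # compact code -> approx. raw
        return np.round((code / (LEVELS - 1)) ** 2 * FULL).astype(int)

    raw = np.arange(FULL + 1)
    err = np.abs(expand(compress(raw)) - raw)
    shot = np.sqrt(raw)                 # toy shot-noise model, raw units
    # Worst-case round-trip error (~6 levels, in the highlights) stays
    # far below the ~64 levels of shot noise at those same values.
    print(err.max(), shot.max())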

--
John
Popular: http://jfriend.smugmug.com/popular
Portfolio: http://jfriend.smugmug.com/portfolio
 
Iliah Borg wrote:
The D3 has close to 12 stops dynamic range, which means that 12 bits
is certainly needed to encode all the image data that is captured.
Due to sample variation, some examples may have a tad more than 12
stops DR, which would justify at least the 13th bit of data encoding.
The 14th bit is always lost in the noise.
--

Ummmm.... no. While the 'bits' appear to match up - they're nothing more than a base-2 method of storing numbers.

With an 8-bit range, black is 0, white is 255.
With a 12-bit range, black is 0, white is 4,095.
With a 14-bit range, black is 0, white is 16,383.

Basically the range is exactly the same; you don't gain any extra stops of dynamic range with a higher bit depth. A higher bit depth provides smoother gradations of brightness/intensity and, when performing calculations, greater accuracy (less chance of rounding causing an unwanted side effect). Your monitor can only display 256 distinct values per channel anyway (8 bits x 3 = 24 bits, plus an unused alpha channel = 32 bits), so by and large any improvement in quality without manipulating the RAW file is purely a placebo. The advantage of 14-bit is when you start manipulating the photo, not before. Of course, to gain this advantage Photoshop needs to import the photo as a 16-bit image, not an 8-bit (default) one - otherwise you lose all the extra precision 14-bit (and 12-bit, for that matter) originally offered you. Incidentally, most printers don't even reach 8-bit precision...

So in summary - no matter how much you may believe a 14-bit photo looks better to you on-screen - it makes naff-all difference. Its only purpose, other than marketing, is to provide greater flexibility when manipulating the photo in post.
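
The "flexibility in post" claim is easy to demonstrate: quantize the
same deep shadows at 8 and at 14 bits, push both by three stops, and
count the distinct display levels that survive (a toy Python sketch
on made-up linear data):

    import numpy as np

    linear = np.linspace(0.0, 1.0 / 8, 1000)    # deep-shadow tones only

    def push(signal, bits):
        # Quantize to the given bit depth, apply a 3-stop gain, then
        # count the distinct 8-bit display levels that remain.
        q = np.round(signal * (2 ** bits - 1)) / (2 ** bits - 1)
        out = np.clip(q * 8.0, 0.0, 1.0)
        return len(np.unique(np.round(out * 255)))

    print(push(linear, 8), push(linear, 14))    # -> 33 vs 256 levels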
 
On page 67 of the English manual it says that Lossless Compression
reduces file size by 20 to 40% and has no effect on image quality.

If this is so, why would you use the uncompressed option?
Nikon was bending the truth: there is no visible difference unless you start editing highlights and shadows, where the compression has clipped data to save space.

Nikon did at some point publish information on this; I did have it in my Nikon Pro mag, but I can't find it. Either way, if you intend on editing, your best bet is not to compress. Besides, with 16GB CF cards you don't need to!

Cheers

--
James Grove
http://www.jamesgrove.co.uk

 
Nikon was bending the truth
If they were "bending the truth", that would be to their disadvantage, as there is NO (NONE, NADA, ZERO) loss associated with the lossless compression.

Rather, the person answering the question did not understand it in the first place, just like you.
there is no visible difference unless
you start editing highlights and shadows, where the compression has
clipped to save on data
I don't understand why so many members lacking any relevant knowledge find it necessary to exhibit their ignorance.
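
For the record, "lossless" is bit-exact by definition, and the
property takes seconds to verify with any lossless codec. Nikon's
actual NEF coder is reportedly its own predictive/Huffman scheme; the
zlib round trip below is only a stand-in to show the defining
property on synthetic 14-bit samples:

    import zlib
    import numpy as np

    # Synthetic 14-bit "raw" data: a ramp plus a little noise.
    rng = np.random.default_rng(0)
    ramp = np.linspace(0, 2 ** 14 - 1, 100_000)
    raw = (ramp + rng.normal(0, 4, ramp.size)).clip(0, 2 ** 14 - 1)
    raw = raw.astype(np.uint16)

    packed = zlib.compress(raw.tobytes())
    restored = np.frombuffer(zlib.decompress(packed), dtype=np.uint16)

    assert np.array_equal(raw, restored)   # every sample bit-identical
    print(len(packed) / raw.nbytes)        # ratio depends on content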

--
Gabor

http://www.panopeeper.com/panorama/pano.htm
 
Iliah Borg wrote:
The D3 has close to 12 stops dynamic range, which means that 12 bits
is certainly needed to encode all the image data that is captured.
Due to sample variation, some examples may have a tad more than 12
stops DR, which would justify at least the 13th bit of data encoding.
The 14th bit is always lost in the noise.
--

Ummmm.... no. While the 'bits' appear to match up - they're nothing
more than a base-2 method of storing numbers.

With an 8-bit range, black is 0, white is 255.
With a 12-bit range, black is 0, white is 4,095.
With a 14-bit range, black is 0, white is 16,383.

Basically the range is exactly the same; you don't gain any extra
stops of dynamic range with a higher bit depth. A higher bit depth
provides smoother gradations of brightness/intensity and, when
performing calculations, greater accuracy (less chance of rounding
causing an unwanted side effect). Your monitor can only display 256
distinct values per channel anyway (8 bits x 3 = 24 bits, plus an
unused alpha channel = 32 bits), so by and large any improvement in
quality without manipulating the RAW file is purely a placebo. The
advantage of 14-bit is when you start manipulating the photo, not
before. Of course, to gain this advantage Photoshop needs to import
the photo as a 16-bit image, not an 8-bit (default) one - otherwise
you lose all the extra precision 14-bit (and 12-bit, for that matter)
originally offered you. Incidentally, most printers don't even reach
8-bit precision...

So in summary - no matter how much you may believe a 14-bit photo
looks better to you on-screen - it makes naff-all difference. Its
only purpose, other than marketing, is to provide greater flexibility
when manipulating the photo in post.
Noise is what limits DR, and noise is what limits the utility of using more bits to encode the photosite data. Your explanation of why extra bits are better is a commonly repeated internet myth. If you want to understand why, study this:

http://theory.uchicago.edu/~ejm/pix/20d/tests/noise/noise-p3.html#bitdepth
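
The gist of that page fits in a few lines: once the sensor noise
spans several 14-bit steps, dropping the last two bits adds almost
nothing to the total error. (A Python sketch with an assumed noise of
3 levels on a 14-bit scale, an illustrative figure rather than a
measured D3 value.)

    import numpy as np

    rng = np.random.default_rng(1)
    signal = rng.uniform(0, 16383, 1_000_000)          # ideal linear signal
    noisy = signal + rng.normal(0, 3.0, signal.size)   # assumed read noise

    q14 = np.round(noisy)                # 14-bit quantization (step 1)
    q12 = np.round(noisy / 4) * 4        # 12-bit quantization (step 4)

    # Total RMS error vs. the ideal signal barely changes:
    print(np.std(q14 - signal), np.std(q12 - signal))  # ~3.01 vs ~3.21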

--
emil
--



http://theory.uchicago.edu/~ejm/pix/20d/
 
Noise is what limits the camera's DR.
Try the A900, for example: its ADC simply clips the shadows.
It's not the camera's fault if
you don't fit the DR of the scene into the DR window of the camera
via underexposure.
Nobody is speaking of "camera faults" here. The exposure is the same; it is only the amplification that changes.

--
http://www.libraw.org/
 
