Compressed raw files and exposure strategy

For processing you can return to a linear raw file
Latitude is diminished if compression is performed the way Sony are doing it.
No, nothing is visibly changed by level compression
Why not try it and see? Using a Sony camera, of course.
I don't waste my time repeating experiments just to verify simple established facts. You are making an extraordinary claim: either that level compression in general damages a file (which contradicts logic), or that Sony botched the A-D converter. So only if someone shows some convincing evidence that Sony did botch it would I consider investigating what they did wrong. Of course, I think it much more likely that someone would demonstrate that they are confused about how to test the effect of level compression, just as you suggested an improper test here:
http://forums.dpreview.com/forums/read.asp?forum=1042&message=39720020
 
Normally the sunsets here are rich enough in tonality to produce visible posterization in the reds/yellows when converted to JPEGs... but not tonight, unfortunately. Here is a sequence of handheld sunset photos spaced 1/3 EV apart. To goose any tonality issues I pushed saturation to +100 in LR3. All images were shot raw, ISO 100, WB synchronized, etc. Photos are 100% crops, then resized down to 1536 pixels across.

LR3 reports clipping for half of the left sky on the 9th photo in the sequence. I didn't check the raw files themselves to see when the raw clipping actually occurs, but it's pretty obvious as you move further down from image #9.

Orig Photo at base ISO, 100% crop:
http://horshack.smugmug.com/photos/i-FBxwW4X/0/O/i-FBxwW4X-O.jpg

And here are the comps, starting with the base exposure and then +1/3 EV after that, up to and well past clipping. I adjusted exposure with the LR3 exposure slider rather than using the recovery slider.

http://horshack.smugmug.com/photos/i-5wsxPLc/0/O/i-5wsxPLc-O.png
 
I don't waste my time repeating experiments just to verify simple established facts.
Everything starts with repeating experiments that were already done by others.
Yes, I repeat interesting ones. This experiment is boring because the result is so easy to predict from basic principles. Besides repeating experiments, I take the time to learn from physics and math about the basic principles of how nature works. I would suggest that you try learning some of that yourself.
 
I don't waste my time repeating experiments just to verify simple established facts.
Everything starts with repeating experiments that were already done by others.
Yes, I repeat interesting ones. This experiment is boring because the result is so easy to predict from basic principles
Maybe you are ignoring some of those basic principles.


I would suggest that you try learning some of that yourself.
Well, after you :)

--
http://www.libraw.org/
 
So as long as the steps are smaller than the expected shot noise by a factor of 2 (or 1.4 if you aren't too picky) and as long as the compression/decompression doesn't change the expected level (this is the mistake Nikon made with their raw compression implementation) then no information is lost by increasing the spacing for bright tone levels.
Where was it shown that this is Nikon's mistake? I knew there was a problem with it but didn't know the diagnosis.
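For illustration, here is a minimal sketch (in Python) of the quoted step-versus-shot-noise criterion. The conversion gain and the step table are made-up values for the example, not any camera's actual numbers.

import math

# Hypothetical values for illustration only: a conversion gain of
# 2 electrons per data number (DN) on a 12-bit raw scale.
GAIN_E_PER_DN = 2.0

def shot_noise_dn(level):
    # Expected shot-noise standard deviation in DN at a given raw level.
    return math.sqrt(level / GAIN_E_PER_DN)

# A made-up piecewise step table in the spirit of a tone-compression curve:
# (raw level where the segment starts, quantization step used from there up).
steps = [(512, 1), (1024, 2), (2048, 4), (3072, 8), (3584, 32)]

for start, step in steps:
    sigma = shot_noise_dn(start)
    ok = step <= sigma / 2  # the factor-of-2 margin mentioned above
    print(f"from level {start}: step {step} DN, shot noise {sigma:.1f} DN -> "
          f"{'no information lost' if ok else 'step too coarse'}")

Only the last, deliberately coarse entry fails the check; the others keep the step comfortably below half the expected shot noise.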

--
emil
--



http://theory.uchicago.edu/~ejm/pix/20d/
 
I took a few different sequences of photos and found one set which posterizes at the base exposure when contrast is +100 and vibrance is +25, so it's a sequence that stresses the tonality more for the ETTR/Sony highlights compression.

Here is the base exposure (JPEG):

And here are the comps, starting with the base exposure and then +1/3 EV after that, up to and past clipping. Note that in this and all the other comps the wedges are 16-bit TIFFs and the full composite image is saved as a PNG, all to prevent file-format-induced posterization. It may not be as obvious in these downsamples, but there is a small but noticeable degradation in the tonality for the ETTR photos. Much of the tonal benefit of the lesser-ETTR photos seems to be lost in their much higher visible noise relative to the more-ETTR photos.

I haven't found anything in these experiments thus far that would dissuade me from continuing to use ETTR.

Base exposure 100% crop (JPEG):
http://horshack.smugmug.com/photos/i-kZ743SF/0/O/i-kZ743SF-O.jpg

ETTR comps (TIFFs, screen-captured as PNG):
http://horshack.smugmug.com/photos/i-CZ5PXwT/0/O/i-CZ5PXwT-O.png

Base vs ETTR +2.0 EV noise comp. This is the lowest ETTR exposure before clipping around the sun starts to adversely affect the hue recovery. Notice the difference in noise, heavily favoring the ETTR exposure:

http://horshack.smugmug.com/photos/i-vdk8ttG/0/O/i-vdk8ttG-O.png
 
I'm afraid I do not understand. Some annotation of the exposures and conversion settings would help.

One of the important things is workflow. You are converting raw files with a particular converter, part of your workflow. That converter can have its own idiosyncrasies.

--
http://www.libraw.org/
 
What happens with row number 5? What was the exposure, compared to row number 1, and what LR settings were used for the conversion?
Yep, I noticed that too but forgot to go back to it to investigate. I just checked and somehow the LR3 recovery slider got moved from 0 to 23 for that one image.
 
I'm afraid I do not understand. Some annotation of the exposures and conversion settings would help.

One of the important things is workflow. You are converting raw files with a particular converter, part of your workflow. That converter can have its own idiosyncrasies.
It's a straight LR3 conversion, all defaults including Adobe Standard profile, process 2010, and using the exposure slider with no other tonal/curve adjustments. For this sunset #2 I also boosted the contrast to +100 and vibrance to +25. It's my normal workflow except I would usually tailor the curve for a given photograph/scene and not boost the contrast/vibrance in this fashion.
 
I'm afraid I do not understand. Some annotation of the exposures and conversion settings would help.

One of the important things is workflow. You are converting raw files with a particular converter, part of your workflow. That converter can have its own idiosyncrasies.
It's a straight LR3 conversion
What were the shooting parameters? The images seem to be of different brightness; have you taken any measurements of the light?

--
http://www.libraw.org/
 
What were the shooting parameters? The images seem to be of different brightness; have you taken any measurements of the light?
The EXIF data is preserved for the baseline JPEG I posted, which is 1/800 f/8 ISO 100. Each subsequent photo has 1/3 EV added to shutter. All were handheld, shot one at a time with about 3-4 seconds between each photo. The sun was setting fast so there may be some small exposure differences but they match pretty well.

Btw, I selected the baseline photo by choosing the exposure where the clouds at the fringe of the sun were in the middle of the midtones as reported by LR3.
 
Same photographs as my original hue shift comparison but with image #5 corrected and the composite organized more cleanly and with EV labels added.

I'm pretty shocked by the consistency of the colors across the ETTR range. I was expecting to see more noticeable shifts/twists, either caused by the non-linear sensor/ADC response and/or Adobe's profile exposure hue twists.

Here's the full chart 100% crop of the "base exposure" shot (JPEG)



And here are the extracted wedges (TIFF 16-bit, screen-cap saved as lossless PNG). I adjusted exposure with the LR3 slider, with the recovery slider kept at zero. Clipping begins at +2 EV for the white square, a little later for the other colors. You can see the clipping when the wedges start departing from their expected color.

 
DSPographer, I am actually following what you are saying about putting noise back when rebuilding, say, an uncompressed TIFF image from a raw file. You've explained painstakingly in this thread that there's inevitably sizable noise in the data, and that this noise/numerical error goes up as (perhaps roughly) the square root of the pixel brightness. So say we're discussing a sensor where the top brightness level a pixel can output is assigned the max value, 4095.

When 3 pixels are all struck with the same really bright light, of actual level 4088, there's so much "square root of 4088-ish" noise that you might have pixel #1 reporting brightness 4089 (1 level too high because of noise), another one reporting 4086 (2 levels too low because of noise), and another pixel reporting the perfect result of 4088.

So there's so much noise in this bright data that there's really no accuracy lost by storing a "4088" for all the pixels. For all anyone knows, the light hitting all those pixels might indeed have been at level 4088 (and in fact in this example it was; it was just noise that created the slightly different 4086 and 4089 readings).

And there's perhaps so much noise that you don't really know if the light is any dimmer on a pixel until you get down to a pixel that reads out at, say 4080. So really, why bother providing unique numbers for the bright pixel values between 4081 and 4095? Just store a "4095" for every pixel reading out anywhere between 4081 and 4095, store a "4094" for every pixel that read out between 4066 and 4080, etc.

This is what the Sony compression scheme is doing, assigning far fewer distinct numbers to the brightest values in the raw data off the sensor. And any Sony raw converter must know that sensor-value-to-raw-file-number assignment scheme so that it can undo it. In our example, when a raw file value of 4094 is encountered, the raw converter should know that it came from a real sensor brightness value of about 4073 (out of a 4095 max).

Poster Iliah feels that useful data is being lost by, in essence, rounding off all those high original sensor data values varying between 4081 and 4095 and representing them in the stored raw file with a single number, "4095". But you've pointed out that a knowledgeable raw file converter could, for all practical purposes, produce a set of uncompressed TIFF pixel values that is as accurate and full of naturalistic variation as the original pixel data, simply by adding a random number between -7 and +7 to a brightness of 4088 to come up with the uncompressed pixel value whenever the raw converter encounters a "4095" in the raw file.

So in our example, say the raw converter reads the value "4095" in three different places in the raw file. The converter might write a value of 4088 into the .TIFF output file when it reads the first "4095". But according to DSPographer, the converter will then add a random offset of, say, +3 to the next raw "4095" it finds and write that out to the .TIFF as 4091. Maybe it'll take -2 off the next "4095" and write that out as 4086. It will do this "random error injection" into the .TIFF values so that there is no misleading sameness, i.e. banding, in the output pixel values in the .TIFF file.

So in our example the sensor values of 4089, 4086, and 4088 were all stored in the raw file as the number 4095. A stupid raw converter would, upon reading those raw pixels, have written all 3 of them into the output file as a uniform brightness value of 4088. But when those raw values are converted to .TIFF by the good algorithm with random noise injection described in the previous paragraph, the raw file values might be rebuilt for the .TIFF output file as the values 4088, 4091, and 4086.

It seems at a glance that the good raw converter has "lost something", because its random noise injection has rebuilt the original 4089, 4086, and 4088 values (all stored in the raw file as 4095) as 4088, 4091, and 4086 for the .TIFF output file. But the good converter has, via the noise injection, refrained from writing out the "4095" raw values to the .TIFF as a misleadingly bland block of white space, a "band". And the error in the reconstituted .TIFF file values is no greater than the noise-created errors in the original pre-raw-storage sensor values anyway.
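Here is a minimal sketch (in Python) of that idea, using the three example pixel values above. The 16-level bin size and the uniform dither are assumptions for illustration, not the actual Sony or converter implementation.

import random

random.seed(0)
STEP = 16  # hypothetical spacing of the coarse codes near full scale

def compress(sensor_value):
    # Collapse a bright sensor value onto a coarse bin (stand-in for the raw file).
    return (sensor_value // STEP) * STEP

def decompress_plain(raw_value):
    # "Stupid" converter: every identical raw code becomes the same output value.
    return raw_value + STEP // 2  # bin centre

def decompress_dithered(raw_value):
    # Add uniform noise of +/- half a step so identical codes don't band together.
    return raw_value + STEP // 2 + random.uniform(-STEP / 2, STEP / 2)

sensor = [4089, 4086, 4088]            # the three noisy pixels from the example
raw = [compress(v) for v in sensor]    # all three collapse to the same code
print("raw file codes: ", raw)         # identical values -> potential banding
print("plain decode:   ", [decompress_plain(v) for v in raw])
print("dithered decode:", [round(decompress_dithered(v), 1) for v in raw])

The plain decode writes the same 4088 three times (the "band"); the dithered decode scatters the three outputs around 4088, avoiding the misleading sameness.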

Fairly low distortion 28mm enlarger lens, about F8



 
I don't waste my time repeating experiments just to verify simple established facts.
Everything starts with repeating experiments that were already done by others.
Yes, I repeat interesting ones. This experiment is boring because the result is so easy to predict from basic principles
Maybe you are ignoring some of those basic principles.
I would suggest that you try learning some of that yourself.
Well, after you :)

--
http://www.libraw.org/
It's my understanding that analog gain is applied up to near ISO 950 on this 16MP Sony sensor generation, at least as sampled on the D7000 variant by Marianne Oelund. Would you then not expect to see some difference in std. dev between base ISO digitally pushed vs analog, even if the performance between the two techniques was very close? What method can you use to isolate the effect of the compression vs. just a small but measurable difference in analog vs digital gain?
 
The difference is not just in standard deviation: there is a substantial difference in brightness and in the contrast of the shadow portion, and a subtle but visible difference in colour rendition.

--
http://www.libraw.org/
 
More light through longer exposure usually means less noise; but what I see with Sony compression is that the portion in Zones 7 to 4, which is usually the most important part of the image, suffers as a result of ETTR. The resolution, colour, and contrast are not as good as with normal exposure, and that is before post-processing.

Different raw converters show these effects with different accuracy. It may well be that, because of rather inaccurate calculations, ACR and LR mask those effects.

--
http://www.libraw.org/
 
The difference between dithering the larger quantization step with noise and using a smaller quantization step will amount to the quantization error, plus the introduced dithering noise. The std dev of the former is about .3 of the level spacing, as is that of the latter. Adding the noises in quadrature (sqrt of the sum of squares of noise contributions), the larger spacing adds about .4 level of noise after dithering. If the other noises (shot noise, read noise) are sufficiently larger than this, which they typically are if the compression is well designed, then this additional noise/error will be negligible.
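Spelled out (a quick sketch in Python):

import math

quant_sigma = 1 / math.sqrt(12)   # uniform quantization error: ~0.29 of the level spacing
dither_sigma = 1 / math.sqrt(12)  # dither drawn uniformly over one step: also ~0.29
total = math.sqrt(quant_sigma**2 + dither_sigma**2)
print(f"{quant_sigma:.2f} and {dither_sigma:.2f} in quadrature -> {total:.2f} of a level")
# roughly 0.29 and 0.29 -> 0.41, i.e. the ~0.4 level quoted above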

BTW, Iliah often exposes at a lower ISO and compensates in the conversion, so he is only selectively concerned about increasing the level spacing ;)

--
emil
--



http://theory.uchicago.edu/~ejm/pix/20d/
 
