Compressed raw files and exposure strategy

The difference is not just in standard deviation: there is a substantial difference in brightness and in the contrast of the shadow portion, and a subtle but visible difference in colour rendition.
Have you done this for the D7000, using uncompressed NEFs? It would be interesting to see an NEX-5N vs D7000 compressed+uncompressed comparison to help isolate the effects of the Sony compression.
 
The difference is not just in standard deviation: there is a substantial difference in brightness and in the contrast of the shadow portion, and a subtle but visible difference in colour rendition.
Have you done this for the D7000, using uncompressed NEFs? It would be interesting to see an NEX-5N vs D7000 compressed+uncompressed comparison to help isolate the effects of the Sony compression.
Yes, a fundamental principle of science is to control for variables that can affect the result other than what you are trying to measure. Since the point of this thread was the level compression and not the equivalence of analog versus digital gain, Iliah's example doesn't correctly test for the effect we are interested in. In fact, I have never seen tests supporting the conclusion that the output of Exmor sensors with analog gain applied is not even slightly different from the output with digital gain applied. As an electrical engineer I would say it would actually be rather surprising if they were exactly identical. (That doesn't mean that analog gain always results in lower effective noise levels. I have seen commercial A-D boards for interfacing to a PC where setting an analog gain other than 1 actually increased the noise referenced to the amplifier input: the amplifier input stage was actually worse than the A-D input.)
 
Well, you didn't control for the difference between analog and digital gain, so your result is hardly surprising. You should also be careful to show just how visible the difference you are measuring is. When other people have said that analog and digital gain are equivalent in some Exmor sensors, I think they were allowing for slight differences that would be just visible but not enough to be important to them.

The level compression criterion that I was using will slightly, but not visibly, change the noise deviation, which could be measured by calculating the effect across an entire image.

Given the step size is set at half the STD of the shot noise, the quantization noise variance is (1/2)^2/12 = 1/48 of the shot-noise variance, so the quantization step adds
10*log10(1 + 1/(4*12)) = 0.09 dB of quantization noise to the image.

If after decompression we add back random dithering to the bits we truncated, that adds a further 0.09 dB of noise to the image (techniques exist to shape the frequency content of this dithering to make it even less visible), for a total increase of 0.18 dB in noise standard deviation.
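For anyone who wants to check the arithmetic, a minimal sketch in Python (only the step-equals-half-the-STD premise above is assumed):

```python
import math

sigma = 1.0      # shot-noise standard deviation, arbitrary units
q = sigma / 2    # quantization step set to half the shot-noise STD

# Uniform quantization adds variance q^2/12 = sigma^2/48.
quant_var = q ** 2 / 12

# Increase in noise level, expressed in dB:
print(10 * math.log10(1 + quant_var / sigma ** 2))      # ~0.09 dB

# Random dither of the same variance added on decompression
# doubles the added variance:
print(10 * math.log10(1 + 2 * quant_var / sigma ** 2))  # ~0.18 dB
```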

This increase is not visible but would be measurable with the tool you used on those images. My requirement that the decompression table be set up so the expected value of a pixel does not change through compression/decompression means that, for correctly performed raw level compression, the average brightness shouldn't change.
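To make the requirement concrete, here is a toy round trip in Python where compression bins raw levels by 4 and the decompression table returns each bin's mean. This is my own illustration, not Sony's actual tables:

```python
import numpy as np

BIN = 4  # hypothetical: group raw levels in bins of 4 in this range

def compress(levels):
    return levels // BIN

def decompress(codes):
    # Map each code to the mean of the levels it covers, so that
    # E[decompress(compress(x))] == E[x] for a smooth input distribution.
    return codes * BIN + (BIN - 1) / 2

rng = np.random.default_rng(0)
patch = rng.poisson(lam=1000, size=100_000)   # shot-noise-like patch
round_trip = decompress(compress(patch))
print(patch.mean(), round_trip.mean())        # means agree closely
```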

The 0.18 dB amount is a smaller change than you calculated in that image, and you also calculated a slight brightness change; I expect both of those effects come from the different analog gain.
 
Well, you didn't control for the difference between analog and digital gain
You think so, but you are wrong.
When other people have said that analog and digital gain are equivalent in some Exmor sensors I think they were allowing for some slight differences
The difference in contrast in shadows is not a "slight difference". It amounts to the necessity of using very different tone curves in processing to bring back details. Such processing makes for poor tonality; the shadows start to fall apart.

--
http://www.libraw.org/
 
So as long as the steps are smaller than the expected shot noise by a factor of 2 (or 1.4 if you aren't too picky) and as long as the compression/decompression doesn't change the expected level (this is the mistake Nikon made with their raw compression implementation) then no information is lost by increasing the spacing for bright tone levels.
Where was it shown that this is Nikon's mistake? I knew there was a problem with it but didn't know the diagnosis.
That is what I remember was reported to be the problem by someone here who analyzed it. Unfortunately I didn't do the analysis myself (I don't own a Nikon) and don't have the link handy.

The post was from some time ago so my memory might be faulty.
 
Well, you didn't control for the difference between analog and digital gain
You think so, but you are wrong.
To isolate the effect of the level compression you need to use an image where nothing else was changed. Since the analog gain was changed, you didn't do that.
When other people have said that analog and digital gain are equivalent in some Exmor sensors I think they were allowing for some slight differences
The difference in contrast in shadows is not a "slight difference". It amounts to the necessity of using very different tone curves in processing to bring back details. Such processing makes for poor tonality; the shadows start to fall apart.
Then post that as your example instead of a measurement made across an entire image.

Of course this is a different issue than raw compression. To test that, you also need a camera where you can change nothing except the use of raw compression.
 
Well, you didn't control for the difference between analog and digital gain
You think so, but you are wrong.
To isolate the effect of the level compression you need to use an image where nothing else was changed. Since the analog gain was changed, you didn't do that.
LOL. Take an a900 and check.

The issue is practical. With this camera you can either do a push or not, and that is the decision to make.
When other people have said that analog and digital gain are equivalent in some Exmor sensors I think they were allowing for some slight differences
The difference in contrast in shadows is not a "slight difference". It amounts to the necessity of using very different tone curves in processing to bring back details. Such processing makes for poor tonality; the shadows start to fall apart.
Then post that as your example instead of a measurement made across an entire image.
I see, you can't read histograms :) Look at the left slope of the luminance channel. Or shoot it yourself.

--
http://www.libraw.org/
 
Hi pako,
Things one may want to think about while examining the graph:
  • do you want to use ETTR, given such a transfer curve?
  • do you want to set ISO in the camera rather than adjusting brightness during conversion?
  • is correct exposure with such a camera more important than with a camera that offers uncompressed raw?
Interesting.

From your point of view, can you please answer these questions?
These were rhetorical questions from Iliah, as in this thread he has been trying to demonstrate the following:

1) That if you expose to the right with a Sony camera, and then correct the exposure in postprocessing, you risk losing some information (brightness levels) due to Sony's raw compression algorithms.

2) That you should be careful to adjust ISO in the camera to get proper exposure rather than adjusting exposure in postprocessing, for exactly the same reason: Sony's raw compression algorithm.

3) That with Sony's cameras it is important to get the right exposure when you take the picture, because there is less "latitude in postprocessing" than with cameras that do not compress raw data (using an algorithm similar to Sony's).
Regards

PS:"expose to the right" is not Panacea anymore?
I don't think it was ever considered a panacea, but it's true that I have read the recommendation to ETTR when using digital cameras, the basic argument being that if you expose to the right and correct the exposure in postprocessing you'll get less noise in your final output compared to "exposing to the left".

What Iliah is basically arguing is that Sony's raw compression algorithm results in "less latitude" in postprocessing to correct overexposure.

Personally I would much prefer visual proof of his argument, instead of a graph or a table with numbers...
--
Andrew
Novice photographer
 
ejMartin: The difference between dithering the larger quantization step with noise and using a smaller quantization step will amount to the quantization error, plus the introduced dithering noise...If the [other sources of infidelity such as] ...(shot noise, read noise) are sufficiently larger than this, which they typically are if the compression is well designed, then this additional noise/error will be negligible.

Which sounds to me like you can't condemn or approve of a compression scheme's reasonableness until you know what exact camera and ISO setting it's being used with.

Professor Martin, thanks. I take indirectly from your comments that if camera companies wanted to bother, they could harmlessly provide us with varying-size raw files depending on the ISO setting of the camera. Or at least give us the option of asking for raw-file bit depths customized to the taking ISO. And I trust it would not be too big a deal for raw converters to deal with varying bit depths in raw data, as long as there was metadata at the head of each image file to clearly indicate the situation?
 
Well, you didn't control for the difference between analog and digital gain
You think so, but you are wrong.
To isolate the effect of the level compression you need to use an image where nothing else was changed. Since the analog gain was changed, you didn't do that.
LOL. Take an a900 and check.
This is your test, remember?
The issue is practical. With this camera you can either do a push or not, and that is the decision to make.
When other people have said that analog and digital gain are equivalent in some Exmor sensors I think they were allowing for some slight differences
The difference in contrast in shadows is not a "slight difference". It amounts to the necessity of using very different tone curves in processing to bring back details. Such processing makes for poor tonality; the shadows start to fall apart.
Then post that as your example instead of a measurement made across an entire image.
I see, you can't read histograms :) Look at the left slope of the luminance channel. Or shoot it yourself.
Your unlabeled bar graphs have neither axis scales nor error bars.

P.S. Do you realize that without raw compression analog gain directly reduces the impact of the A-D step size by the gain amount, but that with compression analog gain pushes the signal up to a point where the step size increases, thus counteracting the resolution improvement from the gain? So, turning on compression simply reduces the improvement in digital resolution from applying analog gain.
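A rough numeric sketch of this point, assuming a square-root-shaped compression curve purely for illustration (Sony's real tables are piecewise linear and camera-specific, so treat the numbers as qualitative only):

```python
import math

FULL_SCALE = 16383  # assumed 14-bit full scale

def local_step(level):
    # For compressed = 2047 * sqrt(level / FULL_SCALE), the local
    # quantization step dL/dc grows as sqrt(level):
    return 2 * math.sqrt(level * FULL_SCALE) / 2047

signal = 500.0
for gain in (1, 2, 4):
    boosted = signal * gain
    # Refer the local step back to the sensor input by dividing by gain:
    print(gain, local_step(boosted) / gain)

# The input-referred step shrinks only as 1/sqrt(gain), not 1/gain:
# the compression curve gives back part of the resolution the
# analog gain bought.
```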
 
ejMartin: The difference between dithering the larger quantization step with noise and using a smaller quantization step will amount to the quantization error, plus the introduced dithering noise...If the [other sources of infidelity such as] ...(shot noise, read noise) are sufficiently larger than this, which they typically are if the compression is well designed, then this additional noise/error will be negligible.

Which sounds to me like you can't condemn or approve of a compression scheme's reasonableness until you know what exact camera and ISO setting it's being used with.
You need to know the standard deviation of the noise versus the step size. If the step size is half the standard deviation or less then you are fine.
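In code the criterion is a one-liner; the factor of 2 (and the less picky ~1.4) come from my earlier post, not from any official spec:

```python
def compression_ok(noise_std, step_size, picky=True):
    # The step should be smaller than the noise STD by a factor of 2
    # (or ~1.4 if you aren't too picky), per the criterion above.
    factor = 2.0 if picky else 1.4
    return step_size <= noise_std / factor

print(compression_ok(noise_std=4.0, step_size=2.0))  # True
print(compression_ok(noise_std=4.0, step_size=3.5))  # False
```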
 
Well, you didn't control for the difference between analog and digital gain
You think so, but you are wrong.
To isolate the effect of the level compression you need to use an image where nothing else was changed. Since the analog gain was changed, you didn't do that.
LOL. Take an a900 and check.
This is your test, remember?
I did my test. Your turn :)
Your unlabeled bar graphs have neither axis scales nor error bars.
Your invective is directed at Adobe? :)
P.S. Do you realize that without raw compression analog gain directly reduces the impact of the A-D step size by the gain amount, but that with compression analog gain pushes the signal up to a point where the step size increases, thus counteracting the resolution improvement from the gain? So, turning on compression simply reduces the improvement in digital resolution from applying analog gain.
All theory. We deal with the real world here. How many calibration points do you need for that ADC? How good is the calibration stability you are going to reach at each point, for that price? And that is just one thing.

--
http://www.libraw.org/
 
There is a very good reason Sony uses some raw compression in their Exmor sensors. Those sensors use column-parallel ramp-type A-D converters. The conversion speed of this type of converter is proportional to the number of steps of the ramp. By scaling the ramp the way Iliah plotted, Sony can get enough precision for the nearly black pixels at the beginning of the ramp without having to wait for the ramp to go through 16,000 steps to reach the highest levels. So this explains how Sony implements the level compression: they progressively change the ramp step size at the counter in the ramp generator of the A-D converter.
If this is the way it is done, could there be some transients during the counting as one switches from one ramp step size to the next one, leading to spikes in the read noise at the slope discontinuities in the compression graph?

Also, in this sort of A-D conversion, is the digitized value the first one larger than the input, the last one smaller, or a rounding of the analog input? Each leads to slight shifts in the average value.

Finally, elsewhere in the thread you mentioned one could shape dither noise to produce a better result for decompression. What are some examples?

--
emil
http://theory.uchicago.edu/~ejm/pix/20d/
 
you can't condemn or approve of a compression scheme's reasonableness
The scheme is brilliant; there are just some rough parts in the implementation as of today, and additionally one needs to pay more attention to exposure than with "linear" raw.

I'm familiar with some SDK implementations that use a register to switch compression on and off. In rare cases even the look-up table that controls compression can be programmed.

--
http://www.libraw.org/
 
There is a very good reason Sony uses some raw compression in their Exmor sensors. Those sensors use column-parallel ramp-type A-D converters. The conversion speed of this type of converter is proportional to the number of steps of the ramp. By scaling the ramp the way Iliah plotted, Sony can get enough precision for the nearly black pixels at the beginning of the ramp without having to wait for the ramp to go through 16,000 steps to reach the highest levels. So this explains how Sony implements the level compression: they progressively change the ramp step size at the counter in the ramp generator of the A-D converter.
If this is the way it is done, could there be some transients during the counting as one switches from one ramp step size to the next one, leading to spikes in the read noise at the slope discontinuities in the compression graph?
I don't think changing the step size would generate a spike. The diagram I saw for this method used a counter to drive a D-A converter as the ramp generator, and the increment on the counter was changed to compress the levels, which is simple if the step sizes are powers of 2. With a simple D-A converter there are some steps that have higher error than others, whether or not you have raw compression. Whenever a count rolls over to the next higher bit you switch which outputs of the D-A resistor ladder you are using, so when many bits flip you can get a step with more error in the increment than typical.
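A sketch of that mechanism as I understand it, with made-up breakpoints and power-of-2 increments (the real parameters are Sony's):

```python
# (count at which the increment changes, new increment) - hypothetical values
SEGMENTS = [(0, 1), (1024, 2), (2048, 4), (4096, 8)]

def build_ramp(full_scale=16384):
    levels, level, count = [], 0, 0
    while level < full_scale:
        # Pick the increment in force at the current count:
        inc = max(i for start, i in SEGMENTS if count >= start)
        levels.append(level)   # analog level reached at this count
        level += inc
        count += 1
    return levels

ramp = build_ramp()
print(len(ramp), "counts instead of 16384")  # far fewer ramp steps
# 'ramp[c]' doubles as the decompression table for output code c.
```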
Also, in this sort of A-D conversion, is the digitized value the first one larger than the input, the last one smaller, or a rounding of the analog input? Each leads to slight shifts in the average value.
There is a comparator with an input for the ramp level and an input for the pixel voltage. When the ramp exceeds the pixel voltage, a count is latched into the digital output. At the end of the ramp this count is transferred to an output shift register. I don't think it would round the analog level, but it could be configured to latch either the last smaller count or the first larger one at the comparator.
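The shift in the average value is easy to see numerically; here is a sketch comparing the two latching conventions for test voltages spread uniformly within one step:

```python
import numpy as np

rng = np.random.default_rng(1)
step = 1.0
pixels = rng.uniform(100.0, 101.0, 100_000)  # voltages within one step

first_larger = np.ceil(pixels / step) * step   # latch first count above
last_smaller = np.floor(pixels / step) * step  # latch last count below

print(pixels.mean(), first_larger.mean(), last_smaller.mean())
# first-larger reads about step/2 high on average, last-smaller about
# step/2 low; either bias can be removed with a half-step offset.
```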
Finally, elsewhere in the thread you mentioned one could shape dither noise to produce a better result for decompression. What are some examples?
The basic idea comes from the audio world (see the AES journals), where they dither 24-bit studio audio when creating a 16-bit signal for making a CD. That dither signal can be generated with a feedback loop including a filter that shapes the noise spectrum to minimize how audible it is. With an image we could use 2-D filtering of the dither to minimize its visibility, but a complication would be how to generate the color-channel dither data, and also how or whether to distribute it among the sensor channels before or after de-mosaicing the raw data. Anyway, with a step size equal to half the analog noise deviation, for most algorithms it would not be necessary to add any digital dither on decompression to avoid artifacts like banding.
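For the curious, a 1-D sketch of that audio-style error-feedback idea: each sample's quantization error is fed back into the next, which pushes the error spectrum toward high frequencies (an image version would filter the error in 2-D):

```python
import numpy as np

def requantize_noise_shaped(x, step, rng):
    out = np.empty_like(x)
    err = 0.0
    for i, v in enumerate(x):
        u = v - err                               # subtract fed-back error
        d = (rng.random() - rng.random()) * step  # TPDF dither
        q = step * np.round((u + d) / step)       # coarse requantization
        err = q - u                               # error to feed forward
        out[i] = q
    return out

rng = np.random.default_rng(2)
fine = np.linspace(0.0, 10.0, 2_000)   # smooth test ramp
coarse = requantize_noise_shaped(fine, step=1.0, rng=rng)
# Total error is e[n] - e[n-1]: first-order high-pass shaped noise.
```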
 
If this is the way it is done, could there be some transients during the counting as one switches from one ramp step size to the next one, leading to spikes in the read noise at the slope discontinuities in the compression graph?
I don't think changing the step size would generate a spike. The diagram which I saw for this method used a counter to drive a D-A converter as the ramp generator and the increment on the counter was changed to compress the levels- which is simple if the steps sizes are a power of 2. With a simple D-A converter there are some steps that have higher error than others- whether or not you have raw compression. Whenever any count rolls-over to the next higher bit you switch which outputs of the D-A resistor ladder you are using, so when many bits flip you can get a step which has more error in the increment than typical.
It would make sense to smooth the analog ramp to avoid any little noise spikes created when bits on the input to the resistor ladder roll over. This smoothing filter might be designed so that the slope of the ramp doesn't change abruptly.
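A toy version of that smoothing, assuming a simple first-order (RC-style) low-pass; the actual filter would be an analog design choice inside the ramp generator:

```python
import numpy as np

# Staircase ramp from the counter/DAC: 64 levels, 10 samples each.
stairs = np.repeat(np.arange(64, dtype=float), 10)

alpha = 0.3  # arbitrary smoothing constant, for illustration only
smooth = np.empty_like(stairs)
acc = 0.0
for i, v in enumerate(stairs):
    acc += alpha * (v - acc)   # first-order low-pass
    smooth[i] = acc
# 'smooth' rounds off the step edges that could otherwise couple
# glitch energy into the comparator.
```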
 
