7D dynamic range?

No, you're not getting it. Without showing the shadow side we don't know the tolerance for underexposure of the tested digital camera.
Most people prefer to give a good exposure instead of underexposing, but hey.
The idea that underexposing is "bad" is a throwback to print films. It is not "bad" on a digital camera when the purpose is to capture more highlight detail. You're underexposing relative to a meter reading of middle gray, but that's fine when most of your dynamic range is on the shadow side but most of your scene detail is on the highlight side. On print film, where the opposite is true, you do the opposite: you overexpose relative to middle gray when you have more shadow detail to capture than the film can handle with a "correct" exposure.
Here are more Ektar vs. digital comparisons, which include underexposure tests:
http://forums.dpreview.com/forums/read.asp?forum=1018&message=31266941
Only the last link was even an attempt at testing DR, and a very sorry attempt at that. Once again, the CORRECT way to do this is a single exposure of a calibrated Stouffer transmission step wedge. I'm not going to guess whether or not a frame of a child's toy shows an actual DR limit or bad lighting, nor try to guess what stop it corresponds to. Ektar may very well perform better in a proper test, but you have to have a proper test to determine this.

Proper testing is what separates the men who work for magazines and sites like this one from the boys who shoot household items and post in the forums. Those step wedges are fairly cheap and if you're interested in figuring out how materials perform you should pick one up.
No usable evidence has been shown. You appear to be a person who is too ignorant about dynamic range and exposure to realize that. (You throw an insulting accusation, I throw one back. You want to remain civil, then don't start a fight.)
I've seen enough of the "digital is god's best creation" attitude on here to know it when I see it.
And you've still left me convinced that you do not understand exposure. There are some good books on the topic, pick one up.
 
When faced with a high contrast scene on film, you overexpose to take advantage of the fact that most of the total range is on the highlight side. When faced with the same scene on digital, you underexpose to take advantage of the fact that most of the total range is on the shadow side.
Isn't most of the tonal range on the highlight side on digital?
I was speaking about the dynamic range, not the number of tones that can be discerned at a given stop of exposure.

You are correct that there's finer tonal separation in digital files on the highlight side. That's why you expose to the right. For some scenes this still means underexposing relative to a meter reading of middle gray in order to hold the highlights. You always want to keep the histogram as close to the right side as possible without clipping.

Still, it's pretty amazing how deep you can dig into the shadows of a RAW file and get good tonal separation.
 
I also was pretty surprised when I heard that digital cameras have 9 stops of DR, as I learned negative film has 7-9 stops of DR.

Unfortunately I can't find the website right now, but I once saw a comparison showing that film is more like 14 stops of DR.
Here's a pretty good test:
http://forums.dpreview.com/forums/read.asp?forum=1031&message=32891438

One thing in DPR's reviews: they list a maximum/best DR for a dSLR that is often 11 to 12 EV. Yet they often state (not in every review, and I'm not sure why not) that their "best" DR image would not look good, since it would use a concave tone curve, very low in contrast, instead of the typical "S" tone curve.
The response of negative film is logarithmic. Well, it's actually more like an S-curve on a logarithmic scale. But the response of digital is linear (on a linear scale).

That comparison is misleading; it suggests the difference is only about 1 stop. It's not, it's much more. When you overexpose film and then correct for the response curve, what you get is just more and more noisy highlights as you increase the exposure time (up to a point where any detail is hidden by noise).
With digital, it's just clamped.

You could probably go on a few more stops with the film in that comparison and still get acceptable results.
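The clamping difference being described can be sketched with a toy model. The curve shapes and numbers below are illustrative stand-ins, not measured film or sensor data; the logistic function is just a convenient S-curve substitute for a real film characteristic curve:

```python
import numpy as np

# Schematic response curves; shapes are illustrative, not measured data.
stops = np.linspace(-6, 4, 11)              # exposure in stops around middle gray
digital = np.clip(2.0 ** stops, None, 1.0)  # linear, then a hard clamp at saturation
film = 1.0 / (1.0 + np.exp(-0.9 * stops))   # logistic S-curve on a log-exposure axis

print(digital[-3:])  # clipped: identical values, highlight detail is gone
print(film[-3:])     # still rising: some highlight separation remains
```

Past saturation the digital values are identical, while the film stand-in keeps rising (ever more slowly), which is the "noisy but present" highlight detail described above.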
 
I don't use IRIS; I have no idea what it is doing or how.

The point is that the standard deviation is much greater with "raw" raw than with the black level corrected data. Example: my 40D at ISO 100, 1/30 sec, shows 5.95-6.00 on a black frame shot, equal on the masked pixels and image area, without black level correction. However, after black level correction, >

This fact makes comparison of the DR this way between Canons and most other cameras impossible - another reason I prefer to measure the relative noise on uniform patches of non-black frame shots. When doing so, I am watching for the proportion of "black clipping", because when you measure the noise on such deep shadows, which contain a lot of black-clipped pixels, the noise becomes "moderated" (lowered). This is the source of the funny changes in the curves of naive measurements.
I wouldn't think IRIS would do auto-BLC since the astro people would HATE that.

Well, you do have to be very careful with the other brands, but I'm only measuring Canon stuff. And again, while your SNR tests are perhaps more real-world relevant overall, they are not measuring the same thing.
 
And again, why would it show, say, avg 2047.34, median 2047.59, etc., all as-is, and then secretly do some sort of BPC and only then compute the StdDev? That would be awfully surprising.
 
why would it show, say, avg 2047.34, median 2047.59, etc., all as-is, and then secretly do some sort of BPC and only then compute the StdDev? That would be awfully surprising.
I am the wrong person to ask about IRIS. However, Rawnalyze is doing just what you described, with very good reason.

1. The min, max and pixel value averages relate to the absolute values.

2. The displayed image highly depends on the black level correction. See the following two captures: the one without BL correction is horrendously flat, which is natural.

3. The intensity (labeled with AI) reflects how the image is displayed. In fact the non-corrected values are pretty useless, because the original pixel values are not linear; they can be used only within the same image.

4. For me it is natural that the standard deviation is displayed after BL correction, because that way the noise of different cameras can be compared. The standard deviation is affected by the BL correction only if some pixels become negative after the BL subtraction (the negative values will be replaced by zero).

5. If I want to see the uncorrected values, I "tell" Rawnalyze not to do the correction.

The colored dots in the black areas represent the "black clipping"; I turned the display on (via the "Raw clipping" checkbox) in order to demonstrate that black clipping does occur in the selected area, but not much. In particular, there are only a few such green pixels; therefore the standard deviation changes only minimally between the BL-corrected and uncorrected areas.

--
Gabor

http://www.panopeeper.com/panorama/pano.htm
 
Most Canon bodies -- and from what I have seen posted from some preliminary tests here, the 7D is no exception -- have slightly better DR at ISO 200 than at 100, and then it gradually gets worse as you increase the ISO.
I don't know of any modern Canon DSLR which would have greater DR at ISO 200 than at ISO 100. Just the opposite: the DR is always greater at ISO 100, although the difference is tiny in some cases.

--
Gabor

http://www.panopeeper.com/panorama/pano.htm
I haven't read the whole thread, but there is an obvious reason why DR may be better at ISO 200 than 100.

The amplification from sensor to A/D conversion is rarely tuned so that the voltage swing matches the sensor saturation. This means that while the highlight clipping level from the next-lowest ISO up to the highest native ISO is determined by clipping of the voltage swing in the amplifiers, at the lowest ISO it is determined by sensor saturation. The difference is usually 0.1 - 0.3 stops.

At the lowest and next-lowest ISO, the noise that is most relevant for measuring DR is amp/converter noise, which is largely independent of the ISO setting.

So I see nothing strange at all in the vast majority of measurements telling that the highest DR is at ISO 200.
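This mechanism can be put into rough numbers. Everything below is hypothetical, chosen only to show the effect: read noise that is roughly constant in ADU at low ISO (so its input-referred value in electrons halves when gain doubles), and a sensor that saturates about 0.2 stop below the ADC swing at the base ISO:

```python
import math

# All numbers are hypothetical, chosen only to illustrate the mechanism.
adc_clip_e_iso100 = 60000  # electrons that would fill the ADC at ISO 100 gain
sensor_sat_e      = 52000  # sensor saturates ~0.2 stop below that swing
read_noise_adu    = 3.0    # amp/ADC noise, roughly constant in ADU at low ISO
gain_iso100       = 8.0    # electrons per ADU at ISO 100
gain_iso200       = 4.0    # electrons per ADU at ISO 200

# ISO 100: the sensor clips first, below the ADC swing.
dr_iso100 = math.log2(sensor_sat_e / (read_noise_adu * gain_iso100))
# ISO 200: the ADC clips first, at half the ISO 100 swing in electrons,
# while the input-referred read noise also halves.
dr_iso200 = math.log2((adc_clip_e_iso100 / 2) / (read_noise_adu * gain_iso200))

print(f"DR @ ISO 100: {dr_iso100:.2f} stops")  # ~11.08
print(f"DR @ ISO 200: {dr_iso200:.2f} stops")  # ~11.29
```

With these made-up figures, ISO 200 comes out about 0.2 stop ahead, matching the size of the effect described above.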

--
It depends on the eye
 
As long as you are using a signed integer representation, it is not going to matter whether black point subtraction is performed. Std dev is independent of where the mean is located. Only if the negative data are clipped to zero during black point subtraction (for instance by using unsigned integer representation) will the std dev be affected.
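A quick synthetic check of this point (the black level and noise figures below are made up):

```python
import numpy as np

# Synthetic black frame: black level 1024, Gaussian read noise of sd 6.
rng = np.random.default_rng(0)
black_level = 1024.0
frame = rng.normal(black_level, 6.0, size=100_000)

signed  = frame - black_level                    # subtraction, negatives kept
clipped = np.clip(frame - black_level, 0, None)  # unsigned-style clip at zero

print(np.std(frame), np.std(signed))  # identical: std dev ignores the mean
print(np.std(clipped))                # noticeably lower than the true 6.0
```

Subtracting a constant leaves the standard deviation untouched; only the clip at zero shrinks it, which is exactly the point being made.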
--
emil
--

http://theory.uchicago.edu/~ejm/pix/20d/
 
As long as you are using a signed integer representation, it is not going to matter whether black point subtraction is performed. Std dev is independent of where the mean is located. Only if the negative data are clipped to zero during black point subtraction (for instance by using unsigned integer representation) will the std dev be affected.
The goal of the black point correction is to arrive at pixel values which are useful in further processing, like intensity adjustment, white balancing and gamma application. Negative values, as a "hangover" of the uncorrected pixel values, are of no use there.

The standard deviation displayed by Rawnalyze is based on those corrected pixel values (except when the correction is not to be carried out).

--
Gabor

http://www.panopeeper.com/panorama/pano.htm
 
The goal of the black point correction is to arrive at pixel values which are useful in further processing, like intensity adjustment, white balancing and gamma application. Negative values, as a "hangover" of the uncorrected pixel values, are of no use there.
They are of value before you subtract the blackpoint offset, and even more so if you use signed math. They are of value all throughout the conversion process. Any converter that clips at mean black before performing all of the conversion steps is operating in error, IMO. Near-blacks have less noise and have more closely linear means if the negative values are maintained throughout conversion.

Here's a simple example - you shoot under incandescent light, and the WB requires multiplying the red channel by 0.9, and the blue channel by 3.8. If you black-clip first before the WB, you will have all positive noise, and when you multiply the blue channel, the mean read noise gets multiplied by 3.8 in the blue channel. Had the blacks not been clipped, individual positive outliers would still be multiplied by 3.8, but so would the negative ones. Mean read noise is still zero/black, very useful for downsamples or binnings, or abstractions that lower the color resolution and/or smooth color to edges.
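The incandescent-light example can be simulated with made-up numbers: zero-mean Gaussian read noise stands in for the blue channel around black, and the 3.8 WB factor is taken from the example above:

```python
import numpy as np

# Zero-mean Gaussian read noise standing in for the blue channel at black.
rng = np.random.default_rng(1)
blue_noise = rng.normal(0.0, 5.0, size=100_000)
wb_blue = 3.8

kept    = blue_noise * wb_blue                    # negatives preserved
clipped = np.clip(blue_noise, 0, None) * wb_blue  # black-clipped before WB

print(kept.mean())     # ~0: mean black stays black
print(clipped.mean())  # ~7.6: the black floor is pushed up by the WB multiply
```

With the negatives kept, the mean stays at black after white balancing; clipping first leaves only positive noise, so the 3.8x multiply raises the mean black level.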

It is a mistake to get rid of the negative read noise early in the conversion process, IMO. There is actually some signal there; it is not pure noise, but is affected by weak signal.

--
John

 
They are of value before you subtract the blackpoint offset, and even more so if you use signed math
Yes, they are, but not after the correction - and that was my point.
Any converter that clips at mean black before performing all of the conversion steps is operating in error, IMO
Well, I'm afraid it's you who is in error. One thing is the black point correction itself, which can (or could) be subjected to noise reduction as well.

However, the non-black-point-corrected pixel values are not linear; therefore it is nonsensical to make adjustments to them, like WB, intensity, etc.
It is a mistake to get rid of the negative read noise early in the conversion process, IMO. There is actually some signal there; it is not pure noise, but is affected by weak signal
I am very happy to see that you too acknowledge that not everything under the noise level is worthless.

--
Gabor

http://www.panopeeper.com/panorama/pano.htm
 
They are of value before you subtract the blackpoint offset, and even more so if you use signed math
Yes, they are, but not after the correction - and that was my point.
Any converter that clips at mean black before performing all of the conversion steps is operating in error, IMO
Well, I'm afraid it's you who is in error. One thing is the black point correction itself, which can (or could) be subjected to noise reduction as well.

However, the non-black-point-corrected pixel values are not linear; therefore it is nonsensical to make adjustments to them, like WB, intensity, etc.
Huh? Your idea of linear is a bit peculiar. The Canon black point offset is a constant added to all RAW values. RAW data is regarded as linear because RAW level S is proportional to pixel illumination level L, so that S = cL. Adding a constant a still gives a linear relation S = cL + a.
It is a mistake to get rid of the negative read noise early in the conversion process, IMO. There is actually some signal there; it is not pure noise, but is affected by weak signal
I am very happy to see that you too acknowledge that not everything under the noise level is worthless.
It is. However, if one combines pixels, the S/N ratio of the aggregate can increase to the point where signal rises above noise and is detectable.
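A minimal sketch of that aggregation effect, with an invented weak signal and noise level:

```python
import numpy as np

# A weak signal of 1.0 buried in read noise of sd 10 (invented numbers):
# per-pixel SNR is 0.1, but averaging n pixels raises SNR by sqrt(n).
rng = np.random.default_rng(2)
signal, sigma, n = 1.0, 10.0, 10_000
pixels = signal + rng.normal(0.0, sigma, size=n)

per_pixel_snr = signal / sigma                 # 0.1: undetectable per pixel
aggregate_snr = signal / (sigma / np.sqrt(n))  # 10.0: well above the noise
print(pixels.mean())  # close to the true signal of 1.0
```

No single pixel shows the signal, yet the average of the aggregate recovers it cleanly: the signal was below the noise level, but not worthless.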

--
emil
--

http://theory.uchicago.edu/~ejm/pix/20d/
 
They are of value before you subtract the blackpoint offset, and even more so if you use signed math
Yes, they are, but not after the correction - and that was my point.
Any converter that clips at mean black before performing all of the conversion steps is operating in error, IMO
Well, I'm afraid it's you who is in error. One thing is the black point correction itself, which can (or could) be subjected to noise reduction as well.
However, the non-black-point-corrected pixel values are not linear; therefore it is nonsensical to make adjustments to them, like WB, intensity, etc.
They most certainly are linear. They make perfect sense for calculations, up until the point where you need to put the image into a conventional format for display. WB, demosaicing, NR, etc., can all be performed before gamma is applied to a conversion. And there is no reason to assume that in the future, negative values can't be kept in a file in case a program wants to resample the image, even in sRGB format. RAW conversion on the fly, with an embedded recipe which can be overridden. That's where I want things to go.
It is a mistake to get rid of the negative read noise early in the conversion process, IMO. There is actually some signal there; it is not pure noise, but is affected by weak signal
I am very happy to see that you too acknowledge that not everything under the noise level is worthless.
You just said the same thing was worthless and should be discarded - you seem to be contradicting yourself.

The only thing I can think of that you might be saying here is that you are being sarcastic about me allegedly "finally admitting" that those two extra bits (14 vs 12) are meaningful, but this has nothing to do with bit depth, per se. Any bit depth used must have levels extending below black, and many times, I have quantized RAW data, including the negative noise, with no ill effect, as long as the quantization offset is properly accounted for (something Roger Clark completely forgot about when he demonstrated the "damage" done by quantization while arguing with me).

--
John

 
However, the non-black-point-corrected pixel values are not linear; therefore it is nonsensical to make adjustments to them, like WB, intensity, etc.
Huh? Your idea of linear is a bit peculiar. The Canon black point offset is a constant added to all RAW values. RAW data is regarded as linear because RAW level S is proportional to pixel illumination level L, so that S = cL. Adding a constant a still gives a linear relation S = cL + a.
This is plainly ridiculous. I wrote the non-black-point-corrected pixel values are not linear; now you carry out the correction, then an adjustment, and "uncorrect" the result again.

Did you have anything in particular in mind?

Regarding the "constant additive": IIRC it was you, who suggested that the off-setting may be the result of a constant boost before the A/D conversion; if so, the effective offset may or may not be a constant.

Anyway, I don't think Canon put the masked pixels in the raw file only for fun. ACR too is calculating with them; the way depends on the model. For example the 5D2's pattern noise is lower if the global average of the masked pixels is applied instead of row- and column-oriented corrections.

--
Gabor

http://www.panopeeper.com/panorama/pano.htm
 
However, the non-black-point-corrected pixel values are not linear; therefore it is nonsensical to make adjustments to them, like WB, intensity, etc.
They most certainly are linear
I was writing about the non-black-level-corrected pixel values. I think you are referring to black-level-corrected values, while keeping the negative values. They are pretty different.
They make perfect sense for calculations, up until the point where you need to put the image into a conventional image for display. WB, demosaicing, NR, etc, etc, can all be performed before gamma is applied to a conversion
It is possible to do so - but what for? I wrote before that the noise reduction which is integrated in the demosaicing process can use that info; but what is it good for later? Using the adjusted negative values, for example in a later resampling, is totally unacceptable. The original negative values would have to be retained for that purpose.
The only thing I can think of that you might be saying here is that you are being sarcastic about me
Certainly not.

--
Gabor

http://www.panopeeper.com/panorama/pano.htm
 
However, the non-black-point-corrected pixel values are not linear; therefore it is nonsensical to make adjustments to them, like WB, intensity, etc.
Huh? Your idea of linear is a bit peculiar. The Canon black point offset is a constant added to all RAW values. RAW data is regarded as linear because RAW level S is proportional to pixel illumination level L, so that S = cL. Adding a constant a still gives a linear relation S = cL + a.
This is plainly ridiculous. I wrote the non-black-point-corrected pixel values are not linear; now you carry out the correction, then an adjustment, and "uncorrect" the result again.
They are linear. Remember "y=ax+b"? "b" is the blackpoint offset. The function y is linear.
Did you have anything in particular in mind?
Regarding the "constant additive": IIRC it was you, who suggested that the off-setting may be the result of a constant boost before the A/D conversion; if so, the effective offset may or may not be a constant.
Anyway, I don't think Canon put the masked pixels in the raw file only for fun.
Canon only uses it globally. There is no evidence that they pay any attention to line-by-line black values in conversion. In any event, you must discard a certain number of outliers when using the masks, as the samples are too small to hide the wilder ones. If you have 151 values, it would be wise to ignore the lowest 20 and the highest 20.
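The trimming described here can be sketched with synthetic masked-pixel values (the black level, noise figure, and outliers below are invented):

```python
import numpy as np

def trimmed_black_level(values, n_trim=20):
    # Drop the n_trim lowest and highest samples before averaging.
    s = np.sort(np.asarray(values, dtype=float))
    return s[n_trim:len(s) - n_trim].mean()

rng = np.random.default_rng(3)
masked = rng.normal(1024.0, 6.0, size=151)  # 151 masked-pixel black samples
masked[:4] = 0.0                            # a few stuck-low outlier pixels

print(masked.mean())                # dragged well below the true black
print(trimmed_black_level(masked))  # stays near the true black of 1024
```

A handful of stuck pixels is enough to pull the plain mean of 151 samples off by tens of ADU, while the trimmed mean stays put, which is why discarding the extremes matters with samples this small.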
ACR too is calculating with them; the way depends on the model. For example the 5D2's pattern noise is lower if the global average of the masked pixels is applied instead of row- and column-oriented corrections.
... failing to ignore the outliers?

BTW, when you shoot at super-high-ISO pushes, the image can essentially be treated as a black frame, because the banding is much stronger than the signal. Especially so when there are no image-wide or -high horizontal or vertical contours.

--
John

 
They are linear. Remember "y=ax+b"? "b" is the blackpoint offset. The function y is linear
The pixel value is not a function; it is the variable in functions, like WB application.
Canon only uses it globally
I wonder how you know this. I am asking it seriously.
In any event, you must discard a certain number of outliers when using the masks, as the samples are too small to hide the wilder ones
I do, and I think ACR does too; I define firm lower and upper limits per camera model for the acceptable masked pixel values. I'm too lazy to run a test to verify how much the global average would change if the extreme values were also included in the calculation. The 5D2 in particular has many such pixels at high ISOs - there can be thousands with values under 800, hundreds of them being zero.

--
Gabor

http://www.panopeeper.com/panorama/pano.htm
 
