Comparison of camera sensors at high ISO

but do not truncate (unfortunately, RawDigger does). That might produce negative numbers (but in the highlights, highly unlikely) but so what, the computation of the st.dev. remains unchanged, and the mean would be positive unless you are really deep in the darks over a small patch.
Negative numbers must not be assigned to unsigned integers! Does RawDigger do that?
Unfortunately, not. When you measure the mean and the st.dev. over small patches with black point compensation activated, it truncates the values without any need of doing that.
As noted in the thread, black point comp can be turned off. You can also manually set it to whatever value you want, including the lowest pre-corrected value if that floats your boat.
I know that it can be turned off. I do it all the time but then the preview is washed out and it is not clear on what exactly I am clicking.
But what's your use case? What is it in the displayed image that you're "clicking" and why? For me, setting black level to the lowest real (pre-corrected) DN in the image has never been too much to "wash out" the image. A typical use case for me is to set a selection patch and move it to the location in the image I want to analyze. Maybe I'll also display the corresponding selection histogram. Then, I can always toggle on/off (or adjust) the black level setting for that patch. Of course, toggling off the black level setting completely is likely to wash out the image, but so what? I've already identified the area to be analyzed. If I want to move to another area, I just toggle the black point setting back on. Rinse and repeat...
The ideal behavior with a black point subtraction, IMO, would be to display a black point corrected rendering with the black point subtracted in the calculations and in the values shown without shying away from negative values. Negative numbers are numbers, too!
YMMV, but my ideal default behavior is showing the black point corrected values as they will be used elsewhere (e.g., RawDigger's histograms and external raw converters).
No. Some programs will cut off valid data, and others will do it right. It's arbitrary and capricious.
Some raw converters allow you to tweak the raw black points. Many don't or only allow it to be done indirectly. Regardless, that's not what I was getting at in the sentence to which you responded. My point is that they don't label or graph the RGB values as negative values. At least as a default setting it makes sense to me for RawDigger to also behave similarly. Perhaps JACS has a specific use case that I'm not thinking about and can elaborate. Otherwise, this is really just a minor side-issue to the main topic of the thread, on which we both appear to agree.
 
Can you point me to a good source for the proposition that ACR applies sharpening and noise reduction even when all associated sliders are zero'd? Thanks.
I meant those as examples of subliminal processing by raw converters, but here you go, Nick, from almost exactly 10 years ago. Maybe I should revisit it with current versions of ACR and my upgraded understanding:

https://www.strollswithmydog.com/raw-converter-sharpening-with-sliders-at-zero/

I don't have one for noise reduction but I remember coming across several examples of that roughly at the same time.
So, I see you're tooting your own horn again, eh? :-) As always, I enjoyed reading your thoughtful and insightful article. A question and a request:
  • I looked quickly but didn't see any discussion here on DPR related to your article. Did I miss it or it just didn't prompt a discussion? Are you aware of any threads here on the PST forum that go into depth on Adobe's sharpening and noise behavior at zero'd out settings?
Some time ago, I was interested in the "front ends" of the raw processors Adobe LR 4.x (which at that time shared a common core with ACR x) as well as DxO Optics Pro 6-8.x. It appeared that the particular front-end configurations [including proprietary utilization of various de-mosaicing algorithm(s) and their variable parameter setting(s), resulting in image "noise/sharpness" effects, according to source Iliah Borg] of these applications were not only a function of the manufacturer's database of camera make/model/setting data; they were also (possibly) affected by image-data characteristics dynamically assessed in ("pre-")processing.

While I recall discussing this stuff on older threads to some extent, I'm finding when using Google Site Searches these days that DPR has been steadily deleting tons of older threads.
 
but do not truncate (unfortunately, RawDigger does). That might produce negative numbers (but in the highlights, highly unlikely) but so what, the computation of the st.dev. remains unchanged, and the mean would be positive unless you are really deep in the darks over a small patch.
Negative numbers must not be assigned to unsigned integers! Does RawDigger do that?
Unfortunately, not. When you measure the mean and the st.dev. over small patches with black point compensation activated, it truncates the values without any need of doing that.
As noted in the thread, black point comp can be turned off. You can also manually set it to whatever value you want, including the lowest pre-corrected value if that floats your boat.
I know that it can be turned off. I do it all the time but then the preview is washed out and it is not clear on what exactly I am clicking.
But what's your use case? What is it in the displayed image that you're "clicking" and why? For me, setting black level to the lowest real (pre-corrected) DN in the image has never been too much to "wash out" the image. A typical use case for me is to set a selection patch and move it to the location in the image I want to analyze. Maybe I'll also display the corresponding selection histogram. Then, I can always toggle on/off (or adjust) the black level setting for that patch. Of course, toggling off the black level setting completely is likely to wash out the image, but so what? I've already identified the area to be analyzed. If I want to move to another area, I just toggle the black point setting back on. Rinse and repeat...
The ideal behavior with a black point subtraction, IMO, would be to display a black point corrected rendering with the black point subtracted in the calculations and in the values shown without shying away from negative values. Negative numbers are numbers, too!
YMMV, but my ideal default behavior is showing the black point corrected values as they will be used elsewhere (e.g., RawDigger's histograms and external raw converters).
No. Some programs will cut off valid data, and others will do it right. It's arbitrary and capricious.
Some raw converters allow you to tweak the raw black points. Many don't or only allow it to be done indirectly. Regardless, that's not what I was getting at in the sentence to which you responded. My point is that they don't label or graph the RGB values as negative values.

At least as a default setting it makes sense to me for RawDigger to also behave similarly. Perhaps JACS has a specific use case that I'm not thinking about and can elaborate. Otherwise, this is really just a minor side-issue to the main topic of the thread, on which we both appear to agree.
Well, whatever raw viewers should or shouldn't show, they truncate negative numbers, and destroy data in doing so.

The OP's pictures clearly exhibit this problem. He messed up, so his pictures tell you virtually nothing about noise, which is what they were supposed to show.
 
but do not truncate (unfortunately, RawDigger does). That might produce negative numbers (but in the highlights, highly unlikely) but so what, the computation of the st.dev. remains unchanged, and the mean would be positive unless you are really deep in the darks over a small patch.
Negative numbers must not be assigned to unsigned integers! Does RawDigger do that?
Unfortunately, not. When you measure the mean and the st.dev. over small patches with black point compensation activated, it truncates the values without any need of doing that.
As noted in the thread, black point comp can be turned off. You can also manually set it to whatever value you want, including the lowest pre-corrected value if that floats your boat.
I know that it can be turned off. I do it all the time but then the preview is washed out and it is not clear on what exactly I am clicking.
But what's your use case? What is it in the displayed image that you're "clicking" and why? For me, setting black level to the lowest real (pre-corrected) DN in the image has never been too much to "wash out" the image. A typical use case for me is to set a selection patch and move it to the location in the image I want to analyze. Maybe I'll also display the corresponding selection histogram. Then, I can always toggle on/off (or adjust) the black level setting for that patch. Of course, toggling off the black level setting completely is likely to wash out the image, but so what? I've already identified the area to be analyzed. If I want to move to another area, I just toggle the black point setting back on. Rinse and repeat...
The ideal behavior with a black point subtraction, IMO, would be to display a black point corrected rendering with the black point subtracted in the calculations and in the values shown without shying away from negative values. Negative numbers are numbers, too!
YMMV, but my ideal default behavior is showing the black point corrected values as they will be used elsewhere (e.g., RawDigger's histograms and external raw converters).
No. Some programs will cut off valid data, and others will do it right. It's arbitrary and capricious.
Some raw converters allow you to tweak the raw black points. Many don't or only allow it to be done indirectly. Regardless, that's not what I was getting at in the sentence to which you responded. My point is that they don't label or graph the RGB values as negative values.

At least as a default setting it makes sense to me for RawDigger to also behave similarly. Perhaps JACS has a specific use case that I'm not thinking about and can elaborate. Otherwise, this is really just a minor side-issue to the main topic of the thread, on which we both appear to agree.
Well, whatever raw viewers should or shouldn't show, they truncate negative numbers, and destroy data in doing so.
You seem to be arguing that a black level should never be set simply because it invariably "truncates negative numbers". Sorry, but that's going too far and is counter to what every raw converter (and raw viewer) does, at least by default. Done optimally, the only "data" eliminated is noise. Getting rid of that is a very good thing indeed because otherwise you're going to be mightily struggling with useless and unwanted "data" (i.e., read noise) and all that this implies, including deep shadow lightening and color casts. Jim rightly points out that hardwiring the truncation into the raw data itself is questionable and unnecessary. It's better done later in the raw workflow. He's NOT arguing that it should never be done.

If there's a valid reason to set a black point (and there most certainly is, as every camera maker has recognized), then there's also a valid reason for "raw viewers" such as RawDigger to allow the user to model and measure what happens when a black point is set. I think that RawDigger does a fine job of permitting the user to accept the default setting, or not, and to visualize and measure the default results and any applied changes.
The OP's pictures clearly exhibit this problem. He messed up, so his pictures tell you virtually nothing about noise, which is what they were supposed to show.
I agree with John Sheehy that the OP's patches tell us something related to sensor noise. However, what it tells us is highly dependent on factors other than just the read noise generated by the tested cameras.
 
You seem to be arguing that a black level should never be set simply because it invariably "truncates negative numbers". Sorry, but that's going too far and is counter to what every raw converter (and raw viewer) does, at least by default. Done optimally, the only "data" eliminated is noise. Getting rid of that is a very good thing indeed because otherwise you're going to be mightily struggling with useless and unwanted "data" (i.e., read noise) and all that this implies, including deep shadow lightening and color casts. Jim rightly points out that hardwiring the truncation into the raw data itself is questionable and unnecessary. It's better done later in the raw workflow. He's NOT arguing that it should never be done.
Right. If you're going to end up with an unsigned integer, you're going to eventually need to truncate negative values. However, when I'm writing code, I keep the data in floating point as long as I can, usually converting to unsigned integer just before writing the file.
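A minimal sketch of that ordering (assuming NumPy; the function name and sample values are illustrative, not Jim's actual code). All of the black subtraction and arithmetic stays in float, and the clip to unsigned integer happens exactly once, at write-out:

```python
import numpy as np

def finish_to_uint16(img, white_level=65535.0):
    """Final step only: clip to the representable range and quantize."""
    return np.round(np.clip(img, 0.0, white_level)).astype(np.uint16)

raw = np.array([510.0, 511.0, 512.0, 513.0, 514.0])
working = raw - 512.0            # float working data: [-2, -1, 0, 1, 2], mean 0
out = finish_to_uint16(working)  # [0, 0, 0, 1, 2], mean 0.6: the bias appears only here
```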
If there's a valid reason to set a black point (and there most certainly is, as every camera maker has recognized), then there's also a valid reason for "raw viewers" such as RawDigger to allow the user to model and measure what happens when a black point is set. I think that RawDigger does a fine job of permitting the user to accept the default setting, or not, and to visualize and measure the default results and any applied changes.
I too, think that RawDigger offers all the necessary options.
The OP's pictures clearly exhibit this problem. He messed up, so his pictures tell you virtually nothing about noise, which is what they were supposed to show.
I agree with John Sheehy that the OP's patches tell us something related to sensor noise. However, what it tells us is highly dependent on factors other than just the read noise generated by the tested cameras.
Yup.
 
Well, whatever raw viewers should or shouldn't show, they truncate negative numbers, and destroy data in doing so.
You seem to be arguing that a black level should never be set simply because it invariably "truncates negative numbers".
No I'm not. I'm saying that you'd better not truncate negative numbers if you want to determine the noise.
Sorry, but that's going too far and is counter to what every raw converter (and raw viewer) does, at least by default. Done optimally, the only "data" eliminated is noise. Getting rid of that is a very good thing indeed because otherwise you're going to be mightily struggling with useless and unwanted "data" (i.e., read noise) and all that this implies, including deep shadow lightening and color casts. Jim rightly points out that hardwiring the truncation into the raw data itself is questionable and unnecessary. It's better done later in the raw workflow. He's NOT arguing that it should never be done.

If there's a valid reason to set a black point (and there most certainly is, as every camera maker has recognized), then there's also a valid reason for "raw viewers" such as RawDigger to allow the user to model and measure what happens when a black point is set. I think that RawDigger does a fine job of permitting the user to accept the default setting, or not, and to visualize and measure the default results and any applied changes.
The OP's pictures clearly exhibit this problem. He messed up, so his pictures tell you virtually nothing about noise, which is what they were supposed to show.
I agree with John Sheehy that the OP's patches tell us something related to sensor noise. However, what it tells us is highly dependent on factors other than just the read noise generated by the tested cameras.
I think the OP's images are hopelessly compromised by uncontrolled variables. They show the color cast that you will get with a certain camera and raw developer, with the settings that he used. They don't tell you much else.

If you're saying that the images need to be truncated at the black level (0 photons) for a visual comparison of the read noise, then I think I agree with that -- as long as the histograms are all Gaussian. But the OP has not even attempted that, and he also has other uncontrolled variables.

ADDED LATER: It's important to realize that if you subtract the mean black level and then truncate at zero, you are in effect cutting the histogram in half, thereby reducing the apparent noise. Measurements made with low-level light will not have the benefit of this apparent noise reduction. That is why one must not measure read noise with truncated negative values.

I think the OP should just familiarize himself with Bill Claff's data. If he wants to go further, then he should figure out how to set the black point correctly.
 
Can you point me to a good source for the proposition that ACR applies sharpening and noise reduction even when all associated sliders are zero'd? Thanks.
I meant those as examples of subliminal processing by raw converters, but here you go, Nick, from almost exactly 10 years ago. Maybe I should revisit it with current versions of ACR and my upgraded understanding:

https://www.strollswithmydog.com/raw-converter-sharpening-with-sliders-at-zero/

I don't have one for noise reduction but I remember coming across several examples of that roughly at the same time.

Jack
So, I see you're tooting your own horn again, eh? :-) As always, I enjoyed reading your thoughtful and insightful article. A question and a request:
  • I looked quickly but didn't see any discussion here on DPR related to your article. Did I miss it or it just didn't prompt a discussion? Are you aware of any threads here on the PST forum that go into depth on Adobe's sharpening and noise behavior at zero'd out settings?
  • Any chance you could dig up and supply the original dcraw rendering you used in your analysis (preferably TIFF but JPEG would be fine as well)? I'd like to compare it to what I'm seeing in ACR. Obviously, I can already access the original raw here on DPR, but I don't have dcraw. The closest I can come to it is RawTherapee, but I'm not sure how close that would really be to your dcraw baseline rendering.
Thanks!
Hi Nick, I am afraid any related files are long gone, and if Detail Man, Grand Master of historical references for the forum's proceedings, hasn't come up with anything, there is no recourse. I believe RawTherapee has AHD, VNG and LMMSE demosaicing algorithms similar to dcraw's.

I will not have access to my tools for the next couple of weeks. If I have time when I return I may give this exercise another go. It would be interesting to see how ACR/LR has evolved over the last decade or so. I also have a more nuanced understanding of what those results may mean.

Jack
 
Yes, indeed. To the OP: I think you are confusing image brightness with noise.

You need the raw data for each color without black level correction. The read noise will be the square root of the numerical values for each color. Subtracting the black level should not be done, because negative numbers will not be shown, and the values may be truncated erroneously at zero.
The st.dev. is the square root of the photon count, not of the numerical RAW values, which are merely proportional to the photon count. For example, a RAW value of 10,000 might correspond to, say, 30,000 photons; sqrt(30,000) is around 173 photons, which translates back to a noise of around 58 in RAW units. Not quite sqrt(10,000) = 100.
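In code, with the numbers from that example (the 30,000-photon count and the implied gain of 1/3 DN per photon are of course just illustrative):

```python
import math

raw_dn = 10_000             # RAW value in DN
photons = 30_000            # assumed photon count behind that DN
gain = raw_dn / photons     # 1/3 DN per photon

noise_photons = math.sqrt(photons)   # Poisson shot noise: ~173 photons
noise_dn = gain * noise_photons      # ~58 DN back in RAW units
print(noise_dn, math.sqrt(raw_dn))   # ~58 vs. the naive sqrt(10,000) = 100
```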
You don't need to explain. It was just a braino. On the other hand, these photos were taken with a lens cap, so the photon count is 0, and the OP is measuring read noise.
Also, you must subtract the black point in the computations
I disagree. You need to subtract the black level for imaging, but not for measuring noise.
There seems to be a rush to subtract black, in general, in raw conversion. There is really no need to do it, though. We could have a standard of leaving negative values in image files, for more linear results near black. Displays and color-matrix conversions could just clip them to black, but the negative numbers would maintain mean linearity through any resampling that the image would face.
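The mean-linearity point is easy to see by downsampling a synthetic read-noise-only "black" patch with and without clipping it first (a sketch assuming NumPy; the sigma of 2 DN is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
patch = rng.normal(0.0, 2.0, size=(400, 400))  # black-subtracted read noise, true mean 0

def downsample_2x2(a):
    """Average each 2x2 block (a simple box resample)."""
    return a.reshape(a.shape[0] // 2, 2, a.shape[1] // 2, 2).mean(axis=(1, 3))

print(downsample_2x2(patch).mean())                      # ~0.0: mean linearity preserved
print(downsample_2x2(np.clip(patch, 0.0, None)).mean())  # ~0.8: black lifted by ~0.4 * sigma
```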
but do not truncate (unfortunately, RawDigger does). That might produce negative numbers (but in the highlights, highly unlikely) but so what, the computation of the st.dev. remains unchanged, and the mean would be positive unless you are really deep in the darks over a small patch.
Negative numbers must not be assigned to unsigned integers! Does RawDigger do that?
If the black point is 512, then:

514 becomes 2
513 becomes 1
512 becomes 0
511 becomes 0
510 becomes 0
etcetera

Some manufacturers clip the raw data to mean black, and RawDigger is incapable of showing the real sigmas and means for those cameras in a black frame. If you create a synthetic Gaussian distribution with a sigma of 1.0 and then clip it at its center, the sigma drops to 0.584 (given enough precision and values for a well-drawn Gaussian histogram, clipped at the middle/peak value). So, if you have a black-frame sigma from a camera that black-clips, the real underlying sigma of "black" is going to be about 1/0.584, or 1.71x, what RawDigger or any similar software says, and the mean will move from black to black plus 0.4x the sigma. Many cameras have junk data in the near-blacks, with only two or three values representing 98% of the black-frame values, so you can't even rely on the 1.71x or 0.4x-sigma factors for such cameras, nor for any that clip blacks away from the actual center (I believe the Nikon D3 was like that, IIRC, giving well over 50% "zeros" at base ISO; at least the one copy I found black frames from, back when the camera was introduced, did). Just clipping one standard deviation too high turns a sigma of 1.0 into 0.26, implying a read-noise floor 2 stops lower than what is really there. For such cameras, you really need to deduce the underlying analog black-frame noise from the SNR vs. signal curves over non-clipped ranges.
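Those factors are easy to verify with a quick Monte Carlo simulation (a sketch assuming NumPy):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(0.0, 1.0, 10_000_000)  # synthetic black-frame noise, true sigma = 1.0

center_clipped = np.maximum(x, 0.0)   # raw data clipped at mean black
print(center_clipped.std())           # ~0.584, so real sigma is ~1/0.584 = 1.71x reported
print(center_clipped.mean())          # ~0.40, i.e. the mean rises by ~0.4 sigma

high_clipped = np.maximum(x, 1.0) - 1.0  # clipped one standard deviation too high
print(high_clipped.std())                # ~0.26: about 2 stops of read noise hidden
```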
 
You seem to be arguing that a black level should never be set simply because it invariably "truncates negative numbers". Sorry, but that's going too far and is counter to what every raw converter (and raw viewer) does, at least by default. Done optimally, the only "data" eliminated is noise.
Not really. Truncating the negative numbers changes the mean of the signal. Now, over a large enough uniform patch, one might be able to reverse that effect computationally, but the nonlinear distortion of the mean near the bottom, together with the quantization error, pretty much destroys those values.
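For what it's worth, "reversing that effect computationally" would amount to fitting a censored Gaussian to the truncated patch, e.g. by maximum likelihood (a sketch assuming NumPy/SciPy and ideal float data); as the sentence above notes, quantization and non-Gaussian junk near black make this fragile on real raw files:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(2)
data = np.maximum(rng.normal(0.5, 2.0, 100_000), 0.0)  # patch truncated at black

def neg_log_likelihood(params):
    mu, log_sigma = params
    sigma = np.exp(log_sigma)
    censored = data <= 0.0  # values piled up at zero by the truncation
    ll = censored.sum() * norm.logcdf(0.0, loc=mu, scale=sigma)
    ll += norm.logpdf(data[~censored], loc=mu, scale=sigma).sum()
    return -ll

fit = minimize(neg_log_likelihood, x0=[data.mean(), 0.0])
print(fit.x[0], np.exp(fit.x[1]))  # recovers roughly the true (0.5, 2.0)
```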
Getting rid of that is a very good thing indeed because otherwise you're going to be mightily struggling with useless and unwanted "data" (i.e., read noise) and all that this implies, including deep shadow lightening and color casts. Jim rightly points out that hardwiring the truncation into the raw data itself is questionable and unnecessary. It's better done later in the raw workflow. He's NOT arguing that it should never be done.
Right. If you're going to end up with an unsigned integer, you're going to eventually need to truncate negative values. However, when I'm writing code, I keep the data in floating point as long as I can, usually converting to unsigned integer just before writing the file.
In other words, you are using the negative values. When you interpolate for demosaicing or for downsizing, it is possible to get positive values even if there are some negative ones in the sum.
 
but do not truncate (unfortunately, RawDigger does). That might produce negative numbers (but in the highlights, highly unlikely) but so what, the computation of the st.dev. remains unchanged, and the mean would be positive unless you are really deep in the darks over a small patch.
Negative numbers must not be assigned to unsigned integers! Does RawDigger do that?
Unfortunately, not. When you measure the mean and the st.dev. over small patches with black point compensation activated, it truncates the values without any need of doing that.
As noted in the thread, black point comp can be turned off. You can also manually set it to whatever value you want, including the lowest pre-corrected value if that floats your boat.
I know that it can be turned off. I do it all the time but then the preview is washed out and it is not clear on what exactly I am clicking.
But what's your use case? What is it in the displayed image that you're "clicking" and why?
Say the dark gray squares - I want to estimate the mean and the st.dev., but I cannot see them clearly enough since the whole image appears washed out. This happens, for example, when I do not check the BP box: black should correspond to 512, say, but the rendering maps RAW value 0 to black, and values that low basically do not even exist.
For me, setting black level to the lowest real (pre-corrected) DN in the image has never been too much to "wash out" the image. A typical use case for me is to set a selection patch and move it to the location in the image I want to analyze. Maybe I'll also display the corresponding selection histogram. Then, I can always toggle on/off (or adjust) the black level setting for that patch. Of course, toggling off the black level setting completely is likely to wash out the image, but so what? I've already identified the area to be analyzed. If I want to move to another area, I just toggle the black point setting back on. Rinse and repeat...
Too many on and offs for me.
The ideal behavior with a black point subtraction, IMO, would be to display a black point corrected rendering with the black point subtracted in the calculations and in the values shown without shying away from negative values. Negative numbers are numbers, too!
YMMV, but my ideal default behavior is showing the black point corrected values as they will be used elsewhere (e.g., RawDigger's histograms and external raw converters). If I'm really interested in seeing where the negative values for the default black subtraction level appear in the image, I can always set the black level to the lowest pre-corrected DN and then set the underexposure warning manually to the difference between the lowest DN and the default black setting (or whatever other black level setting I'm investigating). Then I can just toggle on/off the Underexposure warning for all (or specified) channels to see where the "negative" pixels appear.
I do not want to see a binary info only, I want, after sampling, to see the actual mean and st.dev., not those of the truncated data.
The resulting st.dev., and to some extent even the mean, computed this way are misleading and do not reflect the actual characteristics of the noise.
"Misleading" is context-specific isn't it?
The context was the actual characteristics of the noise, like the first and the second moment of it (mean and variance/st.dev.).
 
You seem to be arguing that a black level should never be set simply because it invariably "truncates negative numbers". Sorry, but that's going too far and is counter to what every raw converter (and raw viewer) does, at least by default. Done optimally, the only "data" eliminated is noise.
Not really. Truncating the negative numbers changes the mean of the signal. Now, over a large enough uniform patch one might be able to reverse that effect computationally but the nonlinear distortion of the mean near the bottom together with the quantization error pretty much destroys those values.
Getting rid of that is a very good thing indeed because otherwise you're going to be mightily struggling with useless and unwanted "data" (i.e., read noise) and all that this implies, including deep shadow lightening and color casts. Jim rightly points out that hardwiring the truncation into the raw data itself is questionable and unnecessary. It's better done later in the raw workflow. He's NOT arguing that it should never be done.
Right. If you're going to end up with an unsigned integer, you're going to eventually need to truncate negative values. However, when I'm writing code, I keep the data in floating point as long as I can, usually converting to unsigned integer just before writing the file.
In other words, you are using the negative values.
Indeed.
When you interpolate for demosaicing or for downsizing, it is possible to get positive values even if there are some negative ones in the sum.
Check.
 
but do not truncate (unfortunately, RawDigger does). That might produce negative numbers (but in the highlights, highly unlikely) but so what, the computation of the st.dev. remains unchanged, and the mean would be positive unless you are really deep in the darks over a small patch.
Negative numbers must not be assigned to unsigned integers! Does RawDigger do that?
Unfortunately, not. When you measure the mean and the st.dev. over small patches with black point compensation activated, it truncates the values without any need of doing that.
As noted in the thread, black point comp can be turned off. You can also manually set it to whatever value you want, including the lowest pre-corrected value if that floats your boat.
I know that it can be turned off. I do it all the time but then the preview is washed out and it is not clear on what exactly I am clicking.
But what's your use case? What is it in the displayed image that you're "clicking" and why?
Say the dark gray squares - I want to estimate the mean and the st.dev., but I cannot see them clearly enough since the whole image appears washed out. This happens, for example, when I do not check the BP box: black should correspond to 512, say, but the rendering maps RAW value 0 to black, and values that low basically do not even exist.
You seem to be arguing that the RGB rendering display option should just ignore the effects of any black level adjustments. That's an unwanted and misleading restriction of useful display functionality, as far as I'm concerned - especially since it's so easy to satisfy your use case (getting the statistics for a selection patch) with only a few additional clicks (as explained and illustrated below).
For me, setting black level to the lowest real (pre-corrected) DN in the image has never been too much to "wash out" the image. A typical use case for me is to set a selection patch and move it to the location in the image I want to analyze. Maybe I'll also display the corresponding selection histogram. Then, I can always toggle on/off (or adjust) the black level setting for that patch. Of course, toggling off the black level setting completely is likely to wash out the image, but so what? I've already identified the area to be analyzed. If I want to move to another area, I just toggle the black point setting back on. Rinse and repeat...
Too many on and offs for me.
It's literally three mouse clicks (assuming your default setting is to subtract black point):
  • Set your selection patch and locate it in the desired image location. (This doesn't count as a click since you'd still have to do this even if negative values were preserved.)
  • Click 1: Click on the Black Level setting (lower left corner of the screen)
  • Click 2: In the preferences window that pops open, uncheck the Subtract Black checkbox.
  • Click 3: Click the Apply button
We're not exactly talking carpal tunnel syndrome inducing effort here...
The ideal behavior with a black point subtraction, IMO, would be to display a black point corrected rendering with the black point subtracted in the calculations and in the values shown without shying away from negative values. Negative numbers are numbers, too!
YMMV, but my ideal default behavior is showing the black point corrected values as they will be used elsewhere (e.g., RawDigger's histograms and external raw converters). If I'm really interested in seeing where the negative values for the default black subtraction level appear in the image, I can always set the black level to the lowest pre-corrected DN and then set the underexposure warning manually to the difference between the lowest DN and the default black setting (or whatever other black level setting I'm investigating). Then I can just toggle on/off the Underexposure warning for all (or specified) channels to see where the "negative" pixels appear.
I do not want to see a binary info only,
The only thing that's "binary" is the on/off display of underexposed pixels (i.e., the "negative" pixels, if the Manual Per Channel Underexposure Detection is appropriately set and used). The UnExp (underexposure) checkbox only affects the display. The avg. and st.dev measurements aren't affected by it. See below.
I want, after sampling, to see the actual mean and st.dev., not those of the truncated data.
See below.

[ATTACH alt="The DPR +6 Exposure Latitude shot for one of the OP's tested cameras. Note that the selection patch has been placed in one of the deepest shadowed areas of the severely underexposed mage, so we're measuring mostly read noise. "Auto" (default) black level of 512 is applied. Consequently, the undesired truncation of "negative" values is also applied."]3652053[/ATTACH]
The DPR +6 Exposure Latitude shot for one of the OP's tested cameras. Note that the selection patch has been placed in one of the deepest shadowed areas of the severely underexposed image, so we're measuring mostly read noise. "Auto" (default) black level of 512 is applied. Consequently, the undesired truncation of "negative" values is also applied.

[ATTACH alt="The 3-click strategy I described above has been applied, so now there's no black level subtraction and no truncation affecting the st.dev calculation. However, the preview is "washed out" and it's somewhat difficult to see where the selection patch is located relative to items in the image, per your complaint. Note that I've gone ahead and entered (but not toggled on yet) the Min values from the selection patch into the corresponding Per Channel black level boxes."]3652054[/ATTACH]
The 3-click strategy I described above has been applied, so now there's no black level subtraction and no truncation affecting the st.dev calculation. However, the preview is "washed out" and it's somewhat difficult to see where the selection patch is located relative to items in the image, per your complaint. Note that I've gone ahead and entered (but not toggled on yet) the Min values from the selection patch into the corresponding Per Channel black level boxes.

[ATTACH alt="The Per Channel black subtraction shown in the prior screenshot has now been applied. This means the "negative" values are not being truncated, hence the st.dev for the selection patch is the same as when no black subtraction is applied. Note that the image is far less "washed out" and different parts of the image are easily distinguishable. Note that the Underexposure warning is checkmarked on, which enables visualization of the "negative" DNs when appropriate values are added into the Manual Per Channel Underexposure Detection preferences. Toggling the UnExp checkbox on/off does not affect the Min, Max, Avg or sigma values."]3652055[/ATTACH]
The Per Channel black subtraction shown in the prior screenshot has now been applied. This means the "negative" values are not being truncated, hence the st.dev for the selection patch is the same as when no black subtraction is applied. Note that the image is far less "washed out" and different parts of the image are easily distinguishable. Note that the Underexposure warning is checkmarked on, which enables visualization of the "negative" DNs when appropriate values are added into the Manual Per Channel Underexposure Detection preferences. Toggling the UnExp checkbox on/off does not affect the Min, Max, Avg or sigma values.

RawDigger's UI/UX may not be the most elegant or easiest to use, but it gets the job done.
 

Can you point me to a good source for the proposition that ACR applies sharpening and noise reduction even when all associated sliders are zero'd? Thanks.
I meant those as examples of subliminal processing by raw converters, but here you go, Nick, from almost exactly 10 years ago. Maybe I should revisit it with current versions of ACR and my upgraded understanding:

https://www.strollswithmydog.com/raw-converter-sharpening-with-sliders-at-zero/

I don't have one for noise reduction but I remember coming across several examples of that roughly at the same time.

Jack
So, I see you're tooting your own horn again, eh? :-) As always, I enjoyed reading your thoughtful and insightful article. A question and a request:
  • I looked quickly but didn't see any discussion here on DPR related to your article. Did I miss it or it just didn't prompt a discussion? Are you aware of any threads here on the PST forum that go into depth on Adobe's sharpening and noise behavior at zero'd out settings?
  • Any chance you could dig up and supply the original dcraw rendering you used in your analysis (preferably TIFF but JPEG would be fine as well)? I'd like to compare it to what I'm seeing in ACR. Obviously, I can already access the original raw here on DPR, but I don't have dcraw. The closest I can come to it is RawTherapee, but I'm not sure how close that would really be to your dcraw baseline rendering.
Thanks!
Hi Nick, I am afraid any related files are long gone
I figured it was a long shot.
and if Detail Man, Grand Master of historical references for the forum's proceedings, hasn't come up with anything, there is no recourse. I believe RawTherapee has AHD, VNG and LMMSE demosaicing algorithms similar to dcraw's.
Yes, it does and I understand that it's built on the dcraw libraries, but I wanted to confirm that what I'm generating from RawTherapee is as clean as the dcraw rendering you used for your baseline measurements.
I will not have access to my tools for the next couple of weeks. If I have time when I return I may give this exercise another go. It would be interesting to see how ACR/LR has evolved over the last decade or so. I also have a more nuanced understanding of what those results may mean.
I can promise you'll have at least one reader of your updated "stroll" through this topic!
 
[ATTACH alt="The DPR +6 Exposure Latitude shot for one of the OP's tested cameras. Note that the selection patch has been placed in one of the deepest shadowed areas of the severely underexposed mage, so we're measuring mostly read noise. "Auto" (default) black level of 512 is applied. Consequently, the undesired truncation of "negative" values is also applied."]3652053[/ATTACH]
The DPR +6 Exposure Latitude shot for one of the OP's tested cameras. Note that the selection patch has been placed in one of the deepest shadowed areas of the severely underexposed image, so we're measuring mostly read noise. "Auto" (default) black level of 512 is applied. Consequently, the undesired truncation of "negative" values is also applied.

...

[ATTACH alt="The Per Channel black subtraction shown in the prior screenshot has now been applied. This means the "negative" values are not being truncated, hence the st.dev for the selection patch is the same as when no black subtraction is applied..."]3652055[/ATTACH]
The Per Channel black subtraction shown in the prior screenshot has now been applied. This means the "negative" values are not being truncated, hence the st.dev for the selection patch is the same as when no black subtraction is applied...
On the contrary, it appears to me that sigma for the 100x300 area of the G layer is 1.16 with truncated data and 1.58 with nontruncated data.
 
[ATTACH alt="The DPR +6 Exposure Latitude shot for one of the OP's tested cameras. Note that the selection patch has been placed in one of the deepest shadowed areas of the severely underexposed mage, so we're measuring mostly read noise. "Auto" (default) black level of 512 is applied. Consequently, the undesired truncation of "negative" values is also applied."]3652053[/ATTACH]
The DPR +6 Exposure Latitude shot for one of the OP's tested cameras. Note that the selection patch has been placed in one of the deepest shadowed areas of the severely underexposed image, so we're measuring mostly read noise. "Auto" (default) black level of 512 is applied. Consequently, the undesired truncation of "negative" values is also applied.

...

[ATTACH alt="The Per Channel black subtraction shown in the prior screenshot has now been applied. This means the "negative" values are not being truncated, hence the st.dev for the selection patch is the same as when no black subtraction is applied..."]3652055[/ATTACH]
The Per Channel black subtraction shown in the prior screenshot has now been applied. This means the "negative" values are not being truncated, hence the st.dev for the selection patch is the same as when no black subtraction is applied...
On the contrary, it appears to me that sigma for the 100x300 area of the G layer is 1.16 with truncated data and 1.58 with nontruncated data.
Please re-read the text that I've bolded and underlined. We're all in agreement (and have been from the get-go) that the sigma based on truncated data is generally not what we are looking for. The issue JACS and I were discussing was how easily (or not) RawDigger could be used to calculate the non-truncated sigma and mean values of a user-defined and positioned patch. You've removed from your reply the second screenshot, which is the relevant one to compare to the Per Channel black subtraction screenshot above.
 
[ATTACH alt="The DPR +6 Exposure Latitude shot for one of the OP's tested cameras. Note that the selection patch has been placed in one of the deepest shadowed areas of the severely underexposed mage, so we're measuring mostly read noise. "Auto" (default) black level of 512 is applied. Consequently, the undesired truncation of "negative" values is also applied."]3652053[/ATTACH]
The DPR +6 Exposure Latitude shot for one of the OP's tested cameras. Note that the selection patch has been placed in one of the deepest shadowed areas of the severely underexposed image, so we're measuring mostly read noise. "Auto" (default) black level of 512 is applied. Consequently, the undesired truncation of "negative" values is also applied.

...

[ATTACH alt="The Per Channel black subtraction shown in the prior screenshot has now been applied. This means the "negative" values are not being truncated, hence the st.dev for the selection patch is the same as when no black subtraction is applied..."]3652055[/ATTACH]
The Per Channel black subtraction shown in the prior screenshot has now been applied. This means the "negative" values are not being truncated, hence the st.dev for the selection patch is the same as when no black subtraction is applied...
On the contrary, it appears to me that sigma for the 100x300 area of the G layer is 1.16 with truncated data and 1.58 with nontruncated data.
Please re-read the text that I've bolded and underlined. We're all in agreement (and have been from the get-go) that the sigma based on truncated data is generally not what we are looking for.
Ah, yes, mea culpa. Our lives are so busy that we sometimes miss what is in plain sight. I could have sworn you were arguing the opposite somewhere, but I guess not.
The issue JACS and I were discussing was how easily (or not) RawDigger could be used to calculate the non-truncated sigma and mean values of a user-defined and positioned patch.
I understood that, and I think your point was well taken.
You've removed from your reply the second screenshot, which is the relevant one to compare to the Per Channel black subtraction screenshot above.
It appears that the screenshots speak for themselves. The first one I quoted has an auto black level of 512 subtracted, with rather low standard deviations, while the second one has a black level of (496, 493, 498, 501), with much higher standard deviations. Except for that, I think we're in agreement.
 
Say the dark gray squares - I want to estimate the mean and the st.dev., but I cannot see them clearly enough since the whole image appears washed out. This happens, for example, when I do not check the BP box: black should correspond to 512, say, but the rendering maps RAW value 0 to black, and values that low basically do not even exist.
You seem to be arguing that the RGB rendering display option should just ignore the effects of any black level adjustments.
Not that. Subtracting the BP does reflect the black level adjustments. Truncating negative values in the tables on the top serves no useful purpose and hides data.
For me, setting black level to the lowest real (pre-corrected) DN in the image has never been too much to "wash out" the image. A typical use case for me is to set a selection patch and move it to the location in the image I want to analyze. Maybe I'll also display the corresponding selection histogram. Then, I can always toggle on/off (or adjust) the black level setting for that patch. Of course, toggling off the black level setting completely is likely to wash out the image, but so what? I've already identified the area to be analyzed. If I want to move to another area, I just toggle the black point setting back on. Rinse and repeat...
Too many on and offs for me.
It's literally three mouse clicks (assuming your default setting is to subtract black point):
  • Set your selection patch and locate it in the desired image location. (This doesn't count as a click since you'd still have to do this even if negative values were preserved.)
  • Click 1: Click on the Black Level setting (lower left corner of the screen)
  • Click 2: In the preferences window that pops open, uncheck the Subtract Black checkbox.
  • Click 3: Click the Apply button
We're not exactly talking carpal tunnel syndrome inducing effort here...
Unless you have to do it over and over again, which is my case.
 
While I recall discussing this stuff on older threads to some extent, I'm finding when using Google Site Searches these days that DPR has been steadily deleting tons of older threads.
I don't think we have. The only deletions I'm aware of are those where the original poster has requested that their account be deleted.

With regard to ACR, the Camera Raw team told us that the NR and sharpening defaults differ between models, which is why our studio scene processing methodology minimizes both settings and then applies a standard amount of sharpening.

Richard - DPReview.com
 
While I recall discussing this stuff on older threads to some extent, I'm finding when using Google Site Searches these days that DPR has been steadily deleting tons of older threads.
I don't think we have. The only deletions I'm aware of are those where the original poster has requested that their account be deleted.

With regard to ACR, the Camera Raw team told us that the NR and sharpening defaults differ between models, which is why our studio scene processing methodology minimizes both settings and then applies a standard amount of sharpening.

Richard - DPReview.com
Thanks, RIchard.
 
While I recall discussing this stuff on older threads to some extent, I'm finding when using Google Site Searches these days that DPR has been steadily deleting tons of older threads.
I don't think we have. The only deletions I'm aware of are those where the original poster has requested that their account be deleted.
Your impression does not appear to comport with the empirical evidence. DPR data itself reveals that to date a total of 672 (= 17450 - 16778) of my posts have been deleted. Less than around one-third of those deletions occurred in earlier years - a few from what were clearly targeted actions, the rest from wholesale deletions of entire DPR forum threads.

The (seemingly evident) number of more recent disappearances of my posts is around double that. Those numbers sure sound like a lot of alleged (fairly unusual) DPR "self-deportations".

I have used Google's site search function for years ("site:URL + SearchTerms"), which is hands-down light years ahead of anything that DPR's home-grown search widget has ever revealed. Around the time that DPR was acquired by present ownership (and since), my similar searches for known keywords in previous posts now yield almost nothing at all.

(Perhaps) there exist some differences in how much interest Google takes in storing information pointing to prior DPR forum posts - or perhaps it is something related to DPR?
With regard to ACR, the Camera Raw team told us that the NR and sharpening defaults differ between models, which is why our studio scene processing methodology minimizes both settings and then applies a standard amount of sharpening.
My understanding at the time (posted, though I cannot locate it now) regarding LR 4.x (which Eric Chan stated shared a common core with some numerical incarnation of ACR) was that it - similar to DxO Optics Pro (6.x, 7.x) - selected between various de-mosaicing algorithms based upon camera/lens make and model, and possibly also upon dynamically analyzed scene elements and image characteristics. Such practices would seem to negate the validity of the above approach.
 