How does RAW black point subtraction work?

NateW

I’ve been trying to improve my understanding of how raw photo processing works, but I think I’m getting hung up on black point subtraction. RawDigger tells me that the black point for my A7 IV is 512. I’m assuming that this is 512 out of 16383 (0.03125), and not after scaling to fill the full 16-bit range (0.0078125), correct?

So, if 512 is 9 bits of data, how is the camera encoding 14 stops of linear dynamic range in only the upper 5 bits of data? Is the data encoding only linear after black point subtraction is performed?

And if the camera black point is 9 log2 DNs (512/16383), why does the photon transfer curve show that noise maxes out around 6 log2 DNs (64/16383)?

Sorry in advance, I know these are elementary questions. Just trying to locate the misunderstanding they might stem from. Thanks!
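A minimal numeric sketch of the values in the question (assuming a 14-bit scale and the 512 black level from the post; the offset is additive, not multiplicative, so subtracting it costs only a sliver of range at the top rather than 9 bits):

```python
import math

FULL_SCALE = 16383  # maximum 14-bit ADU
BLACK = 512         # black level RawDigger reports for the A7 IV

# the black level as a fraction of the 14-bit range (the 0.03125 in the post)
print(BLACK / (FULL_SCALE + 1))  # 0.03125

# the pedestal is additive: after subtraction the usable linear range
# is 0 .. (16383 - 512) = 15871, which is still almost 14 stops
print(math.log2(FULL_SCALE - BLACK))  # ~13.95
```

So the encoding is linear both before and after subtraction; the 512 DN offset shifts every sample upward rather than reserving the low 9 bits.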
 
if it’s not referenced to human vision, it ain’t color.
Never hurts to remind :)
Maxwell had found a complication that certain colours needed one primary to be added to the test colour before a mixture of the remaining two primaries could match the mixture of test colour and added primary. This difficulty is handled in the relationship by allowing the coefficient a or b or c associated with the added primary to be negative. The colour box allowed for negative coefficients but the Maxwell colour triangle doesn't, which is why one has to say that 'most' colours can be represented by the Maxwell colour triangle. ... In the CIE system all real colours have positive coordinates. The 'colour triangle' now becomes a distorted shape with rounded sides of the CIE chromaticity diagram but the concept behind it is just the one Maxwell laid down in the late 1850s.

Maxwell's Color Triangle - Source: https://homepages.abdn.ac.uk/j.s.reid/pages/Maxwell/Legacy/Maxtriangle.gif

Seems like Maxwell's model related to perceptual impressions surrounding "human vision".
 
Right. But camera CFA spectra are not linear transformations of CIE 1931 XYZ, so cameras don’t see color; for raw files, color is assigned during the development process.

--
 
what’s the purpose of adding some number to the ADC count?

one possible reason just occurred to me: calibration.
Yes, the part that is added after ADC is for calibration.
But if that’s necessary, why not handle it in the metadata?
Reveals too much about a camera.
You are probably right, but I have never once thought of looking at the bad sectors table on a new disk and sending the disk back because it was too long.
You'd most likely see an empty table because the defects mapped during manufacturing were typically not exposed in the reported table. The SMART thresholds are pretty interesting too - most are set so high you'd get a SMART warning at around the same time the drive would start failing almost all media requests.
 
Putting my own pedantic hat on, if it’s not a precise descriptor of human visual perception (which it clearly seems not), then it should not earn the status of unquestionable "objective reality". You may recall my previous (2018) raising of issues surrounding the development and nature of the so-called "Luther-Ives" condition, also referencing published evidence that CIE 1931 XYZ does not in fact adequately provide for certain (perceptual) metameric errors:

https://www.dpreview.com/forums/post/61825078
 
Touché.

The first issue that comes to mind is the 1931 dataset and the fact that the observers as a group seem to have been blue-deficient.

The next issue is that the visual field affects the results, as do other test conditions.

Then there are variations amongst observers deemed color normal.

Adaptation makes a difference.

On top of that, there are probably nonlinearities we don’t understand.

But, at this point, trying to change the way color management software works is kinda like trying to get everyone to throw out their QWERTY keyboards. Or totally abandon IPv4.

--
 
No self-respecting conversion is going to ignore the blackpoint when it is higher than zero. You can't get correct color when you leave the bias in. However, it is more accurate to maintain original readout values of less than 512 as negative numbers, rather than clipping to 0 (everything 512 or less becomes zero) if you want minimal artifacts for very weak signals, and only clip the data at the last stage to create a display image. All interpolation and resizing is best done when negative values are maintained. This clips less signal and maintains linearity.
I'm working on trying to reverse-engineer Adobe's blackpoint handling. So far I've already established it's not doing simple subtraction (link). By comparison, a cursory check of RawTherapee indicates it does do simple subtraction - the same experiment performed in that link yields identical output for RawTherapee (i.e., modifying a blackframe raw so that the ADU values below the black level are clamped to the black level yields the same output as the original blackframe raw with the ADU values unmodified).
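The clamping experiment described above can be sketched on synthetic data (hypothetical ADU values, not an actual raw file): if a converter performs simple subtraction with a clip at zero, pre-clamping sub-black ADUs up to the black level cannot change its output, which is why identical output is evidence of simple subtraction.

```python
import numpy as np

BLACK = 512
rng = np.random.default_rng(0)
# hypothetical black-frame ADUs: read noise straddling the black level
raw = rng.normal(BLACK, 10, size=10_000).round().astype(np.int32)

def simple_subtract(adu, black=BLACK):
    # simple black subtraction with a clip at zero
    return np.maximum(adu - black, 0)

# the experiment: clamp sub-black ADUs up to the black level, then compare
clamped = np.maximum(raw, BLACK)
assert np.array_equal(simple_subtract(raw), simple_subtract(clamped))
```

max(v, B) − B clipped at zero is identical to max(v − B, 0), so any divergence between the two files implies the converter is using the sub-black values in some way.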
Yes, I am not surprised by that. It must be some kind of attempt at uniformity to conform with the cameras that output clipped-black raws, or some psychological disapproval of negative light intensities. If this does not yield off-colors near black, then perhaps they have come up with some universal voodoo to undo the non-linearity, or the expression of color is muted at those levels.

If they can undo the non-linearity of means due to clipped blacks, then they have solved that problem, but I still think some SNR is lost with very weak signals, in the absence of veiling flare due to "underexposure" when you black-clip before any interpolation. I guess I care more about these extreme corner cases than most converter developers.
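The SNR point can be illustrated with a quick simulation (all numbers assumed for illustration: a 2 DN patch under 10 DN of read noise, 512 black level). Keeping signed values preserves the mean of a very weak signal; clipping at black biases it upward and breaks linearity:

```python
import numpy as np

BLACK, SIGNAL, SIGMA = 512, 2.0, 10.0  # assumed values for illustration
rng = np.random.default_rng(42)
# simulate a very weak uniform patch read out many times
adu = rng.normal(BLACK + SIGNAL, SIGMA, size=200_000)

signed = adu - BLACK                    # keep negative excursions
clipped = np.maximum(adu - BLACK, 0.0)  # clip sub-black values to zero

print(signed.mean())   # ~2.0: linear, unbiased
print(clipped.mean())  # ~5.1: clipping inflates the mean of weak signals
```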
 
In my limited experience coding noise reduction algorithms, allowing negative starting and intermediate values makes things easier.
 
I just posted a comparison of non-clamped vs clamped for up to -11EV:

 
But, at this point, trying to change the way color management software works is kinda like trying to get everyone to throw out their QWERTY keyboards. Or totally abandon IPv4.
Your last point above (a practical one, not a technical one) is arguably true enough. But should proponents of the ostensible accuracy of the CIE 1931 Standard Observer color matching functions as "objective perceptual reality" blithely ignore their imperfections (noted and addressed, for instance, by William A. Thornton)? The veracity of the various assumptions made about primary and derivative color spaces depends entirely upon that (seemingly questionable) level of "human perceptual accuracy".
 
Your last point above (a practical one, not a technical one) is arguably true enough. But should proponents of the ostensible accuracy of the CIE 1931 Standard Observer color matching functions as "objective perceptual reality" blithely ignore their imperfections (noted and addressed, for instance, by William A. Thornton)? The veracity of the various assumptions made about primary and derivative color spaces depends entirely upon that (seemingly questionable) level of "human perceptual accuracy".
I won’t argue with you there. When I was doing this stuff for a living I thought that by now we’d be way beyond the 1931 standard.

It does work fairly well in practice, though.

--
https://blog.kasson.com
 
I’ve been trying to improve my understanding of how raw photo processing works, but I think I’m getting hung up on black point subtraction.
Even in the absence of any light stimulus to the sensor, the signal going into the A/D is non-zero. This is due to signal offsets at various stages (intentional and not). And these offsets can vary with the signal characteristics.
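A small simulation of why such a pedestal is useful (noise figure assumed for illustration): without an offset the ADC's hard floor at zero clips the negative half of the read noise and biases the recorded mean, while an offset preserves the whole distribution for later black subtraction.

```python
import numpy as np

rng = np.random.default_rng(1)
# zero-light pixel: read noise centered on the true zero of the chain
noise = rng.normal(0.0, 3.0, size=100_000)

# ADC with no pedestal: negative excursions clip to 0, biasing the mean
no_offset = np.clip(np.round(noise), 0, 16383)

# ADC with a 512 DN pedestal: the distribution survives intact, and
# black subtraction afterwards recovers an (on average) zero signal
offset = np.clip(np.round(noise) + 512, 0, 16383) - 512

print(no_offset.mean())  # ~1.2 DN of clipping bias
print(offset.mean())     # ~0.0
```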

Interesting to speculate on whether the same applies to the signal outputs from our eyes and how the brain handles them --- does it do a black-point subtraction after conversion to the image that we perceive in our brain, or before? Is it adaptive to the scene or fixed? If we take evolution as a general guide, the "intelligence" is likely to be distributed between the neurological sensing, processing and imaging functions. Some on this forum may know the answers.
 