Reciprocity failure and fogging

ProfHankD

Film suffers reciprocity failure in low light. A classic way to reduce this effect involves pre-fogging, or fogging simultaneous with exposure, of the film.

Electronic imaging sensors supposedly do NOT suffer reciprocity failure. However, I've been playing with simultaneous fogging exposures as a method to compress tonal range, and I am finding a noticeable amount of improvement in the detail recoverable in the darkest shadow areas....

So, the question is why? What mechanisms would be behind this, or is this likely just a quirk of my experimental set-up?
 
I guess pre-fogging means pre-exposing with uniform light? Then you raise the level above the read noise and the other unpleasant artifacts at the bottom of the sensor's range, and then subtract.

The problem is that your supposedly uniform pre-exposure is not quite uniform and has its own shot noise. That noise (not the noise-to-signal ratio) could be very high relative to the weak signal. Perhaps you have to pre-fog "just a little".
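
To quantify that point, here is a rough numpy sketch of a toy sensor (Poisson photon noise plus Gaussian read noise; all numbers are made-up assumptions) showing that, absent clipping or other nonlinearities, a fog pedestal that is later subtracted only adds shot noise to a weak signal:

import numpy as np

# Toy model: weak signal, optional fog pedestal, Gaussian read noise.
# All values are in electrons and purely illustrative.
rng = np.random.default_rng(0)
n = 200_000
signal = 4.0        # very weak scene signal
fog = 20.0          # fog pedestal to be subtracted later
read_noise = 3.0    # read noise RMS

plain  = rng.poisson(signal, n)       + rng.normal(0, read_noise, n)
fogged = rng.poisson(signal + fog, n) + rng.normal(0, read_noise, n) - fog

for name, x in (("no fog", plain), ("fog, then subtract", fogged)):
    print(f"{name:>18}: mean {x.mean():5.2f} e-, std {x.std():4.2f} e-, "
          f"SNR {x.mean() / x.std():.2f}")

So if the fogging genuinely helps on real hardware, the benefit has to come from something this idealized model leaves out (quantization, clipping, or raw cooking near black).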
 
I guess pre-fogging means pre-exposing with uniform light? Then you raise the level above the read noise and the other unpleasant artifacts at the bottom of the sensor's range, and then subtract.
For the sensor, I actually prefer to use an adapted lens with a custom-made (3D-printed) adapter that allows precisely controlled fogging. One version is essentially a translucent adapter that allows controlled ambient-light fogging; the other embeds a diffuser and a controlled light source that allows at least an 8EV control range. Generally, I have used the same camera exposure settings both with and without the fogging, but allowing the shutter speed to be slightly faster with the fogging also seems to work.
The problem is that your supposedly uniform pre-exposure is not quite uniform and has its own shot noise. That noise (not the noise-to-signal ratio) could be very high relative to the weak signal. Perhaps you have to pre-fog "just a little".
You are correct that the fogging light is not perfectly even (although it is very even until near the frame edges) and it certainly does have its own shot noise. Yes, the fogging seems to work best at relatively low levels compared to the scene exposure.

However, my question is really why does this fogging help?

My experiments definitely show an improvement in shadow detail (using linear raw captures), but it's not clear to me what the mechanism is that would make adding fogging light to a digital capture result in cleaner shadow detail. I am effectively boosting the black level, but I would have expected sensor noise to be independent of that -- because sensors are not supposed to suffer from reciprocity failure.
 
It would be good to see some examples. My guess as to why it works: it lifts the signal above the bad zone of the sensor.

Dolby used to do something similar, though not quite the same, with tape recorders: amplify (rather than add to) high frequencies above a certain level and then bring them down in reproduction. That puts them above the hiss. This was my understanding when I was a high-school student and there was no internet...
 
My experiments definitely show an improvement in shadow detail (using linear raw captures), but it's not clear to me what the mechanism is that would make adding fogging light to a digital capture result in cleaner shadow detail. I am effectively boosting the black level, but I would have expected sensor noise to be independent of that -- because sensors are not supposed to suffer from reciprocity failure.
So the baseline pipeline would be:

scene luminance -> sensel -> ADC -> raw preprocessing -> raw storage -> raw developer

While the alternate pipeline (ideally) is:

(scene luminance + constant) -> sensel -> ADC -> raw preprocessing -> raw storage -> raw developer -> subtract constant

If this is more or less accurate, and you find that injecting moderate amounts of "constant light" into the scene ahead of the sensor increases shadow accuracy, that points toward non-linear effects somewhere, does it not? I assume that you control the raw developer, so you are not "fooled" by the application of color profiles, gamma, etc.

Could the signal at the shadow end be small enough so as not to be self-dithered by shot noise? If so, could adding a constant (with its own shot noise) prior to the ADC, then subtracting it in software, possibly dither the near-blacks enough to bring out more detail? I did not think this through, it is just an idea.
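
A quick toy example of that dithering idea (purely hypothetical numbers, with the ADC modeled as simple rounding): a 0.3 LSB signal disappears entirely when quantized directly, but adding a noisy pedestal before quantization and subtracting it afterwards lets averaging recover it.

import numpy as np

# A sub-LSB signal, quantized with and without a noisy pedestal ("fog").
rng = np.random.default_rng(1)
n = 100_000
signal = 0.3          # 0.3 LSB, below one ADC step
pedestal = 10.0       # fog pedestal in LSB
ped_noise = 2.0       # noise on the pedestal, in LSB

plain    = np.round(np.full(n, signal))                              # no dither
dithered = np.round(signal + rng.normal(pedestal, ped_noise, n)) - pedestal

print("no dither, mean over pixels:", plain.mean())              # 0.0 -- lost
print("dithered,  mean over pixels:", dithered.mean().round(2))  # ~0.3 -- recovered

Of course, this only helps the mean over many pixels (or many frames); per pixel, the pedestal's noise is pure cost, which again fits the "pre-fog just a little" observation.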

Are camera ADCs (sufficiently) perfectly linear? Or are there some time-integrator granularity limits at either end?

Does your camera clip black levels at nominal "black" (like Nikon? used to do), or does it allow for zero-mean noise about its nominal black point?
 
My experiments definitely show an improvement in shadow detail (using linear raw captures), but it's not clear to me what the mechanism is that would make adding fogging light to a digital capture result in cleaner shadow detail. I am effectively boosting the black level, but I would have expected sensor noise to be independent of that -- because sensors are not supposed to suffer from reciprocity failure.
Does your camera clip black levels at nominal "black" (like Nikon? used to do), or does it allow for zero-mean noise about its nominal black point?
This ADC clipping would be my first choice to investigate. It could be effective clipping from firmware/software in the camera. In the dark with short exposure what sort of signal levels/histogram do you get?
 
My experiments definitely show an improvement in shadow detail (using linear raw captures), but it's not clear to me what the mechanism is that would make adding fogging light to a digital capture result in cleaner shadow detail. I am effectively boosting the black level, but I would have expected sensor noise to be independent of that -- because sensors are not supposed to suffer from reciprocity failure.
Does your camera clip black levels at nominal "black" (like Nikon? used to do), or does it allow for zero-mean noise about its nominal black point?
This ADC clipping would be my first choice to investigate. It could be effective clipping from firmware/software in the camera. In the dark with short exposure what sort of signal levels/histogram do you get?
Eric, I was hoping you might know what's going on here....

I've been doing this primarily with a few Sony cameras, and am still tweaking my experimental set-up, but I am seeing a similar modest improvement in raws with all the cameras I've tried thus far. The cause could be as simple as clipping due to sloppy setting of the black point in raw images, but I really wonder if there are nonlinear effects for very dim exposures using CMOS sensors that really do behave a bit like film reciprocity failure.... Incidentally, I also haven't yet looked at how this behaves as I dramatically vary the exposure time; my tests so far are based on 5-second exposures.

Anyway, with my current testbed, the complete scene alignment between shots isn't perfect, and the full-frame histograms thus wouldn't perfectly match even for two hopefully-identical foggings. I've really been looking only at specific portions of the scene where the fogging exposure is very even, so that shouldn't matter. Even with that very imperfect alignment of the full scene, and fogging that is somewhat darker near the frame edges, the RawTherapee histograms still give a decent impression of what's happening. For example, here are the full-frame histograms from an original and a heavily fogged shot with a Sony A7 (these are from compressed ARWs, but I'm only looking at portions of the scene where the compression should be lossless):

[Image: RawTherapee histograms, original (left) and with quite strong fogging exposure (right)]

Those are the RGB interpolated histograms; the raw histograms don't show much difference beyond a slight smoothing of the waveform and a shift to the right. BTW, the test scene is a Kodak reflection density guide in a lightbox, pretty dramatically underexposed, so the unfogged shots have raw histograms hitting the left edge.

I'll be making much more precise samplings soon maintaining perfect pixel-level scene alignment and directly using either dcraw or libraw for extracting the raw data statistics from only the relevant regions.
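
As a sketch of that kind of region measurement, something like the following (using the libraw-based rawpy Python package; the file name, patch coordinates, and the choice to leave values signed are placeholders, not the actual test setup) would pull per-CFA-channel statistics straight from the raw mosaic:

import numpy as np
import rawpy

# Per-CFA-channel mean/std for a fixed patch of the raw mosaic.
# File name and patch coordinates are placeholders.
with rawpy.imread("fogged_shot.ARW") as raw:
    mosaic = raw.raw_image_visible.astype(np.int32)
    cfa = raw.raw_colors_visible                # CFA index: 0=R, 1=G, 2=B, 3=G2
    black = raw.black_level_per_channel         # per-channel black points

    patch = (slice(1000, 1200), slice(1500, 1700))   # hypothetical even region
    for c, name in enumerate(("R", "G", "B", "G2")):
        vals = mosaic[patch][cfa[patch] == c] - black[c]   # signed, unclipped
        print(f"{name:>2}: mean {vals.mean():7.2f} DN, std {vals.std():6.2f} DN")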
 
Chemical-based photo materials can be "hypersensitized". This is accomplished by a pre-exposure to light. The idea: photosensitive silver salt crystals have a threshold sensitivity, meaning they will not be rendered developable until they receive a specific minimal amount of exposing energy.

The sensitivity of these crystals can be elevated by pre-exposing them to a low level of radiant energy. Hypersensitization can also be accomplished by treating the film with chemicals and/or heat. The idea is to artificially bring them to a state just below their threshold of sensitivity.

Once accomplished, they require a lowered amount of exposure to render them developable. Because exposing energy accumulates during the time the shutter is open, hypersensitized materials are said to be "faster" (elevated ISO).

I reason that digital applications could also gain some sensitivity from a similar pre-flash. Likely this procedure improves shadow detail, as these darker regions are below or at the boundary of the threshold of recordability.
 
Do you have a good way of calibrating the entire response? One way to do this is to use a step wedge that gives a known range of exposures in a single image acquisition. If the step wedge has a series of reflectances R1, R2,..., then the exposures are qR1, qR2,... If you calibrate the sensor system well at intermediate exposures, then you can determine relative values for R1, R2,... Then you can just vary the light or exposure time to get variable values of q. Assuming you can control the conditions well so the spectrum doesn't vary and the relative illumination stays constant over the image plane, it is quite easy to get a detailed and precise calibration at very low or very high exposures. You can get a large number of data points with a modest number of exposures. I have had very good success with this to calibrate x-ray image receptors (as they are called in medical radiology), and in general I found nonlinearity at the origin.

I would be curious to see your results, because I have never seen this done for photographic sensors. I guess your problem might be either nonlinearity at the origin or incorrect black-level subtraction.
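
One way that joint calibration could be set up numerically (a sketch under assumptions: mid-range linearity, black-subtracted patch means in a placeholder CSV with rows = frames and columns = patches) is to fit log S ≈ log q_j + log R_i by linear least squares, pinning the first frame's q to 1 to fix the overall scale:

import numpy as np

# Placeholder input: black-subtracted mean values S[j, i] for frame j, patch i,
# restricted to patches believed to be in the sensor's linear mid-range.
S = np.loadtxt("patch_means.csv", delimiter=",")
n_frames, n_patches = S.shape

# Design matrix for log S[j, i] = log q_j + log R_i (sensor gain folded into R).
A = np.zeros((n_frames * n_patches, n_frames + n_patches))
for j in range(n_frames):
    for i in range(n_patches):
        A[j * n_patches + i, j] = 1.0              # selects log q_j
        A[j * n_patches + i, n_frames + i] = 1.0   # selects log R_i
b = np.log(S).ravel()

# Fix the scale by pinning q_0 = 1 (drop its column before solving).
coef, *_ = np.linalg.lstsq(A[:, 1:], b, rcond=None)
log_q = np.concatenate(([0.0], coef[:n_frames - 1]))
log_R = coef[n_frames - 1:]
print("relative exposures q_j:   ", np.round(np.exp(log_q), 4))
print("relative reflectances R_i:", np.round(np.exp(log_R), 4))

Once the R_i and q_j are pinned down from the well-behaved region, the measured values at the lowest exposures can be compared against q_j * R_i to expose any departure from linearity near the origin.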
 
Do you have a good way of calibrating the entire response? One way to do this is to use a step wedge that gives a known range of exposures in a single image acquisition. If the step wedge has a series of reflectances R1, R2,..., then the exposures are qR1, qR2,... If you calibrate the sensor system well at intermediate exposures, then you can determine relative values for R1, R2,... Then you can just vary the light or exposure time to get variable values of q. Assuming you can control the conditions well so the spectrum doesn't vary and the relative illumination stays constant over the image plane, it is quite easy to get a detailed and precise calibration at very low or very high exposures. You can get a large number of data points with a modest number of exposures. I have had very good success with this to calibrate x-ray image receptors (as they are called in medical radiology), and in general I found nonlinearity at the origin.

I would be curious to see your results, because I have never seen this done for photographic sensors. I guess your problem might be either nonlinearity at the origin or incorrect black-level subtraction.
I'm very glad to hear you say that you saw "nonlinearity at the origin" because I found quite a few things, including one published, peer-reviewed paper, that say this does NOT happen.

I think I'll continue to investigate this and, hopefully, will have a real useful/publishable answer within a month or two....
 
Do you have a good way of calibrating the entire response? One way to do this is to use a step wedge that gives a known range of exposures in a single image acquisition. If the step wedge has a series of reflectances R1, R2,..., then the exposures are qR1, qR2,... If you calibrate the sensor system well at intermediate exposures, then you can determine relative values for R1, R2,... Then you can just vary the light or exposure time to get variable values of q. Assuming you can control the conditions well so the spectrum doesn't vary and the relative illumination stays constant over the image plane, it is quite easy to get a detailed and precise calibration at very low or very high exposures. You can get a large number of data points with a modest number of exposures. I have had very good success with this to calibrate x-ray image receptors (as they are called in medical radiology), and in general I found nonlinearity at the origin.

I would be curious to see your results, because I have never seen this done for photographic sensors. I guess your problem might be either nonlinearity at the origin or incorrect black-level subtraction.
I'm very glad to hear you say that you saw "nonlinearity at the origin" because I found quite a few things, including one published, peer-reviewed paper, that say this does NOT happen.

I think I'll continue to investigate this and, hopefully, will have a real useful/publishable answer within a month or two....
I have no information on this, but there are so many cameras that have color shifts in the black region that I have to suspect that there are unhandled problems.
 
Do you have a good way of calibrating the entire response? One way to do this is to use a step wedge that gives a known range of exposures in a single image acquisition. If the step wedge has a series of reflectances R1, R2,..., then the exposures are qR1, qR2,... If you calibrate the sensor system well at intermediate exposures, then you can determine relative values for R1, R2,... Then you can just vary the light or exposure time to get variable values of q. Assuming you can control the conditions well so the spectrum doesn't vary and the relative illumination stays constant over the image plane, it is quite easy to get a detailed and precise calibration at very low or very high exposures. You can get a large number of data points with a modest number of exposures. I have had very good success with this to calibrate x-ray image receptors (as they are called in medical radiology), and in general I found nonlinearity at the origin.

I would be curious to see your results, because I have never seen this done for photographic sensors. I guess your problem might be either nonlinearity at the origin or incorrect black-level subtraction.
I'm very glad to hear you say that you saw "nonlinearity at the origin" because I found quite a few things, including one published, peer-reviewed paper, that say this does NOT happen.

I think I'll continue to investigate this and, hopefully, will have a real useful/publishable answer within a month or two....
I did some analysis years ago. I used the studio scene - the strip of gray squares on the top, under 6-7 different exposures. It was not really important to know how they change from square to square. I took the brightest exposure as belonging to the linear range by default, and was looking at nonlinear response near the bottom. I posted some graphs here indicating possible non-linear behavior, different for each channel (after proper "WB").

The problem there is that the results depend a lot on the choice of the black point, which I do not assume to be known - I try to estimate it. A linear/nonlinear function is still linear/nonlinear after you add the black point, etc., but the nonlinear part is expected mostly near black. Then it is good to use a log-log scale. And here is where the problem lies - a small change of the black point can make a linear response look nonlinear (near black; away from it there is no visible difference). Also, the data is a bit noisy in the first place.
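
That black-point sensitivity is easy to demonstrate with a toy model (all numbers assumed): a perfectly linear, noiseless response, plotted after subtracting a black estimate that is off by only a couple of DN, acquires an apparent log-log slope different from 1 near black.

import numpy as np

# Perfectly linear response, examined on log-log axes with a slightly
# wrong black estimate. All numbers are made up for illustration.
exposure = np.logspace(-3, 0, 60)      # relative exposure
b_true, gain = 512.0, 10_000.0         # true black offset (DN), DN per unit exposure
raw = b_true + gain * exposure         # ideal, noiseless sensor output

for b_est in (510.0, 512.0, 514.0):    # black estimate low, exact, high
    sig = raw - b_est
    # local log-log slope over the lowest decade; 1.00 means "looks linear"
    slope = np.gradient(np.log(sig[:20]), np.log(exposure[:20]))
    print(f"black estimate {b_est:.0f}: log-log slope near black ~ {slope.mean():.2f}")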

Here are the graphs. Again, I would not bet my life on them... The D810 probably clips the black noise, which skews everything.


[Graphs: response curves near black, D810 (top) and 5DSR (bottom)]
 
I did some analysis years ago. I used the studio scene - the strip of gray squares on the top, under 6-7 different exposures. It was not really important to know how they change from square to square. I took the brightest exposure as belonging to the linear range by default, and was looking at nonlinear response near the bottom. I posted some graphs here indicating possible non-linear behavior, different for each channel (after proper "WB").
The whole "where do you measure this in the image pipeline" issue is definitely significant.... There's even a fair bit of "cooking" done to many raws. As I discovered some years ago when I built KARWY , there's even rather more horrible stuff done to camera raws that get converted to DNGs by ADC.
The problem there is that the results depend a lot on the choice of the black point, which I do not assume to be known - I try to estimate it. A linear/nonlinear function is still linear/nonlinear after you add the black point, etc., but the nonlinear part is expected mostly near black. Then it is good to use a log-log scale. And here is where the problem lies - a small change of the black point can make a linear response look nonlinear (near black; away from it there is no visible difference). Also, the data is a bit noisy in the first place.

Here are the graphs. Again, I would not bet my life on them... The D810 probably clips the black noise, which skews everything.
Handling of the black point is very approximate in all the cameras I've played with. Generally, the black point is taken by averaging the values of pixels under a physical mask at the edges of the sensor, with several rows/columns reserved for that purpose. The good news is that this tends to correct for temperature, but the bad news is that it definitely does not correct for individual pixel differences and, even worse, basing it on an average means noise can make values go negative and get clipped. My gut feeling is that sloppy black points are responsible for most of the non-linear badness for individual pixels, which seems to be what you're suggesting here -- and that seems quite consistent with the multitude of dynamic range data and other relevant wisdom posted at https://www.photonstophotos.net/ .
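
As a tiny illustration of that clipping issue (with assumed numbers): if the masked pixels carry zero-mean read noise but anything below 0 DN has already been clipped, their average no longer lands on the true black level.

import numpy as np

# Zero-mean read noise on masked "black" pixels, averaged with and without
# clipping at 0 DN. Numbers are illustrative assumptions.
rng = np.random.default_rng(2)
read_noise = 3.0                                  # DN RMS
masked = rng.normal(0.0, read_noise, 100_000)     # true black = 0 DN

print(f"mean of unclipped masked pixels: {masked.mean():+.2f} DN")
print(f"mean after clipping at 0 DN:     {np.clip(masked, 0, None).mean():+.2f} DN")
# The clipped mean sits near read_noise / sqrt(2*pi), roughly 1.2 DN above true black.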

At some level, I really don't care what the reason is; I just want to know that the apparent benefit I'm seeing from fogging exposures isn't a quirk, but a repeatable implication of some relevant physics and/or algorithm. In other words, I want to know it's a reproducible effect. ;-) The departure from linearity being smaller at higher values is exactly that kind of property.

I just need to really get my test set-up more precise and make a lot of measurements. ;-)
 
...I used the studio scene - the strip of gray squares on the top under 6-7 different exposures. It was not really important to know how they change from square to square. I took the brightest exposure as belonging to the linear range by default, and was looking at nonlinear response near the bottom. I posted some graphs here indicating possible non-linear behavior, different for each channel (after proper "WB").
Yes, that's it exactly. Assuming linearity in the middle range is probably not a bad substitute for actual calibration. Multiple images with different exposures can be used to check consistency. With a reasonable model for the calibration equation one can do least squares fitting to determine the q values that I mentioned.
The problem there is that the results depend a lot on the choice of the black point, which I do not assume to be known - I try to estimate it. A linear/nonlinear function is still linear/nonlinear after you add the black point, etc., but the nonlinear part is expected mostly near black. Then it is good to use a log-log scale. And here is where the problem lies - a small change of the black point can make a linear response look nonlinear (near black; away from it there is no visible difference). Also, the data is a bit noisy in the first place.
To look for nonlinearity and diagnose quirks near the origin I think it's important to use a linear plot. One should treat measurements on the masked strip just like any other point. Ideally, one would take the values as recorded in the raw file, without ANY white point rebalancing or noise reduction. Whatever the camera reports in the raw file is part of the calibration, even if the camera has already altered the values.

It might be important to check histograms on each measurement. If the camera clips the noise, then the mean value might not be the appropriate measurement. If the camera processes the data in an inconsistent way, that might just be an irreproducible part of the problem.
 
ProfHankD, see my reply to J A C S. This forum tends to fork discussions in Threaded View.
 
Film suffers reciprocity failure in low light. A classic way to reduce this effect involves pre-fogging, or fogging simultaneous with exposure, of the film.

Electronic imaging sensors supposedly do NOT suffer reciprocity failure. However, I've been playing with simultaneous fogging exposures as a method to compress tonal range, and I am finding a noticeable amount of improvement in the detail recoverable in the darkest shadow areas....

So, the question is why? What mechanisms would be behind this, or is this likely just a quirk of my experimental set-up?
I found the opposite when I tried with a Tiffen Filter that created intentional, well-controlled global veiling flare about 15 years ago. All I saw was more photon noise in the shadows, once the color of the flare was subtracted. That was with a camera that had good, old-fashioned unmolested RAW data with a positive offset for black, where I kept the sub-black levels in place after subtracting the blackpoint with signed numbers, so all interpolation was done with sub-black values still there to keep averages linear. After much thought, I realized that there was actually less practical DR available with the filter, because effective headroom was lower, and there was more noise in the shadows (extra variation from the extra photon noise).

The only things that come to mind are RAW cooking and/or sloppy math for near-blacks in the conversion code. My most recent camera, the Canon R5, has heavy filtering of near blacks at base ISO, strong enough that the standard deviation of the original pixels in one RAW color channel in a black frame is 0.7DN (and it doesn't even look like noise, when viewed), but when I bin 10x10, I get 25DN, suggesting that the real standard deviation was probably about 2.5DN before cooking (unless there is enough extra spatially-correlated noise at Nyquist/10 to significantly raise it from a lower number like 2.2DN).
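
The 10x10 binning arithmetic checks out for spatially uncorrelated noise (assuming "bin" means summing 10x10 blocks), where the standard deviation of an NxN sum scales as N times the per-pixel sigma; a quick sanity check with assumed numbers:

import numpy as np

# Std of 10x10 binned sums of uncorrelated noise is 10x the per-pixel std,
# so a 25 DN binned std points back to roughly 2.5 DN per pixel.
rng = np.random.default_rng(3)
sigma = 2.5
frame = rng.normal(0.0, sigma, (1000, 1000))
bins = frame.reshape(100, 10, 100, 10).sum(axis=(1, 3))   # 10x10 block sums
print("per-pixel std :", round(float(frame.std()), 2))    # ~2.5
print("10x10 sum std :", round(float(bins.std()), 2))     # ~25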

So, your RAW data could be cooked near black, and fogging keeps the near-blacks out of the oven. Also, if the camera clips the RAW data at what it thinks is black, that is not a good thing for very small signals, as the means are not linear near black, and some signal is lost; not just negative read noise swings. I can't recall exactly how I calculated it, but years ago I determined that SNR loss approaches 40% for black clipping for the very weakest signals, if the data is clipped at exactly the right level, and if the assumed black level is too high, even by a couple DN, the smallest signals start to disappear rapidly.
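
A rough Monte Carlo of that clipping effect (illustrative numbers only): once negative noise excursions are clipped away, the mean recorded value is no longer linear in the true signal near black, and raising the assumed black level by a couple of DN makes the weakest signals collapse quickly.

import numpy as np

# Mean recorded value vs. true signal, with and without clipping at the
# (possibly misjudged) black level. All numbers are illustrative.
rng = np.random.default_rng(4)
n = 500_000
read_noise = 3.0                                    # DN RMS
signals = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 8.0])  # true signals in DN

for clip_at in (None, 0.0, 2.0):                    # no clip / clip at black / 2 DN high
    means = []
    for s in signals:
        x = s + rng.normal(0.0, read_noise, n)
        if clip_at is not None:
            x = np.clip(x, clip_at, None) - clip_at
        means.append(round(float(x.mean()), 2))
    label = "no clipping       " if clip_at is None else f"clip at black+{clip_at:.0f}DN"
    print(label, means)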

Converters can also show no respect for near-black signals, even with cameras that give unmolested RAWs with positive black offsets.
 
Film suffers reciprocity failure in low light. A classic way to reduce this effect involves pre-fogging, or fogging simultaneous with exposure, of the film.

Electronic imaging sensors supposedly do NOT suffer reciprocity failure. However, I've been playing with simultaneous fogging exposures as a method to compress tonal range, and I am finding a noticeable amount of improvement in the detail recoverable in the darkest shadow areas....

So, the question is why? What mechanisms would be behind this, or is this likely just a quirk of my experimental set-up?
I found the opposite when I tried with a Tiffen Filter that created intentional, well-controlled global veiling flare about 15 years ago. All I saw was more photon noise in the shadows, once the color of the flare was subtracted. That was with a camera that had good, old-fashioned unmolested RAW data with a positive offset for black, where I kept the sub-black levels in place after subtracting the blackpoint with signed numbers, so all interpolation was done with sub-black values still there to keep averages linear. After much thought, I realized that there was actually less practical DR available with the filter, because effective headroom was lower, and there was more noise in the shadows (extra variation from the extra photon noise).

The only things that come to mind are RAW cooking and/or sloppy math for near-blacks in the conversion code. My most recent camera, the Canon R5, has heavy filtering of near blacks at base ISO, strong enough that the standard deviation of the original pixels in one RAW color channel in a black frame is 0.7DN (and it doesn't even look like noise, when viewed), but when I bin 10x10, I get 25DN, suggesting that the real standard deviation was probably about 2.5DN before cooking (unless there is enough extra spatially-correlated noise at Nyquist/10 to significantly raise it from a lower number like 2.2DN).

So, your RAW data could be cooked near black, and fogging keeps the near-blacks out of the oven. Also, if the camera clips the RAW data at what it thinks is black, that is not a good thing for very small signals, as the means are not linear near black, and some signal is lost; not just negative read noise swings. I can't recall exactly how I calculated it, but years ago I determined that SNR loss approaches 40% for black clipping for the very weakest signals, if the data is clipped at exactly the right level, and if the assumed black level is too high, even by a couple DN, the smallest signals start to disappear rapidly.

Converters can also show no respect for near-black signals, even with cameras that give unmolested RAWs with positive black offsets.
I count this as yet another vote for bad handling of the black point as the cause. ;-)

I have to say, having any such effect be due to processing, rather than due to the sensor pixels, has always seemed more likely.... This also could explain why all the astro stuff, which more directly plays with high-quality raw sensors, doesn't talk about such things. Certainly, there is a huge collection of materials on reciprocity failure using film for astro purposes, while the electronic sensor materials can be summed up as "keep it cool."
 
I guess pre-fogging means pre-exposing with uniform light? Then you raise the level above the read noise and the other unpleasant artifacts at the bottom of the sensor's range, and then subtract.
For the sensor, I actually prefer to use an adapted lens with a custom-made (3D-printed) adapter that allows precisely controlled fogging. One version is essentially a translucent adapter that allows controlled ambient-light fogging; the other embeds a diffuser and a controlled light source that allows at least an 8EV control range. Generally, I have used the same camera exposure settings both with and without the fogging, but allowing the shutter speed to be slightly faster with the fogging also seems to work.
The problem is that your supposedly uniform pre-exposure is not quite uniform and has its own shot noise. That noise (not the noise-to-signal ratio) could be very high relative to the weak signal. Perhaps you have to pre-fog "just a little".
You are correct that the fogging light is not perfectly even (although it is very even until near the frame edges) and it certainly does have its own shot noise. Yes, the fogging seems to work best at relatively low levels compared to the scene exposure.

However, my question is really why does this fogging help?
Sloppy handling of near-blacks in the RAW files, and in the conversion math. Deep shadows get no respect. Any RAW file format that pre-clips blacks is already at a disadvantage, regardless of the converter. You can't regain the small signals that fall below black once they are clipped.

This is what I typically do when I do manual conversions (my "color science" won't win any prizes, though): I promote the RAW data to a higher level of precision, subtract black, without clipping. If I want to get crazy, I can determine black on a line-by-line basis for each of two color channels in it, and correct with that extra precision. Anyway, I leave the negative values there, using a signed number space, and do any interpolations including demosaicing while these numbers can still be negative, and any such operations pull local means closer to a true linear representation, with less negative excursion below black, before finally clipping for RGB display. I firmly believe even RGB-display-oriented image files could keep the negatives in there, so that if you further interpolate or resample, the negatives pull even closer to black and get clipped less and near-blacks remain more linear in the end. Best results are only possible, of course, if the RAW files maintain a positive black offset, and the near-blacks are not cooked.
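
A minimal numpy sketch of that ordering (promote, subtract black signed, interpolate while negatives are still present, clip last); the CFA handling and the 2x2 mean standing in for real demosaicing are simplifications, not how any particular converter does it:

import numpy as np

def develop(mosaic_u16, black_per_channel, cfa_index):
    """Toy linear 'development': signed black subtraction, then a 2x2 mean
    as a stand-in for demosaic/resample, clipping only at the very end."""
    data = mosaic_u16.astype(np.float32)              # promote precision
    for c, b in enumerate(black_per_channel):
        data[cfa_index == c] -= b                     # signed, NOT clipped
    h, w = data.shape
    # Interpolate/average while negatives are still present, so local means
    # near black stay linear...
    binned = data[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2).mean((1, 3))
    # ...and clip only for the final display-referred output.
    return np.clip(binned, 0.0, None)

Keeping the negatives through the averaging step is what keeps those local means unbiased; clipping first would pull every near-black average upward.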
 
To look for nonlinearity and diagnose quirks near the origin I think it's important to use a linear plot.
The problem with the linear plot is that it has to be as large as a wall in my house, and I need to be looking for non-linearity in a tiny portion of it. Imagine what I posted stretched exponentially diagonally in the NE direction: instead of -14 to 0, a scale of 1 to 2^14 ≈ 16k or so, and I am looking at values from 0 to 4 or 8, etc.
One should treat measurements on the masked strip just like any other point. Ideally, one would take the values as recorded in the raw file, without ANY white point rebalancing or noise reduction. Whatever the camera reports in the raw file is part of the calibration, even if the camera has already altered the values.
I put "WB" in quotation marks. Without that, the curves would be shifted up or down, left or right a bit. I "WB" B and R to have the same response to gray as G. It is for visualization purposes only.

NR... I do not do any additional NR, but since those are mean values over some patches, averaging is effectively there and should be there.
It might be important to check histograms on each measurement. If the camera clips the noise, then the mean value might not be the appropriate measurement.
Yes, I mentioned this. On the other hand, the RAW converter may care about the mean mostly, regardless of how it was obtained.
If the camera processes the data in an inconsistent way, that might just be an irreproducible part of the problem.
 
I guess pre-fogging means pre-exposing with uniform light? Then you raise the level above the read noise and the other unpleasant artifacts at the bottom of the sensor's range, and then subtract.
For the sensor, I actually prefer to use an adapted lens with a custom-made (3D-printed) adapter that allows precisely controlled fogging. One version is essentially a translucent adapter that allows controlled ambient-light fogging; the other embeds a diffuser and a controlled light source that allows at least an 8EV control range. Generally, I have used the same camera exposure settings both with and without the fogging, but allowing the shutter speed to be slightly faster with the fogging also seems to work.
The problem is that your supposedly uniform pre-exposure is not quite uniform and has its own shot noise. That noise (not the noise-to-signal ratio) could be very high relative to the weak signal. Perhaps you have to pre-fog "just a little".
You are correct that the fogging light is not perfectly even (although it is very even until near the frame edges) and it certainly does have its own shot noise. Yes, the fogging seems to work best at relatively low levels compared to the scene exposure.

However, my question is really why does this fogging help?
Sloppy handling of near-blacks in the RAW files, and in the conversion math. Deep shadows get no respect. Any RAW file format that pre-clips blacks is already at a disadvantage, regardless of the converter. You can't regain the small signals that fall below black once they are clipped.

This is what I typically do when I do manual conversions (my "color science" won't win any prizes, though): I promote the RAW data to a higher level of precision, subtract black, without clipping. If I want to get crazy, I can determine black on a line-by-line basis for each of two color channels in it, and correct with that extra precision. Anyway, I leave the negative values there, using a signed number space, and do any interpolations including demosaicing while these numbers can still be negative, and any such operations pull local means closer to a true linear representation, with less negative excursion below black, before finally clipping for RGB display. I firmly believe even RGB-display-oriented image files could keep the negatives in there, so that if you further interpolate or resample, the negatives pull even closer to black and get clipped less and near-blacks remain more linear in the end. Best results are only possible, of course, if the RAW files maintain a positive black offset, and the near-blacks are not cooked.
BTW, in optimization (where I am not an expert), this relates to optimization with constraints. The constraint is to keep the values non-negative in the output. You optimize a certain "cost functional", i.e., from the noisy RAW, you want to generate a nice JPEG which cannot be just a direct output of a conventional conversion. Now, I am speculating how this may work, and this depends on the cost functional anyway. When you get to the deep shadows, you may want to do more averaging (NR) and tonal range compression instead of clipping.
 
