Film suffers
reciprocity failure in low light. A classic way to reduce this effect is to pre-fog the film, or to fog it simultaneously with the exposure.
Electronic imaging sensors supposedly do NOT suffer reciprocity failure. However, I've been playing with
simultaneous fogging exposures as a method of compressing tonal range, and I am finding
a noticeable improvement in the detail recoverable in the darkest shadow areas...
So, the question is:
why? What mechanisms could be behind this, or is this likely just a quirk of my experimental set-up?
I found the opposite when I tried this about 15 years ago, with a Tiffen filter that created intentional, well-controlled global veiling flare. All I saw was more photon noise in the shadows, once the color cast of the flare was subtracted.

That was with a camera that had good, old-fashioned, unmolested RAW data with a positive black offset. I kept the sub-black levels in place after subtracting the blackpoint, using signed numbers, so all interpolation was done with the sub-black values still present, keeping averages linear. After much thought, I realized that there was actually less practical DR available with the filter: effective headroom was lower, and there was more noise in the shadows from the extra photon noise.
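To illustrate what I mean by keeping the sub-black values (a minimal sketch, not my original code; the black level and noise figures here are made up):

```python
import numpy as np

BLACK_LEVEL = 512  # hypothetical black offset in DN

def subtract_black_signed(raw):
    # Signed subtraction: sub-black pixels go negative and are KEPT,
    # so the mean of any near-black region stays linear.
    return raw.astype(np.int32) - BLACK_LEVEL

def subtract_black_clipped(raw):
    # Clipping at black throws away the negative noise swings and
    # biases the mean of near-black regions upward.
    return np.clip(raw.astype(np.int32) - BLACK_LEVEL, 0, None)

# A uniform patch with a true signal of 1DN and 3DN of read noise:
rng = np.random.default_rng(0)
patch = rng.normal(BLACK_LEVEL + 1.0, 3.0, 100_000).round().astype(np.uint16)

print(subtract_black_signed(patch).mean())   # ~1.0 -- linear
print(subtract_black_clipped(patch).mean())  # ~1.8 -- biased upward
```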
The only things that come to mind are RAW cooking and/or sloppy math for near-blacks in the conversion code. My most recent camera, the Canon R5, has heavy filtering of near-blacks at base ISO, strong enough that the standard deviation of the original pixels in one RAW color channel of a black frame is 0.7DN (and it doesn't even
look like noise when viewed). But when I bin 10x10, the standard deviation of the sums is 25DN; for spatially white noise, summing 100 pixels multiplies the standard deviation by 10, so the real per-pixel standard deviation was probably about 2.5DN before cooking (unless there is enough extra spatially-correlated noise at Nyquist/10 to significantly raise it from a lower number like 2.2DN).
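If you want to check your own camera for this, the binning test looks roughly like this (a sketch, assuming a single-channel, bias-subtracted black frame; the block averaging is just a stand-in for whatever filtering the camera actually applies):

```python
import numpy as np

def noise_estimates(black_frame, n=10):
    # Per-pixel std of a bias-subtracted, single-channel black frame,
    # plus the std implied by n x n binning.  For spatially white
    # noise the sum of n*n pixels has n times the per-pixel std, so
    # the two numbers should agree; low-pass "cooking" makes the
    # per-pixel figure read low while binning recovers the original.
    h, w = black_frame.shape
    crop = black_frame[:h - h % n, :w - w % n].astype(np.float64)
    sums = crop.reshape(h // n, n, w // n, n).sum(axis=(1, 3))
    return crop.std(), sums.std() / n

# Toy demo: white noise (sigma = 2.5DN) averaged in 5x5 blocks,
# standing in for the camera's near-black filtering.
rng = np.random.default_rng(1)
raw = rng.normal(0.0, 2.5, (1000, 1000))
cooked = (raw.reshape(200, 5, 200, 5).mean(axis=(1, 3))
             .repeat(5, axis=0).repeat(5, axis=1))

print(noise_estimates(raw))     # (~2.5, ~2.5): white noise, both agree
print(noise_estimates(cooked))  # (~0.5, ~2.5): cooking hides most of it
```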
So, your RAW data could be cooked near black, and fogging keeps the near-blacks out of the oven. Also, if the camera clips the RAW data at what it thinks is black, that is not a good thing for very small signals: the clipping discards not just the negative read-noise swings but part of the signal itself, because the means are no longer linear near black. I can't recall exactly how I calculated it, but years ago I determined that the SNR loss from black clipping approaches 40% for the very weakest signals, even when the data is clipped at exactly the right level; and if the assumed black level is too high, even by a couple of DN, the smallest signals start to disappear rapidly.
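A toy Monte Carlo shows both effects (not my original calculation, which was an SNR figure, and the 3DN read noise is just an assumed number, but the flavor is the same): clipping at the true black attenuates the differential response of tiny signals, and clipping a couple DN too high nearly erases them.

```python
import numpy as np

rng = np.random.default_rng(2)
SIGMA = 3.0      # assumed read noise in DN
N = 2_000_000    # pixels per simulated uniform patch

def patch_mean(signal, clip_at=None):
    # Mean of a uniform patch after black subtraction.  clip_at=None
    # keeps signed data; clip_at=0 clips at the true black; clip_at=2
    # models an assumed black level that is 2DN too high.
    x = signal + rng.normal(0.0, SIGMA, N)
    if clip_at is None:
        return x.mean()
    return np.clip(x - clip_at, 0.0, None).mean()

for clip_at in (None, 0.0, 2.0):
    # Differential response: patch mean minus the black-patch mean.
    # Signed data returns the true signal; clipping attenuates it.
    black = patch_mean(0.0, clip_at)
    resp = [patch_mean(s, clip_at) - black for s in (0.25, 0.5, 1.0, 2.0)]
    print(clip_at, [round(r, 3) for r in resp])

# None: ~[0.25, 0.50, 1.00, 2.00]  (linear)
# 0.0:  ~[0.13, 0.27, 0.57, 1.26]  (roughly halved near zero)
# 2.0:  ~[0.07, 0.14, 0.31, 0.74]  (smallest signals nearly gone)
```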
Converters can also show no respect for near-black signals, even with cameras that give unmolested RAWs with positive black offsets.