Fred Dominic
Well-known member
Ok, so I can't say that I understood what you meant after reading the above, but I did a bit of background reading on LENR, and essentially it sounds like the camera itself is doing black-frame subtraction (sorry, I'm still quite a camera newbie).

"Because the very deepest modulations of the periodic banding only exist when the camera writes the RAW. Take shot 1 at 1 second, wait a little so the camera doesn't warm up, take shot 2 at 1 second, then enable long exposure noise reduction for..."

In any event, might I ask if you could better explain how you came to the conclusion that this is post-read and after the ADC?
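Just to check my own understanding of the black-frame part, here's a tiny numpy sketch of what I think LENR is doing in-camera: a second, shutter-closed exposure of the same length gets subtracted to remove the fixed-pattern (thermal / hot-pixel) signal. All the noise levels here are invented, and fixed_pattern is just my stand-in for whatever the sensor actually accumulates:

```python
import numpy as np

rng = np.random.default_rng(1)
H, W = 256, 256

# Invented fixed-pattern signal (hot pixels / dark current) that builds up over a long exposure.
fixed_pattern = rng.exponential(10.0, size=(H, W))

# The real exposure and the shutter-closed "black frame" of the same length;
# each frame gets its own independent read noise.
light_frame = 100.0 + fixed_pattern + rng.normal(0, 1, size=(H, W))
black_frame = fixed_pattern + rng.normal(0, 1, size=(H, W))

# What I understand LENR to be: subtract the black frame before writing the RAW.
lenr_result = light_frame - black_frame

print("noise before subtraction:", np.std(light_frame - 100.0))  # dominated by the fixed pattern
print("noise after  subtraction:", np.std(lenr_result - 100.0))  # only the two read-noise terms remain
```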
So let me then try to rephrase what you are saying (let me know if I've messed it up).
On the one hand, we've got an image x and a black frame b, and the camera itself somehow generates b (when LENR is turned on). The camera computes

x - b

and then applies some further processing (let's call this further processing P[]), so the image we get is:

y = P[x - b]

and you are seeing vertical banding in this y.
On the other hand, let's say we've got an image x, and we generate our own black frame b. When we do our own black-frame subtraction, we get:

y' = P[x] - P[b]

and you're saying that y' does not exhibit significant banding, but y does. Therefore, the banding must be introduced not by the sensor but by the further processing P[]. When we construct y', we are subtracting off any artifact caused by P, which is why y' does not show significant banding.
Is that essentially what you're saying?
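If that is indeed the idea, here's a quick numpy toy to make sure I've got the logic straight. I'm pretending the "further processing" P[] is just an additive per-column offset (my made-up model of the banding); x, b and all the noise numbers are invented:

```python
import numpy as np

rng = np.random.default_rng(0)
H, W = 256, 256

# Invented scene, fixed-pattern dark signal, and per-frame read noise.
scene = np.full((H, W), 100.0)
dark = rng.normal(5, 1, size=(H, W))                 # fixed-pattern part of b
x = scene + dark + rng.normal(0, 1, size=(H, W))     # the exposure
b = dark + rng.normal(0, 1, size=(H, W))             # the black frame

# Stand-in for the further processing P[]: it adds one offset per column,
# i.e. it injects vertical banding after the subtraction point.
column_offset = rng.normal(0, 3, size=W)
def P(img):
    return img + column_offset

# Camera LENR path: black frame subtracted *before* P[].
y = P(x - b)
# Manual path: both frames go through P[], then we subtract them ourselves.
y_prime = P(x) - P(b)

# Measure the banding as the spread of the column means.
print("column-mean std of y      :", np.std(y.mean(axis=0)))        # banding survives (~3)
print("column-mean std of y_prime:", np.std(y_prime.mean(axis=0)))  # banding cancels (~0.1)
```

Obviously the real P[] is whatever the camera actually does; the toy is only meant to show that a per-frame additive artifact from P cancels in y' but not in y.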