Curve fitting and read noise, FWC, gain

2a) We know it's a potential well, not a physical well. But does it hold electrons of different energies with the same efficiency?

2b) Is charge converted to voltage with the same efficiency for different energies of electrons?
I am no expert, but as far as I understand from Janesick (p. 38) and from a thread here a while back, for photons in the visible range that the filters pass through to the silicon, one converted photon = one valence electron of charge q, independent of the original photon energy. Happy to learn otherwise if that should not be the case.
Ah, as I read it, the electrons freed from the silicon all have the same energy, and any excess energy heats the silicon (at least in the visible range). (Chapter 2, Photon Interaction)

Regards,
 
I hesitate to bring this up because I don't feel like ripping all the code apart right now, but we could use the high mu values to compute FWC and the low mu values to compute RN, and do one-dimensional optimization for each in single-ISO mode instead of two-dimensional optimization. In all-ISO mode, we could do one-dimensional optimization for FWC and two-dimensional optimization for preAmp RN and postAmp RN, instead of three-dimensional optimization.
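Roughly what I have in mind, sketched in Python rather than the Matlab we're actually using (the model form, the split point and the bounds below are just placeholders):

```python
# Split fit: FWC from the shot-noise-dominated highlights, RN from the
# read-noise-dominated shadows, each as a 1-D optimization.
import numpy as np
from scipy.optimize import minimize_scalar

def model_sigma(mu, fwc, rn, dn_max=4095):
    """Modeled noise in DN for mean signal mu (DN), full well fwc (e-), read noise rn (e-)."""
    gain = fwc / dn_max                      # e- per DN
    return np.sqrt(rn**2 + mu * gain) / gain

def split_fit(mu, sigma, dn_max=4095, split=0.1):
    hi = mu > split * dn_max                 # highlights: shot noise dominates
    lo = ~hi                                 # shadows: read noise dominates

    def fwc_err(fwc):                        # 1-D fit for FWC, RN ignored up here
        return np.sum((np.log(model_sigma(mu[hi], fwc, 0.0, dn_max)) - np.log(sigma[hi]))**2)
    fwc = minimize_scalar(fwc_err, bounds=(1e3, 1e6), method='bounded').x

    def rn_err(rn):                          # 1-D fit for RN with FWC frozen
        return np.sum((np.log(model_sigma(mu[lo], fwc, rn, dn_max)) - np.log(sigma[lo]))**2)
    rn = minimize_scalar(rn_err, bounds=(1e-3, 1e3), method='bounded').x
    return fwc, rn
```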

Please say you hate this idea.
I hate this idea because I would like the world to work perfectly according to our simple theories - which assume that read noise and shot noise (and pattern/PRNU, which we are ignoring for now) add in quadrature, and that there are no other types of noise (pattern or quantization) present.
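Which is to say, nothing more than this (a quick sketch in Python, working in electrons):

```python
# The simple theory: shot noise and read noise add in quadrature, and that's all.
import numpy as np

def total_noise(signal_e, read_noise_e):
    shot_noise_e = np.sqrt(signal_e)                    # Poisson shot noise, in electrons
    return np.sqrt(shot_noise_e**2 + read_noise_e**2)   # quadrature sum

def snr(signal_e, read_noise_e):
    return signal_e / total_noise(signal_e, read_noise_e)
```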

The simple model seems to work very well with data from image pairs (I am impressed), except at low ISOs, where we get ringing in the shadows (especially in red and blue, which typically get less light) on ultra-clean sensors. The ringing is WAY off the model, so no fitting criterion is going to work unless we can model what's causing it. I still haven't fully understood the mechanism of the ringing, let alone figured out how to model it.

So if one wanted this level of accuracy (does one? Not necessarily), then one option would be to test for heavy deviations from the model in the shadows and, in that case, rely on the LSE criterion, which biases toward the highlights. If the shadows look well behaved, stick with the log minimization criterion, which fine-tunes things all along the curve. With this strategy, the more and the deeper the data points in the shadows the better, so no more throwing away points with SNR < 2.
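Something like this, just to show the shape of the test (a Python sketch; the shadow cutoff and the deviation threshold are arbitrary placeholders):

```python
# Pick the fitting criterion based on how the shadows behave relative to a
# preliminary model fit (model_sigma = modeled noise at each measured mu).
import numpy as np

def choose_residual(mu, sigma, model_sigma, shadow_frac=0.01, max_dev_stops=0.25):
    shadows = mu < shadow_frac * mu.max()
    dev_stops = np.abs(np.log2(sigma[shadows] / model_sigma[shadows]))
    well_behaved = (not shadows.any()) or dev_stops.max() < max_dev_stops

    if well_behaved:
        # log residuals: fine-tunes all along the curve, shadows included
        return lambda meas, model: np.log(meas) - np.log(model)
    # linear residuals (LSE): effectively biased toward the highlights
    return lambda meas, model: meas - model
```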

Just thinking aloud.
This all begs for testing with the camera simulator that is built into the Matlab program. I had actually forgotten about it; it's been so long since we used it. I'm trying to fire it up, but I seem to have broken it. Give me a little time. It'll be interesting to see if there are ripples when we use the camera simulator. Mebbe not; there is no DC offset in the simulator as it's currently written, and I don't want to add that until I'm confident it's working OK.
I got the camera simulator working. Here's what I get for a 12-bit camera with an FWC of 100,000 electrons, preamp RN of 1 e-, and postamp RN of 0 at ISO 100:

Horizontal axis: mu, stops from full scale. Vertical axis: sigma, stops from full scale.

We see more ringing than that on the D810 and the a7S.

Then I added a variable offset before the ADC, and set it to half an LSB. I got this:

[Plot: the same simulation with a half-LSB offset added before the ADC]

Aha!

My guess is that, with the right combination of RN and offset, we'll be able to replicate the 12 bit camera behavior.
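For anyone who wants to play along at home, the simulation amounts to roughly this (a Python sketch, not the actual Matlab code; the pixel count, exposure sweep and RNG seed are arbitrary):

```python
# Simulate a PTC for an ideal sensor + ADC: shot noise, pre- and post-amp read
# noise, an optional dc offset before the ADC, then quantization and clipping.
import numpy as np

def simulate_ptc(fwc=100_000, pre_rn=1.0, post_rn=0.0, bits=12,
                 offset_lsb=0.0, n_pix=100_000, n_steps=60, seed=0):
    rng = np.random.default_rng(seed)
    dn_max = 2**bits - 1
    gain = fwc / dn_max                          # e- per DN
    means, sigmas = [], []
    for frac in np.logspace(-5, 0, n_steps):     # exposure sweep, fractions of full scale
        e_mean = frac * fwc
        def expose():
            e = rng.poisson(e_mean, n_pix).astype(float)       # shot noise
            e += rng.normal(0.0, pre_rn, n_pix)                # pre-amp read noise (e-)
            dn = e / gain + rng.normal(0.0, post_rn, n_pix)    # post-amp read noise (DN)
            dn += offset_lsb                                   # dc offset before the ADC
            return np.clip(np.round(dn), 0, dn_max)            # quantize and clip
        a, b = expose(), expose()
        means.append(np.mean(a + b) / 2)
        sigmas.append(np.std(a - b) / np.sqrt(2))              # pair subtraction
    return np.array(means), np.array(sigmas)

# e.g. simulate_ptc() for the no-offset case, simulate_ptc(offset_lsb=0.5) for the half-LSB case
```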

Jim

--
http://blog.kasson.com
 
You are right, yes, that pesky offset could be the culprit, in combination with a read noise that is small compared to the LSB size. And does the D810 have the optical black pixels that I think are used to estimate the offset before subtraction in many Nikon cameras? I haven't found them yet.

On the other hand, we are looking at 14-bit data with read noise around 80% of an LSB. If the error is on the order of 50% of an LSB, it could still cause trouble.
 
Therefore, to get the 'proper' FWC (which I think, as Jim suggested, should be about equal for all channels) one should use different signal normalization factors in DN, corrected for the WB pre-conditioning 'gain'.
Except that I tried that, and while it fixes the FWC it screws up the relative read noise. Original set of D810 data (columns = R, G, B; rows = ISO as outlined below):

D810 read (noise) in e- and sat(uration) * 10 ke-. Columns = R, G, B channels; rows = ISO as shown at bottom.

FWCs are not the same. Implied pre-conditioning (normalized to 1 = FWC of one green channel):

Pre-conditioning implied by FWC (yellows = green channels, blue = red channel, purple = blue channel)

So, applying a 1.14 factor to the red mean signal and re-fitting for read noise and saturation (columns represent the red and green channels, respectively):

[Plot: read noise and saturation re-fitted with the 1.14 factor applied to the red channel]

Hypothesis busted, as it looks like it messes with an otherwise fine looking read noise :-(
 
Therefore, to get the 'proper' FWC (which I think, as Jim suggested, should be about equal for all channels) one should use different signal normalization factors in DN, corrected for the WB pre-conditioning 'gain'.
Hypothesis busted, as it looks like it messes with an otherwise fine looking read noise :-(
Actually, hypothesis unbusted - I made a mistake in the post just above: I only multiplied the signal means by the WB pre-conditioning factors, but not the standard deviations (doh! - both were incorrectly normalized by the pre-scaling). By instead multiplying both numerator and denominator by the same factor, the SNR curve shape remains unchanged - and therefore so does the relative read noise, as originally suspected. FWC, on the other hand, is affected as expected, since the curve is plotted against mean signal, which has shifted as a result of the operation.
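In other words, the correction should look like this (Python sketch; wb_factor is the estimated pre-conditioning factor for the channel, e.g. about 1.14 for red):

```python
def undo_preconditioning(mu_dn, sigma_dn, wb_factor):
    # Scale both the mean and the standard deviation by the same WB factor:
    # SNR = mu/sigma (and hence relative read noise) is unchanged, while the
    # curve shifts along the signal axis, which is what moves the fitted FWC.
    return mu_dn * wb_factor, sigma_dn * wb_factor
```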

So here is my new attempt at properly presenting FWC/saturation estimates for WB pre-conditioned data. I took a look at the histogram of a dark D810 raw image at base ISO and counted the missing codes in the red and blue channels to get a rough estimate of the pre-conditioning factors (the green channels are fully populated): red = 34 codes missing in 250 raw levels; blue = 36 codes missing.

This implies that the red channel was pre-scaled by a factor of about 1.136 (similar to what I estimated earlier based on the average FWC difference) and the blue channel by about 1.144 (about 10% lower than my FWC-based estimate). This is what happens when I run the fit with these factors applied to the red and blue channel means and standard deviations (read noise is unchanged):
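The counting itself is nothing fancy - something like this (Python sketch; raw_channel is one CFA channel of the dark frame, and the 250-level window matches the back-of-the-envelope above):

```python
# Estimate the pre-scaling factor from the fraction of missing codes in a
# window of raw levels of a dark frame.
import numpy as np

def prescale_estimate(raw_channel, lo, n_levels=250):
    codes = np.arange(lo, lo + n_levels)
    present = np.isin(codes, np.unique(raw_channel))
    n_missing = np.count_nonzero(~present)
    return 1.0 + n_missing / n_levels    # e.g. 34 of 250 missing -> ~1.136
```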

D810, R and B data corrected for pre-scaling (Jim Kasson data)


Other than in the first couple of ISO stops, where we saw that ringing was messing with the fit and skewing results slightly, the red channel has almost caught up with the greens as expected, with FWC within 1% or so. The blue channel, on the other hand, has gotten much closer but is still 2-3% off. I wonder why?

In any case, I think the case for just using the green channel data for RN and FWC estimates - or at least attempting to correct for pre-scaling - is getting a little stronger in my book.

Jack
 
This is a pretty common error with least-squares fitting.

I haven't read through all of this, but any fitting method requires an appropriate error measure. Minimizing the sum of squares of the errors is appropriate if the uncertainties of all points are equal. Otherwise, the errors should be normalized by the statistical uncertainty of each measurement - in other words, a weighted fit.

In the first case, the uncertainties are roughly on a linear scale, but the fit was done on a log scale! As you noted, that is a substantial mismatch. Just judging visually from the plot, I can't see the random deviations from the line. However, it appears that you have assumed that all the uncertainties are equal on a log scale. This does appear to give you a good fit, at least visually at the scale of the plots.
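Something along these lines, if it helps (a Python sketch, not your code; the model form and the uncertainty estimate are placeholders):

```python
# Weighted fit: normalize each residual by the statistical uncertainty of the
# measured standard deviation at that point.
import numpy as np
from scipy.optimize import curve_fit

def ptc_model(mu, fwc, rn, dn_max=16383):
    gain = fwc / dn_max
    return np.sqrt(rn**2 + mu * gain) / gain

def weighted_fit(mu, sigma_meas, n_pix):
    # For n_pix roughly Gaussian samples, the uncertainty of a measured standard
    # deviation is approximately sigma / sqrt(2*(n_pix - 1)).
    sigma_err = sigma_meas / np.sqrt(2 * (n_pix - 1))
    popt, pcov = curve_fit(ptc_model, mu, sigma_meas,
                           p0=(100_000, 5.0), sigma=sigma_err, absolute_sigma=True)
    return popt, pcov
```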

I'm sorry, I don't have time to explain (or remember!) this in a rigorous form, but that is the gist of it.

There is another possible problem. You and Jim Kasson may have this covered well, but if the zero offset is incorrect, the signal will be calculated incorrectly, and the relative errors will be greatest for the lowest signals. I have seen quite a few test photos in which the zero offset appears to have been determined incorrectly. In particular, I'm not sure it is always exactly equal to what is read on the dark strip.
 
This thread is almost 10 years old. Many of the questions here have been worked out since (for instance, FWC differences in the raw color channels of Nikon cameras, and ringing in low-ISO PTC curves).

Should anyone have any relevant questions feel free to fire away. Otherwise let it rest in peace or start a new thread.

Jack
 
This thread is almost 10 years old. Many questions here have been worked out since (for instance FWC differences in the raw color channels of Nikon cameras and ringing in low ISO PTC curves).
Ah, I would have hoped so. It popped up, so I didn't notice the date.
Should anyone have any relevant questions feel free to fire away. Otherwise let it rest in peace or start a new thread.

Jack
But what is this ringing that keeps coming up?
 
While I was acting as Jim Kasson's coolie a couple of months ago as he coded up an excellent Photon Transfer Matlab application, I came across this interesting tidbit on curve fitting that I would like to submit to the scrutiny of the forum. All data was collected by Jim, with his usual professionalism, by taking raw capture pairs of a defocused uniform patch from the highlights down to the deep shadows. Each pair was added to determine the mean signal (S) and subtracted to obtain the standard deviation of the random noise (total random noise N). The effect of pattern noise and PRNU is thus effectively minimized, producing data that should depend only on the signal, shot noise and read noise.
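In code form, the pair measurement amounts to roughly this (a Python sketch, not the actual Matlab application; a and b are the two raw captures of one channel):

```python
import numpy as np

def pair_stats(a, b, black_level=0.0):
    a = a.astype(float)
    b = b.astype(float)
    signal = np.mean(a + b) / 2.0 - black_level   # mean signal S (black level subtracted)
    noise = np.std(a - b) / np.sqrt(2.0)          # random noise N; FPN/PRNU cancel in the difference
    return signal, noise                          # SNR = signal / noise
```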

SNR is simply S divided by N, and here is Jim's data for the green channel of a D810 at base ISO, plotted as a function of signal on the x-axis (zero stops is clipping/saturation, otherwise known as FWC in some circles):

Note how the data is linear in the highlights: there the SNR is virtually all shot-noise limited, so it equals the square root of the signal, which plots as a straight line on a log-log scale. The slope is correct (quadrupling the signal doubles the SNR, because of the square root), and there is no sign of PRNU, thanks to Jim's excellent technique and pair subtraction.
At this late date, I'd like to say that Jack deserves all the credit for the idea of adding and subtracting pairs of images. Not only does it get rid of PRNU, but it means that uneven lighting of the target and falloff of the lens is not an issue.

 
At this late date, I'd like to say that Jack deserves all the credit for the idea of adding and subtracting pairs of images. Not only does it get rid of PRNU, but it means that uneven lighting of the target and falloff of the lens is not an issue.
Standing on the shoulders of giants, Jim - I believe I saw Emil Martinec doing it. An additional benefit is that one can get the relative noise without having to know the black level.

What I did not realize at the time is that pair subtraction also minimizes 1/sqrt(12) quantization rounding noise from the ADC, which adds in quadrature and is normally swept under the read noise carpet. Together with FPN, this is one of the reasons why the read noise calculated in the OP from pair subtraction is lower than simply reading standard deviation in a uniform patch of a single raw frame.

Jack
 
What I did not realize at the time is that pair subtraction also minimizes 1/sqrt(12) quantization rounding noise from the ADC, which adds in quadrature and is normally swept under the read noise carpet.
Jack,

I'm sure you know this, as it's implied in your answer, but subtraction doesn't eliminate noise; it simply lowers it.

Regards,

Bill
 
