rnclark wrote:
Hi Jack,
Theories are great, but often leave something out, or don't fully apply to a limited set of observations.
For example, the Nyquist theorem says to sample at at least twice the highest frequency. But that assumes an infinite time base and constant frequency. With a limited time base, one gets aliasing unless sampling is at a rate much higher than Nyquist.
In the real world, there are systematics. For example, in image sensors: fixed pattern noise, 1/f noise, finite digitization. Typically, one can go to great lengths to reduce noise sources like fixed patterns, but in practice it gets very difficult to correct them to more than about 10x below the random noise.
In the above model, which is a pure mathematical model with a random number generator (which of course is not perfectly random, but is a purer model than a real sensor with low-level FPN), it gets difficult to extract signals smaller than about 10 times lower than the random noise. Thus we see that with random noise of 3 electrons, it is tough to get below 0.3 photons per pixel per frame. Of course one can stack more, but to get to 0.2 photons/exposure one would need to increase the number of frames by a factor of 1.5^2, i.e. 2.25x, and in the real world it would take more due to the systematics. This leads to a key in astrophotography: collect the light as fast as you can, meaning large-aperture, fast optics.
Regarding quantization noise, there is always +/- 1 bit. So at unity gain with 3 electron read noise you get:
sqrt(3^2 +1^2) = 3.2 electrons noise.
It is that quantization error that contributes to the degradation of extracting the faint signal.
I hear you on real-world vs. ideal and on FPN, Roger. On quantization error, though, I understand that the random read noise figures we are typically able to measure and discuss in these fora already include such a quantization component. For instance, when bclaff or sensorgen say that the D7200 has about 1.9e- read noise at base ISO, that figure already includes quantization error: it's part of the measurement.
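To make that bookkeeping explicit, here is a minimal numeric sketch of how I understand the quadrature sum; the gain and noise values are made up for illustration (they are not the D7200's), and I am using the usual 1 LSB/sqrt(12) rms of an ideal quantizer rather than your more conservative +/- 1 bit:

```python
import math

pre_adc_noise_e = 1.8                         # hypothetical random noise at the ADC input, e-
gain_e_per_dn = 0.8                           # hypothetical conversion gain, e-/DN
quant_rms_e = gain_e_per_dn / math.sqrt(12)   # ideal quantizer rms, referred to electrons

# what a dark-frame measurement sees is the quadrature sum of the two
measured_e = math.sqrt(pre_adc_noise_e**2 + quant_rms_e**2)
print(f"quantization: {quant_rms_e:.2f} e-, measured total: {measured_e:.2f} e-")
```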
Your earlier chart and comment spurred me to investigate further, though, so I simulated a version of the strips: the numbers are Poisson objects with mean intensities increasing linearly from zero at the left to 1 e-/capture at the right, 0.1e- times the number, so 6 represents a signal of 0.6e-/capture. To make them display this way I mapped the lowest value in the file to zero and the highest to 255, the equivalent of a levels adjustment:
A) Linear intensities of e-/capture, so 3 corresponds to 3/10e- and 10 to 1e-/capture
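For reference, something along these lines is all it takes to set the scene up in numpy. This is a sketch rather than my exact script: the strip geometry is arbitrary and a plain linear ramp stands in for the drawn numerals.

```python
import numpy as np

rng = np.random.default_rng(0)
height, width = 64, 1000                  # arbitrary strip size
# mean signal ramp: 0 e-/capture at the left edge, 1 e-/capture at the right
mean_e = np.broadcast_to(np.linspace(0.0, 1.0, width), (height, width))

def levels_adjust(img):
    """Map the lowest value to 0 and the highest to 255 for display."""
    return 255.0 * (img - img.min()) / (img.max() - img.min())
```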
If we add 3 e- of Gaussian read noise to the signal ramp and digitize the resulting image with a 'proper' gain of 1 DN = 2e- (top row in your chart), this is what we get after the levels adjustment:
B) 1 Frame: same as above with 3e- of random read noise and quantized at 1 LSB = 2e-.
Ugh, we can't see anything. Fair enough: theory says that the signal-to-noise ratio at number 10 should be about 0.316, and the SNR at number 1 should be about 0.033, much (much!) below the typically acceptable working range. Bear with me though, as I am trying to think this through.
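Incidentally, continuing the sketch above, a frame like B) is just Poisson photon counts plus Gaussian read noise, rounded to the nearest 2 e- code (negative codes are kept for simplicity; a real ADC would clip or apply an offset):

```python
# one simulated frame like B): photon counts + 3 e- read noise, digitized at
# a gain of 1 DN = 2 e-, then converted back to electrons
read_noise_e, gain_e_per_dn = 3.0, 2.0

frame_e = rng.poisson(mean_e) + rng.normal(0.0, read_noise_e, mean_e.shape)
frame_b = np.round(frame_e / gain_e_per_dn) * gain_e_per_dn
display_b = levels_adjust(frame_b)        # the strip shown as B)
```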
Since the random noise standard deviation (3e-) was higher than 1 DN (2e-) in figure B) above, information theory suggests that the light information from the signal (the numbers) was nevertheless 'properly' captured in the raw data: assuming no DSNU or FPN, all we would need to do to pull out the information buried deep in the noise is to stack many such captures to reduce the proportion of random read noise in the final image. For instance, here is a stack of 10 lights of the subject above
C) 10 frames with the same properties as the one just above
and here is one of 100
D) 100 frames of B) above
Coming out nicely. If we keep at this ideal exercise we should be able to recover the visual information stored in the individual captures at any arbitrary target SNR. And in fact the result of stacking enough (for me) such captures is top image A), showing the original signal numbers in all their glory. You had not realized that the top image was the final stack, had you? If you want a cleaner final image, keep going.
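In code terms the stacking step, continuing the same sketch, is simply an average of N independently simulated lights, each with its own fresh photon and read noise:

```python
def simulate_light(mean_e, read_noise_e, gain_e_per_dn, rng):
    """One light: fresh photon (Poisson) noise + read noise, then quantized."""
    e = rng.poisson(mean_e) + rng.normal(0.0, read_noise_e, mean_e.shape)
    return np.round(e / gain_e_per_dn) * gain_e_per_dn

def stack(n_frames, mean_e, read_noise_e, gain_e_per_dn, rng):
    """Average n_frames independently simulated lights."""
    acc = np.zeros(mean_e.shape)
    for _ in range(n_frames):
        acc += simulate_light(mean_e, read_noise_e, gain_e_per_dn, rng)
    return acc / n_frames

stack_c = levels_adjust(stack(10,  mean_e, 3.0, 2.0, rng))    # like C)
stack_d = levels_adjust(stack(100, mean_e, 3.0, 2.0, rng))    # like D)
```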
Of course I am not suggesting that anybody sane should attempt to capture the number of frames required to recover a signal recorded at an SNR of 1/30 in a single frame, just that one can do so if one so desires - as long as the information is in the data in the first place. And the condition for that to be true is that the random noise at the input of the ADC needs to be higher than one LSB. In this case it was, and so we could.
Let's take a look now at a case when it isn't. The following four strips correspond to B), C), D) and A) above respectively, with the same signal but lower random read noise of 1e-, digitized by an ADC where it takes 10e- to reach 1 LSB (the a7S has a similar gain at base ISO):
B1) 1 Frame: same signal as B) above but with 1e- of random read noise and quantized at 1 DN = 10e-.
Yeah, the tiny signal (remember, maximum 1 e-) just can't make it to the 0.5 DN threshold without the help of the relatively stronger dithering we had earlier, so most of the single frames are full of zeros (0, zip). The few specks one can see are statistical oddities.
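The same sketch makes the point numerically: with 1 e- of random noise against a 10 e- step, the 0.5 DN (= 5 e-) threshold is almost never crossed, so nearly every pixel digitizes to code zero.

```python
# B1): same ramp, 1 e- read noise, 1 DN = 10 e-
frame_b1 = simulate_light(mean_e, read_noise_e=1.0, gain_e_per_dn=10.0, rng=rng)
print("fraction of zero codes:", (frame_b1 == 0.0).mean())
```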
C1) 10 frames as in B1), compare to C)
D1) 100 frames as in B1), compare to D). This is what information loss looks like.
The oddities are piling up, but because they are oddities they are not in the right proportions... so by the time one gets to n frames (as in figure A above) the result is nonlinear, with distorted means and standard deviations:
A1) Hey, in quantum mechanics if you play tennis against a wall long enough the ball goes through it
Here instead is what A) looks like again for comparison, with the same number of lights stacked. Here means and standard deviations are proportional to the original, as expected from frames that captured the 'full' visual information:
A) Same signal but more noise than A1) ... and 'proper' ADC threshold (gain)
Therefore, ignoring pixel response non-uniformities for a moment (mainly FPN and DSNU), the key to being able to capture visual information in the raw data is a properly dithered ADC: random noise rms at its input of at least 1 DN (= 1 LSB). Less will result in visual information loss; more will not help, but will require stacking more captures in order to dig out the signal at the same final target SNR.
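The same toy simulation can be used to check the criterion: compare the mean of a deep stack with the true signal for the two gains above. In the well-dithered case the stacked mean converges on the true ramp; in the starved case it stays biased however many frames we pile on (a sketch under the same assumptions as before):

```python
# stacked mean vs true mean at the brightest column (1 e-/capture)
deep_dithered = stack(1000, mean_e, read_noise_e=3.0, gain_e_per_dn=2.0,  rng=rng)
deep_starved  = stack(1000, mean_e, read_noise_e=1.0, gain_e_per_dn=10.0, rng=rng)

col = width - 1
print("true mean      :", mean_e[:, col].mean())
print("dithered stack :", deep_dithered[:, col].mean())
print("starved stack  :", deep_starved[:, col].mean())
```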
Bill Claff conveniently provides such estimates of read noise in DN for most cameras by measuring noise directly off raw dark frames at every ISO. Keep a bit of a margin, however, because the resulting figures refer to the output of the ADC and include ADC noise, PRNU, FPN, quantization noise and more*. The actual random noise at the input of the ADC will therefore be somewhat lower once those components have been subtracted out. How much lower depends on how well the camera's imaging system was tuned during design and manufacturing. Every generation is better than the last.
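As a rough illustration of that subtraction (the measured figure here is invented), removing just the ideal quantization term in quadrature already gives an upper bound on the random noise at the ADC input; ADC noise, FPN and the rest would need their own estimates:

```python
import math

measured_output_dn = 1.2                  # hypothetical read noise measured off raw data, DN
quant_dn = 1.0 / math.sqrt(12)            # ideal quantization rms, DN
input_bound_dn = math.sqrt(measured_output_dn**2 - quant_dn**2)
print(f"random noise at the ADC input: <= {input_bound_dn:.2f} DN")
```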
Ok, thanks to you folks this concludes my initiation to astro, I think I understand how the basics work. Now all I have to do is go out, buy the kit and practice. Don't hold your breath
Jack
* Also be careful about the actual bit depth of some cameras when determining the random noise in one LSB: some 14-bit cameras run at 13 or even 12 bits depending on operating mode. For instance the a7mkII, like many other advanced Sonys, appears to run the ADC at 13 bits at base ISO. I understand that Bill Claff's read-noise-in-DN figures refer to a 14-bit scale instead, so its measured read noise of 1.1 DN at ISO100 could actually be only about half that, roughly 0.55 LSB, relative to the camera's actual 13-bit steps. And the a7mkII does in fact show quantization artifacts a stop or so up from base ISO.
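The arithmetic of the footnote as a one-liner sketch:

```python
# a read noise quoted on a 14-bit scale, re-expressed in the camera's actual 13-bit LSBs
quoted_dn_14bit = 1.1                          # the ISO 100 figure mentioned above
read_noise_lsb_13bit = quoted_dn_14bit / 2.0   # 1 thirteen-bit LSB = 2 fourteen-bit DN
print(read_noise_lsb_13bit)                    # ~0.55 LSB, short of the ~1 LSB dither target
```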