When you say the signal-to-noise ratio has to be greater than 1, where noise is the standard deviation of the noise,
Again, for individual pixels.
you are saying that it can't be determined that two population means that are closer than one standard deviation are different.
No, I didn't say that. When one combines data from multiple pixels, S/N improves and that allows information to be extracted.
but you also need to state the degree of confidence and the sample size
Agreed.
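To put a number on that: averaging pixels shrinks the noise on the mean by the square root of the number of pixels averaged, so a patch whose mean sits only one noise std dev above the background becomes easy to detect once enough pixels are combined. A minimal sketch (numpy, with made-up unit-noise numbers rather than real camera data):

```python
import numpy as np

rng = np.random.default_rng(0)

signal = 1.0     # patch mean minus background mean, in units of the noise std dev
noise_sd = 1.0   # noise std dev (same units), i.e. per-pixel S/N of exactly 1
trials = 10000

for n_pixels in (1, 4, 16, 64):
    # Draw n_pixels from the patch and from the background, compare their means.
    patch = rng.normal(signal, noise_sd, size=(trials, n_pixels)).mean(axis=1)
    bkgnd = rng.normal(0.0,    noise_sd, size=(trials, n_pixels)).mean(axis=1)
    # Fraction of trials in which the patch mean comes out above the background
    # mean; the std error of each mean is noise_sd/sqrt(n_pixels), so the
    # confidence climbs toward 1 as more pixels are combined.
    print(n_pixels, (patch > bkgnd).mean())
```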
So let's take two normally distributed populations that differ by only one standard deviation:
http://farm4.static.flickr.com/3081/2618062151_b39c76c378_o.jpg
and if you pick one sample from each population, you have a 25% chance of guessing wrong which population each sample was pulled from.
With what priors? Is one given the two distributions and then asked whether a given sample came from one or the other distribution? If so, that is far more information than one has when looking at an individual pixel.
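For what it's worth, if the intended setup is "draw one sample from each population and assign the larger draw to the higher-mean population", the error rate does come out near the quoted figure, about 24%, since the difference of the two draws is normal with mean 1 and std dev sqrt(2). A quick check of that reading of the claim (numpy; illustrative only):

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(1)
n = 200000

# Two populations whose means differ by exactly one standard deviation.
a = rng.normal(0.0, 1.0, n)   # lower-mean population
b = rng.normal(1.0, 1.0, n)   # higher-mean population

# Decision rule: whichever draw is larger is assigned to the higher-mean population.
wrong = np.mean(a > b)

# Analytic value: b - a is normal with mean 1 and std dev sqrt(2),
# so P(wrong) = Phi(-1/sqrt(2)) ~= 0.24.
analytic = 0.5 * (1 + erf((-1 / sqrt(2)) / sqrt(2)))
print(wrong, analytic)
```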
What does that mean for an image?
Take Bob, a 100x100 bitmap with a single pixel at 120 and the rest at 135.
Now let's apply noise with a standard deviation of 16 to the image. Can you find that pixel? Take my word for it or repeat the experiment: you won't see it.
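For anyone who wants to repeat that single-pixel test without an image editor, here is roughly the construction as I understand it (a numpy sketch using the values quoted above; the pixel position is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)

# 100x100 field at level 135 with one pixel darkened to 120 (a 15-level signal).
img = np.full((100, 100), 135.0)
img[50, 50] = 120.0

# Add Gaussian noise with std dev 16, i.e. slightly larger than the signal.
noisy = np.clip(img + rng.normal(0.0, 16.0, img.shape), 0, 255)

# How does the marked pixel rank among all 10000 pixels? With per-pixel S/N
# below 1 it is usually nowhere near the darkest pixel in the frame, so it
# cannot be picked out by eye or by sorting.
rank = np.argsort(noisy, axis=None).tolist().index(50 * 100 + 50)
print("rank of the marked pixel (0 = darkest of 10000):", rank)
```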
Better to use .png so that one doesn't get nasty JPEG artifacts; but perhaps your image server doesn't allow PNG files...
I repeated your lines example using ImageJ. I don't think it's appropriate to add noise chromatically; the raw data is "monochrome" in its original state (or if you like, 2/3 of the color information is missing from the Bayer CFA). Here are the results for a grayscale version of your test:
original (four lines, two brighter than the background and two darker)
with 1 std dev of noise (i.e., noise equal to the contrast difference between the lines and the background)
two std dev of noise
three std dev of noise
I think by this point one loses the ability to distinguish the lines (or could you tell that I rotated the image by 180 degrees in this last one?).
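If anyone wants to regenerate these outside ImageJ, the construction is simple enough; here is a rough sketch (numpy + PIL; the exact geometry and contrast of my test image are approximated, not copied):

```python
import numpy as np
from PIL import Image

rng = np.random.default_rng(3)

h, w, bg, contrast = 200, 200, 128.0, 8.0
img = np.full((h, w), bg)
# Four one-pixel-wide vertical lines: two brighter and two darker than the background.
img[:, 40] += contrast
img[:, 80] += contrast
img[:, 120] -= contrast
img[:, 160] -= contrast

# Add Gaussian noise at 1x, 2x and 3x the line/background contrast and save
# each version as a grayscale PNG for visual comparison.
for k in (1, 2, 3):
    noisy = np.clip(img + rng.normal(0.0, k * contrast, img.shape), 0, 255)
    Image.fromarray(noisy.astype(np.uint8)).save(f"lines_{k}sd.png")
```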
What is happening is that the signal (the lines) spans multiple pixels here (as opposed to your first test with a single pixel, which was apparently undetectable), and our perception integrates that information to pull signal out of the noise. This is why pattern noise -- banding -- which is largely one-dimensional, is quite visible in DSLR images even though its absolute magnitude is quite a bit smaller than the overall std dev of read noise; our perception is designed (because it carried survival advantage for our ancestors) to detect lines and edges.
It might seem that this example makes a case that an extra bit (the 13th for current cameras) beyond the level of the read noise is worthwhile, in a situation where there is zero variation of the signal over a large region except for some linear detail. On the other hand, the quite similar low contrast resolution test showed little effect of reduced bit depth...
http://theory.uchicago.edu/~ejm/pix/20d/tests/noise/dpr/sinexp4_8bitL-6bitR.png
Indeed, your example and this one are looking at different things. I am asking what bit depth still retains the essential image information, while yours asks what level of noise is needed to obliterate a linear signal that spans multiple pixels (so that the pattern recognition in our perception, which is automatically averaging over multiple pixels, can detect it).
Now, let's ask my question about your example -- let's truncate the first noisy lines example above to a level spacing equal to the std dev of the noise:
and indeed the lines are still quite visible, because the information that the eye is integrating is still there -- on average the level of the lines differs from the level of the background, even when the level spacing is equal to the std dev of the noise.
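One can check this numerically as well: posterize a noisy line image to a level spacing equal to the noise std dev and compare column averages; the mean difference between line and background survives the truncation. A sketch with assumed values (not the actual test file):

```python
import numpy as np

rng = np.random.default_rng(4)

bg, contrast, noise_sd = 128.0, 8.0, 8.0   # noise equal to the line/background contrast
img = np.full((200, 200), bg)
img[:, 40] += contrast                      # one brighter line suffices to make the point

noisy = img + rng.normal(0.0, noise_sd, img.shape)

# Truncate to a level spacing equal to the noise std dev (posterize in steps of 8).
truncated = np.floor(noisy / noise_sd) * noise_sd

# The column means survive the truncation: averaged down a 200-pixel column,
# the line column still sits about one contrast step above its neighbour,
# which is exactly the information the eye integrates to see the line.
print("line column mean:      ", truncated[:, 40].mean())
print("neighbour column mean: ", truncated[:, 41].mean())
```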
--
emil
--
http://theory.uchicago.edu/~ejm/pix/20d/