12bit versus 14bit - another sample test

David,

I have been following this discussion since the beginning and have discussed it in private with my friend Iliah - just to give you some background on my point of view. ;-)

Just wanted to say that your post was a very good description and break down of the topic that has flowed across multiple threads.

--
Charles
Light illuminates while shadow defines.
http://stakeman.smugmug.com
 
Why would you want to limit your sample size to 100?
because i am trying to show the minimum sample size to have a 95%
confidence level that two populations are different

from an imaging standpoint - the more samples that are needed to show
this difference, the less useful it is

Emil has posited that the 14th and 13th bits are useless on cameras
because there is too much noise

i suggest that if you sample as little as 100 sensors - you can use
the 14th bit - this is a best case analysis using the read noise of a
D300 as measured using the masked sensors that are present in the raw
data
Emil is right when he states that as the signal rises the noise rises -
so for large signals, 14 bits is not very useful, as the number of
samples needed to show this difference will grow too large to be
practical from an imaging standpoint
David,

A good summary of the previous threads, which I have been following with interest. There seems to be a bit of disagreement among some pretty smart fellows and I have not formed an opinion since Emil and Iliah know more than I do and have not reached agreement. However, if there is a difference, it must be rather small and I doubt its significance for practical work.
i used the TTest because it is commonly available function and that
is what i have been taught to use in comparing two populations to
determine if they are the same

if there is another function in excel or minitab that i should use-
would love to hear about it
Yes, you can use the Z-test, which uses the normal distribution rather than the t-distribution and is found in Excel and Minitab. The statistics book that I am currently using, Biostatistics: a foundation for analysis in the health sciences, 5th ed, Wiley, 1991, offers the following advice on using z or t:

If the sample size is small and the population variance is known (not usually the case), one should use the z-test; if the population variance is unknown, one should use the t-test with small samples.

If the sample size is large and the population variance is known, one should use the z-test. If the population variance (σ²) is not known, one can use the t-test or estimate the population variance from the sample variance (s²) and use the z-test. In Excel, if the population variance is not given in the z-test calculation, the sample variance is used.

He doesn't define large or small, but when n is 100, the differences between the two distributions are minimal, as can be seen by looking at statistical tables for the normal distribution and the t-distribution with various degrees of freedom.

--
Bill
 
Why would you want to limit your sample size to 100?
because i am trying to show the minimum sample size to have a 95%
confidence level that two populations are different

from an imaging standpoint - the more samples that are needed to show
this difference, the less useful it is

Emil has posited that the 14th and 13th bits are useless on cameras
because there is too much noise
A bit too sweeping a generalization. My main point has always been that the S/N ratio defines the number of usable levels of detail. I have consistently maintained that the 13th and 14th bit are below the level of noise of individual pixels, and so signal below that noise level is not detectable from examination of individual pixels. S/N ratio can be improved by combining data from multiple pixels; so if one is willing to sacrifice a substantial amount of resolution, one can combine pixels from a single image to get image information from noisy bits. An alternative is to average multiple images, as is done in astrophotography. Either way, one is improving the S/N ratio to the point where something can be detected.
i suggest that if you sample as little as 100 sensors - you can use
the 14th bit - this is a best case analysis using the read noise of a
D300 as measured using the masked sensors that are present in the raw
data
And this is an example of how giving up resolution can extract some information from the extra bits (in principle). I don't think for any practical purpose one would want to give up a factor of ten in resolution to extract a 14th bit of data from the image.
this all began several filled up threads ago and involved Emil
stating 14bits was worthless for present sensor technology and is
only random noise
Not random; quantization error is correlated to the signal. The question is what one needs to do to extract that signal, if one quantizes more finely by using extra bits.

[snip to meet pointless 6000 word limit]

A fair representation of the thread history IMO.
however, that is not the way modulo arithmetic works - and i showed
that with some excel charts
Huh? Your Excel spreadsheets demonstrated my point -- that without clipping the population of the last two bits is approximately even, and with clipping it is skewed in the direction I said it would be -- level populations of the last two bits are 0 > 1 > 2 > 3
thus i posit that for very low levels, the 13th and 14th bits are still
useful on Nikon cameras since they have set the black point below
zero, vice Canon, which elevates the black point above zero so you can
see the noise profile
No, when the data is clipped, one correlates the level populations of the last two bits with the data from higher bits that DO contain image information. There is no information in those last two bits that wasn't PUT there by correlating the last two bits with the higher bits. Or to say it differently, the information contained in the last two bits that comes from clipping is not NEW information, it is information that is already contained in bits 12 and below, and just copied into the 13th and 14th bits through the clipping procedure. It can't be any other way -- if unclipped data shows no pattern in the extra two bits, then any pattern observed after the data is clipped is due to correlating those two bits with bits that actually do contain signal above the level of the noise.
that leads to today and back to the heart of the statement

if the signal is less than the noise you can't see the signal

i think more precisely he is saying if you have two populations of
sensors you can't measure a difference between them that is less
than the standard deviation

but.... if you can draw enough samples from the two populations, that
statement is not true
Yes, if you are willing to sacrifice enough resolution to combine data from multiple pixels, then one can beat down the noise, and the combined data may require more bits than the data of individual pixels does. That doesn't necessarily mean that the individual pixels require more bit depth. For instance, one way of getting at signal below the noise level is to make multiple measurements and average them; the average of 12-bit (noisy) data can be a fraction of a level, and a finer fraction the more samples are in the average. So it remains to be seen whether the individual pixels all must have higher bit depth just because the aggregate of the pixels requires finer quantization.

Concretely, the average of four integers can be any multiple of 1/4; that doesn't mean that a measurement denominated in integral values must be stored as 1/4 integer values.

--
emil
--



http://theory.uchicago.edu/~ejm/pix/20d/
 
A bit too sweeping a generalization. My main point has always been
that the S/N ratio defines the number of usable levels of detail. I
have consistently maintained that the 13th and 14th bit are below the
level of noise of individual pixels, and so signal below that noise
level is not detectable from examination of individual pixels. S/N
emil
I think that in the same manner that we see details 'in the grain' of a film shot, it may be possible to see trends in the less-noisy pixels (pixels which individually are NOT noisy in the 13th/14th bit).

I know that shortly after the D300 came out, otherwise identical shots were presented at a website, and clearly showed a difference.
 
A bit too sweeping a generalization. My main point has always been
that the S/N ratio defines the number of usable levels of detail. I
have consistently maintained that the 13th and 14th bit are below the
level of noise of individual pixels, and so signal below that noise
level is not detectable from examination of individual pixels. S/N
emil
I think that in the same manner that we see details 'in the grain' of
a film shot, it may be possible to see trends in the less-noisy
pixels (pixels which individually are NOT noisy in the 13th/14th bit).
And how do you know which individual pixels are NOT noisy? That's like knowing which of yesterday's lottery winning numbers was not randomly drawn.
I know that shortly after the D300 came out, otherwise identical
shots were presented at a website, and clearly showed a difference.
Yes those have been discussed in this long series of threads. The D300 does show an improvement; it is due to the different nature of 14-bit readout vs 12-bit readout. BTW, DPR poster bobn2 has suggested that D300 14-bit capture is an example of using multiple samples of 12-bit data to generate the 13th and 14th bits -- since the ADC's in the D300's Sony sensor architecture are 12-bit ADC's, how is one going to get 14-bit data? Read the same sensor data at 12-bit accuracy four times (which is possible in CMOS technology) and add the results together: adding four samples in the range 0-4095 generates a result in the range 0-16383. The four readouts take much longer, which explains the drop in frame rate. Also, random pattern noise in the 12-bit reads will tend to average out, and so one will see a lot less banding in shadows. All these are properties of the 14-bit D300 mode relative to the 12-bit mode.

--
emil
--



http://theory.uchicago.edu/~ejm/pix/20d/
 
noise forms a probability density function and you can't simply say there is a yes or no answer - you have to talk in probabilities

saying that you can't see anything when the signal is below the standard deviation of the noise lacks a context

namely - sample size and confidence interval

but we will get back to that in the next post

for now - lets assume you are correct - any signal for which the signal to noise ratio (where noise is defined as 1 standard deviation of the noise probability density function) is less than 1 is simply random

lets make the following assumptions for a D300

4 photons per level at 14bits
read noise 4.5 levels (measured) = 18 photons
total noise = sqrt(read noise ^2 + photon noise ^2)
photon noise = sqrt(photons)

if you look at the top of the following spreadsheet you can see photon noise decreases as signal goes down; eventually read noise dominates, and you are in trouble once you get to the 13th bit of resolution

however, we know that 0 is not really zero on the D300 - it is about 5 levels above black, so there really is a signal of 22 photons with the 14th bit and now you have a signal to noise ratio greater than 1 - the yellow part

 
when you say the signal to noise ratio has to be greater than 1 where noise is the standard deviation of the noise

you are saying that it can't be determined that two population means that are closer than one standard deviation are different

but you also need to state to what degree of confidence and your sample size

but you already mentioned you wouldn't take a sample size greater than one

so lets take two normally distributed populations that differ by only one standard deviation

quite a bit of overlap, right?



in fact 16% of upper population is below the mean of the lower population

and if you pick one sample from each population - you have a 25% chance of guessing wrong from which population you pulled the sample

what does that mean for an image

take Bob - a 100x100 bitmap with a single pixel at 120 and the rest at 135

look closely you will see the pixel - ok - i lied - the jpeg compression of flickr took it out



now lets apply noise with a standard deviation of 16 to the image - can you find that pixel? - take my word for it or repeat the experiment - you won't see it



sorry about that but lets try this

lets take BobA - it has two lines right down the middle -



now apply noise with a standard deviation of 16



now apply noise with a standard deviation of 32



yep you can still make out the lines

so in spite of the means of the two populations differing by only 16, you can see detail with the standard deviation of the noise being 32

and this is a real world imaging example - imagine a hair or whisker or any fine detail

code for producing these images to follow
 
some of you i hope may wish to duplicate the examples so i have included the code used to generate noise in the bitmap

first construct a 100x100 bitmap in microsoft paint and put any pattern you want on it

now you will need the following code which will compile under microsoft visual c

it will input your bitmap and output a file named ted.bmp with the noise, and a file called normal.txt with the values of the noise

the standard deviation of the noise is defined in the call to random(); the code for that function has been copied from a source on the web

to change the standard deviation of the noise you would have to change the 16 to some other number

// noise.cpp : Defines the entry point for the console application.

#include "stdafx.h"
#include "string.h"
#include "stdlib.h"
#include "stdio.h"
#include "math.h"
#include "time.h"

// polar-method normal deviate with mean 0 and standard deviation stdev
double random(double stdev)
{
    double u1,u2,v1,v2,x1;
    double s=2;
    while(s >= 1)
    {
        u1 = rand();
        u1 = u1/RAND_MAX;
        u2 = rand();
        u2 = u2/RAND_MAX;
        v1 = 2*u1-1;
        v2 = 2*u2-1;
        s = v1*v1 + v2*v2;
    }
    x1 = v1 * sqrt((-2*log(s))/s);
    x1 = x1 * stdev;
    return x1;
}

int main(int argc, char* argv[])
{
    FILE *fInput;
    FILE *fOutput;
    char cInput[20];

    unsigned char cData[45000];
    int iSize,i;

    double dData;
    double dRandom;

    srand((unsigned)time( NULL ));

    strcpy(cInput,argv[1]);
    strcat(cInput,".bmp");

    if (!(fInput = fopen (cInput, "rb")))
    {
        printf("file not found");
        return 1;
    }

    iSize = fread(cData,1,45000,fInput);
    fclose(fInput);

    fOutput = fopen("normal.txt","wb");

    // 0x36 = 54 bytes of BMP header; add noise to every pixel byte after it
    for(i=0x36;i<iSize;i++)
    {
        dData = (unsigned char) cData[i];
        dRandom = random(16);
        fprintf(fOutput,"%f\n",dRandom);
        dData = dData + dRandom + .5;
        cData[i] = (unsigned char) (dData);
    }

    fclose(fOutput);
    fOutput = fopen("ted.bmp","wb");

    fwrite(cData,1,iSize,fOutput);
    fclose(fOutput);

    return 0;
}
 
If the sample size is small and the population variance is known (not
usually the case), one should use the z-test; if the population
variance is unknown, one should use the t-test with small samples.
He doesn't define large or small, but when n is 100, the differences
between the two distributions is minimal as can be seen by looking at
statistical tables for the normal distribution and t-distribution
with various degrees of freedom.
it looks like either test is probably valid in this case - particularly just to show a point

thanks for dusting off the ole statistics book

David
 
for now - lets assume you are correct - any signal for which the
signal to noise ratio (where noise is defined as 1 standard deviation
of the noise probability density function) is less than 1 is simply random
I didn't say that S/N less than one is random; it's simply S/N less than one. Looking at any individual pixel from the sample, one will not be able to distinguish whether its deviation from the mean is due to signal or due to noise.
lets make the following assumptions for a D300

4 photons per level at 14bits
read noise 4.5 levels (measured) = 18 photons
total noise = sqrt(read noise ^2 + photon noise ^2)
photon noise = sqrt(photons)

if you look at the top of the following spreadsheet you can see
photon noise decreases as signal goes down and eventually read noise
dominates and you are in trouble once you get to the 13th bit of
resolution

however, we know that 0 is not really zero on the D300 it is about 5
levels above black so there really is a signal of 22 photons with the
14th bit and now you have a signal to noise ratio greater than 1 -
the yellow part

http://farm4.static.flickr.com/3182/2618049357_535c812e0a_o.jpg
I don't think you're looking at it the right way. Yes, zero raw level on the D300 is about 20 photons, let's just call it 20, so raw level one is then 24 photons. Distinguishing a signal in the 14th bit amounts to being able to distinguish a contrast difference of one raw level, or four photons. The noise is over 18 photons (mostly read noise down near zero signal) and only increases when the photon signal increases. So that contrast difference of four photons is well under the noise.

Look at your table -- at max signal level the S/N ratio only approaches 256=2^8. The noise itself is 256 photons, and there is no way you are going to detect a contrast difference of four photons which is what the 14th bit is encoding. Just because the S/N ratio is bigger than one at some mean signal doesn't mean that the 14th bit is holding signal information.

--
emil
--



http://theory.uchicago.edu/~ejm/pix/20d/
 
when you say the signal to noise ratio has to be greater than 1 where
noise is the standard deviation of the noise
Again, for individual pixels.
you are saying that it can't be determined that two population means
that are closer than one standard deviation are different
No, I didn't say that. When one combines data from multiple pixels, S/N improves and that allows information to be extracted.
but you also need to state to what degree of confidence and your
sample size
Agreed.
so lets take two normally distributed populations that differ by only
one standard deviation

http://farm4.static.flickr.com/3081/2618062151_b39c76c378_o.jpg

and if you pick one sample from each population - you have a 25%
chance of guessing wrong from which population you pulled the sample
With what priors? Is one given the two distributions and then asked whether a given sample came from one or the other distribution? If so, that is far more information than one has when looking at an individual pixel.
what does that mean for an image

take Bob - a 100x100 bitmap with a single pixel at 120 and the rest
at 135
now lets apply noise with a standard deviation of 16 to the image -
can you find that pixel? - take my word for it or repeat the
experiment - you won't see it
better to use .png so that one doesn't get nasty jpeg artifacts; but perhaps your image server doesn't allow png files...

I repeated your lines example using ImageJ. I don't think it's appropriate to add noise chromatically; the raw data is "monochrome" in its original state (or if you like, 2/3 of the color information is missing from the Bayer CFA). Here are the results for a grayscale version of your test:

original (four lines, two brighter than the background and two darker)



with 1 std dev of noise (ie noise equal to contrast difference between lines and background)



two std dev of noise



three std dev of noise



I think by this point one loses the ability to distinguish the lines (or could you tell that I rotated the image by 180 degrees in this last one?).

What is happening here is that the signal (the lines) spans multiple pixels here (as opposed to your first test with a single pixel, which was apparently undetectable), and our perception integrates that information to pull signal out of the noise. This is why pattern noise -- banding -- which is largely one-dimensional, is quite visible in DSLR images even though its absolute magnitude is quite a bit smaller than the overall std dev of read noise; our perception is designed (because it carried survival advantage for our ancestors) to detect lines and edges.

It might seem that this example makes a case that an extra bit (the 13th for current cameras) beyond the level of the read noise is worthwhile, in a situation where there is zero variation of the signal over a large region except for some linear detail. On the other hand, the quite similar low contrast resolution test showed little effect of reduced bit depth...

http://theory.uchicago.edu/~ejm/pix/20d/tests/noise/dpr/sinexp4_8bitL-6bitR.png

Indeed, your example and this one are looking at different things. I am asking the question as to what bit depth still retains the essential image information, while yours is asking what level of noise is needed to obliterate a linear signal that spans multiple pixels (so that the pattern recognition in our perception, which is automatically averaging over multiple pixels, can detect it).

Now, let's ask my question about your example -- let's truncate the first noisy lines example above to a level spacing equal to the std dev of the noise:



and indeed the lines are still quite visible, because the information that the eye is integrating is still there -- on average the level of the lines differs from the level of the background, even when the level spacing is equal to the std dev of the noise.

--
emil
--



http://theory.uchicago.edu/~ejm/pix/20d/
 
and if you pick one sample from each population - you have a 25%
chance of guessing wrong from which population you pulled the sample
With what priors? Is one given the two distributions and then asked
whether a given sample came from one or the other distribution? If
so, that is far more information than one has when looking at an
individual pixel.
the point is that 1 std deviation is not enough difference - you need more like 2.2 std deviations to be 95% certain
I repeated your lines example using ImageJ. I don't think it's
appropriate to add noise chromatically;
noise wasn't added chromatically, it was simple r g b noise on each of the three channels, which is exactly what you are going to see from a sensor that is taking a picture of a gray patch
It might seem that this example makes a case that an extra bit (the
13th for current cameras) beyond the level of the read noise is
worthwhile, in a situation where there is zero variation of the
signal over a large region except for some linear detail. On the
other hand, the quite similar low contrast resolution test showed
little effect of reduced bit depth...
the key here is little effect - there is some - but you are right for large samples you don't need the bit depth

try taking out even more levels - you can still see features

and i finally found out how to see a difference in the noise grain - you deal in channels rather than the entire RGB space and then you will notice the lack of levels

David
 
ejmartin wrote:
Looking at any individual pixel from the sample, one will
not be able to distinguish whether its deviation from the mean is due
to signal or due to noise.
my contention is you need a level at least twice that of the standard deviation to make that assessment from a single pixel

otherwise you will be wrong about 35% of the time

picking a single standard deviation of the noise as the minimum resolution you can measure to seems arbitrary to me but i need to dust off a statistics book

do you have a reference for that for me?

btw i measured the read noise at ISO 100 on the D300 at 3.5 levels
 
The logical next step from the above is to truncate the noisy lines example in which the noise std dev is twice the line contrast (so twice the signal level). Here is that example (noise std dev 32 levels, contrast between lines and background 15 levels):



and here is the same image with the levels rounded off to the nearest multiple of 32 (so bit truncated such that the quantization step is equal to the noise std dev):



The signal is still there in the bit-truncated image at about the same level as the original, even though the quantization step is twice the signal.

BTW, this would not have been possible without the noise to dither the tonal transitions. Here is the truncation to 32 level quantization step in the absence of noise:



--
emil
--



http://theory.uchicago.edu/~ejm/pix/20d/
 
the reason that you can truncate a lot of bits and still have an image show up is because of the bit boundary - in this case 128, so plus or minus one level changes the MSB

so lets pick something a little harder 200

in the upper left corner, a bitmap image with a background at 200 and stripes at plus and minus 2 all the way to 10

in the upper right hand corner - this has been truncated - notice some of the stripes are still there and some have disappeared

lower left hand corner - noise with a standard deviation of 3 has been added and the levels have been adjusted

lower right hand corner same image but lowest two bits have been truncated - notice as Emil states - you can now see more detail -

notice the difference in grain noise - and there is detail missing



code to follow
 
the following code takes a 100x100 bitmap image and applies noise with a standard deviation of 3 and outputs it in a file named ted.bmp

it also truncates and rounds off the 2 lsb's and outputs in a file named teda.bmp

// noise.cpp : Defines the entry point for the console application.

#include "stdafx.h"
#include "string.h"
#include "stdlib.h"
#include "stdio.h"
#include "math.h"
#include "time.h"

// polar-method normal deviate with mean 0 and standard deviation stdev
double random(double stdev)
{
    double u1,u2,v1,v2,x1;
    double s=2;
    while(s >= 1)
    {
        u1 = rand();
        u1 = u1/RAND_MAX;
        u2 = rand();
        u2 = u2/RAND_MAX;
        v1 = 2*u1-1;
        v2 = 2*u2-1;
        s = v1*v1 + v2*v2;
    }
    x1 = v1 * sqrt((-2*log(s))/s);
    x1 = x1 * stdev;
    return x1;
}

int main(int argc, char* argv[])
{
    FILE *fInput;
    FILE *fOutput;
    char cInput[20];

    unsigned char cData[800000];
    int iSize,i;

    double dData;
    double dRandom;

    srand((unsigned)time( NULL ));

    strcpy(cInput,argv[1]);
    strcat(cInput,".bmp");

    if (!(fInput = fopen (cInput, "rb")))
    {
        printf("file not found");
        return 1;
    }

    iSize = fread(cData,1,800000,fInput);
    fclose(fInput);

    fOutput = fopen("normal.txt","wb");

    // 0x36 = 54 bytes of BMP header; add noise, clamped to 0..255
    for(i=0x36;i<iSize;i++)
    {
        dData = (unsigned char) cData[i];
        dRandom = random(3);

        fprintf(fOutput,"%f\n",dRandom);
        if((dData+dRandom+.5) < 0) dData = 0;
        else if (dData+dRandom+.5 > 255) dData = 255;
        else dData = dData + dRandom + .5;

        cData[i] = (unsigned char) (dData);
    }

    fclose(fOutput);
    fOutput = fopen("ted.bmp","wb");

    fwrite(cData,1,iSize,fOutput);
    fclose(fOutput);

    // round and mask off the low bits (note: 0xf8 keeps only 5 bits;
    // the intended mask was 0xfc, as discussed later in the thread)
    for(i=0x36;i<iSize;i++)
    {
        if(cData[i] >= 254) cData[i] = 255;
        else cData[i] = (cData[i]+0x2) & 0xf8;
    }

    fOutput = fopen("teda.bmp","wb");

    fwrite(cData,1,iSize,fOutput);
    fclose(fOutput);

    return 0;
}
 
the reason that you can truncate a lot of bits and still have an
image show up is because of the bit boundary - in this case 128, so
plus or minus one level changes the MSB

so lets pick something a little harder 200

in the upper left corner, a bitmap image with a background at 200 and
stripes at plus and minus 2 all the way to 10

in the upper right hand corner - this has been truncated - notice
some of the stripes are still there and some have disappeared

lower left hand corner - noise with a standard deviation of 3 has
been added and the levels have been adjusted

lower right hand corner same image but lowest two bits have been
truncated - notice as Emil states - you can now see more detail -

notice difference in grain noise and there is detail missing

http://farm4.static.flickr.com/3263/2634792323_1527869929_o.jpg
I'm a bit puzzled -- your figure says 6-bit but the spacing of levels on the upper right is 8 levels, so truncated to 5-bit tonal depth; is that right?

If so, with noise of std dev 3 levels and quantization step 8 levels, the lack of bit depth really does start to lose image information. This is also seen in the demo in my article

http://theory.uchicago.edu/~ejm/pix/20d/tests/noise/gradient256-composite-alt.gif

the part with noise=12 and 3-bit tonal depth (quantization step 32) has the same ratio of noise to quantization step, and indeed posterization is becoming apparent. In an image with detail, the posterization will lose low contrast detail as you have seen. This is why one wants the quantization step to be slightly less (it needn't be a lot less, like a factor of four) than the noise level -- then the std dev of noise is more than the quantization step, and so the typical jump between neighboring pixels due to the noise is at least as large as the roundoff due to quantization (actually, on average, it's quite a bit more; the typical quantization error due to roundoff of a continuous signal is about 1/sqrt(12) ≈ 0.29 of the quantization step).

--
emil
--



http://theory.uchicago.edu/~ejm/pix/20d/
 
I'm a bit puzzled -- your figure says 6-bit but the spacing of levels
on the upper right is 8 levels, so truncated to 5-bit tonal depth; is
that right?
yes, i used the wrong mask - 0xf8 instead of 0xfc

so bottom right is the original, bottom left is 6 bits

top left is 6 bits with noise

top right is 8 bits with noise

this is only showing the red channel and with heroic leveling

detail is there but there is a difference in the grain



so it now makes sense, the noise is toggling the upper bits
This
is why one wants the quantization step to be slightly less (it
needn't be a lot less, like a factor of four) than the noise level --
it would appear that at least for the D300 at ISO 100 this would require 13 bits, as i measured a std deviation of 3.5 levels for read noise

David
 
I'm a bit puzzled -- your figure says 6-bit but the spacing of levels
on the upper right is 8 levels, so truncated to 5-bit tonal depth; is
that right?
yes, i used the wrong mask - 0xf8 instead of 0xfc
So you are simply truncating the last two bits? I have been rounding to the nearest 6-bit level. It makes some difference when you push the image data this hard. I suppose it depends on the details of how the ADC works as to whether it rounds up, down or to the nearest level.
so it now makes sense, the noise is toggling the upper bits
This
is why one wants the quantization step to be slightly less (it
needn't be a lot less, like a factor of four) than the noise level --
it would appear that at least for the D300 at ISO 100 this would
require 13 bits, as i measured a std deviation of 3.5 levels for read
noise
Hmmm. Perhaps you have a good one. The data I have in hand gives a "Lo ISO" read noise of 4.44 ADU. This is from the method outlined in my article. How are you doing the measurement? From the masked pixels?

BTW, the low ISO extension is not really ISO 100 (again for the data I have), rather the gain shows it to be ISO 125. There is truth in advertising though, the metadata field for ISO is blank when the low ISO extension is set ;-)

--
emil
--



http://theory.uchicago.edu/~ejm/pix/20d/
 
