DesmondD: This is an honest question: Considering the speed of light compared to the shutter speeds we use what kind of shutter speeds can actually show up the randomness of "light packets"? There must be a point where it all averages out pretty well - where is the boundary?
VidJa: does anyone know where we are in efficiency of the sensors? with other words, how many of the available photons do we measure with current sensors and how far can we expect to improve?
So you spotted a typo. Do some googling yourself about quantum efficiencies of modern sensors: values in the 50-80% region are normal.
And you have not quite understood the '1000x more light sensitive' thing. That is normalised per unit quantity of material: it means that maybe you could make sensors from graphene that need 1/1000 as much material in them as the active ingredient in a CMOS sensor, but you can't get an efficiency 1000 times greater than 80%.
If you made a sensor with graphene that had a full-well capacity 1000x higher than a modern CMOS sensor it would indeed give amazingly noise-free images, but you would need 1000x the exposure to make the most of it (an ISO value of <1). Shutter speeds of 1 sec in bright sunlight, anyone?? If you shot a picture at ISO 100 to get a sensible shutter speed you would still be limited by the shot noise associated with the statistical properties of light, which has nothing to do with the sensor material.
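That shot-noise limit can be made concrete with a quick sketch (the photon counts below are illustrative assumptions, not figures for any real sensor). At the shot-noise limit, SNR = N/sqrt(N) = sqrt(N), so a 1000x bigger well only buys you sqrt(1000) ≈ 32x better SNR, and only if you actually collect 1000x the photons:

```python
import math

# At the shot-noise limit, SNR = N / sqrt(N) = sqrt(N), where N is the
# number of photons collected -- set by the exposure, not the sensor material.
photons_normal = 30_000                # illustrative highlight exposure
photons_1000x = photons_normal * 1000  # 1000x the exposure (the ISO < 1 case)

snr_normal = math.sqrt(photons_normal)
snr_big = math.sqrt(photons_1000x)
print(f"SNR gain from 1000x exposure: {snr_big / snr_normal:.1f}x")
```

A bigger well only helps if you fill it; at a fixed exposure the photon count, and hence the shot noise, is unchanged whatever the sensor is made of.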
mpgxsvcd: In order to believe this article you have to accept that shot noise is a significant factor in short exposure photography. That fact is stated in the article. However, it really isn’t demonstrated. I think that is where some people are getting hung up. They can’t accept that fact without seeing it.
It would be really cool if you showed how much noise is contributed by shot noise vs. read noise in each review. You could subtract out the read noise with dark frames, leaving only the shot noise from a very dark scene.
No, it is *equal* to the square root: a consequence of normal statistical sampling - see
(paragraph 6, beginning 'for large numbers...')
To use the test-tube analogy from the article, let's say the density of raindrops and width of the test tube mouth means that the average number of drops collected per test tube is 100. So if you count them all and plot a distribution curve, it will look like a bell curve with a standard deviation of +/- 10 raindrops.
This arises because the random distribution of raindrops does not mean that they fall exactly evenly, everywhere - that is definitely not random. (Get 100 blindfolded people to lob a dart at a dartboard so that the board has 100 randomly-distributed holes in it. Are the holes perfectly evenly spread out with the same number in every square inch? Or will there be some clumps of holes close together and some areas with no holes?)
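The clumping in that dartboard picture is exactly what Poisson statistics describes, and it is easy to check numerically. A minimal sketch (assuming NumPy is available; the mean of 100 matches the test-tube example above):

```python
import numpy as np

rng = np.random.default_rng(0)

# Each test tube collects a Poisson-distributed number of raindrops,
# with an average of 100 drops per tube.
counts = rng.poisson(lam=100, size=1_000_000)

print(counts.mean())  # close to 100
print(counts.std())   # close to sqrt(100) = 10
```

The standard deviation comes out at the square root of the mean, with no extra assumptions needed: that is the shot noise.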
The information you want is here. Efficiencies are now up to 80-90% in the best cases.
Chris2210: I was expecting this article to have at least some explanation of why pixel pitch is important - particularly at high sensitivities. It's fairly straightforward by my understanding:
To follow the test tube analogy: in a scenario with a very small sample [rain or light, whether that's a short exposure or a low flow], a larger test tube/sensel gives a better chance of collecting a useful sample. Larger numerical arrays [be that tubes or megapixels] may be able to reduce the effect of noise by averaging across neighbouring captures [downsampling], but once the individual samples become increasingly unreliable that no longer applies.
Hence sensors at any given size with larger/fewer receptors will have a potentially higher sensitivity ceiling [all other things being equal]. That's right, isn't it?
Imagine taking a 10MP sensor and then subdividing it into a 40MP sensor; 4 small photosites cover the area of one large one.
The large photosite can count 4x more photons, so the uncertainty (noise) is halved compared to a smaller photosite. But with the smaller photosites, you have four of them in the same area. Each individual one is 2x noisier, but if you average the readings from the four over the same area, the noise goes down by the square root of the sample size: it is halved. The result is a draw for a given sensor area: it makes no difference how many photosites you split the sensor into; the noise per unit area of sensor, or per unit area of final print, is the same. So total noise basically relates to sensor area, not MP count.
To put it another way: the smaller photosites are individually noisier, but they are printed smaller, with more of them per square inch of print, so the noise is less apparent. It's a draw. (But you will get more detail.)
You can calculate it easily enough from here: http://www.sensorgen.info/
Take the first on the list (EOS 1000D). Full-well capacity of 34,000 electrons, so at base ISO the shot noise associated with the brightest parts of the picture will be sqrt(34000), which is around 185 e. Read noise is given as 5 electrons. So read noise is less than 3% of shot noise if you expose to the right at base ISO.
For an EOS-1DX, shot noise in the highlights at base ISO is sqrt(90000) = 300 electrons. Read noise is 1.2 e, so it is <1% of shot noise.
Going back to the 1000D, to have read noise (5e) equal to shot noise, you'd have to have a maximum signal of 25 e (instead of 34000!) as the brightest part of the picture - which is equivalent to an ISO of over 100,000...
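The arithmetic in the last three paragraphs fits in a few lines (using the full-well and read-noise figures quoted above from sensorgen.info):

```python
import math

# (full-well capacity in electrons, read noise in electrons)
sensors = {
    "EOS 1000D": (34_000, 5.0),
    "EOS-1D X": (90_000, 1.2),
}

for name, (full_well, read_noise) in sensors.items():
    shot_noise = math.sqrt(full_well)  # shot noise at saturation, in electrons
    ratio = read_noise / shot_noise
    print(f"{name}: shot noise {shot_noise:.0f} e-, read/shot = {ratio:.1%}")

# Signal level at which read noise equals shot noise:
# sqrt(N) = read_noise  =>  N = read_noise**2.
# For the 1000D that is 5**2 = 25 electrons, as stated above.
print(5.0 ** 2)
```

The crossover point scales as the square of the read noise, which is why even a modest improvement in read noise pushes the read-noise-limited regime so far down into the shadows.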