Continued from this thread.
In particular, I wish to address two replies to the
following post:
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Dimitris Servis wrote:
"But the single greatest factor in the noisiness of a photo is how much light the photo is made from"
Yes.
Where are your references for this claim? Or at least an explanation of the scientific basis of this statement of faith?
1. How do you define the noisiness of a photo?
In terms of luminance noise (as opposed to chroma noise), the noise is the standard deviation of the recorded signal from the mean signal, where the mean signal is taken to be the "true" signal.
For example, if you take measurements of 90, 105, 97, 110, and 98 electrons, the mean is 100 electrons and the standard deviation (noise) is 7.7 electrons, resulting in a relative noise of 7.7 / 100 = 7.7%.
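For anyone who wants to check that arithmetic, here's a quick sketch in Python (my own illustration -- the five values are just the made-up electron counts from the example above):

```python
import statistics

# Hypothetical electron counts from five measurements of the same signal
counts = [90, 105, 97, 110, 98]

mean = statistics.mean(counts)    # 100 electrons
noise = statistics.stdev(counts)  # sample standard deviation ~ 7.7 electrons

print(mean, round(noise, 1), f"{noise / mean:.1%}")  # 100 7.7 7.7%
```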
In addition, noise has both a magnitude (demonstrated above) and a spatial frequency. For example, let's consider two photos of the same scene, one photo made with 4x as many pixels as the other, with the assumption that the electronic noise (the noise from the sensor and supporting hardware) is insignificant compared to the photon noise (the noise from the light itself), or that the electronic noise from the two sensors is essentially the same. The photo with 4x the number of pixels will have noise at twice the spatial frequency. But, while its individual pixels will be noisier (have a greater relative deviation), will the photo itself be noisier? The answer is no, no it will not -- the photo made with fewer pixels will simply be more blurry.
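A small simulation makes this concrete. The sketch below is my own illustration, not a measurement: it models pure photon noise as Poisson counts, gives one sensor 4x the pixels of the other over the same area, and then bins the finer sensor 2x2 so both are compared at the same display size:

```python
import numpy as np

rng = np.random.default_rng(0)

# Same scene, same total light: 400 photons per coarse pixel on average.
# The fine sensor splits each coarse pixel into 2x2, so 100 photons each.
coarse = rng.poisson(400, size=(100, 100))
fine = rng.poisson(100, size=(200, 200))

def rel_noise(img):
    return img.std() / img.mean()

# Per-pixel, the fine sensor is noisier (greater relative deviation)...
print(f"coarse pixels: {rel_noise(coarse):.1%}")  # ~ 5%  (1/sqrt(400))
print(f"fine pixels:   {rel_noise(fine):.1%}")    # ~ 10% (1/sqrt(100))

# ...but viewed at the same size (2x2 binning), the noise is the same.
binned = fine.reshape(100, 2, 100, 2).sum(axis=(1, 3))
print(f"fine, binned:  {rel_noise(binned):.1%}")  # ~ 5% again
```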
2. What is the mechanism that connects the definition of noise in (1) with how much light the photo is made from under the assumption that it is captured by an array of n×n independent pixels connected to off-chip amplifiers?
If we have two photos of the same scene displayed at the same size, then, with the same assumptions about electronic noise discussed in the previous paragraph, the photo made with more light will be less noisy.
No. The noise per unit area will be the same but the noise in the photo from the larger sensor will be less obvious because it has not been enlarged as much for the same size of final image. The larger sensor receives more light than the smaller sensor because it is bigger. You are confusing cause and effect.
Some thought experiments:
1. Use a D800 once with a 50 1.4 and once a 35 1.8. How do the two images compare with respect to your noise metric? How do they compare to a D7000+35 1.8?
2. How does a theoretical 4/3 6,549 x 4,912 image compare to the 7,360 x 4,912 D800 image relative to your noise metric?
3. As a frequent user of a D750 and an Om-d Em5 mk ii how are the variables available to me before taking a picture affected by your metric? How does your metric manifest itself in my A3+ prints?
How about a demonstration? Here's the Canon 6D at ISO 6400 and Olympus EM5 at ISO 1600, thus both photos made with the same total amount of light (additionally, both sensors have similar electronic noise levels). All but identical with regards to noise.
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Reply to Dimitris' response:
I don't have images with electrons. Sorry.
Well, Dimitris, you've expressed the crux of your misunderstanding. Light is composed of photons. Those photons release electrons from the silicon in the sensor, and the camera records them. The counts of those photoelectrons are the very basis of the information needed to create the photo.
Reply to Jack Hogan:
He is not interested in equivalence, shutter speed is different and Exposure is the same.
I assume you mean the example I linked. The exposures are not the same for the two photos -- the linked example shows the same noise in two photos made with the same total amount of light.
What an absurd notion this idea of total light equivalence is – and without any scientific basis whatsoever, which is of course why you will not find a single scientific paper from a reputable source explaining the theory. Plenty on shot noise and other sources of noise in digital images but not a single one that even considers the size of the sensor in this regard.
Ah, a subtle change to your trolling. You used to deny shot noise. Now you're denying that there are any papers which 'even consider the size of the sensor'. Well, I don't know whether there are or not, but since it's such a simple and obvious consequence of shot noise, it's not going to be a research result. Simply, what we know about shot noise is that if you take samples with a mean of n events, then the standard deviation will be √n. The signal-to-noise ratio is then n/√n = √n. What we know about the photoelectric effect is that individual photoelectrons are released by individual photons. Put the two together and you find that the more photons per sample (pixel), the higher the number of photoelectrons (events) and therefore the higher the SNR.
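That is easy to verify numerically. A minimal sketch of my own, assuming nothing beyond numpy's Poisson generator standing in for photon arrivals:

```python
import numpy as np

rng = np.random.default_rng(42)

# Photon arrivals are Poisson-distributed: simulate many pixels, each
# catching a mean of n photons, and check that noise ~ sqrt(n).
for n in (100, 400, 1600):
    samples = rng.poisson(n, size=1_000_000)
    noise = samples.std()
    snr = samples.mean() / noise
    print(f"n={n:5d}  noise={noise:6.1f}  sqrt(n)={n**0.5:5.1f}  SNR={snr:5.1f}")

# Quadruple the light per pixel and the SNR doubles: 10 -> 20 -> 40.
```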
Try as hard as you like, you will not find a single scientific paper, or even an informal article from a reputable source, that offers a scientific explanation as to why the more light a sensor receives (of a given intensity or brightness) the higher the signal to noise ratio of each pixel or the image as a whole will be.
Ah, now we're back to denying shot noise. Anyway, here is a nice post from Iliah giving a few of the sources that you claim don't exist.
https://www.dpreview.com/forums/post/60115587
The simple fact is that small sensors produce lower quality images than large sensors because the laws of physics and limits of technology mean that it is not possible to produce a small sensor with the signal to noise ratio and dynamic range of a large sensor. The larger the sensor, the easier it becomes.
Please do explain which 'laws of physics' you are talking about, and precisely what the resulting engineering constraints are that make it 'not possible', at least over the range we are talking about for camera sensors (generally sensor diagonals in the tens of millimetres and pixel sizes in the range 3-10 microns). And while we're at it, could we please have a reference to a 'single scientific paper from a reputable source explaining the theory'?
That is also why the quality of images, especially visible noise and dynamic range, improves dramatically when one doubles the size of a tiny sensor but the difference in quality between full frame and medium format sensors is much less, and even often non-existent. Above a certain size, the electronics are working at their maximum in terms of light capture, signal to noise ratio and dynamic range and any further increase in size will give no improvement in quality – but higher resolution of course.
That would not happen if the noise in an image was really dependent on the total amount of light used to make the image. It would simply keep improving through medium and large format, but it does not because it is simply not true.
The same as above. Let's have a 'single scientific paper from a reputable source explaining the theory'.
--
Tinkety tonk old fruit, & down with the Nazis!
Bob