Total Light Theory continued.

What an absurd notion this idea of total light equivalence is – and without any scientific basis whatsoever, which is of course why you will not find a single scientific paper from a reputable source explaining the theory.
It should be very easy, then, for someone to fault this theory [I'd rather call it a model] using science-based arguments. Are you up to that job?
It is for those proposing the theory to provide the evidence in support of it. And, as I have pointed out, they seem to be unable to do so. Why not give it a go yourself if you feel you are up to the job?
 
Continued from this thread.

In particular, I wish to address two replies to the following post:

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Dimitris Servis wrote:

"But the single greatest factor in the noisiness of a photo is how much light the photo is made from"
Yes.

Where are your references for this claim? Or at least an explanation of the scientific basis of this statement of faith?

1. How do you define the noisiness of a photo?
In terms of luminance noise (as opposed to chroma noise), the noise is the standard deviation of the recorded signal from the mean signal, where the mean signal is taken to be the "true" signal.

For example, if you take measurements of 90, 105, 97, 110, and 98 electrons, the mean is 100 electrons and the standard deviation (noise) is 7.7 electrons, resulting in a relative noise of 7.7 / 100 = 7.7%.
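(A minimal Python sketch for anyone who wants to check that arithmetic; the 7.7 figure is the sample standard deviation, i.e. dividing by n-1:)

import numpy as np

samples = np.array([90, 105, 97, 110, 98])   # measurements in electrons
mean = samples.mean()                         # 100.0 electrons, taken as the "true" signal
noise = samples.std(ddof=1)                   # ~7.71 electrons (sample standard deviation)
print(mean, noise, noise / mean)              # relative noise ~7.7%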

In addition, noise has both a magnitude (demonstrated above) and a frequency. For example, let's consider two photos of the same scene, one photo made with 4x as many pixels as the other, with the assumption that the electronic noise (the noise from the sensor and supporting hardware) is insignificant compared to the photon noise (the noise from the light itself), or that the electronic noise from the two sensors is essentially the same. The photo with 4x the number of pixels will have noise at twice the frequency. But, while the individual pixels will be more noisy (have a greater relative deviation), will the photo itself be more noisy? The answer is no, it will not -- the photo made with fewer pixels will simply be more blurry.
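(A quick Poisson simulation -- my own sketch, not part of the original post -- illustrates this: same total light, one sensor sampled with 4x as many pixels, noise compared per pixel and again after averaging each 2x2 block, which is roughly what viewing both photos at the same size does:)

import numpy as np

rng = np.random.default_rng(0)
coarse = rng.poisson(400, size=(500, 500))    # 400 photons per pixel on the low-res sensor
fine = rng.poisson(100, size=(1000, 1000))    # same total light spread over 4x as many pixels

rel = lambda img: img.std() / img.mean()
print(rel(coarse))                            # ~0.05  (1/sqrt(400))
print(rel(fine))                              # ~0.10  (1/sqrt(100)): noisier per pixel

binned = fine.reshape(500, 2, 500, 2).mean(axis=(1, 3))   # view both at the same size
print(rel(binned))                            # ~0.05 again: the photo itself is no noisier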

2. What is the mechanism that connects the definition of noise in (1) with how much light the photo is made from under the assumption that it is captured by an array of nxn independent pixels connected to off-chip amplifiers?
If we have two photos of the same scene displayed at the same size, then, with the same assumptions about electronic noise discussed in the previous paragraph, the photo made with more light will be less noisy.

No. The noise per unit area will be the same but the noise in the photo from the larger sensor will be less obvious because it has not been enlarged as much for the same size of final image. The larger sensor receives more light than the smaller sensor because it is bigger. You are confusing cause and effect.

Some thought experiments:

1. Use a D800 once with a 50mm f/1.4 and once with a 35mm f/1.8. How do the two images compare with respect to your noise metric? How do they compare to a D7000 + 35mm f/1.8?

2. How does a theoretical 4/3 6,549 x 4,912 image compare to the 7,360 x 4,912 D800 image relative to your noise metric?

3. As a frequent user of a D750 and an OM-D E-M5 Mark II, how are the variables available to me before taking a picture affected by your metric? How does your metric manifest itself in my A3+ prints?
How about a demonstration? Here's the Canon 6D at ISO 6400 and Olympus EM5 at ISO 1600, thus both photos made with the same total amount of light (additionally, both sensors have similar electronic noise levels). All but identical with regards to noise.

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Reply to Dimitris' response:

I don't have images with electrons. Sorry.

Well, Dimitris, you've expressed the crux of your misunderstanding. Light is composed of photons. Those photons release electrons from the silicon in the sensor, and it is those electrons that the camera records. The counts of those photoelectrons are the very basis of the information needed to create the photo.
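(A small sketch of that point, with an assumed quantum efficiency of 0.5: thinning a Poisson stream of photons still gives Poisson-distributed photoelectron counts, so the recorded electron counts carry the same statistics as the light itself:)

import numpy as np

rng = np.random.default_rng(1)
photons = rng.poisson(1000, size=1_000_000)   # photons arriving at one pixel, many trials
electrons = rng.binomial(photons, 0.5)        # each photon frees an electron with probability 0.5 (assumed QE)
print(electrons.mean(), electrons.var())      # both ~500: still Poisson, mean equals variance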

Reply to Jack Hogan:

He is not interested in equivalence, shutter speed is different and Exposure is the same.

I assume you mean the example I linked. The exposures are not the same for the two photos -- the linked example shows the same noise in two photos made with the same total amount of light.
What an absurd notion this idea of total light equivalence is – and without any scientific basis whatsoever, which is of course why you will not find a single scientific paper from a reputable source explaining the theory. Plenty on shot noise and other sources of noise in digital images but not a single one that even considers the size of the sensor in this regard.
Try as hard as you like, you will not find a single scientific paper, or even an informal article from a reputable source, that offers a scientific explanation as to why the more light a sensor receives (of a given intensity or brightness) the higher the signal to noise ratio of each pixel or the image as a whole will be.
The simple fact is that small sensors produce lower quality images than large sensors because the laws of physics and limits of technology mean that it is not possible to produce a small sensor with the signal to noise ratio and dynamic range of a large sensor. The larger the sensor, the easier it becomes.
...aside from the notion of total light, why do you think the larger sensor system has a greater SNR than the smaller sensor system (for a given exposure)?
That is also why the quality of images, especially visible noise and dynamic range, improves dramatically when one doubles the size of a tiny sensor but the difference in quality between full frame and medium format sensors is much less, and even often non-existent. Above a certain size, the electronics are working at their maximum in terms of light capture, signal to noise ratio and dynamic range and any further increase in size will give no improvement in quality – but higher resolution of course.
That would not happen if the noise in an image was really dependent on the total amount of light used to make the image. It would simply keep improving through medium and large format, but it does not because it is simply not true.
Can you give an example of any two photos of the same scene made with cameras that had sensors and supporting hardware that introduced [roughly] the same electronic noise, where the photo made with more light isn't less noisy?

Here's an example:

http://www.josephjamesphotography.com/dpr/pics/6DEM5/fullsize.htm

Oh, whoops -- they are made with the same amount of light and are all but identical, noise-wise. Must be a coincidence.
That is all very well but what I asked for was the scientific basis of your theory of Total Light Equivalence. And the fact that you are unable to provide it illustrates all too clearly that it does not exist. End of story.
 
That is all very well but what I asked for was the scientific basis of your theory of Total Light Equivalence. And the fact that you are unable to provide it illustrates all too clearly that it does not exist. End of story.
It goes back to academics like Poisson who never held a camera. Just forget about it.
 
What an absurd notion this idea of total light equivalence is – and without any scientific basis whatsoever, which is of course why you will not find a single scientific paper from a reputable source explaining the theory. Plenty on shot noise and other sources of noise in digital images but not a single one that even considers the size of the sensor in this regard.
Ah, a subtle change to your trolling. You used to deny shot noise. Now you're denying that there are any papers which 'even consider the size of the sensor'. Well, I don't know whether there are or not, but since it's such a simple and obvious consequence of shot noise, it's not going to be a research result. Simply, what we know about shot noise is that if you take samples with a mean of n events, then the standard deviation will be √n. The signal to noise is the ratio n/√n = √n. What we know about the photoelectric effect is that individual photoelectrons are released by individual photons. Put the two together and you find that the more photons per sample (pixel), the higher the number of photoelectrons (events) and therefore the higher the SNR.
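(A numerical check of that √n claim -- my sketch, not Bob's:)

import numpy as np

rng = np.random.default_rng(2)
for n in (100, 400, 1600, 6400):
    counts = rng.poisson(n, size=200_000)            # repeated samples with a mean of n events
    print(n, counts.mean() / counts.std(), n ** 0.5)  # measured SNR vs predicted sqrt(n)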
Try as hard as you like, you will not find a single scientific paper, or even an informal article from a reputable source, that offers a scientific explanation as to why the more light a sensor receives (of a given intensity or brightness) the higher the signal to noise ratio of each pixel or the image as a whole will be.
Ah, now we're back to denying shot noise. Anyway, here is a nice post from Iliah giving a few of the sources that you claim don't exist.

https://www.dpreview.com/forums/post/60115587
The simple fact is that small sensors produce lower quality images than large sensors because the laws of physics and limits of technology mean that it is not possible to produce a small sensor with the signal to noise ratio and dynamic range of a large sensor. The larger the sensor, the easier it becomes.
Please do explain which 'laws of physics' you are talking about, and precisely what are the resulting engineering constraints that make it 'not possible', at least over the range we are talking about for camera sensors (generally sensor diagonals in the tens of millimetres and pixel sizes in the range 3-10 microns). And while we're at it, could we please have a reference to a 'single scientific paper from a reputable source explaining the theory'.
That is also why the quality of images, especially visible noise and dynamic range, improves dramatically when one doubles the size of a tiny sensor but the difference in quality between full frame and medium format sensors is much less, and even often non-existent. Above a certain size, the electronics are working at their maximum in terms of light capture, signal to noise ratio and dynamic range and any further increase in size will give no improvement in quality – but higher resolution of course.
That would not happen if the noise in an image was really dependent on the total amount of light used to make the image. It would simply keep improving through medium and large format, but it does not because it is simply not true.
The same above. Let's have a 'single scientific paper from a reputable source explaining the theory'.

--
Tinkety tonk old fruit, & down with the Nazis!
Bob
Once again you simply prove my point. Nowhere in your post or in any of the links you provide is there any explanation whatsoever as to why the more light a sensor receives of a given intensity or brightness, that is to say the greater the area of the sensor, the higher the signal to noise ratio of each pixel or the image as a whole will be.
OK, let's discuss that. One good reason why there wouldn't be an explanation as to why 'the more light a sensor receives of a given intensity or brightness...the higher the signal to noise ratio of each pixel or the image as a whole will be' is that what you wrote is nonsense in scientific terms. The phrase 'the more light a sensor receives of a given intensity or brightness' makes no sense at all. Maybe it's worthwhile (always an optimist, me) trying to translate it into something that makes sense, so it can be discussed.

Let's rephrase your proposition to 'The luminous energy gathered by a sensor is given by the exposure times the sensor's area'.

Exposure is measured in lux seconds.

The lux (symbol: lx) is the SI derived unit of illuminance and luminous emittance, measuring luminous flux per unit area. It is equal to one lumen per square metre.

(I hope you don't mind me using Wikipedia as a convenient source; if you doubt what it says, you can easily check the source they used, in this case https://physics.nist.gov/cuu/Units/units.html)
So, if we multiply the exposure (in lux seconds) by the area (in square metres), we have an amount measured in lumen seconds.

The lumen (symbol: lm) is the SI derived unit of luminous flux, a measure of the total quantity of visible light emitted by a source. Luminous flux differs from power (radiant flux) in that radiant flux includes all electromagnetic waves emitted, while luminous flux is weighted according to a model (a "luminosity function") of the human eye's sensitivity to various wavelengths.

https://en.wikipedia.org/wiki/Lumen_(unit)
using the same source reference as for 'lux'.

So 'luminous flux' is power weighted by the 'luminosity function'. Since, for a given spectrum, the function provides a constant weighting, we can take 'luminous flux' as being equivalent to 'power' (which is measured in watts). Thus we have now established that multiplying exposure by sensor area gives us a quantity measured in watt seconds (times some proportionality factor, dependent on the illumination spectrum). A watt second is a joule, the SI unit of energy, and since light is a quantum phenomenon, an amount of energy must correspond to an integral number of photons. Since I'm sure that you won't want to take my word for it, you can see the same chain of reasoning carried through here:

http://homepages.inf.ed.ac.uk/rbf/CVonline/LOCAL_COPIES/RYER/ch02.html

OK, so now we have established that exposure times area gives us a number of photons. Now, suppose that we take two sensors with different areas and sample each with the same number of samples; clearly the area of each sample will be in proportion to the area of its sensor. If we subject both sensors to the same exposure, then the number of photons collected (on average) per sample will be in proportion to the area of the sensors.
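(To put rough numbers on that chain of reasoning -- my own back-of-envelope sketch, assuming monochromatic 555 nm light so that 1 W corresponds to 683 lm, and ignoring lens transmission and quantum efficiency:)

h, c, wavelength = 6.626e-34, 3.0e8, 555e-9      # Planck's constant (J s), speed of light (m/s), wavelength (m)
lm_per_watt = 683                                # luminous efficacy at 555 nm
exposure = 0.1                                   # lux seconds, the same for both sensors

for name, w_mm, h_mm in [("full frame", 36, 24), ("Four Thirds", 17.3, 13)]:
    area = (w_mm / 1000) * (h_mm / 1000)             # m^2
    luminous_energy = exposure * area                # lumen seconds
    radiant_energy = luminous_energy / lm_per_watt   # joules
    photons = radiant_energy / (h * c / wavelength)  # photon energy = hc/lambda
    print(name, photons)                             # larger sensor, proportionally more photons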

--
Tinkety tonk old fruit, & down with the Nazis!
Bob
 
It is also telling that they do not have exposure / integration time / anything like that as a user-entered parameter in their calculator; instead it is the x-axis: http://camera.hamamatsu.com/eu/en/technical_guides/relative_snr/index.html
Surprise surprise, no mention of total light on the x-axis, just photons...
Those photons are total light per pixel ;) How we form an image from those pixels is out of their control. They provide just the raw data.
 
I think we should be campaigning for a better deal for photons.
 
Once again you simply prove my point. Nowhere in your post or in any of the links you provide is there any explanation whatsoever as to why the more light a sensor receives of a given intensity or brightness, that is to say the greater the area of the sensor, the higher the signal to noise ratio of each pixel or the image as a whole will be.
How about the Hamamatsu link provided by Iliah Borg ? http://hamamatsu.magnet.fsu.edu/articles/ccdsnr.html

In the photon-limited regime we are interested in, they show that SNR is approximately sqrt(P Qe t) where Qe is the quantum efficiency, P is the photons/second incident on each pixel and t the exposure time.

In other words SNR ≈ sqrt(photons captured)
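(For reference, the full expression on that page is, roughly, SNR = P·Qe·t / sqrt(P·Qe·t + D·t + Nr²), with D the dark current and Nr the read noise; a quick sketch of how it collapses to sqrt(photons captured) when photon noise dominates:)

import math

def snr(photons_per_s, qe, t, dark=0.0, read_noise=0.0):
    signal = photons_per_s * qe * t                          # photoelectrons captured
    return signal / math.sqrt(signal + dark * t + read_noise ** 2)

print(snr(10_000, 0.5, 0.01))            # ~7.07 with dark current and read noise negligible
print(math.sqrt(10_000 * 0.5 * 0.01))    # sqrt(photons captured), also ~7.07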
The fact is that there is no scientific basis whatsoever for the notion of Total Light Equivalence.
Tell that to the folk who make sensors.
 
Maximum entropy methods (MEM) are a cornerstone of astronomical imaging. Here's just one review

The difference between still photography and astronomical imaging is that the former enjoys a radically higher S/N than the latter.

Astronomers have slowly replaced frequency-of-occurrence (or classic Fisher) statistics with hypothesis-based statistics. They had no choice. And hypothesis-based statistics are now commonplace in oil exploration, medical diagnostics, and risk analysis – just to name a few applications.

Until recently, MEM approaches required access to expensive IT resources. This is no longer a factor. Slowly but surely, information theory will become more common.

To be complete, frequency-of-occurrence approaches are sufficient when information content is high. For this reason, hypothesis-based methods and information theory will be uncommon in still-photography imaging for the foreseeable future. Why learn new statistics when the tools you have are good enough? This doesn't preclude information theory from contributing useful insights about still-photography technologies.
 
You don't need a scientific paper to show that if the total light is the same, the shot noise is the same.
You sure about that?

https://www.dpreview.com/forums/post/55746756
I do have an issue with what you did there. It seems to me that this is mostly about statistics: it takes a single example of how four photoelectrons might be distributed across four pixels and one pixel respectively, and four photoelectrons is not really a population representative of much of a normal image. Detail Man raised this point in that thread.
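(To illustrate why such a tiny count isn't representative, a sketch of my own comparing the same total light collected by one pixel versus split across four pixels and summed, at a more typical signal level:)

import numpy as np

rng = np.random.default_rng(3)
trials, total_mean = 200_000, 4000

one_pixel = rng.poisson(total_mean, size=trials)
four_pixels = rng.poisson(total_mean / 4, size=(trials, 4)).sum(axis=1)

print(one_pixel.mean(), one_pixel.std())       # ~4000 +/- ~63 (= sqrt(4000))
print(four_pixels.mean(), four_pixels.std())   # the same: equal total light, equal shot noise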
 
I think we should be campaigning for a better deal for photons.
For the new deal. Part of it must be demosaicking ;)
 
Upon further thought I don't quite understand how matlab's bilinear downsizing is implemented. If I just take each quartet and average it I get exactly the same result as uFT and 'binned', nothing like 'bilinear'.
Matlab's bilinear algorithm (and some others) used for downsizing produces some strange results at particular size ratios:

[charts not included]

Look at the kurtosis curve here:

[chart not included]
Jim
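(For what it's worth, 'taking each quartet and averaging it' is plain 2x2 binning, something like the sketch below -- written in Python/numpy rather than Matlab, and not a claim about what imresize does internally:)

import numpy as np

def bin2x2(img):
    # average each non-overlapping 2x2 block; assumes even dimensions
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

img = np.random.default_rng(4).poisson(100, size=(8, 8)).astype(float)
small = bin2x2(img)   # 4x4 result; a bilinear resize weights neighbouring pixels differently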
 
For the new deal. Part of it must be demosaicking ;)
Something like that. I always thought it was discriminatory, the way poor photons get segregated into 'signal' (good) and 'noise' (bad). Seems to me there's a bit of good and bad in every photon.
 
What an absurd notion this idea of total light equivalence is – and without any scientific basis whatsoever, which is of course why you will not find a single scientific paper from a reputable source explaining the theory. Plenty on shot noise and other sources of noise in digital images but not a single one that even considers the size of the sensor in this regard.
I believe the main reason you don't find peer-reviewed papers on the subject is that the basic concept follows trivially from the Poisson statistics of photon noise, so would be considered uninteresting and unlikely to pass peer review.
 
Something like that. I always thought it was discriminatory, the way poor photons get segregated into 'signal' (good) and 'noise' (bad). Seems to me there's a bit of good and bad in every photon.
True, but that exceeds the limits of the instrumentation that is available to me. So I think, like it is with colour management, I opt for smoothness.
 
