Common misconceptions about noise, pixel size and sensor size

Take it easy, Bob. It reminds me of 10-12 years ago, when I and others read Clark's explanations as the bible.

Today we know a lot more thanks to a number of knowledgeable people here at dpreview, and it was John Sheehy and others who killed the myths around Clark's explanations. Read: Emil Martinec (and also you, BobN2), The_Suede, Gaborek, Wishnowski, John Sheehy and many, many more.
What myths? I strive for accuracy, and not one of the above-mentioned people has ever emailed me about any issues with anything on my web site that I can remember.
I remember quite a lengthy discussion on the Usenet forums between yourself, Emil Martinec, John Sheehy and myself, where, unless I'm remembering wrongly, you flat out refused to countenance the errors that were pointed out to you. Email is not the only means of communication.

--
Bob.
DARK IN HERE, ISN'T IT?
Exactly, but the discussion was here, and based from my side on a page which I also referred to at that time.
The one I was thinking about was on the Usenet forums; I very distinctly remember Emil encouraging me to join the discussion. At the time, Roger wasn't posting here.

--
Bob.
DARK IN HERE, ISN'T IT?
Nope, never been on Usenet. The discussion was here at dpreview, 2008 or earlier, and I referred to a certain site and its logic about "one big bucket or several smaller buckets", which I have known since then was wrong. (John and Emil did the myth killing at that time, especially John, who went against conventional thinking and showed that there is nothing wrong with higher resolution and smaller pixels; on the contrary.) You learn as long as you live.

--
Member of Swedish Photographers Association since 1984
Canon, Hasselblad, Leica, Nikon, Linhof, Sinar, Zeiss, Sony. Phantom 4
 
The lux is not just the unit of light intensity (really luminous power or emittance): it is the light intensity per unit area (its units are lumens per square meter, and lumens are luminous energy per unit time). The problem is that having the same amount of light per unit area doesn't mean the same light over different areas. Exposure is the light per unit area accumulated over an exposure time, thus lux-seconds. Equal exposures, then, for the same light source, place the same amount of light per unit area on the sensor. A larger sensor, therefore, such as FF vs mFT with 4 times the area, will receive and record 4 times the light, assuming equal QE (properly used).
It might be better to think of it as 4 times the total number of photons.
And, since it has 4 times the total light, it has twice the s/n, considering only shot noise.
I do not think this is true. It would be true for a single photodetector taking 4 sequential measurements to find the average light level, but not for an array of detectors where the signal is the differences between them (such as black and white bars on a test chart).

Your suggestion is equivalent to saying that a longer recording time improves the SNR of a sound recording.
Careful with the analogies. You're saying that a 10 second sound clip sounds better than a 5 second one, but that's the wrong quantity.
I am saying that it doesn't. The "total light" theory says that it does, simply because more samples are taken.
OK, we agree here. But I fail to see the analogy you're drawing between sound recording and light recording in this case. A 10 second sound recording is not self-similar to a 5 second one, although the noise present in each sample is random and therefore subject to statistical averaging. However, such averaging would also average away the intelligence portion of the signal.
In sound recording, temporality is an essential aspect of the data record.
And in still photography space (2-dimensional) is the equivalent aspect. A bar test chart for sensors is the same as a square wave signal for audio tests.
No question there. But the noise reduction we employ in still photography averages a spatial signal with ostensibly fixed intelligence portion over TIME. We can also average over space at a loss in resolution - or spatial bandwidth.
In still photography, as long as the subject doesn't move you can trade recording time for signal quality, if you're considering only shot noise.
That is about ETTR, not sensor size.
This is an argument pertaining to the use of a sound recording analogy.
Compare an ISO 6400 image to an ISO 100 image. Six stops more light in the latter, with higher acuity and smoother backgrounds as a consequence. Since the shot noise in each pixel is independent, we can apply the same statistics to the spatial array that we would to a series of repeated trials on a single pixel.
A series of exposures on one pixel gives you a more accurate measurement of the DC illuminance on that pixel. A picture made up from these more accurate measurements will have a better SNR.
Correct.
Multiple or longer exposures have nothing to do with sensor size. You can apply them to any sensor.
Correct. But the total light theory also holds at the pixel level here. A longer exposure of a still image contains more photons in each pixel, meaning that the value stored in the pixel is closer to the expected value of the intelligence portion of the signal. However you slice it, random noise is still random.
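A quick numerical sketch of that statistics argument (the photon counts are arbitrary illustrative values): simulate Poisson photon arrivals and compare the SNR at 4x the mean count.

```python
import numpy as np

rng = np.random.default_rng(42)

def snr(mean_photons, trials=100_000):
    # Photon arrivals are Poisson-distributed: the variance equals the
    # mean, so shot noise is sqrt(mean) and SNR = mean / sqrt(mean).
    samples = rng.poisson(mean_photons, trials)
    return samples.mean() / samples.std()

for n in (1_000, 4_000):  # "4 times the total light"
    print(f"{n} photons: SNR ~ {snr(n):.1f} (sqrt = {np.sqrt(n):.1f})")
```

Four times the photons gives twice the SNR, whether the repeated trials are spread over time or over an array of pixels.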
Now in sound recording, digital or analog, there is a sampling time dependency to signal quality; reduce the sampling (or in analog, the recording) bandwidth to less than the spectral width of the signal, and your recording will suffer.
Likewise, bigger pixels give you lower resolution and worse aliasing. Nothing to do with sensor size, although I suspect that spatial frequency bandwidths (of noise and signal) do have something to do with the better image quality from larger sensors.
Correct.
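A minimal 1-D sketch of that sampling point (the detail frequency and pixel pitches are made-up values): detail above the pixel grid's Nyquist limit folds back to a false lower frequency.

```python
signal_freq = 30.0  # subject detail in cycles/mm, illustrative value

for pitch_mm in (0.005, 0.025):   # "small" vs "big" pixels
    fs = 1.0 / pitch_mm           # samples per mm
    nyquist = fs / 2.0
    # Frequencies above Nyquist alias to |f - round(f/fs) * fs|.
    recorded = (signal_freq if signal_freq <= nyquist
                else abs(signal_freq - round(signal_freq / fs) * fs))
    print(f"{pitch_mm * 1000:.0f} um pixels: Nyquist {nyquist:.0f} c/mm, "
          f"records {recorded:.0f} c/mm")
```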
In sound recording also the phenomenon being recorded is not quantized; there is noise associated with the recording, but it's not in the signal, or it's so far down it's irrelevant...it's all in the recording electronics.
Sound particles are called phonons. They are quantised. But sound recording seldom explores very low sound levels (although the quality of room acoustics can be quite hard to record, and so can the reverberations in a piano). Photographers regularly work in very low light levels.
True enough, everything is indeed quantized, but with sound recording it isn't a factor.
With light recording, the shot noise component is a significant fraction of the signal, and light's quantized nature is clear.
Sound recording is mostly done in the audio equivalent of bright sunshine.
Exactly my point here, though I didn't say it as precisely.

 
OK, I would very much like to read that thread but have no idea where to start to search for it.
 
 
Likewise, bigger pixels give you lower resolution and worse aliasing. Nothing to do with sensor size, although I suspect that spatial frequency bandwidths (of noise and signal) do have something to do with the better image quality from larger sensors.
Correct.
What about 16MP on m43 and FF, or 16MP m43 vs. a Sony 42MP on FF, using a lens giving an equivalent AOV? What do you mean by resolution here, as I may mean something else?
 
I remember quite a lengthy discussion on the Usenet forums between yourself, Emil Martinec, John Sheehy and myself, where, unless I'm remembering wrongly, you flat out refused to countenance the errors that were pointed out to you. Email is not the only means of communication.
I do not recall any such discussion. I am not discounting that I may have made an error. Emil and I have in general agreed. If you are talking about some discussion of single-photon pixels, I couldn't care less then and still don't care--I have better things to do. I am more interested in real-world applications of realistic systems. And I couldn't care less about vague charges.

If you think there is an error anywhere on my website now I would like to know what you think is in error. I will correct it if I can determine what the correction should be.

Roger
 
It might be better to think of it as 4 times the total number of photons.
Good. Although this change is only appropriate if, again, we assume equal QE, for, otherwise, a different proportion of those photons will manifest themselves as charges on the sensor, which is what is really relevant.
You can't just increase the sensor size and bingo there are more photons. The photons must be gathered by something---the lens. In the typical photographer application, say a 2x crop and full frame, the photographer keeps the same field of view and f-ratio on both cameras. The focal length must double from 2x crop to FF. Thus the lens entrance pupil area increases 4x on the FF relative to 2x crop. The lens is the CAUSE of the increased light gathered from the subject, not the sensor.
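Roger's arithmetic as a short sketch, assuming example focal lengths of 50mm (2x crop) and 100mm (FF) for the same field of view:

```python
import math

def pupil_area_mm2(focal_mm, f_number):
    d = focal_mm / f_number        # entrance pupil diameter D
    return math.pi * d**2 / 4.0    # A = pi * D^2 / 4

crop = pupil_area_mm2(50, 4.0)     # 2x crop at f/4
ff = pupil_area_mm2(100, 4.0)      # FF at f/4, same field of view
print(f"2x crop: {crop:.0f} mm^2, FF: {ff:.0f} mm^2, ratio: {ff / crop:.0f}x")
# Doubling the focal length at the same f-ratio quadruples the entrance
# pupil area, so the lens gathers 4x the light from the same subject.
```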

Roger
 
Yes, assuming the more shallow DOF were not an issue (or, at least, not so much an issue as the greater noise at a deeper DOF). Alternatively, the FF photographer could shoot with the same aperture diameter but 4x the exposure time (assuming motion blur is a non-issue).
 
You can't just increase the sensor size and bingo there are more photons. The photons must be gathered by something---the lens. In the typical photographer application, say a 2x crop and full frame, the photographer keeps the same field of view and f-ratio on both cameras. The focal length must double from 2x crop to FF. Thus the lens entrance pupil area increases 4x on the FF relative to 2x crop. The lens is the CAUSE of the increased light gathered from the subject, not the sensor.
Hmm, well, I'm not sure when you came into this movie, but if you go back a bit, you'll see that I was working in the context of the OP's rather "interesting" proposed experiment, in which he had a number of differing format situations, all with the same meter reading after the lens. That is, we were indeed talking about what would be equal exposures on the different formats if applied with the same shutter speed. So there was no bingo involved. But thank you: your clarification is likely to be beneficial to one and all, including, I would think, the OP. It is, however, a movie I've seen many times.

--
gollywop
I am not a moderator or an official of dpr. My views do not represent, or necessarily reflect, those of dpr
 
There are still some things I'm not quite sure about, though:

Comparing a Nikon Df and a Panny GM1, both sporting 16MP. Let's say I put a 100/4 (aperture dia. 25mm) on the Df and a 50/4 (aperture dia. 12.5mm) on the Panny. From Clarkvision:

Two lenses with different focal lengths and the same f/ratio deliver the same photon density in the focal plane, but NOT the same total light from the subject

Now I want the advantage from the bigger aperture of the 100/4; that would mean swapping the 50/4 lens for a 50/2 lens. With the bigger aperture I'm getting more light, but the shutter will now have to be faster to keep the same exposure as with the f/4 lens. In this case, what advantage will there be in terms of image quality and noise, since the exposure is the same? Ignoring aesthetic appeal like DOF and bokeh.
No, you keep the shutter speed the same; instead you raise the ISO on the FF camera. Since the light flux is the same, the photon noise will be the same as well at the equivalent ISOs.

There is a physical reality behind the exposure parameters that the photographer has to take into account. I don't want to go into the gory details, just mention it. We choose the shutter speed based on subject motion, camera shake, etc. We choose the physical aperture (not the f-stop) based on desired DOF, available light, diffraction, etc. The only 'meaningless' parameter for the photographer is ISO, which is the gain the system has to apply to achieve a predefined brightness of the middle gray.
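That bookkeeping, sketched for a 2x crop factor (the helper function is hypothetical, not any camera's actual behaviour): keep the shutter speed and entrance pupil the same, and scale the f-number and ISO.

```python
def equivalent_ff_settings(small_f_number, small_iso, crop_factor=2.0):
    # Same entrance pupil diameter and shutter speed give the same total
    # light and the same DOF; the f-number scales by the crop factor and
    # the ISO by its square, keeping the output brightness the same.
    return small_f_number * crop_factor, small_iso * crop_factor**2

f_num, iso = equivalent_ff_settings(2.0, 200)
print(f"mFT f/2 ISO 200 -> FF f/{f_num:.0f} ISO {iso:.0f}, same shutter speed")
```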
 
Likewise, bigger pixels give you lower resolution and worse aliasing. Nothing to do with sensor size, although I suspect that spatial frequency bandwidths (of noise and signal) do have something to do with the better image quality from larger sensors.
Correct.
What about 16MP on m43 and FF, or 16MP m43 vs. a Sony 42MP on FF, using a lens giving an equivalent AOV? What do you mean by resolution here, as I may mean something else?
This line of questioning, as I understood and responded to it, pertained to resolution on the same sensor format. In this case, bigger pixels mean lower sensor resolution and worse aliasing if not properly anti-alias filtered.

Actually, I should modify my statement. I don't quite see why larger sensor formats would have an effect on the noise/signal spatial bandwidth, other than that it's easier to make a lens that matches up to the sensor in larger formats. For a given sensor resolution the pixels are bigger, which offers larger well capacities, which allows for higher bit depth in the conversion, which translates to finer color gradations and lower per-pixel noise.
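A rough sketch of the well-capacity point; the full-well numbers are invented for illustration (real sensors vary widely), assuming full well scales with pixel area:

```python
import math

# Hypothetical same-resolution sensors: FF pixels have ~4x the area of
# mFT pixels at 16MP, so assume ~4x the full-well capacity.
wells_e = {"mFT 16MP pixel": 20_000, "FF 16MP pixel": 80_000}

for name, full_well in wells_e.items():
    # Shot-noise-limited SNR at saturation is sqrt(full well).
    print(f"{name}: {full_well} e- full well, max SNR ~ {math.sqrt(full_well):.0f}:1")
```

Four times the well depth buys twice the best-case per-pixel SNR.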

I will be so glad when Eric finishes his QIS and we can dispense with all this integrating well and A/D nonsense.
 
If you think there is an error anywhere on my website now I would like to know what you think is in error. I will correct it if I can determine what the correction should be.

Roger
Fair enough, Roger. I'm not going to go through your site now, but next time someone refers to it and there's something erroneous on it, I'll do you the courtesy of an email. Your site is certainly not nowadays a prime source of error.
 
Yes, assuming the more shallow DOF were not an issue (or, at least, not so much an issue as the greater noise at a deeper DOF). Alternatively, the FF photographer could shoot with the same aperture diameter but 4x the exposure time (assuming motion blur is a non-issue).
Another way of thinking about it is the design of a complete optical system. You can determine the behaviour of the front part, gathering light from a given field of view through an entrance pupil; that determines how much light is collected. In the back half, you can decide where you intercept the collected light, and the size of the sensor required depends on how far along the optical path you decide to intercept it. Then, of course, you have to sort out the lens so everything is in focus at that point.
 
Hi Roger, after reading your article on low light photography, may I know why the lens (aperture) area is "The lens area, A = pi * D /4, where D = Lens diameter = focal length / f/ratio, and pi = 3.14159"
and not pi*radius*radius?


Thanks
 
Your suggestion is equivalent to saying that a longer recording time improves the SNR of a sound recording.
Careful with the analogies. You're saying that a 10 second sound clip sounds better than a 5 second one, but that's the wrong quantity.
I am saying that it doesn't. The "total light" theory says that it does, simply because more samples are taken.
Would it be more accurate to say recording at higher amplitude to improve SNR is equivalent to this total light theory?

That is about ETTR, not sensor size.

This is an argument pertaining to the use of a sound recording analogy.
 
The lux is not just the unit of light intensity (really luminous power or emittance): it is the light intensity per unit area (its units are lumens per square meter, and lumens are luminous energy per unit time). The problem is that having the same amount of light per unit area doesn't mean the same light over different areas. Exposure is the light per unit area accumulated over an exposure time, thus lux-seconds. Equal exposures, then, for the same light source, place the same amount of light per unit area on the sensor. A larger sensor, therefore, such as FF vs mFT with 4 times the area, will receive and record 4 times the light, assuming equal QE (properly used). And, since it has 4 times the total light, it has twice the s/n, considering only shot noise.

Shot noise, by the way, is not generated by the pixels in the sensor; it comes with the light. The magnitude of the shot noise is the square root of that of the signal. The noise from the sensor and the camera's electronics and software is called read noise. Read noise is quite small and becomes significant only in shadow regions, where the signal and its attendant shot noise are very small.
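The two sources combine in quadrature; a sketch with an assumed 3 e- read noise shows why it only matters in the shadows:

```python
import math

read_noise = 3.0  # electrons RMS, an assumed typical value

for signal in (10, 100, 1_000, 10_000):         # electrons captured
    shot = math.sqrt(signal)                    # shot noise = sqrt(signal)
    total = math.sqrt(shot**2 + read_noise**2)  # independent noises add in quadrature
    print(f"signal {signal:>6} e-: shot {shot:6.1f}, total {total:6.1f}")
```

At 10 e- the read noise inflates the total noise by roughly 40%; at 10,000 e- it adds well under 0.1%.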

--
gollywop
I am not a moderator or an official of dpr. My views do not represent, or necessarily reflect, those of dpr.

So should I say 30 lumens instead of lux? Or, since no camera is involved, only a light meter, should the lux-seconds unit be used, due to the similar working principles of a light meter and a camera?

I couldn't quite understand what you mean by "having the same amount of light per unit area doesn't mean the same light over different areas". Isn't the light coming through the lens uniform throughout the image circle? Ignoring the edge, where it gradually fades into no light.

Since shot noise comes with the light, shouldn't it increase in proportion to the size of the sensor? Increasing the size also increases shot noise, because now the area is bigger and receives more light? Am I understanding this right?

Thanks for your detailed breakdown.
 
Hi Roger, after reading your article on low light photography, may I know why the lens (aperture) area is "The lens area, A = pi * D /4, where D = Lens diameter = focal length / f/ratio, and pi = 3.14159"
and not pi*radius*radius?

http://www.clarkvision.com/articles/low.light.photography.and.f-ratios/
Your quote above lost the squared term. It is A = pi * D^2 /4

D/2 = radius. If radius = r, then A = pi * r^2 = pi * (D/2)^2 = pi * D^2 / 4.
Thanks for the clarification. It got lost in copy & paste. I was also confused by the divide-by-4, thinking it was for summing and averaging 4 pixels into 1; in fact it comes from converting D into r.
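For anyone else tripped up by the divide-by-4, a two-line numeric check (any diameter works; 25mm is the pupil from the 100/4 example earlier in the thread):

```python
import math

D = 25.0  # mm
assert math.isclose(math.pi * D**2 / 4, math.pi * (D / 2)**2)
print(f"A = {math.pi * D**2 / 4:.1f} mm^2 either way")
```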
 
Still couldn't quite make out what you mean, Joseph. More light falls on the sensor for larger sensor systems?
For the same exposure, more light falls on larger sensors in proportion to the ratio of the sensor areas (e.g. 4x as much light falls on a FF sensor as an mFT sensor for the same exposure).

On the other hand, the same amount of light falls on the sensor for all formats for Equivalent photos (same DOF and exposure time).
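Both cases, sketched with nominal sensor dimensions (36x24mm FF, 17.3x13mm mFT; the exact area ratio is about 3.8x, which the thread rounds to 4x):

```python
ff_area = 36.0 * 24.0     # mm^2, nominal full frame
mft_area = 17.3 * 13.0    # mm^2, nominal Micro Four Thirds

# Case 1: same exposure, i.e. the same light per mm^2 on both formats.
print(f"same exposure: FF collects {ff_area / mft_area:.1f}x the total light")

# Case 2: Equivalent photos: the FF exposure is lower by the area ratio
# (e.g. f/4 vs f/2 at the same shutter speed), so the totals match.
ff_exposure = mft_area / ff_area  # relative to an mFT exposure of 1.0
print(f"equivalent photos: total light ratio "
      f"{ff_exposure * ff_area / mft_area:.2f}x")
```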
From Clarkvision:

...light from lenses of the same f/ratio has the same light density in the focal plane
Indeed. The light density (exposure, measured in lux-seconds or, equivalently, photons/mm²) is the same for the same scene, relative aperture, lens transmission (t-stop vs f-stop), and exposure time.

However, the total amount of light falling on the sensor (measured in lumen seconds, or, equivalently, photons) is the product of the exposure (light density) and sensor area.
...the focal length of the lens spreads out the light...

My understanding is that because a longer FL spreads the light more, a bigger aperture is needed to let more light through to maintain the same light density at the same f-stop. Hence the same exposure time, same shutter speed, same ISO. How is this more light when the density is the same?
The same light density over a larger area results in a greater total amount of light. For example, a bowl with an 8 inch diameter placed in the rain will collect 4x as much water as a bowl with a 4 inch diameter. Likewise, the same exposure on a sensor with 4x the area results in 4x as much light falling on the sensor.
I still don't quite get why totaling the light over the area plays an important role. I imagine myself as a pixel, and when I look up, I only see the microlens directing light to me. Is knowing the total light on the area going to make me produce more charge? The light density is the same as with a lens having a smaller image circle at the same f-stop, after all. How is the area collecting light when actually I am the one doing it? I could understand it if the bowl above my head were 8" and could direct more light to me.

For the same perspective, framing, and aperture (entrance pupil) diameter, the DOF will be the same. For example, 50mm f/2 on mFT and 100mm f/4 on FF both have the same [diagonal] angle of view and aperture diameter (50mm / 2 = 100mm / 4 = 25mm), so if photos of the same scene are taken from the same position (same perspective), the DOFs will be the same.

Furthermore, if the exposure times are also the same, the same total amount of light will fall on the sensor since the same portion of the scene is being recorded, the aperture diameters are the same, and the exposure times are the same.
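Checking that numerically (a sketch; the equal diagonal AOV is as stated above):

```python
import math

mft_pupil = 50 / 2.0    # mm, 50mm f/2 on mFT
ff_pupil = 100 / 4.0    # mm, 100mm f/4 on FF, same diagonal AOV
assert math.isclose(mft_pupil, ff_pupil)
print(f"both lenses have a {mft_pupil:.0f} mm entrance pupil")
# Same scene, same pupil, same exposure time: the same total light
# reaches each sensor, just spread over different sensor areas.
```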
But how do you get the same exposure time?
For example, 50mm f/2 1/100 on mFT and 100mm f/4 1/100 on FF (set the ISO setting to taste for the desired output brightness, or just use Auto ISO).
Hmm... say we manually select ISO, shutter speed and aperture. The 100/4 should give the same exposure as the 50/4, since the light density is the same. I experienced it myself when using the 5D2 and 40D with the same 70-200 f/2.8 lens.

By opening the 50/4 up to f/2 to match the physical aperture size of the 100/4, wouldn't that overexpose the shot? Increasing the shutter speed to maintain the same exposure, and getting the desired DOF, is the logical thing to do here. My question is: what is the advantage then, comparing picture quality with the lens at f/4 and a slower shutter speed vs. the lens at f/2 and a faster shutter speed, ignoring the DOF appeal?
 
