Common misconception with noise, pixel size and sensor size

This thread hopes to address the common misconception regarding sensor size, pixel size & noise
But, for the most part, it really is as simple as saying that the more light a photo is made from, the less noisy it will be, that larger sensors of a given generation record more light than smaller sensors in proportion to the ratio of the sensor areas for a given exposure, and that the more pixels the photo is made from, the higher the quality of the photo will be.
Technically speaking, more signal (light), more noise, and higher SNR. If you normalize two images so the max signal is max white in both, then the apparent noise is lower for the photo with the larger signal, owing to its higher SNR.
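To put rough numbers on that, here's a minimal Python sketch (all values are illustrative, not from any particular camera):

```python
import math

# Minimal sketch: shot-noise-limited SNR grows as sqrt(signal), even
# though the absolute noise grows too. All photon counts are illustrative.
for n_photons in (100, 1_000, 10_000):
    shot_noise = math.sqrt(n_photons)   # RMS shot noise, in photons
    snr = n_photons / shot_noise        # = sqrt(n_photons)
    print(f"{n_photons:>6} photons: noise = {shot_noise:6.1f}, SNR = {snr:6.1f}")
```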
 
Take it easy, Bob. It reminds me of 10-12 years ago, when I and others read Clark's explanations as the bible.

Today we know a lot more thanks to a number of knowledgeable people here at DPReview, and it was John Sheehy and others who killed the myths around Clark's explanations. Read: Emil Martinec (and you too, BobN2), The_Suede, Gaborek, Wishnowski, John Sheehy and many, many more.
What myths? I strive for accuracy and not one of the above mentioned people has ever emailed me about any issues with anything on my web site that I can remember.
 
... "a larger sensor" [does] "not collect more light over a given amount of detail. Only a larger lens can collect and deliver more light."

Source: http://www.clarkvision.com/articles/telephoto.system.performance/

"There are three factors that determine the true exposure in a camera + lens. 1) The lens area, or more accurately, the lens entrance pupil, which is the effective light collection area of a complex lens. The area determines how much light the lens collects to deliver to the sensor. 2) The angular area of the subject. The product of these two values is called Etendue, or A*Ω (A*Omega) product. (A= the lens entrance pupil area, and Ω, omega = the angular area of subject). The third value is 3) exposure time, the length of time the sensor is exposed to light."

Source: http://www.clarkvision.com/articles/low.light.photography.and.f-ratios/

Angular Area Approximation (considering any rectangle within the maximum field of view).

The more general (rectilinear) case: https://en.wikipedia.org/wiki/Steradian#Definition
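As a small illustration of the A*Ω product from the quote above, here's a sketch with made-up subject angles and exposure time (none of these numbers come from the article):

```python
import math

# Hypothetical numbers, just to illustrate Clark's A*Omega (etendue) product.
pupil_diameter_mm = 25.0                      # e.g. a 50 mm f/2 lens: 50/2 = 25 mm
pupil_area_mm2 = math.pi * (pupil_diameter_mm / 2) ** 2

# Angular area of a small rectangular subject (small-angle approximation):
subject_w_rad = 0.02                          # assumed subject width in radians
subject_h_rad = 0.01                          # assumed subject height in radians
omega_sr = subject_w_rad * subject_h_rad      # solid angle in steradians

exposure_time_s = 0.01                        # 1/100 s, assumed

# Collected light is proportional to A * Omega * t:
relative_light = pupil_area_mm2 * omega_sr * exposure_time_s
print(f"A*Omega*t = {relative_light:.5f} (arbitrary units)")
```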

.

The (Signal + Noise) / Noise Ratio ...

... which can only be a temporal measurement (from combining multiple samples over time when considered at the level of a single photosite) - as opposed to a spatial measurement (simultaneously combining multiple samples at the level of an array of photosites) ...

... varies as the square root of the product of the Signal+Noise and the Quantum Efficiency (of photon-to-electron transduction at the optical wavelength of measurement).
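A minimal sketch of that relationship, with assumed photon counts and QE values:

```python
import math

def photon_snr(photons: float, qe: float) -> float:
    """Shot-noise-limited SNR: detected electrons = photons * QE,
    so SNR = detected / sqrt(detected) = sqrt(photons * QE)."""
    return math.sqrt(photons * qe)

# Assumed values for illustration:
print(photon_snr(10_000, 0.50))  # ~70.7
print(photon_snr(10_000, 0.25))  # ~50.0 (half the QE drops SNR by sqrt(2))
```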

.
From the Clarkvision site:

Given two sensors with equal numbers of pixels, and each with lenses of the same f/ratio, the larger sensor collects more photons yet has the same spatial resolution.
The lens for the larger sensor would have a longer focal length in order to cover the same field of view as the system with the smaller sensor, and the lens will also have a larger aperture diameter,
thus collecting more light. ...

... and the larger pixels simply enable collection of the increased light delivered by the lens.
Thanks for the more specific links to my reply.

"The larger the diameter the lens, to more light it collects"
It was confusing on first reading; I kept thinking the entrance pupil referred to the front glass element, because the diagrams show only a piece of glass & a sensor, until I read the second article link about low-light photography & found this:

The 50 mm f/2 lens has a lens diameter of 50/2 = 25 mm

In my head I changed the word from "lens" to "aperture" & re-read; that got me some eureka moments. LOL!

There are still some things I'm not quite sure about, though:

Comparing a Nikon Df & a Panny GM1, both sporting 16MP. Let's say I put a 100/4 (aperture dia. 25mm) on the Df & a 50/4 (aperture dia. 12.5mm) on the Panny. From Clarkvision:

Two lenses with different focal lengths and the same f/ratio deliver the same photon density in the focal plane, but NOT the same total light from the subject



Now I want the advantage of the bigger aperture of the 100/4; that would mean swapping the 50/4 lens for a 50/2 lens. With the bigger aperture I'm getting more light, but the shutter will now be faster for the same exposure as with the f/4 lens. In this case, what advantage will there be in terms of image quality & noise, since the exposure is the same? Ignoring aesthetic appeal like DOF & bokeh.
 
My question to Eric is, why do you think larger sensors give better dynamic range and less noise?
Let's consider the case of more pixels of the same size.

As I don't do video, I'm interested only in spatial noise, so that would be DR with (at the dark end) SNR measured across an area of the sensor. At the bright end, we are dealing with full wells, which seem fairly straightforward.
(DR for sensor technologists is the ratio of maximum signal to signal that yields SNR=1)
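For concreteness, here's a small sketch of that engineering definition, using assumed full-well and read-noise figures:

```python
import math

# Engineering DR = full-well capacity / signal at which SNR = 1.
# With shot + read noise, SNR = S / sqrt(S + r^2); setting SNR = 1 gives
# S^2 = S + r^2, i.e. S = (1 + sqrt(1 + 4*r^2)) / 2.
full_well_e = 60_000      # assumed full-well capacity, electrons
read_noise_e = 3.0        # assumed read noise, electrons RMS

s_snr1 = (1 + math.sqrt(1 + 4 * read_noise_e**2)) / 2
dr_stops = math.log2(full_well_e / s_snr1)
print(f"Signal at SNR=1: {s_snr1:.1f} e-, DR = {dr_stops:.1f} stops")
```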
It seems to me that everyone has a different untenable theory.
I think it is probably due to a loose definition of both "larger" and "dynamic range" and getting mixed up with comparing different technology generations and "perception" vs. measurement. There is NO mystery when it comes to understanding how the sensor is performing.
And yet there is a mystery. Images from larger sensors do have smoother gradation of tones and colours, and the system is more robust in very dim light.

To put it another way, why do professional photographers find it worthwhile to pay the very high prices of "medium format" digital cameras?

(Much the same applied to film, but perhaps for different reasons.)
By "noise", I mean unwanted variation across an array of pixels in a still photo, not changes from frame to frame in a video.
So, spatial noise vs. temporal noise.
Definitely spatial noise. There is no temporal noise in a still photograph.
.... and I don't believe any of your claims, at least not at the sensor level or RAW image level. If the FOV is the same, then of course the spatial resolution is better with the larger sensor, and this can result in many benefits. In the same size print or display image, spatial noise gets more suppressed by human perception, which is why more pixels in the same-size sensor often look better.
Generally though, medium format cameras often have larger pixel sizes than compact cameras and perhaps also higher pixel count. So this is a win on several fronts, except for your wallet.
In the audio world, equipment with response from 20 Hz to 20 kHz records better quality than equipment with 40 Hz to 14 kHz. And playback of 24-bit/192 kHz from a hi-res album sounds better than its 16/44.1 low-res brother.

Could this be the reason why MF looks to have smoother gradations & colors? Having 16 bits & a wider color gamut, perhaps? But I can't see the improvement for Canon & Nikon when they switched from 12 to 14 bits, though.
 
Eric only commented on the numerous errors you've made regarding sensors and sensor behavior. You should also engage in a major revision of your notions about light. Be aware that those equal-valued measurements you're referring to are equal lux values, and lux is a per-unit-area measure (lumens per square-meter). This, I fear, undermines your entire "equal light" thesis – as well it should, because it's flat-out wrong.

--
gollywop
I am not a moderator or an official of dpr. My views do not represent, or necessarily reflect, those of dpr.

Could you be more specific & explain further? From what I gather:

The lux (symbol: lx) is the SI unit of illuminance and luminous emittance, measuring luminous flux per unit area. It is equal to one lumen per square metre. In photometry, this is used as a measure of the intensity, as perceived by the human eye, of light that hits or passes through a surface.



How did I use lux wrongly? Would it be more correct if I said the image circle is 0.xx sq meter since it is not a sq meter in size?



Or maybe the pic of the light meter is wrong, perhaps changing to this one is more accurate:

[image: light meter]
So if lux as a unit for light intensity is wrongly used here, what should be the correct unit?
 
This thread hopes to address the common misconception regarding sensor size, pixel size & noise
But, for the most part, it really is as simple as saying that the more light a photo is made from, the less noisy it will be, that larger sensors of a given generation record more light than smaller sensors in proportion to the ratio of the sensor areas for a given exposure, and that the more pixels the photo is made from, the higher the quality of the photo will be.
Technically speaking, more signal (light), more noise, and higher SNR.
Indeed. This is why I make every effort (especially in this forum) to say "less noisy" as opposed to "less noise".
If you normalize two images so the max signal is max white in both, then the apparent noise is lower for the photo with the larger signal, owing to its higher SNR.
For sure. A "less noisy" photo has less "apparent noise" (lower NSR; higher SNR).
 
Eric only commented on the numerous errors you've made regarding sensors and sensor behavior. You should also engage in a major revision of your notions about light. Be aware that those equal-valued measurements you're referring to are equal lux values, and lux is a per-unit-area measure (lumens per square-meter). This, I fear, undermines your entire "equal light" thesis – as well it should, because it's flat-out wrong.

--
gollywop
I am not a moderator or an official of dpr. My views do not represent, or necessarily reflect, those of dpr.

http://g4.img-dpreview.com/D8A95C7DB3724EC094214B212FB1F2AF.jpg
Could you be more specific & explain further? From what I gather:

The lux (symbol: lx) is the SI unit of illuminance and luminous emittance, measuring luminous flux per unit area. It is equal to one lumen per square metre. In photometry, this is used as a measure of the intensity, as perceived by the human eye, of light that hits or passes through a surface.

How did I use lux wrongly? Would it be more correct if I said the image circle is 0.xx sq meter since it is not a sq meter in size?

Or maybe the pic of the light meter is wrong, perhaps changing to this one is more accurate

[image: light meter]

So if lux as a unit for light intensity is wrongly used here, what should be the correct unit?
The lux is not just the unit of light intensity (really luminous power or emittance): it is the light intensity per unit area (its units are lumens per square-meter, and lumens are luminous energy per unit time). The problem is that having the same amount of light per unit area doesn't mean the same light over different areas. Exposure is the light per unit area accumulated over an exposure time, thus lux-seconds. Equal exposures, then, for the same light source, place the same amount of light per unit area on the sensor. A larger sensor, therefore, such as FF vs mFT with 4 times the area, will receive and record 4 times the light, assuming equal QE (properly used). And, since it has 4 times the total light, it has twice the s/n, considering only shot noise.

Shot noise, by the way, is not generated by the pixels in the sensor; it comes with the light. The magnitude of the shot noise is the square root of that of the signal. The noise from the sensor and the camera's electronics and software is called read noise. Read noise is quite small and becomes significant only in shadow regions where the signal and its attendant shot noise are very small.
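A minimal sketch of the arithmetic above (the exposure figure is assumed; sensor dimensions are nominal):

```python
import math

# Equal exposure puts equal photons per unit area on each sensor, so total
# photons scale with sensor area, and shot-noise SNR with its square root.
photons_per_mm2 = 1_000                            # assumed exposure, photons/mm^2
sensors = {"mFT": 17.3 * 13.0, "FF": 36.0 * 24.0}  # nominal areas in mm^2

for name, area_mm2 in sensors.items():
    total = photons_per_mm2 * area_mm2
    snr = math.sqrt(total)                         # shot-noise-limited SNR
    print(f"{name}: {total:,.0f} photons total, SNR = {snr:.0f}")
# FF collects ~3.8x the photons and gets ~2x the SNR of mFT.
```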

--
gollywop
I am not a moderator or an official of dpr. My views do not represent, or necessarily reflect, those of dpr.

 
This thread hopes to address the common misconception regarding sensor size, pixel size & noise, arising from a recent discussion on the m43 forum here. Do spare a few minutes & read through what I have to say to determine if it is correct and makes sense. I've tried to be as concise & simple as possible, but this is going to be long.

My argument is that a bigger sensor size (e.g. FF) doesn't by itself mean more light captured than with a smaller sensor. It is the pixel size & its quantum efficiency (QE) that determine the output signal's quality or fidelity (signal-to-noise ratio). QE describes how well a pixel converts light photons to electrical charges: the bigger the pixel size, the better the QE, the higher the signal & thus the higher the SNR.

.

.

.

Have a good day y'all. Feel free to exchange your pointers. :)
Now, the amount of light a photo is made from begins with the amount of light that falls on the sensor, where more light falls on the sensor for larger sensor systems for the same exposure (but the same amount of light falls on the sensor for photos of the same scene at the same DOF with the same exposure time).
.

.
But, for the most part, it really is as simple as saying that the more light a photo is made from, the less noisy it will be, that larger sensors of a given generation record more light than smaller sensors in proportion to the ratio of the sensor areas for a given exposure, and that the more pixels the photo is made from, the higher the quality of the photo will be.
Still couldn't quite make out what you mean, Joseph. More light falls on the sensor for larger sensor systems? From Clarkvision:

...light from lenses of the same f/ratio has the same light density in the focal plane

...the focal length of the lens spreads out the light...


My understanding is that because a longer FL spreads the light more, a bigger aperture is needed to allow more light to pass through and maintain the same light density at the same f-stop. Hence the same exposure time, same shutter speed, same ISO. How is this more light when the density is the same?





When you mentioned "same scene at the same DOF with the same exposure time", I interpret it as a shorter equivalent FL with the same (bigger) physical aperture size as the bigger sensor system, to maintain DOF.

With a bigger aperture, more light comes in & the shutter speed goes up to maintain the same exposure. So by saying the same exposure time, are you implying that the scene should be overexposed?

Thanks for the simple explanation. I understood the whole CFA thing. :)
 
In the audio world, equipment with response from 20 Hz to 20 kHz records better quality than equipment with 40 Hz to 14 kHz. And playback of 24-bit/192 kHz from a hi-res album sounds better than its 16/44.1 low-res brother.

Could this be the reason why MF looks to have smoother gradations & colors? Having 16 bits & a wider color gamut, perhaps? But I can't see the improvement for Canon & Nikon when they switched from 12 to 14 bits, though.
The argument for higher sample rates is good. The equivalent in photography is a larger number of pixels. This reduces, or even removes, aliasing and moire. There are not yet any consumer cameras with over-sampling sensors (unless a very low resolution lens, or a pinhole, is used).

I don't see it affecting gradation much. For instance, a sky that grades smoothly from dark to light blue will probably not look more smooth just because there are more pixels, within reason. (Extremely low pixel numbers would show square steps of colour.)

The number of bits in the data does certainly make a difference to smoothness of gradation, but the various sizes of sensor all have similar numbers of bits in their ADCs, so that can't explain the improvement with size. You mainly see the effects of low bit depth (e.g. 8 bit in a JPG) when drastically lifting underexposed areas of an image.
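A small sketch of that last point: quantizing a narrow, smooth gradient at different bit depths shows why banding appears mainly when lifting a limited tonal range (the range chosen here is an arbitrary assumption):

```python
import numpy as np

# Quantize a narrow, smooth gradient at different bit depths and count the
# distinct output codes. Few codes -> visible banding, which is why low bit
# depth shows mainly in drastically lifted shadow regions.
gradient = np.linspace(0.20, 0.25, 10_000)   # a narrow tonal range (assumed)

for bits in (8, 12, 14):
    codes = np.round(gradient * (2**bits - 1))
    print(f"{bits}-bit: {np.unique(codes).size} distinct codes in this range")
```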
 
This thread hopes to address the common misconception regarding sensor size, pixel size & noise
But, for the most part, it really is as simple as saying that the more light a photo is made from, the less noisy it will be, that larger sensors of a given generation record more light than smaller sensors in proportion to the ratio of the sensor areas for a given exposure, and that the more pixels the photo is made from, the higher the quality of the photo will be.
Technically speaking, more signal (light), more noise, and higher SNR.
Indeed. This is why I make every effort (especially in this forum) to say "less noisy" as opposed to "less noise".
If you normalize two images so the max signal is max white in both, then the apparent noise is lower for the photo with the larger signal, owing to its higher SNR.
For sure. A "less noisy" photo has less "apparent noise" (lower NSR; higher SNR).
I think we all have the bad habit of saying "noise" when we mean "Signal-to-noise ratio".
 
The lux is not just the unit of light intensity (really luminous power or emittance): it is the light intensity per unit area (its units are lumens per square-meter, and lumens are luminous energy per unit time). The problem is that having the same amount of light per unit area doesn't mean the same light over different areas. Exposure is the light per unit area accumulated over an exposure time, thus lux-seconds. Equal exposures, then, for the same light source, place the same amount of light per unit area on the sensor. A larger sensor, therefore, such as FF vs mFT with 4 times the area, will receive and record 4 times the light, assuming equal QE (properly used).
It might be better to think of it as 4 times the total number of photons.
And, since it has 4 times the total light, it has twice the s/n, considering only shot noise.
I do not think this is true. It would be true for a single photodetector taking 4 sequential measurements to find the average light level, but not for an array of detectors where the signal is the differences between them (such as black and white bars on a test chart).

Your suggestion is equivalent to saying that a longer recording time improves the SNR of a sound recording.
 
The lux is not just the unit of light intensity (really luminous power or emittance): it is the light intensity per unit area (its units are lumens per square-meter, and lumens are luminous energy per unit time). The problem is that having the same amount of light per unit area doesn't mean the same light over different areas. Exposure is the light per unit area accumulated over an exposure time, thus lux-seconds. Equal exposures, then, for the same light source, place the same amount of light per unit area on the sensor. A larger sensor, therefore, such as FF vs mFT with 4 times the area, will receive and record 4 times the light, assuming equal QE (properly used).
It might be better to think of it as 4 times the total number of photons.
Good. Although this change is only appropriate if, again, we assume equal QE, for, otherwise, a different proportion of those photons will manifest themselves as charges on the sensor, which is what is really relevant.
And, since it has 4 times the total light, it has twice the s/n, considering only shot noise.
I do not think this is true. It would be true for a single photodetector taking 4 sequential measurements to find the average light level, but not for an array of detectors where the signal is the differences between them (such as black and white bars on a test chart).
And now, having established that it has 4 times the number of photons, I guess what you're saying is that an image made from the larger sensor with 4 times the photons, and therefore having Sqrt(4) = 2 times the noise, would not have 4/2 = 2 times the s/n. Is that right?

--
gollywop
I am not a moderator or an official of dpr. My views do not represent, or necessarily reflect, those of dpr.

 
The lux is not just the unit of light intensity (really luminous power or emittance): it is the light intensity per unit area (its units are lumens per square-meter, and lumens are luminous energy per unit time). The problem is that having the same amount of light per unit area doesn't mean the same light over different areas. Exposure is the light per unit area accumulated over an exposure time, thus lux-seconds. Equal exposures, then, for the same light source, place the same amount of light per unit area on the sensor. A larger sensor, therefore, such as FF vs mFT with 4 times the area, will receive and record 4 times the light, assuming equal QE (properly used).
It might be better to think of it as 4 times the total number of photons.
And, since it has 4 times the total light, it has twice the s/n, considering only shot noise.
I do not think this is true. It would be true for a single photodetector taking 4 sequential measurements to find the average light level, but not for an array of detectors where the signal is the differences between them (such as black and white bars on a test chart).

Your suggestion is equivalent to saying that a longer recording time improves the SNR of a sound recording.
Careful with the analogies. You're saying that a 10-second sound clip sounds better than a 5-second one, but that's the wrong quantity. In sound recording, temporality is an essential aspect of the data record. In still photography, as long as the subject doesn't move, you can trade recording time for signal quality, if you're considering only shot noise. Compare an ISO 6400 image to an ISO 100 image: 6 stops more light in the latter, with higher acuity and smoother backgrounds as a consequence. Since the shot noise in each pixel is independent, we can apply the same statistics to the spatial array that we would for a series of repeated trials on a single pixel.

Now in sound recording, digital or analog, there is a sampling time dependency to signal quality; reduce the sampling (or in analog, the recording) bandwidth to less than the spectral width of the signal, and your recording will suffer. In sound recording also the phenomenon being recorded is not quantized; there is noise associated with the recording, but it's not in the signal, or it's so far down it's irrelevant...it's all in the recording electronics. With light recording, the shot noise component is a significant fraction of the signal, and light's quantized nature is clear.
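Here's a simulation sketch of the statistical claim two paragraphs up (that averaging independent shot-noise samples behaves the same spatially as temporally). Note it assumes uniform illumination over the pixels being compared, which is precisely the condition in dispute here; all numbers are made up:

```python
import numpy as np

rng = np.random.default_rng(0)
mean_photons = 100.0    # assumed mean photon count per pixel per exposure

# Temporal: 4 repeated reads of one pixel, averaged (100,000 trials).
temporal = rng.poisson(mean_photons, size=(100_000, 4)).mean(axis=1)

# Spatial: averaging a 2x2 block of independent pixels under uniform light.
spatial = rng.poisson(mean_photons, size=(100_000, 4)).mean(axis=1)

# Both standard deviations come out near sqrt(100)/sqrt(4) = 5.
print(f"temporal: {temporal.std():.2f}, spatial: {spatial.std():.2f}")
```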
 
Take it easy, Bob. It reminds me of 10-12 years ago, when I and others read Clark's explanations as the bible.

Today we know a lot more thanks to a number of knowledgeable people here at DPReview, and it was John Sheehy and others who killed the myths around Clark's explanations. Read: Emil Martinec (and you too, BobN2), The_Suede, Gaborek, Wishnowski, John Sheehy and many, many more.
What myths? I strive for accuracy and not one of the above mentioned people has ever emailed me about any issues with anything on my web site that I can remember.
I remember quite a lengthy discussion on the Usenet forums between yourself, Emil Martinec, John Sheehy and myself, where, unless I'm remembering wrongly, you flat out refused to countenance the errors that were pointed out to you. Email is not the only means of communication.
 
Take it easy, Bob. It reminds me of 10-12 years ago, when I and others read Clark's explanations as the bible.

Today we know a lot more thanks to a number of knowledgeable people here at DPReview, and it was John Sheehy and others who killed the myths around Clark's explanations. Read: Emil Martinec (and you too, BobN2), The_Suede, Gaborek, Wishnowski, John Sheehy and many, many more.
What myths? I strive for accuracy and not one of the above mentioned people has ever emailed me about any issues with anything on my web site that I can remember.
I remember quite a lengthy discussion on the Usenet forums between yourself, Emil Martinec, John Sheehy and myself, where, unless I'm remembering wrongly, you flat out refused to countenance the errors that were pointed out to you. Email is not the only means of communication.

--
Bob.
DARK IN HERE, ISN'T IT?
Exactly, but the discussion was here, and based from my side on a page which I also referred to at that time.

--
Member of Swedish Photographers Association since 1984
Canon, Hasselblad, Leica, Nikon, Linhoff, Sinar,Zeiss, Sony . Phantom 4
 
This thread hopes to address the common misconception regarding sensor size, pixel size & noise
But, for the most part, it really is as simple as saying that the more light a photo is made from, the less noisy it will be, that larger sensors of a given generation record more light than smaller sensors in proportion to the ratio of the sensor areas for a given exposure, and that the more pixels the photo is made from, the higher the quality of the photo will be.
Technically speaking, more signal (light), more noise, and higher SNR.
Indeed. This is why I make every effort (especially in this forum) to say "less noisy" as opposed to "less noise".
If you normalize two images so the max signal is max white in both, then the apparent noise is lower for the photo with the larger signal, owing to its higher SNR.
For sure. A "less noisy" photo has less "apparent noise" (lower NSR; higher SNR).
I think we all have the bad habit of saying "noise" when we mean "Signal-to-noise ratio".
Yes, like saying that the noise will increase at higher ISO. Nope: the signal-to-noise ratios are what differ.
 
The lux is not just the unit of light intensity (really luminous power or emittance): it is the light intensity per unit area (its units are lumens per square-meter, and lumens are luminous energy per unit time). The problem is that having the same amount of light per unit area doesn't mean the same light over different areas. Exposure is the light per unit area accumulated over an exposure time, thus lux-seconds. Equal exposures, then, for the same light source, place the same amount of light per unit area on the sensor. A larger sensor, therefore, such as FF vs mFT with 4 times the area, will receive and record 4 times the light, assuming equal QE (properly used).
It might be better to think of it as 4 times the total number of photons.
And, since it has 4 times the total light, it has twice the s/n, considering only shot noise.
I do not think this is true. It would be true for a single photodetector taking 4 sequential measurements to find the average light level, but not for an array of detectors where the signal is the differences between them (such as black and white bars on a test chart).

Your suggestion is equivalent to saying that a longer recording time improves the SNR of a sound recording.
Careful with the analogies. You're saying that a 10-second sound clip sounds better than a 5-second one, but that's the wrong quantity.
I am saying that it doesn't. The "total light" theory says that it does, simply because more samples are taken.
In sound recording, temporality is an essential aspect of the data record.
And in still photography, space (2-dimensional) is the equivalent aspect. A bar test chart for sensors is the same as a square wave signal for audio tests.
In still photography, as long as the subject doesn't move you can trade recording time for signal quality, if you're considering only shot noise.
That is about ETTR, not sensor size.
Compare an ISO 6400 image to an ISO 100 image. 6 stops more light in the latter, higher acuity and smoother backgrounds as a consequence. Since the shot noise in each pixel is independent, we can apply the same statistics to the spatial array that we would for a series of repeated trials on a single pixel.
A series of exposures on one pixel gives you a more accurate measurement of the DC illuminance on that pixel. A picture made up from these more accurate measurements will have a better SNR.

Multiple or longer exposures have nothing to do with sensor size. You can apply them to any sensor.
Now in sound recording, digital or analog, there is a sampling time dependency to signal quality; reduce the sampling (or in analog, the recording) bandwidth to less than the spectral width of the signal, and your recording will suffer.
Likewise, bigger pixels give you lower resolution and worse aliasing. Nothing to do with sensor size, although I suspect that spatial frequency bandwidths (of noise and signal) do have something to do with the better image quality from larger sensors.
In sound recording also the phenomenon being recorded is not quantized; there is noise associated with the recording, but it's not in the signal, or it's so far down it's irrelevant...it's all in the recording electronics.
Sound particles are called phonons. They are quantised. But sound recording seldom explores very low sound levels (although the quality of room acoustics can be quite hard to record, and so can the reverberations in a piano). Photographers regularly work in very low light levels.
With light recording, the shot noise component is a significant fraction of the signal, and light's quantized nature is clear.
Sound recording is mostly done in the audio equivalent of bright sunshine.

(The word "equivalent" here is not used with reference to the "equivalence theory".)
 
This thread hopes to address the common misconception regarding sensor size, pixel size & noise, arising from a recent discussion on the m43 forum here. Do spare a few minutes & read through what I have to say to determine if it is correct and makes sense. I've tried to be as concise & simple as possible, but this is going to be long.

My argument is that a bigger sensor size (e.g. FF) doesn't by itself mean more light captured than with a smaller sensor. It is the pixel size & its quantum efficiency (QE) that determine the output signal's quality or fidelity (signal-to-noise ratio). QE describes how well a pixel converts light photons to electrical charges: the bigger the pixel size, the better the QE, the higher the signal & thus the higher the SNR.

.

.

.

Have a good day y'all. Feel free to exchange your pointers. :)
Now, the amount of light a photo is made from begins with the amount of light that falls on the sensor, where more light falls on the sensor for larger sensor systems for the same exposure (but the same amount of light falls on the sensor for photos of the same scene at the same DOF with the same exposure time).
.

.
But, for the most part, it really is as simple as saying that the more light a photo is made from, the less noisy it will be, that larger sensors of a given generation record more light than smaller sensors in proportion to the ratio of the sensor areas for a given exposure, and that the more pixels the photo is made from, the higher the quality of the photo will be.
Still couldn't quite make out what you mean, Joseph. More light falls on the sensor for larger sensor systems?
For the same exposure, more light falls on larger sensors in proportion to the ratio of the sensor areas (e.g. 4x as much light falls on a FF sensor as on an mFT sensor for the same exposure).

On the other hand, the same amount of light falls on the sensor for all formats for Equivalent photos (same DOF and exposure time).
From Clarkvision:

...light from lenses of the same f/ratio has the same light density in the focal plane
Indeed. The light density (exposure, measured in lux-seconds, or, equivalently, photons/mm²) is the same for the same scene, relative aperture, lens transmission (t-stop vs f-stop), and exposure time.

However, the total amount of light falling on the sensor (measured in lumen seconds, or, equivalently, photons) is the product of the exposure (light density) and sensor area.
...the focal length of the lens spreads out the light...

My understanding is that because a longer FL spreads the light more, a bigger aperture is needed to allow more light to pass through and maintain the same light density at the same f-stop. Hence the same exposure time, same shutter speed, same ISO. How is this more light when the density is the same?
The same light density over a larger area results in a greater total amount of light. For example, a bowl with an 8 inch diameter placed in the rain will collect 4x as much water as a bowl with a 4 inch diameter. Likewise, the same exposure on a sensor with 4x the area results in 4x as much light falling on the sensor.
When you mentioned "same scene at the same DOF with the same exposure time", I interpret it as a shorter equivalent FL with the same (bigger) physical aperture size as the bigger sensor system, to maintain DOF.
For the same perspective, framing, and aperture (entrance pupil) diameter, the DOF will be the same. For example, 50mm f/2 on mFT and 100mm f/4 on FF both have the same [diagonal] angle of view and aperture diameter (50mm / 2 = 100mm / 4 = 25mm), so if photos of the same scene are taken from the same position (same perspective), the DOFs will be the same.

Furthermore, if the exposure times are also the same, the same total amount of light will fall on the sensor since the same portion of the scene is being recorded, the aperture diameters are the same, and the exposure times are the same.
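A quick numeric check of that example (pure arithmetic, following the same-diameter, same-time reasoning above):

```python
# Worked check of the equivalence example above (same scene, same position).
crop = 2.0                                   # mFT crop factor

fl_mft, f_mft = 50.0, 2.0                    # 50mm f/2 on mFT
fl_ff, f_ff = fl_mft * crop, f_mft * crop    # 100mm f/4 on FF

pupil_mft = fl_mft / f_mft                   # 25.0 mm
pupil_ff = fl_ff / f_ff                      # 25.0 mm -> same DOF

# Per-area exposure differs by crop^2, sensor area differs by crop^2,
# so total light for the same exposure time is equal:
exposure_ratio = (f_mft / f_ff) ** 2         # FF exposure is 1/4 of mFT
area_ratio = crop ** 2                       # FF area is 4x mFT
print(pupil_mft, pupil_ff, exposure_ratio * area_ratio)   # 25.0 25.0 1.0
```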
But how do you get the same exposure time?
For example, 50mm f/2 1/100 on mFT and 100mm f/4 1/100 on FF (set the ISO setting to taste for the desired output brightness, or just use Auto ISO).
With a bigger aperture, more light comes in & the shutter speed goes up to maintain the same exposure. So by saying the same exposure time, are you implying that the scene should be overexposed?
You can select any exposure time you like. Alternatively, you can let the camera choose the exposure time for you in various AE (auto exposure) modes and/or adjust the exposure time indirectly with the ISO setting.
Thanks for the simple explanation. I understood the whole CFA thing. :)
Let's see if you'll still thank me for the more in-depth explanation:

http://www.josephjamesphotography.com/equivalence/#exposure

;-)
 
The lux is not just the unit of light intensity (really luminous power or emittance): it is the light intensity per unit area (its units are lumens per square-meter, and lumens are luminous energy per unit time). The problem is that having the same amount of light per unit area doesn't mean the same light over different areas. Exposure is the light per unit area accumulated over an exposure time, thus lux-seconds. Equal exposures, then, for the same light source, place the same amount of light per unit area on the sensor. A larger sensor, therefore, such as FF vs mFT with 4 times the area, will receive and record 4 times the light, assuming equal QE (properly used).
It might be better to think of it as 4 times the total number of photons.
And, since it has 4 times the total light, it has twice the s/n, considering only shot noise.
I do not think this is true. It would be true for a single photodetector taking 4 sequential measurements to find the average light level, but not for an array of detectors where the signal is the differences between them (such as black and white bars on a test chart).
I think it is true: http://theory.uchicago.edu/~ejm/pix/20d/tests/noise/noise-p3.html#bitdepth
 
Take it easy, Bob. It reminds me of 10-12 years ago, when I and others read Clark's explanations as the bible.

Today we know a lot more thanks to a number of knowledgeable people here at DPReview, and it was John Sheehy and others who killed the myths around Clark's explanations. Read: Emil Martinec (and you too, BobN2), The_Suede, Gaborek, Wishnowski, John Sheehy and many, many more.
What myths? I strive for accuracy and not one of the above mentioned people has ever emailed me about any issues with anything on my web site that I can remember.
I remember quite a lengthy discussion on the Usenet forums between yourself, Emil Martinec, John Sheehy and myself, where, unless I'm remembering wrongly, you flat out refused to countenance the errors that were pointed out to you. Email is not the only means of communication.

--
Bob.
DARK IN HERE, ISN'T IT?
Exactly, but the discussion was here, and based from my side on a page which I also referred to at that time.
The one I was thinking about was on the Usenet forums; I very distinctly remember Emil encouraging me to join the discussion. At the time, Roger wasn't posting here.

--
Bob.
DARK IN HERE, ISN'T IT?
 