Larger sensors do not have less low light noise. It is a myth. Part 2

Woland65 wrote:
oscarvdvelde wrote:
Woland65 wrote:

No, that is the misconception. The amount of light put on a sensor is NOT determined by the "f number" of the lens. The f-ratio determines the amount of light falling on a square millimeter of the sensor. So, given the same f-ratio, a large sensor will get much more light than a smaller sensor. Of course, the lens for the larger sensor will have to be larger to achieve this. That is why an f2 lens for an ff sensor is much larger than an f2 lens for micro 43 (at equivalent FOV).

The amount of light falling on a sensor is determined by a lens linear aperture in millimeters, NOT by a lens f-ratio.
But it's the photo sites that collect light. It's not as if the sensor (or film) as a whole integrates all the light in one place and then divides it into pixels. Mathematically it may come to the same thing, but as a photographer I get lost using that perspective.

Things will be easier if we keep some factors constant; for simplicity, first take the sensor size as constant. Then the f-stop is a very useful value, as it controls photons per square mm. If in that square mm you have 4 times as many pixels (linear resolution doubles, e.g. 24 MP compared to 6 MP), each of them sees a quarter of the corresponding angular scene: it resolves 2x smaller details, but with 4x less light captured at each pixel. Gain is then used to compensate for the lower signal to give the same output RGB value.
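
A rough back-of-the-envelope check of that bookkeeping (a minimal sketch; the photon flux is an arbitrary assumed number, not a measurement):

    # Same sensor size and f-stop, different pixel counts:
    # light per pixel scales as 1 / (number of pixels).
    flux = 1000.0                   # assumed photons per um^2 reaching the sensor
    sensor_area_um2 = 36e3 * 24e3   # 36 x 24 mm sensor, in um^2

    for mp in (6e6, 24e6):
        photons = flux * sensor_area_um2 / mp
        print(f"{mp / 1e6:.0f} MP: {photons:.0f} photons per pixel")
    # 6 MP: 144000 per pixel; 24 MP: 36000 per pixel (exactly 4x less).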

This need not cause any problems if you have many photons, but if you are shooting in near darkness the S/N at each pixel of the higher-resolving camera is lower (you see a noisier image at 1:1 magnification) - unless the read noise of a photo site is also lower by a factor of 4 (or the QE higher by a factor of 4, but sensors are already at 40-60% QE).
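
To make the near-darkness case concrete, a hedged sketch (the electron counts and read noise are illustrative assumptions):

    import math

    def pixel_snr(signal_e, read_noise_e):
        # Photon shot noise sqrt(S) combined with read noise, in electrons.
        return signal_e / math.sqrt(signal_e + read_noise_e ** 2)

    read_noise = 3.0                            # assumed electrons RMS, both sensors
    for mp, signal in ((6, 40.0), (24, 10.0)):  # 4x less light per pixel at 24 MP
        print(f"{mp} MP: per-pixel SNR = {pixel_snr(signal, read_noise):.1f}")
    # ~5.7 vs ~2.3: the higher-resolving sensor looks noisier at 1:1.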

But the image produced by this finer-resolving sensor needs 2x less magnification to fit your screen than the low-res sensor's (or can be viewed from a larger distance to span the same angular view). So even if it were a bit noisier in the absolute sense at the 100% pixel viewing level, at the typical viewing magnification perceived noise is reduced, as noise "clumps" are smaller relative to resolved details.

Note that Canon's development of a full frame "big pixel" full-HD camera suggests that bigger pixels still have significant advantages on the same sensor size for night vision work compared to just downscaling 20 megapixels to 2.8 megapixels, although the footage of those fireflies looked pretty noisy to me.

We may also keep resolution constant (e.g. 16 MP) and consider a factor-2 difference in sensor width (full frame vs 4/3). Shoot the same scene at the same f-stop (photons per mm²) and exposure time (the lens for the larger sensor is bigger and has 2x the focal length, but the same field of view). In this case the photons fall on 4 times fewer photo sites per mm² on the larger sensor and therefore produce a 4x higher signal per photo site. The smaller sensor needs to compensate for the relative lack of photons with gain, and hopefully with lower read noise and higher QE, else it will suffer poorer S/N at any given ambient light level. The final image magnification in this case is the same, which simplifies judgment.
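
The same sketch for the constant-resolution case (the flux is again an arbitrary assumption; the 4/3 dimensions are approximate):

    flux_per_mm2 = 1e6        # assumed photons per mm^2 (same f-stop, same time)
    mp = 16e6
    for name, w, h in (("full frame", 36.0, 24.0), ("four thirds", 17.3, 13.0)):
        print(f"{name}: {flux_per_mm2 * w * h / mp:.0f} photons per photo site")
    # ~54 vs ~14 photons: roughly 4x more signal per full-frame photo site.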
 
bobn2 wrote:
Woland65 wrote:
oscarvdvelde wrote:
Woland65 wrote:

[...]
Yes, you are right. Pixel level low light noise increases with increasing number of pixels. (But sensor size does not matter.)
Not right. It is the size of the pixel that matters if you measure at 'pixel level'. If the sensor size doesn't matter, you don't know how many pixels of a given size you have.

--
Bob
The size of the pixel does not matter as long as photon shot noise is important.

The total amount of light that falls on the sensor is given by the size of the lens aperture in millimeters. The total light per pixel, and thus photon shot noise, is given by the total light divided by the number of pixels. The area of a pixel does not matter, only the number of pixels.
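
As a sketch (assuming, as this argument does, two systems framing the same scene, so total light scales with the square of the aperture diameter):

    def photons_per_pixel(aperture_mm, n_pixels, k=1e4):
        # k: arbitrary scene-brightness constant (an assumption, not physics data)
        return k * aperture_mm ** 2 / n_pixels

    # Same 20 mm aperture, same 16 MP, very different sensor sizes -
    # e.g. 50mm f/2.5 on full frame vs 25mm f/1.25 on four thirds:
    print(photons_per_pixel(20.0, 16e6))   # same number for both systems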
 
Woland65 wrote:
bobn2 wrote:
[...]
 
Original post: "As long as the image circle of a lens is manufactured to fit the sensor, a large sensor does not produce less low light noise. Low light noise is determined by the amount of light hitting the sensor."

Well, not only is this observation not very useful even were it perfectly true - because to date the lowest-noise-in-dim-light cameras you can easily buy do indeed tend to have larger sensors, for a whole host of reasons...

But there is also at least one "theoretical reason" why large sensors will always be somewhat easier to design for high efficiency at gathering their incident low-light photons with the widest range of lenses. For a given pixel count requirement, less of the sensor surface need be taken up by the non-useful but inevitable borders around each pixel. That in turn means larger pixels have less need for the on-sensor focusing microlenses that work great when the light comes in perpendicular, but are not so great at accurately resolving the parts of the scene that the imaging system brings in at glancing angles.

Lots of words; a simple, absurd example. Say we've figured out how to make the non-sensing border around each pixel as ridiculously narrow as 1 nanometer. And we want tiny camera "A" to have a square 1000x1000 pixel sensor that measures just 2000 nanometers across. If we're going to "get all the photons" that hit this sensor during a given exposure, we have to put a lens on top of each pixel well that captures a 2x2 nanometer patch of sensor surface and funnels that light down into a 1x1 nanometer pixel well.

Then consider mega-camera "B", also with a 1000x1000 sensor, but this sensor can be a much larger 2 million nanometers across, not 2000 nm. Camera "B" still has only 1000 inter-pixel borders across each pixel row, each just 1 nanometer wide. So instead of worrying about losing three quarters of the light to non-productive border material, you lose something like one-thousandth of it, even with no "light-funneling" microlenses on top of the sensor at all.
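
The arithmetic of this absurd example, sketched (pitch and border widths taken straight from the example):

    def fill_factor(pitch_nm, border_nm):
        # Fraction of each pixel's area that is actually light-sensitive.
        active = pitch_nm - border_nm
        return (active / pitch_nm) ** 2

    print(f"camera A: {fill_factor(2.0, 1.0):.0%}")     # 25%: 3/4 of the light lost
    print(f"camera B: {fill_factor(2000.0, 1.0):.1%}")  # ~99.9%: ~1/1000 lost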

It's going to be easier to make camera "B" do a great job of gathering all the photons that hit its big sensor and delivering them to the right pixel - whatever angle the light comes in at, however "telecentric" the lenses you use are or aren't, right out to the corners of the image with no unnecessary smearing - than to capture every photon perfectly on the thousand-times-smaller sensor.

One could also opine that cross-talk between pixels, and electrical leakage of every type, are easier to minimize on a larger sensor for a given level of semiconductor fabrication technology. It is also rather nice that the larger the pixels, the easier it is to make all the pixel wells nearly identical in sensitivity, percentage-wise. Both those niceties contribute to fewer artifacts (indeed, a certain type of "noise") when recording the very faintest scenes.
 
RussellInCincinnati wrote:

[...]
You are absolutely right, but the effects you are talking about are small when comparing the common sensor formats FF, APS-C and m43. If you also added in phone camera sensors, your point would become more important, but still nowhere near as important as the size of the aperture used.

"Well, not only is this observation not useful were it perfectly true, because to date the lowest-noise-in-dim-light easily purchasable cameras do indeed tend to have larger sensors, for a whole host of reasons..."

I am not questioning that. You only need to have a quick look at the dxomark sensor ratings to see that larger sensor cameras generally have less low light noise. The question here is WHY larger sensor cameras have less low light noise. It is because they are used with larger aperture lenses, not because they have larger sensors.
 
Woland65 wrote:

As long as the image circle of a lens is manufactured to fit the sensor, a large sensor does not produce less low light noise.
I would question that: putting a 35mm f/2.5 lens with an image circle for FF, or one for APS-C, on an APS-C camera does not change the illumination on each pixel. So my understanding is that, provided the image circle is large enough, the image circle constraint is irrelevant in this discussion. Or where am I wrong?
Low light noise is determined by the amount of light hitting the sensor. The amount of light hitting the sensor is determined by the lens aperture in millimeters. Sensor size does not matter.

Full frame cameras produce less low light noise because they use lenses with a larger aperture, aperture as measured in millimeters.

However, larger sensor pixel size is good for dynamic range.

(Speed Booster has perhaps made it clearer to people that it is the lens, not the sensor that determines low light noise.)
I cannot quite get my head around this:

Take opposite ends of the size range: the Pentax Q7 and the Canon 5D MkII.

The sensor size of the Pentax is 7.44mm x 5.58mm, which gives about 289,000 pixels/mm².

The sensor size of the Canon is 36mm x 24mm, with a pixel density of about 25,600 pixels/mm².

The area of each Canon pixel is thus approx. 11.3 times the size.

Given the same exposure (i.e. aperture and exposure time), the light falling on each sensor pixel in the Canon is 11.3 times as much as on the Pentax.

Are you trying to tell me that the low light noise of the Canon pixel is the same as that of the Pentax pixel, although the light energy is 11.3 times as much?
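
For what it's worth, those figures check out (a sketch; the pixel counts below are the ones implied by the quoted densities, so treat them as assumptions):

    q7  = 12.0e6 / (7.44 * 5.58)   # Pentax Q7: ~289,000 pixels/mm^2
    c5d = 22.1e6 / (36.0 * 24.0)   # Canon 5D MkII: ~25,600 pixels/mm^2
    print(f"pixel area ratio: {q7 / c5d:.1f}x")  # ~11.3x more light per Canon pixel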
 
Andreas Stuebs wrote:
[...]
I think what you are saying is basically correct.

I) If you use an f/2.5 lens you will get less noise with a larger sensor. Absolutely. However, that happens because an f/2.5 lens for FF has a larger aperture (in mm) than an f/2.5 lens for APS-C. The f/2.5 FF lens will put more total light on the sensor than the f/2.5 APS-C lens will. You are using a larger lens for the FF sensor, and therefore the FF sensor can produce less noise.

II) As you have already understood, the Pentax pixel will produce more noise because it is receiving less light than the Canon pixel. However, if you instead used two lenses with the same size aperture, say 10 mm, then you would get essentially the same low light noise from both sensors. (The number of pixels on the sensor also matters a lot, but not the size of those pixels.)

It is very easy. If you know the size of the aperture of a lens, you know the total amount of light it projects on the sensor. You don't need to know the sensor size. The light per pixel is then simply total light / number of pixels.
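
Point I in numbers (a sketch; 35mm on FF and a roughly FOV-matched 23mm on APS-C):

    # Same f-number, same FOV, different formats -> different apertures in mm.
    for fmt, focal in (("FF 35mm", 35.0), ("APS-C 23mm", 23.0)):
        print(f"{fmt} at f/2.5 -> {focal / 2.5:.1f} mm aperture")
    # 14.0 mm vs 9.2 mm: the FF lens passes about (14.0 / 9.2)**2 = 2.3x the light.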
 
Andreas Stuebs wrote:
Woland65 wrote:

As long as the image circle of a lens is manufactured to fit the sensor, a large sensor does not produce less low light noise.
I would question that: Putting a 35mm f/2.5 lens with image circle for FF or one for APS-C on an APS-C camera does not change the illumination on each pixel.
When Woland65 says "as long as the image circle of a lens is manufactured to fit the sensor" he means using a different lens that gives the same FOV as the one on the larger sensor. In the statement above, he also omits the assumption that the physical aperture must be the same size. So for your example above, it would be a 35mm f/2.5 lens on FF, a 23mm f/1.7 on APS-C, and a 17mm f/1.2 lens on u4/3.
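
A quick sketch confirming those three lenses have (nearly) the same physical aperture, which is the point:

    lenses = [("FF", 35.0, 2.5), ("APS-C", 23.0, 1.7), ("u4/3", 17.0, 1.2)]
    for fmt, focal, fnum in lenses:
        print(f"{fmt}: {focal:.0f}mm f/{fnum} -> {focal / fnum:.1f} mm aperture")
    # 14.0, 13.5 and 14.2 mm: near-equal apertures, hence near-equal total light.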

Most of the controversy from his posts on this stems from using "sensor" where he should use "system". W/o a lens, a larger sensor obviously collects more light for a given incident level. And with the same lens, a larger sensor will also collect more light, as long as the image circle is larger than the smaller sensor's area. However, two lens+sensor systems can be designed to have the same total light collected -- if the lenses have the same physical apertures for the same FOV. Note this also assumes the two sensors have the same efficiency -- including off-axis light collection.

Of course this concept runs into practical issues, as with the 35mm f/2.5 vs. 17mm f/1.2 example. The former is a lens that could be designed and built with the technology of 60 years ago. The latter is an exotic that hasn't been built yet (the 17mm f/1.8 is still a stop off, and 17mm cine lenses don't have a large enough image circle).
 
Erik Magnuson wrote:
[...]
 
Original Poster: The question here is WHY larger sensor cameras have less low light noise. It is because they are used with larger aperture lenses, not because they have larger sensors.

But that's a really good reason to use a larger sensor: you don't have to get to really wild new low-f-number lens designs that nobody has figured out yet if you just let the cameras have a larger sensor. For goodness' sake, the "full frame" Sony RX1 only weighs 330 grams, and several NEX APS-C interchangeable-lens cams and entire cameras like the Ricoh GR weigh 250 grams or less. What is the consumer advantage of reducing the sensor size much further, if that also leads to more-element, lowered-transmission-and-flare-resistance lens designs that have the same size front glass as the larger-sensor cameras?

Let's look at a practical example. The camera industry knows how to mass-market 24-70mm f/2.8 zooms for full frame 35mm; there's a 900-gram-ish Nikon one in the $1300 range. And there is also a $1300 (list price) Panasonic 12-35mm f/2.8, 350-gram-ish, available for Micro Four Thirds. By your own calculations, for a Micro Four Thirds camera to have "the same low light ability" as a Nikon D-whatever full frame camera, all it needs is a 12-35mm f/1.4 zoom. What would such a u4/3 zoom cost and weigh, how many air-glass surfaces would it have, how much exotic glass would it need, and who knows how to mass-produce it in 2013?
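
The equivalence arithmetic behind that, sketched (long ends only):

    # Equal physical aperture at the long end implies equal total light:
    print(f"FF   24-70mm f/2.8 -> {70.0 / 2.8:.1f} mm aperture")   # 25.0 mm
    print(f"u4/3 12-35mm f/1.4 -> {35.0 / 1.4:.1f} mm aperture")   # 25.0 mm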

This is not to say that your observation - that you could keep lens front-glass diameters the same and simply use ever smaller sensors - is silly or dumb. But some of us are making the counter-argument that the problems of designing really-low-f-number lenses, and of the microlens overlays small sensors need to compensate for the sensor real estate lost to pixel-well borders, are real. The optical and mechanical compensations, and the wildly complex lenses of several types, that a really small sensor demands throughout the imaging system make ever-smaller sensors counter-productive in expense, overall image quality, and even low-light performance.
 
Larger sensor cameras have larger lenses.

You can easily make good 50 mm F1.4 for FF.

To gather the same amount of light and get the same FOV for FourThirds, you need a 25 mm F0.7. I promise you, it is not as easy to make - actually almost impossible, at least if you want high IQ.

For a TwoThirds sensor the lens should be 12 mm and F0.35, and I can tell you that is even theoretically impossible. The lens will then rather be F1.4, and you will lose 4 stops of light gathering capacity.

For (large) medium format you can instead use a 100 mm F2.8 to gather the same amount of light. I promise you that will give you much better image quality than the 50 mm F1.4 for FF. And if you have lots of money on your account, why not consider making a 100 mm F1.4 for your (large) medium format: you will gather 4 times as much light as the 50 mm F1.4 for FF, i.e. 2 stops more.
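
The equal-aperture ladder in those examples, sketched:

    ladder = [("medium format", 100.0, 2.8), ("full frame", 50.0, 1.4),
              ("four thirds", 25.0, 0.7), ("two thirds", 12.0, 0.35)]
    for fmt, focal, fnum in ladder:
        print(f"{fmt}: {focal:.0f}mm f/{fnum} -> {focal / fnum:.1f} mm aperture")
    # All roughly 35 mm; each step down in format demands an f-number
    # that gets progressively harder (then impossible) to build.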

So no - you are wrong.

--
/Roland
X3F tools:
http://www.proxel.se/x3f.html
https://github.com/rolkar/x3f
 
RussellInCincinnati wrote:

[...]
I don't have anything against large sensor cameras. I think they are made for good reasons. Just not always the reasons people think they are made for.
 
Roland Karlsson wrote:

[...]
What do you mean wrong??? I agree with everything you just said.
 
Woland65 wrote:

What do you mean wrong??? I agree with everything you just said.
Your subject line is wrong, or at least strongly misleading.

Your actual text body is technically correct.
 
Great Bustard wrote:
stevo23 wrote:
Great Bustard wrote:
Woland65 wrote:

The obvious question is then: why are large linear aperture lenses mostly made for larger-sensor cameras? I am no expert on manufacturing, but I can see a few reasons:

I) For historical reasons a large number of fairly large lenses were made for 35 mm film. If you want to make full use of those lenses you must use them with an ff sensor.

II) At least for short focal lengths it is surely cheaper to make a large linear aperture lens for a larger sensor. (Depends on how large of course, but we are talking about common camera sensor sizes.)

III) Camera makers probably reason that if you buy a small camera body, with a small sensor, you probably also want small lenses.

What do you think? Why are large linear aperture lenses mostly made for large sensors, thus giving large sensor cameras better low light noise?
Start with this:

http://en.wikipedia.org/wiki/Aperture#Maximum_and_minimum_apertures

The theoretical maximum aperture of a lens made of glass (which has an index of refraction equal to 1.5) and surrounded by air is f/0.5, but if a lens were made with a material with a higher index of refraction, e.g. diamond (index of refraction 2.417) then the theoretical maximum f-number can be lower than 0.5.

and it should be more clear. It becomes exponentially more difficult to make lenses faster as they asymptotically approach f/0.5. So, for example, compare a 50 / 1.8 on FF to its closest equivalent on mFT, a 25 / 0.95. Alternatively, you could compare the 85 / 1.2L on 1.6x to the 135 / 2L on FF.

So, for wide apertures, larger formats are the way to go.

That said, wider apertures not only put more light on the sensor for a given shutter speed resulting in less noise,
And the increase in light is non-linear.
Give us an example of what you are talking about, please.
The Wikipedia page is wrong (and it needs a citation). The lower limit of 0.5 for the f-number comes from the equation for a spherical approximation of a lens, where f-number = 1/(2 sin θ), and θ is the half-angle the lens subtends as seen from the sensor (see pg. 49 of the Stanford photo course lecture notes for a diagram). Since sin θ is at most 1, the f-number is at least 1/2. And 'in air' has nothing to do with the limit; it's a theoretical limit on "equivalent" spherical lenses.

If this approximation is used, then a lens with an f-number of 0.5 (measured using the spherical approximation) could actually let in much more light than you'd expect, because the lens in practice could be non-spherical. With a hyperbolic lens, the f-number is unlimited: the hyperbolic surface asymptotically approaches the material's critical angle of refraction, so there is no limit on the physical diameter (besides boring old real-world considerations like manufacturing cost).
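
The limit being described, written out (θ here is the half-angle subtended by the lens as seen from the image point, per the definition above):

    N = \frac{1}{2\sin\theta} \;\ge\; \frac{1}{2}, \qquad \text{since } \sin\theta \le 1.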
 
bobn2 wrote:
The total amount of light that falls on the sensor is given by the size of the lens aperture in millimeters. The total light per pixel, and thus photon shot noise, is given by the total light divided by the number of pixels. The area of a pixel does not matter, only the number of pixels.
Sorry, I had lost the context of the discussion, and the universality of the great truth. Lens aperture in millimetres, yards or parsecs, and angle of view. A 10mm aperture on a 10mm lens is going to put a lot more light on the sensor than a 10mm aperture on a 50mm lens (talking same size sensors).
This is why f numbers were invented.

It is the f-number that decides how much light falls on a given area of the sensor in a given time, not the aperture diameter in mm.
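
That point in numbers (a sketch: illuminance per unit sensor area goes as 1/N²):

    # A 10 mm aperture on two different focal lengths (same sensor size):
    for focal in (10.0, 50.0):
        n = focal / 10.0   # f-number = focal length / aperture diameter
        print(f"{focal:.0f}mm lens: f/{n:.0f}, relative light per mm^2 = {1.0 / n**2:.2f}")
    # f/1 vs f/5: the short lens puts 25x more light on each mm^2 of sensor.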
 
D Cox wrote:
bobn2 wrote:
[...]
This is why f numbers were invented.
Yes
It is the f-number that decides how much light falls on a given area of the sensor in a given time, not the aperture diameter in mm.
No. It is actually, physically, the aperture diameter in mm and the extent over which the light is collected. The f-number provides an approximate parameter to represent that.
 
