comparing lens + sensor size combos for low light

... APS-C is about 1,64 times larger in area than M4/3, not twice as large.

The relevant crop factor between these formats is about 1,3. The easiest way to compare the performance of these two formats is just to multiply the smaller format's aperture number by 1,3; thus f/2,8 on M4/3 approximately equals f/3,6 on APS-C.

... There is less than a third of a stop difference in "speed" between the lenses mentioned on the formats mentioned.
No, that difference is 2/3 stop.
The difference between f/4 and f/3,6 is actually slightly less than 1/3 stop.
Oh, I see... You were referring back to an earlier quote from the other poster rather than your own previous statement. Slightly confusing.
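For anyone following along, here is that arithmetic as a small Python sketch (my own illustration; the 1,64 area ratio and the f-numbers are the ones quoted above):

```python
from math import log2, sqrt

def stops_between(n1, n2):
    """Exposure difference in stops between two f-numbers on the same format."""
    return 2 * log2(n2 / n1)

crop = sqrt(1.64)                    # area ratio 1,64 -> crop factor ~1.28
print(round(crop, 2))                # the "about 1,3" above
print(round(stops_between(2.8 * crop, 4.0), 2))  # f/2,8 on M4/3 vs f/4 on APS-C: ~0.32
print(round(stops_between(3.6, 4.0), 2))         # f/3,6 vs f/4: ~0.30, just under 1/3 stop
```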
 
I posted the comparison from DxOMark based on pixels, not print, because for the print numbers DxOMark downsizes the images, which combines pixels and increases the S/N (effectively larger pixels).
Please view these two images at full size:

big-sensor.jpg

small-sensor-normal.jpg

The bottom image was taken with a crop factor of approximately 4,3. Identical pixels and exposures. The bottom image was normalized to the same output size as the top image. The top image is at "100%" or "1:1" scale, i.e. no combining of pixels was done.

Would you say that the SNR is the same or different? And why?

More on the topic here.
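For those who would rather see this than take it on faith, here is a rough simulation of the comparison (my own sketch, not the actual processing behind the two JPEGs; I use a 4x crop instead of 4,3 so the block averaging stays exact):

```python
# Pure photon shot noise, identical pixels and identical exposure per pixel.
import numpy as np

rng = np.random.default_rng(0)
mean_photons = 100

big = rng.poisson(mean_photons, (1024, 1024)).astype(float)   # big sensor, viewed 1:1
small = rng.poisson(mean_photons, (256, 256)).astype(float)   # 4x crop, same pixels

# Per-pixel SNR is the same for both (same pixels, same exposure): ~10 each.
print(round(big.mean() / big.std(), 1), round(small.mean() / small.std(), 1))

# Downsizing the big image to the small one's output size combines 4x4 pixel
# blocks - effectively what the "print" normalization does:
combined = big.reshape(256, 4, 256, 4).mean(axis=(1, 3))
print(round(combined.mean() / combined.std(), 1))   # ~40, i.e. 4x the per-pixel SNR
```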

--
Abe R. Ration - amateur photographer, amateur armchair scientist, amateur camera buff
 
Camera sensors have only one ISO, base ISO.
Actually camera sensors don't have ISOs at all. ISO is an output format metric.

Nitpicking aside, sensors usually have one conversion gain (there are exceptions like Sony A7s and if I recall right, some of the Nikon-1 series cameras).
Increasing "ISO" just amplifies the signal seen by the sensor.
Increasing the ISO setting may amplify the signal either in the analogue domain or in both the analogue and digital domains (or for some cameras it might just be a piece of metadata in the raw files). If the amplification is done in the analogue domain, then the input-referred read noise is reduced somewhat. If it is done in the digital domain, there are no benefits.
A simple way to prove this is to take an exposure at ISO 6400 and then take the same picture using the same f-stop and shutter speed at base ISO (100 for many cameras). Increase the exposure in PP and you will have about the same image as the one taken at ISO 6400.
Exposure cannot be adjusted after the image has been captured. You are thinking of setting the output images to the same brightness, or pushing the ISO 100 shot to ISO 6400.

If the sensor is "ISOless" then ISO 6400 and ISO 100 have the same noise characteristics, but the former has far less headroom.

However, this is not the case for the vast majority of sensors.

Typically increasing the ISO reduces the influence of the PGA/ADC noise in the output image. (PGA = programmable gain amplifier, ADC = analogue-to-digital converter.) This may reduce not only the input-referred read noise, but also pattern noise.
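A toy model of that (my own sketch with made-up example noise figures, not measurements from any real camera): noise added before the PGA is amplified along with the signal, noise added after it is not, so raising the analogue gain shrinks the latter's input-referred contribution:

```python
from math import sqrt

def input_referred_read_noise(gain, pre_pga_e=3.0, post_pga_e=6.0):
    """Total read noise referred back to the sensor input, in electrons.

    pre_pga_e and post_pga_e are invented example values.
    """
    return sqrt(pre_pga_e**2 + (post_pga_e / gain) ** 2)

for gain in (1, 2, 4, 8, 16):   # roughly ISO 100, 200, 400, 800, 1600
    print(f"analogue gain {gain:2d}x: {input_referred_read_noise(gain):.2f} e-")
# Falls from ~6.7 e- at 1x towards the 3.0 e- pre-PGA floor at high gain.
```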

Here is a sample of Sony A7 pattern noise, ISO 100 vs 1600.

And here are Sony A7 read noise measurements.

The measurement method was more or less this.
 
Abe, the bottom one obviously has a lower S/N. However, I'm not clear on the description of the two. Should the first sentence read "The top image ..."?

Do you mean the top image was made with a smaller sensor than the bottom image, or the same size sensor cropped?

Was the bottom image normalized with pixel sharing?

--
Sony R1, NEX C3 & 5R + Zeiss 24mm, 16-70, & FE 70-200 Lenses, Nikon V1 + 10-30 & 30-110 lenses.
 
Unfortunately two concepts get mixed when discussing "S/N". There is the S/N in the final image presentation, which depends on the print or display medium and size. The other is the S/N ratio due to the photon statistics of the sensor itself. This depends on the size of the individual pixels.
 
The print graphs are downsized, which combines pixels of the larger sensors, improving the S/N. Let me ask you this:

If you crop the center 16 megs from the 36 meg 7R and compare it to the 16 meg sensor of the NEX-6 (assuming the technology of both sensors is the same), would there be a difference in S/N between the two?
No, they would be the same. The full-frame sensor's advantage comes from its larger area; if you crop that away, it is exactly the same as shooting with an APS-C camera.
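In numbers (my own back-of-the-envelope sketch; nominal sensor dimensions, which vary slightly by maker):

```python
# Nominal sensor areas in mm^2:
ff = 36.0 * 24.0          # "full frame", 864 mm^2
apsc = 23.5 * 15.6        # Sony-style APS-C, ~367 mm^2
m43 = 17.3 * 13.0         # Micro Four Thirds, ~225 mm^2

print(round(ff / apsc, 2))    # ~2.36x the light-collecting area of APS-C
print(round(apsc / m43, 2))   # ~1.63 - the "about 1,64" figure from earlier
# Crop an APS-C-sized region out of the FF sensor and the remaining collecting
# area (and hence the shot-noise-limited S/N) is exactly APS-C's.
```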
 
Unfortunately two concepts get mixed when discussing "S/N". There is the S/N in the final image presentation, which depends on the print or display medium and size. The other is the S/N ratio due to the photon statistics of the sensor itself. This depends on the size of the individual pixels.
We are talking about image quality, so the SNR we are referring to is that of the final image. If you are comparing two images, does it make more sense to look at them at the same size, or to look at different-sized crops from each image?
 
So I will start the question with an example. Given these two combinations with premium type lenses:

a6000 type camera (APS-C sensor) + 1670z @ F4

vs

OM-D E-M5 type camera (m43 sensor) + 12-40 F2.8

The m43 sensor is half the size of the APS-C sensor, and typically the larger sensors are better for reduced noise in shadows and low light conditions for the same generation.
First, here you can find pretty much all the relevant information regarding this.

APS-C is about 1,64 times larger in area than M4/3, not twice as large.
Sorry, yes you are correct. m43 is half the size of a full frame sensor, not half the size of APS-C.
The relevant crop factor between these formats is about 1,3. The easiest way to compare the performance of these two formats is just to multiply the smaller format's aperture number by 1,3; thus f/2,8 on M4/3 approximately equals f/3,6 on APS-C.

As the aspect ratios of the sensors are different, one could get slightly more accuracy by using image sensor areas instead of the crop factor, but the crop factor is very convenient.
However, the m43 lens is a much faster lens in respect to the size of the sensor.
If one does a cross-format comparison, then lens speed depends not only on the f-number, but also on the format size. There is less than a third of a stop difference in "speed" between the lenses mentioned on the formats mentioned.
Furthermore, the small sensor means that the DOF for the m43 sensor will be larger,

thus one can employ a smaller aperture and not worry about parts of the image being out of focus in situations where there is no specific focal point.
There is no "deep DoF advantage" for smaller formats - it's a myth.

Instead, the more light you capture per unit time (by opening the aperture), the shallower the DoF is. f/3,6 on APS-C and f/2,8 on M4/3 have approximately equal light collection ability (same "noise" if the other exposure parameters are the same) and DoF.

(Note the "noise" above is about formats - individual cameras can have differing performance curves from each other.)
Rather, I am more interested in how one compares across formats. Presumably, the reason why the 1670z is F4 and the 12-40 is F2.8 is that designing an F2.8 for an APS-C sensor would result in a larger lens. Thus there is a trade-off here.
f/4 on APS-C is as fast as f/3,1 would be on M4/3. f/4 on one format does not equal f/4 on another.

The size of the format has surprisingly little influence on the size of the lens as long as the lenses for different formats have similar entrance pupil diameter (*) and angle of view properties.
OK, I didn't realise. My statement was based on observations of lenses from different camera systems - the m43 ones seem smaller; for example the 70-200 F2.8 Olympus looks smaller than the 70-200 F2.8 Canon for full frame. I think omission of a mirror also plays a role. But nevertheless, I am not an optics engineer and can't say I understand the ratio of lens size to sensor size.
(*) The f-number tells us the diameter: the focal length divided by the f-number.
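To illustrate the footnote with the two zooms from the example (my own arithmetic; their long ends reach somewhat different angles of view, so this is only a rough size comparison):

```python
# Entrance pupil diameter = focal length / f-number.
for name, focal_mm, f_number in (
    ("Sony 16-70mm F4 @ 70mm (APS-C)",      70, 4.0),
    ("Olympus 12-40mm F2.8 @ 40mm (M4/3)",  40, 2.8),
):
    print(f"{name}: entrance pupil ~{focal_mm / f_number:.1f} mm")
# ~17.5 mm vs ~14.3 mm: similar pupil sizes despite the different f-numbers,
# which is why the lenses end up physically comparable.
```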
Do we go for a smaller sensor and a smaller aperture number, or a larger sensor with a higher aperture number? What is better for low light when you bring the issue of lens size into the equation?
This answers the relevant questions. If the first article is too simple, the other two provide a more complete answer.
 
Agree, what I have trouble with is what I perceive to be the concept that S/N is a function of sensor size. It is a result of the pixels' S/N and their use in producing the final image. Larger sensors have more pixels (some may have larger pixels, like the A7s, or the same size pixels as smaller sensors), but the final image is a result of combining these pixels in downsizing to produce the final image, thereby increasing the S/N. So it may be correct to say the S/N is related to sensor size, but it is not a function of sensor size. More light is collected by larger sensors because they have more or larger pixels than smaller sensors, and the final S/N of the image is a function of the pixel S/N and the number and size of the pixels used to produce the final image.
 
Agree, what I have trouble with is what I perceive to be the concept that S/N is a function of sensor size. It is a result of the pixels' S/N and their use in producing the final image. Larger sensors have more pixels (some may have larger pixels, like the A7s, or the same size pixels as smaller sensors), but the final image is a result of combining these pixels in downsizing to produce the final image, thereby increasing the S/N. So it may be correct to say the S/N is related to sensor size, but it is not a function of sensor size. More light is collected by larger sensors because they have more or larger pixels than smaller sensors, and the final S/N of the image is a function of the pixel S/N and the number and size of the pixels used to produce the final image.
When the discussion is about equivalence, I find it much better not to think of pixels at all - in fact, not to think of sensors at all. Those things are unnecessary complications that muck up the analysis. I think instead of the actual optical images that are produced by the different formats - or the actual image circles produced by lenses designed for different formats - without the peripheral factors that involve the capture of those images.

Imagine that you are inside a sort of perfect camera obscura and looking at live optical images projected by the camera formats you want to compare. Think about the fact that you will need to somehow adjust the different image sizes to match one another in order to properly discuss equivalence. The only way you could get the smaller image to match the larger one is through some sort of optical enlarging process. Even if you have a perfect enlarging mechanism that preserves every detail, any such process will reduce the overall intensity of the image. That can't be helped... and that will reduce not only the apparent S/N ratio, but also the apparent dynamic range (the brightest areas will not be as bright as the brightest areas of the larger image, but the darkest - black - areas will be equally dark for both). Coincidentally, those are two of the improvements that people actually report when they move from smaller to larger formats.

You can instead choose to optically reduce the larger image to make it smaller, which will of course also make it brighter... so you get the same end result anyway in terms of the comparison.

Knowing that an optical enlargement or reduction process must involve those compromises, is it logical to think that a digital capture and digital enlargement or reduction process would be immune to the compromises? If so, how exactly is such immunity achieved?
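To put rough numbers on the enlargement argument (my own sketch):

```python
# Enlarging an optical image by a linear factor m spreads the same light
# over m^2 the area, so intensity per unit area drops by m^2.
from math import log2, sqrt

m = 2.0                          # e.g. matching an M4/3 image to a FF image
print(log2(m ** 2), "stops dimmer")                        # 2.0 stops
print(sqrt(m ** 2), "x worse shot-noise S/N per unit area")  # 2.0x
```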
 
In changing from 35mm format to APS-C, it took a while to start seeing pictures in APS-C format. Now I do think in APS-C FOV, f-stops and focal lengths.

FF is "better" than APSC, which is "better" than 4/3s which is "better" than 1", etc. However, all of these formats would produce acceptable images for most of my photography and for many who participate in these forms. The use of a particular camera and lenses is a very personnel decision and is a combination of IQ (not necessarily the sharpest), Senror size (not necessarily the largest), size, weight, and other properties.
 
For low light where manual focusing is acceptable, I really like using my A6000 with a focal reducer and my 35mm F/1.8 and 50mm F/1.4 lenses. It's like having 24mm F/1.3 and 35mm F/1.0 lenses. Not that costly either.
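The arithmetic, for anyone curious (my own sketch; I'm assuming the common 0,71x reducer ratio, which seems to be what those figures imply):

```python
# A focal reducer scales both the focal length and the f-number.
reducer = 0.71
for focal, f_number in ((35, 1.8), (50, 1.4)):
    print(f"{focal}mm f/{f_number} -> "
          f"~{focal * reducer:.0f}mm f/{f_number * reducer:.1f}")
# Prints ~25mm f/1.3 and ~36mm f/1.0 - close to the figures quoted above.
```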
 
In changing from 35mm format to APS-C, it took a while to start seeing pictures in APS-C format. Now I do think in APS-C FOV, f-stops and focal lengths.

FF is "better" than APSC, which is "better" than 4/3s which is "better" than 1", etc. However, all of these formats would produce acceptable images for most of my photography and for many who participate in these forms. The use of a particular camera and lenses is a very personnel decision and is a combination of IQ (not necessarily the sharpest), Senror size (not necessarily the largest), size, weight, and other properties.
Certainly.

But this thread is about the concept of equivalence and you said you 'have trouble with' something about it. I have attempted to help alleviate the trouble in my post above by focusing thought on what happens optically and not on what happens digitally.
 
Yes, what you are saying is better than sensor size and S/N. A 50 mm lens on an APS-C sensor will give the same FOV as a 50mm on a full frame sensor. If the same f-stop is used, the DOF will be the same.
 
A 50 mm lens on an APS-C sensor will give the same FOV as a 50mm on a full frame sensor. If the same f-stop is used, the DOF will be the same.
Um, no. I'm surprised to see you write this. Most people, even those who struggle with equivalence, know that a 50mm lens on APS-C gives about the same field of view as a 75mm lens on FF (80mm lens if we are talking about Canon's version of APS-C).

Let's put a notional 50mm lens in front of a notional FF sensor and notionally take an image. Let's call that "image F". Now let's notionally take the image formed only on those pixels in the centre part of the sensor which cover the area of an APS-C sensor. Let's call that "image A". Because image A covers less than the whole sensor, it has, by definition, less of a field of view than image F does.

DOF is always measured/calculated WRT a standard print size and viewing distance. Since prints of image A will need to be enlarged more than prints of image F to make a standard sized print, image A will have less DOF.
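To put numbers on that (my own sketch using the standard thin-lens DOF approximation and conventional CoC values of 0,030 mm for FF and 0,020 mm for the APS-C crop):

```python
def dof_m(focal_mm, f_number, distance_m, coc_mm):
    """Approximate total depth of field in metres."""
    f, c = focal_mm / 1000.0, coc_mm / 1000.0
    h = f * f / (f_number * c) + f                    # hyperfocal distance
    near = distance_m * (h - f) / (h + distance_m - 2 * f)
    far = distance_m * (h - f) / (h - distance_m)
    return far - near

# Same 50mm lens, same f/4, same 3 m subject distance; only the enlargement
# (and hence the circle of confusion) differs between the two images:
print(round(dof_m(50, 4, 3.0, 0.030), 2))   # image F (FF CoC): ~0.87 m
print(round(dof_m(50, 4, 3.0, 0.020), 2))   # image A (enlarged more): ~0.57 m
```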
 
Yes, what you are saying is better than sensor size and S/N.
Well, it should help simplify the discussion.
A 50 mm lens on an APS-C sensor will give the same FOV as a 50mm on a full frame sensor. If the same f-stop is used, the DOF will be the same.
I didn't say those things. They are incomplete and thus misleading.

You could say something like this if you wanted to: A 50mm lens at f/2.8 on an APS-C sensor with a subject distance of 15 feet will give the same field of view and the same DOF as a 50mm lens at f/4 on a full frame camera with a subject distance of 10 feet.

However, it would be much simpler and clearer to say that a 50mm lens at f/2.8 on an APS-C sensor will give the same field of view and the same DOF as a 75mm lens at f/4 on a full frame sensor... assuming the same camera to subject distance in both cases.

In addition, the ISO of the full frame camera must be raised by one stop over the APS-C camera to maintain the same shutter speed. This also maintains 'S/N equivalence' between the two formats because of the larger area of the full frame image.

And all of the above also assumes the same final viewing size for both formats.
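That recipe as plain arithmetic (my own sketch; the ISO 400 starting point is just an example value, and the exact numbers come out as f/4,2 and 1,17 stops, which the discussion above rounds to f/4 and one stop):

```python
# Cross-format equivalence: multiply focal length and f-number by the crop
# factor, and raise ISO by 2*log2(crop) stops to keep the shutter speed.
from math import log2

crop = 1.5                                # APS-C -> FF
focal, f_number, iso = 50, 2.8, 400
print(f"{focal}mm f/{f_number} ISO {iso} on APS-C ~ "
      f"{focal * crop:.0f}mm f/{f_number * crop:.1f} "
      f"ISO {iso * crop**2:.0f} on FF "
      f"({2 * log2(crop):.2f} stops more gain)")
```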

I thought pretty much everyone understood these things by this point in the thread. Most of them are just mathematical relationships made easily accessible with online tools like this:

http://www.dofmaster.com/dofjs.html

Your profile says you're a physicist. What branch of physics is your specialty?
 
Agree, what I have trouble with is what I perceive to be the concept that S/N is a function of sensor size.
It can be expressed that way. It doesn't have to be. It can also be calculated without direct reference to sensor size, as long as one uses certain other parameters. More on that later.
It is a result of the pixels' S/N and their use in producing the final image. Larger sensors have more pixels (some may have larger pixels, like the A7s, or the same size pixels as smaller sensors), but the final image is a result of combining these pixels in downsizing to produce the final image, thereby increasing the S/N.
It is not necessary to combine the larger sensor's pixels to produce a final image. One could take a 36MP image with a D810 or an A7r and then print it at 400 DPI to produce an 18.4" x 12.27" print.
So it may be correct to say the S/N is related to sensor size, but it is not a function of sensor size.
Even if it were true that downsampling increased an image's SNR as you describe, the resultant SNR would still be a function of sensor size, since the amount of downsampling would be a function of pixel count. Pixel count in turn is a function of sensor size and pixel size.

However, downsampling doesn't increase the SNR of an image. The only change it could cause to the SNR of an image is to reduce it. Downsampling will likely result in the resultant pixels having a higher average SNR than in the non-downsampled image, but the SNR of an image is not linear in the SNRs of its individual pixels.
More light is collected by larger sensors because they have more or larger pixels than smaller sensors
Yes, that is effectively true. By definition, a larger sensor must have either more pixels, or larger pixels, or both.
and the final S/N of the image is a function of the pixel S/N
Yes it is. What is that function, do you know?*

Do you think that an image taken on a 1" sensor, which happens to have each and every pixel with an SNR of R, has the same SNR as an image taken on a FF sensor, also with each and every pixel having an SNR of R?
and the number and size of the pixels used to produce the final image.
The number and size of the pixels is inextricably connected to the size of the sensor. If you know any two of those, you know the third. So pixel size and pixel count are functions of sensor size. Anything that is a function of pixel count and/or pixel size is therefore also a function of sensor size. Whenever you say that SNR is a function of pixel size or pixel count, you are saying that SNR is a function of sensor size.

*I'm going to suggest to you that the SNR of an image is the square root of the sum of the squares of the SNRs of all its pixels. You'll note that the size of the sensor is nowhere mentioned. However, the SNR of a pixel is a function of the exposure, the quantum efficiency of the sensor and the area of the pixel. The area of the pixel is in turn a function of the number of pixels and the size of the sensor.
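The footnote as a quick calculation (my own sketch; shot noise only, with quantum efficiency folded into the photon counts). Note how the pixel count cancels out of the image SNR, leaving only the total light, which is where sensor size enters:

```python
from math import sqrt

def image_snr(n_pixels, photons_per_pixel):
    pixel_snr = sqrt(photons_per_pixel)      # shot-noise-limited pixel SNR
    return sqrt(n_pixels) * pixel_snr        # sqrt of sum of squared pixel SNRs

exposure = 1e6   # photons per mm^2 - the same exposure on both formats
for name, area_mm2 in (('1"', 116.0), ("FF", 864.0)):
    for n_pixels in (16e6, 36e6):
        print(name, f"{n_pixels / 1e6:.0f}MP:",
              round(image_snr(n_pixels, exposure * area_mm2 / n_pixels)))
# Same result for 16MP and 36MP on each format: image SNR = sqrt(exposure * area).
```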
 
