The Myth of Equivalent Aperture and other overly simplistic Camera Formulae

I would swear this is the same thread as the heated front-page discussion some months ago, with the same amateur nuclear scientists' points and counterpoints.

Or maybe it is the Matrix having a hiccup.
This same topic has appeared repeatedly on every forum for the last 2-3 years. It seems there are always people who can't grasp this simple concept. Worse yet, some of them always want to start a big debate based on that ignorance, and amazingly they never learn, so the arguments continue in perpetuity.

The equivalence debate seems to touch some deep psychological issues, and that's the reason for the ever-continuing resistance on the part of some people.
 
Do you mean that if you join together a few iPhone sensors until they are the same size as an FF sensor, the performance will be equal to FF performance?
You won't get the same sensitivity to low light (high-ISO performance) and you will have a lower light saturation point (lower dynamic range), but your signal-to-noise ratio will be very close away from those extreme ends of the lighting spectrum.
The low-light capability of such a "super-iPhone sensor combination" might even be better, as the quantum efficiency of that sensor is likely higher than that of the current regular FFs (i.e. all but the A7s), though the FF would have slightly lower combined readout noise, so it should be about a toss-up.

The saturation per area should also be similar for the "super-iPhone sensor combination". For kicks I just did a quick and dirty DR calculation for a sensor roughly similar to the iPhone 6 sensor (not the same, as I don't have the specs for its Sony sensor), and such a "super-iPhone sensor combination" of FF size would have slightly larger DR than, for example, the Nikon D810! (Thus the saturation would indeed be higher.)
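For anyone curious, here is roughly how such a back-of-the-envelope estimate might go, as a minimal Python sketch. The full-well and read-noise figures are made-up placeholders (the real iPhone 6 Sony sensor specs aren't public), so treat the output as illustrative only:

```python
import math

# Hypothetical small-sensor figures (placeholders, not real iPhone 6 specs):
fwc = 3000        # full-well capacity per pixel, electrons
read_noise = 2.0  # read noise per pixel, electrons RMS

# Number of small sensors tiled to cover a full-frame area:
# an iPhone-class sensor (~17 mm^2) vs full frame (~864 mm^2) gives n ~ 50.
n = 50

# Engineering dynamic range of a single sensor, in stops:
dr_single = math.log2(fwc / read_noise)

# Tiling n independent sensors: saturation scales with n, while the
# read noise adds in quadrature and so grows only by sqrt(n).
dr_combined = math.log2(n * fwc / (math.sqrt(n) * read_noise))

print(f"single sensor : {dr_single:.2f} stops")
print(f"{n}-sensor tile : {dr_combined:.2f} stops")  # = dr_single + 0.5 * log2(n)
```

The combined DR gains 0.5 * log2(n) stops over a single small sensor, which is why the tiled mosaic can plausibly exceed a single FF sensor's DR.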
 
You completely missed the point so clearly put in the post you are replying to.

It doesn't matter which sensor size is in the camera when the lenses are equivalent* -- you get the same image, and the systems can be the same size, all other things being equal.

* - ignoring secondary effects and the other things mentioned before that can complicate the comparison in extreme cases.
But what is always conveniently ignored is at what point the noise level becomes significant or even noticeable. For many people, the situations in which a bigger sensor shows a clear advantage are not situations that tip the balance enough for it to be a significant issue.

Once an image is reduced in size for its intended output media, even an APS-C sensored image can show negligible noise under most circumstances.

For some, a larger sensor can indeed reap rewards. But for many people in many circumstances, the difference between APS-C sized sensors and 36x24mm sized sensors is insignificant in real world use.
 
Now I am confused! Do you mean that the SNR of the full image is different from the SNR of a part of the image, and that the part's SNR depends on its size? I am assuming the full image is all the same pattern, such as one color only.
 

I'd better start reading before this thread makes my brain hurt...
 
Now I am confused! Do you mean that the SNR of the full image is different from the SNR of a part of the image, and that the part's SNR depends on its size? I am assuming the full image is all the same pattern, such as one color only.
Yes. If you change the number of sampling points (in this case pixels), you change the signal-to-noise ratio.

The reason for this is that signal and noise increase at different rates when you add more pixels.

If you double the number of pixels, you double the signal. But you only increase the noise by a factor of the square root of 2 (i.e. 1.414...).

You can think and measure it this way:
  • Let's have a pixel A and pixel B - they are identical.
  • What happens if we add A and B together -> C = A + B? Doesn't C now have the combined signal of A and B, thus if we double the pixel count we double the signal, right?
  • For measuring (instead of calculating) the noise you need a tool - Fiji (imageJ on steroids) is good for this. Do the following:
  1. Create a small black (empty) 32-bit picture (256x256 pixels is fine)
  2. Add signal to it - in Fiji it's the Process/Math/Add menu - 1000 is a nice amount
  3. Now add photon shot noise (the noise of light itself) to it - Process/Noise/RandomJ/RandomJ Poisson - and make sure you choose the 'modulatory' option. The number is not relevant.
  4. You may want to adjust the brightness and contrast of the image to see the noise.
  5. Measure the mean and standard deviation of the signal (pressing 'm' should give them)
  6. Now create a new window and repeat the above actions.
  7. Use the Process/Image Calculator to add the two noisy images together
  8. Measure the result
As an example - I did the above, and image A had a mean (signal) of 999.858 and a standard deviation (noise) of 31.814. Image B had a mean of 999.871 and a std of 31.554.

The combined image C had a mean of 1999.857 and a std of 44.837.

So the signal doubled when we doubled the pixel count, but the noise only increased by a factor of approximately the square root of 2.

Thus when you double the pixel count, the signal increases faster than the noise by a factor of sqrt(2), and so the SNR increases.
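If you'd rather script the experiment than click through Fiji, here is the same measurement as a short Python/numpy sketch; the exact values vary from run to run, but the ratios come out the same:

```python
import numpy as np

rng = np.random.default_rng()

# Steps 1-3 of the recipe: two 256x256 frames with mean signal 1000
# and Poisson (photon shot) noise.
a = rng.poisson(lam=1000, size=(256, 256)).astype(float)
b = rng.poisson(lam=1000, size=(256, 256)).astype(float)

print(f"A: mean={a.mean():.3f}  std={a.std():.3f}")  # ~1000, ~31.6 (= sqrt(1000))
print(f"B: mean={b.mean():.3f}  std={b.std():.3f}")

# Step 7: add the two noisy frames.
c = a + b
print(f"C: mean={c.mean():.3f}  std={c.std():.3f}")  # ~2000, ~44.7 (= 31.6 * sqrt(2))

# Signal doubled, noise grew by sqrt(2), so the SNR improved by sqrt(2):
print(f"SNR A = {a.mean() / a.std():.1f},  SNR C = {c.mean() / c.std():.1f}")
```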
 
I have a question: if we just cover the FF sensor inside the camera down to APS-C size, will the sensor performance drop to APS-C sensor performance?
Yes. It doesn't matter how you crop. Cropping in the computer after the exposure will also cause the same effect.
I think you have to be more precise on this,

because you already wrote the following, in response to min32:

"If you crop FF to APS-C, you lose performance. If you crop FF to iPhone size you lose even more performance. All this is because of loss of light (and resolution too, but that's a bit off topic) used to create the output image."

-

What you write - "Cropping in the computer after the exposure will also cause the same effect" - is only true if cropping did not change the FOV.

But cropping after the exposure changes the FOV, doesn't it?

...thus, your statement is false (in the context of your whole answer, though not in those two sentences above - please read below!).

-

If you shrink the complete FOV of a bigger sensor to the size of the smaller one, then you benefit from the bigger size, which can collect more light...

...this can be done in the computer, or with a Shapley lens, as used in the Metabones converter etc...

...but the visible FOV has to stay the same for both pictures (if you want to compare).

A given FOV at a given output size (light per area, or light density) benefits from a bigger sensor, because it can collect more light for that FOV.

If you change the FOV for the given output size, then you lose the basis for comparison...

...and that is what cropping in the computer after the exposure means!

-

The center of the frame isn't affected by the light at the border, which you lose when cropping (to change the FOV) - no change in the light density!

Thus you only lose the information (the light) within the lost FOV, but not within the middle of the frame, which stays the same.

This should be clear...
It is (almost) all about how much light is collected. If you crop, you throw away part of the light.
...but you don't change the light density!

...otherwise, if your statement were right (in general), the middle of the frame would become darker (loss of light) when you crop the borders of the frame (in the computer, after exposure) - which is clearly not the case, right? ;-)

...so, there is a difference between using a smaller part of a sensor, or cutting the edges of a frame after exposure (cropping in the computer), and the benefit of a bigger sensor area which is used for the same FOV and therefore collects more light per output area.

You shouldn't mix the two!
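To make the distinction concrete, here is a minimal Python/numpy sketch assuming idealized, shot-noise-limited sensors and a 2x crop (chosen so the arithmetic stays integer): cropping discards photons from the output image, while downsampling the full frame to the same output size pools them:

```python
import numpy as np

rng = np.random.default_rng()

def bin_mean(img, k):
    """Downsample by averaging k x k blocks of pixels."""
    h, w = img.shape
    return img[:h - h % k, :w - w % k].reshape(h // k, k, w // k, k).mean(axis=(1, 3))

# Idealized FF sensor: 2000x3000 pixels, mean 1000 photons each, shot noise only.
ff = rng.poisson(lam=1000, size=(2000, 3000)).astype(float)

# Cropping: keep only the centre quarter of the frame (a 2x crop) -- the
# FOV narrows and 3/4 of the collected photons are discarded.
crop = ff[500:1500, 750:2250]

# Bring both to the SAME output size (500x750) so the comparison is fair:
out_crop = bin_mean(crop, 2)  # pools  4 sensor pixels per output pixel
out_full = bin_mean(ff, 4)    # pools 16 sensor pixels per output pixel

print(f"cropped then resized : SNR = {out_crop.mean() / out_crop.std():.1f}")  # ~63
print(f"full frame resized   : SNR = {out_full.mean() / out_full.std():.1f}")  # ~126
# 4x the light pooled per output pixel -> 2x the SNR at the same output size.
```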

-

If you only wanted to express that something is lost if you don't collect it, then you are right, but this is trivial and not really a good fit for the photography-related theme:

The Myth of Equivalent Aperture and other overly simplistic Camera Formulae

or

An essay on Equivalence - Or - Does Size Really Matter ?

etc.
 
I have a question: if we just cover the FF sensor inside the camera down to APS-C size, will the sensor performance drop to APS-C sensor performance?
Yes. It doesn't matter how you crop. Cropping in the computer after the exposure will also cause the same effect.
I think you have to be more precise on this,

because you already wrote the following, in response to min32:

"If you crop FF to APS-C, you lose performance. If you crop FF to iPhone size you lose even more performance. All this is because of loss of light (and resolution too, but that's a bit off topic) used to create the output image."

-

What you write - "Cropping in the computer after the exposure will also cause the same effect" - is only true if cropping did not change the FOV.

But cropping after the exposure changes the FOV, doesn't it?

...thus, your statement is false (in the context of your whole answer, though not in those two sentences above - please read below!).

-

If you shrink the complete FOV of a bigger sensor to the size of the smaller one, then you benefit from the bigger size, which can collect more light...

...this can be done in the computer, or with a Shapley lens, as used in the Metabones converter etc...

...but the visible FOV has to stay the same for both pictures (if you want to compare).

A given FOV at a given output size (light per area, or light density) benefits from a bigger sensor, because it can collect more light for that FOV.

If you change the FOV for the given output size, then you lose the basis for comparison...
But that is what cropping is. It is removing part of the projected image to achieve a smaller FOV. I can't think of an instance where cropping means anything other than that.
...and that is what cropping in the computer after the exposure means!

-

The center of the frame isn't affected by the light at the border, which you lose when cropping (to change the FOV) - no change in the light density!

Thus you only lose the information (the light) within the lost FOV, but not within the middle of the frame, which stays the same.

This should be clear...
It is (almost) all about how much light is collected. If you crop, you throw away part of the light.
...but you don't change the light density!

...otherwise, if your statement were right (in general), the middle of the frame would become darker (loss of light) when you crop the borders of the frame (in the computer, after exposure) - which is clearly not the case, right? ;-)

...so, there is a difference between using a smaller part of a sensor, or cutting the edges of a frame after exposure (cropping in the computer), and the benefit of a bigger sensor area which is used for the same FOV and therefore collects more light per output area.
Those are two different things, but the second one isn't cropping.
You shouldn't mix the two!

-

If you only wanted to express that something is lost if you don't collect it, then you are right, but this is trivial and not really a good fit for the photography-related theme:

The Myth of Equivalent Aperture and other overly simplistic Camera Formulae

or

An essay on Equivalence - Or - Does Size Really Matter ?

etc.
 
I think the biggest mistake in our explanations has been focusing on sensor size or crop size and NOT on the amount of light gathered by the lens and projected onto the image circle. This erroneously leads to the conclusion that a big sensor is more important than a big lens.
That is of course true, but I am not sure it would be any more pedagogical for people who can't get past per-pixel output. I have trouble thinking of a better way than linking to the studio comparison scene, where anybody can look at the FF and cropped-format output and compare the 100% and resized views. If that is not enough, I am forced to give up.
 
I think the biggest mistake in our explanations has been focusing on sensor size or crop size and NOT on the amount of light gathered by the lens and projected onto the image circle. This erroneously leads to the conclusion that a big sensor is more important than a big lens.
That is of course true, but I am not sure it would be any more pedagogical for people who can't get past per-pixel output. I have trouble thinking of a better way than linking to the studio comparison scene, where anybody can look at the FF and cropped-format output and compare the 100% and resized views. If that is not enough, I am forced to give up.
Extra points for using "pedagogical" in a sentence. :-)

I couldn't find a dpreview comparison that had an FX camera in DX crop mode compared to a DX camera, but I did find this comparison of a D7000 vs. a cropped D800. In it, you can see the cropped D800 performs almost exactly like the D7000 does natively.

https://www.flickr.com/photos/kaceyjordan/sets/72157629779460403/

In particular look at these ISO 6400 samples.

D800 crop mode ISO 6400

D7000 ISO 6400

It would be nice if we also had an uncropped D800 shot with the same field of view, downsampled to D7000 resolution, so people could see how much better it would be for signal-to-noise ratio.

He does have a D800 vs. D3s gallery, though, where you can see an uncropped D800 image at ISO 6400, but unfortunately the crop isn't exactly the same and the lighting also looks different. Despite the worse crop, the D800 image still looks better than the D7000 or the D800 in crop mode.


D800 native ISO 6400
 
It would be nice if we also had an uncropped D800 shot with the same field of view, downsampled to D7000 resolution, so people could see how much better it would be for signal-to-noise ratio.
Well, you almost get that with dpreview's studio comparison tool, where all outputs in "PRINT" mode are resized to 8Mpx. 16Mpx would be nicer, but it is still enough to plainly see the difference (the Sony NEX-3N has a sensor very similar to the Nikon D7000's, the A7R to the D800's):

APS-C 16Mpx vs FF 36Mpx

And of course, you can switch to "FULL" view and see per pixel quality (and sorry for reposting this link once again).
 
I think the biggest mistake in our explanations has been focusing on sensor size or crop size and NOT on the amount of light gathered by the lens and projected onto the image circle. This erroneously leads to the conclusion that a big sensor is more important than a big lens.
That is of course true, but I am not sure it would be any more pedagogical for people who can't get past per-pixel output. I have trouble thinking of a better way than linking to the studio comparison scene, where anybody can look at the FF and cropped-format output and compare the 100% and resized views. If that is not enough, I am forced to give up.
Extra points for using "pedagogical" in a sentence. :-)

I couldn't find a dpreview comparison that had an FX camera in DX crop mode compared to a DX camera, but I did find this comparison of a D7000 vs. a cropped D800. In it, you can see the cropped D800 performs almost exactly like the D7000 does natively.

https://www.flickr.com/photos/kaceyjordan/sets/72157629779460403/

In particular look at these ISO 6400 samples.

D800 crop mode ISO 6400

D7000 ISO 6400

It would be nice if we also had an uncropped D800 shot with the same field of view, downsampled to D7000 resolution, so people could see how much better it would be for signal-to-noise ratio.

He does have a D800 vs. D3s gallery, though, where you can see an uncropped D800 image at ISO 6400, but unfortunately the crop isn't exactly the same and the lighting also looks different. Despite the worse crop, the D800 image still looks better than the D7000 or the D800 in crop mode.

https://www.flickr.com/photos/kaceyjordan/sets/72157629748722839/

D800 native ISO 6400
I did the best I could with the available material. I cropped and downsampled the D800 native shot to the same resolution as the others. Despite being at a disadvantage (bigger crop), you can see the signal-to-noise ratio is better.

[image] D7000 at ISO 6400

[image] D800 cropped to DX (APS-C) at ISO 6400

[image] D800 at native resolution, downsampled to DX (APS-C) size
 

It would be nice if we also had an uncropped D800 shot with the same field of view, downsampled to D7000 resolution, so people could see how much better it would be for signal-to-noise ratio.
Well, you almost get that with dpreview's studio comparison tool, where all outputs in "PRINT" mode are resized to 8Mpx. 16Mpx would be nicer, but it is still enough to plainly see the difference (the Sony NEX-3N has a sensor very similar to the Nikon D7000's, the A7R to the D800's):

APS-C 16Mpx vs FF 36Mpx

And of course, you can switch to "FULL" view and see per pixel quality (and sorry for reposting this link once again).
The problem is that the studio comparison tool isn't the best apples-to-apples comparison, because of things like Fuji's ISO cheating. You really need a comparison taken at the same exposure settings, not the same ISO settings, and you need a high-detail subject to check for noise-reduction cheating as well. For example, if you add the RAW X-T1 to your link, it looks closer to the A7s than to the NEX-3N. People will proclaim it to be Fuji magic and an example of how APS-C can be almost as good as full frame.
 
I have a question: if we just cover the FF sensor inside the camera down to APS-C size, will the sensor performance drop to APS-C sensor performance?
Yes. It doesn't matter how you crop. Cropping in the computer after the exposure will also cause the same effect.
I think you have to be more precise on this,

because you already wrote the following, in response to min32:

"If you crop FF to APS-C, you lose performance. If you crop FF to iPhone size you lose even more performance. All this is because of loss of light (and resolution too, but that's a bit off topic) used to create the output image."

-

What you write - "Cropping in the computer after the exposure will also cause the same effect" - is only true if cropping did not change the FOV.

But cropping after the exposure changes the FOV, doesn't it?

...thus, your statement is false (in the context of your whole answer, though not in those two sentences above - please read below!).

-

If you shrink the complete FOV of a bigger sensor to the size of the smaller one, then you benefit from the bigger size, which can collect more light...

...this can be done in the computer, or with a Shapley lens, as used in the Metabones converter etc...

...but the visible FOV has to stay the same for both pictures (if you want to compare).

A given FOV at a given output size (light per area, or light density) benefits from a bigger sensor, because it can collect more light for that FOV.

If you change the FOV for the given output size, then you lose the basis for comparison...
But that is what cropping is. It is removing part of the projected image to achieve a smaller FOV. I can't think of an instance where cropping means anything other than that.
Yes, that is what I tried to explain: one shouldn't mix cropping with resizing to the same output size at the same FOV, which is the needed basis for comparison.

As soon as you crop, you lose the common basis - that is what I wrote!
...and that is what cropping in the computer after the exposure means!

-

The center of the frame isn't affected by the light at the border, which you lose when cropping (to change the FOV) - no change in the light density!

Thus you only lose the information (the light) within the lost FOV, but not within the middle of the frame, which stays the same.

This should be clear...
It is (almost) all about how much light is collected. If you crop, you throw away part of the light.
...but you don't change the light density!

...otherwise, if your statement were right (in general), the middle of the frame would become darker (loss of light) when you crop the borders of the frame (in the computer, after exposure) - which is clearly not the case, right? ;-)

...so, there is a difference between using a smaller part of a sensor, or cutting the edges of a frame after exposure (cropping in the computer), and the benefit of a bigger sensor area which is used for the same FOV and therefore collects more light per output area.
Those are two different things, but the second one isn't cropping.
Yes, you seem to understand it correctly.

I would call it "equivalent": achieved by resizing to the same output size (with the same FOV).

Similar to how DxO provides charts for "screen" and "print"...

...whereas "print" shows the real (normalized) equivalent for comparing sensors with different MP counts, in this case.
You shouldn't mix the two!

-

If you only wanted to express that something is lost if you don't collect it, then you are right, but this is trivial and not really a good fit for the photography-related theme:

The Myth of Equivalent Aperture and other overly simplistic Camera Formulae

or

An essay on Equivalence - Or - Does Size Really Matter ?

etc.

I think the biggest mistake in our explanations has been focusing on sensor size or crop size and NOT on the amount of light gathered by the lens and projected onto the image circle. This erroneously leads to the conclusion that a big sensor is more important than a big lens.
The point is:

The sensor size defines the area of the image circle...

...the FOV is defined by the lens in conjunction with the sensor size.

-

Thus, we have to look at both...

...therefore we need the same FOV in our comparisons...

...to get a hint of the light density (energy per area) that is collected.

-

Same FOV, same output size and the same average brightness within the available dynamic range (including the same contrast curve)...

...that's all!

-

No big deal, just the common base - to get the equivalent.

 
I have a question: if we just cover the FF sensor inside the camera down to APS-C size, will the sensor performance drop to APS-C sensor performance?
Yes. It doesn't matter how you crop. Cropping in the computer after the exposure will also cause the same effect.
I think you have to be more precise on this,
I disagree. I was precise. What you do below is obfuscate the discussion.
because you already wrote the following, in response to min32:

"If you crop FF to APS-C, you lose performance. If you crop FF to iPhone size you lose even more performance. All this is because of loss of light (and resolution too, but that's a bit off topic) used to create the output image."

-

What you write - "Cropping in the computer after the exposure will also cause the same effect" - is only true if cropping did not change the FOV.
We are talking about signal-to-noise ratio. That is the context.

And you certainly have misunderstood what I wrote. I have no idea how you see a contradiction in it. I have not claimed that cropping doesn't change the FOV - the FOV changes by the same amount regardless of how you crop.
But cropping after the exposure changes the FOV, doesn't it?
It changes the signal-to-noise ratio of the image. Cropping changes the FOV versus an uncropped image regardless of when and how you do it.
...thus, your statement is false (in the context of your whole answer, though not in those two sentences above - please read below!).
No, my statement is right. We're not talking about FOV or even resolution, but about signal-to-noise ratio. I have no idea where you found me to be wrong, as it's certainly not anywhere you quoted me. Maybe you think I said something I didn't? Or assumed something?

I have no idea why you want to obfuscate the discussion by bringing it up. The question was whether bigger gives better image quality - essentially better SNR - than smaller, and the answer is yes.
-

If you shrink the complete FOV of a bigger sensor to the size of the smaller one, then you benefit from the bigger size, which can collect more light...
FOV is not relevant to this discussion.
...this can be done in the computer, or with a Shapley lens, as used in the Metabones converter etc...
FOV is not relevant.

Also, you're now changing the optical formula, making the lens faster. If you put a Speed Booster on a 45mm f/3 lens, the result is a 30mm f/2 lens.

And that is way beside the point, way off topic. We're not talking about optics; if we were, I'd point out that the image captured by the big sensor needs less enlargement than the one captured by the smaller sensor to reach the desired output size, which means the lens aberrations would be more relevant.

Also, the smaller sensor's saturation capability is not increased by any optical instrument - it's about 2.25 times smaller than the FF's (give or take a bit, depending on the technologies used). And as the saturation is not increased, neither is the SNR in the context of image quality. Of course if you use a faster lens you collect more light, but one can always use a faster lens on FF too. None of this has anything to do with what I said, which you for some obscure reason find to be incorrect.

...but the visible FOV has to stay the same for both pictures (if you want to compare).
We do not have to take any real photos to compare. When you measure camera performance, you take images of evenly lit, defocused blank subjects (and optionally dark frames). Optics are not relevant.

In general you will want to minimize the number of parameters - unknowns - to get the most accurate answers.

But if you want to compare FF at 45mm f/3 and APS-C at 30mm f/2, then sure - if the sensors are idealized, then at the same shutter speed and ambient light those combos deliver the same information to the image sensors. If you deny this, I'd like to see some evidence, preferably mathematical, or the results of simulations or such (to minimize unknown parameters).

A given FOV at a given output size (light per area, or light density) benefits from a bigger sensor, because it can collect more light for that FOV.
Light density is irrelevant. Total light is the relevant thing. When considering total light we don't have to know anything about the physical sizes of the sensors.
If you change the FOV for the given output size, then you lose the basis for comparison...
Optics and FOV are not relevant at all. The relevant calculations and measurements don't require them.
...and that is what cropping in the computer after the exposure means!
A 50mm lens on APS-C gives framing x; on FF a 50mm lens gives framing y. Crop the FF output image to framing x and you throw away the benefit of the extra light. Or use a 75mm lens and lose nothing - but that is not cropping, that is using a different lens. And this is all irrelevant to this topic.
-

The center of the frame isn't affected by the light at the border, which you lose when cropping (to change the FOV) - no change in the light density!
Light density is absolutely irrelevant.
It is (almost) all about how much light is collected. If you crop, you throw away part of the light.
...but you don't change the light density!
Light density is irrelevant.


Think of two pixels, one of which is 10µm^2, the other 40µm^2. With the same exposure the light density is the same, so the big pixel collects 4 times the light, and thus the SNR will be twice as high.

The same applies at sensor level. You can prove it to yourself by following the advice I gave here: http://www.dpreview.com/forums/post/54555356
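That two-pixel thought experiment is easy to simulate; here is a quick Python sketch with a hypothetical exposure of 100 photons/µm^2 (any value works):

```python
import numpy as np

rng = np.random.default_rng()
n = 1_000_000  # simulated exposures per pixel

flux = 100  # hypothetical exposure: 100 photons per µm^2
small = rng.poisson(lam=10 * flux, size=n)  # 10 µm^2 pixel -> ~1000 photons
big = rng.poisson(lam=40 * flux, size=n)    # 40 µm^2 pixel -> ~4000 photons

for name, px in (("10 µm^2", small), ("40 µm^2", big)):
    print(f"{name}: mean={px.mean():.0f}  noise={px.std():.1f}  SNR={px.mean() / px.std():.1f}")
# 4x the light -> sqrt(4) = 2x the SNR for the bigger pixel.
```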
...otherwise, if your statement were right (in general), the middle of the frame would become darker (loss of light) when you crop the borders of the frame (in the computer, after exposure) - which is clearly not the case, right? ;-)
You really are preoccupied with your artificial situation, which is way off topic.

We're talking about the SNR of the sensor, not about how much vignetting your lens/sensor causes.
...so, there is a difference between using a smaller part of a sensor, or cutting the edges of a frame after exposure (cropping in the computer), and the benefit of a bigger sensor area which is used for the same FOV and therefore collects more light per output area.
I'm not sure I follow your logic. If I use APS-C at 50mm and FF at 50mm and then crop the FF to APS-C size in post-processing, both images (assuming idealized or technologically identical sensors) collect the same amount of light. There is no difference in where you crop.
You shouldn't mix the two!
You should not involve FOV in this discussion at all, as it is absolutely irrelevant and just obfuscates the discussion.

-

If you only wanted to express that something is lost if you don't collect it, then you are right, but this is trivial
That is what I expressed - I have no idea why you dug the FOV out of the bag.

While it is trivial, it is also something many people do not think about - a blind spot, if you will. If you crop, you lose light, no matter how you do it, and if you lose light, you lose SNR. That last part is certainly lost on many people who think in a pixel-centric way and imagine that big pixels in a small sensor would somehow equalize image quality (apart from resolution) with big sensors. Do you think so?
and not really a good fit for the photography-related theme:

The Myth of Equivalent Aperture and other overly simplistic Camera Formulae

or

An essay on Equivalence - Or - Does Size Really Matter ?

etc.
You're now being awfully arrogant. I answered well, and quite precisely, and countered some misconceptions with evidence. What you do here is go into irrelevancies and flawed ideas, like the supposed importance of light density. This just makes sensible discussion more difficult. Which raises a question: why do you obfuscate the discussion - what is your motivation?

If we consider two idealized systems, one with a full frame sensor and one with APS-C, then to get an identical image the light densities will have to differ by a factor of 2.25. In practice the APS-C shot uses a 1.5 times shorter focal length and an aperture number 1.5 times smaller (to achieve equal aperture size).

If you believe that light density is relevant and total light is not in the context of comparing sensors of different sizes, please start a new thread, preferably in a more appropriate forum, and I will be more than happy to counter your arguments.
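For reference, here is the arithmetic behind those factors as a tiny Python sketch (the 45mm f/3 vs 30mm f/2 pair from earlier in the thread is just an example):

```python
import math

crop_factor = 1.5              # APS-C vs full frame
area_ratio = crop_factor ** 2  # 2.25: ratio of the sensor areas

# Equal framing and equal total light: the APS-C combo needs a focal
# length and an f-number both 1.5x smaller than the FF combo.
ff_focal, ff_fnumber = 45.0, 3.0
apsc_focal = ff_focal / crop_factor      # 30 mm
apsc_fnumber = ff_fnumber / crop_factor  # f/2

# Same f-number on both formats instead: equal exposure (light per area),
# so the total light differs by the area ratio.
print(f"sensor area ratio     : {area_ratio:.2f}x ({math.log2(area_ratio):.1f} stops)")
print(f"equivalent APS-C lens : {apsc_focal:.0f}mm f/{apsc_fnumber:.0f}")
print(f"FF SNR advantage at equal exposure: {math.sqrt(area_ratio):.1f}x")
```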
 
I have a question: if we just cover the FF sensor inside the camera down to APS-C size, will the sensor performance drop to APS-C sensor performance?
Yes. It doesn't matter how you crop. Cropping in the computer after the exposure will also cause the same effect.

It is (almost) all about how much light is collected. If you crop, you throw away part of the light.
so, basically, if I print out an FF picture (say 30x20) and then physically CUT 2 inches off each border, what remains is now worse because I threw away part of the light?

are you kidding me?
 
I think the biggest mistake in our explanations has been focusing on sensor size or crop size and NOT on the amount of light gathered by the lens and projected onto the image circle. This erroneously leads to the conclusion that a big sensor is more important than a big lens.
The problem with the conclusion you have here is that it only applies to situations where exposure is limited by something other than the pixel's full-well capacity.

If we consider idealized sensors, then an FF will be able to handle exposures 2 stops larger than an m43, collect 4 times more light, and have twice the SNR.

But if the exposure is limited by something other than FWC, then you're right - the lens speed is critical (*).

(*) With the usual caveats:
  • f/2 on FF equals f/1 on m43
  • The current digital sensors are not able to fully benefit from very small aperture numbers - in practice, f/2 on FF usually allows the sensor to collect more light than f/1 on m43 if the ambient light and shutter speed are identical. (The arithmetic is sketched below.)
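And the same arithmetic for the m43 caveats above, as a tiny Python sketch assuming idealized sensors:

```python
import math

crop = 2.0              # m43 vs full frame crop factor
area_ratio = crop ** 2  # 4x the sensor area for FF

print(f"FF collects {area_ratio:.0f}x the light at equal exposure "
      f"({math.log2(area_ratio):.0f} stops), for {math.sqrt(area_ratio):.0f}x the SNR")
print(f"f/2 on FF ~ f/{2 / crop:.0f} on m43 (equal aperture diameter at equal FOV)")
```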
 
The point is:

The sensor size defines the area of the image circle...

...the FOV is defined by the lens in conjunction with the sensor size.

-

Thus, we have to look at both...
No. If we want to compare sensor performance, we don't have to care about the optics at all.
...therefore we need the same FOV in our comparisons...
FOV is not relevant for sensor-related measurements. f/2 on a 30mm lens and f/2 on a 90mm lens both let the same amount of light through.
...to get a hint of the light density (energy per area) that is collected.
Energy per area is irrelevant. It is much better to think in terms of total light, as that is what forms the image. If you compare light density you have to normalize afterwards, so why bother with the extra step?
-

Same FOV, same output size and the same average brightness within the available dynamic range (including the same contrast curve)...
The contrast curve is not part of the topic if you're interested in the sensor's or camera's performance. A contrast curve is something you add in processing - something that alters the very information we're interested in analyzing.

FOV is only interesting if you either want to talk about equivalence from the point of view of comparing lens properties across formats, or are taking test photographs. For analysing the relevant camera/sensor performance it's irrelevant and just obfuscates the discussion.
 
I couldn't find a dpreview comparison that had an FX camera in DX crop mode compared to a DX camera, but I did find this comparison of a D7000 vs. a cropped D800. In it, you can see the cropped D800 performs almost exactly like the D7000 does natively.
According to http://home.comcast.net/~nikond70/Charts/PDR.htm the D800 actually performs a bit better. (*)

I'm not really a big fan of comparing actual products when discussing the relative benefits of different formats, as it adds many unknown or imperfectly known parameters to the discussion. In my opinion it's better to keep the discussion as simple as possible and consider idealized cameras and lenses of the relevant formats as much as possible.

(*) Assuming ISO x on the D800 equals ISO x on the D7000. I should check DxOMark for that, but for whatever reason DxOMark is always veeery slow for me.
 
