The Myth of Equivalent Aperture and Other Overly Simplistic Camera Formulae

I have a question: if we just mask the FF sensor inside the camera down to APS-C size, will the sensor performance drop to APS-C sensor performance?
Yes. It doesn't matter how you crop. Cropping in the computer after the exposure has the same effect.

It is (almost) all about how much light is collected. If you crop, you throw away part of the light.
So, basically, if I print out a FF picture (say 30x20) and I then physically CUT 2 inches out of each border, what remains is now worse because I threw away part of the light?

are you kidding me?
Yes, he / she is kidding us...

...especially because he / she wrote earlier:

"If you crop FF to APS-C, you lose performance. If you crop FF to iPhone size you lose even more performance. All this is because of loss of light (and resolution too, but that's a bit off topic) used to create the output image."

-

This is utter nonsense, because he / she obviously mixes the effect of cropping with the effect of resizing...

( I tried to explain it here: http://www.dpreview.com/forums/post/54555644 )

...and refuses to accept a common base which would normalize the differences between sensor sizes - just like the DxO "print" graph, which shows the output after normalizing for differences in pixel count.

-

But you know,

some people just like to talk...

...therefore a solution is not their goal.
 
I have a question: if we just mask the FF sensor inside the camera down to APS-C size, will the sensor performance drop to APS-C sensor performance?
Yes. It doesn't matter how you crop. Cropping in the computer after the exposure has the same effect.

It is (almost) all about how much light is collected. If you crop, you throw away part of the light.
So, basically, if I print out a FF picture (say 30x20) and I then physically CUT 2 inches out of each border, what remains is now worse because I threw away part of the light?

are you kidding me?
 
I think the biggest mistake in our explanations has been focusing on sensor size or crop size and NOT focusing on the amount of light gathered by the lens and projected onto the image circle. This erroneously leads to the conclusion that a big sensor is more important than a big lens.
The problem with the conclusion you have here is that it only applies to situations where exposure is limited by something other than the pixel's full-well capacity.

If we consider idealized sensors, then an FF sensor will be able to handle exposures 2 stops larger than an m43 sensor, collect 4 times more light, and have twice the SNR.

But if the exposure is limited by something other than FWC, then you're right - lens speed is critical (*).

(*) With the usual caveats:
  • f/2 on FF equals f/1 on m43
  • Current digital sensors are not able to fully benefit from very small f-numbers - in practice, f/2 on FF usually allows the sensor to collect more light than f/1 on m43 if the ambient light and shutter speed are identical.
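To put rough numbers on the first caveat: at a fixed FOV, scene and shutter speed, total light scales with sensor area divided by the square of the f-number. A minimal sketch in Python (nominal sensor dimensions assumed):

```python
# Rough bookkeeping for 'equivalent aperture': at a fixed FOV and scene,
# relative total light ~ sensor area / f-number^2 (nominal dimensions).
FF_AREA = 36.0 * 24.0     # mm^2, full frame
M43_AREA = 17.3 * 13.0    # mm^2, Micro Four Thirds

def total_light(area_mm2, f_number):
    """Relative total light on the sensor at fixed FOV, scene and shutter speed."""
    return area_mm2 / f_number ** 2

ratio = total_light(FF_AREA, 2.0) / total_light(M43_AREA, 1.0)
print(f"FF at f/2 vs m43 at f/1: {ratio:.2f}x total light")  # ~0.96, i.e. roughly equal
```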
In the real world, exposure is often limited by the lens, for example when people compare Fuji's 56mm F1.2 to a Canon 85mm F1.4. You can't reach equivalent aperture with the Fuji lens. However, I do see how this can just confuse the issue more. There is also the problem of the Fuji's APS-C sensor pixels having to bin more electrons per area at a similar total quantity of light, which is another place to get tripped up.
 
The point is:

The sensor size defines the area of the image circle...

...the FOV is defined by the lens in conjunction with the sensor size.

-

Thus, we have to look at both...
No. If we want to compare sensor performance, we don't have to care about the optics at all.
Aha, but without optics you get light on the sensor from angles up to 90 degrees - across the whole surface - which is pretty uncommon in normal use and completely useless if one wants to get a picture...

...besides that, the pixels of a BSI sensor would have a huge advantage in this case - especially those of the NX1, because of their shallow depth, which allows higher efficiency at greater angles of incidence.
...therefore we need the same FOV in our comparisons...
FOV is not relevant for sensor-related measurements. f/2 on a 30mm lens and f/2 on a 90mm lens both allow the same amount of light to go through.
If you take two different copies of the same lens design, you already get differences in light density per pixel, because of different Strehl ratios (due to manufacturing tolerances) - especially in the corners of the frame...
...to get a hint of the light density (energy per area) that is collected.
Energy per area is irrelevant. It is much better to think of total light as that which forms the image. If you compare light density, you will have to normalize afterwards, so why bother with the extra step?
The image is formed by the light, which hits the sensor, in a defined way...

...through a suitable lens at comparably low angles of incidence...

...all contributing to the energy per area, which is detectable by the sensor...

...thus, the efficiency of the sensor depends on the optics in front of it...

...thus the (usable) SNR depends on the optics.
-

Same FOV, same output size and the same average brightness within the available dynamic range (including the same contrast curve)...
Contrast curve is not something that is part of the topic if you're interested in sensor or camera performance. A contrast curve is something you add in processing, something that alters the very information we're interested in analyzing.
Every RAW file needs a contrast curve to be usable as a picture, doesn't it?
FOV is only interesting if you either want to talk about equivalence from the point of view of comparing different formats' lens properties, or are taking test photographs. For analysing the relevant camera/sensor performance it's irrelevant and just obfuscates the discussion.
FOV is what photographers use and need to know if they want to frame something properly, without the need for excessive cropping.

--
Envy is the highest form of recognition.
-
Stop running, start thinking.
-
Think twice - that doubles the fun!
-
Your world is as big as your mind.
-
Avoid having only one point of view!
-
U see?
 
I think the biggest mistake in our explanations has been focusing on sensor size or crop size and NOT focusing on the amount of light gathered by the lens and projected onto the image circle. This erroneously leads to the conclusion that a big sensor is more important than a big lens.
The problem with the conclusion you have here is that it only applies to situations where exposure is limited by something other than the pixel's full-well capacity.

If we consider idealized sensors, then an FF sensor will be able to handle exposures 2 stops larger than an m43 sensor, collect 4 times more light, and have twice the SNR.

But if the exposure is limited by something other than FWC, then you're right - lens speed is critical (*).

(*) With the usual caveats:
  • f/2 on FF equals f/1 on m43
  • Current digital sensors are not able to fully benefit from very small f-numbers - in practice, f/2 on FF usually allows the sensor to collect more light than f/1 on m43 if the ambient light and shutter speed are identical.
In the real world, exposure is often limited by the lens,
What I meant is that, for example, I personally almost always shoot at base ISO with a long enough exposure to get close to the saturation point; thus the FWC is the limiting factor, not exposure time, aperture, or ambient light. Thus for me a smaller sensor would perform worse.
for example when people compare Fuji's 56mm F1.2 to a Canon 85mm F1.4. You can't reach equivalent aperture with the Fuji lens.
True.
There is also the problem of the Fuji's APS-C sensor pixels having to bin more electrons per area at a similar total quantity of light, which is another place to get tripped up.
I don't understand the above.

Fuji uses Sony's off-the-shelf sensors. They may run them at slightly different parameters to get a somewhat different performance curve, but the quantum efficiency, for example, is similar to the competition, as is the read noise. Bill Claff's charts don't show anything abnormal. (Well, the lowest ISO seems to be ISO 200, but I have no idea how Fuji calibrates its ISOs and am not going to waste energy finding out; the data is sufficient to tell that there's nothing odd one way or the other.)
 
The point is:

The sensor size defines the area of the image circle...

...the FOV is defined by the lens in conjunction with the sensor size.

-

Thus, we have to look at both...
No. If we want to compare sensor performance, we don't have to care about the optics at all.
Aha, but without optics you get light on the sensor from angles up to 90 degrees - across the whole surface - which is pretty uncommon in normal use and completely useless if one wants to get a picture...
You miss the point. We don't need to consider lenses or optics when we consider the performance of the sensor.
...besides that, the pixels of a BSI sensor would have a huge advantage in this case
Not for the metrics we can measure at home without special tools - the full-well capacity and read noise are not influenced by how the light is captured.
- especially those of the NX1, because of their shallow depth, which allows higher efficiency at greater angles of incidence.
It sounds like you're copying this from a marketing brochure, as the NX1's BSI doesn't have any inherent advantage over other BSI designs because of what you write. The only reason it may offer a better response to light hitting at a large angle than the small BSI sensors do is the lateral size of the pixels.

But this is a bit silly, as you really read my words like the devil reads the Bible.
...therefore we need the same FOV in our comparisons...
FOV is not relevant for sensor-related measurements. f/2 on a 30mm lens and f/2 on a 90mm lens both allow the same amount of light to go through.
If you take two different copies of the same lens design, you already get differences in light density per pixel,
Not really. If you consider this from a statistical point of view, the differences are minute. True, no two lenses are identical, but this has nothing, absolutely nothing, to do with the topic. Again, just obfuscation.
because of different Strehl ratios (due to manufacturing tolerances) - especially in the corners of the frame...
Strehl ratio is of course not a cause, but just a metric.

And also totally irrelevant for this topic. Obfuscation again.

There is copy variation in lenses - yes - and what on earth that has to do with SNR or comparison between formats is beyond me, other than for obfuscation purposes or for psychological reasons.

But if you want to go down the lens-manufacturing road, you should consider that the tolerances for smaller formats are significantly finer than for larger formats.
...to get a hint of the light density (energy per area) that is collected.
Energy per area is irrelevant. It is much better to think of total light as that which forms the image. If you compare light density, you will have to normalize afterwards, so why bother with the extra step?
The image is formed by the light, which hits the sensor, in a defined way...

...through a suitable lens at comparably low angles of incidence...

...all contributing to the energy per area, which is detectable by the sensor...

...thus, the efficiency of the sensor depends on the optics in front of it...

...thus the (usable) SNR depends on the optics.
The sensor SNR at different saturations, as well as individual pixel saturations, can be measured and calculated, and they do not depend on the lens being used. You don't even need to use a lens at all for this purpose, though I prefer to keep my sensor clean, so...

But sure, different optics use the sensor's potential to a different degree - if we consider just SNR and ignore other possible issues, vignetting (both lens-based and pixel-based) will limit the performance. It's the same for all cameras.

However - I have no idea why you think this is relevant to the subject unless you will now start to argue for superiority of lenses for one system over another.
-

Same FOV, same output size and the same average brightness within the available dynamic range (including the same contrast curve)...
Contrast curve is not something that is part of the topic if you're interested in sensor or camera performance. A contrast curve is something you add in processing, something that alters the very information we're interested in analyzing.
Every RAW file needs a contrast curve to be usable as a picture, doesn't it?
No. RAW data has no contrast curves. It is just linear data. It is not really usable as a picture, hence the word RAW (though of course you can view it in non-demosaiced, non-gamma-adjusted form, but it's going to look ugly and not at all like the image you will get from a raw converter).

I highly recommend that you go and read Emil Martinec's theory.uchicago.edu/~ejm/pix/20d/tests/noise/ - as you learn to measure some properties of your camera's sensor, you will also realize that in order for this to work the data has to be linear.

Also, you should familiarize yourself with the dcraw raw decoder and its document mode, and with ImageJ, for doing the relevant measurements and analysis. If you're willing to put in just a little effort, you'll see for yourself that the RAW data is linear.
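If you'd rather script the linearity check than click through ImageJ, here is a minimal sketch of the same idea in Python. It assumes the third-party rawpy and numpy packages and two hypothetical raw files of an evenly lit target, the second shot with exactly twice the exposure time of the first:

```python
# Linearity check: for linear raw data, doubling the exposure time should
# double the mean (black-subtracted) raw level of a uniform patch.
import numpy as np
import rawpy  # third-party raw decoder (libraw wrapper)

def mean_raw_level(path):
    """Mean undemosaiced raw value of a central patch, black level subtracted."""
    with rawpy.imread(path) as raw:
        data = raw.raw_image_visible.astype(np.float64)
        black = np.mean(raw.black_level_per_channel)
        h, w = data.shape
        patch = data[h // 2 - 200:h // 2 + 200, w // 2 - 200:w // 2 + 200]
        return patch.mean() - black

m1 = mean_raw_level("uniform_1x.raw")  # base exposure (hypothetical file)
m2 = mean_raw_level("uniform_2x.raw")  # doubled exposure (hypothetical file)
print(f"level ratio: {m2 / m1:.3f}  (expect ~2.0 for linear raw data)")
```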
FOV is only interesting if you either want to talk about equivalence from the point of view of comparing different formats' lens properties, or are taking test photographs. For analysing the relevant camera/sensor performance it's irrelevant and just obfuscates the discussion.
FOV is what photographers use and need to know if they want to frame something properly, without the need for excessive cropping.
You're missing the point. This thread was not about how to frame or anything like that.

FOV is not relevant when you want to analyse or compare the camera performance.

(Unless, of course, you shoot for example a test target, in which case it may be a good idea, but not necessarily, depending on the nature of the target.)
 
I have a question: if we just mask the FF sensor inside the camera down to APS-C size, will the sensor performance drop to APS-C sensor performance?
Yes. It doesn't matter how you crop. Cropping in the computer after the exposure has the same effect.

It is (almost) all about how much light is collected. If you crop, you throw away part of the light.
So, basically, if I print out a FF picture (say 30x20) and I then physically CUT 2 inches out of each border, what remains is now worse because I threw away part of the light?

are you kidding me?
No. You misunderstood me. Maybe I should have been clearer.

If we consider the signal-to-noise ratio, then cropping reduces it. If you crop away half of the image, your SNR is reduced by a factor of 1.414 (the square root of 2).

I didn't mean that if I crop part of the image out, the rest is somehow also influenced. That would be a silly idea :)

What I meant is simply this:

You take two images with your camera (the same camera) - one image with a 50mm lens, and another with a 100mm lens. Now if you crop the image taken with the 50mm lens to match the field of view of the 100mm lens, the output image you get has about a stop lower signal-to-noise ratio than the image taken with the 100mm lens. This is true regardless of how the crop is performed, whether it is done by the sensor (in this case Four Thirds) or in (post)processing.
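For anyone who wants to verify the square-root relationship without a camera, here is a small simulation sketch (assumes only Python and numpy; uniform scene, pure shot noise):

```python
# Simulated shot-noise-limited capture: the SNR of the total collected
# light falls by sqrt(2) when you keep only half of the image area.
import numpy as np

rng = np.random.default_rng(0)
full = rng.poisson(1000, size=(2000, 2000))  # mean photons per pixel
crop = full[:, :1000]                        # keep half of the area

def total_light_snr(img):
    """SNR of the summed signal; for Poisson light this is sum / sqrt(sum)."""
    s = img.sum()
    return s / np.sqrt(s)

ratio = total_light_snr(full) / total_light_snr(crop)
print(f"full vs half-crop SNR ratio: {ratio:.3f}  (expect ~1.414)")
```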
 
I have a question: if we just mask the FF sensor inside the camera down to APS-C size, will the sensor performance drop to APS-C sensor performance?
Yes. It doesn't matter how you crop. Cropping in the computer after the exposure has the same effect.

It is (almost) all about how much light is collected. If you crop, you throw away part of the light.
So, basically, if I print out a FF picture (say 30x20) and I then physically CUT 2 inches out of each border, what remains is now worse because I threw away part of the light?

are you kidding me?
No. You misunderstood me. Maybe I should have been clearer.

If we consider the signal-to-noise ratio, then cropping reduces it. If you crop away half of the image, your SNR is reduced by a factor of 1.414 (the square root of 2).

I didn't mean that if I crop part of the image out, the rest is somehow also influenced. That would be a silly idea :)

What I meant is simply this:

You take two images with your camera (the same camera) - one image with a 50mm lens, and another with a 100mm lens. Now if you crop the image taken with the 50mm lens to match the field of view of the 100mm lens, the output image you get has about a stop lower signal-to-noise ratio than the image taken with the 100mm lens. This is true regardless of how the crop is performed, whether it is done by the sensor (in this case Four Thirds) or in (post)processing.
Here, I did this very experiment, shot using a constant light source:

[image: 014_8827 as Smart Object-1.jpg]

[image: 014_8826 as Smart Object-1.jpg]

If there were no SNR difference when cropping an image, then there would be no way one could shoot at an f-stop that restricted the intensity of light by 2 stops and still get the same visual amount of noise between the 2 images.



--
The Camera is only a tool, photography is deciding how to use it.
The hardest part about capturing wildlife is not the photographing portion; it’s getting them to sign a model release
 
The point is:

The sensor size defines the area of the image circle...

...the FOV is defined by the lens in conjunction with the sensor size.

-

Thus, we have to look at both...
No. If we want to compare sensor performance, we don't have to care about the optics at all.
Aha, but without optics you get light on the sensor from angles up to 90 degrees - across the whole surface - which is pretty uncommon in normal use and completely useless if one wants to get a picture...
You miss the point. We don't need to consider lenses or optics when we consider the performance of the sensor.
This doesn't become true through repetition...

Example:

A sensor which has its highest sensitivity in the UV region is used in visible light...

...a sensor which has its highest sensitivity in IR is used in visible light...

...a sensor which has its highest sensitivity in visible light, but also has microlenses,

which filter the approaching light and reflect some of it...

...the color matrix that is needed to calculate a usable RGB image from the different signals...

...add different angles of incidence, different penetration depths, different pixel depths, etc...

An example (best viewed on the gallery page as the original, due to color management issues):

[image: a spectrum I took to see and to solve the problem of a camera which has too much IR sensitivity]

But if you step through the door and claim that none of it needs to be considered, then it is true, or what? :-D
...besides that, the pixels of a BSI sensor would have a huge advantage in this case
Not for the metrics we can measure at home without special tools - the full-well capacity and read noise are not influenced by how the light is captured.
The biggest problem with measurements is to define and deliver reliable general conditions for the measurement - not the measurement itself...

...and I have great doubts that you provide suitable general conditions for the measurement...

...no need to think about the measurement itself.
- especially those of the NX1, because of their shallow depth, which allows higher efficiency at greater angles of incidence.
It sounds like you're copying this from a marketing brochure, as the NX1's BSI doesn't have any inherent advantage over other BSI designs because of what you write. The only reason it may offer a better response to light hitting at a large angle than the small BSI sensors do is the lateral size of the pixels.

But this is a bit silly, as you really read my words like the devil reads the Bible.
No arguments, just accusations...

...which means no knowledge, just the need to write 'important' things. ;-)
...therefore we need the same FOV in our comparisons...
FOV is not relevant for sensor-related measurements. f/2 on a 30mm lens and f/2 on a 90mm lens both allow the same amount of light to go through.
If you take two different copies of the same lens design, you already get differences in light density per pixel,
Not really. If you consider this from a statistical point of view, the differences are minute. True, no two lenses are identical, but this has nothing, absolutely nothing, to do with the topic. Again, just obfuscation.
*MOD EDITED*
because of different Strehl ratios (due to manufacturing tolerances) - especially in the corners of the frame...
Strehl ratio is of course not a cause, but just a metric.
You think that the bottom of a bottle vs. an APO makes no difference...

...just look with your own eyes, instead of through a camera sensor, to see the difference.
And also totally irrelevant for this topic. Obfuscation again.
I see that you are obfuscated, but this is not my fault.
There is copy variation in lenses - yes - and what on earth that has to do with SNR or comparison between formats is beyond me, other than for obfuscation purposes or for psychological reasons.
If you had ever used a telescope to see the faintest stars, or to photograph them, then you would know that the detection limit has to do with the Strehl ratio of the optics used.
But if you want to go down the lens-manufacturing road, you should consider that the tolerances for smaller formats are significantly finer than for larger formats.
I adjusted, calculated, constructed and built optics...
...to get a hint of the light density (energy per area) that is collected.
Energy per area is irrelevant. It is much better to think of total light as that which forms the image. If you compare light density, you will have to normalize afterwards, so why bother with the extra step?
The image is formed by the light, which hits the sensor, in a defined way...

...through a suitable lens at comparably low angles of incidence...

...all contributing to the energy per area, which is detectable by the sensor...

...thus, the efficiency of the sensor depends on the optics in front of it...

...thus the (usable) SNR depends on the optics.
The sensor SNR at different saturations, as well as individual pixel saturations, can be measured and calculated, and they do not depend on the lens being used. You don't even need to use a lens at all for this purpose, though I prefer to keep my sensor clean, so...
You seem to have missed the point that the sensor will deliver different performance depending on the testing conditions...

...so the conditions that match reality the most should be preferred.
But sure, different optics use the sensor's potential to a different degree - if we consider just SNR and ignore other possible issues, vignetting (both lens-based and pixel-based) will limit the performance. It's the same for all cameras.
The vignetting is nothing to worry about, because you can always use the center part of the frame with the same lens.
However - I have no idea why you think this is relevant to the subject unless you will now start to argue for superiority of lenses for one system over another.
That you have no idea has by now become sufficiently clear.
-

Same FOV, same output size and the same average brightness within the available dynamic range (including the same contrast curve)...
Contrast curve is not something that is part of the topic if you're interested in sensor or camera performance. A contrast curve is something you add in processing, something that alters the very information we're interested in analyzing.
Every RAW file needs a contrast curve to be usable as a picture, doesn't it?
No. RAW data has no contrast curves. It is just linear data.
"Need" is different to "has", isn't it?

*MOD EDITED*
FOV is only interesting if you either want to talk about equivalence from the point of view of comparing different formats' lens properties, or are taking test photographs. For analysing the relevant camera/sensor performance it's irrelevant and just obfuscates the discussion.
FOV is what photographers use and need to know if they want to frame something properly, without the need for excessive cropping.
You're missing the point. This thread was not about how to frame or anything like that.
If you want to review (*MOD EDITED*) theoretically possible performance when no image is taken, you should open your own thread...
FOV is not relevant when you want to analyse or compare the camera performance.
A sensor on its own is completely irrelevant without considering the general conditions.
(Unless, of course, you shoot for example a test target, in which case it may be a good idea, but not necessarily, depending on the nature of the target.)
 

cesjr wrote:

Dude, you're hardly on a roll. You're dug in, and your arguments are not convincing a lot of folks. You can blame that on them, sure. But maybe it's your arguments? Shudder - can't be, I guess.
You seem like the kind of "dude" who would benefit from watching this video, seeing as you seem to totally ignore Fraulain's excellent explanations (which are much better written and scientifically backed up than my own).


You could also read this but I suspect it isn't going to "take."

http://www.josephjamesphotography.com/equivalence/

The main thing to take away from that website is this statement: "The same total light will result in the same noise for equally efficient sensors (regardless of pixel count and regardless of the ISO setting)."
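The pixel-count independence is easy to sanity-check in simulation. A minimal sketch (assumes only Python and numpy; shot noise only, and the megapixel labels are just illustrative):

```python
# Same sensor area, same total light, two pixel counts: the whole-image
# SNR (per-pixel SNR scaled by sqrt of pixel count) comes out the same.
import numpy as np

rng = np.random.default_rng(1)
fine = rng.poisson(250, size=(1024, 1024))               # many small pixels
coarse = fine.reshape(512, 2, 512, 2).sum(axis=(1, 3))   # 2x2 binned: 4x fewer pixels

for name, img in (("high-MP sensor", fine), ("low-MP sensor", coarse)):
    whole_image_snr = (img.mean() / img.std()) * np.sqrt(img.size)
    print(f"{name}: whole-image SNR ~ {whole_image_snr:.0f}")  # both ~16200
```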
I read that long-winded piece and watched that video. It's just the same old "FF has More Light" argument, which is just plain stupid. Sure, there's more light coming in, but it's just feeding more pixels and more resolution. The same amount of light is falling on each pixel. I cannot believe people are even sucked into this argument. It makes no sense whatsoever.
 
I have a question: if we just mask the FF sensor inside the camera down to APS-C size, will the sensor performance drop to APS-C sensor performance?
Yes. It doesn't matter how you crop. Cropping in the computer after the exposure has the same effect.

It is (almost) all about how much light is collected. If you crop, you throw away part of the light.
So, basically, if I print out a FF picture (say 30x20) and I then physically CUT 2 inches out of each border, what remains is now worse because I threw away part of the light?

are you kidding me?
No. You misunderstood me. Maybe I should have been clearer.

If we consider the signal-to-noise ratio, then cropping reduces it. If you crop away half of the image, your SNR is reduced by a factor of 1.414 (the square root of 2).

I didn't mean that if I crop part of the image out, the rest is somehow also influenced. That would be a silly idea :)

What I meant is simply this:

You take two images with your camera (the same camera) - one image with a 50mm lens, and another with a 100mm lens. Now if you crop the image taken with the 50mm lens to match the field of view of the 100mm lens, the output image you get has about a stop lower signal-to-noise ratio than the image taken with the 100mm lens. This is true regardless of how the crop is performed, whether it is done by the sensor (in this case Four Thirds) or in (post)processing.
Here, I did this very experiment, shot using a constant light source:

[image: 014_8827 as Smart Object-1.jpg]

[image: 014_8826 as Smart Object-1.jpg]

If there were no SNR difference when cropping an image, then there would be no way one could shoot at an f-stop that restricted the intensity of light by 2 stops and still get the same visual amount of noise between the 2 images.



--
The Camera is only a tool, photography is deciding how to use it.
The hardest part about capturing wildlife is not the photographing portion; it’s getting them to sign a model release
Is this supposed to show something? I don't see a difference, but admittedly these are low-res JPEGs.
 
I would swear this is the same thread as the heated front-page discussion some months ago, with the same amateur-nuclear-scientist points and counterpoints.

Or maybe it is the Matrix having a hiccup.
This same topic has repeatedly appeared on all forums for the last 2-3 years. It seems there are always people who can't grasp this simple concept. Worse yet, some of them always want to start a big debate based on that ignorance, and amazingly they never learn, so the arguments continue in perpetuity.

Equivalence seems to hit some deep psychological issues, and that's the reason for the ever-continuing resistance on the part of some people.
I think you're over-analyzing it. Equivalence die-hards are just wedded to FF superiority. It's not that complicated.
 
I have a question: if we just mask the FF sensor inside the camera down to APS-C size, will the sensor performance drop to APS-C sensor performance?
Yes. It doesn't matter how you crop. Cropping in the computer after the exposure has the same effect.

It is (almost) all about how much light is collected. If you crop, you throw away part of the light.
So, basically, if I print out a FF picture (say 30x20) and I then physically CUT 2 inches out of each border, what remains is now worse because I threw away part of the light?

are you kidding me?
No. You misunderstood me. Maybe I should have been clearer.

If we consider the signal-to-noise ratio, then cropping reduces it. If you crop away half of the image, your SNR is reduced by a factor of 1.414 (the square root of 2).

I didn't mean that if I crop part of the image out, the rest is somehow also influenced. That would be a silly idea :)

What I meant is simply this:

You take two images with your camera (the same camera) - one image with a 50mm lens, and another with a 100mm lens. Now if you crop the image taken with the 50mm lens to match the field of view of the 100mm lens, the output image you get has about a stop lower signal-to-noise ratio than the image taken with the 100mm lens. This is true regardless of how the crop is performed, whether it is done by the sensor (in this case Four Thirds) or in (post)processing.
Here, I did this very experiment, shot using a constant light source:

[image: 014_8827 as Smart Object-1.jpg]

[image: 014_8826 as Smart Object-1.jpg]

If there were no SNR difference when cropping an image, then there would be no way one could shoot at an f-stop that restricted the intensity of light by 2 stops and still get the same visual amount of noise between the 2 images.

--
The Camera is only a tool, photography is deciding how to use it.
The hardest part about capturing wildlife is not the photographing portion; it’s getting them to sign a model release
Is this supposed to show something? I don't see a difference, but admittedly these are low-res JPEGs.
And understanding why you don’t see a difference is the purpose of this test. One photo is taken with a light intensity 2 stops lower, while they both show the same amount of noise. If cropping had no effect on SNR, then this test would have shown otherwise.



--
The Camera is only a tool, photography is deciding how to use it.
The hardest part about capturing wildlife is not the photographing portion; it’s getting them to sign a model release
 
I have a question: if we just mask the FF sensor inside the camera down to APS-C size, will the sensor performance drop to APS-C sensor performance?
Yes. It doesn't matter how you crop. Cropping in the computer after the exposure has the same effect.

It is (almost) all about how much light is collected. If you crop, you throw away part of the light.
So, basically, if I print out a FF picture (say 30x20) and I then physically CUT 2 inches out of each border, what remains is now worse because I threw away part of the light?

are you kidding me?
No. You misunderstood me. Maybe I should have been clearer.

If we consider the signal-to-noise ratio, then cropping reduces it. If you crop away half of the image, your SNR is reduced by a factor of 1.414 (the square root of 2).

I didn't mean that if I crop part of the image out, the rest is somehow also influenced. That would be a silly idea :)

What I meant is simply this:

You take two images with your camera (the same camera) - one image with a 50mm lens, and another with a 100mm lens. Now if you crop the image taken with the 50mm lens to match the field of view of the 100mm lens, the output image you get has about a stop lower signal-to-noise ratio than the image taken with the 100mm lens. This is true regardless of how the crop is performed, whether it is done by the sensor (in this case Four Thirds) or in (post)processing.
Here, I did this very experiment, shot using a constant light source:

[image: 014_8827 as Smart Object-1.jpg]

[image: 014_8826 as Smart Object-1.jpg]

If there were no SNR difference when cropping an image, then there would be no way one could shoot at an f-stop that restricted the intensity of light by 2 stops and still get the same visual amount of noise between the 2 images.

--
The Camera is only a tool, photography is deciding how to use it.
The hardest part about capturing wildlife is not the photographing portion; it’s getting them to sign a model release
Is this supposed to show something? I don't see a difference, but admittedly these are low-res JPEGs.
And understanding why you don’t see a difference is the purpose of this test. One photo is taken with a light intensity 2 stops lower, while they both show the same amount of noise. If cropping had no effect on SNR, then this test would have shown otherwise.



--
The Camera is only a tool, photography is deciding how to use it.
The hardest part about capturing wildlife is not the photographing portion; it’s getting them to sign a model release
Yeah, I'd like some proof of the exposures and a better way to compare the noise. Are these even high-ISO shots?
 
I have a question: if we just mask the FF sensor inside the camera down to APS-C size, will the sensor performance drop to APS-C sensor performance?
Yes. It doesn't matter how you crop. Cropping in the computer after the exposure has the same effect.

It is (almost) all about how much light is collected. If you crop, you throw away part of the light.
So, basically, if I print out a FF picture (say 30x20) and I then physically CUT 2 inches out of each border, what remains is now worse because I threw away part of the light?

are you kidding me?
No. You misunderstood me. Maybe I should have been clearer.

If we consider the signal-to-noise ratio, then cropping reduces it. If you crop away half of the image, your SNR is reduced by a factor of 1.414 (the square root of 2).

I didn't mean that if I crop part of the image out, the rest is somehow also influenced. That would be a silly idea :)

What I meant is simply this:

You take two images with your camera (the same camera) - one image with a 50mm lens, and another with a 100mm lens. Now if you crop the image taken with the 50mm lens to match the field of view of the 100mm lens, the output image you get has about a stop lower signal-to-noise ratio than the image taken with the 100mm lens. This is true regardless of how the crop is performed, whether it is done by the sensor (in this case Four Thirds) or in (post)processing.
Here, I did this very experiment, shot using a constant light source:

[image: 014_8827 as Smart Object-1.jpg]

[image: 014_8826 as Smart Object-1.jpg]

If there were no SNR difference when cropping an image, then there would be no way one could shoot at an f-stop that restricted the intensity of light by 2 stops and still get the same visual amount of noise between the 2 images.

--
The Camera is only a tool, photography is deciding how to use it.
The hardest part about capturing wildlife is not the photographing portion; it’s getting them to sign a model release
Is this supposed to show something? I don't see a difference, but admittedly these are low-res JPEGs.
And understanding why you don’t see a difference is the purpose of this test. One photo is taken with a light intensity 2 stops lower, while they both show the same amount of noise. If cropping had no effect on SNR, then this test would have shown otherwise.

--
The Camera is only a tool, photography is deciding how to use it.
The hardest part about capturing wildlife is not the photographing portion; it’s getting them to sign a model release
Yeah, I'd like some proof of the exposures and a better way to compare the noise. Are these even high-ISO shots?
All the information is there.

One image is shot at ISO 800 and the other at an ISO 4 times higher, ISO 3200.



--
The Camera is only a tool, photography is deciding how to use it.
The hardest part about capturing wildlife is not the photographing portion; it’s getting them to sign a model release
 
I have a question: if we just mask the FF sensor inside the camera down to APS-C size, will the sensor performance drop to APS-C sensor performance?
Yes. It doesn't matter how you crop. Cropping in the computer after the exposure has the same effect.

It is (almost) all about how much light is collected. If you crop, you throw away part of the light.
So, basically, if I print out a FF picture (say 30x20) and I then physically CUT 2 inches out of each border, what remains is now worse because I threw away part of the light?

are you kidding me?
No. You misunderstood me. Maybe I should have been clearer.

If we consider the signal-to-noise ratio, then cropping reduces it. If you crop away half of the image, your SNR is reduced by a factor of 1.414 (the square root of 2).

I didn't mean that if I crop part of the image out, the rest is somehow also influenced. That would be a silly idea :)

What I meant is simply this:

You take two images with your camera (the same camera) - one image with a 50mm lens, and another with a 100mm lens. Now if you crop the image taken with the 50mm lens to match the field of view of the 100mm lens, the output image you get has about a stop lower signal-to-noise ratio than the image taken with the 100mm lens. This is true regardless of how the crop is performed, whether it is done by the sensor (in this case Four Thirds) or in (post)processing.
Here, I did this very experiment, shot using a constant light source:

[image: 014_8827 as Smart Object-1.jpg]

[image: 014_8826 as Smart Object-1.jpg]

If there were no SNR difference when cropping an image, then there would be no way one could shoot at an f-stop that restricted the intensity of light by 2 stops and still get the same visual amount of noise between the 2 images.

--
The Camera is only a tool, photography is deciding how to use it.
The hardest part about capturing wildlife is not the photographing portion; it’s getting them to sign a model release
Is this supposed to show something? I don't see a difference, but admittedly these are low-res JPEGs.
And understanding why you don’t see a difference is the purpose of this test. One photo is taken with a light intensity 2 stops lower, while they both show the same amount of noise. If cropping had no effect on SNR, then this test would have shown otherwise.

--
The Camera is only a tool, photography is deciding how to use it.
The hardest part about capturing wildlife is not the photographing portion; it’s getting them to sign a model release
Yeah, I'd like some proof of the exposures and a better way to compare the noise. Are these even high-ISO shots?
All the information is there.

One image is shot at ISO 800 and the other at an ISO 4 times higher, ISO 3200.

--
The Camera is only a tool, photography is deciding how to use it.
The hardest part about capturing wildlife is not the photographing portion; it’s getting them to sign a model release
I don't think it's a very scientific noise comparison. Plus, what you need to show is an actual degradation from cropping. That's your claim (which is quite silly, really).
 
Jeez. Noise does not increase from cropping. All cropping does is cut down the FOV. The same noise is there. I can't believe anyone is really arguing this. It's absurd.
 
Jeez. Noise does not increase from cropping. All cropping does is cut down the FOV. The same noise is there. I can't believe anyone is really arguing this. It's absurd.
But cropping does have an effect on how you view the noise.
 
I have a question: if we just mask the FF sensor inside the camera down to APS-C size, will the sensor performance drop to APS-C sensor performance?
Yes. It doesn't matter how you crop. Cropping in the computer after the exposure has the same effect.

It is (almost) all about how much light is collected. If you crop, you throw away part of the light.
So, basically, if I print out a FF picture (say 30x20) and I then physically CUT 2 inches out of each border, what remains is now worse because I threw away part of the light?

are you kidding me?
No. You misunderstood me. Maybe I should have been clearer.

If we consider the signal-to-noise ratio, then cropping reduces it. If you crop away half of the image, your SNR is reduced by a factor of 1.414 (the square root of 2).

I didn't mean that if I crop part of the image out, the rest is somehow also influenced. That would be a silly idea :)

What I meant is simply this:

You take two images with your camera (the same camera) - one image with a 50mm lens, and another with a 100mm lens. Now if you crop the image taken with the 50mm lens to match the field of view of the 100mm lens, the output image you get has about a stop lower signal-to-noise ratio than the image taken with the 100mm lens. This is true regardless of how the crop is performed, whether it is done by the sensor (in this case Four Thirds) or in (post)processing.
Here, I did this very experiment, shot using a constant light source:

[image: 014_8827 as Smart Object-1.jpg]

[image: 014_8826 as Smart Object-1.jpg]

If there were no SNR difference when cropping an image, then there would be no way one could shoot at an f-stop that restricted the intensity of light by 2 stops and still get the same visual amount of noise between the 2 images.

--
The Camera is only a tool, photography is deciding how to use it.
The hardest part about capturing wildlife is not the photographing portion; it’s getting them to sign a model release
Is this supposed to show something? I don't see a difference, but admittedly these are low-res JPEGs.
And understanding why you don’t see a difference is the purpose of this test. One photo is taken with a light intensity 2 stops lower, while they both show the same amount of noise. If cropping had no effect on SNR, then this test would have shown otherwise.

--
The Camera is only a tool, photography is deciding how to use it.
The hardest part about capturing wildlife is not the photographing portion; it’s getting them to sign a model release
Yeah, I'd like some proof of the exposures and a better way to compare the noise. Are these even high-ISO shots?
All the information is there.

One image is shot at ISO 800 and the other at an ISO 4 times higher, ISO 3200.

--
The Camera is only a tool, photography is deciding how to use it.
The hardest part about capturing wildlife is not the photographing portion; it’s getting them to sign a model release
I don't think it's a very scientific noise comparison. Plus, what you need to show is an actual degradation from cropping. That's your claim (which is quite silly, really).




When viewed at the same output size, the cropped image will show more noise, because you are enlarging how you view the noise.

These are one image - one is cropped and the other is not - and both are viewed at the same output resolution. I have used an out-of-focus blue wall to eliminate FOV so we can deal with just SNR.

Or let's break it down mathematically: what is going to show more noise - 1 data point (pixel) with the same SNR among 36,000,000 data points, or 1 data point (pixel) among 16,000,000 data points? <--- the latter would be a cropped image with a crop factor of 1.5
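
The same point can be made in numbers rather than JPEGs. A small simulation sketch (assumes only Python and numpy; uniform scene, shot noise only, hypothetical sizes):

```python
# Cropping leaves per-pixel SNR alone, but rendered at the same output
# size the crop has fewer captured pixels per output pixel, so more noise.
import numpy as np

rng = np.random.default_rng(2)
full = rng.poisson(500, size=(1200, 1800)).astype(np.float64)  # 'blue wall'
crop = full[300:900, 450:1350]  # 2x crop: half the linear dimensions

def rendered_noise(img, out_h=300, out_w=450):
    """Block-average to a common output size and return the remaining noise (std)."""
    bh, bw = img.shape[0] // out_h, img.shape[1] // out_w
    return img.reshape(out_h, bh, out_w, bw).mean(axis=(1, 3)).std()

print(f"full image at output size: std {rendered_noise(full):.2f}")   # ~5.6
print(f"2x crop at same output:    std {rendered_noise(crop):.2f}")   # ~11.2, ~2x noisier
```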

--
The Camera is only a tool, photography is deciding how to use it.
The hardest part about capturing wildlife is not the photographing portion; it’s getting them to sign a model release
 
I think you miss my point. What I mean is: capture a photo with the full sensor, without covering it, and compare that to an image captured with the sensor covered. Then crop the full image to the same size as the covered-sensor image, so both have the same pixel information.
 
