Is my thinking about equivalence right?

Started 3 months ago | Questions thread
Serguei Palto Contributing Member • Posts: 733
Re: Is my thinking about equivalence right?

Mark Ransom wrote:

RobBobW wrote:

- the argument about total light is pointless as what is important is light density. Yes FF will bring in 4 times the light, but FF also has 4 times the surface area of sensor to illuminate, so it is a wash. Faster lenses bring in more light per unit area, period.

This is simply false. The reason apertures are measured by F-stops is because this equalizes the light density per unit area between lenses of different characteristics. A lens at F/1.2 will produce the same light density, no matter the focal length of the lens or the size of the sensor behind it. This means that a sensor with 4x the area really will collect 4x the light, when measured over the whole image, as long as the F-stops/T-stops of the lenses are the same.

The reason this matters is the nature of noise. The majority of noise in today's cameras is from photon shot noise, which is a property of the light itself and not of the lens or sensor or any other camera electronics. The only way to reduce shot noise is to collect more light. Whether you do this with a larger sensor, a larger aperture, or a slower shutter speed is immaterial.

RobBobW is right!

What you are saying about F-numbers is correct, but there is no contradiction with what RobBobW is saying: "Faster lenses bring in more light per unit area, period."

I only want to add that, in addition to the light intensity, the pixel size is another important factor.

For example, when we check for noise we usually go to the 1:1 view to resolve individual pixels. At that point the total light captured by the sensor is irrelevant: what you see on your display at 1:1 is typically only a small fraction of the whole sensor. If you use lenses of the same focal length and sensors with the same pixel density, then regardless of the sensor size you will get the same S/N ratio at the same F-number (the same sensor technology and lens quality are assumed, of course).
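A minimal numeric sketch of that point (the photon density and pixel pitch below are assumed, illustrative values, not measurements):

import math

# At a fixed f-number and exposure, the photon count on one pixel depends on
# the pixel area, not on the sensor area (illustrative numbers).
photons_per_mm2 = 1.0e9      # assumed photon density for some f-number and exposure
pixel_pitch_mm = 0.004       # assumed 4 um pitch, identical on both sensors

photons_per_pixel = photons_per_mm2 * pixel_pitch_mm ** 2
snr_per_pixel = math.sqrt(photons_per_pixel)   # shot-noise-limited S/N ~ sqrt(N)

# The per-pixel S/N seen at 1:1 is the same whether this pixel sits in a
# 36 x 24 mm full-frame sensor or a 17.3 x 13 mm m43 sensor.
print(f"photons per pixel: {photons_per_pixel:.0f}, per-pixel S/N: {snr_per_pixel:.1f}")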

We can say nothing about the S/N ratio if we look only at the value captured by a single pixel, because a single pixel cannot form an image. So we need many pixels. But we are also unable to judge S/N if we look at the whole image under conditions where the individual pixels are not resolved. For example, it is well known that the visible S/N depends on the distance from the viewer to a printed photo: the larger the distance, the higher the apparent S/N. That is because we lose resolution as the distance increases. Human vision itself works as a low-pass filter, so a larger distance filters out the higher-frequency components. In other words, with increasing distance we are narrowing the spectral passband.
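A small simulation of that filtering effect (an assumed shot-noise-only patch, purely illustrative):

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical uniform patch with pure shot noise: mean of 100 photons per pixel.
patch = rng.poisson(100.0, size=(1024, 1024)).astype(float)

def snr(img):
    return img.mean() / img.std()

# Averaging 4 x 4 pixel blocks is a crude stand-in for viewing the print from
# farther away: the eye low-pass filters the image and drops the
# high-frequency noise components.
filtered = patch.reshape(256, 4, 256, 4).mean(axis=(1, 3))

print(f"S/N at 1:1 view        : {snr(patch):.1f}")     # ~ sqrt(100) = 10
print(f"S/N after 4x4 averaging: {snr(filtered):.1f}")  # ~ 4x higher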

In other words, the S/N must be defined under conditions where there are no additional filters that can influence the spectral bandwidth of the image. Moreover, in the case of different sensors we must compare the S/N at the same spectral bandwidth, defined explicitly by the sensor's properties. The spectral bandwidth is the difference between the highest and the lowest spatial frequency the sensor can capture. In the ideal case the highest frequency is the Nyquist frequency, i.e. half of the sampling frequency set by the pixel pitch, while the lowest frequency is defined by the sensor size. When the sensor is much larger than a pixel, the lowest frequency can be taken as zero. Thus the sensor's frequency band is defined simply by the pixel size, which is the main factor responsible for the true (whole-band) S/N.
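For concreteness, here is how those band limits work out for two assumed 20 MP-class sensors (illustrative figures, not exact specs):

# Assumed sensors: same pixel count, square pixels.
sensors = {
    "full frame (36 mm wide)": (36.0, 5184),   # sensor width in mm, pixels across
    "m43 (17.3 mm wide)":      (17.3, 5184),
}

for name, (width_mm, pixels_across) in sensors.items():
    pitch_mm = width_mm / pixels_across
    f_highest = 1.0 / (2.0 * pitch_mm)   # Nyquist frequency, cycles/mm on the sensor
    f_lowest = 1.0 / width_mm            # set by the sensor size, practically ~0
    print(f"{name}: pitch {pitch_mm * 1000:.1f} um, "
          f"band ~{f_lowest:.2f} .. {f_highest:.0f} cycles/mm")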

In spectral terms, the total light captured by a sensor is just the magnitude of a single spectral component, the one at zero frequency, while an image consists of a huge number of frequency components. In fact, the number of frequency components in an image equals the number of pixels in the sensor. For that reason the so-called EQ "theory", which is based on the total light, i.e. on the magnitude of a single frequency component at zero frequency, is a misleading concept.
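This is easy to check numerically: the zero-frequency (DC) term of a 2-D DFT is just the sum over all pixels (a small random image is assumed here for illustration):

import numpy as np

rng = np.random.default_rng(1)
img = rng.poisson(100.0, size=(64, 64)).astype(float)   # hypothetical 64 x 64 image

spectrum = np.fft.fft2(img)

# The zero-frequency component equals the total signal ("total light"), and the
# spectrum has exactly as many components as the image has pixels.
print(np.isclose(spectrum[0, 0].real, img.sum()))   # True
print(spectrum.size == img.size)                    # True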

In terms of the spectral approach, the FF sensor performs better than m43 only because of its larger pixel size (the same number of pixels is assumed).
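The same per-pixel arithmetic as above, but now with equal pixel counts and therefore different pixel sizes (assumed values):

import math

photons_per_mm2 = 1.0e9        # assumed photon density at the same f-number and exposure
crop_factor = 2.0              # m43 vs full frame

pitch_m43_mm = 0.0033                     # assumed ~3.3 um m43 pitch at ~20 MP
pitch_ff_mm = pitch_m43_mm * crop_factor  # same pixel count -> pitch scales with format

for name, pitch_mm in [("m43", pitch_m43_mm), ("full frame", pitch_ff_mm)]:
    n = photons_per_mm2 * pitch_mm ** 2
    print(f"{name}: {n:.0f} photons/pixel, per-pixel S/N ~ {math.sqrt(n):.0f}")

# The full-frame pixel collects ~4x the photons, so its per-pixel S/N is ~2x higher.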

If someone wants to learn something about S/N, then EQ must be forgotten as a "theory". This concept was created merely as a practical tool for quickly recalculating FOV and DOF when someone uses cameras of different formats, and it cannot be used for anything else.
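For reference, the recalculation that tool amounts to is just scaling by the crop factor (a sketch; the function name and the crop factor of 2 for m43 are illustrative assumptions):

# Scale focal length and f-number by the crop factor to match the field of view
# and depth of field of a larger format (the usual "equivalence" bookkeeping).
def equivalent(focal_mm, f_number, crop_factor):
    return focal_mm * crop_factor, f_number * crop_factor

# e.g. a 25 mm f/1.4 lens on m43 frames, and renders DOF, roughly like
# a 50 mm f/2.8 lens on full frame.
print(equivalent(25.0, 1.4, 2.0))   # (50.0, 2.8)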

Everything I am saying here is implemented in my iWE raw processing software, which is free.

Best regards, SP
