Is my thinking about equivalence right?

Started 3 months ago | Questions thread
bobn2 • Forum Pro • Posts: 69,811
Re: Is my thinking about equivalence right?

Serguei Palto wrote:

Mark Ransom wrote:

RobBobW wrote:

- the argument about total light is pointless as what is important is light density. Yes FF will bring in 4 times the light, but FF also has 4 times the surface area of sensor to illuminate, so it is a wash. Faster lenses bring in more light per unit area, period.

This is simply false. The reason apertures are measured in F-stops is that this equalizes the light density per unit area between lenses of different characteristics. A lens at F/1.2 will produce the same light density, no matter the focal length of the lens or the size of the sensor behind it. This means that a sensor with 4x the area really will collect 4x the light, when measured over the whole image, as long as the F-stops/T-stops of the lenses are the same.

The reason this matters is the nature of noise. The majority of noise in today's cameras is from photon shot noise, which is a property of the light itself and not of the lens or sensor or any other camera electronics. The only way to reduce shot noise is to collect more light. Whether you do this with a larger sensor, a larger aperture, or a slower shutter speed is immaterial.

You appear to have missed your last tutorial class, Mr Palto. I know that some members of your class are impressed by your displays, but the examiners will be less forgiving. They will not have embedded preconceptions and if they conclude that you are bending the physics to fit a 'political' position it won't go so well for you. Unfortunately, there is no opportunity to re-arrange the classes that you've been skipping, so I've added some annotations to your work. I hope that you'll find them helpful.

RobBobW is right!

What you are saying about F-numbers is correct, but there is no contradiction with what RobBobW is saying: "Faster lenses bring in more light per unit area, period."

This is indicative of what I meant by playing to the crowd of your classmates, rather than performing what should be cold, objective science. You have chosen to identify yourself with RobBobW's statement that "Faster lenses bring more light per unit area, period." Whilst it is true that faster lenses bring more light per unit area (of course, dependent on the usage of the word 'faster', which is a colloquialism), the addition of 'period' is intended to suggest that there is no more to be discussed after making that statement. Of course, there is plenty to be discussed.
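
To make the point concrete, here is a minimal numerical sketch (hypothetical photon counts, pure Poisson shot noise only, no read noise or other sources assumed): at the same f-number the light per unit area is the same, but a sensor with four times the area collects four times the total light over the whole frame, and the whole-image SNR roughly doubles.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical numbers: the same f-number gives the same light per unit area,
# so a sensor with 4x the area receives 4x the photons over the whole frame.
# Shot noise is Poisson, so SNR grows as the square root of the photon count.
photons_per_unit_area = 10_000

for name, area in (("smaller sensor (area 1)", 1.0), ("larger sensor (area 4)", 4.0)):
    total_photons = photons_per_unit_area * area
    # Repeat the whole-frame capture many times to estimate its SNR.
    samples = rng.poisson(total_photons, size=100_000).astype(float)
    snr = samples.mean() / samples.std()
    print(f"{name}: ~{samples.mean():.0f} photons, SNR {snr:.0f} "
          f"(sqrt of the count is {np.sqrt(total_photons):.0f})")
# Same light per unit area, 4x the total light -> about 2x the SNR.
```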

I only want to add that, in addition to the light intensity, the pixel size is another important factor.

Factor of what? This compounds the emptiness of RobBobW's statement. He failed to provide the context within which light per unit area is significant, and now you claim to be identifying a 'factor' in the same unstated context.

For example, when we check for noise we usually go to 1:1 view to resolve individual pixels.

I would avoid the use of 'we'. The rhetorical intent is probably to try to identify yourself with experts in the field. There is a reason that science is written in the passive voice: it avoids this sort of specious appeal to authority.

At that point what you see has nothing to do with the total light captured by the sensor. It is just evident that what you see on your display at 1:1 view is typically a small fraction of the whole sensor. If you use lenses of the same focal length and sensors with the same pixel density, then independently of the sensor size you will get the same S/N ratio at the same F-number (of course, the same sensor technology and lens quality are assumed).

We can tell nothing about the S/N ratio if we look just at the value captured by a single pixel, because a single pixel can't form an image. So we need many pixels. But we are also not able to judge S/N if we look at the whole image under conditions where the individual pixels are not resolved. For example, it is well known that the visible S/N depends on the distance from a viewer to a printed photo. The larger the distance, the higher the S/N, because we lose resolution with increasing distance. Human vision itself works as a low-pass filter: the larger the distance, the more the higher-frequency components are filtered out. So with increasing distance we are narrowing the spectral passband.

There is a lot of confused language, false propositions and poor reasoning here, mixed in with some correct or partially correct statements. Probably the best approach is to rephrase what you're trying to say.

There is no SNR for a single pixel because SNR is a statistical measurement, with noise being the standard deviation of a set of observations from an expected or mean value. Noise is bandwidth dependent: for a given noise spectral density, the measured noise power increases with the bandwidth of the observation. For this reason it is established practice to state the frequency band over which an SNR is measured and, when two SNRs measured over different frequency bands are to be compared, to normalise them. The precise form of normalisation depends on the purpose of the comparison. I've noticed a tendency in your previous work to fail to understand the importance of context, and to seek to apply textbook formulae, without appropriate normalisation, in places where they are unhelpful.
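
As an illustration of that bandwidth dependence, here is a toy sketch (made-up numbers, pure shot noise assumed): measure the SNR of a uniform patch, then halve its spatial bandwidth by binning, and the measured SNR rises even though nothing about the capture has changed.

```python
import numpy as np

rng = np.random.default_rng(0)

# A nominally uniform grey patch at a mean of 1000 photo-electrons per pixel.
# With pure Poisson shot noise the per-pixel standard deviation is ~sqrt(1000).
patch = rng.poisson(lam=1000, size=(512, 512)).astype(float)

def snr(x):
    """SNR of a uniform patch: mean signal divided by noise (std deviation)."""
    return x.mean() / x.std()

print("SNR at the full per-pixel bandwidth:", round(snr(patch), 1))   # ~31.6

# Halve the spatial bandwidth by 2x2 binning (averaging). The noise power in
# the discarded high-frequency band is gone, so the measured SNR roughly
# doubles, although the scene, lens and sensor are unchanged. This is why an
# SNR figure means little unless the measurement bandwidth is stated.
binned = patch.reshape(256, 2, 256, 2).mean(axis=(1, 3))
print("SNR after 2x2 binning (half the bandwidth):", round(snr(binned), 1))
```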

In other words the S/N must be defined under conditions where there are no additional filters which can influence the spectral bandwidth of an image. Moreover, in the case of different sensors we must compare the S/N at the same spectral bandwidth, defined explicitly by the sensor properties. The spectral bandwidth is the difference between the highest frequency the sensor can capture and the lowest frequency. In the ideal case the highest frequency is half of the Nyquist frequency, which is defined by the pixel size, while the lowest frequency is defined by the sensor size. When the sensor size is significantly larger than the pixel size, the lowest frequency can be considered to be zero. Thus the sensor frequency band is defined just by the pixel size, which is the main factor responsible for the true (whole band) S/N.

Again, this ignores the whole concept of normalisation. Saying "the S/N must be defined under conditions where there are no additional filters which can influence the spectral bandwidth of an image" does not make it so. How the SNR 'must be defined' depends on what you want to use the SNR for. In the context of photography, most photographers will want to use SNR as an indicator of how noisy their photos will look. Therefore, when adopting SNR as a photographic metric it makes sense to normalise it to conditions which reflect normal usage. There is space for a discussion about what the adopted normalisation should be, but whatever it is, comparing differently normalised SNRs doesn't make any practical sense.
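
Here is a sketch of what such a normalisation looks like in practice (hypothetical sensors, shot noise only): two sensors of the same size but different pixel counts receive the same exposure; their per-pixel SNRs differ, but once both are resampled to a common output size the SNRs agree, because the total light is the same.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical sensors of the SAME physical size but different pixel counts,
# given the same exposure (photo-electrons per unit of sensor area).
photons_per_area = 4000.0
sensor_area = 1000.0                      # arbitrary area units
common_output_pixels = 8_000              # the normalisation target

for n_pixels in (16_000, 64_000):
    per_pixel = photons_per_area * sensor_area / n_pixels
    pixels = rng.poisson(per_pixel, size=n_pixels).astype(float)

    # Per-pixel SNR: measured over different bandwidths, so not comparable.
    snr_per_pixel = pixels.mean() / pixels.std()

    # Normalised SNR: resample both sensors to the same output pixel count so
    # they are judged over the same spatial bandwidth (same viewing condition).
    resampled = pixels.reshape(common_output_pixels, -1).sum(axis=1)
    snr_normalised = resampled.mean() / resampled.std()

    print(f"{n_pixels} pixels: per-pixel SNR {snr_per_pixel:.1f}, "
          f"normalised SNR {snr_normalised:.1f}")
# The per-pixel figures differ; the normalised figures agree, because the
# total light falling on the (same-sized) sensor is the same in both cases.
```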

In spectral terms, the total light captured by a sensor is just the magnitude of a single spectral component at zero frequency in an image, while an image consists of a huge number of frequency components. Actually, the number of frequency components in the image is equal to the number of pixels in the sensor. For that reason the so-called EQ-"theory", based on the total light, i.e. just the magnitude of a single frequency component at zero frequency, is a misleading concept.

You should check the statements you make for logical consistency against other broadly accepted concepts. 'Total light' is simply exposure multiplied by sensor area, so everything that you say about 'total light' applies equally to exposure, yet exposure has been the central concept in sensitometry since its inception. The value of exposure as typically used for exposure calculations is simply the integration of the exposure at each element of the image, and that exposure does not change with the size or number of the elements. You seem to be suggesting that the value for 'total light' would be dependent on the sampling frequency, which is obviously wrong. If your conjecture were correct, the exposure would also be dependent on the sampling frequency, and it isn't.
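
This is easy to check numerically (a small sketch with an arbitrary made-up scene): integrate the same scene over the same sensor area at two different pixel counts, and both the total light and the zero-frequency (DC) term of the transform come out the same, independent of the sampling frequency.

```python
import numpy as np

# A made-up scene irradiance over a unit-square sensor, sampled at two very
# different pixel counts (i.e. two different spatial sampling frequencies).
def scene(x, y):
    return 100.0 * (1.0 + 0.5 * np.sin(6.0 * x) * np.cos(4.0 * y))

for n in (24, 2000):
    xs = (np.arange(n) + 0.5) / n                 # pixel-centre coordinates
    img = scene(xs[:, None], xs[None, :])
    pixel_area = 1.0 / n**2

    # 'Total light': the exposure integrated over the whole sensor area.
    total_light = img.sum() * pixel_area

    # The zero-frequency (DC) term of the 2D DFT is just the sum of the
    # pixels, so the area-weighted DC term is the very same quantity.
    dc_term = np.fft.fft2(img)[0, 0].real * pixel_area

    print(f"{n}x{n} pixels: total light {total_light:.3f}, DC term {dc_term:.3f}")
# Both sampling rates give the same value (to within discretisation error):
# total light, like exposure, does not depend on the number of pixels.
```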

In terms of the spectral approach, the FF sensor performs better than m43 only because of the larger pixel size (the same number of pixels is assumed).

The assumption makes the pixel size question irrelevant, since if you assume the same number of pixels, pixel area is proportional to sensor area - and you cannot distinguish whether it is sensor size or pixel size which is responsible. If your theory is that it is pixel size, then it is erroneous, and easily disproven with experimental results. If it were correct, a Micro Four Thirds camera would produce the same apparent noisiness as a FF camera with the same pixel size.

Clearly, your proposition does not predict reality.
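
A toy simulation makes the disproof obvious (hypothetical pixel counts, shot noise only assumed): give two sensors identical pixels but different areas, view both at the same output size, and the larger sensor is visibly less noisy, exactly as real cameras of comparable technology behave.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical test of the 'pixel size decides S/N' claim: identical pixels
# (same pitch, same 400 e- per pixel at this exposure), different sensor areas.
electrons_per_pixel = 400
common_output_pixels = 250_000            # view both at the same output size

sensors = {
    "small sensor (1,000,000 identical pixels)": 1_000_000,
    "large sensor (4,000,000 identical pixels)": 4_000_000,
}

for name, n_pixels in sensors.items():
    pixels = rng.poisson(electrons_per_pixel, size=n_pixels).astype(float)

    # Per-pixel SNR is identical, as the pixel-size argument predicts...
    snr_per_pixel = pixels.mean() / pixels.std()

    # ...but at a common output size the larger sensor, which has collected
    # 4x the total light, is roughly 2x cleaner - which is what real cameras
    # of equal technology show.
    out = pixels.reshape(common_output_pixels, -1).sum(axis=1)
    snr_output = out.mean() / out.std()

    print(f"{name}: per-pixel SNR {snr_per_pixel:.1f}, "
          f"same-output-size SNR {snr_output:.1f}")
```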

If someone wants to learn something about S/N then EQ must be forgotten as a "theory". This concept was created just as a practical tool for quickly recalculating FOV and DOF when someone uses cameras of different formats, and it can't be used for anything else.

Again, it is sensible to avoid statements about what people should do if they 'want to learn' until you have done the learning yourself. Providing a set of statements which are very clearly incorrect and easily disproven does not give any confidence in how much learning has been accomplished.

All that I am saying is implemented in my iWE raw processing software, which is free.

It may be a good idea to spend more time on your studies and less on hobby activities, at least until you have mastered this part of the curriculum. In any case, given the number of fallacies and false statements in what you have said, telling people that it is 'implemented' in your software probably isn't likely to give them much confidence in the software.

