A couple of questions for Great Bustard…

But at the limit of my understanding, in the case of a lens with perfect refraction, these conservation principles tell us we really only need to think about what happens on the subject side of the lens. Isn't that correct?
This is a very good way to think about it. Assuming that the lens does what it is supposed to do, you can just look at the light falling on it (to be more precise, on a single element lens with the same aperture). No need to know what is behind the lens or even the FL; except, well, the AOV which is determined by those.
Thanks for the feedback. There was a similar thread some months ago where Roger Clark patiently explained his interpretations of etendue that I found very helpful. At least for me, it's just more intuitive and useful to think in terms of the geometry in front of the lens rather than what goes on behind it.
 
The projected image does become dimmer per square millimeter on the surface it is projected on, but not dimmer per proportion of the photo. For example, one millionth of the projected photo would be made from the same amount of light no matter how far the screen was from the source.
OK, let me rephrase my original question just for the sake of clarity:

Do you think the Inverse Square Law applies to the projected cone of light the lens projects onto the camera’s image plane?
For the sake of clarity can you explain precisely how you are applying the inverse square law? What is the origin of the distance measurement?

Is this for a point source imaged by the lens? Here the intensity varies approximately inversely as the square of the distance from the image plane, sufficiently far from the image. The approximation fails for distances less than the diameter of the Airy disk divided by the lens working numerical aperture - or the aberration-limited image diameter divided by NA if this is larger.

For an extended scene imaged by a small aperture lens, with aperture diameter much less than both image width and focal length, the intensity will decrease with distance from the lens exit pupil. For large distances, the relationship is approximately inverse square - as in your example of a projector illuminating a wall. Again, there is a near field regime where the inverse square relationship clearly fails. For a 5 mm diameter aperture, the intensity does not drop to 1/4 if the distance from the aperture is doubled from 1 mm to 2 mm.
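The near-field breakdown described above can be sketched numerically. As an illustrative assumption (not a claim about any particular lens), model the exit pupil as a uniform Lambertian disk of radius a; the on-axis illuminance at distance d is then proportional to a²/(a² + d²), which only approaches inverse-square behaviour once d ≫ a:

```python
import math

def on_axis_illuminance(a, d):
    """Relative on-axis illuminance from a uniform Lambertian disk of
    radius a at distance d (the constant luminance factor is dropped)."""
    return math.pi * a**2 / (a**2 + d**2)

a = 2.5e-3  # 5 mm diameter aperture, as in the example above

# Near field: doubling the distance from 1 mm to 2 mm
near = on_axis_illuminance(a, 1e-3) / on_axis_illuminance(a, 2e-3)
print(f"near-field ratio: {near:.2f}")  # ~1.41, nowhere near 4

# Far field: doubling the distance from 1 m to 2 m
far = on_axis_illuminance(a, 1.0) / on_axis_illuminance(a, 2.0)
print(f"far-field ratio:  {far:.2f}")   # ~4.00, inverse square holds
```

The same formula reproduces both regimes, so the "failure" of the inverse square law in the near field is just the finite source size, not new physics.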

Are you considering the inverse square law as it applies to a fixed optical configuration, or are you adjusting focus or even focal length between comparisons? Are you only interested in the image plane intensity for different focal lengths, but the same physical aperture diameter (exit pupil)? Alternatively for different focal length but the same relative aperture or F-number?

It is straightforward to calculate the result in each case, but you need to specify the question sufficiently precisely. As others have said, it may be easier to work with conservation of energy or conservation of étendue and luminance, rather than the inverse square law.
Thank you for saying that. It strikes me that a lot of these discussions would be greatly simplified by simply talking about the etendue or luminance. I'm not enough of an expert to know if that deals with all of the issues raised by JACS, especially in the diffraction limit.

But at the limit of my understanding, in the case of a lens with perfect refraction, these conservation principles tell us we really only need to think about what happens on the subject side of the lens. Isn't that correct?
Jeff -- that's a perfectly good place to start.

For a lens with a fixed physical aperture (entrance pupil):

Incident intensity (illuminance) at entrance pupil = source luminance x solid angle subtended by source x cosine(angle of incidence at lens) x aperture area

For a uniform subject and a sufficiently narrow field of view, cos(angle of incidence) is approximately unity, and this becomes a simple multiplication. Otherwise we integrate over the field of view and over the lens entrance pupil.
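As a numerical sketch of the simplified (cos ≈ 1) form of this relation; every quantity below is an invented illustrative value, not taken from the thread:

```python
import math

L_lum = 5000.0    # source luminance, cd/m^2 (assumed value)
omega = 1.0e-4    # solid angle subtended by the subject, sr (assumed)
pupil_d = 25e-3   # entrance pupil diameter, m (assumed)

illuminance = L_lum * omega                  # lux at the entrance pupil
pupil_area = math.pi * (pupil_d / 2) ** 2    # m^2
flux = illuminance * pupil_area              # lumens through the pupil

print(f"illuminance at pupil: {illuminance:.2f} lx")
print(f"flux through pupil:   {flux:.2e} lm")
```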

In any case, for a fixed subject, aperture and field of view, the total incident power is fixed. If there are no losses, all the light is projected onto the image plane, and from simple conservation of energy the intensity at the sensor is inversely proportional to the square of the linear magnification, and so inversely proportional to the square of the distance from the lens rear principal point to the sensor.
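This energy-conservation argument can be checked with a toy calculation. Assuming (as an idealized, lossless sketch) that a fixed total power lands uniformly on an image whose area scales as the square of the linear magnification m:

```python
def sensor_illuminance(total_power, subject_area, m):
    """Idealized: all collected power lands on an image of area
    subject_area * m**2 (lossless, uniform)."""
    image_area = subject_area * m**2
    return total_power / image_area

# Halving the magnification spreads the same power over 1/4 the area,
# so the illuminance quadruples:
e_high_m = sensor_illuminance(1.0, 1.0, 0.02)  # m = 0.02
e_low_m = sensor_illuminance(1.0, 1.0, 0.01)   # m = 0.01
print(e_low_m / e_high_m)  # 4.0
```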

This appears similar to an inverse square dependence, and seems to risk JACS' infinite intensity singularity as the focal length approaches zero. The problem lies in the assumption of fixed aperture and no losses. In practice a properly corrected lens satisfies the Abbe sine condition, and has a spherical principal surface centred on the focus. A consequence is that the radius of the entrance pupil cannot be larger than the focal length of the lens, so we can't maintain the assumption of fixed aperture and no change in total light captured for the shortest focal lengths.

Another consequence (or independent constraint) is that the numerical aperture must be less than unity for a lens in air (corresponding to working f-number > 0.5).

Diffraction limits the spot size (proportional to wavelength/NA), so avoids infinite intensity at the sensor, even for a point source in the field of view.
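For a rough sense of scale, here is a sketch of the diffraction-limited spot using the standard Airy radius 0.61 λ/NA; the wavelength and f-number are assumed example values:

```python
wavelength = 550e-9          # green light, m (assumed)
f_number = 2.0               # working f-number (assumed)
na = 1.0 / (2.0 * f_number)  # paraxial approximation: NA = 1/(2N)

airy_radius = 0.61 * wavelength / na
print(f"Airy radius: {airy_radius * 1e6:.2f} um")  # ~1.34 um
```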

HTH
 
My understanding is that étendue applies quite differently to optical imaging systems (i.e. lenses) than to solar heat collection systems, though it is one of the factors. Even in photography of celestial objects, the light beams have a small angle of incidence.

Now, light passes from air into glass, a denser medium, so the rules of critical angles also apply. Since the front element of a lens is convex, the angle of incidence changes as the tangent to the lens surface keeps changing; we are not dealing with light falling on a flat surface. Lens designers and manufacturers correct further with nano coatings on the front glass.

Lens designers know what the front element size should be, beyond which it is not useful for defining an image. That is the geometry of that particular system.

So even the assumption that a front element of double the diameter gathers four times the light is erroneous when comparing different systems designed for a particular FOV. If that were all there was to it, then a Canon 300 f/4 (a well-made FF lens) with a 2x extender, front element 77 mm, fully open at f/8, should give similar results to another well-made lens, the Olympus 300 f/4 Pro MFT, front element 77 mm, fully open, both at the same FOV, apart from typical transmission losses of 1 or 2% after the light has entered the lens.

Or are we making grave errors in quantifying equivalence itself? We could also run experimental trials with actual equipment available in the market. Earlier, at the launch of the Olympus 300 f/4 a few months back, I suggested comparing the 300 f/4 MFT with a 600 f/4 FF on their respective systems, same scene, same distance, side by side. That would really help informed decision making. Other than DPR, who else is there to offer help and equipment access for such experiments, and to give informed advice? There are many fellas here advocating such informed decision help, and all those bullet points of equivalence would get some testing. I have put forward my views with the limited knowledge I have. All suggestions welcome. - Sanjay
 
Small addendum: for photography, light is filtered out to correct flare and other optical errors to optimize image quality. Further, some of the light is not used to define the image but goes into pleasant-looking bokeh; the thinner the DOF, the more of the light is used for that bokeh. - Sanjay
 
I suggested comparing the 300 f/4 MFT with a 600 f/4 FF on their respective systems, same scene, same distance, side by side.
I wonder if the result of such a test would be representative of what can be expected when the two formats are set up equivalently: the Oly 300/4 cannot go to f/2, and the 600/4 is pushing the envelope of practical photographic lenses.

Imho a better comparison would be 25 mFT vs 50 FF, 42.5 vs 85, 100 vs 200 or 150 vs 300. Beyond that we are no longer comparing base performance but each system's peripheral limitations instead.
 
Somewhat true, but the 300 f/4 offers just enough depth of field for the purpose it is intended for, and that is wildlife: 70% of usage where terrain is a constraint, viz. shore, marshland, aquatic wildlife, or simply risk of attack, and carrying the system outdoors is involved. Further, those lenses are costly in nature. My bit - Sanjay
 
The projected image does become dimmer per square millimeter on the surface it is projected on, but not dimmer per proportion of the photo. For example, one millionth of the projected photo would be made from the same amount of light no matter how far the screen was from the source.
OK, let me rephrase my original question just for the sake of clarity:

Do you think the Inverse Square Law applies to the projected cone of light the lens projects onto the camera’s image plane?
For the sake of clarity can you explain precisely how you are applying the inverse square law? What is the origin of the distance measurement?

Is this for a point source imaged by the lens? Here the intensity varies approximately inversely as the square of the distance from the image plane, sufficiently far from the image. The approximation fails for distances less than the diameter of the Airy disk divided by the lens working numerical aperture - or the aberration-limited image diameter divided by NA if this is larger.

For an extended scene imaged by a small aperture lens, with aperture diameter much less than both image width and focal length, the intensity will decrease with distance from the lens exit pupil. For large distances, the relationship is approximately inverse square - as in your example of a projector illuminating a wall. Again, there is a near field regime where the inverse square relationship clearly fails. For a 5 mm diameter aperture, the intensity does not drop to 1/4 if the distance from the aperture is doubled from 1 mm to 2 mm.

Are you considering the inverse square law as it applies to a fixed optical configuration, or are you adjusting focus or even focal length between comparisons? Are you only interested in the image plane intensity for different focal lengths, but the same physical aperture diameter (exit pupil)? Alternatively for different focal length but the same relative aperture or F-number?

It is straightforward to calculate the result in each case, but you need to specify the question sufficiently precisely. As others have said, it may be easier to work with conservation of energy or conservation of étendue and luminance, rather than the inverse square law.
Thank you for saying that. It strikes me that a lot of these discussions would be greatly simplified by simply talking about the etendue or luminance. I'm not enough of an expert to know if that deals with all of the issues raised by JACS, especially in the diffraction limit.

But at the limit of my understanding, in the case of a lens with perfect refraction, these conservation principles tell us we really only need to think about what happens on the subject side of the lens. Isn't that correct?
Jeff -- that's a perfectly good place to start.

For a lens with a fixed physical aperture (entrance pupil):

Incident intensity (illuminance) at entrance pupil = source luminance x solid angle subtended by source x cosine(angle of incidence at lens) x aperture area

For a uniform subject and a sufficiently narrow field of view, cos(angle of incidence) is approximately unity, and this becomes a simple multiplication. Otherwise we integrate over the field of view and over the lens entrance pupil.

In any case, for a fixed subject, aperture and field of view, the total incident power is fixed. If there are no losses, all the light is projected onto the image plane, and from simple conservation of energy the intensity at the sensor is inversely proportional to the square of the linear magnification, and so inversely proportional to the square of the distance from the lens rear principal point to the sensor.

This appears similar to an inverse square dependence, and seems to risk JACS' infinite intensity singularity as the focal length approaches zero. The problem lies in the assumption of fixed aperture and no losses. In practice a properly corrected lens satisfies the Abbe sine condition, and has a spherical principal surface centred on the focus. A consequence is that the radius of the entrance pupil cannot be larger than the focal length of the lens, so we can't maintain the assumption of fixed aperture and no change in total light captured for the shortest focal lengths.

Another consequence (or independent constraint) is that the numerical aperture must be less than unity for a lens in air (corresponding to working f-number > 0.5).

Diffraction limits the spot size (proportional to wavelength/NA), so avoids infinite intensity at the sensor, even for a point source in the field of view.
Hi Alan,

Your fine mind always amazes me with its comprehensive reaches and application of knowledge. Just a few thoughts offered that (you tell me) may or may not make some sense.

A concept of "point source" seems more a statement about the "method of questioning of Nature" associated with an imaging system's optical perspective. Whether isotropic in radiation pattern or not, it seems to describe a "viewer-isolatable" source (appearing within a recorded image-frame). Such light rays, even if to some extent collimated, cannot be mathematically described as a "point source" - because "nothing is ever a true point source" ... [as] ... "the size of the source must be included in any calculation."

It seems to me perhaps not demonstrably useful (in any way I can think of) to ponder and reason in isolation about the path from the lens-system exit pupil to the image-plane surface.

If, for the (sum) amount of light energy irradiating/illuminating the entire photo-active area (or some sub-portion of it under consideration), the total photon-transduction ("shot") noise dominates over internal Readout Noise in the (input-referred) net, composite imaging-system Signal/Noise Ratio, then (it seems that) Etendue takes into consideration system SNRs existing at any scale within an image-frame.

Etendue (as I understand it) is proportional to the mathematical product of two measures:

[1] The Angular Area (located in object-space) of the imaged subject-matter under consideration - typically rectangular, due to the shape of the image-sensor active-area, if considering a full recorded image-frame, or potentially otherwise shaped when considering a sub-portion of it - determined by the Field (angle) of View of the Chief Rays, which exist as a projection of the considered portion of the Exit Window:

Source: https://ocw.mit.edu/courses/mechanical-engineering/2-71-optics-spring-2009/video-lectures/lecture-6-terms-apertures-stops-pupils-and-windows-single-lens-camera/MIT2_71S09_lec06.pdf

... multiplied by:

[2] The lens-system Entrance Pupil Area as it is determined by the Marginal Rays which exist as a projection of the lens-system's physical Aperture Stop size - independent of a particular (full) physical image-sensor active-area size, or sub-portions being analytically considered:

Source: https://ocw.mit.edu/courses/mechanical-engineering/2-71-optics-spring-2009/video-lectures/lecture-6-terms-apertures-stops-pupils-and-windows-single-lens-camera/MIT2_71S09_lec06.pdf


Such an (Etendue) product "says it all" about the Signal/Noise Ratios achievable by an imaging system in photon-transduction ("shot") noise dominated applications, for a given Depth of Field, and for a given constant-valued Exposure Time of the imaging-system performance analyzed.


DM
 
Further, for telephoto, macro, or wide-angle photography we are using equipment at the extreme end, where these situations are common. No reach is completely satisfactory and no angle is wide enough for our personal requirements. - Sanjay
 
If the light from the scene is not focused, then the ISL applies. For example, if you are twice as far from the scene, the intensity of the light reaching you is 1/4 as great.
Are you saying that the scene is emitting light?
Emitting and/or reflecting. Usually reflecting. Neither here nor there, really.
This is a bit vague. I agree with usually reflecting, though. Where does the light usually originate from in those cases?
Not sure what you're asking. If I'm taking a landscape photo, for example, light emitted by the sun is reflected by the scene.
I agree. In the vast majority of cases the sun is our light source. Is sunlight subject to the Inverse Square Law?
 
... In the vast majority of cases the sun is our light source. Is sunlight subject to the Inverse Square Law?
No. Due to the fact that nuclear fusion takes place within the Sun, it is a special case where the "inverse square law" is theoretically suspended in utter and sustained total disbelief ...

:P
 
It is straightforward to calculate the result in each case, but you need to specify the question sufficiently precisely. As others have said, it may be easier to work with conservation of energy or conservation of étendue and luminance, rather than the inverse square law.
Thank you for saying that. It strikes me that a lot of these discussions would be greatly simplified by simply talking about the etendue or luminance. I'm not enough of an expert to know if that deals with all of the issues raised by JACS, especially in the diffraction limit.

But at the limit of my understanding, in the case of a lens with perfect refraction, these conservation principles tell us we really only need to think about what happens on the subject side of the lens. Isn't that correct?
Jeff -- that's a perfectly good place to start.

For a lens with a fixed physical aperture (entrance pupil):

Incident intensity (illuminance) at entrance pupil = source luminance x solid angle subtended by source x cosine(angle of incidence at lens) x aperture area

For a uniform subject and a sufficiently narrow field of view, cos(angle of incidence) is approximately unity, and this becomes a simple multiplication. Otherwise we integrate over the field of view and over the lens entrance pupil.

In any case, for a fixed subject, aperture and field of view, the total incident power is fixed. If there are no losses, all the light is projected onto the image plane, and from simple conservation of energy the intensity at the sensor is inversely proportional to the square of the linear magnification, and so inversely proportional to the square of the distance from the lens rear principal point to the sensor.

Diffraction limits the spot size (proportional to wavelength/NA), so avoids infinite intensity at the sensor, even for a point source in the field of view.
A concept of "point source" seems more a statement about the "method of questioning of Nature" associated with an imaging system's optical perspective. Whether isotropic in radiation pattern or not, it seems to describe a "viewer-isolatable" source (appearing within a recorded image-frame). Such light rays, even if to some extent collimated, cannot be mathematically described as a "point source" - because "nothing is ever a true point source"
Sure, but we can get very close. Considering how a model of reality handles limiting cases can tell you a lot about how robust both the model and one's understanding are.

The key characteristic of a point source is that it produces a radiation pattern with a spherical wavefront - from which it is trivial to deduce an inverse square variation of intensity if one assumes that energy is conserved.

Folk here https://lasers.llnl.gov/ or here https://www.ligo.caltech.edu/page/optics know a fair amount about minimising real world wavefront distortions. Alternatively, look at the night sky. Stars are large, but practically indistinguishable from point sources if one's telescope objective is less than a few metres across - even in the vacuum of space.

My reply was to Jeff, who in turn responded to JACS' demonstration that a simple unqualified "inverse square law" assumption leads to unrealistic or non-physical conclusions. I was trying to offer a more robust analysis.
True for geometric optics calculations. However, by including diffraction, we still get sensible answers for a point source - provided the total energy entering the lens is finite.
It seems to me perhaps not demonstrably useful (in any way I can think of) to ponder and reason in isolation about the path from the lens-system exit pupil to the image-plane surface.

If, for the (sum) amount of light energy irradiating/illuminating the entire photo-active area (or some sub-portion of it under consideration), the total photon-transduction ("shot") noise dominates over internal Readout Noise in the (input-referred) net, composite imaging-system Signal/Noise Ratio, then (it seems that) Etendue takes into consideration system SNRs existing at any scale within an image-frame.
If you know the total energy reaching the sensor (or part of it), then you can figure out the energy in a smaller area or an individual pixel, and derive the shot-limited SNR.
Etendue (as I understand it) is proportional to the mathematical product of two measures:

[1] The Angular Area (located in object-space) of the imaged subject-matter under consideration - typically rectangular, due to the shape of the image-sensor active-area, if considering a full recorded image-frame, or potentially otherwise shaped when considering a sub-portion of it - determined by the Field (angle) of View of the Chief Rays, which exist as a projection of the considered portion of the Exit Window:

... multiplied by:

[2] The lens-system Entrance Pupil Area as it is determined by the Marginal Rays which exist as a projection of the lens-system's physical Aperture Stop size - independent of a particular (full) physical image-sensor active-area size, or sub-portions being analytically considered:

Such an (Etendue) product "says it all" about the Signal/Noise Ratios achievable by an imaging system in photon-transduction ("shot") noise dominated applications, for a given Depth of Field, and for a given constant-valued Exposure Time of the imaging-system performance analyzed.
Strictly speaking it is not a simple multiplication, but

etendue = (NA^2 x area) = constant

does tell you much of what you need most of the time.

More generally to compute the etendue of the system, one must consider the contribution of each point on the surface of the light source as they cast rays to each point on the receiver. Note also that for each point on the surface and for each direction there is a cosine inclination factor multiplying the product of area and solid angle.

For a circular aperture and a rectangular sensor area, the calculation is somewhat simpler at the sensor, rather than the entrance pupil. In the absence of vignetting:

Etendue = pi x NA^2 x width x height

This will be an over-estimate if there is significant vignetting at the edge of the field of view. An aspect of this phenomenon for wide aperture standard lenses, and for super-wide angle and fish-eye lenses, is that the apparent size and shape of the entrance pupil can vary with viewing angle. This needs to be accounted for if the exact illuminance at the edge of the field is important, but can often be ignored for the more generic arm-waving discussions we have here.
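A quick sketch of this formula applied to two nominally equivalent systems. The format dimensions and apertures below are my own assumed round numbers, and since mFT is not an exact 2x crop of full frame, the two results agree only approximately:

```python
import math

def etendue(na, width, height):
    """pi x NA^2 x sensor area; vignetting ignored, per the text above."""
    return math.pi * na**2 * width * height

# Full frame, 36 x 24 mm sensor, at f/4 -> NA ~ 1/8 (paraxial)
ff = etendue(1 / 8, 36e-3, 24e-3)
# Micro Four Thirds, 17.3 x 13 mm sensor, at f/2 -> NA ~ 1/4
mft = etendue(1 / 4, 17.3e-3, 13e-3)

print(f"FF:    {ff:.3e}")
print(f"mFT:   {mft:.3e}")
print(f"ratio: {mft / ff:.3f}")  # close to 1: roughly equal etendue
```

The near-unity ratio is the quantitative content of "equivalence": same angular field and same entrance pupil give the same étendue, independent of format.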

Regards
 
It is straightforward to calculate the result in each case, but you need to specify the question sufficiently precisely. As others have said, it may be easier to work with conservation of energy or conservation of étendue and luminance, rather than the inverse square law.
Thank you for saying that. It strikes me that a lot of these discussions would be greatly simplified by simply talking about the etendue or luminance. I'm not enough of an expert to know if that deals with all of the issues raised by JACS, especially in the diffraction limit.

But at the limit of my understanding, in the case of a lens with perfect refraction, these conservation principles tell us we really only need to think about what happens on the subject side of the lens. Isn't that correct?
Jeff -- that's a perfectly good place to start.

For a lens with a fixed physical aperture (entrance pupil):

Incident intensity (illuminance) at entrance pupil = source luminance x solid angle subtended by source x cosine(angle of incidence at lens) x aperture area

For a uniform subject and a sufficiently narrow field of view, cos(angle of incidence) is approximately unity, and this becomes a simple multiplication. Otherwise we integrate over the field of view and over the lens entrance pupil.

In any case, for a fixed subject, aperture and field of view, the total incident power is fixed. If there are no losses, all the light is projected onto the image plane, and from simple conservation of energy the intensity at the sensor is inversely proportional to the square of the linear magnification, and so inversely proportional to the square of the distance from the lens rear principal point to the sensor.

Diffraction limits the spot size (proportional to wavelength/NA), so avoids infinite intensity at the sensor, even for a point source in the field of view.
A concept of "point source" seems more a statement about the "method of questioning of Nature" associated with an imaging system's optical perspective. Whether isotropic in radiation pattern or not, it seems to describe a "viewer-isolatable" source (appearing within a recorded image-frame). Such light rays, even if to some extent collimated, cannot be mathematically described as a "point source" - because "nothing is ever a true point source"
Sure, but we can get very close. Considering how a model of reality handles limiting cases can tell you a lot about how robust both the model and one's understanding are.
Yes, and point sources and the accepted models go well together. For the Helmholtz equation, for example, one can choose a point source delta(x), and this is how the Green's functions C*exp(+ikr)/r and C*exp(-ikr)/r are obtained.
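As a small numerical check of this (pure stdlib, finite differences; the wavenumber, test radius, and step size are arbitrary choices), the outgoing Green's function G = exp(ikr)/r does satisfy the radial Helmholtz equation (1/r) d²(rG)/dr² + k²G = 0 away from the origin:

```python
import cmath

k = 2.0                 # arbitrary wavenumber

def u(r):               # u = r * G = exp(i k r)
    return cmath.exp(1j * k * r)

r0, h = 1.0, 1e-4       # test radius and finite-difference step

# Radial Laplacian of G in spherical symmetry: (1/r) d^2(rG)/dr^2
d2u = (u(r0 + h) - 2 * u(r0) + u(r0 - h)) / h**2
G = u(r0) / r0
residual = d2u / r0 + k**2 * G

print(abs(residual))    # ~0, up to finite-difference and rounding error
```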
 
If the light from the scene is not focused, thus, the ISL applies. For example, if you are twice as far from the scene, the intensity of the light reaching you is 1/4 as great.
Are you saying that the scene is emitting light?
Emitting and/or reflecting. Usually reflecting. Neither here nor there, really.
This is a bit vague. I agree with usually reflecting, though. Where does the light usually originate from in those cases?
Not sure what you're asking. If I'm taking a landscape photo, for example, light emitted by the sun is reflected by the scene.
I agree. In the vast majority of cases the sun is our light source. Is sunlight subject to the Inverse Square Law?
Of course it is. Why do you ask?
 
If the light from the scene is not focused, then the ISL applies. For example, if you are twice as far from the scene, the intensity of the light reaching you is 1/4 as great.
Are you saying that the scene is emitting light?
Emitting and/or reflecting. Usually reflecting. Neither here nor there, really.
This is a bit vague. I agree with usually reflecting, though. Where does the light usually originate from in those cases?
Not sure what you're asking. If I'm taking a landscape photo, for example, light emitted by the sun is reflected by the scene.
I agree. In the vast majority of cases the sun is our light source. Is sunlight subject to the Inverse Square Law?
 
If the light from the scene is not focused, then the ISL applies. For example, if you are twice as far from the scene, the intensity of the light reaching you is 1/4 as great.
Are you saying that the scene is emitting light?
Emitting and/or reflecting. Usually reflecting. Neither here nor there, really.
This is a bit vague. I agree with usually reflecting, though. Where does the light usually originate from in those cases?
Not sure what you're asking. If I'm taking a landscape photo, for example, light emitted by the sun is reflected by the scene.
I agree. In the vast majority of cases the sun is our light source. Is sunlight subject to the Inverse Square Law?
Of course it is. Why do you ask?
Of course, I agree. It's just that I find it odd that you should then apply the Inverse Square Law again to the camera-to-scene distance.

The light source is so enormously far away that by the time the light finally reaches us down here on Earth, the relative distances are so great as to nullify any effects of the Inverse Square Law. The light rays are essentially parallel, not diverging. The sunny 16 rule applies whether you're on top of a mountain or in Death Valley, i.e. the Inverse Square Law can be ignored for all general outdoor photography.
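This claim is easy to put in numbers. Using the mean Earth-Sun distance and an assumed 4000 m altitude difference (atmospheric effects ignored), the inverse-square change in solar irradiance is:

```python
au = 1.496e11   # mean Earth-Sun distance, m
dh = 4000.0     # assumed altitude difference, m

ratio = (au / (au - dh)) ** 2
print(f"relative change: {ratio - 1:.1e}")  # ~5e-8, utterly negligible
```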

Yet in the part I've marked in bold above, and in the following quote from your website, you apply the Inverse Square Law to the camera-to-scene distance. Why would you want to do that?

From your website:

“The amount of light from the scene reaching the aperture also depends on how far we are from the scene -- the further away we are, the less of that light that reaches the lens. For example, if we are twice as far away, only 1/4 as much light will fall on the lens in any given time interval.”
 
If the light from the scene is not focused, then the ISL applies. For example, if you are twice as far from the scene, the intensity of the light reaching you is 1/4 as great.
Are you saying that the scene is emitting light?
Emitting and/or reflecting. Usually reflecting. Neither here nor there, really.
This is a bit vague. I agree with usually reflecting, though. Where does the light usually originate from in those cases?
Not sure what you're asking. If I'm taking a landscape photo, for example, light emitted by the sun is reflected by the scene.
I agree. In the vast majority of cases the sun is our light source. Is sunlight subject to the Inverse Square Law?
Of course it is. Why do you ask?
Of course, I agree. It's just that I find it odd that you should then apply the Inverse Square Law again to the camera-to-scene distance.
I don't follow why you find it to be odd.
The light source is so enormously far away that by the time the light finally reaches us down here on Earth, the relative distances are so great as to nullify any effects of the Inverse Square Law. The light rays are essentially parallel, not diverging. The sunny 16 rule applies whether you're on top of a mountain or in Death Valley, i.e. the Inverse Square Law can be ignored for all general outdoor photography.
We are talking about the distance that unfocused light from the scene had to travel to reach the camera, not the distance that the illuminating light traversed.
Yet in the part I've marked in bold above, and in the following quote from your website, you apply the Inverse Square Law to the camera-to-scene distance. Why would you want to do that?

From your website:

“The amount of light from the scene reaching the aperture also depends on how far we are from the scene -- the further away we are, the less of that light that reaches the lens. For example, if we are twice as far away, only 1/4 as much light will fall on the lens in any given time interval.”
That's all correct. I think you are confused because you feel that the distance the light traveled from the illuminating source to the scene has some bearing. Well, it does, in that the further the scene is from the source of the light, the dimmer the scene is. However, what is relevant here is how much of the light coming from the scene makes it to the sensor, not how much of the light from the source makes it to the scene.

Basically, it seems you are trying to relate the brightness of the source to the brightness of the image projected on the sensor, whereas what I'm talking about is how the brightness of the scene relates to the brightness of the image projected on the sensor.
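GB's point can be sketched numerically. Assuming, for simplicity, a point on the scene radiating its light uniformly into a hemisphere (a rough model of my own, not anyone's exact claim), the fraction of that light landing on the lens scales with aperture area divided by the square of the scene distance:

```python
import math

def light_gathered(aperture_diameter_m, scene_distance_m):
    # Fraction of the light from a scene point (assumed to radiate
    # uniformly into a hemisphere) that lands on the lens aperture:
    # aperture area / area of the hemisphere at that distance.
    aperture_area = math.pi * (aperture_diameter_m / 2) ** 2
    hemisphere_area = 2 * math.pi * scene_distance_m ** 2
    return aperture_area / hemisphere_area

# Same 50 mm aperture, scene at 10 m vs. 20 m:
near = light_gathered(0.05, 10.0)
far = light_gathered(0.05, 20.0)
print(far / near)  # 0.25 -- doubling the distance quarters the light on the lens
```

Note that the illuminating source never appears in this calculation; only the scene-to-lens distance and the aperture diameter matter, which is exactly the distinction being drawn above.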
 
“The amount of light from the scene reaching the aperture also depends on how far we are from the scene -- the further away we are, the less of that light that reaches the lens. For example, if we are twice as far away, only 1/4 as much light will fall on the lens in any given time interval.”
That's all correct.
No, that's all wrong.
 
If the light from the scene is not focused, then the ISL applies. For example, if you are twice as far from the scene, the intensity of the light reaching you is 1/4 as great.
Are you saying that the scene is emitting light?
Emitting and/or reflecting. Usually reflecting. Neither here nor there, really.
Just to further what GB indicated above: any time light encounters a reflective surface (like the scene), the reflected wavefront changes shape and direction according to the surface's shape and finish (among other things) as it propagates towards the camera. So in your example, if the sun's rays (which are virtually parallel) hit a small chrome ball, the specular reflection will radiate out in almost all directions, following the ISL very closely. If they hit a flat shiny surface instead, it would not (the beam may stay parallel). If they hit a flat rough surface, the intensity would still fall off with distance, but most likely not at an exactly inverse-square rate. So it's the scene-to-camera interface that matters, and in general, light intensity drops the further away you are. The precise rate depends greatly on the scene's size and reflective properties.
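The contrast between the chrome ball and the flat mirror can be illustrated with a toy model (my own sketch, using idealized reflectors):

```python
import math

# Toy model: a tiny chrome ball scatters a collimated beam into nearly
# all directions, so the reflected intensity falls off roughly as 1/r^2;
# an ideal flat mirror keeps the beam collimated, so the intensity stays
# roughly constant with distance.

def intensity_from_ball(reflected_power_w, distance_m):
    # Reflected power spread over a full sphere of radius `distance_m`.
    return reflected_power_w / (4 * math.pi * distance_m ** 2)

def intensity_from_flat_mirror(beam_power_w, beam_area_m2, distance_m):
    # Ideal collimated beam: same cross-section at any distance,
    # so `distance_m` drops out entirely.
    return beam_power_w / beam_area_m2

r1, r2 = 1.0, 2.0
print(intensity_from_ball(1.0, r2) / intensity_from_ball(1.0, r1))        # 0.25
print(intensity_from_flat_mirror(1.0, 0.01, r2) /
      intensity_from_flat_mirror(1.0, 0.01, r1))                          # 1.0
```

Real rough surfaces fall between these two extremes, which is why the fall-off rate depends on the scene's reflective properties rather than following a single universal law.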
 
