Lens Physics/Optics Question

xinogage

Sorry if this has been posted before, but I am having trouble finding any information on it. Any help would be greatly appreciated.

There is something that has been bothering me for the longest time. It relates to when a lens is stopped down. We all know that as the aperture is decreased, the depth of field is increased, but the amount of available light is also decreased. One thing I can never get my mind around is how it is possible for the image on the sensor, and in the viewfinder, to be exactly the same regardless of the aperture size?

For example, when looking through the lens directly, one sees a circular shape. When one stops the lens down, the circle shrinks. Clearly, the image that the eye can see is no longer the same; the stopped-down view shows much less information. Another way of visualizing it is to imagine looking at a bullseye. When the lens is wide open, the entire bullseye can be seen. When stopped down, just the center point is visible.

However, this is not the case for the view on the sensor/viewfinder. The viewfinder and sensor show the same image, just at different light levels, regardless of the aperture. How can this be? Please, if anyone can shed some light (pun intended) on this matter, I would greatly appreciate it.

--
http://www.flickr.com/photos/xinogage
 
Another way of visualizing it is to imagine looking at a bullseye. When the lens is wide open, the entire bullseye can be seen. When stopped down, just the center point is visible.
If you move your eye (and head) from side to side or up and down, you will be able to see each part of the bullseye. Of course you won't see it all simultaneously.

The camera gets around this by having a sensor which covers a rectangular area. Each pixel is in effect looking through the same small hole, but will see a different part of the subject, since the pixels are spread out over the entire sensor area.

Hope this helps,
Peter
 
Every point of the subject radiates light in every direction (which means it's visible from every side). So one subject point doesn't just shoot a single ray of light at one specific point of the lens; its light hits the entire lens area. It is up to the lens to bend these light rays back into a single point, and it does that for things that are in focus (out-of-focus areas are not rendered as a point, but rather as a circle). Depending on the lens, it can only do this for points close to the optical axis (= tele lens), or it can bend light sufficiently to render points that are very far from the optical axis (= wide angle). Light from points that are outside of this "field of view" hits (the inside of) the lens barrel and is absorbed (unless the light source is very bright, in which case a portion may be reflected and enter the path of light that is inside the field of view --> flare!)

If you close down the aperture, the set of points for which the lens can bend the light sufficiently for it to come out of the exit pupil and hit the sensor (or focusing screen) remains the same. All that changes is that the circles created by out-of-focus areas become smaller, because the aperture is smaller (hence the increased depth of field at smaller apertures).
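To put a rough number on that shrinking circle, here is a little sketch of my own using the thin-lens equation (the function names and figures are purely illustrative, not taken from any reference):

```python
# Thin-lens sketch: stopping down doesn't redirect the remaining rays, it
# narrows the cone of light from each subject point, so the out-of-focus
# circle shrinks in proportion to the aperture diameter. All units in mm.

def image_distance(f, s):
    """Thin-lens equation 1/f = 1/s + 1/v, solved for the image distance v."""
    return f * s / (s - f)

def blur_circle(f, n, s_focus, s_subject):
    """Diameter of the blur circle on the sensor for a point at s_subject
    when the lens is focused at s_focus, at f-number n."""
    d = f / n                                 # aperture (entrance pupil) diameter
    v_sensor = image_distance(f, s_focus)     # where the sensor sits
    v_point = image_distance(f, s_subject)    # where this point would focus
    return d * abs(v_sensor - v_point) / v_point  # similar triangles on the cone

# 50 mm lens focused at 2 m, with a background point at 3 m:
print(blur_circle(50, 2.0, 2000, 3000))  # f/2: ~0.21 mm blur circle
print(blur_circle(50, 8.0, 2000, 3000))  # f/8: ~0.053 mm - exactly 4x smaller
```

Note that the same set of subject points reaches the sensor in both cases; only the circle diameters change, and they scale exactly with the aperture diameter.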
 
There is something that has been bothering me for the longest time. It relates to when a lens is stopped down. We all know that as the aperture is decreased, the depth of field is increased, but the amount of available light is also decreased. One thing I can never get my mind around is how it is possible for the image on the sensor, and in the viewfinder, to be exactly the same regardless of the aperture size?
It isn't. As you stop down, (i) the depth of field increases, and (ii) the intensity of the light decreases - which requires a longer exposure time to let the same amount of light in. If you halve the area of the aperture, you reduce the light intensity by 50%, so you need to open the shutter for twice as long to get a correct exposure.
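A quick numeric sketch of that reciprocity (my own illustration; the function name is made up, not from any library):

```python
import math

def equivalent_shutter(t0, n0, n1):
    """Shutter time at f-number n1 giving the same exposure as t0 at n0."""
    # Light intensity on the sensor scales as 1/N^2, so the exposure time
    # must scale as (n1/n0)^2 to keep the total light constant.
    return t0 * (n1 / n0) ** 2

# 1/100 s at f/4; stopping down one stop (f-number x sqrt(2), which
# halves the aperture area) needs twice the time, i.e. 1/50 s.
print(equivalent_shutter(1 / 100, 4.0, 4.0 * math.sqrt(2)))
```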
For example, when looking through the lens directly, one sees a circular shape. When one stops the lens down, the circle shrinks. Clearly, the image that the eye can see is no longer the same; the stopped-down view shows much less information. Another way of visualizing it is to imagine looking at a bullseye. When the lens is wide open, the entire bullseye can be seen. When stopped down, just the center point is visible.
The size of the image circle may decrease a bit as you stop down, but it is still larger than the area covered by the sensor, so that doesn't make any difference. Mainly, as you stop down, the image circle just gets dimmer. The parts outside the bullseye are still there!
However, this is not the case for the view on the sensor/viewfinder. The viewfinder and sensor show the same image, just at different light levels, regardless of the aperture. How can this be? Please, if anyone can shed some light (pun intended) on this matter, I would greatly appreciate it.
As said above - it's not the same image (the depth of field changes), and any decrease in the size of the image circle goes unnoticed because you are only looking at a small rectangular area in the centre of the circle and you don't see the periphery.

Best wishes
--
Mike
 
However, this is not the case for the view on the sensor/viewfinder. The viewfinder and sensor show the same image, just at different light levels, regardless of the aperture. How can this be? Please, if anyone can shed some light (pun intended) on this matter, I would greatly appreciate it.
The lens is not a window of plain glass. The lens is made of curved glass elements that actually FORM the image on the sensor....

...... and it is important to realise that.....

.... each and every part of the lens receives light from each and every part of the subject, AND THEN each and every part of the lens sends light to each and every part of the sensor.

This means that if you were to cut the lens in half from top to bottom, and blank off the missing part with a metal plate, you would still get a full image on the sensor, but because less light was coming in it would be dimmer. Same thing if you were to cut the lens in half sideways.

It happens that the amount of light is controlled by using a circular blanking plate that "stops" the lens around the outside - the part of the glass where the curves are most extreme, and the correct alignment of the rays most problematic....

Therefore, three things happen at once when we stop down a lens from maximum aperture...
  • 1) The amount of light entering the camera is controlled...
  • 2) Depth of Field is increased.....
  • 3) The least good parts of the lens are removed from the image-forming process, so that general image quality is improved.
I hope this makes sense.

Here is a little piece of homework for your amusement:
  • Take your camera and zoom to a midrange f-length, (neither tele nor wide).
  • Stick a little strip of (black) paper across the centre of the lens... 1/4" wide, say..
  • Select manual focus and look through the viewfinder. In particular, take note of the nature of the out of focus zones, and what happens to them when you operate the focusing ring.....
What do you see? :-)
--
Regards,
Baz
 
.... each and every part of the lens receives light from each and every part of the subject, AND THEN each and every part of the lens sends light to each and every part of the sensor.
  • 3) The least good parts of the lens are removed from the image-forming process, so that general image quality is improved.
It is important that the second point is not misunderstood as a contradiction of the first. Only the extreme edges of the lens are removed from image-formation at small apertures. The practical consequence is that although the lens hood length that will optimally reduce flare without causing vignetting is longer when apertures are small, the difference is not huge.
--
'I am the living proof that blondes are not stupid'. Paris Hilton
 
Hi,

Well, I'd say that with a wide open aperture a lot of light is focused and with a small aperture a lot less light is focused. But regardless of the aperture the light is focused to form an image.

Regards, David
 
Sure, but a lot of folk have the idea that at small apertures only light through the centre of the lens, near and parallel to the optical axis, is used to form the image.
--
'I am the living proof that blondes are not stupid'. Paris Hilton
 
Sure, but a lot of folk have the idea that at small apertures only light through the centre of the lens, near and parallel to the optical axis, is used to form the image.
Hmmm, well, light near and parallel to the axis would form a small dot in the centre of the CCD or film.
--
'I am the living proof that blondes are not stupid'. Paris Hilton
How did you know I had fair to grey hair?

Regards, David
 
Sure, but a lot of folk have the idea that at small apertures only light through the centre of the lens, near and parallel to the optical axis, is used to form the image.
Hmmm, well, light near and parallel to the axis would form a small dot in the centre of the CCD or film.
Isn't it the fact that the light has been bent less with smaller apertures that gives rise to more DOF?
 
Isn't it the fact that the light has been bent less with smaller apertures that gives rise to more DOF?
No. The light paths don't change. When an iris closes it simply blocks some of the light.

Areas of the scene that are out of focus make circles on the image plane, instead of points. That's why they look out of focus. When the iris closes it blocks the light that makes up the outermost area of those circles... the circles get smaller. That makes the image look more focused.

 
Graystar's statements are entirely in line with the above tutorial, which illustrates what he was saying quite nicely.

The circles (of confusion) to which he referred are clearly shown... as is the fact that the smaller they get, the more like [real] points they become... (shown ranged either side of focused point in the second diagram under the table picture).

Err.... Are you still having trouble with this?
--
Regards,
Baz
 
Err.... Are you still having trouble with this?
I'm not sure. I made the comment that with smaller apertures the light entering was being bent less hence was more parallel and that gave rise to more DOF, and subsequently found DEPTH OF FOCUS & APERTURE VISUALIZATION on http://www.cambridgeincolour.com/tutorials/depth-of-field.htm which appears to confirm my thoughts.

Graystar (and you) appear to be saying:
No. The light paths don't change. When an iris closes it simply blocks some of the light.
So I thought I had understood a concept and am being told I don't.
 
I think there's some confusion between the two of you :)

If you stop down, only the central area of the lens will transmit light. This is also the area where light is bent the least on its way from source to sensor. But the fact that the light is "bent less" is not, by itself, what increases the depth of field; the cause of the increased depth of field is the decreased size of the circle of confusion, which is directly linked to the size of the aperture.
 
Err.... Are you still having trouble with this?
I'm not sure. I made the comment that with smaller apertures the light entering was being bent less hence was more parallel and that gave rise to more DOF, and subsequently found DEPTH OF FOCUS & APERTURE VISUALIZATION on http://www.cambridgeincolour.com/tutorials/depth-of-field.htm which appears to confirm my thoughts.

Graystar (and you) appear to be saying:
No. The light paths don't change. When an iris closes it simply blocks some of the light.
Aha! I think I'm now seeing where you are coming from, and no, your conceptualisation is not "wrong", exactly....

However, none of the light rays ARE changed or re-directed by stopping down.

What happens is that the composition of the bundle of rays issuing from the lens towards the sensor is altered... and it is done by "stopping" those at the periphery from reaching the sensor altogether - and these edge rays are the ones being bent most steeply by the angled surfaces of the lens. Result: those rays which are less divergent when reaching the sensor now PREDOMINATE in the bundle coming from the lens....

.... and this has the effect of making the light appear more (generally) parallel to the lens' axis.
  • It is a subtle difference, so I can understand your confusion. :-)
However, that is not what makes for extra DoF. That is down to smaller circles of confusion.
--
Regards,
Baz
 
I'm not sure. I made the comment that with smaller apertures the light entering was being bent less hence was more parallel and that gave rise to more DOF, and subsequently found DEPTH OF FOCUS & APERTURE VISUALIZATION on http://www.cambridgeincolour.com/tutorials/depth-of-field.htm which appears to confirm my thoughts.

Graystar (and you) appear to be saying:
No. The light paths don't change. When an iris closes it simply blocks some of the light.
So I thought I had understood a concept and am being told I don't.
I think I see the problem. The drawing from CIC is incorrect. In the "Large aperture"/ "Small aperture" drawings, I believe the intent was to show the aperture only. However, by having the light paths from the point bend at the aperture...it appears to be a lens. That's confusing.

Take the "Large aperture" drawing and just add a donut between the lens and the sensor. This blocks the light on the outer edge only...allowing the light in the middle to come through. Then it will be correct, and similar to the "Small aperture" drawing.

 
Thanks all for your input. Those diagrams, although useful, do not explain what I was trying to understand. I have seen these diagrams a million times, and the theory is never a problem for me.

The simplest and best answer so far goes to the person who said: all you have to do is move your eyes around the tiny point, and you will see more of the image projected. Thanks.
--
http://www.flickr.com/photos/xinogage
 
Thanks all for your input. Those diagrams, although useful, do not explain what I was trying to understand. I have seen these diagrams a million times, and the theory is never a problem for me.

The simplest and best answer so far goes to the person who said: all you have to do is move your eyes around the tiny point, and you will see more of the image projected. Thanks.
--
http://www.flickr.com/photos/xinogage
That may have been my input.

Actually, and with respect, it sounds like the theory is a problem. Reminds me of one of my physics teachers at school. He knew the stuff inside out and back to front, could teach it and pass all the exams. But he didn't actually understand it.

Regards,
Peter
 
