Is there a theoretical limit to aperture?

Knoxis

Leading Member
Messages
628
Solutions
1
Reaction score
221
Hi,

So I know that when one opens up the aperture, say from F4 to F2.8, you get one more stop of light, which basically means the resultant exposure will be twice as bright at the same shutter speed and ISO. However, I would like to know if there is a theoretical limit to how much light an aperture can let in. Based on a video posted on YouTube by Matt Granger, there is technically no theoretical limit to how big the aperture can get: you could get an F0.01 lens if the money and resources were available. However, I don't see how one could keep increasing the aperture and keep increasing the exposure at the same time. One can see light levels change throughout the day, so technically there is a finite amount of light present in a scene. So is there a theoretical limit to how far one can open the aperture before no more light can be let in? For example, is it possible for all the receivable light in the scene to be let in through a lens, so that opening it one more stop lets in no more light, as there is technically none left to let in? There can't possibly be no limit to the light that can be let in if there is no light left to let in, right?

What are your thoughts?
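For reference, the stop arithmetic in the question can be sketched numerically (a quick Python sketch; the helper name is mine, not standard):

```python
import math

def stops_between(n_from, n_to):
    """Full stops gained when opening from f-number n_from to n_to.

    Each full stop is a factor of sqrt(2) in f-number, i.e. a factor
    of 2 in light-gathering area (and hence in exposure at a fixed
    shutter speed and ISO).
    """
    return 2 * math.log(n_from / n_to, 2)

# Opening from F4 to F2.8 gains about one stop: twice the light.
print(round(stops_between(4.0, 2.8), 2))
```

F2.8 is the rounded marked value; the exact one-stop neighbour of f/4 is f/(2*sqrt(2)) ≈ f/2.83, which is why the result comes out slightly above 1.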
 
Hello,

It doesn't seem like you guys are listening to everything I wrote above. From my original post and others in the past, F# = f/d as well as sin(theta) = 1 / (2*F#) = NA are all self-consistent. Please reread what I've written.

Chris
 
Hello,

It doesn't seem like you guys are listening to everything I wrote above. From my original post and others in the past, F# = f/d as well as sin(theta) = 1 / (2*F#) = NA are all self-consistent. Please reread what I've written.

Chris
I have. And I find it very instructive and useful. Thanks for it.

Doesn't sin(theta) = 1/ (2* F#) require the use of the paraxial assumption?

[If H = the hypotenuse of that triangle then sin(theta) = (D/2)/H which would equal 1/(2*F#), if and only if, H=f (which is only paraxially true) and if we define F# as f/D]

I am not questioning the above relations for small theta. I am asking if there are practical situations where it is necessary to go back to the "more general", (Nakamura) form to avoid the paraxial assumption in the case where thin lens approximations are getting invalid, perhaps in looking at DoF when using close-up lenses.

What am I missing?
 
Hello,

It doesn't seem like you guys are listening to everything I wrote above. From my original post and others in the past, F# = f/d as well as sin(theta) = 1 / (2*F#) = NA are all self-consistent. Please reread what I've written.

Chris
I have. And I find it very instructive and useful. Thanks for it.

Doesn't sin(theta) = 1/ (2* F#) require the use of the paraxial assumption?
No, the only thing that's required is a well corrected lens and the spherical principal surface.
[If H = the hypotenuse of that triangle then sin(theta) = (D/2)/H which would equal 1/(2*F#), if and only if, H=f (which is only paraxially true) and if we define F# as f/D]
If you draw 'that triangle' correctly, as I have described, then H = f. That's the key: f is the hypotenuse of the triangle, and this, along with f/d, holds regardless of whether the F# is slow or fast.
I am not questioning the above relations for small theta. I am asking if there are practical situations where it is necessary to go back to the "more general", (Nakamura) form to avoid the paraxial assumption in the case where thin lens approximations are getting invalid, perhaps in looking at DoF when using close-up lenses.

What am I missing?

--
Tom
The best part of growing old is having the opportunity to do so.
https://brtthome.wordpress.com/
 
...there is technically no theoretical limit to how big aperture can get. You could get an F0.01 lens if the money and resources were available.
Joe is correct: the physical lower limit on f-number (N) is 0.5. The reason is that f/D is only an approximation, valid when the opening angle theta' is small. The actual definition of f-number in air is

N = 1/[2sin(theta')]

from which it becomes obvious that N can never be less than 0.5, as explained in more detail by Nakamura.
This does not take the index of refraction into account, and it does not explain why we cannot have a lens faster than 1/2 in the f/D sense, like a 50mm (single-element) lens with a 1m diameter. I believe the answer is that such a lens would not be able to focus rays in an acceptable way, but I have not seen a good exposition. For a single lens element that should be doable, but I am not sure about a multi-element one.
Hi, Jack, Nakamura correctly writes down the formula for F#, but as JACS points out doesn't say where it comes from. It comes from a well corrected lens satisfying the Abbe sine condition, thereby making the principal surface of the lens curved (spherical).

Most of the time, I see lenses being drawn with flat principal surfaces, but as you go faster, the curvature of the principal surface begins to show up (brian used to write about this back in the day). So you can easily draw the situation as follows: draw a horizontal line as your optical axis, and mark the image focus point on it on the right. Then take a compass, set it to your focal length f, and, centered on the focus point, sweep out an arc above and below the axis. Now, draw a line parallel to the axis and a distance r away from the axis, where r = d/2 is the radius of your entrance pupil. Mark the point where this line intersects the arc; the arc from the axis to this point is half of the principal surface. From this point, draw a ray to the focus point: this is the marginal ray and defines theta', and it is also of length f because it lies on the circle. From this layout you can immediately see that r/f = sin(theta').

Now, how big can this curve get? Well, if you keep increasing r, you see that the maximum it can reach is a forward-facing hemisphere with r = f, and so f/d (which still holds, by the way) has its minimum value of F# = f/d = f/(2f) = 0.5!
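The compass construction above can be mirrored in a few lines of Python (a sketch under the same assumptions: a well corrected lens with a spherical principal surface; the function name is mine):

```python
def f_number(f, r):
    """F# for a lens whose principal surface is a sphere of radius f,
    with entrance-pupil radius r = d/2, so that sin(theta') = r/f."""
    if r > f:
        raise ValueError("the pupil cannot outgrow the hemisphere: r <= f")
    sin_theta = r / f                  # marginal ray lies on the circle of radius f
    return 1.0 / (2.0 * sin_theta)     # identical to f/d = f/(2r)

print(f_number(50.0, 25.0))  # 50mm lens, 50mm pupil diameter: F# = 1.0
print(f_number(50.0, 50.0))  # the hemisphere limit r = f: F# = 0.5
```

Pushing r past f raises an error, which is the geometric way of saying there is no F# below 0.5.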

I wrote about this more here (with a reference on where you can see a curved principal surface):

http://www.dpreview.com/forums/post/39973191

Chris
Hi Chris,

Thanks for such a clear and elegant explanation and pointers to additional reading. Much appreciated.

Thinking about this, at the limit N = 0.5 the principal surface would be a hemisphere with the bulbous end facing the subject. So the marginal ray (hope I'm using the terminology correctly) would enter the lens tangent to the principal surface. Clearly that's a pretty extreme situation.

If I remember an undergraduate physics lab correctly from years ago, Brewster's angle is the point at which at which a polarized component of entering ray of light would be reflected off the surface of the lens. If one wanted to avoid losing light, wouldn't that set up a practical upper limit on theta, and therefore a practical lower limit on N?
 
Hello,

It doesn't seem like you guys are listening to everything I wrote above. From my original post and others in the past, F# = f/d as well as sin(theta) = 1 / (2*F#) = NA are all self-consistent. Please reread what I've written.

Chris
I have. And I find it very instructive and useful. Thanks for it.

Doesn't sin(theta) = 1/ (2* F#) require the use of the paraxial assumption?
No, the only thing that's required is a well corrected lens and the spherical principal surface.
[If H = the hypotenuse of that triangle then sin(theta) = (D/2)/H which would equal 1/(2*F#), if and only if, H=f (which is only paraxially true) and if we define F# as f/D]
If you draw 'that triangle' correctly, as I have described, then H = f. That's the key: f is the hypotenuse of the triangle, and this, along with f/d, holds regardless of whether the F# is slow or fast.
OK! "then H = f"

Thanks a lot. This serves me right, trying to visualize your construction instead of getting a piece of paper and a compass out! The old grey cells ain't what they used to be!
I am not questioning the above relations for small theta. I am asking if there are practical situations where it is necessary to go back to the "more general", (Nakamura) form to avoid the paraxial assumption in the case where thin lens approximations are getting invalid, perhaps in looking at DoF when using close-up lenses.

What am I missing?
 
As discussed above, there is the f/0.5 limit, although it's more practically an f/1 (f/0.95) limit. However, with Equivalence, we can go as low as we want.
Exactly, and that's why people who write down a ton of formulas sometimes miss the big picture.

As you well know, F-stop is relative to sensor size. So, like your example, you can get F/0.1 equivalent performance on a cell phone by using F1.2 or so on a large format sensor, even though you can't actually design a F/0.1 cell phone lens. Limiting f-stop only has a practical implication if you're stuck with a given sensor size. Assuming you can choose the sensor, DOF and light gathering are NOT limited by the theoretical F-stop.

F-stop is a pretty useless number.
These types of dimensionless parameter come about because people working in the field use them regularly in calculations; but if you mis-use them through lack of understanding, then of course you are on your own.

Joe
 
As discussed above, there is the f/0.5 limit, although it's more practically an f/1 (f/0.95) limit. However, with Equivalence, we can go as low as we want.
Exactly, and that's why people who write down a ton of formulas sometimes miss the big picture.

As you well know, F-stop is relative to sensor size. So, like your example, you can get F/0.1 equivalent performance on a cell phone by using F1.2 or so on a large format sensor, even though you can't actually design a F/0.1 cell phone lens. Limiting f-stop only has a practical implication if you're stuck with a given sensor size. Assuming you can choose the sensor, DOF and light gathering are NOT limited by the theoretical F-stop.

F-stop is a pretty useless number.
Folks, this kind of misunderstanding is the exact reason why it is useful to understand that f/D is just an approximation, and that the actual definition of f-number for practical photographic lenses in air is 1/[2sin(theta')]. Equivalence is based on the approximation and fails if used when the approximation no longer applies.

Jack
 
As discussed above, there is the f/0.5 limit, although it's more practically an f/1 (f/0.95) limit. However, with Equivalence, we can go as low as we want.
Exactly, and that's why people who write down a ton of formulas sometimes miss the big picture.

As you well know, F-stop is relative to sensor size. So, like your example, you can get F/0.1 equivalent performance on a cell phone by using F1.2 or so on a large format sensor, even though you can't actually design a F/0.1 cell phone lens. Limiting f-stop only has a practical implication if you're stuck with a given sensor size. Assuming you can choose the sensor, DOF and light gathering are NOT limited by the theoretical F-stop.

F-stop is a pretty useless number.
Folks, this kind of misunderstanding is the exact reason why it is useful to understand that f/D is just an approximation, and that the actual definition of f-number for practical photographic lenses in air is 1/[2sin(theta')]. Equivalence is based on the approximation and fails if used when the approximation no longer applies.

Jack
I agree that there is a lot of misunderstanding about f/D, including the one you reply to. I contributed to some misunderstanding by wrongly interpreting Nakamura's diagram (page 22) and statement (on page 25).

I have followed cpw's discussion and now realize that f/D == 1/[2sin(theta')] for lenses that meet the Abbe sine condition, where the principal plane is NOT planar but a spherical surface, since in that case f = r, the radius of that principal (spherical) surface.

There are some references that I quote in my reply to gollywop here.

So for lenses that meet the Abbe sine condition, there is no paraxial approximation and f/D == 1/[2sin(theta')]. For other lenses it seems to be the convention to use principal planes in calculations of thick-lens parameters, thus requiring the paraxial assumption.

I've read (and enjoyed) http://www.strollswithmydog.com/equivalence-focal-length-fnumber-diffraction/ but don't see how you can say:
Equivalence is based on the approximation and fails if used when the approximation no longer applies.
Maybe you are saying equivalence "fails" when it is interpreted (as tko does) so that an equivalent f/# is a property of the lens itself, which, of course, it isn't. Here the failure is one of understanding what equivalence means, not the applicability of f/D as an approximation. No?

Or perhaps you are referring to this exchange: http://www.dpreview.com/forums/post/57350201 (Which has to include the image circle in the discussion. And once you include the image circle you are bound to respect sin(theta'), I think.)

Many thanks.
 
A note on this subject a long while ago (if memory serves me) by Bob indicated that the issue is with the index of refraction of modern glass. He intimated that novel materials might overcome the problem.
I believe the 0.5 limit is a hard limit set by the fact that sin(x) is never greater than 1 (for real x). Approaching the limit in a real lens may well be a question of the available high-refractive-index glass. I guess diffractive optics can approach the limit more closely, but at the cost of other problems.

Joe
Check out gradient index lenses (GRIN). I've read numerical apertures <0.2 are being produced. A google search should find them.
Here are GRIN lenses with NA 0.46: http://www.thorlabs.de/newgrouppage9.cfm?objectgroup_id=7176

Or from Edmund Optics (though NA and f-number are not entirely consistent).

FWIW, Thorlabs moulded Aspheric condenser lenses are available with NA up to 0.79, or NA up to 0.61 with better surface quality. Aberrations of these lenses will degrade for off-axis imaging.
 
A note on this subject a long while ago (if memory serves me) by Bob indicated that the issue is with the index of refraction of modern glass. He intimated that novel materials might overcome the problem.
I believe the 0.5 limit is a hard limit set by the fact that sin(x) is never greater than 1 (for real x). Approaching the limit in a real lens may well be a question of the available high-refractive-index glass. I guess diffractive optics can approach the limit more closely, but at the cost of other problems.

Joe
This is my impression as well. However, maybe there are image forming technologies that we have not yet considered as viable. For example, simple non-image forming technology can focus the sunlight into a "single small area" where the "small area" temperature is greater than the surface temperature of the sun.
Do you have a reference? This seems incompatible with the second law of thermodynamics. :-(
 
A note on this subject a long while ago (if memory serves me) by Bob indicated that the issue is with the index of refraction of modern glass. He intimated that novel materials might overcome the problem.
I believe the 0.5 limit is a hard limit set by the fact that sin(x) is never greater than 1 (for real x). Approaching the limit in a real lens may well be a question of the available high-refractive-index glass. I guess diffractive optics can approach the limit more closely, but at the cost of other problems.

Joe
Check out gradient index lenses (GRIN). I've read numerical apertures <0.2 are being produced. A google search should find them.
Here are GRIN lenses with NA 0.46: http://www.thorlabs.de/newgrouppage9.cfm?objectgroup_id=7176

Or from Edmund Optics (though NA and f-number are not entirely consistent).

FWIW, Thorlabs moulded Aspheric condenser lenses are available with NA up to 0.79, or NA up to 0.61 with better surface quality. Aberrations of these lenses will degrade for off-axis imaging.
Thanks for the references, Alan. f-number (N) and Numerical Aperture (NA) are related by N = 1/(2NA). So:

NA < 0.2 corresponds to f-numbers > 2.5
NA = 0.46, f-number = 1.09
NA = 0.79, f-number = 0.63

For a lens to break the reality barrier it needs to have an NA > 1.

Jack
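Jack's conversions above can be reproduced with a one-line helper (a sketch; valid for a lens in air, where NA = sin(theta') cannot exceed 1; the function name is mine):

```python
def na_to_fnumber(na):
    """f-number N = 1/(2*NA) for a lens in air (0 < NA <= 1)."""
    if not 0 < na <= 1:
        raise ValueError("in air, NA must lie in (0, 1]")
    return 1.0 / (2.0 * na)

# NA = 1 is the in-air ceiling, which recovers the f/0.5 limit.
for na in (0.2, 0.46, 0.79, 1.0):
    print(na, round(na_to_fnumber(na), 2))
```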
 
...don't see how you can say:
Equivalence is based on the approximation and fails if used when the approximation no longer applies.
Hello Tom,

Just like the incorrect f/# = f/D identity gives the false impression that one can have an arbitrarily low f/# simply by choosing an arbitrarily large aperture diameter (D), with Equivalence it gives the false impression that, relative to a reference lens, one can have an arbitrarily low equivalent f/# by reducing focal length and sensing diagonal by the same arbitrary factor. Since D is unchanged and the focal length is now 1/24th, so must be the f/#, all other things being equal, the reasoning goes. No: garbage in, garbage out.

Here is a thought experiment. The first assumption of Equivalence is that equivalent images must have the same Angle of View. Most photographic imaging systems can be represented by some version of the following diagram:

Simplified thin lens diagram in air.


Now shorten the focal length to its absolute physical minimum, where the sensing area touches the surface of the thin lens. The sensing area is sized to fit correctly within the angle of view. N by the correct definition is greater than 0.5 because theta' is very close to but still less than 90 degrees. Can such a setup form a reasonable image on the sensor? Even if it could, what would be the actual effective aperture diameter (D')? The actual effective Exposure? Now make D ten times bigger, all other things equal. Has N become 10 times smaller? Has Exposure become 100 times smaller?

Note that the diagram is dimensionless, so your deductions are valid for lens/sensor combinations of any size. That's why there can be no practical 25mm/0.1-equivalent lens for us photographers, and the use of equivalence is best limited to a useful rule of thumb for back-of-the-envelope comparisons of affordable camera/lens combinations.

Jack
 
A note on this subject a long while ago (if memory serves me) by Bob indicated that the issue is with the index of refraction of modern glass. He intimated that novel materials might overcome the problem.
I believe the 0.5 limit is a hard limit set by the fact that sin(x) is never greater than 1 (for real x). Approaching the limit in a real lens may well be a question of the available high-refractive-index glass. I guess diffractive optics can approach the limit more closely, but at the cost of other problems.

Joe
Check out gradient index lenses (GRIN). I've read numerical apertures <0.2 are being produced. A google search should find them.
Here are GRIN lenses with NA 0.46: http://www.thorlabs.de/newgrouppage9.cfm?objectgroup_id=1209 (EDIT: corrected link)

Or from Edmund Optics (though NA and f-number are not entirely consistent).

FWIW, Thorlabs moulded Aspheric condenser lenses are available with NA up to 0.79, or NA up to 0.61 with better surface quality. Aberrations of these lenses will degrade for off-axis imaging.
Thanks for the references, Alan. f-number (N) and Numerical Aperture (NA) are related by N = 1/(2NA). So:

NA < 0.2 corresponds to f-numbers > 2.5
NA = 0.46, f-number = 1.09
NA = 0.79, f-number = 0.63
Jack, I am well aware of the theoretical relationship.

I was pointing out that the Edmund numbers are not entirely consistent with this formula. For example, #64-521, #64-523, #64-525, all have the same (810 nm) design wavelength and same nominal numerical aperture NA = 0.55, but f-numbers are 0.90, 0.93, 0.95.

As far as I can tell, Edmund's f-numbers are (effective focal length)/(outer diameter of lens). These values are broadly consistent with the specified gradient constants for the 1/4 pitch lenses. Numerical apertures calculated from the gradient constants are 0.55, 0.54, and 0.53 for the 0.5, 1.0 and 1.8 mm diameter 810 nm 1/4 pitch lenses, and these values are more consistent with the specified f-numbers.

Note that the link to the Thorlabs GRIN lenses (above) is: http://www.thorlabs.de/newgrouppage9.cfm?objectgroup_id=1209

In contrast, for their aspheric lenses Thorlabs specify f-number as Focal_length / Outer_diameter, while Numerical Aperture is the sine of the marginal ray through the edge of the clear aperture (which is smaller than the outer diameter). Arguably their f-number is incorrect on two counts (wrong formula and over-optimistic aperture), but at least they state clearly how it is defined, and they provide Zemax and AutoCAD models for those who can use them.
For a lens to break the reality barrier it needs to have an NA > 1.
It need not break reality if the rear element is bonded directly to the sensor stack. Not very practical for interchangeable lenses, but it could be done for a fixed lens camera, or something like the Ricoh GXR system. NA ~ 1.3 is common for oil-immersion microscope objectives, such as those from Olympus or Nikon.

Cheers,
 
...don't see how you can say:
Equivalence is based on the approximation and fails if used when the approximation no longer applies.
Hello Tom,

Just like the incorrect f/# = f/D identity gives the false impression that one can have an arbitrarily low f/# simply by choosing an arbitrarily large aperture diameter (D), with Equivalence it gives the false impression that, relative to a reference lens, one can have an arbitrarily low equivalent f/# by reducing focal length and sensing diagonal by the same arbitrary factor. Since D is unchanged and the focal length is now 1/24th, so must be the f/#, all other things being equal, the reasoning goes. No: garbage in, garbage out.

Here is a thought experiment. The first assumption of Equivalence is that equivalent images must have the same Angle of View. Most photographic imaging systems can be represented by some version of the following diagram:

Simplified thin lens diagram in air.


Now shorten the focal length to its absolute physical minimum, where the sensing area touches the surface of the thin lens.
Nakamura definition: F# == 1/[2sin(theta')] == 1/[2(D/2)/r] == r/D

As the focal length f is shortened, theta' -> 90º, sin(theta') -> 1, the Nakamura definition of F# -> 1/2, and r -> D/2
The sensing area is sized to fit correctly within the angle of view.
Hmm. How to correctly size the sensing area, (diagonal):

If A = (Angle of View); S = sensing diagonal in the image plane (length of the vertical red line in above diagram) and L = distance from principal plane to sensor (image plane).

Then, by similar triangles (S/2):(L-f) = (D/2):f and S/(L-f) = D/f

Or (S/D) =(L-f)/f

Or S = D(L-f)/f

Hold D & f constant, S increases linearly with L. (Are "flange distances" generally a linear function of format size (diagonal)?)

Hold D and L constant, reduce f, then S = DL/f -D (S decreases hyperbolically with f, to asymptote, -D)

At S=0, L=f which makes sense. The equation appears invalid for L<f
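Tom's similar-triangle relation can be checked with a tiny script (variable and function names are mine; units are whatever D, f and L share):

```python
def sensing_diagonal(D, f, L):
    """S = D*(L - f)/f, from the similar triangles (S/2):(L-f) = (D/2):f.

    D: aperture diameter, f: focal length,
    L: distance from principal plane to image plane (valid for L >= f).
    """
    if L < f:
        raise ValueError("relation is invalid for L < f")
    return D * (L - f) / f

print(sensing_diagonal(25.0, 50.0, 50.0))   # L = f  -> S = 0, as noted above
print(sensing_diagonal(25.0, 50.0, 100.0))  # L = 2f -> S = D
```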
N by the correct definition is greater than 0.5 because theta' is very close to but still less than 90 degrees.
Right!
Can such a setup form a reasonable image on the sensor?
The sensor would be very small, so probably not.
Even if it could, what would be the actual effective aperture diameter (D')?
I guess that D' would be smaller than D. Since the rays from the circumference of the lens would probably suffer from geometric effects at the sensels.
The actual effective Exposure?
Decreased for reasons above.
Now make D ten times bigger, all other things equal. Has N become 10 times smaller? Has Exposure become 100 times smaller?
Nope.
Note that the diagram is dimensionless, so your deductions are valid for lens/sensor combinations of any size. That's why there can be no practical 25mm/0.1-equivalent lens for us photographers, and the use of equivalence is best limited to a useful rule of thumb for back-of-the-envelope comparisons of affordable camera/lens combinations.
We'll leave it to DM to calculate the limits of the "rule of thumb" approximations. He is good at that stuff. I reached my limits (of grade 8 geometry) when working through your example above.
Thanks! Very helpful.

--
Tom
The best part of growing old is having the opportunity to do so.
 
alanr0 wrote: Jack, I am well aware of the theoretical relationship.

I was pointing out that the Edmund numbers are not entirely consistent with this formula. For example, #64-521, #64-523, #64-525, all have the same (810 nm) design wavelength and same nominal numerical aperture NA = 0.55, but f-numbers are 0.90, 0.93, 0.95.

As far as I can tell, Edmund's f-numbers are (effective focal length)/(outer diameter of lens). These values are broadly consistent with the specified gradient constants for the 1/4 pitch lenses. Numerical apertures calculated from the gradient constants are 0.55, 0.54, and 0.53 for the 0.5, 1.0 and 1.8 mm diameter 810 nm 1/4 pitch lenses, and these values are more consistent with the specified f-numbers.

Note that the link to the Thorlabs GRIN lenses (above) is: http://www.thorlabs.de/newgrouppage9.cfm?objectgroup_id=1209

In contrast, for their aspheric lenses Thorlabs specify f-number as Focal_length / Outer_diameter, while Numerical Aperture is the sine of the marginal ray through the edge of the clear aperture (which is smaller than the outer diameter). Arguably their f-number is incorrect on two counts (wrong formula and over-optimistic aperture), but at least they state clearly how it is defined, and they provide Zemax and AutoCAD models for those who can use them.
For a lens to break the reality barrier it needs to have an NA > 1.
It need not break reality if the rear element is bonded directly to the sensor stack. Not very practical for interchangeable lenses, but it could be done for a fixed lens camera, or something like the Ricoh GXR system. NA ~ 1.3 is common for oil-immersion microscope objectives, such as those from Olympus or Nikon.
Well said Alan. My comments in this thread all refer to practical photographic lenses in air.

Jack
 
...there is technically no theoretical limit to how big aperture can get. You could get an F0.01 lens if the money and resources were available.
Joe is correct: the physical lower limit on f-number (N) is 0.5. The reason is that f/D is only an approximation, valid when the opening angle theta' is small. The actual definition of f-number in air is

N = 1/[2sin(theta')]

from which it becomes obvious that N can never be less than 0.5, as explained in more detail by Nakamura.
This does not take the index of refraction into account, and it does not explain why we cannot have a lens faster than 1/2 in the f/D sense, like a 50mm (single-element) lens with a 1m diameter. I believe the answer is that such a lens would not be able to focus rays in an acceptable way, but I have not seen a good exposition. For a single lens element that should be doable, but I am not sure about a multi-element one.
Hi, Jack, Nakamura correctly writes down the formula for F#, but as JACS points out doesn't say where it comes from. It comes from a well corrected lens satisfying the Abbe sine condition, thereby making the principal surface of the lens curved (spherical).

Most of the time, I see lenses being drawn with flat principal surfaces, but as you go faster, the curvature of the principal surface begins to show up (brian used to write about this back in the day). So you can easily draw the situation as follows: draw a horizontal line as your optical axis, and mark the image focus point on it on the right. Then take a compass, set it to your focal length f, and centered on the focus point, sweep out an arc above and below the axis. Now, draw a line parallel ...
Did you possibly intend (directly above) to write "draw a line perpendicular ..." ?
... to the axis and a distance r away from the axis, where r = d/2 is the radius of your entrance pupil. Mark the point where this line intersects the arc; the arc from the axis to this point is half of the principal surface. From this point, draw a ray to the focus point: this is the marginal ray and defines theta', and it is also of length f because it lies on the circle. From this layout you can immediately see that r/f = sin(theta').

Now, how big can this curve get? Well, if you keep increasing r, you see that the maximum it can reach is a forward-facing hemisphere with r = f, and so f/d (which still holds, by the way) has its minimum value of F# = f/d = f/(2f) = 0.5!

I wrote about this more here (with a reference on where you can see a curved principal surface):

http://www.dpreview.com/forums/post/39973191
.

Chris,

I think (with the perpendicular/parallel substitution asked about above performed) that I follow you above.

In my attempts to leave behind a simplistic (image-side) single-lens analysis [about Numerical Aperture (NA)], as shown in this diagram on the Wikipedia web-page, including the (in my case misleading) identity:

Source: https://upload.wikimedia.org/wikipedia/commons/thumb/0/08/Numerical_aperture_for_a_lens.svg/568px-Numerical_aperture_for_a_lens.svg.png

Source: https://upload.wikimedia.org/wikipe...g/568px-Numerical_aperture_for_a_lens.svg.png

... am pondering the MIT diagrams displayed below. A simplifying assumption that the lens-system is focused at "infinity" (unaffected by non-unity values of Image, and Pupil, Magnification factors) is made.

It seems (set me straight if I am off-base in my interpretations) that (in the case of a multi-element lens-system), a region of (common analytical) interest exists relative to the Entrance Pupil (as analytically considered from the "object-side" of the lens-system), which is (sometimes) more often referred-to.

.

While the depicted Numerical Aperture (in the 1st image below) is directly related to the size of the Entrance Pupil (using Marginal Rays originating in the center of the Entrance Window) and is a function of the size of the Aperture Stop, period, ...

... the Field of View (in the 1st image below) is not directly related to the size of the Entrance Pupil, and is defined by the size of Field Stop (using Chief Rays originating from the outer points of the Entrance Window), which itself determines both the Entrance Window size as well as the projected size of the Exit Window [with a maximum (relevant) value corresponding to maximum numerical physical size of the linear dimensions of a (however-shaped) image-sensor photo-active-area].

.

Lens-system "focal length" (of which two separate values exist for a thick-lens-system's image-side and object-side) seems not to be an implicitly required known quantity in order to determine the Numerical Aperture [and thus, (at least, in "paraxial" regions within an image-frame), the image-plane Exposure].

The particular physical location of the Front Nodal Point (from which Focus Distance is derived for use in precise calculations involving Hyperfocal Distances and Depths of Field based upon some "deemed COC" diameter), as well as the particular physical location of the Principal Planes, also appear not to be implicitly required known quantities in order to determine the Field of View (or the unspecified physical location of the Panoramic Pivot Point located at the center of the lens-system Entrance Pupil).

.



Source: Pages 1-12: http://ocw.mit.edu/courses/mechanic...ndows-single-lens-camera/MIT2_71S09_lec06.pdf

.



Source: Pages 3-4: http://ocw.mit.edu/courses/mechanic...ndows-single-lens-camera/MIT2_71S09_lec06.pdf

.

Question: I make an assumption that the (effective, optical, from the perspective of the image-sensor) apparent maximum user-adjustable size of the Aperture Stop (in the multi-element lens-system diagram displayed above) cannot exceed the non-user-adjustable size of any Field Stop(s) existing within a system - or else the user would be unable to fully "open up" an adjustable Aperture Stop.

(If), at that widest above-described user-adjustable Aperture Stop, some additional larger (effective, optical, from the perspective of the image-sensor) designed size "margin" exists when it comes to any linear/radial dimensions of the Field Stop(s), then all seems to make sense - without any "fuss or muss".

(However), if an (effective, optical, from the perspective of the image-sensor) fully opened Aperture Stop coincided with, or exceeded any Field Stop(s) [in any analytically relevant (effective, optical, apparent) dimension(s)], then it seems (to me) that the system would have not one, but two (or potentially more), intra-system locations at which diffraction-patterns would be generated (?) ...

Such (it seems) would be "a real mess" ! I presume that such a situation is never the (designed) case ?

.

DM
 
A note on this subject a long while ago (if memory serves me) by Bob indicated that the issue is with the index of refraction of modern glass. He intimated that new novel materials might overcome the problem.
I believe the 0.5 limit is a hard limit set by the fact that sin(x) is never greater than 1 (for real x). Approaching the limit in a real lens may well be a question of the available high-refractive-index glass. I guess diffractive optics can approach the limit more closely, but at the cost of other problems.

Joe
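The relation quoted in this thread, NA = sin(theta) = 1/(2·F#), can be sketched numerically to show where the 0.5 floor comes from (assuming the lens is in air, n = 1):

```python
# Hedged sketch of NA = sin(theta) = 1 / (2 * F#) from the thread.
# Since sin(theta) <= 1 for real angles, F# = 1 / (2 * NA) >= 0.5 in air (n = 1).

def f_number_from_na(na: float) -> float:
    """F-number implied by a given numerical aperture."""
    return 1.0 / (2.0 * na)

def na_from_f_number(f_number: float) -> float:
    """Numerical aperture implied by a given F-number."""
    return 1.0 / (2.0 * f_number)

# At the theoretical maximum NA = 1 (theta = 90 deg), F# bottoms out at 0.5:
print(f_number_from_na(1.0))   # -> 0.5
# A hypothetical "f/0.01" lens would demand NA = 50, i.e. sin(theta) = 50,
# which no real angle can provide:
print(na_from_f_number(0.01))  # -> 50.0
```

This is why the "f/0.01 if money were no object" idea from the original post fails in air: no amount of glass can make sin(theta) exceed 1.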
This is my impression as well. However, maybe there are image forming technologies that we have not yet considered as viable. For example, simple non-image forming technology can focus the sunlight into a "single small area" where the "small area" temperature is greater than the surface temperature of the sun.
Do you have a reference? This seems incompatible with the second law of thermodynamics. :-(
No direct reference. I was watching a science documentary about "light". One of the expositions was about a light-concentrating facility in the USA in which a very large number of adjustable mirrors (think of a couple of football field or more in terms of surface area) could be accurately focused onto a point the size of a brick or smaller. The scientist stated that, at the point of focus, the temperature could be hotter than the surface of the sun.

Makes sense to me ... and does not break the second law (in any way that I can see).
 
...Now, draw a line parallel ...
Did you possibly intend to (directly above) write "draw a line perpendicular ..." ?
Hi Detail Man and Jack,

I meant parallel. So I guess I didn't explain too well, but here is a drawing:



[Image attachment: d8447c16d4e64345ab8f1c706435ec8f.jpg]


You can see the spherical principal surface; I've drawn this for the case of theta' = 24 deg, so NA is 0.407 and F# = 1.23. I disagree with what Nakamura writes (i.e. that F# = f/d is only an approximation): F# is defined as f/d, and as shown above it does not lose accuracy, even down to F# = 0.5. What is the approximation (and what loses accuracy as we go faster) is the way it's usually drawn. In the accurate drawing above, f lies along the hypotenuse of that triangle (not the base). You can see that as we go down toward 0.5, the spherical surface grows into its full hemisphere shape, and this spherical surface naturally prevents F# = f/d from going below 0.5.



Chris
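A quick numerical check of the figures in the post above (theta' = 24 deg), using the same NA = sin(theta) = 1/(2·F#) relation discussed throughout the thread:

```python
# Verify the numbers quoted above: theta' = 24 deg should give
# NA = sin(24 deg) ~ 0.407 and F# = 1 / (2 * NA) ~ 1.23.
import math

theta_deg = 24.0
na = math.sin(math.radians(theta_deg))
f_number = 1.0 / (2.0 * na)
print(f"NA = {na:.3f}, F# = {f_number:.2f}")  # NA = 0.407, F# = 1.23
```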
 
