F0.95 ???

bobn2 wrote:
The angle of the light cone focussed to a point in the focal plane depends only on the f-number of the lens, not the distance of the exit pupil.
Yes, of course.

The confusion seems to be that we are talking about different things.

If I understand it correctly, the claim is that an object at infinity produces a cone of light rays focussed in the infinity focal plane, and that cone is limited by F0.95 - and it does not matter where the sensor is. So ... an OOF image of such an object has the angles of incidence of an F0.95 lens. So, if you focus on something nearby, the OOF pattern will still be limited by the micro lenses having a smaller aperture than F0.95.

That sounds plausible.

But - to be able to use that fact for comparing different lenses, you have to really make sure that you have misfocussed exactly the same amount. For a lens where all the elements move as one unit, you could measure the amount of movement, e.g. 3 mm. For internal focussing that is hard to do, also remembering that internally focussing lenses may change their focal length and maximum f-stop as they focus. You can try to keep the size of the nearby object constant on the sensor. Keeping the correct distance to the nearby object is also difficult - I assume the distance should be measured to the entrance pupil.

And to put onion on the salmon (as we say in Swedish), you probably also have to take the optical design into account. What happens for telecentric lenses? For a purely telecentric lens, the blurred OOF image will not change size as you misfocus, but for a normal (thin) lens it will. That has to affect the appearance.
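To make the telecentric point concrete, here is a minimal toy sketch of the chief-ray geometry; the 50 mm exit-pupil distance and 3 mm sensor shift are made-up numbers, not properties of any particular lens.

```python
# Toy chief-ray model: how the height of an out-of-focus image point moves
# when the sensor is shifted, for a normal lens vs. an image-space
# telecentric lens. All numbers are illustrative, not measured.

def oof_image_height(h_ideal_mm, exit_pupil_dist_mm, sensor_shift_mm):
    """Height of the blur-patch centre when the sensor sits sensor_shift_mm
    in front of the ideal image plane. The chief ray runs from the centre of
    the exit pupil to the ideal image point."""
    if exit_pupil_dist_mm == float("inf"):      # telecentric: chief ray parallel to axis
        return h_ideal_mm
    scale = (exit_pupil_dist_mm - sensor_shift_mm) / exit_pupil_dist_mm
    return h_ideal_mm * scale

h = 10.0      # ideal image height in mm (made-up)
shift = 3.0   # sensor displacement in mm (made-up)

print(oof_image_height(h, 50.0, shift))          # normal lens: ~9.4 mm, image shrinks
print(oof_image_height(h, float("inf"), shift))  # telecentric: 10.0 mm, size unchanged
```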
 
bobn2 wrote:
The angle of the light cone focussed to a point in the focal plane depends only on the f-number of the lens, not the distance of the exit pupil.
But - to be able to use that fact for comparing different lenses, you have to really make sure that you have misfocussed exactly the same amount.
The amount of misfocus will neither change the angle of the light rays nor the total amount of light received. It will only change the size of the blur circle.

Try to imagine a situation where the object and the lens are fixed, and you focus by moving the sensor. It is pretty easy to understand that you are moving the sensor within a fixed cone of light.
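A minimal numerical sketch of that picture, assuming an ideal thin lens focused at infinity (the f/0.95 value is just the one from the thread title): the cone's half-angle is set by the f-number alone, while moving the sensor only scales the blur-circle diameter (roughly defocus/N).

```python
import math

# For an ideal lens focused at infinity, the image-side cone converging on a
# point has half-angle theta with tan(theta) = 1/(2N), where N is the f-number.
# Moving the sensor a distance d out of the focal plane leaves theta unchanged
# and produces a blur circle of diameter roughly d/N.

def cone_half_angle_deg(f_number):
    return math.degrees(math.atan(1.0 / (2.0 * f_number)))

def blur_circle_diameter_mm(f_number, defocus_mm):
    return defocus_mm / f_number

N = 0.95
for d in (0.0, 1.0, 3.0):   # sensor shifts in mm (illustrative)
    print(f"defocus {d} mm: half-angle {cone_half_angle_deg(N):.1f} deg, "
          f"blur circle {blur_circle_diameter_mm(N, d):.2f} mm")
```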
 
The amount of misfocus will neither change the angle of the light rays nor the total amount of light received. It will only change the size of the blur circle.
Yes, I understand that. It is quite obvious.
Try to imagine a situation where the object and the lens are fixed, and you focus by moving the sensor. It is pretty easy to understand that you are moving the sensor within a fixed cone of light.
 
bobn2 wrote:
The angle of the light cone focussed to a point in the focal plane depends only on the f-number of the lens, not the distance of the exit pupil.
Yes, of course.

The confusion seems to be that we are talking about different things.

If I understand it correctly, the claim is that an object at infinity produces a cone of light rays focussed in the infinity focal plane, and that cone is limited by F0.95 - and it does not matter where the sensor is. So ... an OOF image of such an object has the angles of incidence of an F0.95 lens. So, if you focus on something nearby, the OOF pattern will still be limited by the micro lenses having a smaller aperture than F0.95.

That sounds plausible.

But - to be able to use that fact for comparing different lenses, you have to really make sure that you have misfocussed exactly the same amount. For a lens where all the elements move as one unit, you could measure the amount of movement, e.g. 3 mm. For internal focussing that is hard to do, also remembering that internally focussing lenses may change their focal length and maximum f-stop as they focus. You can try to keep the size of the nearby object constant on the sensor. Keeping the correct distance to the nearby object is also difficult - I assume the distance should be measured to the entrance pupil.

And to put onion on the salmon (as we say in Swedish), you probably also have to take the optical design into account. What happens for telecentric lenses? For a purely telecentric lens, the blurred OOF image will not change size as you misfocus, but for a normal (thin) lens it will. That has to affect the appearance.
Sounds like a good argument to me, and it goes some way towards explaining why bokeh can be very different for lenses of the same f-number.
 
I remember a discussion about whether this affected the total amount of light hitting the sensor, the depth of field, or both.

I realize that the T-stop accounts for things like lossy glass (which the f-stop does not), but what we are talking about here is a geometric "flaw" of the microlens, causing light at some angles to be attenuated. Intuitively, this should cause _both_ less total light being recorded by the sensor _and_ a change in DOF?

My Canon 7D was the camera tested by DxO that was most affected by this. Is it reasonable to speculate that if you:

1. Push pixel count relative to the sensor manufacturing process (18 MP APS-C on a 500 nm process), thus having a relatively poor light-sensitive fraction of the sensor area

2. Want to maintain photon efficiency by compensating with micro lenses

3. Then what has to "give" is sensitivity to off-angle light, hence potential issues with large-aperture and/or compact wide-angle lenses?

If I am right, then there is some real-world trade-off between sensel count/density and image quality, only more complex than the "anti-megapixel" crowd have claimed. You must (if I am right) trade high pixel count against high photon efficiency against grazing-angle light sensitivity for a given process node.

Given the 7D target users, it would seem like a reasonable trade-off (even better if Canon switched to a cutting-edge process...)

What would you get by going to the other extreme? Choosing a moderate sensel count/density, using no or moderate micro lenses? I guess the 5D classic is an example of this, but say that you did this in a modern design (the Sony A7S?). Could each sensel be essentially "isotropic" (within the angles of practical interest)? Is that a viable approach for a large-sensor, low-light-capable/large aperture, wide-ish angle, highly compact camera (e.g. Sony RX-1)?

-h
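For what it's worth, here is a rough back-of-the-envelope sketch of the "both less light and more DOF" intuition, assuming (purely as a toy model) a hard acceptance cut-off at the f/1.5 figure speculated about in the opening post: the lens's cone is then effectively clipped, which costs light and at the same time shrinks the blur circle.

```python
import math

# Crude model: rays outside the sensor's acceptance cone are lost entirely.
# The working aperture is then the slower of the lens f-number and the
# sensor's acceptance f-number. Light gathered scales as 1/N^2, blur-circle
# diameter as 1/N. The f/1.5 acceptance limit is the figure speculated
# about in the thread, not a measured value.

def clipped(lens_f, acceptance_f):
    n_eff = max(lens_f, acceptance_f)
    light_loss_stops = 2.0 * math.log2(n_eff / lens_f)
    blur_ratio = lens_f / n_eff          # relative size of the blur circle
    return n_eff, light_loss_stops, blur_ratio

for lens_f in (0.95, 1.4, 2.0):
    n_eff, stops, blur = clipped(lens_f, 1.5)
    print(f"f/{lens_f}: effective f/{n_eff}, ~{stops:.1f} stops lost, "
          f"blur circle {blur:.2f}x of the unclipped size")
```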
 
Interesting thoughts.

I am one of those who claim that more pixels (at least for big sensors) generally means an increase in image quality, perhaps at the cost of some ultra-high-ISO performance. And that is what I usually say when someone asks for an 8 MP FF sensor.

But, hmmmmm ... this is not so clear cut when talking about the acceptance angle of micro lenses. There is no free lunch. If the fill factor of the naked sensor is small, then you cannot get everything back with micro lenses. You can increase the effective fill factor, but only within a certain acceptance angle. The laws of optics and conservation of energy both give that result.

Hmmmm ... so I assume the best thing you can do is to increase the fill factor of the naked sensor and skip the micro lens. Then you get the best of both worlds: a good fill factor and a good acceptance angle.
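One way to make the "no free lunch" point concrete is conservation of étendue (that is the optical law assumed here): a lossless microlens that funnels the whole pixel aperture onto a photodiode covering only a fraction FF of the pixel area cannot accept light cones faster than roughly 1/(2*sqrt(FF)). A minimal sketch, with made-up fill factors:

```python
import math

# Etendue (area times solid angle) cannot be reduced by a lossless optic.
# If the photodiode covers a fraction FF of the pixel area, a microlens that
# concentrates the whole pixel aperture onto it must magnify the ray angles
# by 1/sqrt(FF). Since sin(theta) cannot exceed 1, the incoming cone is
# limited to sin(theta_in) <= sqrt(FF), i.e. an f-number of about
# 1 / (2 * sqrt(FF)). Fill factors below are illustrative only.

def fastest_acceptable_f_number(fill_factor):
    return 1.0 / (2.0 * math.sqrt(fill_factor))

for ff in (0.1, 0.25, 0.5, 0.9):
    print(f"fill factor {ff:.0%}: cannot usefully accept light faster "
          f"than about f/{fastest_acceptable_f_number(ff):.2f}")
```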
 
Roland Karlsson wrote: The Voigtländer 25 mm Nokton for u43 is F0.95. That is cool, sort of. A great potential for shallow DOF, or ... ? I thought, correct me if I am wrong, that the photo sensor, with micro lenses, Bayer filters, etc, etc, cannot see more than F1.5 or so. Everything else is wasted.
Agreed that when a light ray goes through lots of glass at ever more extreme angles, at super-wide apertures, and hits the sensor at ever more extreme angles, one cannot strictly predict the "T-stop" from the F-stop. Heck, just look at the max T-stop measurements in the DxOMark lens measurements; they are rarely exactly the same as the nominal widest F-stop/aperture.

But it feels a bit pessimistic to predict that the Voigtlander 25mm Nokton, with a geometric aperture of F/0.95, is going to have a T-stop rating of T/1.5 or slower. After all, the Canon 50mm and 85mm F/1.2s get a T-stop rating of T/1.4. I suppose that is on full-frame cameras, which might well make more efficient use of oblique light than the ultra-dense pixel layout of a Micro Four Thirds camera. But it's not like getting below T/1.4 is harder than breaking the sound barrier.
This isn't a T-stop issue, it's to do with the f-number of the microlens on the sensor. Essentially, the microlens has to be faster than the taking lens, otherwise it can't couple all the light from the exit pupil of the lens onto the photoreceptor. Mostly, they seem to be giving out at f/2-f/1.8, though the old style Panasonic 'Maicovicon' sensors were faster. I don't know about the newer CMOS ones - but I'd hazard a guess they're much like all the other CMOS.
Why would the light utilization rate of the sensor change depending on how the f-stop of the lens compares with that of the microlenses?

As far as I can see, the utilization rate for any given sensor should change with the angle of incidence of the incoming light rather than with its intensity. For any given lens, the maximum angle of incidence will increase as the aperture increases, and this may lead to a lower rate of light utilization. But two different lenses may provide different angles of incidence and thus different rates of light utilization at the same f-stop.
 
Why would the light utilization rate of the sensor change depending on how the f-stop of the lens compares with that of the microlenses?
What is meant by 'the rate of light utilization'? What are its units?

I did Google it, but it seems to be a biochemistry term.
 
Why would the light utilization rate of the sensor change depending on how the f-stop of the lens compares with that of the microlenses?
What is meant by 'the rate of light utilization'?
The extent to which the light that falls on the sensor is converted into current. Another term would be quantum efficiency.
What are its units?
Percentages or proportions.
I did Google it, but it seems to be a biochemistry term.
Plants also vary in their rate of light utilization.
 
Roland Karlsson wrote: The Voigtländer 25 mm Nokton for u43 is F0.95. That is cool, sort of. A great potential for shallow DOF, or ... ? I thought, correct me if I am wrong, that the photo sensor, with micro lenses, Bayer filters, etc, etc, cannot see more than F1.5 or so. Everything else is wasted.
Agreed that when a light ray goes through lots of glass at ever more extreme angles, at super-wide apertures, and hits the sensor at ever more extreme angles, one cannot strictly predict the "T-stop" from the F-stop. Heck, just look at the max T-stop measurements in the DxOMark lens measurements; they are rarely exactly the same as the nominal widest F-stop/aperture.

But it feels a bit pessimistic to predict that the Voigtlander 25mm Nokton, with a geometric aperture of F/0.95, is going to have a T-stop rating of T/1.5 or slower. After all, the Canon 50mm and 85mm F/1.2s get a T-stop rating of T/1.4. I suppose that is on full-frame cameras, which might well make more efficient use of oblique light than the ultra-dense pixel layout of a Micro Four Thirds camera. But it's not like getting below T/1.4 is harder than breaking the sound barrier.
This isn't a T-stop issue, it's to do with the f-number of the microlens on the sensor. Essentially, the microlens has to be faster than the taking lens, otherwise it can't couple all the light from the exit pupil of the lens onto the photoreceptor. Mostly, they seem to be giving out at f/2-f/1.8, though the old style Panasonic 'Maicovicon' sensors were faster. I don't know about the newer CMOS ones - but I'd hazard a guess they're much like all the other CMOS.
Why would the light utilization rate of the sensor change depending on how the f-stop of the lens compares with that of the microlenses?
Not sure what you are asking here, Anders. The point is that the amount of light from the exit pupil that can get projected on the photoreceptor by the microlens depends on the f-number of the microlens. This is quite a well known result. If you look back at the old thread linked earlier in this one, it contained quite a thorough discussion, and linked this paper. I derived the exact same result through a different path of reasoning - so I'm pretty sure it's right.
As far as I can see, the utilization rate for any given sensor should change with the angle of incidence of the incoming light rather than with its intensity. For any given lens, the maximum angle of incidence will increase as the aperture increases, and this may lead to a lower rate of light utilization. But two different lenses may provide different angles of incidence and thus different rates of light utilization at the same f-stop.
No, they don't. For a lens focussed at infinity, the angle depends only on the f-number. With close focussing, the difference between the entrance and exit pupils is what leads us to calculate an 'effective aperture'; but focussed at infinity, the exit pupil always lies somewhere on the cone of light formed by the equivalent simple lens forming a point image on the image plane (or on its extension towards the object, in the case of 'retrofocus' lenses - which is why they get so big). The size and position depend on the pupil magnification. See this illustration



[Illustration: Exit pupil size and position relative to pupil magnification (P), from the linked source.]

--
Bob
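A small numerical check of that claim, assuming the standard infinity-focus relations that the figure illustrates (exit-pupil diameter roughly P * f / N, exit-pupil distance from the image plane roughly P * f, in a thin-lens idealisation): the ratio that sets the cone angle at the image point comes out as 1/N for any pupil magnification.

```python
# Sanity check: for a lens focused at infinity with focal length f, f-number N
# and pupil magnification P, take the exit pupil to have diameter P*f/N and to
# sit roughly P*f from the image plane (thin-lens idealisation). The cone it
# subtends at the on-axis image point is then the same for every P.
# Numbers below are arbitrary examples.

def exit_pupil_cone_ratio(focal_mm, f_number, pupil_mag):
    exit_pupil_diameter = pupil_mag * focal_mm / f_number
    exit_pupil_distance = pupil_mag * focal_mm
    return exit_pupil_diameter / exit_pupil_distance   # = 1/N regardless of P

for P in (0.5, 1.0, 2.0):   # telephoto-type, symmetric, retrofocus-type
    ratio = exit_pupil_cone_ratio(focal_mm=25.0, f_number=0.95, pupil_mag=P)
    print(f"P = {P}: diameter/distance = {ratio:.3f} (1/N = {1/0.95:.3f})")
```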
 
Anders W wrote:
Why would the light utilization rate of the sensor change depending on how the f-stop of the lens compares with that of the microlenses?
The micro lens has a cone of acceptance. Light that falls outside of that cone will not hit the detector. What this "cone of acceptance" looks like depends on how far the micro lens is from the detector. The cone does not necessarily have to be circular either. It depends on the shape of the detector, and maybe also on the shape of the micro lens.
As far as I can see, the utilization rate for any given sensor should change with the angle of incidence of the incoming light rather than with its intensity. For any given lens, the maximum angle of incidence will increase as the aperture increases, and this may lead to a lower rate of light utilization. But two different lenses may provide different angles of incidence and thus different rates of light utilization at the same f-stop.
I have wondered about this actually. Is this the case?
 
Why would the light utilization rate of the sensor change depending on how the f-stop of the lens compares with that of the microlenses?
What is meant by 'the rate of light utilization'?
The extent to which the light that falls on the sensor is converted into current.
It's not converted to current, it's converted to charge.
Interesting discussion and juggling with words :)

But, I believe you gentlemen are now straying very far from the original discussion. It is purely optical and has nothing to do with either current or charge :P
 
No, they don't. For a lens focussed at infinity, the angle depends only on the f-number. With close focussing, the difference between the entrance and exit pupils is what leads us to calculate an 'effective aperture'; but focussed at infinity, the exit pupil always lies somewhere on the cone of light formed by the equivalent simple lens forming a point image on the image plane (or on its extension towards the object, in the case of 'retrofocus' lenses - which is why they get so big). The size and position depend on the pupil magnification. See this illustration
And you are totally sure about that?

BTW - I believe it to be so, but have no proof for it. Maybe I should look up a proof on the net so that this can be settled. I have been wondering about it for some time. Do you have a pointer?
 
Why would the light utilization rate of the sensor change depending on how the f-stop of the lens compares with that of the microlenses?
What is meant by 'the rate of light utilization'?
The extent to which the light that falls on the sensor is converted into current.
It's not converted to current, it's converted to charge.
Interesting discussion and juggling with words :)
It's not 'juggling with words'. The difference between 'charge' and 'current' is quite an important one if you want to have a technical discussion. If you don't, fine, but then just don't bother with it at all.
But, I believe you gentlemen are now straying very far from the original discussion. It is purely optical and has nothing to do with either current or charge :P
The fill factor of the pixel (that is, the ratio between the parts of the pixel that generate charge from incident light and those that don't) is relevant.
 
No, they don't. For a lens focussed at infinity, the angle depends only on the f-number. With close focussing, the difference between the entrance and exit pupils is what leads us to calculate an 'effective aperture'; but focussed at infinity, the exit pupil always lies somewhere on the cone of light formed by the equivalent simple lens forming a point image on the image plane (or on its extension towards the object, in the case of 'retrofocus' lenses - which is why they get so big). The size and position depend on the pupil magnification. See this illustration
And you are totally sure about that?
Yes I am.
BTW - I believe it to be so, but have no proof for it. Maybe I shall look up some proof on the net so that this can be settled. I have been wondering about it for some time. Do you have a pointer?
Waste your time if you like, it's your time.
 
Why would the light utilization rate of the sensor change depending on how the f-stop of the lens compares with that of the microlenses?
What is meant by 'the rate of light utilization'?
The extent to which the light that falls on the sensor is converted into current.
It's not converted to current, it's converted to charge.
Interesting discussion and juggling with words :)
It's not 'juggling with words'. The difference between 'charge' and 'current' is quite an important one if you want to have a technical discussion. If you don't, fine, but then just don't bother with it at all.
But, I believe you gentlemen are now straying very far from the original discussion. It is purely optical and has nothing to do with either current or charge :P
The fill factor of the pixel (that is, the ratio between the parts of the pixel that generate charge from incident light and those that don't) is relevant.
The fill factor is interesting. It is a purely geometrical quantity and is expressed as a percentage or something similar.

Whether the sensor then measures the detected photons as charge, electrons, volts or amperes is totally irrelevant. For this discussion at least.
 
Why would the light utilization rate of the sensor change depending on how the f-stop of the lens compares with that of the microlenses?
What is meant by 'the rate of light utilization'?
The extent to which the light that falls on the sensor is converted into current.
It's not converted to current, it's converted to charge.
Interesting discussion and juggling with words :)
It's not 'juggling with words'. The difference between 'charge' and 'current' is quite an important one if you want to have a technical discussion. If you don't, fine, but then just don't bother with it at all.
But, I believe you gentlemen are now straying very far from the original discussion. It is purely optical and has nothing to do with either current or charge :P
The fill factor of the pixel (that is, the ratio between the parts of the pixel that generate charge from incident light and those that don't) is relevant.
The fill factor is interesting. It is a purely geometrical quantity and is expressed as a percentage or something similar.

Whether the sensor then measures the detected photons as charge, electrons, volts or amperes is totally irrelevant. For this discussion at least.
Try reading Catrysse and Wandell's paper and you'll see how fill factor is relevant.
 
