Lens design for different auto-focus methods

KWNJr
Several times I have seen it mentioned in DPReview articles and/or threads that lenses may be optimized for PDAF or CDAF, especially concerning lenses adapted for use on another camera, usually mirrorless, with a different AF system.

What lens characteristics would create these design aspects?

Must a lens be designed for one type of AF, or could it be designed for both?

If designed for both, what compromises would be implemented?
 
Autofocus is just moving either the entire lens assembly (uncommon these days) or a group within it to change where the lens is focused.

Different motor technologies have different characteristics. For example, an ultrasonic ring motor (Canon USM, Nikon SWM, etc.) has high torque (lots of power) and is well suited to the barrel-like geometry of a lens, but it cannot change direction quickly.

Conversely, a linear electromagnetic motor (Sony FE, etc.) has lower torque, but very high acceleration in the design type used.

The high torque of a USM motor allows it to move a bigger optical assembly. There are no optical advantages that I know of to a linear motor.

A design for CDAF tends toward these low-torque, high-acceleration designs, where the motor is "chirped," being commanded with many small steps per second. In contrast, the control loop for a USM-type motor is slower, because the motor has worse acceleration characteristics. If you told it "spin left at 500 RPM" and shortly after "525 RPM!" and then "500 RPM!" again, it would be detrimental to its performance.

This is because PDAF generally knows immediately where it is going (behaving like a positional sensor), while CDAF only knows which way to go to make an improvement (like a differential sensor). Since CDAF is prone to "wiggle" around the final focus position, the motor has to be able to make these wiggles quickly. USM simply can't do this, where a linear motor can.

To my knowledge, most of the design tradeoffs are mechanical and electronic, not optical. Though I suppose, e.g., the Sony FE 70-200 GM has a dual-motor design (USM + linear) to leverage both of these characteristics. That lens has also under-delivered, with significant optical issues as a result, so I think the integration is underbaked.
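The two control styles can be sketched as toy code. This is a hypothetical hill-climbing model, not any camera's actual algorithm; the contrast function, target position, and step sizes are all invented for illustration:

```python
# Toy model: PDAF vs CDAF focus control against a simulated lens.
# Everything here (contrast shape, target, steps) is illustrative only.

def contrast(pos, target=50.0):
    # Image contrast peaks when the focus group sits at the target.
    return 1.0 / (1.0 + (pos - target) ** 2)

def pdaf_focus(pos, target=50.0):
    # PDAF measures the defocus directly, so it can issue one large
    # absolute move toward the estimated position (positional sensor).
    return [target]                  # a single command

def cdaf_focus(pos, target=50.0, step=4.0):
    # CDAF only knows whether contrast improved, so it hill-climbs:
    # many small moves, reversing and shrinking the step whenever
    # contrast drops -- the "wiggle" that favors a fast-reversing motor.
    moves = []
    while abs(step) > 0.1:
        trial = pos + step
        if contrast(trial, target) > contrast(pos, target):
            pos = trial              # improvement: keep going this way
        else:
            step = -step / 2         # overshoot: reverse and shrink
        moves.append(pos)
    return moves

print(len(pdaf_focus(0.0)), "command(s)")
print(len(cdaf_focus(0.0)), "commands")
```

The point is the shape of the command stream: one confident absolute move, versus a long run of small, frequently reversing steps that a high-acceleration linear motor tolerates and a ring USM does not.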
 
Thank you for your information. I was worried that some peculiar aspect of optical design was involved, not just the AF motor.
 
The goals are broadly similar. No matter what the motor is, a smaller and lighter payload is always quicker and easier to move.
 
Where do Canon's STM lenses stand - linear like Sony or not?
 
STM stands for stepper motor and is, I believe, another term for the linear electromagnetic type of motor. Nikon's new AF-P lenses also use stepper motors.

With regard to the optical characteristics of the lens, the chromatic aberrations of the lens affect the precision of focusing, because they cause the various colors of light to focus in different planes. This effect depends on the color response of the AF system. A case in point: some Nikon D5500/D5600 bodies have been reported to have AF accuracy that varies with the color of the light, which may be related to a submirror misalignment. In general, though, to my knowledge there isn't a specific optical design requirement for mirrorless versus DSLR other than: keep the moving mass small.
 
No, a linear motor has a magnetic component and a non-magnetic component. The two are lubricated and in contact -- when the magnet is subjected to a field by something on its carriage, it slides along the guide rail.

[Image: Linear-Motors-Profile-Linmot.jpg]

A stepper motor is the "classic" DC motor in robotics and uses multiple coils to kick a geared rotor around in a circle.

[Image: HT24-rotor-stator.png]

A linear motor is fundamentally continuous, while a stepper is discrete. Most steppers these days do microstepping, where you can make kicks smaller than one gear pitch to smooth out the motion.

STM is its own, niche class.
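The microstepping mentioned above can be sketched numerically. The 200-step geometry and 16x subdivision are common illustrative values, not taken from any particular lens motor:

```python
# Sketch of microstepping: instead of energizing a stepper's two coil
# phases fully on/off (full steps), drive them with quadrature
# cosine/sine levels so the rotor settles at intermediate angles.
# All values are illustrative.
import math

FULL_STEPS_PER_REV = 200   # a common stepper geometry (1.8 deg/step)
MICROSTEPS = 16            # subdivisions of one full step

def coil_currents(microstep_index, i_max=1.0):
    # The electrical angle advances 90 degrees per full step;
    # microstepping interpolates between those rest positions.
    theta = (math.pi / 2) * microstep_index / MICROSTEPS
    return i_max * math.cos(theta), i_max * math.sin(theta)

# Effective resolution: 360 / (200 * 16) = 0.1125 degrees per kick.
print(360 / (FULL_STEPS_PER_REV * MICROSTEPS), "deg per microstep")

# One full step swings phase A from full current to zero while phase B
# rises -- the rotor follows the rotating current vector in 16 kicks.
for i in range(MICROSTEPS + 1):
    a, b = coil_currents(i)
    print(f"{i:2d}: A={a:+.3f}  B={b:+.3f}")
```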
 
My bad. I recall (perhaps wrongly) that I've seen linear motors in a ring configuration.
 
There are rotary linear motors, but all the ones I've seen in lenses are, well, linear linear motors.
 
There is a post here claiming that with dual pixel, linear motors are needed for top performance:

https://www.dpreview.com/forums/thread/4272426?page=5

Is that correct?
 
I think it is best to decompose the situation into a set of components with interfaces.

The lens's autofocus system knows nothing about what is commanding it. Depending on the particular maker, they have chosen either an encoded focus motion or an unencoded one. If it is encoded, it is usually an absolute motion: the lens is told to move to focus position xxx. If it is unencoded, it is necessarily a differential motion: the focus is told "move some unit backwards," where the unit is, e.g., 10% of the current position.
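The encoded/unencoded split can be sketched as two toy command interfaces. The class names, travel range, and step size are invented for illustration only:

```python
# Sketch of the two lens-command interfaces: an encoded lens accepts
# absolute focus positions; an unencoded lens accepts only relative
# nudges. All names and numbers are hypothetical.

class EncodedLens:
    # With an absolute position encoder, the body can command
    # "go to position x" in one motion.
    def __init__(self):
        self.position = 0.0
    def move_to(self, target):
        self.position = target

class UnencodedLens:
    # Without an encoder, the body can only say "move by some fraction
    # in this direction" and then re-measure.
    def __init__(self):
        self.position = 0.0
    def nudge(self, fraction_of_travel, travel=100.0):
        self.position += fraction_of_travel * travel

enc = EncodedLens()
enc.move_to(62.5)              # PDAF-style: one absolute command

unenc = UnencodedLens()
for _ in range(3):
    unenc.nudge(0.1)           # CDAF-style: measure, nudge, repeat
print(enc.position, round(unenc.position, 3))
```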

The PDAF sensors in DSLRs run at pretty high speed -- usually 60 Hz in cameras for mere mortals, and 120 Hz in flagship or specialty bodies (D5, 7D, 1Dx, etc.). They do not require more than one frame to compute the focus and object position estimates.

PDAF, at least the off-sensor variety, is very "confident" and probably does not change the focus estimate very meaningfully as the lens is moving.

CDAF, by contrast, is not very confident at all and sends lots of very small motion commands, kind of like PWM. It works at the live-view scan speed of the sensor. I think on most Canon DSLRs that's 120 Hz at most; I know my 6D drops into 120 Hz mode for the video preview. For a mirrorless camera, I really have no idea what speed live view scans at.

On-sensor PDAF is, as best I know, the same as PDAF except:
  1. The baseline is much shorter.
  2. The amount of defocus present is much smaller. Defocus actually helps a PDAF sensor work -- the one in a DSLR is intentionally focused quite a ways behind infinity.
If this reduces the confidence and/or step size of the AF a lot and makes it more CDAF-like, then the preferred variety of motor would be linear, STM, etc.

If it doesn't, the preference is still ring USM, since it has the most torque and best efficiency, which leads to the lowest power consumption and highest focus speed.

In short,

I don't know. But I hope these thoughts help.
 
Thank you.
 
I don't remember Marianne mentioning this in her teardown of the D300 AF system. Care to elaborate?
 
The AF sensor in a DSLR is a number of millimeters in front of the focus point of the lens.
 
I get that. What I remember is that the AF system has an extremely high aperture - f/22-28 - which means that its DOF is very large - over a wide range of subject positions there is relatively little fuzzing of the image, except at the extremes. This means to me that the signal presented to the AF arrays keeps a relatively constant (and decent) cleanliness, sharpening the peaks of the correlation function. Offsetting by a constant amount would appear to be a purposeful degradation of that signal.

On the other hand, the baseline issues of OSPDAF are a lot easier to understand.
 
OK. Well, use the thin lens equation to see what object distance an image distance of, say, BFL - 5 mm corresponds to for a given focal length. It will be vastly outside the depth of field at any aperture.
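As a numeric version of that check, treating the back focal length as simply f for an ideal thin lens (the 50 mm focal length and 5 mm offset are arbitrary example numbers):

```python
# Thin-lens check: if a sensor plane sits 5 mm inside the infinity
# image plane (i.e. at BFL - 5, taken here as f - 5 for a thin lens),
# what object distance is actually in focus on it?  1/f = 1/u + 1/v.

def object_distance(f_mm, image_dist_mm):
    # Solve the thin lens equation for the object distance u.
    return 1.0 / (1.0 / f_mm - 1.0 / image_dist_mm)

f = 50.0               # example 50 mm lens
v = f - 5.0            # image plane pulled 5 mm inside infinity focus
u = object_distance(f, v)
print(round(u, 3))     # -450.0: a negative, i.e. virtual, object --
                       # the conjugate lies "behind infinity"
```

No real object distance satisfies the equation, which is the sense in which such a plane is focused beyond infinity and vastly outside any depth of field.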
What I remember is that the AF system has an extremely high aperture - f/22-28 - which means that its DOF is very large - over a wide range of subject positions there is relatively little fuzzing of the image, except at the extremes.
The size of the sub-aperture mostly controls the ability of the system to work over a range of F/#s of the master lens. Most of these systems use prisms placed on top of a line sensor at some inclination. The amount of inclination determines how narrow an aperture is tolerated (f/8 or so) for the off-axis case. How sharp the apex angle is determines the maximum aperture accepted (f/2.8 or so). In the ideal case, this is the entire half-aperture of the lens; so an f/2.8 lens would be used at "f/4."
This means to me that the signal presented to the AF arrays keeps a relatively constant (and decent) cleanliness, sharpening the peaks of the correlation function. Offsetting by a constant amount would appear to be a purposeful degradation of that signal.
Correlating two single-pixel-ish features which are aliased is much harder than correlating a many-pixel blob. We can quickly compute the centroid of a feature in an image to a fraction of a pixel, and that accuracy only goes up as the feature gets bigger.
On the other hand, the baseline issues of OSPDAF are a lot easier to understand.
A short baseline stresses the accuracy of the centroid calculation. A very tightly distributed function presents an aliasing/accuracy issue.
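The sub-pixel centroid point can be demonstrated on a synthetic one-dimensional blob (the Gaussian shapes and positions here are invented for illustration):

```python
# Sub-pixel centroid of a sampled blob: the intensity-weighted mean
# position recovers the feature's center to a fraction of a pixel,
# and a wider (more defocused) blob averages over more samples.
import math

def centroid(samples):
    # Intensity-weighted mean pixel index.
    total = sum(samples)
    return sum(i * s for i, s in enumerate(samples)) / total

def gaussian_blob(center, sigma, n=32):
    # A feature centered between pixels, sampled at integer positions.
    return [math.exp(-0.5 * ((i - center) / sigma) ** 2) for i in range(n)]

narrow = gaussian_blob(15.3, sigma=0.4)   # near single-pixel: aliased
wide = gaussian_blob(15.3, sigma=3.0)     # many-pixel blob

print(abs(centroid(narrow) - 15.3))       # noticeably biased estimate
print(abs(centroid(wide) - 15.3))         # tiny error
```

The narrow, aliased feature gives a visibly biased centroid, while the broad blob lands on the true 15.3-pixel center to high precision, matching the point that a bigger feature correlates and localizes better.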
 
Good point. Interesting to consider that critical - or even close to critical - sharpness is not necessarily an advantage for computation.
On the other hand, the baseline issues of OSPDAF are a lot easier to understand.
A short baseline stresses the accuracy of the centroid calculation. A very tightly distributed function presents an aliasing/accuracy issue.
The storied "dead zone" for nearly in-focus images that is a result of the image pairs substantially overlapping each other - something that doesn't happen in traditional long-baseline PDAF?
 
My bad. I recall (perhaps wrongly) that I've seen linear motors in a ring configuration.
You most likely have. Many years ago I recall visiting Prof Eastham's group (one of the pioneers of linear motor research in the UK), and they had a linear motor configured so that it produced rotary motion, for the simple reason that it was much easier to do in the lab than building an extended linear track!

J.
 
Good point. Interesting to consider that critical - or even close to critical - sharpness is not necessarily an advantage for computation.
It is typically a disadvantage. Better registration algorithms these days correlate the phase of the FFT of the signal. You want to be >> Nyquist sampled to have good noise tolerance in frequency space.
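A minimal sketch of that phase-correlation idea, in plain Python with a brute-force DFT (this is a generic registration technique, not how any camera implements it; the blob and shift values are synthetic):

```python
# Phase correlation: estimate the shift between two signals by
# correlating the phase of their DFTs. A well-oversampled (>> Nyquist)
# blob keeps the frequency-domain phase clean. Illustrative only.
import cmath
import math

def dft(x, inverse=False):
    # Brute-force O(n^2) discrete Fourier transform.
    n = len(x)
    sign = 1 if inverse else -1
    out = [sum(x[m] * cmath.exp(sign * 2j * math.pi * k * m / n)
               for m in range(n)) for k in range(n)]
    return [v / n for v in out] if inverse else out

def phase_correlate(a, b):
    # Cross-power spectrum normalized to unit magnitude: keep only
    # the phase, then transform back; the peak marks the shift.
    A, B = dft(a), dft(b)
    cross = [x * y.conjugate() for x, y in zip(A, B)]
    cross = [c / (abs(c) + 1e-12) for c in cross]
    corr = [v.real for v in dft(cross, inverse=True)]
    shift = max(range(len(corr)), key=corr.__getitem__)
    return shift if shift <= len(a) // 2 else shift - len(a)

n = 64
blob = [math.exp(-0.5 * ((i - 20) / 3.0) ** 2) for i in range(n)]
shifted = blob[-5:] + blob[:-5]        # circular shift right by 5
print(phase_correlate(shifted, blob))  # 5: the shift is recovered
```

Real implementations interpolate around the correlation peak to recover fractional-sample shifts; this sketch stops at the integer peak.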
The storied "dead-zone" for nearly in-focus images that is a result of the image pairs stubstantially overlapping each other - something that doesn't happen in traditional long-baseline PDAF?
More or less, yes.
 
