Axial Aberrations and Image Sharpness

Jack Hogan

Many high-end prime lenses tend to correct for axial aberrations, bringing the blue plane in line with the green one, often leaving red relatively uncorrected and out of focus. Such lenses are sometimes referred to as apochromats.

However, human observers tend to be more sensitive to changes in luminance than in chrominance, so when asked for a system MTF most software produces results based on the Y channel. For the setup in the article linked at the bottom, Y can be obtained from white balanced raw data (r, g, b) roughly as follows:

Y = 0.25r + g - 0.25b

This means that the spatial frequency responses of the three individual raw color planes (MTFr, MTFg, MTFb) are added in those same proportions to produce the system's MTF.
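By linearity, the combination can be sketched in a few lines of numpy; the per-plane MTF values below are illustrative stand-ins, not measurements, and the weights are the rough ones above:

import numpy as np

# Illustrative per-plane MTF values sampled at a few spatial frequencies
# (cycles/pixel); the numbers are made up for the example.
freq  = np.array([0.1, 0.2, 0.3, 0.4, 0.5])
mtf_r = np.array([0.90, 0.75, 0.55, 0.38, 0.22])
mtf_g = np.array([0.95, 0.85, 0.68, 0.50, 0.32])
mtf_b = np.array([0.93, 0.82, 0.66, 0.49, 0.35])

# Approximate luminance weights from Y = 0.25r + g - 0.25b; by linearity
# the same weights combine the per-plane MTFs into the system MTF.
mtf_y = 0.25 * mtf_r + mtf_g - 0.25 * mtf_b

for f, m in zip(freq, mtf_y):
    print(f"{f:.1f} c/p: MTF_Y = {m:.2f}")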

Wouldn't this suggest that, for the best perceived 'sharpness' from a lens, we would want the red plane as sharp as green, if anything leaving the blue one relatively uncorrected instead, since we are adding the MTF of the former and subtracting the MTF of the latter?

Are there any lenses that do this? What do their images look like?

Why do most manufacturers prefer to do the opposite instead? Purple fringing?

Jack

See for instance here: https://www.strollswithmydog.com/combining-cfa-mtf50-ii/
 
"Many high end prime lenses tend to correct for axial aberrations, bringing the blue plane in line with the green one, often leaving red relatively uncorrected"

Do you have a reference for this statement, or examples of particular lenses designed this way?
 
Hi OpticsEngineer,

Good point, maybe my premise is incorrect. Here is an example showing a measure of axial CA produced by Jim Kasson almost a decade ago, many more on his site:

49 captures by Jim Kasson. Manual focus was kept fixed while moving the camera 4mm towards the slanted edge target each time.
 
Perhaps I should have explained better. The plot above shows how the relative 'sharpness' of each color plane evolves as the camera is moved through focus from frame to frame. When the green plane is at maximum sharpness ('in focus'), red and blue will typically be slightly out of focus because of longitudinal chromatic aberration and the like.
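As a toy illustration only (the best-focus positions, depth-of-focus scale and peak MTF50 below are invented, and this is not Jim's actual procedure), the through-focus behavior might be modeled like this:

import numpy as np

# 49 hypothetical frames, camera moved 4mm toward the target each time.
positions = np.arange(49) * 4.0

# Hypothetical best-focus positions (mm) for each raw plane, with red
# offset farther from green than blue, as in the achromat case discussed.
best_focus = {'r': 120.0, 'g': 96.0, 'b': 100.0}
depth = 30.0  # rough depth-of-focus scale, mm

def mtf50(pos, best):
    # Sharpness falls off roughly symmetrically around best focus.
    return 0.35 * np.exp(-((pos - best) / depth) ** 2)

for plane, bf in best_focus.items():
    curve = mtf50(positions, bf)
    print(plane, 'peaks at frame', int(curve.argmax()))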

Manufacturers of prime lenses often try to correct one other color plane (achromats) so that it comes to focus at roughly the same distance as green, less frequently both (apochromats). The relative MTF curves at that stage might look something like this (different camera and lens):

Older lens, DSLR mount  (horizontal axis should be c/p)

The System MTF Luminance curve can be computed by weighting the individual color plane curves by the previously mentioned Y factors, which can be read off the relevant forward matrix for a given camera, lens, illuminant and scene.

For instance, using the earlier coefficients, at 0.5 c/p in the plot above the System MTF Y is

MTFY = 0.25*0.22 + 0.32 - 0.25*0.35 ≈ 0.29

and so on for all other spatial frequencies. Since the blue curve is subtracted, thus lowering the MTF, one may wonder why strive to keep the blue plane as sharp as green, as opposed to red, which is instead added to the total, thus raising the MTF.
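As a tiny sketch of reading the weights off a forward matrix and applying them at one frequency (the matrix values are hypothetical, chosen only so that its middle row matches the rough coefficients above):

import numpy as np

# Hypothetical white point preserving forward matrix mapping white
# balanced raw (r, g, b) to XYZ D65; the middle row holds the Y weights.
F = np.array([[ 0.60, 0.35,  0.00],
              [ 0.25, 1.00, -0.25],
              [-0.05, 0.10,  1.05]])

mtf_at_half = np.array([0.22, 0.32, 0.35])  # MTFr, MTFg, MTFb at 0.5 c/p
print(round(float(F[1] @ mtf_at_half), 2))  # -> 0.29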

Now that I have asked the question, I have noticed that blue appears to be corrected in some of my DSLR lenses (e.g. the Nikkor 50mm/1.8D) but not necessarily in newer Nikkor mirrorless Z lenses, which apparently help red along instead (different camera and lens again):

Newer lens, mirrorless mount

So perhaps my premise was incorrect: some manufacturers do correct red instead of blue in achromats, which would seem to make more sense for obtaining maximum System MTF with current sensor dye mixes.

Thoughts?
 
Jack, I read from various sources (I may be wrong) that basic achromats are normally designed to bring red and blue to the same focus, with green then having a focal shift. This matches information from Zeiss referencing their Otus 55mm f/1.4 and, as far as I can determine, goes all the way back before the last century to achromatic doublets, rapid rectilinears, Cooke triplets, etc.




There are exceptions, e.g. some aerial surveillance lenses, where blue is left unfocussed and the lens has a permanent deep yellow filter fitted. I rescued a couple of these that were being thrown out: a Wray 36 inch f/6.3 and a Wray 18 inch f/4 that cover a 9" x 9" plate. The lack of correction for blue enabled better performance for green, red and near IR on the black and white plates/cut film used, but these lenses are heavy and probably of limited use today.


Apochromat appears to carry a different definition when referring to microscope and telescope objectives, compared to camera lenses (where green is brought to a common focus with red and blue). Super apochromats are corrected for 4 or more wavelengths, including hyperspectral designs.




The particular wavelengths chosen for correction do not always correspond to the spectral response of typical Bayer CFAs (within the range but not necessarily at or near the peak).


486.1 nm (blue Fraunhofer F line from hydrogen)
589.2 nm (yellow Fraunhofer D line from sodium)
656.3 nm (red Fraunhofer C line from hydrogen)


"These wavelengths span much of the visible spectral region, and the middle one (the D line) lies in the region of maximum sensitivity of the human eye. In some cases, somewhat different wavelength values corresponding to other Fraunhofer lines are used, e.g., 480.0 nm (F' line), 587.6 nm (d line) and 643.9 nm (C' line); note that light with some wavelengths is easier to produce in gas discharge lamps than other wavelengths."




Some hyperspectral lenses use common LED wavelengths for the visible range in their focus plots.


The ZEMAX software guide has F, d, C (visible) wavelengths as 486.1, 587.6 and 656.3 nm, a slight shift from the historic sodium line to the d line, but with freedom to specify whatever wavelengths you desire. The default primary wavelength in this example is 587.6 nm. Do Nikon, Canon, Fuji, Panasonic, Leica, Sony, Sigma, Tamron and others all utilise similar/identical settings?




Ed Dozier has MTF measurements of various lenses at different wavelengths, including the Nikkor Z 28-400mm f/4-8 VR and Nikkor 85mm f/1.4 AF-S


 
Hi Trebor1, thanks for the references, interesting stuff.

The Zeiss page differentiates between achromats and apochromats, saying that the former correct one color plane, bringing it in line with a second one and leaving the third to its own devices, while the latter try to correct two color planes so that all three come to a focus at the same distance. So this thread is really about achromats, which I mistakenly referred to as apo in the OP.

Ed Dozier's site is excellent, thanks for pointing it out. Your link takes us to his evaluations of a TTArtisan 50mm/0.95 and a Nikkor 85mm/1.4, which behave like my old lenses, with green and blue focusing close together while leaving red in the secondary-spectrum lurch.

His evaluations of the Nikkor Z 24-120/4 S and Z 28-400/4-8, on the other hand, show behavior similar to that of my Z lenses, with the red channel now corrected to come to a focus closer to green, leaving blue as the secondary spectrum.

It seems I was not the first person to realize the gain in MTF to be had by correcting red for longitudinal CA instead of blue in achromats :-) I would assume that other manufacturers also followed suit when switching their lens platforms to mirrorless.

I wonder if there are any inherent penalties for the red/blue switch, e.g. in blue/purple fringing?

It would be useful to get Ed's input since he seems to have given it a fair amount of thought. Does he post around here?

Jack
 
Here's a set of curves for an Otus lens:






 
About green and blue being corrected together, but red being left with a longer focus:

I have a vague recollection from many years ago that lenses used to be designed that way because red light penetrated deeper into a film emulsion than blue and green. I will dig around in some old books and see if I can find it mentioned, although I am not terribly hopeful that I will.

One of the easiest things to do in a lens design program like Zemax, OSLO or Code V is to change the design wavelengths. It seems strange to me that modern lens designers would have overlooked optimizing appropriately for digital, but I have seen stranger things happen. I have heard that the merit functions used to optimize lens designs tend to have something of a legacy behind them, and in a corporate setting it is often easier to go along with the legacy ('Why, son, we have always done it that way') than to make a correction.
 


The negative -0.25 coefficient in front of b in the formula for Y comes from the fact that human luminosity sensitivity has a relatively narrow spread, so the G channel of the camera covers a bit too much of the blue spectrum and that needs compensation. One should not read too much into that negative coefficient; the bottom line is the luminosity curve. Here is a link to another page on your blog, with the diagram there linked directly below:

Nikon D610 CFA spectral sensitivities


Now, you need to include a fair amount of 'red wavelengths' since that curve is not centered well on green, so it seems to me that you are right.

A quick look at the focus shift of a typical APO lens below reveals a large variation in the short-wavelength range, but the mean there is close to that in the green range. In the visible red range the variation is smaller but the mean is shifted more. I would expect, indeed, B to be better corrected, except for extreme violet.

Focus shift plot of an apochromatic lens, from Wikipedia

So far this confirms what you said, but it assumes a gray target reproduced in B&W, basically. We have to think about typical subjects like humans, nature, etc. and their typical spectra, and not just what we notice but what we object to. We may dislike blue fringing around human faces more than red, for example, even if we are less sensitive to blue, etc. Also, perhaps APO lenses have color shifts restricted to what is achievable with the materials and tricks we have available. This is beyond my pay grade though.
 
Well, here is a little something relevant regarding the layers of color film:

Developing Film: Color - How Photographic Film Works | HowStuffWorks

blue sensitive layer on top

green sensitive layer in the middle

red sensitive layer on the bottom

I am still looking for an old textbook that mentions camera lenses being designed for the different film layers. But now that I think about it, I also recall being told many years ago that lenses designed for machine vision (digital sensors) had all the colors brought to focus at the same focal plane distance, how weird that was, and that one should expect poor results using a machine vision lens with film, although I suspect that came from someone not quite expert in the subject.
 
Thanks for the validation and rationale, JACS. As for the chrominance of the target, that is of course relevant, but I was trying to make the wider point that the human visual system tends to be more sensitive to changes in luminance than in chrominance; some say a generic 4x more sensitive.

We can get an intuition for why that is from Hunt's vision model, which also assembles signals from the three cones into an achromatic channel and two color difference channels, somewhat akin to linear YCbCr, before sending them to the cortex via the optic nerve. It is also the reason why, in compression schemes like JPEG and MPEG, the chrominance channels are often subsampled compared to luminance, and why most classic sharpening in raw converters and editors happens on just the luminance Y channel.
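For instance, luminance-only sharpening can be sketched as below, assuming Rec. 709 luma weights as a stand-in for the camera's actual Y coefficients and a simple unsharp mask:

import numpy as np
from scipy.ndimage import gaussian_filter

def sharpen_luminance(rgb, amount=1.0, radius=1.0):
    # rgb: float array of shape (H, W, 3) in [0, 1].
    w = np.array([0.2126, 0.7152, 0.0722])   # Rec. 709 luma weights (stand-in)
    y = rgb @ w                              # luminance plane
    detail = y - gaussian_filter(y, radius)  # high-pass of the luminance only
    # Adding the same detail to every channel sharpens luminance while
    # leaving the channel differences (chrominance) untouched.
    return np.clip(rgb + amount * detail[..., None], 0.0, 1.0)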

So, based on linearity and superposition, not to mention my tests, I feel there is justification in assuming that the MTF curves measured on the three individual raw color planes contribute generically to perceived sharpness in the proportions suggested by the middle row of the relative compromise forward matrix, which maps white balanced raw data to Y. Relative, because such a matrix is scene, illuminant, lens and camera dependent, though the general point about the positive red coefficient and the negative blue one is typically confirmed.

Jack
 
Thanks OE, that makes perfectly good sense: a machine vision lens would presumably be corrected to work on a thin single plane, while a film lens would expect a thicker stack of layers as shown. Ideally the sequence and thickness of the film layers would be matched to typical lens axial aberrations, or vice versa. It would be interesting to know the range of thicknesses involved in both cases.
 
Kodachrome 25 does not strictly implement that layout. The layers from top to bottom are: blue sensitive, yellow filter, blue-green sensitive, followed by a blue-red layer. The effect after the yellow filtering may be similar, but the layers are thinner than in other films and the spectral sensitivities significantly different from Ektachrome's.

https://visns.neocities.org/4x5LFphotography/HGWK

Given that Kodachrome is generally held in high regard and was widely used by professional photographers, it could perhaps be considered a reference? Paul Simon even wrote a song about it in 1973!

https://www.paulsimon.com/track/kodachrome-6/

I'm fairly sure that you won't get consensus on the most favoured rendition of reality obtained from film (worry less about Delta E?): original Velvia was once raved about, but Fuji decided to tone it down because it was inclined towards greenish skin tones, whereas Kodachrome leaned redder than natural (more desirable). Fuji Provia was more neutral, and Velvia 100F replicated most of its characteristics, disappointing many at the time, as can be seen at Ken Rockwell's site: https://www.kenrockwell.com/tech/velvia100f.htm

Fuji negative films have additional layers. Superia Reala 100 includes a cyan sensitive layer beneath the green. https://125px.com/docs/film/fuji/superia_reala_datasheet.pdf

Kodak Professional Ektar 100 was one of the modern, well reviewed films and appears to only have 3 layers.

Regarding lenses for film cameras focussing each of the primary colours at different depths in the emulsion, that claim was made by Jeff Schewe and Martin Evening but attracted criticism at the time.

See Adobe Photoshop CS5 for Photographers: The Ultimate Workshop by Martin Evening & Jeff Schewe (Focal Press, 2011) for this paragraph about film lenses and DSLRs:
… film lenses were designed to resolve a color image to three separate … film emulsion layers which overlaid each other. Consequently, film lenses were designed to focus the red, green and blue wavelengths at fractionally different distances and at even further distances apart towards the corner edges of the film emulsion area. Since the red, green and blue photosites are all in the same plane of focus on digital sensor, lenses … should now focus the red, green and blue wavelengths to a single plane of focus.
The more I read about the human visual system and its limitations, particularly in regard to how it handles blue (lower resolution, very sparse sampling, relative insensitivity to intensity changes but sensitivity to small changes in colour, etc.), the more I wonder whether many of the assumptions in both colour science and digital imaging may be barking up the wrong tree or could do better, but I'm certainly no expert!

This probably requires a separate thread.
 
Sure, I did not formulate it well, I guess. What I meant was that once we get to chrominance errors, which ones bother us most may not depend on our sensitivity to this or that hue but on other factors.
 
Just to make sure that most cameras do indeed typically have positive red coefficients and negative blue ones, I downloaded a few raw files from DPR's studio scene under D65 lighting and computed the forward matrix to D65.

SMI is an indication of mean residual color errors and can be a little better if the white preserving constraint is relaxed (in which case the RX5II achieves 90.8)

The matrix projects white balanced raw data (r, g, b) to XYZ D65 so that

Y = r*Yr + g*Yg + b*Yb

Two matrices were computed each time, one minimizing DE2000 (D2k) and one solving the normal equation (M0), both in white point preserving mode. D50 coefficients were then computed by applying Bradford chromatic adaptation to the D65 ones.
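For concreteness, here is a minimal numpy sketch of the normal equation ('M0') fit under stated assumptions: synthetic patch data stand in for the DPR captures, and the DE2000-minimizing and white point preserving variants would need a constrained, iterative solver on top of this:

import numpy as np

rng = np.random.default_rng(1)

# Stand-in ground truth with a negative blue Y coefficient; real data
# would be white balanced raw patch values and measured XYZ references.
M_true = np.array([[ 0.60, 0.35,  0.00],
                   [ 0.25, 1.00, -0.25],
                   [-0.05, 0.10,  1.05]])
rgb = rng.uniform(0.05, 0.95, size=(3, 24))         # 24 chart patches
xyz = M_true @ rgb + rng.normal(0, 0.002, (3, 24))  # noisy XYZ references

# Least-squares forward matrix via the normal equation.
M0 = xyz @ rgb.T @ np.linalg.inv(rgb @ rgb.T)

Yr, Yg, Yb = M0[1]  # middle row: the luminance weights
print(f"Y = {Yr:+.2f} r {Yg:+.2f} g {Yb:+.2f} b")
# D50 coefficients would then follow by multiplying by a Bradford
# chromatic adaptation matrix from D65 to D50.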

It seems that negative blue generalizes to all of them, and my rough generic assumption of about Y = 0.25r + g - 0.25b was not too far off.

Jack

PS The only cameras that had both negative red and negative blue coefficients were those based on Foveon sensors, which I did not include here (about -20% red and -40% blue).
 
Last edited:
