Did I miss a memo?

What's up with this: https://www.ebay.com/sch/i.html?_from=R40&_nkw=canon+fd+28mm+f2&_sacat=0&_sop=16

Is this a new reality? Or just people losing sense of said reality?
My take is Sumire. The Sumire primes are offered at more than twice the price of the fully modern, ultra-contrast, super-corrected lenses. The Sumires use aspherical elements, so by f/2 they behave like perfectly normal modern lenses. But the regions of the glass that come into play when opening up from f/2.8 to f/2 and on to f/1.4 turn them into lenses that produce "beautiful", film-like images. They have lateral CA, with green and red not landing in exactly the same place, in addition to longitudinal CA, which carries a green cast.

These Sumire primes really do cost $7,000, and the effect of these slight deviations from perfection is a more film-like, emulsion-like look, with much softer skin tones and a much smoother transition from in-focus to out-of-focus, likely in part mirroring or introducing (hopefully) a benign dose of spherical aberration.

So, if you don't have $7,000 and don't want to settle for a high-contrast, flat-rendering, clinical-looking modern gem... what lens do you get?

Look at the price changes and how they line up with the Sumire release. That is my hypothesis.
 
Video trends are unpredictable and go in waves, so you never know... Some lenses, such as the Canon 50/0.95, aren't that popular among filmmakers at the moment, but if someone uses one in a viral video, prices can rocket up overnight (even though it is a one-of-a-kind lens).

Commercial video production usually has much bigger budgets than photo production. Equipment rental, especially lenses, is a very small part of the budget, so prices of pro video equipment are generally much higher. These lenses are expected to stand up to serious abuse, in both sturdiness and reliability. It's a different world.
Dipping a toe into that world, I'm starting to understand that even a small amount of demand from that side influences our market. And the goal, as for most of us here, is the lens character.

For example, the articles here:

https://thecinelens.com

Prices are way higher in that market, with companies buying old cine and comparable still lenses, having the glass rehoused or cine-modified, and then renting them out.

The trend waves will cause some fluctuation in price per lens model, but with rental companies sitting between purchase/modification and actual use I doubt it will be much; the trends will dampen out on the rental companies' stock shelves.

Add to that the larger cine sensors in use now, and I expect this effect is not going to disappear. 8K sensors showing even more of the character of a 1970s cine lens design; there is some irony in that.

In 2016 Cooke brought out new "vintage" glass in a modern cine housing. At Cooke's prices, that will not change the effect for us here. New Chinese cine lens ranges might, once reports appear that the look of one range is similar enough to that of a vintage range; the NiSi full-frame cine lens (2018) pages already refer to a retro cinematic look...

BTW, it is not just the Kowa Cine Prominar lenses that are being rehoused, but also (more rarely) the Kowa leaf-shutter SLR lenses, in this case for full-frame cine: https://www.oldfastglass.com/kowa-vintage-full-frame. That explains the course of an old thread here: https://www.dpreview.com/forums/thread/4397939 (someone was interested in a rangefinder Kowa 85mm for cine use, since there are no Kowa SLR lenses between 50 and 100mm).

With kind regards, Ernst
No photographer's gear list is complete without the printer mentioned !
 
Couldn't the escalating prices of Canon FD lenses be related to the Sumires? Canon did catch up with the release of that line in 2019.
 
Possibly. In this review there is, as I understand it, a comment on the lack of available focal lengths in all the Canon Cine ranges: https://www.newsshooter.com/2019/04/03/canon-sumire-primes-a-modern-day-k-35/

 
I really don't shoot video, so I can't say much about the ranges. The question that arises is: what are objective criteria for rating lower-contrast lenses? It's evident that the camp saying it's just collectors or nostalgia is wrong.

It also hints that there are metrics yet to be discovered before we can really characterize lenses. It's like food, where we have very little to go on other than people we know to have exquisite taste, or chefs with exquisite technique, process and sources.

Take Francis Mallmann. He ended up #1, I don't recall in which year; I think the competition may have been held in Germany or Denmark in the 80s, and he mostly cooked with potatoes brought from Argentina.

Before being rated the top chef in that competition, by the judges and also by popular choice, he was almost disqualified. People thought he was disrespecting the institution when he presented his potato-heavy menu. He had also sneaked the potatoes in from Argentina in his regular luggage.

It relates to the point above in the sense that what he cooked was extremely delicious and delicate, yet potatoes are just a root that is easy to grow, abundant and cheap. They also have some unique characteristics, and he combined them in unique ways. Likewise, a simpler optic crafted very well, with a very well-judged balance of aberrations, in the right hands, may produce things you simply cannot achieve at all with the ultra-high-contrast lenses. No post-processing or flavoring after the fact can give you the same end result.

But we have no objective way to measure along which vectors, or in which ways, it is superior. I think what we often hear is:
  • Smoother transitions
  • Beautiful bokeh
  • Charming color tones
  • Flattering skin tones
  • Medium Contrast
Let's take just "smoother transitions".

!!! Sanity warning: complicated language ahead, almost unintelligible. It makes sense to me because I am picturing it, but I don't know how to explain it better. You may well want to skip all of it.

This requires a PSF that looks like a gradient. One route is apodization, an optical darkening toward the faster apertures, so one is in effect blending a fast aperture with a smaller one and weighting the image toward the stopped-down part of the pupil. The other route is the infinite ways a lens can spread its focus over slightly different distances, ideally close together and clumped around a central value. To have smooth bokeh, the lens should reach peak focus fairly abruptly and then slide down gently: say, in a hypothetical numerical example, 50% of the circle focuses at a certain distance, then 25% a little further back, then maybe 15% further still, then 7%, and so on.

I think that when a lens is close to or fully APO, it can only do this in one way, and the character of the images may end up largely similar from picture to picture. But old lenses often did not try to go in the APO direction, because it seems that made it very difficult to produce this smooth transition: APO happens in a plane, and what happens before or after that plane is what gets sacrificed. So a lot of the lenses that seem to have a more appealing rendering to cinematographers also seem to have more LoCA. Depending on the design, the lens is naturally rendering different wavelengths at slightly different distances, which is roughly the same as saying that different colors have different planes of focus at any focusing distance. One can have red, and the frequencies near it, very well focused and sharp, with green more defocused; taken together, the resulting colors on different objects will differ quite a bit depending on which source wavelengths they reflect, because fine detail gets mixed up very differently between lenses that do this differently.
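To make the "different colors have different planes of focus" part concrete, here is a toy numerical sketch: thin-lens geometry with invented per-colour focal shifts standing in for LoCA. None of the numbers describe any real lens.

    # Toy model: geometric blur-circle size vs. subject distance for three
    # wavelengths, using a thin lens whose focal length shifts slightly with
    # colour as a crude stand-in for LoCA. Every number here is invented.
    import numpy as np

    def image_dist(f, s):                 # thin-lens equation, everything in mm
        return 1.0 / (1.0 / f - 1.0 / s)

    aperture = 25.0                                           # ~50mm f/2 pupil
    focal = {"blue": 49.95, "green": 50.00, "red": 50.05}     # hypothetical LoCA

    sensor_pos = image_dist(focal["green"], 2000.0)           # focused (in green) at 2 m

    for s in (1800.0, 1900.0, 2000.0, 2100.0, 2200.0):        # subject distances, mm
        blur_um = {}
        for colour, f in focal.items():
            s_img = image_dist(f, s)                          # where this colour focuses
            blur_um[colour] = round(aperture * abs(s_img - sensor_pos) / s_img * 1000, 1)
        print(f"{s/1000:.1f} m:", blur_um, "(blur circle in microns)")

At the nominal focus distance the green circle is essentially zero while red and blue are already tens of microns wide, and the three grow at slightly different rates as the subject moves away; that is the "different recipe per wavelength" idea in miniature.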

All of this, likely, will not make sense to anyone; it seems convoluted, while to me it is clear what I mean. Really, maybe it is just imagining that every picture is made up of overlapping tiny circles, like a brush laying down 50,000 little colored circles. The circles range from small to very big; when they are as small as they can be, we call that "in focus". LoCA means that when one circle is at its smallest, another circle in the same position, representing some other wavelength, is bigger. A little behind this focal plane there is another set of circles, where the circle that was smaller has grown a bit and the other has actually shrunk; a bit further behind still, both circles are bigger.

How fast do these circles grow as we move behind the focal plane? At the same aperture, that depends on the mix of LoCA (multiple focus positions, spread out by wavelength), but also on whether the glass focuses part of the light behind the subject, which is what some of the spherical aberration does (and it need not arise from spherical surfaces to be measurable). In that case, at the best focus of a given wavelength, the little circles are a bit bigger than they could be. Say we only have red light and focus on a target, making the smallest red circles possible: if the lens focuses on multiple planes, what proportion ends up focused where? When I read "undercorrected spherical aberration", I take it to mean the lens quickly reaches its smallest circle, then the circles stay smaller than normal as we look at points behind the focal plane, so it is a bit less sharp at the point of maximum sharpness, but the focus degrades more slowly; past some point the proportion focused further back drops to zero and from there it becomes as blurry as any other lens. The transition is flattened rather than peaking abruptly.

In other words: a fully corrected lens gives a very small circle that then grows steadily. Another lens may give a mix, a proportion that is smallest right at the focal plane while some rays are focused behind it, so at the focal plane some points of the same wavelength are in focus and some are a bit blurry. This lowers contrast but is not lowering resolution, and with a matching sensor it may even ease problems with aliasing, moiré and other very damaging artifacts.
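A toy version of the "part of the light focuses a bit behind" idea: split one wavelength's energy between two nearby focus positions and watch the combined blur grow asymmetrically through focus. The split fraction and the distances below are invented, purely for illustration.

    # Toy model of undercorrected spherical aberration as an energy split: most
    # of the light focuses at the nominal image plane, a fraction focuses a bit
    # behind it. The combined blur then evolves asymmetrically through focus.
    import numpy as np

    aperture = 25.0                  # mm
    nominal_img = 51.3               # mm, image distance of the main focus
    sa_img = nominal_img + 0.15      # mm, where the "misbehaving" rays focus
    sa_fraction = 0.3                # invented fraction of energy in that second focus

    def blur_diameter(sensor, img_dist):
        return aperture * abs(sensor - img_dist) / img_dist   # geometric circle, mm

    for offset in np.arange(-0.30, 0.46, 0.075):
        sensor = nominal_img + offset
        d_main = blur_diameter(sensor, nominal_img)
        d_sa = blur_diameter(sensor, sa_img)
        # crude combined size: energy-weighted RMS of the two circle diameters
        combined = np.sqrt((1 - sa_fraction) * d_main**2 + sa_fraction * d_sa**2)
        print(f"sensor offset {offset:+.3f} mm: combined blur ~ {combined*1000:4.0f} um")

The curve is no longer symmetric around the nominal plane, and the point of minimum blur shifts slightly; that is the flattened, gentler fall-off the paragraph above is describing.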

Then combine circles of different colors and sizes across the z axis (distance from the focal plane), with the glass focusing any given color at more than one distance in a specific "recipe" (a proportion or distribution versus the focal plane), and add that this recipe varies by wavelength. Once these concepts are clear enough to visualize, it starts to show how two images that look rather similar at a macro level (the same picture taken with two lenses) can differ radically in the actual workings: the colors and the evolution of focus in the out-of-focus areas are often completely different.

Once this is understood, a test like MTF, as a measure of how good a lens is, is almost like using weight to evaluate food. One dish weighs 1 kg and another 800 g; that tells us very little about the food, other than that one is probably less dense. Likewise, contrast tells us very little about a lens. It's even worse: an APO lens that shows 99% contrast at 20 lp/mm across some image width just tells us that all the circles peak in unison in one focus plane and then defocus in a similar fashion; the rate at which the circles grow may differ a bit by wavelength, but the images are all going to look much like one another. One may well want such a lens, for example for a 200 MP camera.
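For what it's worth, the relationship being gestured at here is that MTF is just the magnitude of the Fourier transform of the PSF, so a single "contrast at 20 lp/mm" figure collapses an entire shape into one number. A minimal 1-D sketch with two invented blur profiles that happen to score almost the same at 20 lp/mm while looking nothing alike:

    # MTF is the magnitude of the Fourier transform of the PSF (here a 1-D
    # line-spread function), so "contrast at 20 lp/mm" collapses a whole shape
    # into a single number. Both blur profiles below are invented.
    import numpy as np

    dx = 0.001                                 # sample spacing: 1 micron, in mm
    x = np.arange(-0.5, 0.5, dx)               # 1 mm wide window

    profiles = {
        "hard-edged disc": (np.abs(x) < 0.010).astype(float),    # 20-micron-wide box
        "soft gaussian": np.exp(-0.5 * (x / 0.006)**2),           # similar overall width
    }
    for name, lsf in profiles.items():
        lsf = lsf / lsf.sum()                  # normalize total energy
        mtf = np.abs(np.fft.rfft(lsf))
        freqs = np.fft.rfftfreq(len(lsf), d=dx)          # cycles/mm = lp/mm
        i = np.argmin(np.abs(freqs - 20.0))
        print(f"{name}: contrast at 20 lp/mm ~ {mtf[i]:.2f}")

Both land around 0.75 at 20 lp/mm, yet one is a hard-edged disc and the other fades off gently; out of focus they would render very differently, which is exactly the information the single number throws away.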

This is why it seems crazy to me that we have almost no information to understand why we like some renderings. And it is also a little hilarious that, since different lenses are non-APO in very different ways, they will render fine detail very differently depending on which combinations of wavelengths are present in the small details of the subject. A lens may make an amazing picture of one subject and look unrefined and meh on another object whose fine detail has a very different combination of colors.

This is also why I absolutely love Jim Kasson's analyses of some "APO" lenses, where he measures the contrast BY COLOR, at least for the RGB buckets (which is what we can sense through a Bayer filter), across the depth plane. That tells a huge amount more than any MTF. And I also adore some of the Zeiss white papers that show the MTF by wavelength, sometimes for two of them, sometimes three. Unfortunately, those say absolutely nothing about contrast along the z axis, but at least one knows that when contrast is 70% somewhere, 30% of the light is nearby: possibly poorly focused, behind or ahead of the subject, or biased and not circular. So when I see 70% contrast, I don't think "bad lens". It has all the potential to be amazing, or truly horrible, depending on exactly what is going on with the 30% that is not in the right place and focus plane. Increasing that lens's contrast might make it better or worse; it's just impossible to tell. The only thing we'd know without any uncertainty is that it is more contrasty.

Maybe there is a way to explain many things in optics with overlapping-circle diagrams, in a way where little has to be said and it becomes more self-evident. I am pretty sure the wording and most of this text fails to make much sense, as I am not recalling textbook knowledge but rather a way of thinking about what the lens actually does in practice and the effects it has on detail.

Is the Sumire worth $7,000? I have no way to tell, and we have no way to characterize it other than seeing whether we enjoy the images or not, without being able to say whether they were enhanced or degraded by the lens. That also makes comparisons, and statements about how good or bad, how overpriced or what a bargain something is, a purely subjective, perceptual and emotional affair. What is real is that the lens maker could have done things a thousand different ways and reached the same contrast, with infinite other ways to shape the blur circles over the z axis and across wavelengths. So a lens and optical designer must be a scientist, but must also have a very refined taste, a goal, and a way of knowing all these details that we likely never get to understand.
 
But we have no objective way to measure along which vectors, or in which ways, it is superior. I think what we often hear is:
  • Smoother transitions
  • Beautiful bokeh
  • Charming color tones
  • Flattering skin tones
  • Medium Contrast
A large part of the reason I've been pushing measurement of OOF PSF is that one center and one corner measurement can give you a pretty good read on a ton of imaging properties, mostly relating to how out-of-focus stuff gets rendered, but also detecting things like decentering and other manufacturing/acquired optical defects.
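Purely as an illustration of the kind of numbers one could pull out of a single photographed OOF PSF (the moment-based radius and the ellipticity-as-decentering-hint heuristic below are a simplification for the sketch, not a description of how such measurements are actually done):

    # Sketch: pulling a few crude numbers out of a photographed out-of-focus PSF.
    # Assumes `disc` is a small grayscale numpy array containing one blur disc.
    import numpy as np

    def oof_psf_stats(disc):
        disc = disc.astype(float)
        disc -= disc.min()
        total = disc.sum()
        ys, xs = np.indices(disc.shape)
        cy, cx = (ys * disc).sum() / total, (xs * disc).sum() / total   # centroid
        # Second moments give an effective size and an ellipticity estimate.
        var_y = ((ys - cy)**2 * disc).sum() / total
        var_x = ((xs - cx)**2 * disc).sum() / total
        cov = ((ys - cy) * (xs - cx) * disc).sum() / total
        tr, det = var_x + var_y, var_x * var_y - cov**2
        lam1 = tr / 2 + np.sqrt(max(tr**2 / 4 - det, 0.0))
        lam2 = tr / 2 - np.sqrt(max(tr**2 / 4 - det, 0.0))
        return {"centroid": (cy, cx),
                "radius_px": np.sqrt(tr),                         # rough RMS size in pixels
                "ellipticity": np.sqrt(lam1 / max(lam2, 1e-9))}   # 1.0 = perfectly round

    # Example with a synthetic, slightly elongated disc:
    y, x = np.mgrid[-40:41, -40:41]
    fake = ((x / 30.0)**2 + (y / 25.0)**2 <= 1.0).astype(float)
    print(oof_psf_stats(fake))

A strongly non-round or lopsided disc in the centre of the frame is the kind of thing that would hint at a decentered or otherwise out-of-spec sample.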

Color rendering is a huge can of worms because the truth is that cameras are insanely undersampling color information. You'll hear a lot of people claim that "with good color science" three color channels is enough, largely based on the idea that's what human eyes use... however, that's about as valid as saying that since humans have only two contact points with the ground (two feet), no vehicle should have more than two wheels. I think the variation in color handling between cameras and digital processing methods is typically much larger than the variation between lenses. That said, it is true that some lens families are more consistent about color across lens models than other families are, and superior consistency means matching colors between them is more effective.

Medium contrast? I think this gets tied into the whole concept of MTF measurements, which are treated as {contrast, resolution} number pairs. Honestly, a pair of in-focus PSF images, one in the center and one in a corner, would give a lot more information. The catch is that the in-focus PSF is, by definition, too small to be accurately captured by the camera sensor. Personally, I'd be a lot happier if lensmakers would publish a couple of in-focus PSF images rather than those MTF charts -- but I don't think they ever will, because those charts usually come from computation using the lens design model, whereas the PSF images should really come from measuring lens samples on an optical bench or specially-constructed very-small-pixel camera.
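The "too small" point is easy to put rough numbers on with the usual Airy-disc estimate (diameter roughly 2.44 times wavelength times f-number); the pixel size below is just a typical value, not any specific camera:

    # Rough numbers behind "the in-focus PSF is too small for the sensor":
    # Airy-disc diameter is about 2.44 * wavelength * f-number.
    wavelength_um = 0.55                      # green light, in microns
    pixel_um = 4.0                            # typical-ish pixel pitch, not any specific camera
    for f_number in (1.4, 2.0, 4.0, 8.0):
        airy_um = 2.44 * wavelength_um * f_number
        print(f"f/{f_number}: Airy disc ~ {airy_um:.1f} um "
              f"({airy_um / pixel_um:.2f} pixels across)")

At f/2 the diffraction-limited spot is well under one 4-micron pixel, so an in-focus PSF shot on an ordinary sensor is mostly measuring the sensor, not the lens.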
 
A large part of the reason I've been pushing measurement of OOF PSF is that one center and one corner measurement can give you a pretty good read on a ton of imaging properties, mostly relating to how out-of-focus stuff gets rendered,
It's so rich, I wonder why its use isn't more widespread.
but also detecting things like decentering and other manufacturing/acquired optical defects.
I am not sure lens makers would love super-easy-to-detect deviations from spec. A deluge of returns, given enough sample variation?
Color rendering is a huge can of worms because the truth is that cameras are insanely undersampling color information. You'll hear a lot of people claim that "with good color science" three color channels is enough,
I started to write about it: there are so many steps where things are interpreted, filtered, mangled, aggregated and plainly guessed. From the source light, the atmosphere, each element's material, shape and coatings, to the microlenses, the color filters, and everything else that happens before a count is recorded; then the demosaic itself and the many ways of doing it, plus the consequences of undersampling; then the system interpreting and transforming it, the display device, and the system's knowledge (and ignorance) of that display's capabilities and limitations; then all of that goes into our eyes and the whole circle starts again, each eye different, and all of it mostly a perceptual exercise with colors that don't actually exist in nature, only as a mental process in the brain. Then we add the effects we can summarize as LoCA and LaCA but that are richer than that, and how, depending on all of it, each scene will get a different rendering. It's a bit of a mystery that we can look at pictures and be largely content. I was amazed to learn of an optical illusion in which a purely B&W image with some extremely low-resolution color bands across it is mentally rendered as if the entire scene were in perfectly normal color. Have you seen these?

It's almost scary. It's as if we could have an elephant in front of us and believe, without a doubt, that it's actually a flying spaghetti with a black hat and pink underwear.
largely based on the idea that's what human eyes use... however, that's about as valid as saying that since humans have only two contact points with the ground (two feet), no vehicle should have more than two wheels.
Good analogy :-)
I think the variation in color handling between cameras and digital processing methods is typically much larger than the variation between lenses. That said, it is true that some lens families are more consistent about color across lens models than other families are, and superior consistency means matching colors between them is more effective.
It seems to me this is very optimistic. We even have situations where the color cast differs widely by aperture on the same lens, plus the widely different configurations of LoCA, LaCA and all the other chromatic aberrations.

I think we could construct scenes with certain details and color information such that one lens and the next, one camera and the next, render something extremely different. I am not aware of anyone playing with the idea that fooling a camera with abstractly generated images could be a very effective way to make tangible how things crumble as we approach Nyquist, and how differently various systems perceive at this small scale. Obviously, those doing advanced imaging of the ultra-small and the ultra-far-away are aware of it, but they are a very small subset.
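If anyone wants to play with that, a zone-plate-style target is the classic way to probe behaviour around Nyquist, since its spatial frequency keeps rising with radius until something in the chain gives up. A minimal generator (parameters arbitrary; assumes numpy and Pillow are available):

    # Minimal zone-plate-style target: spatial frequency rises with radius, so
    # somewhere it crosses whatever the lens/sensor chain can resolve and the
    # breakdown becomes visible when the target is printed/displayed and photographed.
    import numpy as np
    from PIL import Image

    size = 2048
    k = 0.0015     # chirp rate; with this value the pattern itself nears the
                   # pixel Nyquist limit toward the edges (tweak to taste)
    y, x = np.mgrid[-size // 2:size // 2, -size // 2:size // 2]
    r2 = x.astype(float)**2 + y.astype(float)**2
    pattern = 0.5 + 0.5 * np.cos(k * r2)              # classic Fresnel zone plate
    Image.fromarray((pattern * 255).astype(np.uint8)).save("zone_plate.png")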
Medium contrast? I think this gets tied into the whole concept of MTF measurements, which are treated as {contrast, resolution} number pairs. Honestly, a pair of in-focus PSF images, one in the center and one in a corner, would give a lot more information. The catch is that the in-focus PSF is, by definition, too small to be accurately captured by the camera sensor. Personally, I'd be a lot happier if lensmakers would publish a couple of in-focus PSF images rather than those MTF charts -- but I don't think they ever will, because those charts usually come from computation using the lens design model, whereas the PSF images should really come from measuring lens samples on an optical bench or specially-constructed very-small-pixel camera.
Mhh. I had never thought about that. I guess they'd also show up as a diffraction pattern, depending on how small they are made and at what aperture.

I also really like the Full Field tests by LensRentals, in addition to Jim Kasson's (much simpler) tests by color channel.

https://www.lensrentals.com/blog/2018/06/developing-a-rapid-mtf-test-for-photo-and-video-lenses/

But these are much more complicated, nuanced tests, requiring an optical bench in the first case and a lot of magical math, while the other is greatly limited by a lot of other factors and of course by the CFA and sensor.

Still, the LensRentals example has the issue of MTF taken to the extreme and in full glory, while saying nothing about the aspect that matters most: the effects on color, or across wavelengths. Nothing prevents doing these tests at many very specific wavelengths for, say, two or three lenses, even if just to really understand the true lens characteristics that are hidden by monochrome testing.

But why go to that level of exasperation when one could point a lens at a tiny bright light in a darkened environment and just observe the evolution of the PSF from any defocus position, through focus and past it, in seconds and without much room for mistake?

The thru-focus PSF (maybe a better term for the entire genealogy of blur circles a lens generates) is super rich, and it is also very intuitive the moment one develops a good enough feel for what a convolution is and how images emerge from that process. I think this last challenge is the biggest limiting factor of all in making the PSF shape a primary metric.
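A sketch of what "watching the PSF evolve through focus" might look like once the frames are on disk; the file naming and the RMS-radius metric are hypothetical, assuming a series of grayscale shots of a distant point light taken while racking focus:

    # Sketch: reduce a through-focus series of point-light frames to one curve
    # of blur-disc radius vs. frame index. Assumes grayscale frames named
    # focus_000.png, focus_001.png, ... (hypothetical); needs numpy + Pillow.
    import glob
    import numpy as np
    from PIL import Image

    def effective_radius(img):
        a = np.asarray(img, dtype=float)
        a -= np.median(a)                   # crude background removal
        a = np.clip(a, 0, None)
        total = a.sum() or 1.0
        ys, xs = np.indices(a.shape)
        cy, cx = (ys * a).sum() / total, (xs * a).sum() / total
        r2 = ((ys - cy)**2 + (xs - cx)**2) * a
        return np.sqrt(r2.sum() / total)    # RMS radius in pixels

    for path in sorted(glob.glob("focus_*.png")):
        radius = effective_radius(Image.open(path).convert("L"))
        print(path, f"{radius:.1f} px")

How quickly and how symmetrically that curve rises on either side of its minimum is, in a few numbers, the "smooth transition" behaviour discussed further up.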

I think the most limiting aspect of enjoying the PSF is that we don't encounter many good analogies for the convolution process. I did not understand it, and it made little sense; even when I could follow the explanations, it was very hard to grasp in its entirety (or to a deep enough extent), until I tortured myself, who knows why, with understanding and programming some basic CNNs. I wanted to understand very intuitively why and how neural networks could detect objects and achieve anything at all, or how they could create feature detectors out of nothing more than trial and error. That forced me to visualize the process in slow motion until, after much effort, my brain grasped the concept intuitively, putting the steps together as something that happens at all times, in all places, simultaneously. There is a positive side effect to maybe being less intelligent: I could only program or play intentfully with CNNs, that is, understand them to the extent I wanted, after grasping convolution more intuitively (and correctly).
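The "overlapping little circles" picture is literally what a two-dimensional convolution computes: every scene point deposits a copy of the PSF scaled by its brightness, and the image is the sum of all those copies. Spelled out point by point (scipy would do this properly and fast, but the explicit loop is the intuition):

    # Brute-force 2-D convolution: each scene point deposits a copy of the PSF,
    # scaled by its brightness, and the image is the sum of those copies.
    import numpy as np

    def convolve2d_naive(scene, psf):
        ph, pw = psf.shape
        out = np.zeros((scene.shape[0] + ph - 1, scene.shape[1] + pw - 1))
        for (y, x), value in np.ndenumerate(scene):
            if value:                                # skip empty scene points
                out[y:y + ph, x:x + pw] += value * psf
        return out

    scene = np.zeros((9, 9)); scene[4, 4] = 1.0; scene[2, 6] = 0.5   # two point sources
    psf = np.array([[0, 1, 0],
                    [1, 2, 1],
                    [0, 1, 0]], dtype=float) / 6.0   # toy blur kernel
    print(convolve2d_naive(scene, psf).round(2))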

I think observing the PSF really is a pleasure. A super-nerd pleasure, and it's still a bit like reading tea leaves in a bowl. But one starts to make good inferences, even if not always right, well above what luck would dictate, the more one practices. And in doing so, one has a tool that is like putting the lens on a truth-telling drug: it rapidly and accurately confesses what it actually does and all of its secrets.

What remains for me is figuring out a light source that can be made to emit at specific wavelengths at will, or knowing more intimately the shape and proportion of wavelengths in different point-like light sources. But even without that, a lot is revealed by what colors, fringes and casts are visible under whatever largely white light source one has at hand.
 
A large part of the reason I've been pushing measurement of OOF PSF is that one center and one corner measurement can give you a pretty good read on a ton of imaging properties, mostly relating to how out-of-focus stuff gets rendered,
It's so rich, I wonder why its use isn't more widespread.
It has only been "a thing" since I started pushing the idea at Electronic Imaging about a decade ago. I will say that, for example, the idea was very well received by folks like the founder of Imatest. However, people like reducing a measure to "a number" -- not "an image" -- and that's where OOF PSFs fail.
... I think the variation in color handling between cameras and digital processing methods is typically much larger than the variation between lenses. That said, it is true that some lens families are more consistent about color across lens models than other families are, and superior consistency means matching colors between them is more effective.
It seems to me this is very optimistic. We even have situations where the color cast differs widely by aperture on the same lens, plus the widely different configurations of LoCA, LaCA and all the other chromatic aberrations.
You're certainly right that color shift on stopping down is commonly severe, with very few people ever noticing. Quite a few Canon FD/FDn lenses are exceptionally bad this way -- basically having almost no blue in the wide-open image -- and the general public reaction to that has largely been positive: they like the warmer look!

However, most SR/MC/MD Minolta glass doesn't do much of that, and lots of modern lenses are also much better about the stop-down color shift.

Honestly, given how computer-assisted design works, I don't understand why **EVERY** modern lens isn't an APO design with no stop-down color shift.
Mhh. I had never thought about that. I guess they'd also show up as a diffraction pattern, depending on how small they are made and at what aperture.
Yes, an in-focus PSF literally is a diffraction pattern.
I also really like the Full Field tests by LensRentals, in addition to Jim Kasson's (much simpler) tests by color channel.

https://www.lensrentals.com/blog/2018/06/developing-a-rapid-mtf-test-for-photo-and-video-lenses/

OLAF is pretty impressive... and not something I have in my basement. :-)
Still, the LensRentals example has the issue of MTF taken to the extreme and in full glory, while saying nothing about the aspect that matters most: the effects on color, or across wavelengths. Nothing prevents doing these tests at many very specific wavelengths for, say, two or three lenses, even if just to really understand the true lens characteristics that are hidden by monochrome testing.
FWIW, OOF PSF can be easily measured in my basement for roughly 10nm wavelength bands. How? Don't use a "white" LED as the point source, but LEDs of specific colors. LEDs are actually very narrow-band emitters. As far as I know, the standard colors are:
  • 375nm (UV limit)
  • 450nm (Blue)
  • 530nm (Green)
  • 600nm (Red)
  • 950nm (NIR limit)
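One way those narrow-band captures could be combined, purely as an illustration: stack a red-, green- and blue-LED PSF shot into a false-colour composite so longitudinal CA shows up as coloured rings. The file names are hypothetical; the sketch assumes already-aligned grayscale frames plus numpy and Pillow.

    # Sketch: combine three narrow-band PSF captures (one per LED) into a
    # false-colour image. File names are hypothetical.
    import numpy as np
    from PIL import Image

    bands = {"R": "psf_600nm.png", "G": "psf_530nm.png", "B": "psf_450nm.png"}
    channels = []
    for name in ("R", "G", "B"):
        a = np.asarray(Image.open(bands[name]).convert("L"), dtype=float)
        a /= a.max() or 1.0                      # normalize each band on its own
        channels.append(a)
    rgb = (np.dstack(channels) * 255).astype(np.uint8)
    Image.fromarray(rgb).save("psf_falsecolour.png")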
I think the most limiting aspect of enjoying the PSF is that we don't encounter many good analogies for the convolution process.
Convolution basically means a summation of multiplicative terms: think scene point luminance times PSF. Unfortunately, interference means in-focus PSFs don't exactly sum; they sort-of do if you model them as signed energy distributions, but imaging sensors only measure the positive portion. Modeling OOF PSF as a sum works even worse because occlusions effectively remove some points of view from the sums. Overall, I think of convolution rather the same way I view assumptions that unexplored parameters have Gaussian distributions: it's a mathematically elegant way to view things, but not necessarily a good approximation to what really is happening.

The other bad thing about convolution is that deconvolution isn't a particularly cheap nor reliable operation. You can't deconvolve arbitrary convolution functions. On the other hand, deconvolution is often a good enough approximation to do useful things.
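For what it's worth, the usual workaround for "you can't just invert a convolution" is regularized inversion; a minimal Wiener-style sketch (the noise constant k is invented, and this ignores all the real-world problems mentioned above):

    # Minimal Wiener-style deconvolution sketch: divide in the frequency domain,
    # but damp frequencies where the PSF transfers almost nothing. The noise
    # constant k is invented; real pipelines estimate it from the data.
    import numpy as np

    def wiener_deconvolve(blurred, psf, k=0.01):
        H = np.fft.rfft2(psf, s=blurred.shape)        # transfer function of the blur
        B = np.fft.rfft2(blurred)
        return np.fft.irfft2(B * np.conj(H) / (np.abs(H)**2 + k), s=blurred.shape)

    # Toy check: blur a single point with a 3x3 box kernel, then restore it.
    scene = np.zeros((32, 32)); scene[16, 16] = 1.0
    psf = np.zeros((32, 32)); psf[:3, :3] = 1.0 / 9.0
    blurred = np.fft.irfft2(np.fft.rfft2(scene) * np.fft.rfft2(psf), s=scene.shape)
    print(round(float(blurred.max()), 2))                          # about 0.11
    print(round(float(wiener_deconvolve(blurred, psf).max()), 2))  # much closer to 1.0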
... I think observing the PSF really is a pleasure. A super-nerd pleasure, and it's still a bit like reading tea leaves in a bowl. But one starts to make good inferences, even if not always right, well above what luck would dictate, the more one practices.
As a photographer, I think PSFs are extremely intuitive and satisfying. As a signal-processing engineer, they're functions annoyingly specified by table lookup rather than a nice, continuous, invertible, closed form expression.
What remains for me is figuring out a light source that can be made to emit at specific wavelengths at will, or knowing more intimately the shape and proportion of wavelengths in different point-like light sources. But even without that, a lot is revealed by what colors, fringes and casts are visible under whatever largely white light source one has at hand.
Again, the answer is that LEDs are cheaply available in a wide range of precise wavelengths. This is actually how high-end lighting is implemented now: you have an array of LEDs with various wavelengths in a mixing box, and can individually control the intensity of each color of LED to make whatever spectral brightness curve you want.
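The mixing box is basically a small least-squares problem: choose drive levels for each LED so the summed spectra approximate a target curve. A toy version with made-up Gaussian LED spectra and an arbitrary target (a proper implementation would use a non-negative solver such as scipy's nnls):

    # Toy version of the LED mixing-box idea: pick per-LED intensities so the
    # sum of made-up Gaussian LED spectra approximates a target spectrum.
    import numpy as np

    wl = np.arange(400, 701, 5, dtype=float)                # wavelength grid, nm
    led_peaks = [450, 470, 500, 530, 560, 600, 630, 660]    # hypothetical LED set

    def led_spectrum(peak, width=15.0):
        return np.exp(-0.5 * ((wl - peak) / width)**2)

    A = np.column_stack([led_spectrum(p) for p in led_peaks])     # columns = LEDs
    target = np.exp(-0.5 * ((wl - 550) / 80.0)**2)                # broad, arbitrary target

    weights, *_ = np.linalg.lstsq(A, target, rcond=None)          # unconstrained fit
    weights = np.clip(weights, 0, None)                           # crude stand-in for non-negativity
    fit = A @ weights
    print("per-LED drive levels:", np.round(weights, 2))
    print("worst-case spectral error:", round(float(np.abs(fit - target).max()), 3))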
 
A 24 was listed a few days ago for US$5.5k, and I would not be surprised if it's already gone.
 
It's always a pleasure to read your comments. It's a bit like I'm finally about to finish a soap opera that has kept me hanging for a long time, in this case in the optical-curiosity department, and then at the last second there's a 90-degree plot twist and the screen goes blank with no more episodes to watch ;-P
As a photographer, I think PSFs are extremely intuitive and satisfying. As a signal-processing engineer, they're functions annoyingly specified by table lookup rather than a nice, continuous, invertible, closed form expression.
When I think of the PSF in purely intuitive terms, I think of it not as a proper function but more like a black box that generates "circles": the lens being the set of all those shapes across all z, for every focal plane, across wavelengths, at each x,y. Is this along the same lines as what you call a table lookup?
Again, the answer is that LEDs are cheaply available in a wide range of precise wavelengths. This is actually how high-end lighting is implemented now: you have an array of LEDs with various wavelengths in a mixing box, and can individually control the intensity of each color of LED to make whatever spectral brightness curve you want.
Mhh. This makes sense.

Reminds me of my amateur experiments with audio synthesis. The most common modeling approaches are subtractive synthesis and additive synthesis. In additive synthesis one just creates a "sound" (a light spectrum, in this analogy) by adding individual frequency components in some proportion, e.g. sine waves of specific frequencies. Subtractive is more common and is what most instruments do: generate a wide-spectrum signal, like a sawtooth or square wave, then filter pieces of it out.
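For the analogy's sake, additive synthesis really is just a few lines: sum some sine partials and write the result out (the frequencies and amplitudes below are arbitrary; uses only numpy and the standard wave module):

    # Additive synthesis in miniature: sum a few sine partials into one tone
    # and write it out as a 16-bit mono WAV file. All values are arbitrary.
    import wave
    import numpy as np

    rate, seconds = 44100, 2.0
    t = np.arange(int(rate * seconds)) / rate
    partials = [(220.0, 1.0), (440.0, 0.5), (660.0, 0.25), (880.0, 0.125)]
    signal = sum(amp * np.sin(2 * np.pi * freq * t) for freq, amp in partials)
    signal /= np.abs(signal).max()                       # normalize to avoid clipping

    with wave.open("additive_tone.wav", "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)                                # 16-bit samples
        w.setframerate(rate)
        w.writeframes((signal * 32767).astype(np.int16).tobytes())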

Signal processing is one of the most fascinating fields. It seems everything is built up from interfering waves: images, sounds... "matter", maybe even time itself.
 
When I think of the PSF in purely intuitive terms, I think of it not as a proper function but more like a black box that generates "circles": the lens being the set of all those shapes across all z, for every focal plane, across wavelengths, at each x,y. Is this along the same lines as what you call a table lookup?
The lookup table is literally the image of the PSF... well, give or take that thing about needing negative values too. ;-)
Mhh. This makes sense.
It is so much easier than the "old way" using dichroic mirrors... as in the dichro color head of my old 23CII enlarger.
Signal processing is one of the most fascinating fields. It seems everything is built up from interfering waves: images, sounds... "matter", maybe even time itself.
A lot of engineering is simply breaking complex problems down into multiple simpler ones you know how to solve, and fragmenting the spatial and/or frequency domain is a common way to handle that.... :-)
 
