
Can computer corrections make simple lenses look good?

By dpreview staff on Sep 30, 2013 at 17:51 GMT

Modern lenses tend to be large and expensive, with multiple glass elements combining to minimise optical aberrations. But what if we could just use a cheap single-element lens, and remove those aberrations computationally instead? This is the question scientists at the University of British Columbia and the University of Siegen are asking, and they've come up with a way of improving images from a simple single-element lens that gives pretty impressive results.

Image scientists are looking at whether a complex lens can be replaced by a simple one, along with lots of computation.

The method is described in detail in the researchers' paper. It works by understanding the lens's 'point spread function' - the way point light sources are blurred by the optics - and how this changes across the frame. Knowing this, in principle it's possible to analyse an image from a simple lens and reconstruct how it should look, through a computational process known as 'deconvolution'. 
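To picture what deconvolution does, here is a minimal, hypothetical sketch using a Wiener filter with a known, spatially uniform PSF. This is not the researchers' algorithm (which handles a spatially varying PSF and adds cross-channel regularisation); the `noise_ratio` constant is illustrative, not a value from the paper.

```python
import numpy as np

def wiener_deconvolve(blurred, psf, noise_ratio=0.01):
    """Undo a known, spatially uniform blur with a Wiener filter.

    A minimal sketch only: the paper's method handles a spatially
    varying PSF and adds cross-channel priors on top of this.
    """
    # Embed the PSF in a full-size array and shift its centre to the
    # origin, so multiplication in the frequency domain matches
    # circular convolution with the PSF.
    kernel = np.zeros_like(blurred, dtype=float)
    kernel[:psf.shape[0], :psf.shape[1]] = psf
    kernel = np.roll(kernel, (-(psf.shape[0] // 2), -(psf.shape[1] // 2)),
                     axis=(0, 1))

    H = np.fft.fft2(kernel)
    G = np.fft.fft2(blurred)
    # Wiener filter: invert the blur, but damp frequencies where the
    # PSF response is weak and noise would otherwise be amplified.
    F = np.conj(H) * G / (np.abs(H) ** 2 + noise_ratio)
    return np.real(np.fft.ifft2(F))
```

In the noise-free limit (`noise_ratio` near zero) this inverts the blur almost exactly; with real sensor noise the regularisation term becomes essential, which is why plain inversion is never used in practice.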

The Point Spread Function diagram for a simple f/4.5 lens of the plano-convex type (i.e. one side curved, the other flat). The centre shows broad discs due to chromatic aberration, while the cross-shapes towards the corners are due to coma and astigmatism. The researchers split the image up into 'tiles', each with its own PSF. 
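The tiling idea can be sketched in code: deconvolve each tile with its own measured PSF and reassemble. This is a hypothetical toy, not the researchers' implementation — processing tiles independently like this leaves visible seams at tile borders, which real implementations avoid by overlapping and blending tiles; `eps` is an illustrative regularisation constant.

```python
import numpy as np

def deconvolve_tiled(image, psfs, tile, eps=1e-3):
    """Deconvolve an image whose blur varies across the frame.

    `psfs[i][j]` is the PSF measured for tile (i, j). Illustrative
    only: independent tiles produce seams that a real implementation
    would remove by overlapping and blending.
    """
    out = np.empty(image.shape, dtype=float)
    for i in range(0, image.shape[0], tile):
        for j in range(0, image.shape[1], tile):
            block = image[i:i + tile, j:j + tile].astype(float)
            psf = psfs[i // tile][j // tile]
            # Pad this tile's PSF to the block size, centre at origin
            k = np.zeros(block.shape)
            k[:psf.shape[0], :psf.shape[1]] = psf
            k = np.roll(k, (-(psf.shape[0] // 2), -(psf.shape[1] // 2)),
                        axis=(0, 1))
            H = np.fft.fft2(k)
            G = np.fft.fft2(block)
            # Wiener step using the PSF measured for this tile
            out[i:i + tile, j:j + tile] = np.real(
                np.fft.ifft2(np.conj(H) * G / (np.abs(H) ** 2 + eps)))
    return out
```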

This isn't a new idea, but the team of researchers claim to have made some key advances in the field, making their method more robust than those previously suggested. For example, chromatic aberration means that simple lenses can give detailed information in one colour channel alongside significant blur in the others, so they've decided to use cross-channel information to reconstruct the finest detail possible.

One serious problem with deconvolution approaches is that they often struggle to reach a single 'best' solution. The group claims to have solved this by optimising each colour channel in turn, rather than trying to deal with them all simultaneously. 
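One way to picture the cross-channel idea: let each channel keep its own low frequencies (the colour), but borrow fine detail from the sharpest channel. The sketch below is a much simpler stand-in for the paper's per-channel optimisation — a plain detail transfer, with a numpy-only box blur standing in for a proper low-pass filter; function names and parameters are hypothetical.

```python
import numpy as np

def box_blur(a, r=2):
    """Crude low-pass filter: mean over a (2r+1) x (2r+1) window,
    with wrap-around borders (np.roll is circular)."""
    out = np.zeros(a.shape, dtype=float)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            out += np.roll(a, (dy, dx), axis=(0, 1))
    return out / (2 * r + 1) ** 2

def cross_channel_sharpen(channels, sharp_idx=1, r=2):
    """Toy stand-in for the paper's cross-channel prior.

    Each channel keeps its own low frequencies (colour) but takes its
    high-frequency detail from the sharpest channel (by default green,
    index 1). The actual method solves a regularised optimisation per
    channel; this only illustrates why one sharp channel can help the
    blurry ones.
    """
    sharp = channels[sharp_idx].astype(float)
    detail = sharp - box_blur(sharp, r)   # high frequencies of the sharp channel
    return [box_blur(ch.astype(float), r) + detail for ch in channels]
```

The sharp channel passes through unchanged (its base plus its own detail), while the blurry channels gain edges they could not supply themselves.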

This is all very clever, of course, but does it work? The group shows several before and after examples on its website, shot using a simple F4.5 plano-convex lens on a Canon EOS 40D, and the results are quite impressive. 

 Original version
 Image de-blurred using deconvolution

So will this be coming to a camera near you anytime soon? In this precise form, probably not - the system still has problems with areas of the image that are slightly out of focus, and won't work with large-aperture lenses. And while the images are certainly improved, they're unlikely to satisfy committed pixel peepers. In fact we'd guess it's most likely to be useful in smartphones, where the mechanical simplicity and robustness of simple lenses should be appealing. However, it certainly offers an interesting glimpse of the way results could be improved when shooting with a 'soft' lens.

Comments

Total comments: 162
paulkienitz
By paulkienitz (7 months ago)

The corrected version is not as much better than the uncorrected as one might hope.

They might get further by using a doublet as their simple lens -- that way, the initial badness is significantly less.

3 upvotes
Henry Falkner
By Henry Falkner (7 months ago)

They are re-inventing the wheel. Such corrections are incorporated in the firmware of my 24x zoom Olympus SZ-30MR pocket P&S and a multitude of similar dedicated cameras, and presumably in all cell phone cameras.

Academe is sometimes so isolated, they have no clue as to what is happening around them.

4 upvotes
TwoMetreBill
By TwoMetreBill (7 months ago)

Too bad you had to demonstrate to the world that you didn't understand the article. This is known as trolling.

18 upvotes
Just another Canon shooter
By Just another Canon shooter (7 months ago)

You have no clue how what they did differs from what your Oly can do.

8 upvotes
km25
By km25 (7 months ago)

I can see these programs improving and correcting. But remember: garbage in, garbage out. Any sharpness or correction is a guess by the program as to what may be correct. If you take a picture of Pung, a program can only go so far to make it look like Nicole Kidman. If the info is not there, it just will not be. Lens IQ is either there or not; some correction yes, clean-up yes. It is still better if you have a fine lens.

0 upvotes
CarVac
By CarVac (7 months ago)

It is not a guess.

If you [nearly] perfectly characterize the transfer [point spread] function of a lens as these folks have done, deconvolution can be nearly lossless. The only loss is info off the edge of the picture, which is why the corrected image is cropped.

The info is there, it's just spread around. The only lost info is from longitudinal CA.

2 upvotes
jsandjs
By jsandjs (7 months ago)

All those laws in physics are 'guesses' too. Analysis is nothing but a better word for guess. Anyway, there are different levels of guessing, some based on solid ideas and some totally grabbed from thin air. We cannot get something from nothing, but we do get something from something. Let's keep on finding those precious somethings.

3 upvotes
Holger Bargen
By Holger Bargen (7 months ago)

I use the Pentax 55-300 mm lens. A very nice lens and good value for the price - but definitely not a professional lens.

I also have DxO 7, and the software supports the K5 + 55-300 mm camera-lens combination. The first time I used this software I could not believe the quality it generated from the average IQ of my pictures. It works and it works very well - but it is a pity that DxO does not support more camera-lens combinations for Pentax.

Best regards
Holger

2 upvotes
Roland Karlsson
By Roland Karlsson (7 months ago)

It is a pity you cannot calibrate your lens yourself.

1 upvote
Olivier from DxO Labs
By Olivier from DxO Labs (7 months ago)

Hi Roland,

Unfortunately it is not that simple, although we would be very happy if it were. We need hundreds, sometimes thousands of shots in a very controlled situation (a lab) to be able to make a DxO Optics Module.
Also, this job needs to be done on each camera in order to be accurate.

As a consequence, it is not possible to do it "at home".

We made a short video here to explain what we do when we calibrate a camera + lens combo:
http://www.dxo.com/intl/photography/dxo-optics-pro/supported-equipment

Best,
Olivier

6 upvotes
Olivier from DxO Labs
By Olivier from DxO Labs (7 months ago)

Hi Holger,

I know this is frustrating. May I suggest you send a request to our lab via this page:
http://www.dxo.com/intl/photography/dxo-optics-pro/dxo-optics-modules-suggestion-form

It's a way to help us set priorities.

Best,
Olivier

3 upvotes
D600Vince
By D600Vince (6 months ago)

Hi Olivier,
To me, this new algorithm seems much better than DxO's. Can you elaborate on how it compares to DxO's, and/or whether DxO will use it?
Thanks

0 upvotes
Zoran K
By Zoran K (6 months ago)

Kudos to DxO Labs and their camera-lens software modules that automatically apply sophisticated corrections according to the lens and camera sensor used!

1 upvote
Olivier from DxO Labs
By Olivier from DxO Labs (6 months ago)

Hi Vince,

I sent the article to our research dept. If I get feedback that I can share, I will tell you.

Best,
Olivier

0 upvotes
PowerG9atBlackForest
By PowerG9atBlackForest (7 months ago)

My English is not very good, but I would like to say it like this: it won't work because, by the laws of nature, chaos (a blurred point is chaos, literally speaking) once it has been created cannot be completely reconstituted to its former organized origin, due to the lack of single-valued information.

Comment edited 3 times, last edit 5 minutes after posting
3 upvotes
Roland Karlsson
By Roland Karlsson (7 months ago)

Yeah! That is the general idea. A good lens gets you better images than a corrected bad lens. It's the same as: a correctly exposed image gives you a better image than a corrected underexposed image.

But, that does not make the results uninteresting.

8 upvotes
PowerG9atBlackForest
By PowerG9atBlackForest (7 months ago)

Well, the consequence would be: Use the best lens you can get/afford! Nothing new then.

1 upvote
sprinklePony
By sprinklePony (7 months ago)

I'm sorry, but the whole idea is that a blurred point is not chaos, literal or otherwise. It's the result of a systematic function of the signal. These researchers are showing that with the correct algorithms the blurring can be undone. Will it be perfect? Probably not, like you said, but your main premise is incorrect.

9 upvotes
paulkienitz
By paulkienitz (7 months ago)

It's true that the blurring of a bad lens is deterministic, not chaotic. But the photons in the light add noise, which makes it impossible to unravel all the blurring deterministically, because the precision required to fully reconstruct the sharp image would exceed the noise floor.

2 upvotes
LucidStrike
By LucidStrike (7 months ago)

"Chaos theory is a field of study in mathematics, with applications in several disciplines including meteorology, physics, engineering, economics and biology. Chaos theory studies the behavior of dynamical systems that are highly sensitive to initial conditions, an effect which is popularly referred to as the butterfly effect. Small differences in initial conditions (such as those due to rounding errors in numerical computation) yield widely diverging outcomes for such dynamical systems, rendering long-term prediction impossible in general. This happens even though these systems are deterministic, meaning that their future behavior is fully determined by their initial conditions, with no random elements involved. In other words, the deterministic nature of these systems does not make them predictable. This behavior is known as 'deterministic chaos', or simply 'chaos'."

0 upvotes
jsandjs
By jsandjs (6 months ago)

No, it cannot be completely reconstituted, and no one expects it to be. Even in lens design, when you add one element to do its job, the element itself creates other problems.
The whole idea here is that cheaper and much lighter software has a chance to replace an expensive and heavy lens, while both of them will never be perfect.

0 upvotes
completelyrandomstuff
By completelyrandomstuff (7 months ago)

They provide the matlab code on their website.

3 upvotes
completelyrandomstuff
By completelyrandomstuff (7 months ago)

Very cool. These results are not too bad. In general, every lens manufacturer should give the PSF data for deconvoluting the image. It is done routinely in microscopy and can help a great deal with the images.

2 upvotes
jimread
By jimread (7 months ago)

Well I'm blowed!!! The same thing happened c.100 years ago with the development of the Cooke Triplet and the Zeiss Tessar. Prior to that lenses had more elements than you could shake a stick at.

Will it work? Of course it will. Will it be better? The Cooke and Zeiss designs certainly were.

Cheers - Jim

1 upvote
ThePhilips
By ThePhilips (7 months ago)

Pretty cool.

Actually, the more interesting research would have been in the other direction: which optical distortions can be corrected well in software? Some uniform resolution sacrifice is acceptable.

... And after that, the question would be: given the correctable distortions, how can we simplify the lens design, by shifting all the distortions into the correctable range?

Or an even more "other way around" approach: how can we make a tiny sensor perform at the level of larger sensors? For a tiny sensor, one can always develop a potentially near-perfect lens of a manageable size.

1 upvote
Andy Westlake
By Andy Westlake (7 months ago)

This has essentially already been done, and widely exploited. Distortion, lateral chromatic aberration and vignetting are all easily corrected in software, which is what allows modern compact cameras to have huge zoom ranges. Mirrorless systems generally employ these corrections too.

3 upvotes
AbrasiveReducer
By AbrasiveReducer (7 months ago)

With the ability to correct most lens flaws, what stands in the way of a really great small (tiny, even) camera is a much better small sensor. Right now, the situation is surprisingly similar to film; bigger is better, at least where very small sensors are concerned. The only difference is now, we say the small sensor image is too noisy and we used to say small film was too grainy.

0 upvotes
AngryCorgi
By AngryCorgi (7 months ago)

Image (a) is ridiculously awful, while image (b) is just plain terrible. I'd prefer NEITHER please.

8 upvotes
mini23
By mini23 (6 months ago)

Oops. I thought those were 100% crops and not so bad at all - but after your post I realize that they are in fact terrible.

0 upvotes
vFunct
By vFunct (7 months ago)

Deconvolution is a very common technique. It can also be used to improve high-end lenses as well.

0 upvotes
RussellInCincinnati
By RussellInCincinnati (7 months ago)

Long lenses especially favor a single-element or single-group design, because you save money on not having a lot of big elements. You can imagine a reasonably-priced 200mm F/4 for APS-C, for example, perhaps with push-pull focusing.

Downsides of single-element designs include saying good-bye to internal focusing - oh well.

2 upvotes
EinsteinsGhost
By EinsteinsGhost (7 months ago)

While the article speaks of really simple lenses, even regular lenses are already receiving a substantial boost via software (sometimes, corrections being applied even to RAW). Optical designs have already begun to give way to software corrections.

2 upvotes
RussellInCincinnati
By RussellInCincinnati (7 months ago)

Great article; it suggests many avenues of improvement for simple (not necessarily single spherical element) lenses. How about what could be done with a single ASPHERICAL element? Or with a single cemented group of two (possibly aspherical-surface) elements?

Also makes one think about the many aspects of photography that could be IMPROVED by computational enhancement of super-simple lenses, instead of just "focusing" on the limitations...

For example, let's think of how simple and predictable (i.e. so easily correctable) geometric distortion would be as the output of a single-element or better yet single-group lens.

Consider how low-flare/glare-resistant super simple lenses can be, and/or how inexpensive it is to shield or baffle such lenses.

When you've only got one or two lens elements in a single group, heck you can afford to use super expensive glass all of a sudden.

And how easy it would be to mass-produce a "perfectly" CENTERED lens - a challenge with all consumer lenses.

1 upvote
Tom Axford
By Tom Axford (7 months ago)

I think it is undoubtedly indicative of the direction in which the technology is moving.

The past forty years has seen the replacement of much electronics circuitry (e.g. in radio, television, and many other devices) by software, so perhaps we can look forward to similar changes in optics? Of course, it is not straightforward, but it wasn't in electronics either.

Forty years ago, if you asked an electronics engineer whether much of his job would eventually change into that of a software engineer, you would probably have received a flea in your ear!

0 upvotes
yabokkie
By yabokkie (7 months ago)

Kodak is dead, long live Johnson&Johnson!

1 upvote
barnesmr1
By barnesmr1 (7 months ago)

On their website, it looks like several of the samples show significant halo-ing from local contrast enhancement (see the right edge of the clock). This enhances the initial appeal of the image, but sacrifices integrity. IMHO they should have left the contrast out of this enhancement (if that's possible?).

Sounds like a fun project to work on, and could improve detail in cheaper cameras. But as with any computational enhancements, they are guesswork and cannot tell what is "supposed" to be blurry.

1 upvote
Jogger
By Jogger (7 months ago)

the old adage.. garbage in, garbage out .. seems to apply here

1 upvote
jsandjs
By jsandjs (6 months ago)

The key point here is that it is not garbage if one can figure out what is in it.

0 upvotes
Roland Karlsson
By Roland Karlsson (7 months ago)

Interesting.

I wonder how it performs if it is applied to a rather good lens, like an average zoom kit lens?

0 upvotes
Sean Nelson
By Sean Nelson (7 months ago)

I know nothing about how this works, but I suspect that a more complex lens would have a more complex point spread function and that might make reconstructing the image significantly more difficult.

1 upvote
Lan
By Lan (7 months ago)

Unlikely to be applied to any real lenses anytime soon, with the possible exception of the lens-cap lens that joe6pack mentions.

One lens element that's mathematically simple is a few orders of magnitude easier to correct than a collection of many different (often complex) lens elements moving independently.

For the time being, DxO is your best bet!

3 upvotes
boels069
By boels069 (7 months ago)

Canon uses DLO (Digital Lens Optimizer) in DPP.
http://web.canon.jp/imaging/dlo/factor/index.html

0 upvotes
yabokkie
By yabokkie (7 months ago)

A liquid or gummy lens may solve the problem:
don't change the lens, but the curves of the lens.

Comment edited 7 minutes after posting
0 upvotes
ArvoJ
By ArvoJ (7 months ago)

The problem is not PSF complexity. The problem is that the PSF depends on focal length, aperture value, object distance - on every aspect able to change the camera+lens+scene optical properties. You just can't calibrate for all possible cases - and these tables won't even work with another, similar lens, because manufacturing tolerances change everything (think e.g. about slight decentering). Another problem, it seems to me, is that the proposed solution works best with (mostly) monochrome images - in some sense the proposed processing is a mix of simple deconvolution and super-resolution between colour channels.

0 upvotes
andersf
By andersf (7 months ago)

You just need a PSF per aperture and focal length, and I believe it could be done with the same algorithms as in the paper.

0 upvotes
jsandjs
By jsandjs (6 months ago)

No, that is against the idea at the very beginning of their research. That is the reason why they started with a typical simple single piece of glass. An image from an average zoom kit lens, mixed with all kinds of distortion (although improved), may fool the software.

0 upvotes
Kim Letkeman
By Kim Letkeman (7 months ago)

Seems like one of those silly binary arguments ... "pixel peepers won't really like it, so we expect it to be deployed on smartphones" ... HUH?

How about "it is capable of improving any lens, once the lens has been calibrated to the software so we expect it to be widely used and to lower the bar for professional use" ...

That sounds like a conclusion that matches the article and examples ...

1 upvote
Andy Westlake
By Andy Westlake (7 months ago)

There are existing solutions for not-so-sharp complex lenses, such as DxO Optics Pro or Canon's lens optimiser module in Digital Photo Professional. This work is quite specifically about improving the output from very simple lenses.

5 upvotes
Kim Letkeman
By Kim Letkeman (6 months ago)

Take a close look at the center of the sample images ... one looks exactly like a lot of the superzooms that lose contrast and have weird distortions when shot wide open. The other looks much better. Are you saying that software-driven technology that can make that kind of difference has no application to cheap kit lenses and all-in-one zooms?

I realize that the original work was stimulated by its application to crappy one element lenses, but surely it would be pretty excellent if it applied to a wider audience ... unless all that matters any more is the images from our phones :-)

0 upvotes
joe6pack
By joe6pack (7 months ago)

It could be cool if it works for the Olympus M43 Lens-cap Lens.

0 upvotes
gerard boulanger
By gerard boulanger (7 months ago)

Excellent news. After all, we humans have only one lens, and our brain makes all the necessary corrections.

1 upvote
joe6pack
By joe6pack (7 months ago)

Most have 2 lenses. :) That's quite advanced, IMHO.

Comment edited 2 times, last edit 1 minute after posting
5 upvotes
tkbslc
By tkbslc (7 months ago)

Our brain just learns to ignore the optical deficiencies and do the best it can. Not to mention, most of us can't really swap lenses to test the difference. If you could swap eyes with someone else for a little while, I'm sure you'd notice plenty of differences

Also, ask someone who wears glasses for astigmatism if their brain makes all the necessary corrections for them.

8 upvotes
agentul
By agentul (7 months ago)

The brain can only correct so much. When it can't anymore, you wear glasses, astigmatism or not.

3 upvotes
Plastek
By Plastek (7 months ago)

Also, the brain combines thousands of images to get a detailed one - something known from astrophotography, though the brain does it in a much more sophisticated way. You will never know what an image captured by the eye in 1/60 of a second looks like in detail.
And here we come to a second problem - there's no way to analyze the image from the eye as recorded by the brain in the same terms as we analyze images from cameras. You might find approximate angular resolution and light sensitivity, but that's pretty much all.

2 upvotes