
Can computer corrections make simple lenses look good?

By dpreview staff on Sep 30, 2013 at 17:51 GMT

Modern lenses tend to be large and expensive, with multiple glass elements combining to minimise optical aberrations. But what if we could just use a cheap single-element lens, and remove those aberrations computationally instead? This is the question scientists at the University of British Columbia and the University of Siegen are asking, and they've come up with a way of improving images from a simple single-element lens that gives pretty impressive results.

Image scientists are looking at whether a complex lens can be replaced by a simple one, along with lots of computation.

The method is described in detail in the researchers' paper. It works by understanding the lens's 'point spread function' - the way point light sources are blurred by the optics - and how this changes across the frame. Knowing this, in principle it's possible to analyse an image from a simple lens and reconstruct how it should look, through a computational process known as 'deconvolution'. 
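For the technically minded, here's a minimal sketch of the underlying idea in Python/NumPy. It is not the researchers' algorithm, just the textbook Wiener-style version of deconvolution: blur an image with a known PSF, then undo it with a damped division in the Fourier domain (the noise-to-signal constant is an assumed value, not one from the paper).

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.random((128, 128))      # stand-in for a sharp greyscale image

# A small Gaussian PSF, stored in a full-size kernel centred on the origin
# so that multiplication in the Fourier domain is a circular convolution.
x = np.arange(128.0)
x[x > 64] -= 128                  # wrap coordinates around the origin
g = np.exp(-x**2 / (2 * 2.0**2))
psf = np.outer(g, g)
psf /= psf.sum()

H = np.fft.fft2(psf)              # the lens's transfer function
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * H))

# Wiener-style deconvolution: divide by H, damping the division where |H|
# is small so that noise and rounding error aren't amplified without bound.
nsr = 1e-3                        # assumed noise-to-signal ratio
wiener = np.conj(H) / (np.abs(H)**2 + nsr)
restored = np.real(np.fft.ifft2(np.fft.fft2(blurred) * wiener))
```

The real method additionally lets the PSF vary across the frame, tile by tile, as the caption below describes.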

The Point Spread Function diagram for a simple f/4.5 lens of the plano-convex type (i.e. one side curved, the other flat). The centre shows broad discs due to chromatic aberration, while the cross-shapes towards the corners are due to coma and astigmatism. The researchers split the image up into 'tiles', each with its own PSF. 

This isn't a new idea, but the team of researchers claims to have made some key advances in the field, making their method more robust than those previously suggested. For example, chromatic aberration means that simple lenses can give detailed information in one colour channel alongside significant blur in the others, so they've decided to use cross-channel information to reconstruct the finest detail possible.
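As a toy illustration of that cross-channel idea, consider the sketch below. It is a sketch only: the per-channel blur widths, the choice of green as the detail reference and the unit transfer weight are our assumptions, not values from the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Mimic a single element's chromatic aberration: green is blurred least,
# red and blue considerably more.
rng = np.random.default_rng(1)
sharp = rng.random((64, 64))
rgb = np.stack([gaussian_filter(sharp, s) for s in (3.0, 1.0, 3.0)], axis=-1)

# Cross-channel transfer: take the sharpest channel's high frequencies
# and add them to the blurrier channels.
detail = rgb[..., 1] - gaussian_filter(rgb[..., 1], 2.0)
restored = rgb.copy()
restored[..., 0] += detail   # hypothetical unit weight
restored[..., 2] += detail
```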

One serious problem with deconvolution approaches is that they often struggle to reach a single 'best' solution. The group claims to have solved this by optimising each colour channel in turn, rather than trying to deal with them all simultaneously. 

This is all very clever, of course, but does it work? The group shows several before and after examples on its website, shot using a simple f/4.5 plano-convex lens on a Canon EOS 40D, and the results are quite impressive.

Original version
Image de-blurred using deconvolution

So will this be coming to a camera near you anytime soon? In this precise form, probably not - the system still has problems understanding areas of the image which are slightly out of focus, and won't work with large-aperture lenses. And while the images are certainly improved, they're unlikely to satisfy committed pixel peepers. In fact we'd guess it's most likely to be useful in smartphones, where the mechanical simplicity and robustness of simple lenses should be appealing. However, it certainly offers an interesting glimpse of the way results could be improved when shooting with a 'soft' lens.

Comments

Total comments: 162
harold1968
By harold1968 (6 months ago)

This is effectively "inventing" detail where you have some idea of the reasons why that detail was not recorded properly in the first place.

IMHO this is good for snaps but useless for serious photography, where the original detail, as much of it as possible, is what you need and indeed what you took the picture for.

That said, this could lead to some more advanced techniques for improving a photo during PP.

3 upvotes
2001
By 2001 (6 months ago)

This is why I was charmed by, but ultimately skeptical of, the Lytro camera. Perhaps I will be proven wrong, but it seems as though you can't create information that isn't there; you may be able to alter information that is there a great deal, but at the cost of diminishing returns.

0 upvotes
chaos215bar2
By chaos215bar2 (6 months ago)

That's not necessarily an accurate description. Yes, it's possible to irrecoverably destroy information in a photo. A perfect Gaussian blur or aliasing are examples of this. However, a lot of the perceived loss of detail here doesn't necessarily destroy it, but just obscures it. I don't really see a problem with a mathematical reconstruction of details, as long as it isn't recreating information that never actually existed in the original photo.

2 upvotes
Neodp
By Neodp (6 months ago)

OK, it is high time we give the "pixel peeper" declaration a rest. I mean generally, everyone.

Why? Because yes, at some point it is indeed good enough. However, we are nowhere near there now. Yes, even with better high ISO than can be seen in film. Film, with its pros and its cons, should not be the ultimate goal for pros. We have already surpassed SOME things about film. Not all. Yet digital should stand on its own progression standard.

Really why: because this "terrible", so-called "pixel peeping" is really about better color/tonal sensitivity, and not just the important lower noise at your higher ISOs! It's shadow IQ, even at your base ISO.

Every single camera you buy that does not have this good sensitivity is doomed to be completely obsoleted. They are disposables. That does affect your total expenses over time. Think about it.

3 upvotes
Neodp
By Neodp (6 months ago)

Also, a bigger overall sensor *does* typically, and understandably, cost more, but it need not cost so many times more, like we have now. The excess is a lie, simply promoting excessive profits (excessive prices), due only to the fact that tiny sensors make manufacturers more profit for less work.

The manufacturers are simply waiting for you to notice. Your voice is your wallet.

Lastly, sure, you can fudge some of the time with processing, but it's never better than quality coming right out of the camera, plus spending zero further time processing it for corrections. You could better spend that time applying the enhancing "treatment" looks you prefer.

2 upvotes
Neodp
By Neodp (6 months ago)

Note: I shoot Raw, and USUALLY just pull the embedded JPEG out. This depends on the final goals.

P.S. Get it Gimp pimped! You've got ufraw (16-bit, optionally) going in, and many, many easy plug-ins available for Gimp for you to try. Why wouldn't you? Gimp does it all, with free upgrades, over time, again.

Try "Inverse Diffusion", in the GMIC plug-in set. I dare you. ;)

1 upvote
67gtonr
By 67gtonr (6 months ago)

This looks like it could be very important to the security camera industry.

2 upvotes
Spectro
By Spectro (6 months ago)

I think Adobe Photoshop had a similar concept, taking a blurred image and sharpening it. It was a hoax where they just reversed what they had done to an original sharp image.

In this case, creating a correction algorithm based on an element whose behaviour is well defined is cool. I'm not sure what happens if something is so blurred out that a wrong interpretation results in incorrect image processing. This is like an extreme case of lens correction. Since consumers want cheaper products, this might be better suited to something like a bridge camera or P&S.

0 upvotes
Lensjoy
By Lensjoy (6 months ago)

The idea is interesting, but the execution needs more thought. Without knowing anything about the lens's optical characteristics, I can go into Photoshop and apply these two operations to the original version above:
1. Filter/Sharpen/Smart Sharpen Lens Blur 79%, 5.7 pixels (more accurate checked)

2. Filter/Sharpen/Unsharp Mask 47%, 5.7 pixels

I haven't corrected it for chromatic aberration, but examining the green channel alone, it's clear that the corrections above produce a better result, with much better separation of tonality in the details and more fine detail. If I had a chromatic aberration tool I am confident I could outdo their effort. Perhaps someone here can apply that with these steps and post the final result.

0 upvotes
Superka
By Superka (6 months ago)

They made it look worse with their correction.

2 upvotes
rfsIII
By rfsIII (6 months ago)

Science marches on. The question is whether we will fall in step and reap the benefits of progress or fall by the wayside, mired in 20th century superstition and anxiety about the limitless power of optical and computer engineering.

1 upvote
yabokkie
By yabokkie (6 months ago)

a golden rule of thumb for more than a century:

whatever can be done in post, do it in post.

3 upvotes
pulsar123
By pulsar123 (6 months ago)

There is always a catch. With deconvolution, one of the catches is an increase in the noise level (because noise skews the deconvolution, which assumes a perfect, noise-free signal).
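To put rough numbers on that catch, here's a quick NumPy sketch (the blur width and noise level are arbitrary): a naive inverse filter multiplies each frequency by 1/|H|, which becomes astronomical wherever the blur crushed the signal, so the noise sitting at those frequencies swamps the restored image.

```python
import numpy as np

n = 256
rng = np.random.default_rng(0)
signal = np.sin(2 * np.pi * 4 * np.arange(n) / n)

# Transfer function H of a Gaussian blur, normalised to unit gain at DC
x = np.arange(float(n))
x[x > n // 2] -= n
H = np.fft.fft(np.exp(-x**2 / (2 * 3.0**2)))
H /= H[0].real

blurred = np.real(np.fft.ifft(np.fft.fft(signal) * H))
noisy = blurred + 0.01 * rng.standard_normal(n)

# Naive inverse filter: noise at frequencies where |H| is tiny is boosted
# by the same enormous factor as the (long-gone) signal.
restored = np.real(np.fft.ifft(np.fft.fft(noisy) / H))
print(np.max(1 / np.abs(H)))   # the worst-case amplification factor
```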

8 upvotes
Sdaniella
By Sdaniella (6 months ago)

Your claim relies entirely on the assumption that effective noise subtraction is impossible, which we already know today is no longer the case.

3 upvotes
RobertMartinu
By RobertMartinu (6 months ago)

You can only subtract noise that arises in the camera. Noise already present in the signal being sampled can't be suppressed, because it follows no rules; only longer exposure helps here.

3 upvotes
Entropius
By Entropius (6 months ago)

Noise reduction *is* an intractable problem. There is a thing called quantum shot noise which is always going to be present and which (in many situations) is the dominant contributor to image noise -- especially for "good" sensors like the modern Nikon and Sony (used by Olympus too) ones. If you're shooting something like animal fur or bird feathers -- fine low-contrast detail -- no amount of clever wavelet transforms is going to let you figure out what's signal and what's noise.

Deconvolution makes noise worse by a huge amount -- essentially, all it is is heavily customized sharpening.

5 upvotes
rf-design
By rf-design (6 months ago)

There is a simple basic limitation on the factor of improvement!

The deconvolution is perfect only if:

1. The deconvolution matrix is defined for an unlimited number of monochromatic light sources.

2. The basic resolution of the uncorrected lens is not near the diffraction limit, where wave effects with interference take place. So only pure monochromatic geometrical aberrations can be corrected.

Regarding 1: because of the limited number of color-sensitive detectors I expect an improvement factor no better than the number of color channels. So using a single-element lens instead of a 20-element lens is an illusion.

Regarding 2: the highest-performing lenses are Gauss-based, with a fixed aperture around 4.0, and are diffraction limited. These are special enlarger lenses for 1:10, but it is possible to calculate them for infinity. Such lenses would profit only to a small extent from processing.

More interesting today are algorithms which improve existing lenses without knowing the exact type or sample.

1 upvote
kadardr
By kadardr (6 months ago)

This article demonstrates the importance of the development of sensor, lens and software together to achieve sublime results.

A nice practical realization of this is, for example, the two RX-1s.

1 upvote
le_alain
By le_alain (6 months ago)

Who shoots with such big lenses now?
Journalism, action and fashion are done with simple-lens smartphones

;)

4 upvotes
tabloid
By tabloid (6 months ago)

Better get my Brownie 127 out.

1 upvote
ZAnton
By ZAnton (6 months ago)

I assume there are too many variables to calculate a good result.
For example, LoCA is distance-dependent, so unless we know the distance to ALL objects in the photo, we can't calculate back the initial image.
Similarly with a non-flat focus plane (field curvature): if the object is blurred by that, one must know the distance to the object for the reverse calculation of the "ideal" image.

0 upvotes
zonoskar
By zonoskar (6 months ago)

Distance information is known for the focus point, so maybe they can use that. For the OOF parts of the picture, I wouldn't mind a non-corrected image, or only CA corrected.

0 upvotes
ZAnton
By ZAnton (6 months ago)

Focus point - yes, the rest - no.

1 upvote
chaos215bar2
By chaos215bar2 (6 months ago)

You don't even have accurate distance information within the focus "point", since it's actually a region of the image, some of which is likely to be out of focus.

0 upvotes
HelloToe
By HelloToe (6 months ago)

For comparison, here's the same shot with a simple unsharp mask filter applied to it: http://i.imgur.com/8IrZrnP.jpg

From a detail and sharpness perspective, I'd say the USM wins handily, with a much more natural-looking result.

*BUT*

What the deconvolved image gets you is chromatic aberration correction. A lot of the twigs in the image have pretty severe red or blue bands at the edges. The USM doesn't correct that at all, but the deconvo does a pretty good job of fixing it up.

2 upvotes
zorgon
By zorgon (6 months ago)

Really? To my eyes the test image looks way better than your USM image in every department.
And FYI, unsharp mask is a very simple type of deconvolution.
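That last point is easy to make concrete: if H is the transfer function of a mild blur, then the true inverse 1/H is approximated to first order by 2 - H, and applying 2 - H in the spatial domain is exactly an unsharp mask with amount 1. A sketch, assuming a Gaussian blur:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

img = np.random.default_rng(3).random((64, 64))
blurred = gaussian_filter(img, 1.0)

# (2 - H) in the frequency domain equals image + (image - blur(image))
# in the spatial domain: a one-term Taylor expansion of the inverse 1/H.
usm = blurred + (blurred - gaussian_filter(blurred, 1.0))
```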

0 upvotes
HelloToe
By HelloToe (6 months ago)

Perfectly razor-sharp edges in an otherwise fuzzy picture do not make for a natural-looking image.

0 upvotes
flowty
By flowty (6 months ago)

Nothing new here: Andrey Filipov published lengthy documentation and code about channel-specific deconvolution (inverted PSF) in 2010 already:
http://blog.elphel.com/2010/11/zoom-in-now-enhance/

To all of you thinking that it's just some "sharpening" process, it's not really the same:
1) with a dedicated calibration pattern, you measure a lens's MTFs/PSF (spatially, for every RGB channel); this basically gives you the mathematical formula that describes the spatial chromatic aberrations for the three RGB wavelengths
2) you then know how to inverse the chromatic aberration process via HEAVY computation

Basically, it's completely lens-dependent, so yes, probably similar to DxO's CA correction methods.

0 upvotes
Michael Ma
By Michael Ma (6 months ago)

It is already being done, somewhat, on the MFT system. Open a file in ACR and it has already corrected the distortion and aberrations, but not the vignetting, which is unfortunate. DxO corrects all 3, but I much prefer to work in ACR.

0 upvotes
ZAnton
By ZAnton (6 months ago)

LR4 does correct vignetting, though.

0 upvotes
Entropius
By Entropius (6 months ago)

Distortion and longitudinal chromatic aberration are two aberrations that are pretty easy to fix without much degradation.

On MFT, are there any lenses that vignette that much?

0 upvotes
deleted-13120401
By deleted-13120401 (6 months ago)

Yes

0 upvotes
Jefftan
By Jefftan (6 months ago)

Hi DxO Labs,
may I ask how many copies of a lens you use to calibrate each lens? There is lots of variation between different copies of the same lens.
Also, can I use other RAW converters with no sharpening and just use DxO for the lens-specific sharpening?
Thanks

3 upvotes
Mario G
By Mario G (6 months ago)

Since there can be wide variations between different copies, it would be even better to have a custom calibration based on the specific copy of the lens that you have... I'm not sure how complex this calibration is; might there be a way to let end users do it themselves?

0 upvotes
Franka T.L.
By Franka T.L. (6 months ago)

I recall this having been done quite a number of times, based on a similar concept of mathematically calibrating and correcting the image to get pseudo-lens imaging out of simple lenses (not always just a single element). But as ever, there are multiple environmental limitations and real-world factors that cannot simply be factored in, so it will always be limited in plenty of ways. I can see this applied to the likes of smartphones, though, given the nature of the sensor size vs. the lens size / focal length.

It would be more useful in medical and industrial applications, though. Say, high-power X-ray imaging.

1 upvote
Karroly
By Karroly (6 months ago)

As far as I understand, it is the soft(ware) against soft(ness) war...

1 upvote
Franka T.L.
By Franka T.L. (6 months ago)

I wager it's more than just that. I checked both of those images enlarged; the deblurred image might look fine when you view it whole-screen, and thus downsized a lot, but when you check how well it actually images, it shows up pretty badly, with loads of artifacts and loss of definition / resolving power.

Ultimately one cannot just get something out of thin air. The image details must first be captured before they can be delivered. That won't change no matter how good the software gets.

0 upvotes
eddie_cam
By eddie_cam (6 months ago)

... and even if they manage to get the best out of almost nothing: manufacturers might find ways to make us pay top dollar for this tech, e.g. CSC lenses. ;-)

0 upvotes
forsakenbliss
By forsakenbliss (6 months ago)

Our eyes don't work so perfectly either... how our brain corrects the image collected on the light sensor is something we can learn from and apply.

This is a good step forward.

3 upvotes
Karroly
By Karroly (6 months ago)

Some researchers try to use computers to make good optics while others try to use optics to make fast computers...

3 upvotes
Michael Berg
By Michael Berg (6 months ago)

And the two groups must never occupy the same room at the same time, or the universe will collapse in on itself.

2 upvotes
Karroly
By Karroly (6 months ago)

Or they will run after each other in an endless loop!
A kind of perpetual motion!

0 upvotes
danijel973
By danijel973 (6 months ago)

This is not really impressive as I duplicated this result with a simple "sharpen" command in Gimp. Also, you can't get more information than you put in, meaning that you can't create detail from blur. You can clarify detail that's already there, but I would always prefer to do it optically to the maximum possible extent, and only then use software to try to go even further. Intentionally designing bad lenses and relying on software to make them mediocre is not a good idea.

2 upvotes
stevens37y
By stevens37y (6 months ago)

deconvolution <> sharpen

6 upvotes
Niko Vita
By Niko Vita (6 months ago)

"Also, you can't get more information than you put in, meaning that you can't create detail from blur."
Google 'adaptive optics' and you will understand why this statement is wrong. The additional info comes from knowing how your lens behaves.

6 upvotes
RichRMA
By RichRMA (6 months ago)

Unlikely. You wouldn't have been able to sharpen it that much without noticeable artifacts.

1 upvote
Karroly
By Karroly (6 months ago)

I am afraid you did not actually duplicate the result, as a "simple sharpen command" does not correct the chromatic aberration...

1 upvote
hjulenissen
By hjulenissen (6 months ago)

>>Also, you can't get more information than you put in,
right
>>meaning that you can't create detail from blur.
wrong

If you encrypt your harddrive, the bits will look like a blurry mess. Given the right algorithm and key, you can have all of the information back, though.

4 upvotes
Dave Oddie
By Dave Oddie (6 months ago)

"Also, you can't get more information than you put in, meaning that you can't create detail from blur."

That isn't what is going on here. They are using software to correct the lens's various aberrations so it doesn't record a blurred image in the first place.

1 upvote
Roland Karlsson
By Roland Karlsson (6 months ago)

@hjulenissen. No - the encryption is lossless. The convolution and deconvolution are lossy.

@Dave. No - it is not about correcting aberration, it is about deconvolution. And the image is blurry.

0 upvotes
graybalanced
By graybalanced (6 months ago)

The fatal flaw in your sharpening "duplication" of the results is that you only applied a uniform amount of sharpening to all pixels. That's not where this technology is going. Your sharpening cannot correct chromatic aberration or intentionally compensate for compromises made in the design of the lens to save money (as is done in several cameras' firmware already). Applying a blind uniform sharpening value is to miss the point.

Sorry, but in articles of this type, a reply posting "I got the same thing by sharpening in GIMP" usually discredits the post right away.

0 upvotes
zorgon
By zorgon (6 months ago)

You need to read up on the theory here, as it's pretty in-depth and complicated. Even though the original image is blurred, ALL of the information is still contained within the image. Recovering this information is the problem. The best we can do is make approximations. Generally speaking, the better the approximation, the more computational power is required.
You can think of the sharpening filter as a very crude approximation.

0 upvotes
Karroly
By Karroly (6 months ago)

A smartphone taking good pictures is certainly interesting for many users, but long battery life is much more important IMHO. Using (huge?) computational power to correct lens defects implies a trade-off between lens simplicity/robustness and battery life...

0 upvotes
SimenO1
By SimenO1 (6 months ago)

You probably don't need that sharpness on the phone screen. Deconvolution could be done when the pictures are imported to a PC, or they could be sent back and forth to a server that does the intensive job.

0 upvotes
Michael Long
By Michael Long (6 months ago)

Today's "huge" computational power is tomorrow's 64-bit multiple core accelerated processor.

One also has to consider that for the few (or even dozens) of pictures people take with their phones, a few extra milliseconds of processing power per images is insignificant. You'll "waste" far more power firing up the radios and sending a couple of selected images up to Facebook.

0 upvotes
ProfHankD
By ProfHankD (6 months ago)

I've been studying PSFs for several years now. The biggest problem with deconvolution is that the PSFs are not really convolved in the first place -- especially for out-of-focus regions of the image. Still, there's lots one can do with better computational methods; I use genetic algorithms for this sort of thing.

0 upvotes
hjulenissen
By hjulenissen (6 months ago)

What do you mean by the PSFs "not really convolved"? Does it mean that the idealized model of a (slowly varying) linear convolution does not describe the errors contributed by the lens? If not, what kind of physical process is it?

If you had access to highly detailed info about the lens (e.g. sweep monochromatic light from 400-800nm on a target print of impulses (or wavelets) distributed across the frame and sweep this target from close focus limit towards infinity), how much better could things be? Is it fundamentally a problem of gathering enough data, or is it about finding the right algorithms to apply?

0 upvotes
ProfHankD
By ProfHankD (6 months ago)

Fundamentally it is that deconvolution, although computationally cheap, isn't quite the right algorithm.

Three major issues. (1) Rays coming in through different portions of the lens are actually different viewpoints; I use this for single-lens stereo capture, but it implies that out-of-focus PSFs are subject to occlusion (see Figure 5 in http://aggregate.org/DIT/SPIEEI2012/spieei2012paper.pdf ). (2) Standard frequency-domain deconvolution algorithms cannot handle arbitrary PSFs. (3) Modeling as convolution essentially assumes positive summation, but the wave nature of light means summation is signed (and negative results clipped) for small PSFs.

My approach has been largely attempting to directly search for the object distance and RGB energy in each pixel's view, attempting to match the actual image when a more accurate model of image construction is applied. So far, it is still scary expensive computationally....

1 upvote
hjulenissen
By hjulenissen (6 months ago)

If the lens designers know that a given lens correction is available, they might be able to "tailor-make" a PSF that is easy to correct (no deep zeros; Gaussian-like?), rather than a PSF that is as small as possible.

Perhaps that would allow better system performance for a given cost/size/weight?

1 upvote
ProfHankD
By ProfHankD (6 months ago)

Easy-to-recognize PSFs are commonly used in research -- they usually get called "coded apertures" and are very non-smooth patterns. Gaussian PSFs are not easy to make and, although deconvolution would be easy with them, they would have huge problems with noise. Incidentally, I've been using anaglyph-like color-coding of the aperture in my research over the past few years....

0 upvotes
Zamac
By Zamac (6 months ago)

Based on the particle theory of light, one could reverse the distortion. However, this is not the whole story. Lens imperfections could greatly increase the processing required. Also, the wave nature of light will result in interference effects that are less easy to remove, as there is randomness involved.
Still, with the large number of pixels on even very small sensors, much can be done using statistical analysis in addition to structural decomposition. Where the target resolution is much lower than the capture resolution - as in phone cameras - one can trade resolution for sharpness and the result will be very good, especially if multiple exposures at millisecond intervals can be used to remove motion blur.
For more serious photography I would certainly favour less glass, but the improvements will probably come from curved sensors matched to the lens for primes (a la Ricoh) and, possibly, flexible lenses for zooms.
I am sure we will see even more software correction in future.

2 upvotes
Entropius
By Entropius (6 months ago)

Isn't this just deconvolution, and doesn't this have the drawback that it greatly increases image noise?

1 upvote
Raist3d
By Raist3d (6 months ago)

Also, isn't what Fuji is doing with the X20 and X100S a step in this direction?

http://www.fujifilm.com/products/digital_cameras/x/fujifilm_x100s/features/page_02.html

(Look at their vague LMO description.)

0 upvotes
Raist3d
By Raist3d (6 months ago)

Impressive indeed, though it still seems a good lens will definitely outclass their result. But in turn their methodology could be used to make good lenses even better.

There's only so much you can do per color channel that has "bad" information (vs. the data from a good lens). Still impressive.

0 upvotes
locke42
By locke42 (6 months ago)

I think this will be more useful to the point-and-shoot crowd than to DSLR/MILC owners. With P&S cameras, compactness is a necessity, and since their lenses are small anyway (which means huge DOF), the algorithm won't have any problems dealing with out-of-focus elements.

0 upvotes
TN Args
By TN Args (6 months ago)

A pity they didn't include a third image in the comparison: the same scene shot with an excellent complex lens.

4 upvotes
mjolnirq
By mjolnirq (6 months ago)

What you ask for is available in the original article (follow the link provided above, look at, e.g., Fig 16). You can find more details and the supplementary material (including the hi-res images) here http://www.cs.ubc.ca/labs/imager/tr/2013/SimpleLensImaging/

1 upvote
Kirppu
By Kirppu (6 months ago)

So it can magically guess the texture patterns that objects have even if they originally were just a blur... I would like to see that happen. I bet it would have some funny end results. :)

And didn't Adobe already do this deblurring thingy?

1 upvote
hjulenissen
By hjulenissen (6 months ago)

The key is that the "just a blur" thingy can be (more or less) accurately described as a function of the original, sharp image. Find that function, find a suitable inverse, and you can remove some blur.

2 upvotes
Cartwheels MD
By Cartwheels MD (6 months ago)

The edges are sharp and the aberration is gone, but it looks like it lost some detail in the actual subject.

0 upvotes
domina
By domina (6 months ago)

I'm a software engineer who's into computer science, and I never trust software, computers or programmers. Windows, Chrome and all your software crash quite often; think about it. The last thing I want is bugs in my lens correction.

1 upvote
Max Fun
By Max Fun (6 months ago)

I imagine that we're talking about an algorithm rather than the program that runs the algorithm.

2 upvotes
hc44
By hc44 (6 months ago)

From a self-described software engineer this is quite a naive comment. If you can't trust software in a camera, I suppose you never indulge in air travel?

6 upvotes
groucher
By groucher (6 months ago)

He's not being entirely naive. Aircraft software (and financial software such as BACS) is written using extremely rigorous processes, as lives (or large amounts of money) are at stake. This high-integrity software runs under OSs written to similarly rigorous standards - I certainly wouldn't get on an aircraft running Windows or Apple OSs, but of course such a system wouldn't stand any chance of being accepted by any aviation authority in the first place. Leaving aside their high failure rates, their complexity makes these OSs unqualifiable.

Having said that, camera software is written to rigorous standards. Imagine the cost to a manufacturer if they put out a firmware update that locked up your camera such that it couldn't be subsequently corrected without a return to the manufacturer.

1 upvote
Marvol
By Marvol (6 months ago)

I gather you still use film then?

2 upvotes
yabokkie
By yabokkie (6 months ago)

Nikon have been making so called "CPU lenses" since 1977.

0 upvotes
peevee1
By peevee1 (6 months ago)

If it worked on an image where every pixel had all color channels plus depth information (to distinguish aberrations at the focal plane from those away from it), digital corrections should give fantastic results, given enough computational power.

0 upvotes
Richard Murdey
By Richard Murdey (6 months ago)

The computers in our cameras already make corrections. The discussion is only about what corrections you want to hand over to the image processing, and to what degree.

With everyone clamoring for wider aperture optics (and make it cheap!), there is intense pressure on manufacturers to increase the software corrections ...

With the massive pixel densities available in modern sensors, and the heavy noise-reduction processing that is already going on even at low ISO, lens corrections can be done pretty much for free, with little additional loss in image quality.

The only downside is any "character" of the lens is airbrushed out, but that's only something traditionalists like myself need to worry about.

1 upvote
Bart Hickman
By Bart Hickman (6 months ago)

A blurry lens causes irretrievable damage to the information that was gathered. This loss manifests itself as a bunch of noise in the corrected image. All this software does is let me trade off between sharpness and noise--but the overall SNR is unaffected. The lens in this article looks like it causes at least a couple stops of damage to the image judging by how noisy the corrected image is. There's no free lunch.

I can see cutting corners on CA or geometric distortion since fixing those doesn't really change the noise levels. But sharpness is not something to cut corners on IMO.

You might as well boost ISO and stop the lens down--same diff.

5 upvotes
vadims
By vadims (6 months ago)

> A blurry lens causes irretrievable damage to the information

There is absolutely nothing "irretrievable" about that damage as long as (a) pixels are not saturated, (b) dynamic range of the sensor is sufficient, and (c) there is a precise enough mathematical model of how exactly the light that entered the lens was distributed.

A hologram is, in a way, an ultimate "blur" and yet it can easily be used to reconstruct "3D" image using purely optical means... Throw in enough math and number crunching power, and almost any "blur" that is a superposition of the source light can be reconstructed, let alone a very simple one produced by a single lens.

8 upvotes
ashyu
By ashyu (6 months ago)

Mathematically, if you know the exact transfer function that caused the blur and you apply the inverse transform, you will arrive back at the original image (i.e., deconvolution, which is nothing new nor magical).

That is the reason they are seeking to understand the lens's point spread. Understanding a function's impulse response tells you a lot about how the function behaves.

Blur doesn't mean that information is lost - it only means that information is spread-out. In a purely academic example, if you apply a simple gaussian or spatial-averaging blur to an image, you can get the original image back from the blurry image just by applying the blur function's inverse.

Of course, a lens' blur characteristics are more complex than the simple academic examples, but it doesn't mean the information is irrecoverably lost.
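That purely academic example is easy to verify numerically; here's a sketch with an assumed sigma = 1 circular Gaussian blur and no noise at all:

```python
import numpy as np

rng = np.random.default_rng(2)
img = rng.random((64, 64))

x = np.arange(64.0)
x[x > 32] -= 64                   # centre the kernel on the origin
g = np.exp(-x**2 / (2 * 1.0**2))
psf = np.outer(g, g)
psf /= psf.sum()

# A Gaussian's transfer function is strictly positive, so with no noise a
# plain division recovers the image down to floating-point error.
H = np.fft.fft2(psf)
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * H))
recovered = np.real(np.fft.ifft2(np.fft.fft2(blurred) / H))
print(np.max(np.abs(recovered - img)))   # effectively zero
```

Add even a little sensor noise or quantisation, though, and the same division blows up, which is the point made in the replies below.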

6 upvotes
Bart Hickman
By Bart Hickman (6 months ago)

You only know the transfer function of the lens itself. But the sensor noise gets added after the lens, so it gets boosted by the deconvolution. In other words, the lens damages the dynamic range (or SNR) of the image irretrievably. Obviously if you have large image details or details with large contrast (black-to-white transitions), then the deconvolution makes them more visible (along with the noise). But finer details or lower-contrast details (e.g., textures) are blurred below the noise floor, and the deconvolution does nothing to recover them.

Vadims, if item (b) is true (which it clearly is not for the example in the article), then I'd argue you might as well stop the lens down and achieve optical sharpness in the first place. The result will be about the same and you avoid power-hungry post-processing (and the cheap lens can probably be even cheaper without the larger aperture setting).

3 upvotes
xeriwthe
By xeriwthe (6 months ago)

I wonder if they can eventually develop this so that a moderately simple lens could look awesome: the right design, optimized for SW correction in post, however that may be achieved, with just simple spherical elements or something.

Armchair lens design.

1 upvote
123Mike
By 123Mike (6 months ago)

I think the example is fake, because there are details in the "improved" version that do not exist in the "original".

0 upvotes
vadims
By vadims (6 months ago)

> there details in the "improved" version that do not exist in the "original"

Even though the "details" do not exist in the original version, the *information* does. What is needed to reconstruct the image is that information from the "original" image, plus information of how exactly the lens distorted the light. The latter can be thought of as a very, very extensive and detailed version of what we know today as "lens profile".

BTW, I know why you're puzzled. :-) As Arthur Clarke said, "any sufficiently advanced technology is indistinguishable from magic".

5 upvotes
123Mike
By 123Mike (6 months ago)

I disagree. I think the information is simply not there. Not in this highly exaggerated example.
Oh, and trust me, I'm not puzzled, and I don't believe in magic.

0 upvotes
hjulenissen
By hjulenissen (6 months ago)

Visual inspection is not sufficient to determine that such examples are fake.

3 upvotes
wansai
By wansai (6 months ago)

Modern digital images have more data than you can visually see. You are not looking at the actual data but the interpretation of that data.

Take a raw file and run it through different raw converters to see what I mean. You get different results. A sufficiently good sensor will typically capture more than you can see.

Have you ever shown an amateur how to recover highlight and shadow info? In the case of good sensors, say a Sony APS-C or FF, what visually appears white or black actually has a ton of detail that can be recovered. You just can't see it until you manually pull out those details in software. The data is there.

As someone above said, for example, if you do a Gaussian blur, you haven't destroyed that data. All you've done is rearrange it. It's possible to mathematically reverse it completely, provided you haven't saved it out in a lossy format.

1 upvote
new boyz
By new boyz (6 months ago)

"Modern digital images have more data than you can visually see. You are not looking at the actual data but the interpretation of that data."

True. GIMP's c2g is one of my favorite tools for revealing the unseen detail in a picture.

0 upvotes
CaseyComo
By CaseyComo (6 months ago)

The corrected one looks better, but it's pretty artificial looking when viewed up close.

0 upvotes
thx1138
By thx1138 (7 months ago)

Having worked a bit on lens correction, the trouble you have with the PSF is the fact that it oscillates and goes through multiple zero crossings. For those that are interested, the PSF is essentially the response of the lens to a point impulse. An ideal lens would have a PSF given by the Airy disk (the diffraction-limited case). That is, even a lens free of all aberration will image a point impulse as an Airy disk.

If you've seen the PSF caused by many aberrations you will know they can be incredibly complex. In order to undo the aberration, you essentially have to divide by the FT of the PSF, and this leads to infinities. This is what causes most of the problems in deconvolution. Unless you have specially designed apertures, such as in coded aperture lenses for computational photography where you can tailor the PSF, this is an almost impossible issue to avoid as far as I know. You have to make simplifying approximations, which means you can never fully undo the aberration.
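A one-dimensional illustration of those zero crossings: a box-like defocus PSF has a sinc-shaped transfer function that genuinely hits zero, so the naive division produces infinities unless it is damped. The damping constant below is an arbitrary assumption.

```python
import numpy as np

n = 243                      # a multiple of 9, so zeros land exactly on FFT bins
psf = np.zeros(n)
psf[:9] = 1.0 / 9.0          # 9-sample box blur: a crude defocus stand-in
H = np.fft.fft(psf)
print(np.min(np.abs(H)))     # ~0: frequencies the blur wiped out entirely

# Tikhonov-style damped inverse: finite everywhere, at the cost of never
# fully restoring the frequencies the blur killed.
eps = 1e-2
H_inv = np.conj(H) / (np.abs(H)**2 + eps)
```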

5 upvotes
padang
By padang (6 months ago)

I understand the divide-by-zero problem... but when the correction is done by a physical lens system there is no such issue :-D Is it that we do not have a good model of the PSF, or is more information available to the physical system (e.g. distance to the object) than to the software solution?

0 upvotes
ArvoJ
By ArvoJ (6 months ago)

Deconvolution (esp. in the digital realm) has always had a divide-by-zero (or divide-by-almost-zero) problem. Actually, even dividing by almost zero creates big errors (= huge noise and ringing artefacts); most research goes exactly into finding better algorithms to avoid these artefacts. In the current case the authors rely partially on CA: different color planes have different PSFs, and where division by zero occurs in one plane, a decent result can be obtained from another. Or that is how I understood their main idea; I may be mistaken, of course.
A physically corrected lens does not perform deconvolution, so no divide-by-zero problem arises.

0 upvotes
chj
By chj (7 months ago)

The corrected version looks a hell of a lot better to me. Yeah, there's no substitute for a better lens, but when that costs you over $1000 (and you need a camera to use that lens, too), I'd say this is a pretty good idea for phones and inexpensive compacts.

0 upvotes
falconeyes
By falconeyes (7 months ago)

Mathematically, deconvolution is an unstable operator. Therefore, expect no wonders.

It is most useful with a "perfect" convolved input signal, i.e., one without any noise.

Therefore, DPR's assumption that it may be most useful with smartphones lacks a certain degree of understanding.

6 upvotes
new boyz
By new boyz (6 months ago)

How about removing the noise first (say, by averaging) before applying the deconvolution process? Is that practical?

0 upvotes
Henrik Herranen
By Henrik Herranen (6 months ago)

new boyz:
If you average pixels, all you are doing is spreading the impulse response even further, so you'd need appropriately more aggressive deconvolution (with more unstable infinities) to get back to the sharper image.
Even worse, if you use adaptive averaging (which is basically the way image noise reduction works in all cameras / PC software because that looks the least bad), you don't even _have_ a point spread function anymore and the whole theory falls flat.
All in all, the sad fact is that you cannot add information by removing more of it.
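The first point can be shown in a few lines of NumPy (the kernels are arbitrary toy choices): pre-averaging convolves the PSF with the averaging kernel, so the combined transfer function only gets smaller and its inverse only gets more unstable.

```python
import numpy as np

psf = np.array([0.2, 0.6, 0.2])    # toy lens PSF
avg = np.ones(3) / 3               # 3-tap averaging as crude noise reduction
combined = np.convolve(psf, avg)   # effective PSF after averaging

for k in (psf, combined):
    H = np.fft.fft(k, 64)
    print(np.min(np.abs(H)))       # the combined minimum is far smaller
```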

5 upvotes
new boyz
By new boyz (6 months ago)

Thanks for the reply. Very informative(at least for me).

1 upvote
padang
By padang (6 months ago)

Maybe deconvolution is an unstable operator, but like it or not, that is what a physical lens system performs, and performs fine. Why do you think it cannot be done programmatically: is it a lack of data, or just not having the right algorithm yet?

0 upvotes
ArvoJ
By ArvoJ (6 months ago)

A physical lens system does not perform deconvolution in any possible sense.

1 upvote
utomo99
By utomo99 (7 months ago)

How about still using a few elements, say 2-5, combined with this? Could that produce good images while lowering the element count and reducing the cost and complexity?

0 upvotes
ozgoldman
By ozgoldman (7 months ago)

Even if the efforts so far are having limited success, nothing is surer than that this technology will be refined in the future and will work well.

0 upvotes
RichRMA
By RichRMA (7 months ago)

When they tried this with the Hubble Space Telescope (correction of optical defects via computer), they got only marginal results. It took a new set of corrective optics to do the job. You can't "create" resolution using software; you can only approximate it.

6 upvotes
jsandjs
By jsandjs (6 months ago)

The key point is that they don't treat the final 'resolution' directly. They try to figure out why the final resolution is no good when using a certain type of single glass element.

1 upvote
mgrum
By mgrum (6 months ago)

Actually you can "create" results in resolution using software, if you know exactly how the image was blurred. However imprecise knowledge of the blurring operator and noise in the image limit the amount you can recover using this technique.

0 upvotes
Frank_BR
By Frank_BR (7 months ago)

Deblurring an image by digital deconvolution is not exactly a novelty. A famous example is the use of software deblurring to correct for the serious flaw in the primary mirror of the Hubble space telescope:
http://en.wikipedia.org/wiki/Hubble_Space_Telescope#Origin_of_the_problem

That said, it is good not to be too impressed with the research done at the University of British Columbia and Siegen. The bad side effect of using deconvolution to deblur the image produced by an imperfect lens is the usual increase in noise. Anyone who has used Photoshop knows that noise can become a serious problem as you try to sharpen an image more. Indeed, sharpening can be considered a particular form of deconvolution, and as a general rule, any deconvolution increases noise and other artifacts.

2 upvotes