Software correction of lens aberrations

That image above showing barrel distortion is irrelevant since it never sees the light of day. It's like showing an image from a lens after one of its elements has been removed.
Or you are using your favorite RAW converter/DAM like ACDSee or Lightzone. As a result you either have to do corrections manually for each image or drastically change (at possible expense) your workflow. Not nice.

As I said I understand and accept that there are tradeoffs between optical and software corrections. But tradeoffs also go both ways.

--
http://pbase.com/klopus
 
When people talk about software correction degrading the image quality and causing "border softness" they have in mind the interpolation needed to move the pixels around the image to correct for geometry, CA, etc. Pixels are being "generated" from values of surrounding pixels, which they feel is "inventing" information, falsifying the captured image.

What is usually forgotten is that photos, even from a perfectly corrected lens, are still the result of an interpolation process. The spatial identity of a ray of light coming from the lens and hitting a photosite is irrevocably compromised by the computations necessary to generate the RGB representation from the Bayer sensor pattern. The brightness and color information has been distributed, smeared across several columns and rows.
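
To make that concrete, here is a toy bilinear demosaic of an RGGB Bayer mosaic (a minimal sketch in Python, not any camera's actual pipeline; real converters use far more sophisticated algorithms). Two of the three channel values at every pixel come out of this averaging rather than out of a measurement.

```python
import numpy as np

def demosaic_bilinear(bayer):
    """Toy bilinear demosaic of an RGGB Bayer mosaic (HxW float in, HxWx3 out)."""
    h, w = bayer.shape
    r_mask = np.zeros((h, w), dtype=bool); r_mask[0::2, 0::2] = True
    b_mask = np.zeros((h, w), dtype=bool); b_mask[1::2, 1::2] = True
    g_mask = ~(r_mask | b_mask)
    # Bilinear interpolation kernels: green sits on a checkerboard,
    # red and blue on every fourth photosite.
    kern_g  = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0
    kern_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0

    def interpolate(mask, kernel):
        sparse = np.where(mask, bayer, 0.0)
        out = np.zeros_like(sparse)
        # Naive convolution via shifts; edges wrap around, good enough for a sketch.
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                out += kernel[dy + 1, dx + 1] * np.roll(np.roll(sparse, dy, 0), dx, 1)
        return out

    return np.dstack([interpolate(r_mask, kern_rb),
                      interpolate(g_mask, kern_g),
                      interpolate(b_mask, kern_rb)])
```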

Also forgotten is that the effective scaling per pixel is a fraction of a pixel width only. It's not like affected pixels would be doubled or tripled in size. Even if the apparent movement of pixels at the corners of the image may look impressive, as the correction formula is applied to the image as a whole, the effect on each individual pixel is very small.
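
A quick back-of-the-envelope check of that claim, assuming a 4000x3000 frame and a made-up one-parameter radial remap (not any real lens profile): the corner pixels move a long way in absolute terms, but each individual pixel is resized by only a fraction of its own width.

```python
import numpy as np

# Assumptions: a 4000x3000 frame and the simple remap r' = r * (1 + k1 * (r/r_max)^2).
w, h = 4000, 3000
k1 = -0.08                                   # made-up distortion coefficient
cx, cy = (w - 1) / 2.0, (h - 1) / 2.0
r_max = np.hypot(cx, cy)

r = np.linspace(0.0, r_max, 2000)            # radii from centre to corner
r_corr = r * (1.0 + k1 * (r / r_max) ** 2)

abs_shift = np.abs(r_corr - r)                       # how far a pixel moves
local_scale = 1.0 + 3.0 * k1 * (r / r_max) ** 2      # d(r')/dr: per-pixel stretch

print(f"corner pixel moves by about {abs_shift[-1]:.0f} px")
print(f"but is resized by only {abs(local_scale[-1] - 1):.2f} of its own width")
```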

--
Everybody loves gadgets, until they try to make them
http://www.flickr.com/photos/thinkfat
http://thinkfat.blogspot.com
 
Or you are using your favorite RAW converter/DAM like ACDSee or Lightzone. As a result you either have to do corrections manually for each image or drastically change (at possible expense) your workflow. Not nice.

As I said I understand and accept that there are tradeoffs between optical and software corrections. But tradeoffs also go both ways.
Why aren't optically excellent lenses designed for m4/3? Because it's a consumer system.

People here delude themselves that it might be a full-blown pro system, but it's not. The advantage is size, and only at certain focal lengths: wides.

Big P&S, that is. And yet some fools advertise this quackery as a replacement for optical excellence. Perhaps these luddites could do without lenses, mirrors and shutters at all.

Camera makers will laugh: it will cost them next to nothing to produce such a camera, while asking a higher price tag.

Excellent strategy in the downturn, with such affluent converts, who believe that optics are old hat.

Am.
--
Photostream: http://www.flickr.com/photos/amalric
 
Really... how do you know they are destructive and what do you mean by destructive? Nikon is doing it with the D90 and D300 and so you are saying Nikon is practicing destructive distortion control? Do you think Nikon is more or less destructive? If you don't know... that's OK too.

Extensive palette of in-camera Retouch Menus

The D90’s designers incorporated a wide variety of image editing functions, making it easy for users to enhance images within the camera. The D90 introduces several new retouch options: Distortion Control adjusts lens aberration, Straighten corrects inclination of the image, while Fisheye produces optical effects similar to a fisheye lens.
http://www.nikon.com/about/news/2008/0827_d90_01.htm
Removing chromatic aberration and geometric distortion, and any other form of image aberration, as well as adjusting tonal scale, color balance, cropping, etc., are indeed all destructive editing processes ... They're done to improve the image quality.

--
Godfrey
http://godfreydigiorgi.posterous.com
 
Or you are using your favorite RAW converter/DAM like ACDSee or Lightzone. As a result you either have to do corrections manually for each image or drastically change (at possible expense) your workflow. Not nice.
Or you could, perhaps, ask the creators of that software you are paying for to implement the features Adobe, Apple, et al. have implemented, instead of just piggybacking on dcraw for the core features of their software.
 
Why aren't optically excellent lenses designed for m4/3? Because it's a consumer system. People here delude themselves that it might be a full-blown pro system, but it's not. The advantage is size, and only at certain focal lengths: wides.
I may be missing something, but I don't read a lot of posts by pros saying they were ditching their DSLR's in favor of a current m4/3. The worst I've seen is some people thinking that maybe someday, mirrorless IL cameras will get good enough to replace DSLR's. (They're probably wrong, but who knows.) It's absurdly obvious that the current generation of m4/3 does not stack up to even entry-level DSLR's in many respects, including AF speed. I'm seeing a lot of complaints about limited lens selections, concerns over ISO performance, etc etc. Sorry, but this seems like a bit of a straw man.
And yet some fools advertise this quackery as a replacement for optical excellence. Perhaps these luddites could do without lenses, mirrors and shutters at all.
OK, keeping in mind I am completely neutral on the assessment of the technical properties of current m4/3 lenses:

The only thing that matters is the end product. If you can examine the m4/3 image and there is some technical issue with the image that detracts from its aesthetic value, that's a problem.

But if it is in fact good enough, what's the problem? It's a shot that you might not have otherwise gotten if you'd been carrying around a DSLR, and the image is noticeably better than what you'd get with a 1/2.5" sensor, so who cares if there's a little more CA or if the MTF charts don't look as good as a $2,000 Nikon lens?

If you're a pro, yes you need to make sure your images look their best. For everyone else it is, to put it politely, pixel-peeping.
Camera makers will laugh: it will cost them next to nothing to produce such a camera, while asking a higher price tag. Excellent strategy in the downturn, with such affluent converts, who believe that optics are old hat.
You're joking, right?

They need to redesign and test all the m4/3 lenses, and that is Not Cheap. M4/3 has been a pretty big risk -- and if it was such a sure thing, why haven't Nikon & Canon cranked out similar mirrorless setups? Olympus and Panasonic are plain ol' lucky that the pent-up demand for this type of intermediary camera is stronger than the effects of a massive recession.
 
Removing chromatic aberration and geometric distortion, and any other form of image aberration, as well as adjusting tonal scale, color balance, cropping, etc., are indeed all destructive editing processes ... They're done to improve the image quality.
Really... how do you know they are destructive and what do you mean by destructive? Nikon is doing it with the D90 and D300 and so you are saying Nikon is practicing destructive distortion control? Do you think Nikon is more or less destructive? If you don't know... that's OK too.
They have to be destructive operations in the precise technical sense of the term that I stipulated above. The term destructive is being used in a way that is mathematically/computer-science accurate, without relation to its meaning and use in the sentence, "When the bomb was exploded, its destructive potential was realized."

Raw conversion of an image from the digital capture device (the sensor) to an RGB channel image is an inherently destructive process for all digital cameras. So are any corrections, permutations, transformations, adjustments, what have you.

When you hear the term "non-destructive editing" as used with Lightroom and Aperture (and Photoshop with respect to Smart Objects), what it means is that the original image file is not changed by the processing operations; only the output file has the adjustments applied to its pixel values. Once that output file is created, the only way back to the original data is to re-open the original, unchanged file: you cannot transform the output file's pixel values into the data that is stored in the original file.
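
A rough sketch of that idea in Python (the general pattern only, not Lightroom's or Aperture's actual architecture): the original data is never touched, the edits live in a separate recipe, and only the rendered output has the new pixel values baked in.

```python
import numpy as np

def render(original, recipe):
    """Apply an edit recipe to a copy of the original data; the original is untouched."""
    img = original.astype(float) * recipe.get("exposure", 1.0)                    # exposure
    img = 255.0 * (np.clip(img, 0, 255) / 255.0) ** recipe.get("gamma", 1.0)      # tone curve
    return np.clip(np.round(img), 0, 255).astype(np.uint8)                        # bake to 8-bit

original = np.random.randint(0, 256, size=(4, 6), dtype=np.uint8)   # stand-in for raw data
recipe = {"exposure": 1.4, "gamma": 0.9}                            # the stored "edits"
output = render(original, recipe)

# "Non-destructive" refers to the source: re-rendering always starts from the
# same unchanged original ...
assert (render(original, {}) == original).all()
# ... but the baked output cannot be turned back into the original; clipping
# and 8-bit rounding have already discarded data.
```
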
--
Godfrey
http://godfreydigiorgi.posterous.com
 
I would like to talk a bit more about this just to learn myself... I understand that the DMC-Gx series corrects for Panasonic lens imperfections and that the Olympus E-x series does as well. I also understand that the Nikon D90, as well as the D300s and others, corrects imperfections and lens aberrations of Nikon lenses.

So you are saying that the camera's corrective process is distructive or non-distructive? One step further, for example: when the D90/D300s corrects for distortion or vignetting, is this distructive or non-distructive? If it is non-distructive, why, and to what extent? Also, what is the difference between non-distructive and distructive? I.e., if the camera's process for correcting distortion is distructive, does it take away from the detail and resolution of the image?

In my mind what it comes down to is that it doesn't matter if the correction is taking place in the camera or in desktop software. Also, it really doesn't matter if we call it distructive or non-distructive. What does matter is how well the algorithm retains the original IQ. Am I wrong on this?
Thanks!
 
No problem, I stand by what I say. M4/3 has been designed from the start as a consumer system, and in-camera correction of relatively poor optics is proof of that.

Smaller size might have acceptable tradeoffs for certain kinds of photography, like street, but generally there is no free lunch.

The 20/1.7 is the exception, since it is the sweet spot of the register/sensor ratio. All the rest has to be heavily corrected.

Better lens lineups won't disappear because of clever marketing. m4/3 is not eating very much into the dSLR share of the major players.

It does eat into the upper tier of the P&S market, where optical excellence was never the paramount issue. It might also eat into the entry level of dSLRs, where you hardly buy more than a couple of lenses.

Am.
--
Photostream: http://www.flickr.com/photos/amalric
 
No problem, I stand by what I say. M4/3 has been designed from the start as a consumer system, and in-camera correction of relatively poor optics is proof of that.
OK, but who cares? You can say the same for most entry-level DSLR's and kit lenses. It doesn't even matter what the original design intent is; 120 film started out as a "consumer" format (e.g. the Brownie cameras), and later developed into a high-end format.

And even pros have kept a role for certain "consumer" level cameras, e.g. toting P&S's as both a last-ditch backup and for situations where a more obvious DSLR may not be welcome or allowed. Travel photographers have done that for years, lauding their Contax T2's and Yashica T's and Nikon 35Ti's for those purposes.

It is also quite plausible that Panasonic in particular may try to develop high-end m4/3's, while Olympus aims more at consumers and at protecting its entry-level DSLR's. E.g. Olympus is making consumer-level m4/3 lenses, and trading off a degree of IQ for compactness. Olympus may also change their tune in a few years, if m4/3 makes a serious dent in DSLR sales. Meanwhile the Panasonic m4/3 lenses are less compact but much higher quality -- not just the 20mm, but the kit zoom and 7-14mm as well. Not a lot of amateurs can sneak an $1100 ultra-wide-angle lens past their wives. ;)
Better lens lineups won't disappear because of clever marketing. m4/3 is not eating very much into the dSLR share of the major players....
Is this based on actual numbers, or subjective impressions?

Plus, m4/3 is literally just getting started. I don't think m4/3 etc. will take a big chunk of the pro DSLR market, especially sports and action; phase detection won't stand still while CDAF advances, and EVF's will always have a degree of lag. Then again, I'm sure that back in the 30s you had a bunch of pros who were dead sure that 35mm would never get good enough to displace sheet and roll film cameras, so who knows.

And really, the nitty-gritty doesn't matter that much. If the file a camera produces has the characteristics you are looking for, it's irrelevant if it's done via optics or software. Or, if you're willing to swap a degree of IQ for portability and convenience, then the design of the lens and MTF charts simply do not matter.
 
The only thing that doesn't sit quite right with your argument for me is that if you compare an 18-200mm Nikon with a 14-140mm Panasonic lens at 35mm f/5.6 for sharpness, the Panasonic wins. That's not clever marketing, I think it's clever design.

So, you can use your consumer systems argument, poor optics proof, smaller size trade-offs and free lunches... but please don't... that just colors the facts and misleads others. You have to show me the proof of what you are saying, and the comparison here at DPReview shows this clearly.

Thanks though... I appreciate all views, I just don't think bigger is always better.
No problem, I stand by what I say. M4/3 has been designed from the start as a consumer system, and in-camera correction of relatively poor optics is proof of that.

Smaller size might have acceptable tradeoffs for certain kinds of photography, like street, but generally there is no free lunch.

The 20/1.7 is the exception, since it is the sweet spot of the register/sensor ratio. All the rest has to be heavily corrected.

Better lens lineups won't disappear because of clever marketing. m4/3 is not eating very much into the dSLR share of the major players.

It does eat into the upper tier of the P&S market, where optical excellence was never the paramount issue. It might also eat into the entry level of dSLRs, where you hardly buy more than a couple of lenses.

Am.
--
Photostream: http://www.flickr.com/photos/amalric
 
The only thing that doesn't sit quite right with your argument for me is that if you compare an 18-200mm Nikon with a 14-140mm Panasonic lens at 35mm f/5.6 for sharpness, the Panasonic wins. That's not clever marketing, I think it's clever design.
Your comparison doesn't hold water. You are comparing two consumer lenses, and I wouldn't touch an 18-200 with a barge pole. But then YMMV :)
So, you can use your consumer systems argument, poor optics proof, smaller size trade-offs and free lunches... but please don't... that just colors the facts and misleads others.
Well, in the past I did comparisons here between zooms based on MTF resolution charts, and m4/3 always had problems at the edges. The same problem is documented for Leica M lenses: they need microlens offset at the sensor's edges. Short registers do have problems in digital.

It can be solved but it will be expensive. Leica did it, but not Oly and Panny yet.
You have to show me the proof of what you are saying, and the comparison here at DPReview shows this clearly.

Thanks though... I appreciate all views, I just don't think bigger is always better.
Well, that's your problem :)

Am.

--
Photostream: http://www.flickr.com/photos/amalric
 
I would like to talk a bit more about this just to learn myself... I understand that the DMC-Gx series corrects for Panasonic lens imperfections and that the Olympus E-x series does as well. I also understand that the Nikon D90, as well as the D300s and others, corrects imperfections and lens aberrations of Nikon lenses.

So you are saying that the camera's corrective process is distructive or non-distructive? One step further, for example: when the D90/D300s corrects for distortion or vignetting, is this distructive or non-distructive? If it is non-distructive, why, and to what extent? Also, what is the difference between non-distructive and distructive? I.e., if the camera's process for correcting distortion is distructive, does it take away from the detail and resolution of the image?

In my mind what it comes down to is that it doesn't matter if the correction is taking place in the camera or in desktop software. Also, it really doesn't matter if we call it distructive or non-distructive. What does matter is how well the algorithm retains the original IQ. Am I wrong on this?
Thanks!
BTW, The word is "destructive", not "distructive". ;-)

There is no "original IQ" except as an abstract theoretical entity, theorising about the qualities of an image formed by a lens with light.

There is the data formed by light passing through a lens and collected by the sensor assembly (which includes the IR filter, antialiasing filter, photosite color filters and micro-lenses, and finally the photosites themselves, millions of them). The sensor's data capture system quantizes that data, transforming the analog voltage in each photosite, which reflects the amount of light energy that struck it, into an integer in a 2D array mapped by position to the effective, active photosite array. That data is later written to a raw file, or processed further with destructive transformations (demosaicing and color interpolation, then gamma correction of the tonal scale) into a TIFF or JPEG file.

Lens correction in Micro-FourThirds cameras is an automated transformation, based on parameters supplied by the lenses feeding the correction algorithm in the camera body or in the raw converter software. It is also a destructive process. I don't know the specifics of what you're referring to in the Nikon cameras, but Olympus includes filters in their cameras and lens correction post-RAW based on a table of data for each lens in their image processing applications (Olympus Studio and Master). All of these are destructive transformations ... you cannot take their output and regenerate the original data values from them.

The image quality that results is the combined effect of that whole process, reflecting, of course, both the qualities of the light image formed by the lens that struck the sensor assembly in the first place and all of the processing that affected it along the way to becoming a viewable image. That which is formed by the lens is the optical image. That which later forms in the computer is a rendering of that optical image.

Any change (transformation) applied to the pixel values of an image, whether implemented in the camera or in software operating on the raw data files, is destructive unless it is wholly reversible and the original data values can be retrieved from that reversal. Nearly all image processing transformations are destructive in nature.
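
A minimal demonstration of "destructive" in exactly this sense, using a toy 8-bit gamma round trip (illustrative only, not any camera's actual processing): apply a transformation, then apply its mathematical inverse, and the original values do not come back, because rounding and clipping have already discarded information.

```python
import numpy as np

rng = np.random.default_rng(0)
original = rng.integers(0, 256, size=100_000, dtype=np.uint8)

gamma = 2.2
encoded = np.clip(np.round(255 * (original / 255) ** (1 / gamma)), 0, 255).astype(np.uint8)
decoded = np.clip(np.round(255 * (encoded / 255) ** gamma), 0, 255).astype(np.uint8)

changed = np.count_nonzero(decoded != original)
print(f"{changed} of {original.size} values cannot be recovered exactly")
print("distinct levels before:", np.unique(original).size,
      "after the round trip:", np.unique(decoded).size)
```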

--
Godfrey
http://godfreydigiorgi.posterous.com
 
Really... how do you know they are destructive and what do you mean by destructive?
It's quite simple: destructive means that part of the image data is lost in the process. Here are two pictures taken with the Panasonic Lumix 7-14 lens. The first is a "normal" picture taken at 7mm, as processed either by the Panasonic firmware (out-of-camera JPEGs) or by a compliant raw converter:



Picture credit: Rafael ( http://forum.getdpi.com/forum/member.php?u=1905 )

The second picture is the raw image converted with VueScan, a raw converter that doesn't apply any optical correction:



Picture credit: Rafael ( http://forum.getdpi.com/forum/member.php?u=1905 )

It's easy to see that the first picture has a significantly reduced field of view compared to the second. Of course, distortion has not been corrected in the second picture, while it has been in the first one, but there are some instances where a wider field of view is more important than correct geometry, so it's a pity that the user has no easy way to control what is happening behind the scenes.
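
For anyone curious where the field of view goes, here is a back-of-the-envelope estimate with an assumed one-parameter barrel model and a made-up coefficient (not the real 7-14mm profile): if the correction keeps the centre magnification and the output frame size, scene content captured near the frame edges is pushed outside the output frame and discarded.

```python
import numpy as np

w, h = 4000, 3000                       # output frame in pixels (assumed)
k1 = -0.30                              # strong barrel distortion (made-up value)
cx, cy = w / 2.0, h / 2.0
r_max = np.hypot(cx, cy)

# Captured (distorted) position that lands on the output frame's right edge
# after correction, measured along the horizontal axis:
x_edge_captured = cx * (1.0 + k1 * (cx / r_max) ** 2)

kept = x_edge_captured / cx
print(f"fraction of the captured half-width that survives: {kept:.2f}")
print(f"roughly {100 * (1 - kept):.0f}% of the horizontal field is cropped away")
```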

Cheers!

Abbazz
--
The 6x9 Photography Online Resource: http://artbig.com/

 
The 20/1.7 is the exception, since it is the sweet spot of the register/sensor ratio. All the rest has to be heavily corrected.
Pretty much all lenses are heavily corrected; it's just that the correction is done in hardware, with extra elements and coatings, rather than in software. For some things like CA, where it's very efficient to do it in software with very few IQ issues, I'd rather have that than introduce other problems that are harder to correct, or have the lens be twice as heavy and three times the cost.
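
By way of illustration, software correction of lateral CA typically comes down to resampling the red and blue channels with slightly different radial scales so their edges line up with green again; here is a sketch of that idea (illustrative scale factors, not any manufacturer's actual algorithm).

```python
import numpy as np
from scipy.ndimage import map_coordinates

def correct_lateral_ca(rgb, scale_r=1.0005, scale_b=0.9995):
    """rgb: HxWx3 float image. Radially rescales R and B about the image centre."""
    h, w, _ = rgb.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    y, x = np.mgrid[0:h, 0:w].astype(float)
    out = rgb.copy()
    for ch, s in ((0, scale_r), (2, scale_b)):
        # Bilinear resampling at radially rescaled coordinates -- the same kind
        # of sub-pixel interpolation discussed earlier in the thread.
        ys = cy + (y - cy) / s
        xs = cx + (x - cx) / s
        out[..., ch] = map_coordinates(rgb[..., ch], [ys, xs], order=1, mode="nearest")
    return out
```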
 
... One thing though: soft correction, just as any post-processing routine, is a destructive process, and decreases image quality. ...
The generalization "software correction is a destructive process which decreases image quality" is incorrect.

When image processing is said to be destructive, it is a technical statement. Destructive in this context means explicitly that data is "irrevocably changed" ... that is, the original data is lost and replaced with new data ... not that it 'destroys' or 'degrades' the data.
I don't think we're all on the same page here on what counts as technical criteria and what counts as artistic ones. (No, I don't mean art is not "technical"... just bear with me.)

Image transformations involve degradation in the sense of information loss, which is quantifiable. An image is a token that encodes information about a scene; i.e., a proxy that allows us to infer facts about the original scene. Image transformations, as a general rule, destroy some of that information; some things that were distinguished in the original image will not be so in the output. The most obvious examples are that edits tend to cause posterization and loss of spatial resolution.
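
A trivial way to see that the loss is quantifiable, using a toy linear contrast reduction with made-up numbers: count the distinct tonal levels before and after the edit.

```python
import numpy as np

levels = np.arange(256, dtype=float)                       # every 8-bit value once
contrast = 0.6                                             # made-up contrast reduction
curved = np.clip(np.round((levels - 128) * contrast + 128), 0, 255).astype(np.uint8)

print("distinct levels before:", np.unique(levels).size)   # 256
print("distinct levels after: ", np.unique(curved).size)   # noticeably fewer: levels have merged
```

Those merged levels are the posterization mentioned above; no later edit can tell them apart again.
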
Good image processing does not degrade the image quality, even if it changes it irrevocably. If this were not the case, NO work with Photoshop or any other image editor, or raw converter, could be used, and we can all see how rendering work with Photoshop improves the image quality even though its operations are "destructive". The lens correction metadata and the routines which apply it are indeed improving the quality of the images made by these lenses.
But all this means is that "image quality" in your sense doesn't mean "information." Image editing sacrifices some of the information in the original image in order to produce output that we like better than the original, by artistic criteria. In the case of distortion corrections, we sacrifice some spatial resolution because we think it looks bad that straight lines in the real world show up as curves in the photo.
Removing chromatic aberration and geometric distortion, and any other form of image aberration, as well as adjusting tonal scale, color balance, cropping, etc., are indeed all destructive editing processes ... They're done to improve the image quality.
Only in the sense that these processes produce output that we like better than the original. But again, that's an artistic decision, not a technical fact, because it involves our preferences .
 
Agreed - all things being equal, a proper optical formula is preferable to soft corrections.
This sounds like an irrational preference. If all other things truly were equal, why would you care one way or another?
But things are never equal. There's, I believe, a myth that m4/3 lenses are relatively small, light and cheap (not always) solely because of format geometry. Certainly it's mostly true, but if Oly and Pana didn't allow themselves to heavily compromise on optical formula when it comes to distortions and CA, we most probably would see much bigger, heavier and/or slower and pricier m4/3 lenses.
Why do you label Olympus and Panasonic as the ones making the compromises in this case? Why not flip it around, and say that Canon and Nikon are the ones who are compromising the size and weight of their lenses by requiring them to perform distortion correction optically?
 
And recall that time when the astronauts made a repair visit to install new software tweaks.
With Hubble, they installed an optical corrector, because the main mirror had been ground very precisely to an incorrect, but known, shape. It is a good example of what can and cannot be done with digital image processing: they could correct for some aberrations with software, but if some of the light collected by the mirror never got to a sensor in the first place, you can't recover the lost sensitivity by image processing.

Personally I am very interested in the technology of hybrid optical/digital design. It is one of the clear benefits of digital technology over film. Is the information exchanged by the lens and body part of the m4/3 standard? The OP asked if software correction would work the same with a Panasonic lens on an E-P1 body, but no one has answered this question yet.

S.
 
Do I need to understand how it is done? No, I just look at my pics. They look better with these corrections.

Maybe that special pixel, you know, in the corner, 7th row, 2nd from the left, would look much nicer if uncorrected ..... ;)
 
