Software distortion correction works - here's the proof

Started Jul 7, 2013 | Discussions thread
EEmu
Forum Member • Posts: 69
Re: That experiment is invalid (and this issue is silly besides)
In reply to knickerhawk, Jul 8, 2013

knickerhawk wrote:

EEmu wrote:

While your experiment is interesting, it's invalid and biased to lower quality. You took an interpolated and quantized image, and then re-interpolated and quantized it twice. (Unless I'm misunderstanding your post.)

ACR was used to create a “Baseline TIFF” from raw. Then the Baseline TIFF was opened in ACR and the lens correction sliders were used to distort the image and the file was saved separately (the “Distorted TIFF”). Then I used the lens correction sliders to "correct" the Distorted TIFF to look as close as possible to the Baseline TIFF and saved the result (the “Corrected TIFF”).

Then the Corrected TIFF was compared to the Baseline TIFF to see how much of a softening effect appeared in the Corrected TIFF relative to the Baseline TIFF. Since the only changes applied to the Distorted TIFF and then the Corrected TIFF were distortion control-related, I reasoned that any greater softness in the Corrected TIFF relative to the Baseline TIFF should be attributed to the software-based distortion manipulation.

The success of the experiment was not dependent on the baseline image being “higher quality” nor was the point of the experiment to produce a Corrected TIFF that was optimized for IQ. The point was to note whether any RELATIVE IQ degradation (specifically visible softening) could be detected. Thus, I don’t think your criticism above is relevant. Valid criticism would relate to the type and amount of distortion manipulation applied and whether that kind of “processing” is comparable to what goes on under the covers during RAW conversion of m4/3 images.

Your reasoning is correct, but your experiment remains flawed.

The process you highlight is:

(Native) Distorted RAW -> (Corrected) 'Baseline' TIFF -> (Artificially) Distorted TIFF -> 'Corrected' TIFF.

Every time you transformed and saved the image you degraded it. As an exercise to see how this works, take an image, resize it by something odd like +37%, save it, then scale it back down to the original size. You'll have degraded the quality just by applying those transforms, because the pixels have to be re-interpolated and re-quantized at each conversion in between.
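
If you want to see the effect in numbers, here is a minimal sketch of that round-trip exercise, assuming Pillow and NumPy are installed; "original.tif" is just a placeholder file name:

# Round-trip resize exercise: upscale by an odd factor, then scale back down.
# Every resample interpolates and re-quantizes, so the result no longer
# matches the original exactly.
import numpy as np
from PIL import Image

orig = Image.open("original.tif").convert("RGB")   # placeholder file name
w, h = orig.size

upsized = orig.resize((int(w * 1.37), int(h * 1.37)), Image.BICUBIC)
roundtrip = upsized.resize((w, h), Image.BICUBIC)

a = np.asarray(orig, dtype=np.float64)
b = np.asarray(roundtrip, dtype=np.float64)
rmse = np.sqrt(np.mean((a - b) ** 2))
print(f"RMSE after the +37% round trip: {rmse:.2f} 8-bit levels")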

What you are neglecting is that RAW is decidedly not a finished TIFF: it is Bayer-filtered, so each color channel carries only about half the linear resolution of your TIFF, and distortion correction happens while that lower-resolution pseudo-image is being converted into the full-sized TIFF. This significantly reduces the losses from distortion correction, because demosaicing is already a 'lossy' interpolation; slightly adjusting the resampling map before (or during) demosaicing adds little on top of that. Demosaicing first and then applying distortion correction as a separate pass is worse.
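
To make that concrete, here is a toy single-channel simulation, a stand-in for "fold the correction into the one interpolation you already have to do" versus "interpolate to full size, then correct in a second pass"; it is not how ACR actually works internally. It assumes NumPy, SciPy and Pillow, "reference.tif" is a placeholder, and the two printed numbers typically show a slightly larger deviation for the two-pass route:

# Toy comparison: one combined remap of low-res data vs. upscale-then-remap,
# both measured against the same remap applied to the full-resolution data.
import numpy as np
from PIL import Image
from scipy.ndimage import map_coordinates

img = Image.open("reference.tif").convert("L")     # placeholder file name
w, h = img.size
full = np.asarray(img, dtype=np.float64)
low = np.asarray(img.resize((w // 2, h // 2), Image.BICUBIC), dtype=np.float64)

def warp_coords(height, width, k):
    # Source coordinates for a mild radial warp: r -> r * (1 + k * r^2).
    yy, xx = np.mgrid[0:height, 0:width].astype(np.float64)
    cy, cx = (height - 1) / 2.0, (width - 1) / 2.0
    ny, nx = (yy - cy) / cy, (xx - cx) / cx
    s = 1.0 + k * (nx * nx + ny * ny)
    return ny * s * cy + cy, nx * s * cx + cx

sy, sx = warp_coords(h, w, 0.03)

# Reference: the warp applied once to the full-resolution data.
ref = map_coordinates(full, [sy, sx], order=3, mode="nearest")

# Path A: warp folded into the upscale -- one interpolation of the low-res data
# (the +0.5 / -0.5 terms align pixel centres between the two grids).
a = map_coordinates(low, [(sy + 0.5) / 2.0 - 0.5, (sx + 0.5) / 2.0 - 0.5],
                    order=3, mode="nearest")

# Path B: upscale first (interpolation 1), then warp the result (interpolation 2).
up = np.asarray(img.resize((w // 2, h // 2), Image.BICUBIC)
                   .resize((w, h), Image.BICUBIC), dtype=np.float64)
b = map_coordinates(up, [sy, sx], order=3, mode="nearest")

rmse = lambda x: np.sqrt(np.mean((x - ref) ** 2))
print("one combined remap :", rmse(a))
print("upscale, then remap:", rmse(b))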

(That is an experiment you can do: shoot RAW + JPEG. Convert RAW to TIFF/JPEG, perform the distortion correction, and then compare to the camera's JPEG. The processed image will be worse, though certainly it will require pixel peeping / Imatest to see.)
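
If you'd rather have a number than pixel-peep, something as crude as the variance of the Laplacian can stand in for an Imatest-style acutance check. A sketch, assuming Pillow, NumPy and SciPy, with placeholder file names (both images must show the same scene at the same size):

# Crude acutance proxy: variance of the Laplacian of the luminance channel.
# Higher values mean crisper edges; the numbers are only meaningful when
# comparing renderings of the same scene, not across different scenes.
import numpy as np
from PIL import Image
from scipy.ndimage import laplace

def sharpness(path):
    gray = np.asarray(Image.open(path).convert("L"), dtype=np.float64)
    return laplace(gray).var()

print("camera JPEG (corrected in-camera):", sharpness("camera.jpg"))
print("RAW -> TIFF, corrected afterwards:", sharpness("converted_then_corrected.tif"))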

To better approximate the real processing chain, I would suggest that you try:

RAW -> Corrected TIFF -> scale by 50% -> scale by 200% -> Image "A", uncorrected

Corrected TIFF -> distort -> correct -> scale by 50% -> scale by 200% -> Image "B", distortion corrected.

The loss of resolution from the scaling simulates the demosaicing step, and applying the distort/correct cycle to the full-size image, before the scaling, simulates distortion correction occurring prior to demosaicing.
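
Here is a sketch of that A/B recipe in Python, assuming Pillow, NumPy and SciPy. The radial warp is a generic stand-in for ACR's distortion slider, undoing it with the opposite coefficient is only an approximate inverse (much like correcting by eye), and the file names are placeholders:

# Build Image "A" (scaling round trip only) and Image "B" (distort, approximately
# correct, then the same scaling round trip) from one corrected baseline TIFF.
import numpy as np
from PIL import Image
from scipy.ndimage import map_coordinates

def radial_warp(arr, k):
    # Remap a float RGB array with a simple radial model: r -> r * (1 + k * r^2).
    h, w, ch = arr.shape
    yy, xx = np.mgrid[0:h, 0:w].astype(np.float64)
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    ny, nx = (yy - cy) / cy, (xx - cx) / cx
    s = 1.0 + k * (nx * nx + ny * ny)
    sy, sx = ny * s * cy + cy, nx * s * cx + cx
    return np.stack([map_coordinates(arr[..., c], [sy, sx], order=3, mode="nearest")
                     for c in range(ch)], axis=-1)

def half_then_double(im):
    # 50% downscale then 200% upscale, standing in for the demosaic resolution loss.
    w, h = im.size
    return im.resize((w // 2, h // 2), Image.BICUBIC).resize((w, h), Image.BICUBIC)

def to_image(arr):
    return Image.fromarray(np.clip(arr, 0, 255).astype(np.uint8))

baseline = Image.open("corrected_baseline.tif").convert("RGB")   # placeholder name
arr = np.asarray(baseline, dtype=np.float64)

# Image "A": scaling round trip only.
half_then_double(baseline).save("image_a_uncorrected.tif")

# Image "B": distort, approximately correct, then the same scaling round trip.
warped = radial_warp(radial_warp(arr, 0.05), -0.05)
half_then_double(to_image(warped)).save("image_b_distortion_corrected.tif")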

As I said, though, your experiment produces lower-quality images than correction applied before de-Bayering would, so if you think your results look fine, then you've already proven your point. I'm just noting that your experiment is actually a worse-than-reality case.
