TN Args
Certainly not the way he makes it.

Since we're sticking to science... I believe I didn't call any names here. I am saying it's difficult to understand how people can conclude that software corrections somehow can put information accurately where it's not there.

Clearly you don't understand the concept. I particularly love the way you manage to insult everyone who does. See if you can find a better way to express yourself in future. Thanks for sticking to science.

Some people here have the *fantasy* that software corrections for a lens are better than optical corrections for image quality. They are perhaps better for making lenses cheaper, or smaller, but not for image quality.
How people can think that inventing information where there is no original information can yield the same result, I don't know.
This is a key reason why 4/3 lenses are optically better overall than m43 lenses.
You are using the data you got to stretch and interpolate to pixels. While this is way better than generating random data, it is still not the same as having a lens optically give you that definition in those areas from the get-go.

This is not a case of reinstating lost information. What makes you think it is?
So when you software-correct for distortion, because you have to warp/unwrap/stretch, you necessarily lose some resolution somewhere. Whether this is noticeable or not will vary per individual, with how strong the correction is, etc.
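To put rough numbers on that, here is a minimal sketch (Python) of how correcting radial distortion stretches the frame unevenly. The one-term radial model and the coefficient k1 are illustrative assumptions, not any real lens profile; the point is only that where the local magnification exceeds 1, in-between pixel values must be interpolated.

```python
# Minimal sketch: uneven stretching in radial distortion correction.
# The model and k1 are illustrative, not taken from any real lens.
import numpy as np

def corrected_radius(r, k1=0.15):
    """One-term radial model: map a distorted radius to its corrected one."""
    return r * (1.0 + k1 * r**2)

# Local magnification is the derivative d(corrected)/d(distorted).
# Where it is above 1 the image is stretched there and in-between
# pixel values have to be interpolated.
for r in (0.0, 0.5, 1.0):  # normalized distance from the image centre
    dr = 1e-6
    mag = (corrected_radius(r + dr) - corrected_radius(r)) / dr
    print(f"r = {r:.1f}  local magnification = {mag:.3f}")
```

With this made-up k1 the centre is untouched (magnification 1.000) while the corner is stretched by about 1.45x, which is exactly the warp/stretch described above.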
There's no need to make up any missing information to warp an image, as long as the stretching is minimal. Information theory says that the true value of any point can be known as long as the proper filter function is used and the input didn't contain any frequencies above the Nyquist limit. In our case the Bayer filter on the sensor puts an upper limit on the frequencies, since not every pixel has a sample for every color.
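As a toy illustration of that claim (my own sketch, not anyone's actual pipeline): sample a sine whose frequency is well below the Nyquist limit, then reconstruct a point between the samples with the Whittaker-Shannon sinc sum. The only error comes from truncating the infinite sum.

```python
# Toy check of the Nyquist claim: a band-limited signal can be recovered
# between samples via sinc interpolation (Whittaker-Shannon).
# The numbers are arbitrary illustrations.
import numpy as np

fs = 10.0               # sample rate; Nyquist limit is fs/2 = 5
f = 1.3                 # signal frequency, safely below Nyquist
n = np.arange(-50, 51)  # sample indices (the ideal sum is infinite)
samples = np.sin(2 * np.pi * f * n / fs)

def sinc_reconstruct(t):
    """Estimate the signal at an arbitrary time t from the samples."""
    return np.sum(samples * np.sinc(fs * t - n))

t = 0.237               # a point that falls between samples
print(sinc_reconstruct(t), np.sin(2 * np.pi * f * t))  # agree closely
```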
The perfect filter function is called Sinc, and it can't be practically implemented since the terms go out to infinity. But you can get some pretty good approximations. I'd expect desktop software to do a better job of this than the camera, since the camera processor is less powerful and has less time to do the job.
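For what a "pretty good approximation" can look like in practice, here is a sketch of a Lanczos interpolator, a common windowed-sinc compromise. The window size a = 3 and the edge handling are just illustrative choices.

```python
# Sketch of a Lanczos (windowed sinc) interpolator: the sinc kernel is
# truncated to a few lobes so it becomes computable. a = 3 is a common,
# but here arbitrary, choice.
import numpy as np

def lanczos_kernel(x, a=3):
    """Windowed sinc: sinc(x) * sinc(x / a) for |x| < a, zero elsewhere."""
    x = np.asarray(x, dtype=float)
    return np.where(np.abs(x) < a, np.sinc(x) * np.sinc(x / a), 0.0)

def lanczos_resample(samples, t, a=3):
    """Interpolate 1-D samples at the fractional position t (sample units)."""
    i0 = int(np.floor(t)) - a + 1
    positions = np.arange(i0, i0 + 2 * a)                    # ideal tap positions
    taps = samples[np.clip(positions, 0, len(samples) - 1)]  # clamp at the edges
    w = lanczos_kernel(t - positions)
    return float(np.sum(taps * w) / np.sum(w))               # normalize the window

data = np.sin(np.linspace(0, 2 * np.pi, 32))
print(lanczos_resample(data, 10.4))   # value between samples 10 and 11
```

A camera has to run something like this for every output pixel in real time, which is one concrete reason desktop software can afford a better filter.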
As for whether it's better to add a glass element to do the same thing, that's debatable. Every element will add some degradation because glass isn't perfect. Although we expect the degradation to be minimal there's no denying that it will happen. Certainly the lens designer wouldn't add it if the expected benefit didn't grossly outweigh the disadvantage. But if you could use software to completely eliminate an element, would the resulting image be better or worse? I don't think you can make a definitive statement either way.
When designing a lens for all-optical results without software correction, optically correcting for distortion will reduce sharpness. The sharpest possible lens designs have a variety of aberrations, including geometric distortion, so the designers make compromises, in much the same way that software corrections do. So the question becomes: which approach results in the better compromises for the (average, intended) customer?
Somewhere there is a paper published by the Optical Society of America which found that, for wide-angle camera lenses under a fixed size restriction, relaxing the constraint on distortion during the optical design process allows for improved optimization of other image-degrading aberrations, and that selecting a fairly large initial distortion value as a constraint yields significantly enhanced final image quality.
So, my view is that people should be a little more open-minded to the fair likelihood that having the software correction option allows more flexibility to improve image quality, size, or cost of lenses -- maybe even more than one at a time. Whether lens makers use this to make the lenses someone wants is a different question, but making blanket statements that it is bound to reduce image quality is just not right.
cheers