Crop sensor Pro Body - DPReview TV

Thanks for sticking to science. Some people here have the *fantasy* that software corrections for a lens are better than optical ones for image quality. They are better for making a lens cheaper, perhaps, or smaller, but not for image quality.

How people can think that inventing information where there's no original information will yield the same result, I don't know.

This is a key reason why 4/3 lenses are optically better overall than m43 lenses.
Clearly you don't understand the concept. I particularly love the way you manage to insult everyone who does. See if you can find a better way to express yourself in future.
I believe I didn't call anyone names here. I am saying it's difficult to understand how people can conclude that software corrections can somehow put accurate information where it isn't there.
This is not a case of reinstating lost information. What makes you think it is??
You are using the data you got to stretch and interpolate to pixels. While this is way better than generating random data, it is still not the same as having a lens optically give you that definition in those areas from the get-go.

So when you software-correct for distortion, because you have to warp/unwrap/stretch, you necessarily lose some resolution somewhere. Whether this is noticeable or not will vary per individual, per how strong the correction is, etc.
Since we're sticking to science...

There's no need to make up any missing information to warp an image, as long as the stretching is minimal. Information theory says that the true value of any point can be known as long as the proper filter function is used and the input didn't contain any frequencies above the Nyquist limit. In our case the Bayer filter on the sensor puts an upper limit on the frequencies, since not every pixel has a sample for every color.
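
For reference, that result is the Whittaker-Shannon interpolation formula: a signal sampled every $T$ with no content above the Nyquist limit is recovered exactly by

$$x(t) = \sum_{n=-\infty}^{\infty} x(nT)\,\mathrm{sinc}\!\left(\frac{t - nT}{T}\right), \qquad \mathrm{sinc}(u) = \frac{\sin(\pi u)}{\pi u}.$$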

The perfect filter function is called Sinc, and it can't be practically implemented since the terms go out to infinity. But you can get some pretty good approximations. I'd expect desktop software to do a better job of this than the camera, since the camera processor is less powerful and has less time to do the job.
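
A minimal sketch of such an approximation, in Python (a Lanczos windowed sinc; the helper names and test signal are my own, purely illustrative):

```python
import numpy as np

def lanczos_kernel(x, a=3):
    """Windowed sinc: sinc(x) * sinc(x/a) for |x| < a, zero outside."""
    x = np.asarray(x, dtype=float)
    out = np.sinc(x) * np.sinc(x / a)  # np.sinc is the normalized sinc
    out[np.abs(x) >= a] = 0.0
    return out

def resample_1d(samples, positions, a=3):
    """Estimate values at fractional 'positions' from an integer-grid signal."""
    result = np.zeros(len(positions))
    for i, p in enumerate(positions):
        # the 2*a taps nearest to p
        idx = np.arange(int(np.floor(p)) - a + 1, int(np.floor(p)) + a + 1)
        idx = np.clip(idx, 0, len(samples) - 1)  # clamp at the borders
        result[i] = np.sum(samples[idx] * lanczos_kernel(p - idx, a))
    return result

# band-limited test signal on an integer grid
t = np.arange(64)
sig = np.sin(2 * np.pi * 0.1 * t)
# resample at half-pixel offsets, the kind of shift a distortion warp needs
shifted = resample_1d(sig, t[4:-4] + 0.5)
```

Lanczos-3 (six taps per output sample, as above) is a common practical stand-in for the infinite sinc.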

As for whether it's better to add a glass element to do the same thing, that's debatable. Every element will add some degradation because glass isn't perfect. Although we expect the degradation to be minimal there's no denying that it will happen. Certainly the lens designer wouldn't add it if the expected benefit didn't grossly outweigh the disadvantage. But if you could use software to completely eliminate an element, would the resulting image be better or worse? I don't think you can make a definitive statement either way.
Certainly not the way he makes it.

When designing a lens for all-optical results without software correction, optically correcting for distortion will reduce sharpness. The sharpest possible lens designs have a variety of aberrations, including geometric distortion. So they make compromises. In much the same way that software corrections do. So the question becomes, which approach results in the better compromises for the (average, intended) customer?

Somewhere there is a paper by the Optical Society of America which found that, for wide angle camera lenses and a fixed size restriction on the lens, relaxation of the constraint on distortion during the optical system design process allows for improved optimization of other image-degrading aberrations. And that selection of a fairly large initial distortion value as a constraint yields significantly enhanced final image quality.

So, my view is that people should be a little more open minded to the fair likelihood that having the software correction option allows more flexibility to improve either image quality, size or cost of lenses -- maybe even more than one at a time. Whether lens makers use this to make the lenses someone wants is a different question, but making blanket statements that it is bound to reduce image quality is just not right.

cheers
 
Thanks for sticking to science. Some people here have the *fantasy* that software corrections for a lens are better than optical ones for image quality. They are better for making a lens cheaper, perhaps, or smaller, but not for image quality.

How people can think that inventing information where there's no original information will yield the same result, I don't know.

This is a key reason why 4/3 lenses are optically better overall than m43 lenses.
Clearly you don't understand the concept. I particularly love the way you manage to insult everyone who does. See if you can find a better way to express yourself in future.
I believe I didn't call anyone names here. I am saying it's difficult to understand how people can conclude that software corrections can somehow put accurate information where it isn't there.
This is not a case of reinstating lost information. What makes you think it is??
You are using the data you got to stretch and interpolate to pixels. While this is way better than generating random data, it is still not the same as having a lens optically give you that definition in those areas from the get-go.

So when you software-correct for distortion, because you have to warp/unwrap/stretch, you necessarily lose some resolution somewhere. Whether this is noticeable or not will vary per individual, per how strong the correction is, etc.
Since we're sticking to science...

There's no need to make up any missing information to warp an image, as long as the stretching is minimal. Information theory says that the true value of any point can be known as long as the proper filter function is used and the input didn't contain any frequencies above the Nyquist limit. In our case the Bayer filter on the sensor puts an upper limit on the frequencies, since not every pixel has a sample for every color.

The perfect filter function is called Sinc, and it can't be practically implemented since the terms go out to infinity. But you can get some pretty good approximations. I'd expect desktop software to do a better job of this than the camera, since the camera processor is less powerful and has less time to do the job.

As for whether it's better to add a glass element to do the same thing, that's debatable. Every element will add some degradation because glass isn't perfect. Although we expect the degradation to be minimal there's no denying that it will happen. Certainly the lens designer wouldn't add it if the expected benefit didn't grossly outweigh the disadvantage. But if you could use software to completely eliminate an element, would the resulting image be better or worse? I don't think you can make a definitive statement either way.
Certainly not the way he makes it.

When designing a lens for all-optical results without software correction, optically correcting for distortion will reduce sharpness.
This is not a given. More symmetrical designs will have less distortion and use of aspheres far from the stop allows you to set arbitrarily low distortion "for free" in very asymmetric designs.
The sharpest possible lens designs have a variety of aberrations, including geometric distortion.
The sharpest lens design has no aberrations at all...
Somewhere there is a paper by the Optical Society of America which found that, for wide angle camera lenses and a fixed size restriction on the lens, relaxation of the constraint on distortion during the optical system design process allows for improved optimization of other image-degrading aberrations. And that selection of a fairly large initial distortion value as a constraint yields significantly enhanced final image quality.
OSA is not an author, it's a professional society. Are you talking about a paper in an OSA journal? If so, which one? JOSA A? Applied Optics? Optics Express? Optics Letters? And from when? Under some arbitrary constraints about design variables there is nothing wrong with those findings, but they are not general. I once designed an ultra wide angle projection lens which would have 84% distortion in an all-spherical design form. By adding a single aspheric surface, I brought distortion down to 2.3% without a substantive change in design form. If you are willing to use aspheres and do so effectively, the findings of the paper you allude to are not general.
So, my view is that people should be a little more open minded to the fair likelihood that having the software correction option allows more flexibility to improve either image quality, size or cost of lenses -- maybe even more than one at a time.
I have no doubts it allows reduced cost and have never said otherwise. Given the shorter-than-average service life of M4/3 lenses and their mostly irreparable nature, this has led to a cheap, disposable design philosophy. Unfortunately, those low costs have not been passed on to the consumer.
 
Since we're sticking to science...

There's no need to make up any missing information to warp an image, as long as the stretching is minimal. Information theory says that the true value of any point can be known as long as the proper filter function is used and the input didn't contain any frequencies above the Nyquist limit.
No - sampling theory states that if a signal is Nyquist sampled, its sampled representation contains enough data to recreate exactly the original signal. This has to do only with reconstruction. Correcting distortion is a non-affine, non-reversible transformation and is not a filter. Because it is non-reversible, it is lossy.
I knew I was being a bit imprecise in my language, but I couldn't quite put my finger on it. Thanks for the corrections.

I don't see how a distortion can be non-reversible. Even if you don't know the exact formula of the distortion effect, you can fashion a reversal by simple measurement and interpolation. Then it becomes a simple matter of deciding what precision is sufficient, and this is certainly easier in software than it is in glass.
In our case the Bayer filter on the sensor puts an upper limit on the frequencies, since not every pixel has a sample for every color.
This is absolutely not true. The Bayer filter makes the chromatic signal sparse. Sparsity is not equivalent to filtered. The presence of aliasing and things like moire are proof positive that Bayer images can be less than Nyquist sampled. In fact, consumer preferences are for something around Q=1.5 being "ok" sharpness and Q=0.5 being "critical" sharpness. You need Q>2 for Nyquist sampling!
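
To put an illustrative number on Q (lambda times f-number, divided by pixel pitch): taking, say, $\lambda = 0.55\,\mu\mathrm{m}$, an f/4 lens, and $3.3\,\mu\mathrm{m}$ pixels,

$$Q = \frac{\lambda N}{p} = \frac{0.55 \times 4}{3.3} \approx 0.67,$$

nowhere near the $Q \ge 2$ needed for Nyquist sampling.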
The process of de-Bayering the RAW data will either blur or alias those high frequencies. The damage done by a good reconstruction filter for moving pixels around will pale in comparison.
The perfect filter function is called Sinc, and it can't be practically implemented since the terms go out to infinity. But you can get some pretty good approximations. I'd expect desktop software to do a better job of this than the camera, since the camera processor is less powerful and has less time to do the job.
Sinc can be practically implemented - exactly, even. The FT of sinc is just a rect function, which has finite support. The camera doesn't calculate sinc, but even if it did it's only about 5 flops. To process a 20MP image with one sinc per pixel, that's 100 Mflops - a measly 800 MHz processor could do it in about 100 milliseconds.
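
The pair being referenced, in the normalized convention:

$$\mathrm{sinc}(x) = \frac{\sin(\pi x)}{\pi x}, \qquad \mathcal{F}\{\mathrm{sinc}\}(\xi) = \mathrm{rect}(\xi),$$

where $\mathrm{rect}(\xi)$ is 1 for $|\xi| < 1/2$ and 0 outside - hence the finite support.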
Are you suggesting that the camera could do a full FT of the entire image, do a rectangle window on it, then do another FT to convert back? For each pixel in the image? That's going to be a bit more than 5 flops.
As for whether it's better to add a glass element to do the same thing, that's debatable. Every element will add some degradation because glass isn't perfect.
In what way is the glass imperfect and how does that impact the image?

Do you mean homogeneity and things like striae? No substantive impact on an imaging system far from the diffraction limit like a camera lens. Surface roughness? Matters in UV, not VIS. Surface irregularity? The elements in a camera lens are good enough for that to not matter.
If that were true then there would be no sample variation, no decentering, no flare - every lens would be perfect. In the real world that's simply not the case. No two lenses are perfectly identical.
Although we expect the degradation to be minimal there's no denying that it will happen. Certainly the lens designer wouldn't add it if the expected benefit didn't grossly outweigh the disadvantage. But if you could use software to completely eliminate an element, would the resulting image be better or worse? I don't think you can make a definitive statement either way.
If software were king, lithography lenses wouldn't have 40-80 elements in them, they'd have 6 and software thrown on top. Here's one that's a mild derivative of a 90s design, good enough for a 100 nm or so process:

[image: cross-section of a multi-element fused-silica lithography lens]


All that fused silica is definitely because software is superior!
For those of us without semiconductor fab money in our back pocket, your example is a bit unrealistic. I'm not talking about the difference between 80 elements and 6, I'm talking about e.g. the difference between 10 and 9.
 
Since we're sticking to science...

There's no need to make up any missing information to warp an image, as long as the stretching is minimal. Information theory says that the true value of any point can be known as long as the proper filter function is used and the input didn't contain any frequencies above the Nyquist limit.
No - sampling theory states that if a signal is Nyquist sampled, its sampled representation contains enough data to recreate exactly the original signal. This has to do only with reconstruction. Correcting distortion is a non-affine, non-reversible transformation and is not a filter. Because it is non-reversible, it is lossy.
I knew I was being a bit imprecise in my language, but I couldn't quite put my finger on it. Thanks for the corrections.

I don't see how a distortion can be non-reversible. Even if you don't know the exact formula of the distortion effect, you can fashion a reversal by simple measurement and interpolation. Then it becomes a simple matter of deciding what precision is sufficient, and this is certainly easier in software than it is in glass.
Suppose you have a distortion function which has a point of inflection in it. This will result in data from just above and just below that point being put into the same pixel. Once you do that reduction, you can't undo it. This is true for any substantial stationary point in the curve, which generally takes the form f(x) := ax + bx^3 + cx^5 + ...

If more than one coefficient is nonzero, you have a stationary point. If they are all zero (i.e., perfect rectilinearity) or only one is nonzero, then the correction is reversible. In practice you will still get some artifacts from reversing the transformation that are related to quantization.
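
A quick numerical sketch of that collision (the coefficients are made up, chosen only so the curve turns back on itself):

```python
import numpy as np

# toy radial distortion f(r) = a*r + b*r^3 + c*r^5, non-monotonic on purpose
a, b, c = 1.0, -2.0, 1.0
f = lambda r: a * r + b * r**3 + c * r**5

r = np.linspace(0.0, 1.0, 100001)
rd = f(r)

# f'(r) = 1 - 6r^2 + 5r^4 changes sign at r ~ 0.447, so the map is not
# one-to-one beyond that radius: pick a value on the rising branch...
target = f(0.3)
# ...and find every source radius that lands (nearly) on the same spot
hits = r[np.isclose(rd, target, atol=1e-5)]
print(hits)  # two distinct clusters of source radii, one distorted radius
```

Once both source radii are binned into the same output pixel, no lookup table can separate them again; that is the irreversibility.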
In our case the Bayer filter on the sensor puts an upper limit on the frequencies, since not every pixel has a sample for every color.
This is absolutely not true. The Bayer filter makes the chromatic signal sparse. Sparsity is not equivalent to filtered. The presence of aliasing and things like moire are proof positive that Bayer images can be less than Nyquist sampled. In fact, consumer preferences are for something around Q=1.5 being "ok" sharpness and Q=0.5 being "critical" sharpness. You need Q>2 for Nyquist sampling!
The process of de-Bayering the RAW data will either blur or alias those high frequencies.
aliasing is an indicator that the signal was not Nyquist sampled
The damage done by a good reconstruction filter for moving pixels around will pale in comparison.
The perfect filter function is called Sinc, and it can't be practically implemented since the terms go out to infinity. But you can get some pretty good approximations. I'd expect desktop software to do a better job of this than the camera, since the camera processor is less powerful and has less time to do the job.
Sinc can be practically implemented - exactly, even. The FT of sinc is just a rect function, which has finite support. The camera doesn't calculate sinc, but even if it did it's only about 5 flops. To process a 20MP image with one sinc per pixel, that's 100 Mflops - a measly 800 MHz processor could do it in about 100 milliseconds.
Are you suggesting that the camera could do a full FT of the entire image, do a rectangle window on it, then do another FT to convert back? For each pixel in the image? That's going to be a bit more than 5 flops.
Ignoring the FTs, doing sinc on every pixel for a 20MP image would be 100 megaflops.

Not that it matters, because your camera doesn't do any of that as part of its core ISP, A/D converter notwithstanding. The A/Ds in a camera are clocked superfast and work on nonstationary random signals, so I am not sure they even use reconstruction filters at all. You wouldn't need one for white noise.
As for whether it's better to add a glass element to do the same thing, that's debatable. Every element will add some degradation because glass isn't perfect.
In what way is the glass imperfect and how does that impact the image?

Do you mean homogeneity and things like striae? No substantive impact on an imaging system far from the diffraction limit like a camera lens. Surface roughness? Matters in UV, not VIS. Surface irregularity? The elements in a camera lens are good enough for that to not matter.
If that were true then there would be no sample variation, no decentering, no flare - every lens would be perfect. In the real world that's simply not the case. No two lenses are perfectly identical.
Do you use "lens" to mean "element" or "assembly?"

In any event, perfectly no, but imperceptibly different is possible, e.g. the Canon 50/1.8 has this quality. Or the Zeiss 100/2 MP. The Nikon 300/2.8 in its previous generation, too.
Although we expect the degradation to be minimal there's no denying that it will happen. Certainly the lens designer wouldn't add it if the expected benefit didn't grossly outweigh the disadvantage. But if you could use software to completely eliminate an element, would the resulting image be better or worse? I don't think you can make a definitive statement either way.
If software were king, lithography lenses wouldn't have 40-80 elements in them, they'd have 6 and software thrown on top. Here's one that's a mild derivative of a 90s design, good enough for a 100 nm or so process:

[image: cross-section of a multi-element fused-silica lithography lens]


All that fused silica is definitely because software is superior!
For those of us without semiconductor fab money in our back pocket, your example is a bit unrealistic. I'm not talking about the difference between 80 elements and 6, I'm talking about e.g. the difference between 10 and 9.
My point is that if software is king, the obscene complexity of litho lenses would have vanished a long time ago in exchange for a software solution. You can hire a whole lot of programmers for the ~$10M price tag of one of those lenses.
 
Since we're sticking to science...

There's no need to make up any missing information to warp an image, as long as the stretching is minimal. Information theory says that the true value of any point can be known as long as the proper filter function is used and the input didn't contain any frequencies above the Nyquist limit.
No - sampling theory states that if a signal is Nyquist sampled, its sampled representation contains enough data to recreate exactly the original signal. This has to do only with reconstruction. Correcting distortion is a non-affine, non-reversible transformation and is not a filter. Because it is non-reversible, it is lossy.
I knew I was being a bit imprecise in my language, but I couldn't quite put my finger on it. Thanks for the corrections.

I don't see how a distortion can be non-reversible. Even if you don't know the exact formula of the distortion effect, you can fashion a reversal by simple measurement and interpolation. Then it becomes a simple matter of deciding what precision is sufficient, and this is certainly easier in software than it is in glass.
Suppose you have a distortion function which has a point of inflection in it. This will result in data from just above and just below that point being put into the same pixel. Once you do that reduction, you can't undo it. This is true for any substantial stationary point in the curve, which generally takes the form f(x) := ax + bx^3 + cx^5 + ...

If more than one coefficient is nonzero, you have a stationary point. If they are all zero (i.e., perfect rectilinearity) or only one is nonzero, then the correction is reversible. In practice you will still get some artifacts from reversing the transformation that are related to quantization.
In our case the Bayer filter on the sensor puts an upper limit on the frequencies, since not every pixel has a sample for every color.
This is absolutely not true. The Bayer filter makes the chromatic signal sparse. Sparsity is not equivalent to filtered. The presence of aliasing and things like moire are proof positive that Bayer images can be less than Nyquist sampled. In fact, consumer preferences are for something around Q=1.5 being "ok" sharpness and Q=0.5 being "critical" sharpness. You need Q>2 for Nyquist sampling!
The process of de-Bayering the RAW data will either blur or alias those high frequencies.
aliasing is an indicator that the signal was not Nyquist sampled
The damage done by a good reconstruction filter for moving pixels around will pale in comparison.
The perfect filter function is called Sinc, and it can't be practically implemented since the terms go out to infinity. But you can get some pretty good approximations. I'd expect desktop software to do a better job of this than the camera, since the camera processor is less powerful and has less time to do the job.
Sinc can be practically implemented - exactly, even. The FT of sinc is just a rect function, which has finite support. The camera doesn't calculate sinc, but even if it did it's only about 5 flops. To process a 20MP image with one sinc per pixel, that's 100 Mflops - a measly 800 MHz processor could do it in about 100 milliseconds.
Are you suggesting that the camera could do a full FT of the entire image, do a rectangle window on it, then do another FT to convert back? For each pixel in the image? That's going to be a bit more than 5 flops.
Ignoring the FTs, doing sinc on every pixel for a 20MP image would be 100 megaflops.

Not that it matters, because your camera doesn't do any of that as part of its core ISP, A/D converter notwithstanding. The A/Ds in a camera are clocked superfast and work on nonstationary random signals, so I am not sure they even use reconstruction filters at all. You wouldn't need one for white noise.
As for whether it's better to add a glass element to do the same thing, that's debatable. Every element will add some degradation because glass isn't perfect.
In what way is the glass imperfect and how does that impact the image?

Do you mean homogeneity and things like striae? No substantive impact on an imaging system far from the diffraction limit like a camera lens. Surface roughness? Matters in UV, not VIS. Surface irregularity? The elements in a camera lens are good enough for that to not matter.
If that were true then there would be no sample variation, no decentering, no flare - every lens would be perfect. In the real world that's simply not the case. No two lenses are perfectly identical.
Do you use "lens" to mean "element" or "assembly?"

In any event, perfectly no, but imperceptibly different is possible, e.g. the Canon 50/1.8 has this quality. Or the Zeiss 100/2 MP. The Nikon 300/2.8 in its previous generation, too.
Although we expect the degradation to be minimal there's no denying that it will happen. Certainly the lens designer wouldn't add it if the expected benefit didn't grossly outweigh the disadvantage. But if you could use software to completely eliminate an element, would the resulting image be better or worse? I don't think you can make a definitive statement either way.
If software were king, lithography lenses wouldn't have 40-80 elements in them, they'd have 6 and software thrown on top. Here's one that's a mild derivative of a 90s design, good enough for a 100 nm or so process:

[image: cross-section of a multi-element fused-silica lithography lens]


All that fused silica is definitely because software is superior!
For those of us without semiconductor fab money in our back pocket, your example is a bit unrealistic. I'm not talking about the difference between 80 elements and 6, I'm talking about e.g. the difference between 10 and 9.
My point is that if software is king, the obscene complexity of litho lenses would have vanished a long time ago in exchange for a software solution. You can hire a whole lot of programmers for the ~$10M price tag of one of those lenses.
Thanks... I had no idea these lithography lenses looked like that and cost that much! Super interesting.

--
M43 equivalence: "Twice the fun with half the weight"
"You are a long time dead" -
Credit to whoever said that first and my wife for saying it to me. Make the best you can of every day!
 
Since we're sticking to science...

There's no need to make up any missing information to warp an image, as long as the stretching is minimal. Information theory says that the true value of any point can be known as long as the proper filter function is used and the input didn't contain any frequencies above the Nyquist limit.
No - sampling theory states that if a signal is Nyquist sampled, its sampled representation contains enough data to recreate exactly the original signal. This has to do only with reconstruction. Correcting distortion is a non-affine, non-reversible transformation and is not a filter. Because it is non-reversible, it is lossy.
I knew I was being a bit imprecise in my language, but I couldn't quite put my finger on it. Thanks for the corrections.

I don't see how a distortion can be non-reversible. Even if you don't know the exact formula of the distortion effect, you can fashion a reversal by simple measurement and interpolation. Then it becomes a simple matter of deciding what precision is sufficient, and this is certainly easier in software than it is in glass.
Suppose you have a distortion function which has a point of inflection in it. This will result in data from just above and just below that point being put into the same pixel. Once you do that reduction, you can't undo it. This is true for any substantial stationary point in the curve, which generally takes the form f(x) := ax + bx^3 + cx^5 + ...

If more than one coefficient is nonzero, you have a stationary point. If they are all zero (i.e., perfect rectilinearity) or only one is nonzero, then the correction is reversible. In practice you will still get some artifacts from reversing the transformation that are related to quantization.
Now I understand your point, thanks. Are such inflection points common in real lenses?
In our case the Bayer filter on the sensor puts an upper limit on the frequencies, since not every pixel has a sample for every color.
This is absolutely not true. The Bayer filter makes the chromatic signal sparse. Sparsity is not equivalent to filtered. The presence of aliasing and things like moire are proof positive that Bayer images can be less than Nyquist sampled. In fact, consumer preferences are for something around Q=1.5 being "ok" sharpness and Q=0.5 being "critical" sharpness. You need Q>2 for Nyquist sampling!
The process of de-Bayering the RAW data will either blur or alias those high frequencies.
aliasing is an indicator that the signal was not Nyquist sampled
I'm not arguing at all. I'll just restate my point that if your data is already damaged, a little interpolation won't make it much worse if it's done with care.
The damage done by a good reconstruction filter for moving pixels around will pale in comparison.
The perfect filter function is called Sinc, and it can't be practically implemented since the terms go out to infinity. But you can get some pretty good approximations. I'd expect desktop software to do a better job of this than the camera, since the camera processor is less powerful and has less time to do the job.
Sinc can be practically implemented - exactly, even. The FT of sinc is just a rect function, which has finite support. The camera doesn't calculate sinc, but even if it did it's only about 5 flops. To process a 20MP image with one sinc per pixel, that's 100 Mflops - a measly 800 MHz processor could do it in about 100 milliseconds.
Are you suggesting that the camera could do a full FT of the entire image, do a rectangle window on it, then do another FT to convert back? For each pixel in the image? That's going to be a bit more than 5 flops.
Ignoring the FTs, doing sinc on every pixel for a 20MP image would be 100 megaflops.

Not that it matters, because your camera doesn't do any of that as part of its core ISP, A/D converter notwithstanding. The A/Ds in a camera are clocked superfast and work on nonstationary random signals, so I am not sure they even use reconstruction filters at all. You wouldn't need one for white noise.
I think you and I must have a different interpretation of Sinc. See Wikipedia: https://en.wikipedia.org/wiki/Sinc_function
As for whether it's better to add a glass element to do the same thing, that's debatable. Every element will add some degradation because glass isn't perfect.
In what way is the glass imperfect and how does that impact the image?

Do you mean homogeneity and things like striae? No substantive impact on an imaging system far from the diffraction limit like a camera lens. Surface roughness? Matters in UV, not VIS. Surface irregularity? The elements in a camera lens are good enough for that to not matter.
If that were true then there would be no sample variation, no decentering, no flare - every lens would be perfect. In the real world that's simply not the case. No two lenses are perfectly identical.
Do you use "lens" to mean "element" or "assembly?"

In any event, perfectly no, but imperceptibly different is possible, e.g. the Canon 50/1.8 has this quality. Or the Zeiss 100/2 MP. The Nikon 300/2.8 in its previous generation, too.
I mean assembly.

I have not had the good fortune to experience the lenses you mention. Given your day job I assume you speak from a position of authority. I wish they could all be as consistent; my own experience has been less lucky.
Although we expect the degradation to be minimal there's no denying that it will happen. Certainly the lens designer wouldn't add it if the expected benefit didn't grossly outweigh the disadvantage. But if you could use software to completely eliminate an element, would the resulting image be better or worse? I don't think you can make a definitive statement either way.
If software were king, lithography lenses wouldn't have 40-80 elements in them, they'd have 6 and software thrown on top. Here's one that's a mild derivative of a 90s design, good enough for a 100 nm or so process:

[image: cross-section of a multi-element fused-silica lithography lens]


All that fused silica is definitely because software is superior!
For those of us without semiconductor fab money in our back pocket, your example is a bit unrealistic. I'm not talking about the difference between 80 elements and 6, I'm talking about e.g. the difference between 10 and 9.
My point is that if software is king, the obscene complexity of litho lenses would have vanished a long time ago in exchange for a software solution. You can hire a whole lot of programmers for the ~$10M price tag of one of those lenses.
Lithography is an entirely different kettle of fish from photography, with different constraints, both optical and budgetary. I can see how a solution in one realm would be completely impractical in the other.
 
The statement? Sure, it's pretty simple: whatever software corrections you want to do, they come on top of whatever the optics capture to begin with.

You said it yourself - an approximation. You have to stretch pixels, which you wouldn't have to do with optical elements.

We can say more glass would make the lens heavier? Sure. Pricier? Sure. Bigger? Sure. Fewer "t-stops"? Sure.

But worse when it comes to the image itself versus a software correction? Don't think so.

Compare software enhancements on an optically corrected lens vs. one that needed them for reasonable distortion, and the one that started with the better data will come out ahead.

Once again, this is one reason why 4/3 lenses were overall better than m43 lenses. It's clear that from the inception of the m43 standard some of these requirements were relaxed (goodbye required telecentricity, for example) to make the lenses both smaller and cheaper. It was even publicly stated.
 
Thanks for sticking to science. Some people here have the *fantasy* that software corrections for a lens are better than optical ones for image quality. They are better for making a lens cheaper, perhaps, or smaller, but not for image quality.

How people can think that inventing information where there's no original information will yield the same result, I don't know.

This is a key reason why 4/3 lenses are optically better overall than m43 lenses.
Are they always? I swapped my 12-60 for my 12-40 because it gave better results. Mustache distortion is a pita.
4/3 lenses? Usually, yes, they are better, comparing equal with equal.
 
We can say more glass would make the lens heavier? Sure. Pricier? Sure. Bigger? Sure. Fewer "t-stops"? Sure.

But worse when it comes to the image itself versus a software correction? Don't think so.
I didn't say worse - I said it was a toss-up. Optical correction doesn't come for free.
 
As already mentioned, software is a lot cheaper than expensive optical glass. If 2011 wasn't bad enough for Olympus Corporation, then 2012 was even worse. To quote from Wikipedia:


"Olympus and wider aftermath after the scandal
In 2012, Olympus also announced it would shed 2,700 jobs (7% of its workforce)[15] and around 40 percent of its 30 manufacturing plants by 2015 to reduce its cost base.[16] In July 2013, Kikugawa and Mori were both sentenced to 3 years in prison, 5 years suspended. The auditor who had been party to the fraud was sentenced to 2.5 years in prison, 4 years suspended. Olympus was fined 700 million yen ($7 million USD). In April 2014, six banks filed a civil suit against Olympus over the fraud, seeking an additional 28 billion yen in damages.”


So it's no great surprise the company sought an alternative route when it came to supplying lenses that worked software-wise one-to-one with a camera. I have a number of Olympus Pro lenses. I find the 12-100 mm f/4 with dual IS on an E-M1 mkII a very useful day-to-day workhorse. I'm certainly not complaining when it comes to the 'apparent' quality of the optics.
 
Is there a more miserable and ridiculous term in photography than the concept of a crop sensor?

It leaves me too depressed to even look at the text.
 
Since we're sticking to science...

There's no need to make up any missing information to warp an image, as long as the stretching is minimal. Information theory says that the true value of any point can be known as long as the proper filter function is used and the input didn't contain any frequencies above the Nyquist limit.
No - sampling theory states that if a signal is Nyquist sampled, its sampled representation contains enough data to recreate exactly the original signal. This has to do only with reconstruction. Correcting distortion is a non-affine, non-reversible transformation and is not a filter. Because it is non-reversible, it is lossy.
I knew I was being a bit imprecise in my language, but I couldn't quite put my finger on it. Thanks for the corrections.

I don't see how a distortion can be non-reversible. Even if you don't know the exact formula of the distortion effect, you can fashion a reversal by simple measurement and interpolation. Then it becomes a simple matter of deciding what precision is sufficient, and this is certainly easier in software than it is in glass.
Suppose you have a distortion function which has a point of inflection in it. This will result in data from just above and just below that point being put into the same pixel. Once you do that reduction, you can't undo it. This is true for any substantial stationary point in the curve, which generally takes the form f(x) := ax + bx^3 + cx^5 + ...

If more than one coefficient is nonzero, you have a stationary point. If they are all zero (i.e., perfect rectilinearity) or only one is nonzero, then the correction is reversible. In practice you will still get some artifacts from reversing the transformation that are related to quantization.
Now I understand your point, thanks. Are such inflection points common in real lenses?
Moustache distortion is a prominent inflection point. Even simple barrel distortion tends to have a nonstationary point, but not an inflection point.
In our case the Bayer filter on the sensor puts an upper limit on the frequencies, since not every pixel has a sample for every color.
This is absolutely not true. The Bayer filter makes the chromatic signal sparse. Sparsity is not equivalent to filtered. The presence of aliasing and things like moire are proof positive that Bayer images can be less than Nyquist sampled. In fact, consumer preferences are for something around Q=1.5 being "ok" sharpness and Q=0.5 being "critical" sharpness. You need Q>2 for Nyquist sampling!
The process of de-Bayering the RAW data will either blur or alias those high frequencies.
aliasing is an indicator that the signal was not Nyquist sampled
I'm not arguing at all. I'll just restate my point that if your data is already damaged, a little interpolation won't make it much worse if it's done with care.
Sure it will -- the loss for fixing 2% distortion is usually about 12% when you measure with Imatest (which has its own problems...).
The damage done by a good reconstruction filter for moving pixels around will pale in comparison.
The perfect filter function is called Sinc, and it can't be practically implemented since the terms go out to infinity. But you can get some pretty good approximations. I'd expect desktop software to do a better job of this than the camera, since the camera processor is less powerful and has less time to do the job.
Sinc can be practically implemented - exactly, even. The FT of sinc is just a rect function, which has finite support. The camera doesn't calculate sinc, but even if it did it's only about 5 flops. To process a 20MP image with one sinc per pixel, that's 100 Mflops - a measly 800 MHz processor could do it in about 100 milliseconds.
Are you suggesting that the camera could do a full FT of the entire image, do a rectangle window on it, then do another FT to convert back? For each pixel in the image? That's going to be a bit more than 5 flops.
Ignoring the FTs, doing sinc on every pixel for a 20MP image would be 100 megaflops.

Not that it matters, because your camera doesn't do any of that as part of its core ISP, A/D converter notwithstanding. The A/Ds in a camera are clocked superfast and work on nonstationary random signals, so I am not sure they even use reconstruction filters at all. You wouldn't need one for white noise.
I think you and I must have a different interpretation of Sinc. See Wikipedia: https://en.wikipedia.org/wiki/Sinc_function
There is not much interpretation to do over a mathematical function. If I may be crystal clear, I bolded your statement above which is incorrect. Your camera does not use sinc. AHD, AMAZE, and other debayering algorithms do not generally use this function.
As for whether it's better to add a glass element to do the same thing, that's debatable. Every element will add some degradation because glass isn't perfect.
In what way is the glass imperfect and how does that impact the image?

Do you mean homogeneity and things like striae? No substantive impact on an imaging system far from the diffraction limit like a camera lens. Surface roughness? Matters in UV, not VIS. Surface irregularity? The elements in a camera lens are good enough for that to not matter.
If that were true then there would be no sample variation, no decentering, no flare - every lens would be perfect. In the real world that's simply not the case. No two lenses are perfectly identical.
Do you use "lens" to mean "element" or "assembly?"

In any event, perfectly no, but imperceptibly different is possible, e.g. the Canon 50/1.8 has this quality. Or the Zeiss 100/2 MP. The Nikon 300/2.8 in its previous generation, too.
I mean assembly.

I have not had the good fortune to experience the lenses you mention. Given your day job I assume you speak from a position of authority. I wish they could all be as consistent; my own experience has been less lucky.
Although we expect the degradation to be minimal there's no denying that it will happen. Certainly the lens designer wouldn't add it if the expected benefit didn't grossly outweigh the disadvantage. But if you could use software to completely eliminate an element, would the resulting image be better or worse? I don't think you can make a definitive statement either way.
If software were king, lithography lenses wouldn't have 40-80 elements in them, they'd have 6 and software thrown on top. Here's one that's a mild derivative of a 90s design, good enough for a 100 nm or so process:

[image: cross-section of a multi-element fused-silica lithography lens]


All that fused silica is definitely because software is superior!
For those of us without semiconductor fab money in our back pocket, your example is a bit unrealistic. I'm not talking about the difference between 80 elements and 6, I'm talking about e.g. the difference between 10 and 9.
My point is that if software is king, the obscene complexity of litho lenses would have vanished a long time ago in exchange for a software solution. You can hire a whole lot of programmers for the ~$10M price tag of one of those lenses.
Lithography is an entirely different kettle of fish from photography, with different constraints, both optical and budgetary. I can see how a solution in one realm would be completely impractical in the other.
How so? Both are imaging over an extended field of view. The bar for lithography is a lot higher, as is the budget. Otherwise, they are principally the same task.
 
We can say more glass would make the lens heavier? Sure. Pricier? Sure. Bigger? Sure. Fewer "t-stops"? Sure.

But worse when it comes to the image itself versus a software correction? Don't think so.
I didn't say worse - I said it was a toss-up. Optical correction doesn't come for free.
No, but software correction builds on whatever the optics give you. So if you start with worse, you can't simply interpolate, no matter what function you find, because it's a guess. You mention Bayer acting as a sampling filter already; now you are applying yet another layer on top of missing data. That's not gonna work.

But again, the proof is in the pudding: compare even crap ultrawide 4/3 with m43; the latter has more distortion correction applied. Which one is better? Short answer: 4/3.

Theory is off if empirical evidence proves otherwise.
 
We can say more glass would make the lens heavier? Sure. Pricier? Sure. Bigger? Sure. Fewer "t-stops"? Sure.

But worse when it comes to the image itself versus a software correction? Don't think so.
I didn't say worse - I said it was a toss-up. Optical correction doesn't come for free.
No, but software correction builds on whatever the optics give you. So if you start with worse, you can't simply interpolate, no matter what function you find, because it's a guess. You mention Bayer acting as a sampling filter already; now you are applying yet another layer on top of missing data. That's not gonna work.

But again, the proof is in the pudding: compare even crap ultrawide 4/3 with m43; the latter has more distortion correction applied. Which one is better? Short answer: 4/3.

Theory is off if empirical evidence proves otherwise.
It all depends on the quality of the software. The in-camera software can't do the best job because it operates under severe time constraints on an underpowered processor. RAW converters could do a better job, but I don't know how the current ones stack up.

I'll accept your observations on the current state of affairs, but I think it's a mistake to dismiss the technique out of hand.
 
