Najinsky
Guest
Long post. Please read at leisure!
There has been an interesting sub-thread running here about the GR and its lens.
For two reasons, I wanted to give it its own thread. First, it's a sub-thread in a thread about GR 3 expectations, so not really the right place for it (and it has resulted in that thread starting to fill up). Second, the discussion seems to be turning a little tit-for-tat, so I wanted to start with a clean slate.
Let me start off by saying that my position is that I do think there is some circumstantial evidence that the GR may be applying software corrections for lens distortion to the raw files. That's what got me thinking about it and why I'm interested in discussing it. However, I also think that if it's true, and done in the way I suspect it could be, it is an achievement Ricoh should be commended for, certainly not criticised.
First we need to understand the monster, the real monster. This was the image posted in the other sub-thread as an example from the Leica Q.
The barrel distortion of the lens is obvious and, if this were a film camera, likely unacceptable for most. In the analog world, the distortion is the monster.
However, in the digital age, we have the ability to correct for the distortion.
And once corrected, all is well in the world again, nearly.
As can be seen, some information has been lost in the corrected image, the most obvious being the target circles in the far corners, lost to scaling and cropping. In the digital world, information loss is the monster.
However, there can be other types of data loss too; some of them are even desirable!
For example, a thick line can become a slightly thinner line. It's important to remember the image is distorted: the thick line is the distortion, the fake data; the thinner line is the corrected, more accurate data.
A correction could perhaps be made in the glass, but this can affect complexity, size, cost and light transmission, and it may (will) also introduce artefacts. So there is a very important design trade-off to be made.
An interesting question is which would be the more accurate correction, optical or digital?
Given a lens has a kind of resolution, what you essentially have is an image being projected onto a grid, and due to distortion, some elements of the image are in the wrong locations in the grid. Distortion correction is essentially an attempt to move those image elements back to their correct locations.
For optical correction, the issues are as noted above, plus the fact that even with a corrective element the correction may not be perfect and may leave a less severe but more complex residual distortion, such as moustache distortion.
For digital correction, the calculations can be very precise, but the issue comes from missing or merging data. For example with barrel distortion, at the edges there is no additional data available beyond the edges to re-map into the edge locations. The resulting image therefore has blank areas at the edge giving a non-rectangular image, and fixing this results in a crop, although the process can vary and is down to the specific software used to perform the correction.
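To make the remapping concrete, here is a minimal sketch (Python/NumPy) of the kind of inverse mapping a correction step might perform. The one-term radial model, the k1 value, and the choice to scale so the recorded corners are kept are illustrative assumptions on my part, not the Q's or the GR's actual correction profile:

```python
import numpy as np

def correct_barrel(img, k1=-0.08):
    """Undo simple barrel distortion by inverse mapping: for each output
    pixel, work out where its content was recorded and sample it there.
    The one-term model and k1 < 0 are illustrative assumptions only."""
    h, w = img.shape[:2]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    half_diag = np.hypot(cx, cy)

    # Zoom-out factor s chosen so the recorded corners stay on the output
    # corners (preserve the diagonal): solve s * (1 + k1 * s**2) = 1.
    s = 1.0
    for _ in range(50):            # simple fixed-point solve, fine for mild k1
        s = 1.0 / (1.0 + k1 * s * s)

    yy, xx = np.mgrid[0:h, 0:w].astype(np.float64)
    xn, yn = (xx - cx) / half_diag, (yy - cy) / half_diag
    r_ideal = s * np.hypot(xn, yn)              # ideal radius shown at this pixel
    factor = s * (1.0 + k1 * r_ideal ** 2)      # output position -> source position
    xs, ys = cx + xn * factor * half_diag, cy + yn * factor * half_diag

    # Blank wherever the required source data lies outside the recorded frame.
    inside = (xs >= 0) & (xs <= w - 1) & (ys >= 0) & (ys <= h - 1)
    out = np.zeros_like(img)
    # Nearest-neighbour sampling keeps the sketch short; real converters
    # interpolate, which is where a thick line comes back slightly thinner.
    out[inside] = img[ys[inside].round().astype(int),
                      xs[inside].round().astype(int)]
    return out, inside

img = np.random.rand(816, 1224)                 # stand-in for a distorted frame
corrected, valid = correct_barrel(img)
print(f"{(~valid).mean():.1%} of the corrected frame has no recorded data")
```

Note where the blanks appear: around the middles of the edges, because the content that belongs there fell outside the recorded frame. The rectangular crop that removes those blanks is what finally throws away the filled corners, the target circles in the Q example.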
For me, I see the end goal as being to "ensure the lens transmits enough good quality information to produce the desired result efficiently within the design constraints (quality, performance, size, cost)".
I feel it's very important to keep this goal at the forefront of considerations in order not to get bogged down in technicalities, and also so as not to let preconceptions or misconceptions lead to an irrational bias.
In the above Leica Q example, the distorted image is saved in the raw file, which means that to later get a usable image, correction is required and some data loss will occur. Just for the sake of discussion, let's assume the raw file is 16MP and that after correction we are left with a highly usable 15MP image. But 1MP of data has been lost, so the monster lives.
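Purely to put a number on that illustration (these are not measured Leica Q figures), losing 1MP out of 16MP corresponds to a surprisingly small linear crop:

```python
# Back-of-envelope only, using the illustrative 16MP -> 15MP figures above.
w, h = 4928, 3264                      # a typical 3:2 16MP frame (assumed, not the Q's spec)
linear_kept = (15.0 / 16.0) ** 0.5     # the crop acts on both width and height
print(f"linear crop factor ~{linear_kept:.3f}")                         # ~0.968
print(f"kept frame ~{round(w * linear_kept)} x {round(h * linear_kept)} px")
```

In other words, the missing megapixel is a border of very roughly 50-80 pixels around the frame: small, but it is still real information gone.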
Now, to fully consider the case for storing corrected data in the raw file, we need to take a closer look at what is going on with the sensor.
One of the specs DPR like to list, when available, is not only the effective number of pixels, but the total number of sensor pixels (more accurately, photo detectors).
Here's the Ricoh spec:
And here is the spec from a Sony with a similar sensor:
Notice how both are 16MP cameras, but have a sensor with 17M photo detectors. The Ricoh claims to be just a tiny fraction bigger and offers an extra 16 pixels of width.
There has been much discussion and speculation about why the sensor has more detectors than are used for the final image data. Some known reasons are, for example:
- Lack of near neighbours makes the edge detectors less usable for image data
- Digital stabilisation (primarily video)
So ultimately it is down to the makers to decide what to do with this extra data. For example, depending on the imaging circle of the lens, it could be used to provide a slightly longer edge on a 16:9 or square format image, as Panasonic did on their multi-aspect sensor in some of their M43 models.
Perhaps you can see where I'm going with this now?
In the Leica Q example above, the resulting 16MP raw file needed correcting, so essentially left us with a 15MP corrected image.
But what if Ricoh saw the potential to use the extra data as a way to accept a slight barrel distortion in the lens (thus keeping down size, cost, artefacts, etc.), but still be able to deliver a full 16MP distortion-free raw, by correcting the 17MP sensor data before saving it to the 16MP crop raw file?
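As a rough sanity check on whether a modest border of spare photosites could even cover such a correction, here is a hypothetical back-of-envelope sketch. It estimates how far beyond the delivered frame a corner-preserving barrel correction would need to reach, which could then be compared against however many spare photosites per side the sensor really offers. The one-term model, the k1 values, and the 4928 x 3264 frame size are my assumptions, not anything Ricoh has published:

```python
import numpy as np

def needed_margin_px(out_w, out_h, k1):
    """Worst-case distance (in photosites) that a corner-preserving barrel
    correction must reach beyond the delivered out_w x out_h frame.
    Assumes a one-term radial model; k1 < 0 means barrel distortion."""
    hw, hh = out_w / 2.0, out_h / 2.0
    half_diag = np.hypot(hw, hh)
    s = 1.0                              # zoom-out so the frame corner is preserved:
    for _ in range(50):                  # fixed-point solve of s * (1 + k1 * s**2) = 1
        s = 1.0 / (1.0 + k1 * s * s)
    # Walk the output frame boundary and find where its content was recorded.
    t = np.linspace(-1.0, 1.0, 2001)
    edges = np.concatenate([np.stack([t * hw, np.full_like(t, hh)], axis=1),   # long edge
                            np.stack([np.full_like(t, hw), t * hh], axis=1)])  # short edge
    r_out = np.hypot(edges[:, 0], edges[:, 1]) / half_diag
    r_src = s * r_out * (1.0 + k1 * (s * r_out) ** 2)    # where that content was recorded
    src = edges * (r_src / r_out)[:, None]               # source positions, px from centre
    beyond = np.maximum(np.abs(src[:, 0]) - hw, np.abs(src[:, 1]) - hh)
    return max(0.0, float(beyond.max()))                 # extra rows/columns needed per side

for k1 in (-0.005, -0.01, -0.08):
    print(f"k1={k1}: needs ~{needed_margin_px(4928, 3264, k1):.0f} spare photosites per side")
```

On these assumed numbers, only a very gentle residual barrel could be absorbed by a small border of spare photosites; stronger distortion would need far more margin than a sensor is likely to have spare. Either way, the principle holds: correct before the raw is written and the delivered frame never needs a later crop.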
Given the choice, which would you prefer them to do?
a) throw away 1MP of edge data and deliver a 16MP distorted file, which you can then only turn into a 15MP image due to the missing edge data.
b) use the 17MP sensor data to correct the distortion and deliver a 16MP distortion-free image that doesn't need correcting. The monster is slain.
Of course, this is all speculation. Well, except for that end part, because as you all know, they do in fact deliver a 16MP, near distortion-free raw file, and I think the most appropriate words for that are: however you did it, well done, Ricoh.
--
Andy
Try reading comments with a smile. You may discover they were written that way.