Of monsters, misconceptions and distortion

  • Thread starter: Najinsky
The GR's lens goes toe to toe with a camera/lens I seriously lust after, and it does pretty well in the comparison.
 
The Ricoh GR lens is based on the original GR1 lens, which was a revolutionary design for a 28mm lens. It was so good that Ricoh even made a Leica M version. Its only limitation was its maximum aperture of f/2.8.
I have no doubt it's a great lens; I love my GR lenses. But getting so close to perfect from such a small lens, covering an APS-C sensor in such a compact body, would be a truly remarkable achievement, even for Ricoh.
I might be missing something here, but the GR1 produced distortion-free pictures on 35mm, which is considerably larger than an APS-C sensor. Is there any reason why this would be harder to achieve on a smaller, digital sensor?
I don't really have any experience with the film GR cameras (though I would probably pick one up if I happened across a used one). All my GR lenses have been on digital cameras, but I find them very impressive, and they are some of my favourite lenses.

In general, lenses designed for smaller sensors can be smaller, and smaller lenses can be harder to make well, for example because of tighter tolerances. But mainly it's because they are destined for a smaller 'crop' sensor: the net effect of a crop sensor is to magnify the final image, and with it any imperfections in the lens.

In the case of the digital GR, it's actually an 18.3mm lens, but projected onto an APS-C sensor it gives the same view as a 28mm lens does on the 135 (full-frame) format.
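To make the equivalence concrete, here is a quick sketch of the crop-factor arithmetic (the 23.7 x 15.7 mm APS-C dimensions are approximate, not Ricoh's published spec):

```python
# Crop factor: full-frame diagonal divided by the crop sensor's diagonal.
# Equivalent focal length: actual focal length times the crop factor.
import math

def crop_factor(width_mm, height_mm, ref=(36.0, 24.0)):
    """Ratio of the 135-format diagonal to this sensor's diagonal."""
    return math.hypot(*ref) / math.hypot(width_mm, height_mm)

cf = crop_factor(23.7, 15.7)       # approximate APS-C dimensions
print(round(cf, 2))                # ~1.52
print(round(18.3 * cf, 1))         # ~27.9mm, i.e. the familiar "28mm" view
```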

Again, keeping it general, it's very rare to find a wide-angle lens with no barrel distortion. Even when additional corrective elements are used in the design, there is usually some residual distortion; it's simply that the amount of distortion is no longer obvious to the naked eye. The following article is a useful read regarding distortion:


One thing we do know for a fact about the digital GR is that the lens construction is 7 elements in 5 groups, so we can be pretty sure that corrective elements have been kept to a minimum, and the lion's share of its performance must come from an efficient optical design.

One area where the move from analog to digital is starting to have a negative impact: digital makes it so much easier to measure and test things that we seem to be becoming more obsessed with measuring and testing than with real-world shooting.

I haven't seen any 'optical testing' results for the film GR, but I suspect the majority of testing/review was done by taking real-world photos and looking at them, rather than the chart-and-numbers testing we have become immersed in today. But I guess I could be wrong about that.

Some people are becoming concerned that the quest for high scores in lens tests is having a detrimental effect on other rendering qualities of a lens, due to the increasing number of elements in a lens. There is an interesting article about this over at PetaPixel.


Whatever the case may be with Ricoh, whether a near-perfect optical design (very rare), a highly efficient design with a touch of barrel correction in software (utilising the sensor edge data), or something else, the results speak for themselves and it's a very fine camera.
 
The GR's lens goes toe to toe with a camera/lens I seriously lust after, and it does pretty well in the comparison.
Indeed, the post was written and intended as a good-news story, hence the closing paragraph:
Of course, this is all speculation, well except for that end part, because as you all know, they do in fact deliver a 16MP near distortion-free raw file, and I think the most appropriate words for that are, however you did it, well done Ricoh.
I can see a couple of people have their twickers in a knist over it, but I don't really follow why. The post is simply saying: Ricoh did a great job, I wonder how they did it; and it's looking for clues.
 
Sorry, I missed the requested link(s) to the tests you claimed had been done. Could you point them out, or were these tests just another thought experiment?
Are you just trolling, or are you really clueless about digital cameras, optics, and image processing, and have never heard of Google?

J.
Why are you getting so nasty about this? Why all this negative spin using words like troll, clueless and (from your previous post) worthless?

The OP was clearly stated as speculation for discussion. You said tests had been done and so I was interested. Of course I know how to use Google, and I did, and I was unable to find them, so I asked you for links. I asked nicely and even said please, look:
It has been tested by various places and has negligible distortion - around 0.3%. Some processing software such as DxO Optics Pro corrects that minor distortion, but the raw files from the GR do not.
Tested how? Was the camera disassembled and mounted on some kind of test equipment? Do you have a source for that please?

If tested from the camera raw file (which is how DxO does most of its testing) and pre-raw correction is occurring, then the test is on already-corrected data, which would also explain the extremely low distortion.
Your response ignored the request and started down the nasty path with the "worthless" remarks.

I repeated the request, and now you respond with these "trolling" and "clueless" remarks.

Here's how it's looking to me. The testing you were initially referring to was probably based on raw files. Subsequently you realised that for the speculative condition under discussion, the hypothetical scenario of raw files containing pre-corrected data, the raw-file based tests were not relevant for the purposes of that discussion.

But rather than admit to the initial, understandable, disconnect, you instead elected to go classic warmongering 101: never-admit-to-making-a-mistake, attack-the-messenger.

Does that about sum it up?

If so, please go directly to the naughty step and spend 10 minutes contemplating how you can be nicer to people.

--
Andy
Try reading comments with a smile. You may discover they were written that way.
 
[snip]

I haven't seen any 'optical testing' results for the film GR, but I suspect the majority of testing/review was done by taking real-world photos and looking at them, rather than the chart-and-numbers testing we have become immersed in today. But I guess I could be wrong about that.

[snip]
One of the charts commonly used for testing is the 1951 USAF chart, so I think the chart-based methods pre-date digital photography.

What has changed with digital photography is that the digital image is available directly, allowing MTF curves, distortion, etc. to be calculated with much less effort and with greater repeatability than with film.

J.
 
Really, I'm not trying to be nasty. It's just that your suggestions are so implausible (for reasons extensively discussed previously) that logic leads to only two possible alternatives:
  1. You are speaking about things you know nothing about; or
  2. You are being deliberately obtuse to get a rise out of people (a.k.a. trolling).
Either alternative means it isn't worth continuing any discussion. If someone says to you that there is a Meissen teapot orbiting Pluto, offers no evidence, but insists it is up to you to prove them wrong, then the best strategy is to leave them to it.

Good luck with your teapot hunt! Cheerio!

J.
 
So let's examine your evidence for distortion correction of the raw data, and show with a few minutes of thought that it is worthless.
Oh dear, it's never a good idea to examine evidence with a preconceived bias, as it introduces the risk of the examination not being balanced.

But in this case, it is your desire to do so that has tripped you up before you are even out of the starting blocks.

You see, I never introduced any evidence. If you go back and re-read my OP, you will see that at the beginning I wrote:
Let me start off by saying my position is that I do think there is some circumstantial evidence that the GR may be applying software corrections to the raw files for lens distortion.
But I never mentioned what I considered the "circumstantial" evidence to be. Instead, the remainder of the OP was about how, IF it was being done, the possibility of utilising all of the sensor's photodetectors over the full sensor area could perhaps lead to a superior result than the de facto standard method of storing the uncorrected raw data and correcting it later with a reduced data set.

That's all the OP says, and it identifies itself openly and honestly as speculation with the concluding paragraph:
Of course, this is all speculation, well except for that end part, because as you all know, they do in fact deliver a 16MP near distortion-free raw file, and I think the most appropriate words for that are, however you did it, well done Ricoh.
In fact, most of this "evidence" is being introduced by you and Ron, seemingly for the purpose of attributing it to someone else (me) so it can then be criticised.

My initial instinct was to simply ignore these posts, but seeing as they have been introduced, I'm happy to give an opinion on the points raised.
Bill (for whom I have great respect) has produced a 2D Fourier transform from part of a GR raw file (it is only 256x256 pixels), or maybe many blocks stacked together. The FT shows a correlation indicating nearest-neighbour image processing. Is that something different from the usual demosaicing? Without more information, I can't say.
It may be something different. One of the links Bill provided was an introduction to FT, in which there are various graphs, including what is expected from a well-behaved sensor and from various types of processing. The GR files Ron supplied showed deviation from this, hence Bill's assessment that there was evidence of signal processing. The link is here (I know you know how to use Google; I include it just for your convenience ;-) )

http://www.photonstophotos.net/Gene...2D_Fourier_Transforms_for_Sensor_Analysis.htm

It is also important to bear in mind that Bill (and Jack) are figuring this out as they go along, essentially looking to identify different processes from different types of analysis. It is very much a work in progress.
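For anyone wanting to try the kind of check Bill describes, here is a minimal sketch with synthetic data standing in for a real raw file (the filter and the edge/centre comparison are illustrative assumptions, not Bill's actual method):

```python
# 2D-FFT check: pure sensor noise should give a flat magnitude spectrum;
# nearest-neighbour style processing leaves a visible low-pass signature.
import numpy as np

rng = np.random.default_rng(0)
patch = rng.normal(0.0, 1.0, (256, 256))   # stand-in for a uniform raw block

# Simulate simple nearest-neighbour processing: average each pixel
# with its right-hand neighbour.
processed = 0.5 * (patch + np.roll(patch, 1, axis=1))

def spectrum(block):
    return np.abs(np.fft.fftshift(np.fft.fft2(block)))

def edge_to_centre(s):
    # Energy at the highest horizontal frequencies vs. near the centre;
    # filtering suppresses the spectrum edges.
    return s[:, :8].mean() / s[:, 120:136].mean()

raw_spec = spectrum(patch)
proc_spec = spectrum(processed)
print(round(edge_to_centre(raw_spec), 2))    # close to 1: flat spectrum
print(round(edge_to_centre(proc_spec), 2))   # well below 1: filtered
```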
You claim that this shows evidence for distortion correction. Distortion correction is not nearest-neighbour processing. It is an interpolation and re-mapping process from a curvilinear to a rectilinear grid that varies across the image. What would the frequency-domain signature of such a transformation look like? Do you know? I find it very hard to guess - some sort of slight bias in the spectrum that would increase as you looked at blocks closer to the corners, I suppose - not what you have shown, anyway.
I made no such claim. Show me where I made such a claim and I will donate £10 to your favourite charity.

Again, I feel I should point out that I think maybe it is your desire to look at the discussion with a preconceived bias that is causing you to make these mistakes.

This "evidence" was in relation to Ron's claim that the S&T thread offered a conclusive debunking of the idea that distortion correction might be happening.

My input was simply to point out:
  • That Bill confirmed he has seen evidence of some processing in raw files during his testing of many sensors
  • That the GR raws he analysed from Ron showed some evidence of signal processing
  • That Jack's comments were made with a disclaimer that evidence could be hidden due to the sensitivity of the analysis
  • And that, therefore, the thread is far from conclusive; in fact I find it rather inconclusive.
So let's discount your 2D FT "evidence" for what it is - blowing smoke -
It's not my evidence, it's Ron's and Bill's, and it is what it is: evidence of some signal processing.
and look at something a bit more obvious. If Ricoh is going to all the effort of manipulating the raw data to correct lens distortion digitally, why did they reduce it to 0.3%, leaving a residual distortion that can be corrected to practically zero by programs such as DxO Optics Pro? It doesn't make any sense, does it, unless that 0.3% is the actual optical distortion of the lens?
Just to point out, you are introducing this, not me. So when you and Ron are having a chuckle about it in those other posts, you are laughing at yourselves.

My thoughts? A few spring to mind, and please remember, these are simply thoughts about what could explain the scenario you have just introduced.

1. (More likely) Processing efficiency. There could be performance implications in doing the calculations very accurately, so a simpler but good-enough calculation is performed for efficiency.

In fact, there is already something in a similar vein happening with the GR, as reported over at Imaging Resource, where the continuous shooting speed for JPEG is 4fps but rises to 6fps for raw. A plausible explanation for this is that extra time is needed to process the raw into a JPEG.

Essentially, any processing of the raw affects performance, so it would make perfect sense to use a faster but not quite as accurate algorithm.

Here's a link to the IR site.

http://www.imaging-resource.com/PRODS/ricoh-gr/ricoh-grA6.HTM

2. (Less likely) Sample variation. The calculation was designed for a reference lens but individual copies may vary.

3. (Conspiracy) Making something too perfect may attract unwanted attention.
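For what it's worth, the remapping such a correction performs is not exotic. Here is a sketch using a one-term radial model with nearest-neighbour sampling, the "fast but good enough" option from point 1 (the model and the constant k are illustrative; this is not Ricoh's actual firmware):

```python
# Undo radial (barrel) distortion by remapping: for each output pixel,
# sample the source image at its distorted location. Nearest-neighbour
# sampling is cheap; bilinear interpolation is more accurate but slower.
import numpy as np

def undistort_nearest(img, k):
    """One-term radial model: r_src = r * (1 + k * r^2), r normalised.
    k < 0 corrects barrel distortion; k = 0 is the identity."""
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    u, v = (xx - cx) / cx, (yy - cy) / cy      # normalised coordinates
    scale = 1.0 + k * (u * u + v * v)
    sx = np.clip(np.rint(cx + u * scale * cx).astype(int), 0, w - 1)
    sy = np.clip(np.rint(cy + v * scale * cy).astype(int), 0, h - 1)
    return img[sy, sx]

img = np.arange(64 * 64, dtype=float).reshape(64, 64)
assert np.array_equal(undistort_nearest(img, 0.0), img)  # identity when k=0
corrected = undistort_nearest(img, -0.003)               # ~0.3%-scale tweak
```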
The signature of the Ricoh GR series was originally a highly-compact 28 mm lens covering the full 35 mm frame, with exceptional optical performance but relatively modest aperture. Your idea that in the current version Ricoh for some inexplicable reason replaced it with an under-corrected lens relying on software correction of distortion is, literally, nonsense, in that it doesn't make any sense logically.
Again, this is "evidence" being introduced by you, not me, but again I'm happy to offer my thoughts.

You seem to be saying it's nonsense because you think it's nonsense?

The digital GR lens is a different lens: a new 18.3mm design with an imaging circle intended to cover APS-C. When designing the film GR lens, digital correction was not an option; it had to be optical.

But in the digital era, digital correction is an option, and for many companies one they choose to take.

It's been discussed often, and there are a number of DPR articles about it and numerous white papers on the subject.

The way I understand it, lens design is highly iterative. At some point in the design process a decision has to be made: add further corrective elements, which add size, weight, complexity, cost, etc. and still won't be "perfect", or accept the design and correct in software. There is nothing nonsensical about it; it's how the industry works now.

It just becomes a decision about how 'pure' to make the lens and how much correction is acceptable for the desired quality.

Personally, I don't find it hard to imagine Ricoh could produce a highly efficient optical design with only very slight barrel distortion. And slight barrel distortion is very easy to correct digitally.

I have some high-quality M43 lenses like this: optically very good, digitally corrected, better still.
Other cameras that use this approach to distortion correction such as µ4/3, Leica Q, etc. do it by including the corrections as meta-data in the raw file, so it is easy to detect and measure the uncorrected distortion.
Indeed, that is how they choose to do it. And it has merit, especially for an interchangeable-lens camera. But it has the downside that a distorted image is saved in the raw, which may need correcting before use and will lead to some information loss.

But it's not the only way it could be done.

If Ricoh were to do it at the raw stage, with access to the full 17MP sensor data, they could end up with a corrected 16MP raw file that was ready to go, without further correction, somewhat similar to the raw file the GR is actually producing.
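As a rough sanity check of that idea (pure back-of-envelope arithmetic, not anything Ricoh has published), the headroom between a 17MP sensor and a 16MP output is easy to quantify:

```python
# Linear margin left over when a 17MP sensor is cropped to a 16MP image,
# compared with the ~0.3% radial shift a mild barrel correction needs.
sensor_mp, output_mp = 17.0, 16.0
linear_ratio = (sensor_mp / output_mp) ** 0.5   # extra size in each dimension
margin_per_side = (linear_ratio - 1.0) / 2.0    # spare border on each edge

print(round((linear_ratio - 1.0) * 100, 1))     # ~3.1% extra linear size
print(round(margin_per_side * 100, 2))          # ~1.54% border per side

# Plenty of room to "borrow" edge pixels for a 0.3%-scale correction.
assert margin_per_side > 0.003
```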
So, to summarise: your idea is not supported by any evidence, it fails when tested against measurement, and it doesn't make sense logically.
Unsurprisingly, I see a different summary.

--
Andy
Try reading comments with a smile. You may discover they were written that way.
 
Ron - I said I will perform my own test and report back.
Great! Sounds good. I suggest that you share it in S&T forum.
As we are aware of about half of the lens vignetting in ORIG, at what point does it demonstrate that the usable portion of the lens is less than the intended focal length?
Ok, the vignetting once again :)
The vignette correction degrades the image and cannot be turned off. The GR lens in this sense is a lie.
A lie? You need some distortion correction as well :)

Ok. I will let this statement speak for itself. Considering your strong opinion, I'm not sure if your distortion correction test will be impartial, though! Can't wait!
Could Ricoh sell this GR lens as a 28mm equiv without the vignette correction?
Could Leica sell the Q "summilux" without correction? That's a clear No.

Ok, maybe this is worth discussing: is vignette correction as critical as distortion correction? I don't know the answer to that. I didn't notice my DNG files breaking apart during post-processing. I'm sure Leica files are perfectly fine as well. I guess I have a soft spot for distortion-free glass (I assume yours is vignette-free glass).

I personally like vignetting. I even add some!

I would say it's important to see the TRUE ORIGINAL before making such gross claims about the lens being useless for the intended FOV.

Could there be a test that would simulate the pre-correction look?

I would love to see that. But I would be embarrassed to go back to S&T and ask the question! I already took a lot of their time, and my own time too :)
Ok, I posted a link on the S&T thread to this discussion and invited Bill and Jack to have a browse, seeing as how we keep mentioning them.

 
Here's a rundown of the various sensor areas, using the Leica M9 (which has a Kodak KAF-18500 sensor) as an example.
(I chose this simply because I have the spec sheet handy.)

5212x3468
This is the final image size.

5216x3472
This is the raw image size.
The 2 pixel border is required for demosaicing.

5270x3516
This is the Active Area according to the spec sheet.
These pixels have the Color Filter Array (CFA).
55 columns and 44 rows of pixels were not used.

5310x3556
This is the Effective Area according to the spec sheet.
There is a 20 pixel border of buffer pixels.

5422x3610
This is the Total Area according to the spec sheet.
The pixels are electrically active but have no CFA.
Some are purposely shielded from light ("optical black").

Note that with 6.8 micron pixels the image is 35.44mmx23.63mm; not quite 36mmx24mm
The sensor array occupies 36.87mmx24.55mm
The total package size, according to the spec sheet, is 37.8mmx26.4mm

I point this out because we're often not given pixel size (pitch) and frequently overestimate pitch with careless math.
35.8mmx23.9mm is commonly listed as the sensor size; but this is the Active Area not the image area.
So carelessly we might say 35.8mm/5216pixels = 6.86 microns; but that's about 1% too large.
For some sensors, especially small ones, the difference is more extreme and leads to errors in computing Quantum Efficiency (QE).
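Spelled out as a couple of lines (using the spec-sheet figures above):

```python
# Pixel pitch from the spec sheet vs. the "careless" estimate obtained by
# dividing the listed sensor size (the Active Area) by the raw pixel count.
true_pitch_um = 6.8
raw_width_px = 5216
listed_width_mm = 35.8            # Active Area width, often quoted as "sensor size"

careless_pitch_um = listed_width_mm * 1000.0 / raw_width_px
error_pct = (careless_pitch_um / true_pitch_um - 1.0) * 100.0

print(round(careless_pitch_um, 2))   # ~6.86 microns
print(round(error_pct, 1))           # ~0.9% too large

# QE estimates scale with pixel *area*, so the error roughly doubles there.
```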

Regards,
 
... This was the image posted in the other subthread as an example from the Leica Q.

[image: Leica Q raw sample]


The barrel distortion of the lens is obvious, and if this were a film camera, likely unacceptable for most. In the analog world, the distortion is the monster.

...
Here's a raw image from the GR II, taken from Imaging Resource:

[image: Ricoh GR II raw sample]


Looks clear to me that the raw file has not been corrected for distortion.

Regards,

--
Bill ( Your trusted source for independent sensor data at http://www.photonstophotos.net )
 
I'm careful as to whether or not I correct the distortion. Sometimes it gives the image a certain presence that is hard for me to describe. The exception is close up photos of people, where the correction really helps.
 
I'm careful as to whether or not I correct the distortion. Sometimes it gives the image a certain presence that is hard for me to describe. The exception is close up photos of people, where the correction really helps.
Of course, but I think the open question was whether Ricoh corrects the raw data; I don't believe they do.

Regards,
 
I'm careful as to whether or not I correct the distortion. Sometimes it gives the image a certain presence that is hard for me to describe. The exception is close up photos of people, where the correction really helps.
 
Here's a raw image from the GR II, taken from Imaging Resource:

[image: Ricoh GR II raw sample]


Looks clear to me that the raw file has not been corrected for distortion.

Regards,

--
Bill ( Your trusted source for independent sensor data at http://www.photonstophotos.net )
Thank you for adding some words of good sense to this thread. It will be interesting to see if the OP listens to you, as he seems unwilling to listen to anyone else!

J.
 
Ricoh did a really good job designing a low-distortion, small lens for the GR; good enough, in fact, that some people think they must be using digital trickery. As far as we can tell, there is no manipulation of the raw data to correct distortion. And if they had done it digitally, that would also be a major accomplishment, worthy of applause.

Now let's see some more pictures on this forum.
 
... This was the image posted in the other subthread as an example from the Leica Q.

[image: Leica Q raw sample]


The barrel distortion of the lens is obvious, and if this were a film camera, likely unacceptable for most. In the analog world, the distortion is the monster.

...
Here's a raw image from the GR II, taken from Imaging Resource:

[image: Ricoh GR II raw sample]


Looks clear to me that the raw file has not been corrected for distortion.
Looks clear in what way please?



--
Andy
Try reading comments with a smile. You may discover they were written that way.
 
Here's a rundown of the various sensor areas, using the Leica M9 (which has a Kodak KAF-18500 sensor) as an example.
(I chose this simply because I have the spec sheet handy.)

5212x3468
This is the final image size.

5216x3472
This is the raw image size.
The 2 pixel border is required for demosaicing.

5270x3516
This is the Active Area according to the spec sheet.
These pixels have the Color Filter Array (CFA).
55 columns and 44 rows of pixels were not used.

5310x3556
This is the Effective Area according to the spec sheet.
There is a 20 pixel border of buffer pixels.

5422x3610
This is the Total Area according to the spec sheet.
The pixels are electrically active but have no CFA.
Some are purposely shielded from light ("optical black").

Note that with 6.8 micron pixels the image is 35.44mmx23.63mm; not quite 36mmx24mm
The sensor array occupies 36.87mmx24.55mm
The total package size, according to the spec sheet, is 37.8mmx26.4mm

I point this out because we're often not given pixel size (pitch) and frequently overestimate pitch with careless math.
35.8mmx23.9mm is commonly listed as the sensor size; but this is the Active Area not the image area.
So carelessly we might say 35.8mm/5216pixels = 6.86 microns; but that's about 1% too large.
For some sensors, especially small ones, the difference is more extreme and leads to errors in computing Quantum Efficiency (QE).
Excellent. Thanks for taking the time to join us, much appreciated.

This seems to confirm that there is indeed a border of unused data around the final used area, although at around 1.5% (for this sensor) perhaps not as much as anticipated; I guess it will vary from sensor to sensor.

Do you have any insights into why they elect not to use this data, or indeed of any scenarios where it does get used for something?

Is getting hold of the sensor spec sheet something that is relatively easy for the layman?

And what about identifying which sensor is being used? This is an area that has caused numerous debates over the years. For example, in the M43 forum there was a debate over which sensor was used in the first OM-Ds. It raged for months and eventually led to one member buying a camera just to disassemble it and try to identify the sensor, which ultimately led to the conclusion it was built by Sony. Do you think it is a case of some manufacturers being very open about it and some concealing it, or more a case of it being fairly easy if you know the right places to look?

Once again, thanks for joining us.

Regards,

--

Andy

Try reading comments with a smile. You may discover they were written that way.
 
... This was the image posted in the other subthread as an example from the Leica Q.

[image: Leica Q raw sample]


The barrel distortion of the lens is obvious, and if this were a film camera, likely unacceptable for most. In the analog world, the distortion is the monster.

...
Here's a raw image from the GR II, taken from Imaging Resource:

[image: Ricoh GR II raw sample]


Looks clear to me that the raw file has not been corrected for distortion.
Looks clear in what way please?
Looks distorted, although not so much as the Leica Q.

--
Bill ( Your trusted source for independent sensor data at http://www.photonstophotos.net )
 
Here's a rundown of the various sensor areas, using the Leica M9 (which has a Kodak KAF-18500 sensor) as an example.
(I chose this simply because I have the spec sheet handy.)

5212x3468
This is the final image size.

5216x3472
This is the raw image size.
The 2 pixel border is required for demosaicing.

5270x3516
This is the Active Area according to the spec sheet.
These pixels have the Color Filter Array (CFA).
55 columns and 44 rows of pixels were not used.

5310x3556
This is the Effective Area according to the spec sheet.
There is a 20 pixel border of buffer pixels.

5422x3610
This is the Total Area according to the spec sheet.
The pixels are electrically active but have no CFA.
Some are purposely shielded from light ("optical black").

Note that with 6.8 micron pixels the image is 35.44mmx23.63mm; not quite 36mmx24mm
The sensor array occupies 36.87mmx24.55mm
The total package size, according to the spec sheet, is 37.8mmx26.4mm

I point this out because we're often not given pixel size (pitch) and frequently overestimate pitch with careless math.
35.8mmx23.9mm is commonly listed as the sensor size; but this is the Active Area not the image area.
So carelessly we might say 35.8mm/5216pixels = 6.86 microns; but that's about 1% too large.
For some sensors, especially small ones, the difference is more extreme and leads to errors in computing Quantum Efficiency (QE).
Excellent. Thanks for taking the time to join us, much appreciated.

This seems to confirm that there is indeed a border of unused data around the final used area, although at around 1.5% (for this sensor) perhaps not as much as anticipated; I guess it will vary from sensor to sensor.
Not unused, just not part of the final image; this is an important distinction.
Generally consulted for demosaicing. There are other potential uses.
Do you have any insights into why they elect not to use this data, or indeed of any scenarios where it does get used for something?
As I said, some of it is definitely "used".
I'm sure, but aspect ratio and other considerations affect the amount of "lost" pixels.
Not sure what those "other considerations" might be.
Is getting hold of the sensor spec sheet something that is relatively easy for the layman?
No. This is an annoying aspect of the industry as it has matured.
And what about identifying which sensor is being used? This is an area that has caused numerous debates over the years. For example, in the M43 forum there was a debate over which sensor was used in the first OM-Ds. It raged for months and eventually led to one member buying a camera just to disassemble it and try to identify the sensor, which ultimately led to the conclusion it was built by Sony. Do you think it is a case of some manufacturers being very open about it and some concealing it, or more a case of it being fairly easy if you know the right places to look?
Manufacturers often don't identify the exact sensor used.
Sometimes the sensor is known but no spec sheet is publicly available.
(In the early days Kodak was very open and so those spec sheets are easy to come by.)
Probably not marketing but considered trade-secret for whatever reason.
The ironic thing is that anyone with resources can have a company like ChipWorks tear a competitor's sensor down anyway.

There's a similar situation with lenses and optical data.
Zeiss is just about the only lens maker who supplies real information on their lenses.
Once again, thanks for joining us.
You are welcome.

Regards,
 
[...]. Obviously GR's 0.3% barrel distortion is hardly an issue
DxO and DPR report that their copy of GR had a distortion of 0.6% at short edge and 0.01% at long edge. The measured focal length was 18.4mm (as opposed to the declared 18.3mm). Compare that to the measured 18.1mm focal length of Coolpix A (which is declared as 18.5) and this might explain why DPR's studio scene test shows sharper extreme corners for the GR than the Nikon A. Also, something is correcting the CA better in the GR (glass or software?).
 
