Sense check rough estimates for point spread function for adjacent sensor cells Ricoh GR

Started 3 months ago | Discussions thread
jimduk New Member • Posts: 3

Hi All,

I'm using a Ricoh GR (prime lens, APS-C, no AA filter, 16 MP) in RAW mode, and I've got a rough estimate for its 'spread' that I'd like to sense check. I'm more a maths/sensor person than an optics/camera person, so apologies for any ignorance, but this seemed like a good group to ask.

Summary: looking at the G1 and G2 Bayer channels against a target with sharp black-and-white edges, the 'spread' looks like less than 1 adjacent pixel (possibly as low as 1/2), and a very rough heuristic is that a pixel seems to receive 20-30% of the 'light' from the nearest half of the adjacent pixel. This is presumably driven by the point spread function (and maybe other factors?).

e.g. if an imaged edge fell 'perfectly' on the boundary between two imager cells, then the darker cell would also receive roughly 0.5 * (0.2 to 0.3) of the value from the lighter cell. If the imaged edge fell 50% across a cell, then the lighter side might contribute almost nothing to the next adjacent pixel.
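To make the arithmetic in that example concrete, here is a tiny sketch using the post's own rough numbers (the 20-30% spill figure is the estimate being sense-checked, not a measured constant):

```python
# Worked version of the edge-on-boundary example above.
# All numbers are the rough estimates from this post, not measurements.
bright = 1.0   # normalised value of the fully lit cell
spill = 0.25   # assumed 20-30% spill from the nearest half-pixel (midpoint)

# Edge falls exactly on the boundary between two cells: the dark cell
# picks up spill only from the bright cell's nearest half.
dark_on_boundary = 0.5 * spill * bright

# Edge falls 50% across a cell: that cell reads about half the bright
# value directly (and, per the estimate, passes little further on).
half_covered = 0.5 * bright

print(dark_on_boundary, half_covered)
```

So with a 25% spill figure, the 'dark' neighbour of a perfectly boundary-aligned edge would sit at roughly 12.5% of the bright value.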

Is the above reasonable? (I could easily be very wrong.)

More detail: my understanding, at a high level, is that the responsiveness of a given imager cell is a complex chain of the incoming light spectrum, the camera optics, the microlens optics and then the imager characteristics.

I am assuming that for the Ricoh GR, if I'm looking near the centre of the image and not oversaturating, then the point spread function of the main optics dominates. I'm not smart enough to derive this (presumably it's some set of Gaussians; I bought Fourier Optics but it made my head hurt), so I'm looking at it experimentally.

If I take a picture of a set of thin vertical white lines through black boxes (in the garden, not experimental conditions), I get a result like the one below (the chain is: RAW with no AWB, DNG Converter to DNG 1.3, into Matlab, then zero out the R and B channels from the Bayer grid so we are looking only at G1 and G2).
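For anyone wanting to reproduce the "zero out R and B" step, here is a minimal NumPy sketch of the same idea (I'm assuming an RGGB Bayer layout here; the GR's actual layout should be checked against the DNG's CFAPattern tag before trusting which sites are G1/G2):

```python
import numpy as np

def green_only(raw):
    """Keep only the two green Bayer sites of a mosaiced 2-D array,
    zeroing R and B. Assumes an RGGB layout: G1 at (even row, odd col),
    G2 at (odd row, even col)."""
    out = np.zeros_like(raw)
    out[0::2, 1::2] = raw[0::2, 1::2]  # G1 sites
    out[1::2, 0::2] = raw[1::2, 0::2]  # G2 sites
    return out

# Tiny demo on a 4x4 "mosaic" of running values.
demo = np.arange(16).reshape(4, 4)
print(green_only(demo))
```

The result is a grid that is zero everywhere except the green sites, matching the Matlab step described above.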

Looking at the picture, when I image a vertical line (slight slant) which geometrically projects to ~1/2 pixel, it always impacts 2 pixels, which suggests the 'spread' is about 1/2 pixel. This seems to hold for wider lines, and very rough maths on the 2 mm line (1 pixel wide) when it is centred suggests 20-30% of the 'value' of the nearest 1/2 pixel spreads across to the adjacent pixel.

(Background: I'm trying to do sub-pixel matching for stereo at 10 m+ ranges using the Ricoh as a machine vision camera, and this needs a bit of theory for how the imager responds, but only to a very rough approximation.)

Do the assumptions seem reasonable? Are there other factors or better models I should be looking at?

