Demosaicing and bit dimensions

Started Nov 16, 2013 | Discussions thread
The_Suede
Contributing Member • Posts: 603
Re: Demosaicing and bit dimensions
In reply to Mark Scott Abeln, Nov 17, 2013

Mark Scott Abeln wrote:

OK, if we have a sensor that has, say, 16 megapixels, we expect that the final image produced also has 16 megapixels — one pixel per sensel, right?

While this seems common-sensical, are there practical mathematical reasons for a demosaicing algorithm to deliver another size, not mapping sensels to pixels 1:1? Not simply to resize, but because it delivers superior results in one way or another?

Not really, since we're still firmly in undersampled territory with 16MP captures, both with APS-C and FF sensors. No matter what you do, the image is undersampled, meaning there are large voids in the data map that's supposed to be a correct rendering of the object space.
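For a rough sense of the numbers (my own illustrative figures, not from the post): a 16MP APS-C sensor has a pixel pitch of roughly 4.8 µm, while a decent lens at f/5.6 passes detail far beyond the sensor's Nyquist limit. A quick Python sketch of that comparison, with the pixel pitch, wavelength, and f-number all assumed values:

```python
# Rough check of why a 16 MP APS-C capture is still undersampled:
# compare the sensor's Nyquist frequency with the lens's diffraction
# cutoff. All numbers are illustrative assumptions.
pixel_pitch_um = 4.8                           # ~16 MP APS-C pitch (assumed)
nyquist_lp_mm = 1000 / (2 * pixel_pitch_um)    # ~104 lp/mm

wavelength_um = 0.55                           # green light (assumed)
f_number = 5.6
cutoff_lp_mm = 1000 / (wavelength_um * f_number)  # 1/(lambda*N) ~ 325 lp/mm

print(f"Sensor Nyquist:       {nyquist_lp_mm:.0f} lp/mm")
print(f"Diffraction cutoff:   {cutoff_lp_mm:.0f} lp/mm at f/{f_number}")
# The lens passes detail well beyond Nyquist, so the capture aliases:
# the "voids in the data map" mentioned above.
```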

And as soon as we get pixels small enough to give correct sampling, any enlargement of the output image is pointless. The only way to improve a correctly sampled image is to downsample it.
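A minimal sketch of what such a downsample might look like, assuming numpy and scipy are available; the 0.5 × factor prefilter sigma is a common rule of thumb, not anything prescribed in the post:

```python
# Band-limited downsampling: blur away detail above the new Nyquist
# frequency, then decimate, so the smaller image stays alias-free.
import numpy as np
from scipy.ndimage import gaussian_filter

def downsample(img: np.ndarray, factor: int = 2) -> np.ndarray:
    """Gaussian prefilter, then keep every `factor`-th sample."""
    blurred = gaussian_filter(img, sigma=0.5 * factor)  # rule-of-thumb sigma
    return blurred[::factor, ::factor]

half = downsample(np.random.rand(512, 512), factor=2)
```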

But what you're suggesting has already been done, though in quite a different scenario.

Using very simple lenses with known aberrations, ones that give effective point spreads several pixels wide, you can arrive at a very accurate (and VERY sharp!) image at about half the original size if you do the Bayer interpolation for each position on the sensor by solving backwards for the optical PSF result. This is one way to sharpen an oversampled original data map, or, put another way, to get the optimal, correctly sampled resolution out of a system.

Unfortunately, this of course only works with "known" lenses whose PSFs are neither too big nor too small over the entire image surface, which limits the flexibility of the system.
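For illustration only, here's a toy monochrome sketch of the "solve backwards for the PSF" idea, using Wiener deconvolution with an assumed Gaussian PSF and skipping the Bayer-interpolation step entirely; the joint demosaic/deconvolve described above is considerably more involved than this:

```python
# Toy sketch (my own illustration, not The_Suede's actual pipeline):
# invert a known, several-pixel-wide PSF in the frequency domain, then
# decimate 2x to land at a correctly sampled half-size output.
import numpy as np

def gaussian_psf(size: int = 9, sigma: float = 2.0) -> np.ndarray:
    """Assumed PSF: a Gaussian a few pixels wide, as described above."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return psf / psf.sum()

def wiener_deconvolve(img: np.ndarray, psf: np.ndarray,
                      snr: float = 100.0) -> np.ndarray:
    """Regularized inverse of a known PSF (monochrome toy)."""
    pad = np.zeros(img.shape)
    pad[:psf.shape[0], :psf.shape[1]] = psf
    # center the PSF at the origin (with wraparound) for a zero-phase filter
    pad = np.roll(pad, (-(psf.shape[0] // 2), -(psf.shape[1] // 2)),
                  axis=(0, 1))
    H = np.fft.fft2(pad)
    W = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)   # Wiener filter
    return np.real(np.fft.ifft2(np.fft.fft2(img) * W))

blurry = np.random.rand(256, 256)          # stand-in for oversampled data
sharp = wiener_deconvolve(blurry, gaussian_psf())
half_size = sharp[::2, ::2]                # ~half the original dimensions
```

The final 2x decimation is what lands the result at roughly half the original size, matching the correctly sampled output mentioned above.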
