Demosaicing and bit dimensions

Started 11 months ago | Discussions thread
The_Suede
Contributing Member • Posts: 584
Re: Demosaicing and bit dimensions
In reply to Roland Karlsson, 11 months ago

Roland Karlsson wrote:

Mark Scott Abeln wrote:

OK, if we have a sensor that has, say, 16 megapixels, we expect that the final image produced also has 16 megapixels — one pixel per sensel, right?

While this seems common-sensical, are there practical mathematical reasons for a demosaicing algorithm to deliver another size, not mapping sensels to pixels 1:1? Not simply to resize, but because it delivers superior results in one way or another?

I do believe the best you can do with a Bayer grid is to rotate it 45 degrees and then use the green detectors as the basis for the output. Then a 10 MP sensor becomes a 5 MP sensor, but a superior one. If you want to get as much as possible out of that rotated sensor, you might also make a 20 MP image.

Oops, Fuji has already done this! And they were heavily attacked when they tried the 20 MP trick. So they were forced to output 10 MP from the 10 MP sensor, which, of course, is a very bad idea. So - they stopped making those sensors, as they were commercially impossible to sell.

Your idea works, but still has the problem of scale vs resolution.
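
To make the pixel-count bookkeeping concrete, here's a minimal sketch in Python - assuming a plain Bayer CFA where half the sites are green, and using only those green sites as native output pixels (an illustration of the arithmetic, not Fuji's actual SuperCCD pipeline):

# Pixel-count arithmetic for a 45-degree rotated Bayer grid, using only the
# green sites as native output pixels (illustrative assumption, not Fuji's
# actual SuperCCD processing).

def rotated_grid_outputs(sensor_mp):
    """Half of a Bayer CFA's sites are green, so an output built on the green
    lattice alone has half the sensel count; the 'stretch' output interpolates
    up to twice the sensel count."""
    green_native_mp = sensor_mp / 2
    interpolated_mp = sensor_mp * 2
    return green_native_mp, interpolated_mp

native, interpolated = rotated_grid_outputs(10)
print(f"10 MP sensor -> {native:.0f} MP native output or {interpolated:.0f} MP interpolated")
# 10 MP sensor -> 5 MP native output or 20 MP interpolated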

With the Pentax Q I get working oversampling with good FF/FX lenses; that's a 1.5µm pixel structure. With the Nikon 1 V2, which has ~3µm pixel structures, some good lenses are still undersampled to such a degree that you get aliasing patterns at their best apertures. But that's probably right on the edge for any normal to very good FF lens once you've included losses from diffraction and the filter plate structures - the filter plates induce quite a lot of point spread, even before you count the birefringent AA filter layers.
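
As a rough back-of-the-envelope check of that sampling point, assuming a simple Airy-disk criterion (diameter ≈ 2.44·λ·N) for the lens and ignoring the filter-stack point spread mentioned above - so this is an optimistic bound, not a measurement:

# Compare the diffraction spot (Airy disk diameter ~ 2.44 * wavelength * f-number)
# against the pixel pitch. This ignores lens aberrations and the filter stack,
# so real lenses blur more than this suggests.

WAVELENGTH_UM = 0.55  # green light, micrometres

def airy_diameter_um(f_number):
    return 2.44 * WAVELENGTH_UM * f_number

def sampling_report(pitch_um, f_number):
    spot = airy_diameter_um(f_number)
    # Loose rule of thumb: a spot spanning >= ~2 pixels suppresses aliasing.
    regime = "oversampled" if spot >= 2 * pitch_um else "undersampled"
    return f"{pitch_um} um pitch at f/{f_number}: Airy ~{spot:.1f} um -> {regime}"

print(sampling_report(1.5, 4))  # Pentax Q class pitch -> oversampled
print(sampling_report(3.0, 4))  # Nikon 1 V2 class pitch -> undersampled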

Working that 45° example out to a 36x24mm FF area, that's 96 MP of sensels to get a non-aliased 48 MP image.
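
The same numbers as a quick arithmetic check (36x24 mm frame, 96 MP sensel grid, 48 MP rotated output - the pitches below are simply derived from those figures):

import math

# Sensel pitch from frame area / sensel count; the rotated output grid sits on
# the diagonal, so its pitch is sqrt(2) times the sensel pitch.

FRAME_AREA_UM2 = 36_000 * 24_000  # 36x24 mm in square micrometres

def pitch_um(megapixels):
    return math.sqrt(FRAME_AREA_UM2 / (megapixels * 1e6))

sensel_pitch = pitch_um(96)                 # ~3.0 um sensel grid
output_pitch = sensel_pitch * math.sqrt(2)  # ~4.24 um effective output grid
print(f"96 MP sensels: {sensel_pitch:.2f} um pitch")
print(f"48 MP rotated output: {output_pitch:.2f} um pitch "
      f"({pitch_um(48):.2f} um computed directly)")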

Comparing that to, say, a 48 MP normal 0/90° Bayer image, I doubt you get ANY improvement at all - and I can bet quite a lot that making the pixel structure sqrt(2) smaller costs quite a bit in both angle sensitivity and overall sensitivity. That will balance out against the Bayer interpolation inaccuracies to a net sum of nothing - a 1:1 ratio of improvement, if you put it that way. So: pay more to get nothing.
