Are more megapixels always better?

Started Jun 13, 2013 | Discussions thread
hjulenissen Senior Member • Posts: 2,087
Re: Are more megapixels always better?

olliess wrote:

hjulenissen wrote:

It seems that:

1. (Analog) binning can significantly reduce the image-level read-noise as there is "one part noise per bin" instead of "one part noise per sensel"...

2a. Binning of CFA data is very hard because you need to bin each color channel separately, creating somewhat complex signal routing...

Both of these are consistent with my understanding.
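Point 1 can be made concrete with a toy noise model (my own sketch, not from the thread; the signal and read-noise numbers are assumptions, and shot noise is deliberately omitted to isolate the read-noise effect):

```python
import numpy as np

# Toy model: compare read noise after 2x2 binning when noise is added
# once per bin (analog binning) vs. once per sensel (digital summing).
# Units are electrons; values below are illustrative assumptions.
rng = np.random.default_rng(0)
signal_per_sensel = 100.0   # assumed mean signal, e-
read_noise = 5.0            # assumed read-noise sigma per read, e-
n_trials = 200_000

# Digital: read each of the 4 sensels (4 noise draws), then sum.
digital = (4 * signal_per_sensel
           + rng.normal(0, read_noise, (n_trials, 4)).sum(axis=1))

# Analog: charge is combined on-chip first, then read once (1 noise draw).
analog = 4 * signal_per_sensel + rng.normal(0, read_noise, n_trials)

print(digital.std())  # ~ sqrt(4) * read_noise = 10 e-
print(analog.std())   # ~ 1 * read_noise = 5 e-
```

The digital sum carries sqrt(4) = 2x the read noise of the analog bin, which is the "one part noise per bin" vs. "one part noise per sensel" distinction.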

2b. Binning of CFA data is very hard because you tend to get nasty aliasing artifacts caused by the "interleaved" reading of color channels

My point above is that the CFA already produces nasty aliasing artifacts. If anything, the aliasing might be a little better with interleaved rather than non-coincident reading: because of the interleaving, the spatial smoothing of the bins actually acts over a 3x3 square rather than a 2x2. This would have to be worked out, but it seems worth thinking about more.

Are you saying that combining 2x2 red sensels spaced in a pattern like this would reduce aliasing artifacts? Assuming that the optical path before the sensor is reasonably sharp (optimized for the physical sensel size), I don't think so.

r x r x
x x x x
r x r x
x x x x
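A quick sketch of the geometry above (my own illustration; the red-sensel pitch of 2 and the two-tap response formula follow from the standard Bayer layout):

```python
import numpy as np

# Red sensels in a Bayer CFA sit at even (row, col) positions, pitch 2.
# Binning a 2x2 group of reds averages samples spread over a 3x3-sensel
# footprint, while the red sampling pitch after binning becomes 4 sensels.
red_rows, red_cols = np.meshgrid([0, 2], [0, 2], indexing="ij")
bin_positions = np.stack([red_rows.ravel(), red_cols.ravel()], axis=1)

footprint = bin_positions.max(axis=0) - bin_positions.min(axis=0) + 1
print(footprint)  # [3 3] -> the smoothing acts over 3x3 sensels

# How much anti-aliasing does that smoothing actually buy? In 1D,
# averaging two samples 2 sensels apart has response |cos(2*pi*f)|.
f_nyq = 1.0 / 8   # Nyquist of the binned red grid, cycles per sensel
atten = abs(np.cos(2 * np.pi * f_nyq))
print(atten)      # ~0.707: only about 3 dB down at the new Nyquist
```

So while the footprint is indeed 3x3, the implied low-pass is quite weak at the binned grid's Nyquist frequency, which is consistent with being skeptical that it suppresses aliasing much.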

3. Analog binning of demosaiced data is difficult because there are no realistic analog demosaicing methods, and if there were, they would probably include components that inject more noise than simply reading and ADC-ing the charge.

You already read out the sensor data as raw digital counts to pass to your demosaicing process, I'm not seeing where analog binning fits in.

If you could somehow demosaic the data prior to binning, you would be able to bin datapoints that were spatially and spectrally "aligned". Of course, this would have to occur prior to adding (read) noise, as that is the main selling point of binning. So one would need analog demosaicing that did not add any noise. As I was suggesting, this seems to be impossible.

Comment #1: 2b) might be solved by having sufficient pre-blur (OLPF, defocus, sensor shake, Vaseline...).

AFAIK, the Phase One back has no OLPF, and smaller-format sensors seem to be going that way as well. We are running into the practical limits of commercially-available lenses (and ultimately, diffraction).

We may not need an OLPF in many practical situations when operating the camera at its full resolution. But if you bin by a factor of 2x2, an OLPF may become important.
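A minimal 1D illustration of why binning without pre-blur is risky (my own sketch; the frequency values are chosen for a clean demonstration):

```python
import numpy as np

# 2x binning without an optical low-pass filter lets detail above the
# new Nyquist frequency fold back as a spurious lower frequency.
n = 256
x = np.arange(n)
f_true = 88 / 256   # = 0.34375 cycles/sensel: above the binned Nyquist of 0.25
samples = np.cos(2 * np.pi * f_true * x)

binned = samples.reshape(-1, 2).mean(axis=1)   # naive 2x bin, no pre-blur

# Dominant frequency of the binned signal, in cycles per *original* sensel.
spec = np.abs(np.fft.rfft(binned))
f_alias = np.fft.rfftfreq(binned.size, d=2.0)[spec.argmax()]
print(f_alias)   # 0.15625 = 0.5 - f_true: the fine detail has aliased
```

The in-bin average attenuates the tone only partially, so the aliased component comes through clearly; an OLPF matched to the binned pitch would suppress it before sampling.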

Although perhaps a (VERY!) small amount of carefully constructed spatial dither from an image-stabilizer system would also do the trick, but that could only work for relatively slow shutter speeds, right?

Depends on the speed of the sensor movement, I guess. If it can move on the order of 1 sensel in 1/8000 seconds, then that might work as well.

