# Dave's interstellar foolishness

Started Sep 10, 2012 | Discussions thread
Re: Armchair technologists fighting!

bobn2 wrote:

crames wrote:

bobn2 wrote:

"Effectively, the two green channels act as a single sampling grid, sampling every other pixel location, but rotated by 45°, which means that in the vertical and horizontal directions the green or luminance channel is sampled every 1.414 (square root of two) pixel spaces."

Actually, the combined green channels have full resolution in the horizontal and vertical directions; it is the diagonal directions which are sampled at a lower rate.

Ref: Li, et al "Image Demosaicing: A Systematic Survey" http://www.csee.wvu.edu/~xinl/papers/demosaicing_survey.pdf

That is no different from what I said: 'sampling every other pixel location, but rotated by 45°'.

You said, "in the vertical and horizontal directions the green or luminance channel is sampled every 1.414 (square root of two) pixel spaces", which misleadingly implies that the horizontal and vertical directions were being sampled at a 1.414 times lower rate, which is not true.

Yes, ambiguous wording on my part: in the rotated array the vertical and horizontal directions are sampled every 1.414 pixel spaces (relative to the rotated array, not to the vertical and horizontal of the image). Bad wording, certainly, but so is yours, because saying that it is sampled at the full rate in the vertical and horizontal directions implies that it is sampled at the same rate in two dimensions, which is clearly not true: it has half the number of samples. So I admit that I could word that a lot better.
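For what it's worth, the 1.414-pixel spacing along the rotated axes is easy to check numerically. A minimal sketch in plain NumPy (my own illustration, taking the green sites as the (row + column) even positions of a Bayer pattern):

```python
import numpy as np

# Green sites of a Bayer CFA form a quincunx: (row + col) even.
ys, xs = np.mgrid[0:6, 0:6]
green = np.argwhere((ys + xs) % 2 == 0).astype(float)

# Pairwise distances between all green samples.
diffs = green[:, None, :] - green[None, :, :]
dist = np.sqrt((diffs ** 2).sum(axis=-1))
np.fill_diagonal(dist, np.inf)  # ignore self-distances

# Nearest neighbours lie on the 45-degree rotated axes,
# sqrt(2) ~ 1.414 pixel spaces apart.
print(dist.min())
```

The nearest green neighbours are the diagonal ones at distance sqrt(2), which is the "1.414 pixel spaces" along the rotated axes; along the image's own rows and columns the next green sample is 2 pixels away on the same row, but only 1 column away on the adjacent row.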

It is sampled at the full rate in the vertical and horizontal directions. Look at the middle graph - the red diamond shows how the vertical and horizontal directions go all the way out to Nyquist, which can only happen if those directions are fully sampled. It is the four diagonal corners that are not fully sampled by the quincunx sampling pattern of the green channel. You have half the samples, and only half of each quadrant is being sampled. It happens that the horizontal and vertical directions remain fully sampled, while the diagonals are not.
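The aliasing geometry can also be verified directly: the Fourier transform of the quincunx mask has a spectral replica only at the diagonal corner of the frequency plane, not on the horizontal or vertical axes, which is why those directions stay fully sampled. A rough illustration (my own sketch, not from the survey):

```python
import numpy as np

N = 8
ys, xs = np.mgrid[0:N, 0:N]
mask = ((ys + xs) % 2 == 0).astype(float)  # quincunx green sites

# Sampling with this mask replicates the image spectrum at the
# nonzero frequencies of the mask itself.
F = np.abs(np.fft.fft2(mask))
peaks = np.argwhere(F > 1e-9)
print(peaks)  # DC plus the (Nyquist, Nyquist) diagonal corner only
```

Because the only replica sits at (Nyquist, Nyquist), pure horizontal or vertical frequencies never fold onto other horizontal or vertical frequencies; only content near the diagonal corners aliases, exactly the diamond-shaped support in the middle graph.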

Despite the "same amount of information," the 36 MP Bayer has more high-frequency information in the horizontal and vertical directions

But of course detail in an image isn't neatly arranged vertically and horizontally, so total information is what counts.

(to which the eye has preferential contrast sensitivity).

Really? I didn't know that. I thought that the rods and cones were somewhat randomly distributed. Do you have a source for that? Sounds interesting.

It's called the oblique effect. (Ok, not sure if it happens in the eye, within the brain, or a combination.)
http://en.wikipedia.org/wiki/Oblique_effect
http://www.ski.org/CWTyler_lab/CWTyler/TylerPDFs/TylerMitchellObliquePVAVR77.pdf

Not to mention that nearly all demosaicking routines in the last decade or more do not treat the color channels in isolation, and thereby extract additional luminance information from the red and blue channels. I think it is misleading to suggest that an 18 MP mono sensor will produce the same amount of luminance information as a 36 MP Bayer.

It does produce the same amount of luminance information. Certainly, sophisticated demosaicking algorithms interpolate additional luminance from the chroma channels. However, that doesn't mean that there is more luminance information; it means that, in real images, there is generally a correlation between luminance and chrominance which can be used to infer additional resolution.

Certainly, when imaging a monochrome object with a Bayer array, all you have to do is balance the three color channels and you get full resolution with no demosaicking necessary. When the object is colored it's more complicated, but the point is that the luminance information resides in all three channels, not just green. This results from the overlapping spectral sensitivities of the RGB channels, and the fact that most colors produce a response in all three of the RGB channels. I think you will find that many algorithms (such as Emil's AMaZE) recover much more than the half of the luminance information you are expecting from the green channel alone.
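The monochrome-object case is easy to demonstrate: if the scene is gray, each CFA site records the scene luminance scaled by that channel's sensitivity, so dividing each pixel by its channel gain restores the full-resolution image exactly, with no interpolation at all. A sketch with made-up channel sensitivities (the 0.6/1.0/0.4 figures are illustrative, not measured values):

```python
import numpy as np

rng = np.random.default_rng(0)
lum = rng.random((8, 8))  # a monochrome (gray) scene

# RGGB Bayer pattern of per-channel sensitivities (illustrative numbers).
sens = np.empty((8, 8))
sens[0::2, 0::2] = 0.6  # R sites
sens[0::2, 1::2] = 1.0  # G sites
sens[1::2, 0::2] = 1.0  # G sites
sens[1::2, 1::2] = 0.4  # B sites

raw = lum * sens       # what the sensor records
balanced = raw / sens  # "balance the three color channels"

print(np.allclose(balanced, lum))  # full resolution, no demosaicking
```

For a colored scene the per-pixel gains are no longer known constants, which is where the cross-channel correlation exploited by algorithms like AMaZE comes in.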

However, in practice the 18 MP Leica Monochrom does match the 36 MP D800E used to make a monochrome image, or at least that is what the tests that accompanied the article (not done by me) found.

I'll take a look at it. If that's the result, I suspect the D800E image wasn't processed well.
