Wrong according to color science. Luminance is a weighted average of red, green, and blue in which green is by far the most significant term. That is why the practical choice of filter array is two green pixels for every one red and one blue: to extract more luminance information. Blue barely contributes. So the luminance information cannot be better than 3/4 of the resolution of the pixel array, and in practice it is close to that value.
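For concreteness, the standard weights (Rec. 709 here; other standards such as Rec. 601 differ slightly) can be checked directly:

```python
# Rec. 709 relative luminance: Y = 0.2126 R + 0.7152 G + 0.0722 B.
# Green dominates; blue contributes the least.
def luminance(r, g, b):
    """Relative luminance of a linear RGB triple (Rec. 709 weights)."""
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

# A pure-green pixel carries roughly ten times the luminance of a pure-blue one:
print(luminance(0.0, 1.0, 0.0))  # 0.7152
print(luminance(0.0, 0.0, 1.0))  # 0.0722
```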
cart before the horse. green is the most prominent factor in luminance because there are two green photosites on the sensor for every one red and one blue. if you look up "luminance" in any non-photographic source, i promise it doesn't use the word "green" anywhere.
depending on the sensor and the demosaicing involved, there might be some averaging going on in the interpolation
What do you mean "there might be"? Demosaicing does involve interpolation, which is nothing more than an educated guess based on a clever form of averaging.
averaging of luminance values, i mean. remember, each photosite collects
only luminance values, and the
color is interpolated from the positions. demosaicing
probably does some alteration of the luminance values collected, unless the filters all have the same density. which they probably do not -- green would be the least dense, which would be
why there are two of them. however, this is a generalization; i can't speak for all sensors or all demosaic algorithms. certainly something else entirely goes on with a foveon sensor.
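a toy sketch of the simplest case -- bilinear interpolation of a missing green value on an RGGB mosaic. this is my own illustration, not what any particular converter does; real algorithms (AHD, VNG, etc.) are edge-aware precisely to avoid the smearing shown below:

```python
import numpy as np

def green_at_red(mosaic, y, x):
    """Estimate the missing green value at a red photosite (y, x) of an
    RGGB Bayer mosaic by averaging its four green neighbours.

    This is plain bilinear interpolation -- the "educated guess based on
    a clever form of averaging" discussed above, in its simplest form.
    """
    return (mosaic[y - 1, x] + mosaic[y + 1, x] +
            mosaic[y, x - 1] + mosaic[y, x + 1]) / 4.0

# On a flat patch the guess is exact:
flat = np.full((4, 4), 0.5)
print(green_at_red(flat, 2, 2))   # 0.5

# Across a hard vertical edge (dark left, bright right) it smears:
edge = np.zeros((4, 4))
edge[:, 2:] = 1.0
print(green_at_red(edge, 2, 2))   # 0.75 -- neither dark nor bright
```

that 0.75 is exactly the "blurred edge" being talked about: the interpolated value belongs to neither side of the boundary.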
Like all interpolation, it will work if there is local color continuity in the area. It will be wrong if not, i.e. at the edge of a detail, where the interpolation will blur the edge.
yes, but
only of the color. depending on what kind of magic is done between the luminance values, you might see
slightly fuzzier edges. this, like the AA issue above, is generally well within the range of some tasteful USM.
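for the curious, USM in a nutshell: sharpened = original + amount * (original - blurred). a minimal 1-D sketch of my own (box blur standing in for the usual gaussian, no radius/threshold controls):

```python
import numpy as np

def unsharp_mask(signal, amount=0.8):
    """1-D unsharp mask: boost each sample by a fraction of its
    difference from a locally blurred copy. Real USM uses a Gaussian
    blur with radius and threshold controls; a 3-tap box blur is
    enough to show the principle."""
    blurred = np.convolve(signal, np.ones(3) / 3.0, mode='same')
    return signal + amount * (signal - blurred)

# A soft edge gets a local contrast boost: values dip just before the
# edge and overshoot just after it, which reads as added sharpness.
soft_edge = np.array([0.0, 0.0, 0.25, 0.75, 1.0, 1.0])
print(unsharp_mask(soft_edge))
```

(the last sample picks up a boundary artifact from the zero padding in `np.convolve`; ignore it.)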
Then explain to me: why does Thom Hogan claim the resolution would be higher without a Bayer filter? According to you, one can have both color and resolution. What is Thom missing?
http://www.luminous-landscape.com/essays/hogan-leica.shtml
"resolution" as in resolving power, not as in pixel-dimensions. and i'm not sure that claim is entirely correct.
looking at some crops from MF monochrome cameras vs the bayer counterpart here:
http://www.luminous-landscape.com/reviews/cameras/achromatic.shtml , i just don't see it. it's
ever so slightly sharper at 300%.
the author of that page writes this:
I had planned to do some pixel peeping comparisons with a P45+ and a P65+. In fact I did them, setting up the shot on a heavy tripod firmly planted on concrete, 120mm Macro lens at optimum aperture, manual focus triple-checked with a 3X magnifier, mirror lockup, electronic cable release – the whole nine yards.
I processed the files, and examined them with and without optimum sharpening.
So why aren't the results here? Simply because I can't see any convincing difference in resolution or sharpness between them. Yes, the P65+ file is bigger and therefore will take more magnification. But between the Achromatic and the P45+ on which it's based I just don't consistently see anything to convince me that the non-Bayer Achromatic consistently offers sharper results.
This of course flies in the face of common wisdom, that the Bayer Array robs digital files of their inherent monochrome resolution. While that may be the case in the lab, in the real world, using the best shooting technique I know, I can't see it.
i can't comment on what thom is missing, or if he knows something i don't. but if the difference is so earth-shaking, why am i so underwhelmed by the difference at
39 megapixels?
True, although I don't see what this has to do with the subject of the discussion. Note that interpolation can be done at the edge. At least one RAW converter, RawMagick, yields the entire image captured by the sensor, and none of that "actual" pixels nonsense.
yes. however, most methods of interpolation
discard those pixels, thus the difference. your 10mp will yield
larger pixel dimensions cranked through rawmagick. my point was that this is where resolution is being discarded, not through interpolation.
A strong AA filter will rob you of some of the sharpness from your lens. That seems to me even less desirable if you've paid top dollar for that lens.
if you're paying top dollar for a lens, in my experience, it doesn't matter. it's less desirable if you have sub-par optics that need all the help they can get. even with a strong AA filter, good glass is well within the range of a gentle sharpening. bad glass, on the other hand, can't be saved.
Life is made of compromises, and you're fooling yourself if you believe you can have it both ways: 1) never having to deal with moiré, or only extremely rarely (as is the case for the D200 & D300), and 2) preserving the detail lost to an AA filter. Some people would say they prefer a strong AA filter à la D300 and never having to deal with moiré in post-processing. I certainly respect that view, even if I'd rather have it the other way. But you're living in a fairy tale if you think there is a magic AA filter in the D200/D300 which does it all. My advice: take Nikon's marketing with a grain of salt.
no, i didn't say it was magic. just that it happens to be relatively easy to deal with in post processing. truth be told, i've considered having mine removed
at least once. life is a compromise -- i just haven't seen much to convince me that there would be any significant gain, for me. if you'd prefer the compromise the other way, that's fine.
of course, i also wouldn't mind a monochrome dSLR. i'm just not under the illusion that it would give me more resolution.