Antialiasing (AA) filter robs us of too much detail

TOF guy wrote:

I have not found any evidence that the D300s has a "too aggressive AA filter", since there is no camera without an AA filter.
Kodak's SLR/n and Pro 14n had no AA filter in front of their 14 MP full-frame sensors. I still use my 6-year-old SLR/n and love the extra detail (usually, except when moiré rears its ugly head on occasion). At the time, many users believed that the lack of an AA filter helped it match or exceed the 16 MP resolution of the Canon 1Ds Mk II.
The AA filter is a compromise between blurring details and avoiding moire.
True. It's generally easier to sharpen with Photoshop's USM than it is to fix moire.
...after 1/250 ... FP sync loses power so quickly that it is barely useful past 1/500 on a D300, even if in theory it can go faster than that.
I don't know if the D70 can do that
The FP sync mode was introduced after the D70 - first seen in the D2H.
The D70 is way superior to the D300(s) thanks to its electronic shutter. It can sync to 1/500 sec. And one can mask contacts on the flash shoe and the D70 will sync to its maximum shutter speed. True syncing. No gimmick. In practice the only limitation is the duration of the flash (which can be a little longer than 1/1000 sec, depending on power required from the flash).
Through a flash trigger, I get solid full-power sync up to 1/1,600 on my D70 - it starts losing flash power after that. With my D200, I start to see the shutter curtain at 1/320 and higher.
I definitely prefer the way the D300s handles the flash if the D70 has any limitation in FP or shutter speed.
The D300s has limitations. The D70 does not.
The D70's CCD sensor does have some problems with "blooming" of extra-bright highlights - most notably light sources like the sun. Otherwise, I enjoy its higher sync speed.

--
Desmond Murray
http://www.KelownaPhotographer.com
I shoot to thrill
 
The D70's CCD sensor does have some problems with "blooming" of extra-bright highlights
Yes, there is a price to pay for the D70's electronic shutter. One, as you state, is the potential for blooming. The other is simply high-ISO performance (there is a line used to dump charges for the electronic shutter, which takes space on the sensor to the detriment of the well, i.e. the area that collects photons). Finally, no manufacturer has yet found a way to implement an electronic shutter with CMOS sensors, although I've read several rumors to the effect that Canon is working on a solution.

--
Thierry
 
The D70's CCD sensor does have some problems with "blooming" of extra-bright highlights
Yes, there is a price to pay for the D70's electronic shutter. One, as you state, is the potential for blooming. The other is simply high-ISO performance (there is a line used to dump charges for the electronic shutter, which takes space on the sensor to the detriment of the well, i.e. the area that collects photons). Finally, no manufacturer has yet found a way to implement an electronic shutter with CMOS sensors, although I've read several rumors to the effect that Canon is working on a solution.
In other words, the D70 has some really serious limitations compared with the D300...
 
The D300s has limitations. The D70 does not.
With that blanket statement there is nothing more I can say.
Looks like that sentence led to a misunderstanding. I was only talking about flash sync speed. It seemed obvious to me from the context, but maybe not after all.

I'll re-phrase it then:

As far as flash sync mode is concerned, the D300s has limitations which the D70 does not have.

Of course the D300 is a much better camera in most other aspects. Still, the absence of a true 1/500 flash sync speed (the real thing, not the FP gimmick) on the D300 is a step backwards.

--
Thierry
 
In other words, the D70 has some really serious limitations compared with the D300...
Never meant otherwise. Our discussion was about flash sync speed without FP mode (and AA filters), and about these aspects of the cameras only.

Hope that clarifies.
--
Thierry
 
Here's a D300 sample raw from the original D300 review on this site, converted with ACR:



The D300 ACR conversion crop above shows artifacts at around Nyquist, about 1400 line pairs per picture height, or 28 on that chart.
Sorry I didn't reply before. I missed your post and was away for several days.

Very few artifacts. In fact it shows significant detail loss (lines fused together) way before Nyquist.

You want it all: no moiré and no detail loss before Nyquist. It can't be done. It's called a compromise. If the filter's cutoff frequency is close to Nyquist it will show quite a bit of moiré around that point.
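
To make that compromise concrete, here is a toy one-dimensional sketch in Python/numpy (illustrative only; the numbers are made up and model no real camera). A line pattern above Nyquist aliases to a false, coarser pattern when point-sampled; a crude one-pixel "AA filter" attenuates the alias, and a stronger filter would suppress it further while also softening genuine detail below Nyquist:

import numpy as np

# Hypothetical 1-D "sensor" of 100 pixels viewing a 65-cycle line pattern.
# Nyquist for 100 samples is 50 cycles, so 65 cycles must alias.
pixels = 100
x = np.linspace(0.0, 1.0, 20000, endpoint=False)   # near-continuous scene
freq = 65.0
scene = 0.5 + 0.5 * np.sin(2 * np.pi * freq * x)

# No AA filter: point-sample at pixel centers -> a false 35-cycle pattern.
centers = (np.arange(pixels) + 0.5) / pixels
no_aa = 0.5 + 0.5 * np.sin(2 * np.pi * freq * centers)

# Crude AA filter: average the scene over each pixel's footprint (a box
# about one pixel wide) before sampling. The alias survives at much
# lower contrast; a wider box would kill it (and blur real detail too).
with_aa = scene.reshape(pixels, -1).mean(axis=1)

print("alias contrast, no AA filter:  ", round(no_aa.max() - no_aa.min(), 2))
print("alias contrast, with AA filter:", round(with_aa.max() - with_aa.min(), 2))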

A strong AA filter may work for you, I can understand that. But don't fool yourself into believing that there is no price in terms of details.

Compare to the D70 and you'll see.

BTW thanks for the link. I forgot about dpreview's tests. I should have started by posting that image. The results are even worse than what I remembered before starting the thread.
--
Thierry
 
if you're talking about monochromatic sensors as opposed to bayer sensors you're rather specifically not interested in color perception.
Not true. See below for what the L channel of LAB is about.
you're the one that keeps bringing up Lab.

monochromatic sensors produce monochromatic images. the only part that has anything to do with color, in that equation, is the frequency response range of the monochromatic sensor. which, btw, is not typically only green, nor is it identical to the human visual response. as that page i linked previously demonstrates, you need a special UV/IR filter unless you want to capture that data as well.
The 1st version of Photoshop came out in 1990. The current definition of CIELAB dates back to 1976. But you know so much about LAB that you did not imply that LAB is a by-product of Photoshop, right? :-)
yes, but your average person didn't have much exposure to things like color spaces before photoshop. similarly, not many people used online message boards before the advent of the internet, even though such things existed. and not a lot of people walked around with portable mp3 players before the ipod. popularization, not invention.

in any case, the date you'd want for photoshop here is 1991, as version 1.0 was, and i know this is a shocker here, monochromatic. it did black pixels and white pixels, as it was written on and for the macintosh plus, which lacked a color monitor.
Intuitively we think of a color as having some amount of grey.
i don't. i think of gray as having some color components, specifically, whichever are the three primaries of the color system you're using.
A dark color would match a darker grey.
two fields of the same value have the same value, regardless of their hue, or lack of hue. it's tautological: either they have the same value, or they do not. yes, if you're mixing paint and you mix a red pigment into a neutral gray you've made, you will modify the value, because the pigment itself has some inherent value. (conversely, if you mix a neutral valued paint into that red, you will modify the intensity at the same time as the value).
And consequently we also think intuitively of a conversion of a color image to a Black & white one.
however, this is not what is happening in a digital camera. a digital camera has an array of monochromatic sensors, with colored filters over them in usually the bayer pattern. the color image is interpolated from the monochromatic data.
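
as a rough sketch of that interpolation step, here's a toy bilinear demosaic in Python/numpy (assuming an RGGB layout; real raw converters use far more sophisticated, edge-aware algorithms):

import numpy as np

def mosaic(rgb):
    # Simulate a Bayer (RGGB) sensor: each photosite records one channel.
    h, w, _ = rgb.shape
    raw = np.zeros((h, w))
    raw[0::2, 0::2] = rgb[0::2, 0::2, 0]   # red photosites
    raw[0::2, 1::2] = rgb[0::2, 1::2, 1]   # green photosites (even rows)
    raw[1::2, 0::2] = rgb[1::2, 0::2, 1]   # green photosites (odd rows)
    raw[1::2, 1::2] = rgb[1::2, 1::2, 2]   # blue photosites
    return raw

def demosaic(raw):
    # Toy bilinear demosaic: each missing sample is the average of the
    # known samples of that channel in the surrounding 3x3 neighborhood.
    h, w = raw.shape
    known = np.zeros((h, w, 3), dtype=bool)
    known[0::2, 0::2, 0] = True
    known[0::2, 1::2, 1] = True
    known[1::2, 0::2, 1] = True
    known[1::2, 1::2, 2] = True
    out = np.empty((h, w, 3))
    for c in range(3):
        vals = np.pad(np.where(known[:, :, c], raw, 0.0), 1)
        mask = np.pad(known[:, :, c].astype(float), 1)
        acc = sum(vals[i:i+h, j:j+w] for i in range(3) for j in range(3))
        cnt = sum(mask[i:i+h, j:j+w] for i in range(3) for j in range(3))
        out[:, :, c] = acc / np.maximum(cnt, 1.0)
        out[known[:, :, c], c] = raw[known[:, :, c]]   # keep measured values
    return out

img = np.random.rand(8, 8, 3)
print(np.abs(demosaic(mosaic(img)) - img).mean())   # small, but nonzero, error
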
LAB is a fruit of colour science, which studies how human beings perceive color (and that encompasses greys as well). The L channel in LAB matches this human perception of the amount of grey in a color.
correct. but this is not how humans see. in humans, the same visual cell that is responsible for scotopic vision is also partly responsible for the blue and green parts of the opponent process, along with one of the cones. this is why blues and greens appear stronger at dusk. (and probably why bayer chose to duplicate the green).

there are three types of cone cells, one predominantly responsible for reds (L), one for greens (M), and one for blues (S). strictly speaking, even though Lab tries to duplicate the neurological opponent process, RGB actually better duplicates the biological way humans see color.
Tip: want to convert a color image to a B&W one that matches our perception of how a B&W image should look? Just convert to LAB using image-editing software that supports LAB (Photoshop being one possibility), extract the L channel, and voilà, you have your conversion.
human beings do not see in black and white. there is no universal perception of how a b+w "should" look. in the film days, i typically used red and yellow filters. now with digital cameras, i will typically do my b+w conversions in multiple parts, essentially using a red filter where i see fit, a green filter elsewhere, and a blue elsewhere.
for instance, see ansel adams "the camera" or "the negative" where he uses the term interchangeably with "brightness."
"the term" here being luminance and not lightness as in LAB color. since we've now lost context thanks to your repeated insistence that we're apparently talking about LAB color.
Ansel Adams used negatives and papers which, under proper development, would give a sensible image, one that matches reasonably well a perceptually correct conversion to B&W. The manufacturers of such negatives and papers saw to it. Ansel Adams did not have to worry about it, just like you don't: manufacturers of dSLRs and writers of RAW converters saw to it.
this is just so plainly ignorant i don't even know where to begin.

i'm not even a big ansel adams fan; i just happen to think he wrote a very elegant trilogy of books on the technical side of photography. but if you look at an ansel adams photograph and think the people who manufactured his film or paper saw to it that the image matched a perceptually correct b+w version of reality, you are sorely mistaken, and should probably go take another art appreciation class. ansel adams neither tried to be "perceptually correct" nor did he rely on the way his film or paper was manufactured. rather, his prints are the result of countless hours of tweaking, waiting on top of mountains for just the right light, filtration, exposure/rating modification, development modification, dodging, burning, and all kinds of alteration you cannot even begin to imagine, in order to arrive at a photo ansel thought appropriately captured the way he wanted to portray a scene, or the emotional quality thereof. this is why a modern print from an ansel adams negative is worth significantly less than an ansel adams print.

in any case, most b+w film certainly does not have the same color response curve as human vision, and b+w paper is wholly insensitive to red.
 
perhaps, because the rod photoreceptor cells, which are primarily used for lightness, are sensitive to blue-green.
No, rod photoreceptor cells are used only for low light (scotopic) vision. Human vision under normal lighting conditions ignores them (photopic vision).
what's "normal" exactly? in any case, you're totally discounting mesopic vision, where both rods and cones are active. also, this is not a refutation of what i said, that rods are more sensitive to green/blue. see, for instance, here: http://en.wikipedia.org/wiki/Purkinje_effect
No. He used 2 greens for 1 R and 1 B because human vision extracts more lightness detail from the green channel than from the others.
again, see here: http://en.wikipedia.org/wiki/Bayer_filter
Bryce Bayer's patent (U.S. Patent No. 3,971,065) in 1976 called the green photosensors luminance-sensitive elements and the red and blue ones chrominance-sensitive elements. He used twice as many green elements as red or blue to mimic the physiology of the human eye. The retina has more rod cells than cone cells and rod cells are most sensitive to green light.
when a camera makes a b+w output image, it's usually primarily based on green.
No, it is based on extracting the L channel. But the preponderant term in that conversion comes from the green channel.
ahem, no. when you put your camera into "b+w" mode, it's taking its lightness information from the photosites covered in green microfilters. i assure you, your camera is not shooting in lab color mode. it's not even using any color mode. it's recording values from monochromatic sensors covered in colored filters. it applies a color space after the fact to the image it is interpolating from the raw RGB data. and i'm not aware of any cameras where LAB is even an output option.
it is, yes, but it's only recording those photons it's collecting: it's recording brightness.
Yes, from the point of view of the camera, and then no: it is not collecting brightness as our eyes would see it.
perhaps not. cameras are not eyeballs, and vice versa.
The only thing the camera collects is some amount of light that goes through a filter. The combination of filters, sensor, and demosaicing must be based on how human vision perceives colors to give a sensible result. Right?
no, don't be silly. of course not. people have shot with sensors -- films -- that do not have the same response curves to color as human vision does, and produce monochromatic results, for a century and a half. and they make sense just fine. as i mentioned above, i used to shoot predominantly with a red filter (which specifically removes all green values) for years. my photos made perfect sense.
Disagree, as proved by comparisons of pictures with and without AA filters (if you know what to look for ...).
...am i looking for the moire? no, seriously, if you poke around on maxmax.com, they've got some wonderful example pics that don't look substantially different than a gentle sharpening. now, their argument for "monochromatic cameras have more resolution" is much more convincing.
the color back resolves detail 1 pixel wide, and the author feels that the achromatic version does so more precisely. we're not talking about a difference in resolution, we're talking about a difference in precision.
Above all let's not distort the meaning of the sentence. What he means is obvious: that the details have much more clarity without an AA filter, or, put in other words, that they are not blurred or somewhat destroyed by the AA filter.
i'm thoroughly convinced, at this point, that you haven't the foggiest idea what you're talking about. especially as that page was about the difference between a monochromatic camera and a bayer camera, neither of which possesses an AA filter.
 
Here's a D300 sample raw from the original D300 review on this site, converted with ACR:



The D300 ACR conversion crop above shows artifacts at around Nyquist, about 1400 line pairs per picture height, or 28 on that chart.
Sorry I didn't reply before. I missed your post and was away for several days.

Very few artifacts. In fact it shows significant detail loss (lines fused together) way before Nyquist.

You want it all: no moiré and no detail loss before Nyquist. It can't be done. It's called a compromise. If the filter's cutoff frequency is close to Nyquist it will show quite a bit of moiré around that point.

A strong AA filter may work for you, I can understand that. But don't fool yourself into believing that there is no price in terms of details.

Compare to the D70 and you'll see.

BTW thanks for the link. I forgot about dpreview's tests. I should have started by posting that image. The results are even worse than what I remembered before starting the thread.
Perhaps you should have checked this out then:



So what we actually see is that with a weak AA filter (D70), not only is there significant colour moiré (OK, the colour artefacts can be corrected in software) but also aliasing artefacts well before Nyquist (1000 lp/ph, "20" on the chart). There is also detail loss (fused lines) at the top of that crop of the chart - way before Nyquist. If you look at the full images, then the individual lines seem to be resolved fully at about 19-20 for the D300, about 12-13 for the D70.

So, the "full" resolution of the D70 is at about 65% of nyquist, and about 70% for the D300 and the moire/aliasing free resolution of the D70 is not as good as the D300 expressed as a % of total resolution by a huge margin.

So, I'm afraid I still disagree with you. The D70/D300 comparison does not show resolution loss (relative to system resolution) from a stronger AA filter. The high level of artefacts for no resolution gain in the D70 shows me that its AA filter is not well optimised at all.

OK, so the bottom line for me is that it's subjective in real-life situations. I used a D70 for years. Yes, the lack of an AA filter means images can look nice and crisp on screen. If that's what you want, get your D300 converted and enjoy it.
 
I think we don't really know how bad the moire
would be without the AA filter. Nikon put the filter
in because they thought it was a substantial problem.

Images from the D300, D3, D3X, etc are excellent.

maljo
 
So what we actually see is that with a weak AA filter (D70), not only is there significant colour moiré (OK, the colour artefacts can be corrected in software)
yes
but also aliasing artefacts well before Nyquist (1000 lp/ph, "20" on the chart)
yes
There is also detail loss (fused lines) at the top of that crop of the chart - way before Nyquist.
There I don't see it the same way. The fusing of the lines, as I see it, occurs very near Nyquist for the D70 (admittedly the moiré makes it a little more difficult to assess, and it tends to come and go), and way before Nyquist for the D300 (where it comes and never goes away).
So, I'm afraid I still disagree with you.
While I respectfully continue to think otherwise.
--
Thierry
 
I think we don't really know how bad the moire
would be without the AA filter. Nikon put the filter
in because they thought it was a substantial problem.

Images from the D300, D3, D3X, etc are excellent.
I agree, and that's why I said earlier that I trust Nikon, or Olympus, more on this issue than any other person on this forum. These discussions usually end the same way: one guy, or a very small group, claims a camera has too strong an AA filter and would be better without it. For everyone else, the AA filter does not represent the limit, or any problem, concerning detail in images, whereas moiré would probably make me stop enjoying digital cameras altogether. Even if images may be sharper without the AA filter, removing it would be crazy, not only because it would remove the dust shaker as well, but because it would introduce moiré, which is far worse than the possible softness the AA filter causes.
 
perhaps, because the rod photoreceptor cells, which are primarily used for lightness, are sensitive to blue-green.
No, rod photoreceptor cells are used only for low light (scotopic) vision. Human vision under normal lighting conditions ignores them (photopic vision).
what's "normal" exactly?
more than 100 cd/m2
in any case, you're totally discounting mesopic vision, where both rods and cones are active
Because it adds nothing to the discussion.
also, this is not a refutation of what i said, that rods are more sensitive to green/blue.
My refutation is that rod cells are not involved in human vision under regular light.
No. He used 2 greens for 1 R and 1 B because human vision extracts more lightness detail from the green channel than from the others.
again, see here: http://en.wikipedia.org/wiki/Bayer_filter
I've checked: all the info you provide here and elsewhere comes from Wikipedia, quoted verbatim or very close to it. That's all well and good, but articles in Wikipedia can be edited by anybody and are often not very accurate.

Instead of Wikipedia, let's quote Prof. Mark Fairchild, a highly respected professor of color science at the top institution in this field:
http://www.cis.rit.edu/fairchild/

"The most important distinction between rods and cones is in visual function. Rods serve vision at low levels (e.g. less than 1cd/m2). While cones serve vision at higher luminance levels. Thus the transition from rod to cone vision is one mechanism which allows our visual system to function over a large range of luminance levels. At high luminance levels, (e.g. greater than 100 cd/m2) the rods are effectively saturated and only the cones function and contribute to vision (Marck Fairchild Color Appearances Models 2nd Edition Wiley page 8 (talk about advanced knowledge which is right there after a few introductory paragraphs).

Now let's read the Wikipedia article, which is all you know:
Bryce Bayer's patent (U.S. Patent No. 3,971,065) in 1976 called the green photosensors luminance-sensitive elements and the red and blue ones chrominance-sensitive elements. He used twice as many green elements as red or blue to mimic the physiology of the human eye. The retina has more rod cells than cone cells and rod cells are most sensitive to green light.
True the retina has more rod cells. Colour vision just does not use them in "regular" light. Talk about a reliable source.

Oh, and BTW, the ratio of the number of rods to the number of cones is way higher than 2...

Forget the middleman (Wikipedia) and read the patent itself (referenced at the end of the Wikipedia article). Bayer never mentions rods or cones. He does repeat the word luminance. It is clear from the reading that he alludes to a human perception, and not to the amount of light that goes through a filter. It is also clear that he talks about the quality of a color after abstracting away its chroma. That's lightness in Lab. He mentions repeatedly (and correctly) that perception of lightness matters more than perception of chroma, that green provides the dominant contribution to the perception of luminance, and that he chose 2 greens for 1 red/blue to improve lightness detail at the expense of chroma detail. Which is what I've stated from the get-go :-)
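
A small Python illustration of that trade-off, using the standard sRGB/Rec. 709 luminance weights (a modern convention, not Bayer's own numbers):

# Photopic luminance is roughly Y = 0.2126 R + 0.7152 G + 0.0722 B
# (sRGB / Rec. 709 weights). Green alone carries about 72% of Y, which
# is the usual justification for giving it half the photosites.
weights = {"R": 0.2126, "G": 0.7152, "B": 0.0722}
sites = {"R": 1, "G": 2, "B": 1}              # photosites per 2x2 Bayer tile
for ch in ("R", "G", "B"):
    print(f"{ch}: {weights[ch]:.0%} of luminance, {sites[ch]}/4 of photosites")
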
No, it is based on extracting the L channel. But the preponderant term in that conversion comes from the green channel.
ahem, no. when you put your camera into "b+w" mode, it's taking its lightness information from the photosites covered in green microfilters. i assure you, your camera is not shooting in lab color mode.
Of course not: it does not extract all the LAB channels, only the L channel.
it's not even using any color mode. it's recording values from monochromatic sensors covered in colored filters.
Of course.
it applies a color space after the fact to the image it is interpolating from the raw RGB data.
True (never said otherwise). That gives an image in color. For B&W it extracts the L channel, which can be readily calculated from, say, the sRGB image.
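
Here is a minimal Python/numpy sketch of that calculation (CIE L* from normalized sRGB with D65 white; the same math that rgb2lab-style library routines implement, shown here only for illustration):

import numpy as np

def srgb_to_L(rgb):
    # CIE L* (the L channel of Lab) from sRGB values in [0, 1], D65 white.
    rgb = np.asarray(rgb, dtype=float)
    # Undo the sRGB transfer curve to get linear light.
    lin = np.where(rgb <= 0.04045, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)
    # Relative luminance: note the preponderant green term.
    Y = lin @ np.array([0.2126, 0.7152, 0.0722])
    # CIE lightness: a cube root above a small linear segment near black.
    f = np.where(Y > (6 / 29) ** 3, np.cbrt(Y), Y / (3 * (6 / 29) ** 2) + 4 / 29)
    return 116.0 * f - 16.0

# A B&W conversion is then just this, applied per pixel:
print(srgb_to_L([[0.5, 0.5, 0.5],     # mid grey         -> L* around 53
                 [0.0, 0.5, 0.0]]))   # same-value green -> darker, around 46
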
and i'm not aware of any cameras were LAB is even an output option.
Me neither. Who's talked about LAB as an output option? Not me. Just the L channel. Its evaluation does not require determining the other A/B channels.
no, don't be silly. of course not. people have shot with sensors -- films -- that do not have the same response curves to color as human vision does
Not quite true. Irrelevant in our new digital world anyway.

as i mentioned above, i used to shoot predominantly with a red filter (which specifically removes all green values) for years. my photos made perfect sense.
Post one and I'll tell you what does not make sense. At all.
i'm thoroughly convinced, at this point, that you haven't the foggiest idea what you're talking about.
I know: I don't swear by Wikipedia.

--
Thierry
 
what's "normal" exactly?
more than 100 cd/m2
i believe you missed the point of the question. human vision is reasonably well adapted for a variety of lighting conditions (your source below agrees). though we do not see as well as nocturnal animals at night, we are not totally crippled in the dark, or fading light, either. the definition that photopic vision is "normal" and scotopic vision is "not" is entirely arbitrary. the ratios of the cells would perhaps indicate otherwise.
in any case, you're totally discounting mesopic vision, where both rods and cones are active
Because it adds nothing to the discussion.
on the contrary, as the rest of this post will show.
My refutation is that rod cells are not involved in human vision under regular light.
which is fine. but camera sensors are not human eyes -- there aren't two (or three) modes of vision. all photosites are used, all the time, in bayer interpolation.
I've checked: all the info you provide here and elsewhere comes from Wikipedia, quoted verbatim or very close to it. That's all well and good, but articles in Wikipedia can be edited by anybody and are often not very accurate.
it's an easy source at hand. the "wikipedia" objection is frequently just pedantic. if it's wrong, show that it's wrong.
"The most important distinction between rods and cones is in visual function. Rods serve vision at low levels (e.g. less than 1cd/m2). While cones serve vision at higher luminance levels. Thus the transition from rod to cone vision is one mechanism which allows our visual system to function over a large range of luminance levels. At high luminance levels, (e.g. greater than 100 cd/m2) the rods are effectively saturated and only the cones function and contribute to vision
that's nice. but that neither comments on bayer's motivations, nor does it actually demonstrate your original point, which i'm fairly sure you've now lost track of. and you're ignoring the whole range between 1 and 100 cd/m2, where both sets of cells are used -- color is still sensed, at a lower level, and luminance information is primarily provided by cells which respond best to green wavelengths.

i'm not even actually sure why you're arguing against this point. clearly that's the same sort of compromise bayer was attempting to make between (green sensing) luminance detail and color detail. you say green is the preponderant term in luminance in vision, and i agree that it is under certain conditions. why argue?
True the retina has more rod cells. Colour vision just does not use them in "regular" light. Talk about a reliable source.
it does help if you don't randomly interpret things to mean whatever you want. both of the facts that it gives, that rods are more sensitive to green light, and that there are more of them, happen to be correct. your assumption that bayer's "mimic" of the human eye is a perfect analogy is incorrect. bayer camera sensors do not have two (or three) modes of vision, where it will selectively use one group of photosites or another, depending on luminance. i emphasize "bayer" here because fuji happens to make sensors that do indeed function in approximately this way. bayer, on the other hand, appears to have adopted a compromise where the primary luminance input is derived from green -- guess which vision mode this more closely mimics?
Forget the middleman (Wikipedia) and read the patent itself (referenced at the end of the Wikipedia article). Bayer never mentions rods or cones. He does repeat the word luminance.
yes, he's submitting a patent application for a camera sensor, not a human retina. it is partly based on the human visual process, but is not a description of the human vision under all conditions.
It is clear from the reading that he alludes to a human perception, and not to the amount of light that goes through a filter. It is also clear that he talks about the quality of a color after abstracting away its chroma. That's lightness in Lab. He mentions repeatedly (and correctly) that perception of lightness matters more than perception of chroma, that green provides the dominant contribution to the perception of luminance, and that he chose 2 greens for 1 red/blue to improve lightness detail at the expense of chroma detail. Which is what I've stated from the get-go :-)
you are still conflating human vision with camera sensors. influence and mimicry do not mean duplication.

the simple fact of the matter is that green is only the preponderant factor of luminance detail in human vision under scotopic and mesopic conditions, and it's because those rod cells are more sensitive to green light. the fact that this is the case is relatively easy to demonstrate, and that other wiki article i linked should have been the hint.

however, as you say, rod cells are unused in the photopic process, at higher luminance values. and you will find that the distribution of cone cells becomes important in sensing luminance values. if i were to show you a chart, under decent lighting conditions, that displayed pure red, pure green, and pure blue, you would probably place the red and the green at approximately the same brightness, and the blue would appear darker. most normal adult human beings will do about the same. this is because there are relatively fewer short-wavelength sensing cone cells than there are the medium and long wavelength sensing ones. between the M and L, the distribution can vary from one person to the next.

but i suspect that this conflation of camera and eye is why you are seemingly trying to argue contradictory claims. yes, it is true that in the bayer process, green is the most important factor in determining luminance, and it is also the case that green is the most important in low-light human vision. but it is not the case that green is the most important factor of luminance in general, which is a simple measure of brightness irrespective of color.
 
Even if images may be sharper without the AA filter, removing it would be crazy,
well, to be fair, it's not that crazy in the grand scheme of things. you do get some increased detail resolving power, but at a cost of aliasing/moire, and about $400.

now, if we want to talk crazy it's also possible to remove the IR and UV in the same process, and record all of the light the camera is capable of sensing. you'd have to put IR/UV filters in front of your lens to block it out, and that'll likely mess up AF and metering. but without anything, you get close to a two stop improvement in sensitivity.

and maxmax is apparently working on a way to convert bayer cameras to monochrome.
not only because it would remove the dust shaker as well
strictly speaking, if you get it done right your camera won't be missing any pieces afterward. the AA/IR/UV filter glass is simply replaced.
 
Even if images may be sharper without the AA filter, removing it would be crazy,
well, to be fair, it's not that crazy in the grand scheme of things. you do get some increased detail resolving power, but at a cost of aliasing/moire, and about $400.
...which is why I think it would be crazy to do. I don't want moiré and I want the dust shaker to work.
now, if we want to talk crazy it's also possible to remove the IR and UV in the same process, and record all of the light the camera is capable of sensing. you'd have to put IR/UV filters in front of your lens to block it out, and that'll likely mess up AF and metering. but without anything, you get close to a two stop improvement in sensitivity.

and maxmax is apparently working on a way to convert bayer cameras to monochrome.
not only because it would remove the dust shaker as well
strictly speaking, if you get it done right your camera won't be missing any pieces afterward. the AA/IR/UV filter glass is simply replaced.
OK, replacing the filter pack with clear glass saves the dust shaker, if the surgery is successful. I am still not interested, not even if it were done free of charge, since it would still result in moiré, which is far worse than some loss of pixel detail.
 
...and about $400.
...which is why I think it would be crazy to do. I don't want moiré and I want the dust shaker to work.
ignoring for a second that it shouldn't interfere with sensor cleaning, my camera lacks a sensor cleaner. i'd have no problems there.

i'm just not paying $400 for someone to open up my camera and do surgery on it. it just makes me nervous. and i just don't see results that make me think it's worth it.
OK, replacing the filter pack with clear glass saves the dust shaker, if the surgery is successful. I am still not interested, not even if it were done free of charge, since it would still result in moiré, which is far worse than some loss of pixel detail.
six of one, half dozen of the other. everything in photography is compromise. i'm not rushing out to do this myself. i'm just saying, i can see why some might want it done.
 
i'm just not paying $400 for someone to open up my camera and do surgery on it. it just makes me nervous. and i just don't see results that make me think it's worth it.
Exactly. I have yet to see evidence that it is worth the trouble or the money. As I said, this has been discussed on the Olympus forum as well for many years, but no one has provided any evidence or proof. Most of the time people, just like in this thread, compare one camera with a supposedly weaker AA filter to another camera with a supposedly stronger one. What they often miss is that in most cases the cameras are several years apart, use different sensors, and have significantly different megapixel counts, internal electronics, and firmware. I have not yet seen any evidence of advantages, just a lot of talk.
 
Exactly. I have yet to see evidence that it is worth the trouble or the money. As I said, this has been discussed on the Olympus forum as well for many years, but no one has provided any evidence or proof. Most of the time people, just like in this thread, compare one camera with a supposedly weaker AA filter to another camera with a supposedly stronger one. What they often miss is that in most cases the cameras are several years apart, use different sensors, and have significantly different megapixel counts, internal electronics, and firmware. I have not yet seen any evidence of advantages, just a lot of talk.
frankly, i find comparing a d70 to a d300 laughable. sure, the d70 has a weaker AA filter, but if you actually go look at those demonstration charts, guess which one resolves more detail? the one with twice the pixel density of the other one. nah, that's probably just a coincidence, right? :P

the appropriate way to test is one camera against itself, before and after AA filter removal. the folks over at maxmax.com have done just that, and the proof is there -- there is an improvement.

$400 worth? i dunno. that's up to you.
 
