Antialiasing (AA) filter robs us of too much detail

See the post by Anthony Medici here:

https://www.naturescapes.net/phpBB3/viewtopic.php?f=1&t=140141&start=0#p1419237

He says that he and Thom Hogan have seen a 10-15% improvement in resolution on a D300 after the removal of the AA filter. He also points out some disadvantages but, interestingly, moire didn't seem to be a huge issue.

I dunno about anyone else but I would not consider a 10% increase in resolution to be inconsequential. Not by a long shot.

I do think the current D300 AA filter does somewhat more harm than good overall. It's a tricky balancing act but I'd certainly like to see Nikon move toward a slightly weaker AA filter.

The fact that the effects of the AA filter can be ameliorated with sharpening does not change the fact I'd like to capture the maximum amount of detail.
 
See the post by Anthony Medici here:

https://www.naturescapes.net/phpBB3/viewtopic.php?f=1&t=140141&start=0#p1419237

He says that he and Thom Hogan have seen a 10-15% improvement in resolution on a D300 after the removal of the AA filter. He also points out some disadvantages but, interestingly, moire didn't seem to be a huge issue.
Thank you for the link; it's very consistent with what I would expect in terms of the impact of the AA filter.

It also reminded me of one of the reasons why I did not go for the AA removal: it means the loss of the anti-dust feature on a D300. I hated having to clean the D70 sensor, and I had to do it more often than I care to admit, as I frequently swap lenses.
I dunno about anyone else but I would not consider a 10% increase in resolution to be inconsequential. Not by a long shot.
If you think about it, one needs about 20% more pixels for a 10% resolution increase, since linear resolution scales with the square root of the pixel count. That's roughly a 14.4 MP sensor. But I'd rather have larger pixels in a 12 MP sensor and get a little less noise and more DR.
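The arithmetic, spelled out as a quick back-of-envelope check (using the D300's nominal 12.1 MP figure):

```python
# Linear resolution scales with the square root of the pixel count,
# so a 10% resolution gain needs about 21% more pixels.
mp = 12.1                  # nominal D300 pixel count, in megapixels
gain = 1.10                # desired linear resolution increase
needed = mp * gain ** 2    # pixel count scales with the square of the gain
print(f"{needed:.1f} MP")  # ~14.6 MP, in the ballpark of the ~14.4 MP above
```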
I do think the current D300 AA filter does somewhat more harm than good overall. It's a tricky balancing act but I'd certainly like to see Nikon move toward a slightly weaker AA filter.
I'm totally with you here. I wouldn't want a camera without an AA filter at all. The D70 filter was too weak for my liking. The ideal compromise for me is between the D70 and D300.
The fact that the effects of the AA filter can be ameliorated with sharpening does not change the fact I'd like to capture the maximum amount of detail.
I'll restate what I've written elsewhere: you need more aggressive sharpening with the AA filter, which is not necessarily what you want because sharpening comes with its own price (halos etc.)
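To make that trade-off concrete, here is a minimal unsharp-mask sketch (assuming a float image in [0, 1] and SciPy at hand; a sketch, not anyone's actual workflow). The larger "amount" you need to fight AA blur is exactly what produces halos at high-contrast edges:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(img, radius=1.0, amount=0.8):
    """Classic USM: add back a scaled difference between the image
    and a blurred copy. Larger 'amount' = stronger edges = more halos."""
    blurred = gaussian_filter(img, sigma=radius)
    sharpened = img + amount * (img - blurred)
    # Clipping is where over-driven edges turn into visible halos.
    return np.clip(sharpened, 0.0, 1.0)
```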

--
Thierry
 
Has anyone here actually had the AA filter removed on a D300/300s? A weaker AA filter might be nice, but no filter at all worries me because I haven't seen any software to fix moiré. Otherwise I'd much rather have sharper detail and post-process to correct for moiré: all my images would have better detail and I could selectively fix the few that needed it.
See the link that another poster has provided here:
http://forums.dpreview.com/forums/read.asp?forum=1039&message=35330160

Problems with the filter removed:
  • may void the warranty
  • loss of the anti-dust functionality
  • more cure than you need: as someone mentioned elsewhere, there is something called the Nyquist frequency, and there is no benefit to an AA filter whose cut-off lies above it (see the sketch after this list)
  • not cheap, considering $400 is about a quarter of the cost of a new D300s
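On the Nyquist point, here is a rough sanity check using the D300's published sensor figures (23.6 mm width, 4288 photosites across; back-of-envelope only):

```python
sensor_width_mm = 23.6                      # D300 sensor width
pixels_across = 4288                        # D300 horizontal photosites
pitch_mm = sensor_width_mm / pixels_across  # ~0.0055 mm (5.5 micron) pitch
nyquist_lp_mm = 1 / (2 * pitch_mm)          # ~91 line pairs per mm
print(f"Nyquist ~{nyquist_lp_mm:.0f} lp/mm")
# Detail finer than this cannot be recorded faithfully anyway, so an AA
# filter whose cut-off sits far above it buys nothing.
```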
--
Thierry
 
I'll restate what I've written elsewhere: you need more aggressive sharpening with the AA filter, which is not necessarily what you want because sharpening comes with its own price (halos etc.)
I completely agree. Sharpening does not restore the data that was lost to the AA filter. It only ameliorates the perceived effect of the AA filter while adding some artifacts of its own.
 
There is only one thing I miss from the D70, and that is the weaker AA filter (and, on a few occasions, the 1/500th flash sync speed, but the price to pay for that feature is too high). I don't mind the occasional moiré from a weaker filter, but I hate to be robbed of detail. Maybe the D70 AA filter was too weak, but the D300 filter is too strong. There must be a better compromise between these two cameras.

Do other people here feel that way? Do you think Nikon should revise the AA filter in future dSLRs?
Yes.

Tell me about it! I have an S5 (maybe the thickest AA filter out there). If our D70 ever dies, I should send it to Nikon along with the S5 and get the AA filters swapped.
--
Thierry
 
Please tell me which lenses your D200 prefers. I shoot with just a D200 (love it, and am waiting to upgrade to Nikon's next FF, due anytime). I have noticed that a few select lenses give me incredible sharpness, detail, and an almost 3D depth where my photo looks like I am looking out the window, not at a photo--reach out and touch.
yes. i'm not knocking the d200, at all. on the contrary, with a few select lenses, as you say, photos can look just incredible.

i haven't, of course, done any real official testing. just taking notes on which of my lenses it prefers, and which it does not. as the person below your comment mentions, that 50mm 1.8 AIs is among the lenses that seem to do really well on it. even wide-open, if you can get the focus right. a few other MF primes really seem to shine, too.

it loves my new 17-55, and my old 70-210 non-d. but not the 18-70. MF zooms are right out.
 
Wrong according to color science. Luminance is a weighted average of red, green, and blue in which green is the most significant term. That is why the practical choice of filter array is 2 green pixels for one red and one blue: to extract more luminance information. Red and blue contribute much less. So the luminance info cannot be better than 3/4 of the resolution of the pixel array, and is close to that value in practice.
cart before the horse. green is the most prominent factor in luminance because there are two green photosites on the sensor for every one red and one blue. if you look up "luminance" in any non-photographic source, i promise it doesn't use the word "green" anywhere.
depending on the sensor and the demosaicing involved, there might be some averaging going on in the interpolation
What do you mean "there might be"? Demosaicing does involve interpolation, which is nothing more than an educated guess based on a clever form of averaging.
averaging of luminance values, i mean. remember, each photosite collects only luminance values, and the color is interpolated from the positions. demosaicing probably does some alteration of the luminance values collected, unless the filters all have the same density. which they probably do not -- green would be the least dense, which would be why there are two of them. however, this is a generalization, i can't speak for all sensors or all demosaic algorithms. certainly something else entirely goes on with a foveon sensor.
Like all interpolation, it will work if there is color continuity locally in this area. It will be wrong if not, i.e. when at the edge of a detail. The interpolation will blur the edge.
yes, but only of the color. depending on what kind of magic is done between the luminance values, you might see slightly fuzzier edges. this, like the above AA issue, is generally well within the range of some tasteful USM.
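to make that "averaging" concrete, here's a deliberately naive bilinear sketch of green-channel interpolation on an RGGB mosaic (a toy under simplifying assumptions, not any real camera's pipeline -- real demosaicers are smarter around edges):

```python
import numpy as np
from scipy.ndimage import convolve

def demosaic_green(bayer):
    """bayer: 2-D array of raw photosite values in RGGB layout.
    Returns a full-resolution estimate of the green channel."""
    h, w = bayer.shape
    green_mask = np.zeros((h, w), dtype=bool)
    green_mask[0::2, 1::2] = True   # G sites in the red rows
    green_mask[1::2, 0::2] = True   # G sites in the blue rows
    green = np.where(green_mask, bayer, 0.0)
    # At red/blue sites, average the four green neighbours -- this is
    # exactly the local averaging that softens a hard edge.
    kernel = np.array([[0, .25, 0],
                       [.25, 0, .25],
                       [0, .25, 0]])
    return np.where(green_mask, bayer, convolve(green, kernel, mode='mirror'))
```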
Then explain to me: why does Thom Hogan claim the resolution would be higher without a Bayer filter? According to you, one can have both color and resolution. What is Thom missing?
http://www.luminous-landscape.com/essays/hogan-leica.shtml
"resolution" as in resolving power, not as in pixel-dimensions. and i'm not sure that claim is entirely correct.

looking at some crops from MF monochrome cameras vs the bayer counterpart here: http://www.luminous-landscape.com/reviews/cameras/achromatic.shtml , i just don't see it. it's ever so slightly sharper at 300%.

the author of that page writes this:
I had planned to do some pixel peeping comparisons with a P45+ and a P65+. In fact I did them, setting up the shot on a heavy tripod firmly planted on concrete, 120mm Macro lens at optimum aperture, manual focus triple checked with a 3X magnifier, mirror lockup, electronic cable release – the whole nine yards.
I processed the files, and examined them with and without optimum sharpening.
So why aren't the results here? Simply because I can't see any convincing difference in resolution or sharpness between them. Yes, the P65+ file is bigger and therefore will take more magnification. But between the Achromatic and the P45+ on which it's based I just don't consistently see anything to convince me that the non-Bayer Achromatic consistently offers sharper results.
This of course flies in the face of common wisdom, that the Bayer Array robs digital files of their inherent monochrome resolution. While that may be the case in the lab, in the real world, using the best shooting technique I know, I can't see it.
i can't comment on what thom is missing, or if he knows something i don't. but if the difference is so earth-shaking, why am i so underwhelmed by the difference at 39 megapixels?
True, although I don't see what this has to do with the subject of the discussion. Note that interpolation can be done at the edge. At least one RAW converter, RawMagick, yields the entire image captured by the sensor, and none of that "actual" pixels nonsense.
yes. however, most methods of interpolation discard those pixels, thus the difference. your 10mp will yield larger pixel dimensions cranked through rawmagick. my point was that this is where resolution is being discarded. not through interpolation.
A strong AA filter will rob you of some of the sharpness of your lens. That seems to me even less desirable if you've paid top dollar for that lens.
if you're paying top dollar for a lens, in my experience, it doesn't matter. it's less desirable if you have sub-par optics that need all the help they can get. even with a strong AA filter, good glass is well within the range of a gentle sharpening. bad glass, on the other hand, can't be saved.
Life is made of compromise, and you're fooling yourself in believing that you can have it both ways: 1) never having to deal with moiré, or only extremely rarely (as is the case for the D200 & D300), and 2) preserving the detail lost to an AA filter. Some people would say they prefer a strong AA filter a la D300 and never having to deal with moiré in post-processing. I certainly respect that view, even if I'd rather have it the other way. But you're living in a fairy tale if you think there is a magic AA filter in the D200/D300 which does it all. My advice: take Nikon's marketing with a grain of salt.
no, i didn't say it was magic. just that it happens to be relatively easy to deal with in post processing. truth be told, i've considered having mine removed at least once. life is a compromise -- i just haven't seen much to convince me that there would be any significant gain, for me. if you'd prefer the compromise the other way, that's fine.

of course, i also wouldn't mind a monochrome dSLR. i'm just not under the illusion that it would give me more resolution.
 
cart before the horse. green is the most prominent factor in luminance because there are two green photosites on the sensor for every one red and one blue.
Firstly, some definitions. We're only interested in color perception because the discussion is all about photography. What you refer to as luminance, separated from color (hue and saturation) perception, is Lightness (the L* in the LAB color space).

The most important contribution to lightness comes from green. That's how the human visual system works. Camera manufacturers (who unlike you are not novices in color science) know that: the RGB channels do not contribute evenly to our perception of luminosity (or lightness). Green is preponderant. That is why they put 2 G pixels for 1 R and 1 B pixel.
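For a concrete illustration, here are the widely used Rec. 709 luma weights (a broadcast standard, not any particular camera's internal numbers; actual pipelines differ in detail):

```python
def luma(r, g, b):
    # Rec. 709 coefficients: green dominates, blue contributes least.
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

# Contribution of each pure channel to luminance:
print(luma(1, 0, 0), luma(0, 1, 0), luma(0, 0, 1))  # 0.2126 0.7152 0.0722
```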
if you look up "luminance" in any non-photographic source, i promise it doesn't use the word "green" anywhere.
And if you open a source which is concerned with human color perception, you'll read exactly what I state above.
averaging of luminance values, i mean. remember, each photosite collects only luminance values,
You use the word "luminance" in so many contexts that you end up confusing yourself. A photosite collects photons. But that is through a filter. The end product is a determination of red, green, and blue components, or equivalently lightness and color (one can calculate one from the other).
and the color is interpolated from the positions. demosaicing probably does some alteration of the luminance values collected, unless the filters all have the same density. which they probably do not -- green would be the least dense, which would be why there are two of them.
They don't have identical sensitivities, but that has nothing to do with the choice of 2 green pixels for 1 red and 1 blue.
however, this is a generalization, i can't speak for all sensors or all demosaic algorithms. certainly something else entirely goes on with a foveon sensor.
Like all interpolation, it will work if there is color continuity locally in this area. It will be wrong if not, i.e. when at the edge of a detail. The interpolation will blur the edge.
yes, but only of the color. depending on what kind of magic is done between the luminance values,
You seem to believe that lightness can be inferred from the amount collected from any pixel. That is not true, as already explained.
you might see slightly fuzzier edges. this, like the above AA issue, is generally well within the range of some tasteful USM.
An additional amount of blur means stronger sharpening is required at the end. Stronger sharpening means more artefacts. The added blur from the filter makes it harder to find sharpening parameters which yield a tasteful result.
Then explain to me: why does Thom Hogan claim the resolution would be higher without a Bayer filter? According to you, one can have both color and resolution. What is Thom missing?
http://www.luminous-landscape.com/essays/hogan-leica.shtml
"resolution" as in resolving power, not as in pixel-dimensions. and i'm not sure that claim is entirely correct.
Yes, resolution as in resolving power. Thom is not only extremely familiar with photographic hardware and an experienced photographer, he is also clearly not a novice in color science. He is also careful in his statements, which are the result of experimenting, testing, and just a lot of knowledge. Therefore, if I were you, I would ask myself "what do I not know and what am I missing" rather than questioning his views.
looking at some crops from MF monochrome cameras vs the bayer counterpart here: http://www.luminous-landscape.com/reviews/cameras/achromatic.shtml , i just don't see it. it's ever so slightly sharper at 300%.
Disagree.
the author of that page writes this:
(snip) Simply because I can't see any convincing difference in resolution or sharpness between them.
There are two other people who've posted their "opinions" and conclude there is better resolution:
"The Achromatic back definitely has more resolution than a standard P45+ back"

"Even though that the P45+ has a fantastic ability to resolve details only 1 pixel wide, it is evident that the Achromatic+ back adds a whole new dimension to how precise these details can be defined"
BTW these two made an excellent choice of target for the intended test ...

--
Thierry
 
I have a D60 (similar sensor to D200) and a D300S (similar to D300) and with both cameras, good lenses at optimum apertures easily and frequently produce luminance aliasing and colour moire.
Really? Please posts links to D300 or D300s pictures with moiré.
Here's a D300 sample raw from the original D300 review on this site, converted with ACR:
[100% crop of the D300 resolution chart, ACR conversion]
The D300 ACR conversion crop above shows artifacts at around Nyquist, about 1400 lines / picture height, or 28 on that chart. Other raw converters do a better job, smoothing away lost resolution beyond Nyquist. ACR shows colour moire in samples from other cameras, but that might be just how it (ACR) is optimised.

Bizarrely, this site (in lens reviews and camera reviews) often reports "resolution" figures above the Nyquist limit of the device being measured.

Anyway, IMO, for what I want (a non-"digital look"), if the AA filter were too strong, all detail in the crop above would be lost before the Nyquist frequency. It isn't, but a better raw converter (than ACR) gives a cleaner, artifact-free transition beyond resolution extinction.
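A quick check of those numbers (the D300 frame is 4288 x 2848 photosites; I'm assuming the chart is marked in hundreds of lines per picture height):

```python
pixels_high = 2848       # D300 vertical photosite count
print(pixels_high / 2)   # 1424 line pairs / picture height -- the "~1400"
print(pixels_high / 100) # ~28, if the chart counts hundreds of lines
```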
 
Firstly, some definitions. We're only interested in color perception because the discussion is all about photography.
um, i hate to tell you, but if you're talking about monochromatic sensors as opposed to bayer sensors you're rather specifically not interested in color perception.
What you refer to luminance, separated from color (hue and saturation) perception, is Lightness (the L* in the LAB color space).
i'm sorry, i'm using it in the way any normal person would, instead of a technical distinction in various color spaces.

further, i'm using it the way any photographer would have prior to the invention of photoshop. for instance, see ansel adams "the camera" or "the negative" where he uses the term interchangeably with "brightness."
The most important contribution to lightness comes from green. That's how the human visual system works.
perhaps, because the rod photoreceptor cells, which are primarily used for lightness, are sensitive to blue-green.
Camera manufacturers (who unlike you are not novices in color science) know that: the RGB channels do not have even contribution to our perception of luminosity (or lightness). Green is preponderant. That is why they put 2 G pixels for 1 R and 1 G pixel.
er, well, let's get this straight. bryce bayer used 2 greens to mimic the human eye.

this does not mean that green is a factor in luminosity in general. perhaps in the demosaicing algorithm, green is more heavily weighted for luminosity. when a camera makes a b+w output image, it's usually primarily based on green.

this, however, does not mean that there are not other luminance values being recorded, and that somehow there are fewer (effective) pixels than in a sensor that doesn't need to be demosaiced.
You use the word "luminance" in so many contexts that you end up confusing yourself. A photosite collects photons. But that is through a filter. The end product is a determination of red, green, and blue components, or equivalently lightness and color (one can calculate one from the other).
it is, yes, but it's only recording those photons it's collecting: it's recording brightness.
You seem to believe that lightness can be inferred from the amount collected from any pixel. That is not true, as already explained.
not inferred. it's all that's being collected. color is inferred from position in the bayer pattern, and demosaicing.
An additional amount of blur means stronger sharpening is required at the end. Stronger sharpening means more artefacts. The added blur from the filter makes it harder to find sharpening parameters which yield a tasteful result.
to a degree, certainly. but in practice, it's not a degree that actually has any significance.
Yes, resolution as in resolving power. Thom is not only extremely familiar with photographic hardware and an experienced photographer, he is also clearly not a novice in color science. He is also careful in his statements, which are the result of experimenting, testing, and just a lot of knowledge. Therefore, if I were you, I would ask myself "what do I not know and what am I missing" rather than questioning his views.
okay. the first thing i'm missing is any evidence whatsoever of that significant increase in resolving power.
looking at some crops from MF monochrome cameras vs the bayer counterpart here: http://www.luminous-landscape.com/reviews/cameras/achromatic.shtml , i just don't see it. it's ever so slightly sharper at 300%.
Disagree.
that's great. shall we take a poll? it's certainly not showing a 133% increase in resolution. it's not the difference between, say, 39 and 52 mp. it's there, it's just so slight that i can't see why we'd be all worked up about this.
There are two other people who've posted their "opinions" and conclude there is better resolution:
"The Achromatic back definitely has more resolution than a standard P45+ back"

"Even though that the P45+ has a fantastic ability to resolve details only 1 pixel wide, it is evident that the Achromatic+ back adds a whole new dimension to how precise these details can be defined"
yes, and notice that the pictures i'm referring to accompany the same opinion as that last quote? i'm just not seeing it. yeah, it's a little better, but not significantly so. notice the wording of that last quote, and read it very carefully.

the color back resolves detail 1 pixel wide, and the author feels that the achromatic version does so more precisely. we're not talking about a difference in resolution, we're talking about a difference in precision.
 
I also wonder if a weaker filter would enable camera makers to push DX-sized sensors up to 15 MP or so.
How would an AA filter prevent anyone from building a 15 MP sensor (apart from the fact that Canon is at 18 MP already...)?
It doesn't stop you building one. However, a 15MP DX-sized sensor will probably provide about the same resolution as the current 12MP sensors. So, you don't actually gain any resolution unless you do something else too.
Hardly. Any facts to support this notion?
 
It doesn't stop you building one. However, a 15MP DX-sized sensor will probably provide about the same resolution as the current 12MP sensors. So, you don't actually gain any resolution unless you do something else too.
Hardly. Any facts to support this notion?
If you increase the number of pixels on a sensor of a fixed (DX) size, you have to increase the pixel density and reduce the pixel size. This leads to increased problems with diffraction and noise. You can Google for various papers that discuss this. It's also one of the main reasons Nikon don't produce a DX camera with more than 12MP. And why many DX shooters say when asked that more MP is not a priority for them.

In order to get past 12MP on a DX sensor some things are going to have to change. I disagree with those that say the physical constraints of current sensors will never be overcome. I think new innovations in sensor design may very well get us past the current barriers.

But we do have some evidence to suggest that removal of the AA filter might gain us 10-15% in resolution.

I have suggested that it might be possible for Nikon to deliver more effective resolving power by weakening the AA filter and increasing the pixel density. I don't think it would be an earth-shattering improvement but I do suspect it would be sufficient to be quite worthwhile.

Personally, I'm convinced that Nikon could gain some advantage here. Whether that would represent the best deployment of their R&D investment is a much harder and questionable issue.
 
It doesn't stop you building one. However, a 15MP DX-sized sensor will probably provide about the same resolution as the current 12MP sensors. So, you don't actually gain any resolution unless you do something else too.
Hardly. Any facts to support this notion?
If you increase the number of pixels on a sensor of a fixed (DX) size, you have to increase the pixel density and reduce the pixel size. This leads to increased problems with diffraction and noise.
I'm aware of these issues, but I don't think the AA filter introduces noise, so it's quite irrelevant for this discussion.
In order to get past 12MP on a DX sensor some things are going to have to change.
Look, Canon is already doing just that. You're saying this like it represents a technical hurdle which hasn't yet been overcome. We're there already, nothing new, nothing fancy.
I disagree with those that say the physical constraints of current sensors will never be overcome. I think new innovations in sensor design may very well get us past the current barriers.
Yup.
But we do have some evidence to suggest that removal of the AA filter might gain us 10-15% in resolution.
Yup. So will adding 10-15% more pixels (in each dimension) with an appropriate AA filter.
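To put rough numbers on that equivalence (nominal pixel counts, back-of-envelope only):

```python
mp_12, mp_15 = 12.1, 15.0      # nominal DX pixel counts
gain = (mp_15 / mp_12) ** 0.5  # linear resolution scales as sqrt(pixels)
print(f"~{(gain - 1) * 100:.0f}% more linear resolution")  # ~11%
# ...right inside the 10-15% range quoted above for AA-filter removal.
```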
I have suggested that it might be possible for Nikon to deliver more effective resolving power by weakening the AA filter and increasing the pixel density.
Ah, now it's weakening the AA filter, no longer removing it. That I agree with; they'd have to, at a higher pixel density, of course. But honestly, that doesn't really go past stating the obvious...
 
There is only one thing I miss from the D70, and that is the weaker AA filter (and, on a few occasions, the 1/500th flash sync speed, but the price to pay for that feature is too high). I don't mind the occasional moiré from a weaker filter, but I hate to be robbed of detail. Maybe the D70 AA filter was too weak, but the D300 filter is too strong. There must be a better compromise between these two cameras.

Do other people here feel that way? Do you think Nikon should revise the AA filter in future dSLRs?
Coming from the Olympus camp, I know the tune of the "AA song" too well. I honestly never understood why people want weaker AA filters and in what way it would improve their photography. I have been doing a lot of macro with both the Olympus E-500, with a supposedly weak AA filter, and the Olympus E-3, with a supposedly strong AA filter, and could not see any problems using the same lenses. Somehow I think the "issue" is a non-issue. I would, however, hate to get moiré in my images. Currently I am using the D300s, which is supposed to have a weaker AA filter than the Olympus E-3, but I must say there is only a very small visible difference.

Where exactly do you find the D70's advantages? I mean, the camera has substantially fewer pixels, so looking at the images at 100% magnification is not really right. If you compare the two you have to downsample the D300 to the same size as the D70. That's one thing. Then also, what is the point in looking at 100% and saying one is better than the other? Do you really print at maximum size? If you do, you would see the D70 gives you a smaller image.

The secret of details in an image is:
  1. Right exposure
  2. Right WB
  3. Good lens
  4. High shutter speed
  5. Tripod
I think that would give you the highest detail. Camera shake and vibrations caused by the mirror swing can have a more severe effect on the D300 than on the D70 because of the larger number of pixels. A slight movement can indeed give you the impression of less detail, and the sensitivity to it is higher with a camera which has more pixels in the same area.

Regarding the flash, the D300 can sync up to maximum shutter speed, so 1/500s is not the maximum.
 
Depending a bit on the Bayer filter properties, the red and blue pixels contribute rather less to luminance than the green and do not contribute much to resolving power. It is a good approximation to assume that the green provides most of the luminance and determines the resolving power. That is why many would say that you need twice the pixel count for a Bayer sensor to equal the resolving power of a Foveon sensor.
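As a rough way to see what "twice the pixel count" means in linear terms (a simple approximation, nothing more):

```python
factor = 2 ** 0.5   # doubling the pixel count only doubles the area
print(f"2x pixels -> {factor:.2f}x linear resolution (~41% more line pairs)")
```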
--
Leon
http://web.me.com/leonwittwer/landscapes.htm
 
If you increase the number of pixels on a sensor of a fixed (DX) size, you have to increase the pixel density and reduce the pixel size. This leads to increased problems with diffraction and noise.
I'm aware of these issues, but I don't think the AA filter introduces noise, so it's quite irrelevant for this discussion.
We were discussing changing the AA filter and the pixel density in combination. So noise is going to be a consideration.
Ah, now it's weakening the AA filter, no longer removing it.
That's been my position since post #2 in this thread.

I also cited some information on the effects of removal because that's the only data we have, to my knowledge.
 
Firstly, some definitions. We're only interested in color perception because the discussion is all about photography.
if you're talking about monochromatic sensors as opposed to bayer sensors you're rather specifically not interested in color perception.
Not true. See below for what the L channel of LAB is about.
What you refer to luminance, separated from color (hue and saturation) perception, is Lightness (the L* in the LAB color space).
i'm sorry, i'm using it in the way any normal person would, instead of a technical distinction in various color spaces.
That LAB color space has more to do with a "normal" perception of color than you think.
further, i'm using it the way any photographer would have prior to the invention of photoshop.
The 1st version of Photoshop came out in 1990. The current definition of CIELAB dates back to 1976. But you know so much about LAB that you did not mean to imply that LAB is a by-product of Photoshop, right? :-)

Intuitively we think of a color as containing some amount of grey. A dark color would match a darker grey. And consequently we also think intuitively of a conversion of a color image to a black & white one. LAB is a fruit of colour science, which studies how human beings perceive color (and that encompasses greys as well). The L channel in LAB matches this human perception of the amount of grey in a color. Tip: want to convert a color image to a B&W one which matches our perception of how a B&W image should look? Just convert to LAB using digital image manipulation software which supports LAB (Photoshop being one possibility), extract the L channel, and, voila, you have your conversion.
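Sketched in a few lines of Python for illustration (assuming scikit-image is available; the filename is just a placeholder, and any LAB-capable tool works the same way):

```python
from skimage import color, io

rgb = io.imread("photo.jpg")  # placeholder input image
lab = color.rgb2lab(rgb)      # RGB -> CIELAB
lightness = lab[..., 0]       # the L* channel, range 0..100
io.imsave("photo_bw.jpg", (lightness / 100 * 255).astype("uint8"))
```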
for instance, see ansel adams "the camera" or "the negative" where he uses the term interchangeably with "brightness."
Ansel Adams used negatives and papers which, under proper development, would give a sensible image, one that matches reasonably well a perceptually correct conversion to B&W. The manufacturers of those negatives and papers saw to it. Ansel Adams did not have to worry about it, just like you don't: manufacturers of dSLRs and writers of RAW converters saw to it.
The most important contribution to lightness comes from green. That's how the human visual system works.
perhaps, because the rod photoreceptor cells, which are primarily used for lightness, are sensitive to blue-green.
No, rod photoreceptor cells are used only for low light (scotopic) vision. Human vision under normal lighting conditions ignores them (photopic vision).
Camera manufacturers (who unlike you are not novices in color science) know that: the RGB channels do not contribute evenly to our perception of luminosity (or lightness). Green is preponderant. That is why they put 2 G pixels for 1 R and 1 B pixel.
bryce bayer used 2 greens to mimic the human eye.
No. He used 2 greens for 1 R and 1 B because human vision extracts more lightness detail from the green channel than from the others.
when a camera makes a b+w output image, it's usually primarily based on green.
No, it is based on extracting the L channel. But the preponderant term in that conversion comes from the green channel.
You use the word "luminance" in so many contexts that you end up confusing yourself. A photosite collects photons. But that is through a filter. The end product is a determination of red, green, and blue components, or equivalently lightness and color (one can calculate one from the other).
it is, yes, but it's only recording those photons it's collecting: it's recording brightness.
Yes, from the point of view of the camera, and then no: it is not collecting brightness as our eyes would see it.
You seem to believe that lightness can be inferred from the amount collected from any pixel. That is not true, as already explained.
not inferred. it's all that's being collected. color is inferred from position in the bayer pattern, and demosaicing.
The only thing that is collected by the camera is some amount of light that goes through a filter. The combination of filters, sensor, and demosaicing must be based on how human vision perceives colors to give a sensible result. Right?
An additional amount of blur means stronger sharpening is required at the end. Stronger sharpening means more artefacts. The added blur from the filter makes it harder to find sharpening parameters which yield a tasteful result.
to a degree, certainly. but in practice, it's not a degree that actually has any significance.
Disagree, as proved by comparisons of pictures with and without AA filters (if you know what to look for ...).
"Even though that the P45+ has a fantastic ability to resolve details only 1 pixel wide, it is evident that the Achromatic+ back adds a whole new dimension to how precise these details can be defined"
the color back resolves detail 1 pixel wide, and the author feels that the achromatic version does so more precisely. we're not talking about a difference in resolution, we're talking about a difference in precision.
Above all let's not distort the meaning of the sentence. What he means is obvious: the details have much more clarity without the AA filter, or, put in other words, they are not blurred or somewhat destroyed by the AA filter.

--
Thierry
 
The secret of details in an image is:
  1. Right exposure
Exposing to the right will improve signal/noise in particular in shadows. Therefore yes in shadows, otherwise no.

However you did not mention contrast. Contrast is very important to the human perception of details.
  2. Right WB
Arguable. Right WB gives you more accurate colors, not necessarily more details.
  3. Good lens
Aperture is important, maybe even more important. Many not-so-good lenses are actually sharp at f/8. And many excellent lenses have some softness wide open.
  4. High shutter speed
If the subject moves or you're not using a tripod.
  5. Tripod
If the subject does not move.

Add sharpening.
I think that would give you the highest detail. Camera shake and vibrations caused by the mirror swing can have a more severe effect on the D300 than on the D70 because of the larger number of pixels.
Do we want to add blur from an aggressive AA filter?
Regarding the flash, the D300 can sync up to maximum shutter speed, so 1/500s is not the maximum.
The D300 can only truly sync at 1/250 sec. At faster shutter speeds it uses a different scheme, referred to as FP sync mode. Here is a very simplified description: say you're at a 1/500 sec exposure, twice the maximum sync speed of 1/250. There are two steps. First, one half of the image is exposed during the first half of the exposure while the other half is obscured. Then comes the second half of the exposure: the part that was initially obscured is now exposed, and the part that was exposed initially is now obscured. This results in a shutter speed of 1/250 (the max the D300 can sync at), but each half of the image got exposed for only 1/500. This is very costly in flash power, and can create artefacts given the wrong conditions, since the two halves of the image did not record the subject at exactly the same time.
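To give a feel for the cost, here's a toy calculation in the same simplified spirit as the description above (the real pulsed mechanism wastes even more power):

```python
true_sync = 1 / 250                     # D300 true sync speed
for speed in (1 / 250, 1 / 500, 1 / 1000, 1 / 2000):
    # Each part of the frame only sees the flash for its own slice of the
    # exposure, so usable flash energy drops roughly in proportion.
    usable = speed / true_sync
    print(f"1/{1 / speed:.0f} s: ~{usable:.0%} of full-sync flash power")
```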

--
Thierry
 
The secret of details in an image is:
  1. Right exposure
Exposing to the right will improve signal/noise in particular in shadows. Therefore yes in shadows, otherwise no.

However you did not mention contrast. Contrast is very important to the human perception of details.
Detail is detail. When you expose right you will hopefully not have overexposed highlights and underexposed shadows, which will automatically result in maximum detail with regard to the exposure. I did not mention contrast because contrast does not always increase detail, but yes, to a degree it may give that impression.
Arguable. Right WB gives you more accurate colors, not necessarily more details.
Maybe arguable, but in my opinion it is very important. The right WB gives an impression of better detail as well.
  3. Good lens
Aperture is important, maybe even more important. Many not-so-good lenses are actually sharp at f/8. And many excellent lenses have some softness wide open.
At f/8 almost any lens is sort of good, but a good lens at its sweet-spot aperture is always better than a bad lens at its own sweet spot, which is why I did not mention aperture. The lens is much more important than the aperture. If you compare lenses wide open then you cannot compare a slow lens and a fast lens; you must compare lenses at matching apertures. In other words, it is totally unimportant that a good lens wide open at f/1.4 is softer than a cheapo lens at f/8. That cheapo may not be able to go wider than f/5.6, so it would be apples and oranges to compare them that way.
  4. High shutter speed
If the subject moves or you're not using a tripod
Regardless of whether the subject moves or you use a tripod, a high shutter speed always results in crisper images and more detail. Of course, you can't always use a high shutter speed, and anyway, what's your definition of high shutter speed? A shutter speed faster than 1/(FL x 1.5) for the cropped cameras is my definition, but the higher the better from the image-detail point of view.
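Worked out for a few focal lengths on a cropped body, just to make the rule concrete:

```python
for fl in (50, 200, 300):  # focal lengths in mm
    min_speed = fl * 1.5   # 1.5x crop factor
    print(f"{fl} mm -> at least 1/{min_speed:.0f} s")
# 50 mm -> 1/75 s, 200 mm -> 1/300 s, 300 mm -> 1/450 s
```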
If the subject does not move.
Of course. But we are not discussing a certain scenario and none of the above alone is enough for highest possible detail except in a certain scenario.
Add sharpening.
Sharpening can be very destructive to detail if not used carefully. A softer image is many times more pleasant than an over-sharpened one where the ISO noise is also sharpened.
I think that would give you the highest detail. Camera shake and vibrations caused by the mirror swing can have a more severe effect on the D300 than on the D70 because of the larger number of pixels.
Do we want to add blur from an aggressive AA filter?
I don't find any blur in my images, except that caused by my own mistakes, my lenses, or some other explainable factors. I have not found any evidence that the D300s has a "too aggressive AA filter" since there is no camera without an AA filter; I just simply trust Nikon on this one.
Regarding the flash, the D300 can sync up to maximum shutter speed, so 1/500s is not the maximum.
The D300 can only truly sync at 1/250 sec. At faster shutter speeds it uses a different scheme, referred to as FP sync mode. Here is a very simplified description: say you're at a 1/500 sec exposure, twice the maximum sync speed of 1/250. There are two steps. First, one half of the image is exposed during the first half of the exposure while the other half is obscured. Then comes the second half of the exposure: the part that was initially obscured is now exposed, and the part that was exposed initially is now obscured. This results in a shutter speed of 1/250 (the max the D300 can sync at), but each half of the image got exposed for only 1/500. This is very costly in flash power, and can create artefacts given the wrong conditions, since the two halves of the image did not record the subject at exactly the same time.
I know it is FP after 1/250 and I know how FP works. I also know it is not working the way you describe; in fact it is very different from your simplified description. Nevertheless, the D300s and many other cameras can sync (in FP) up to the maximum shutter speed of the camera with an SB-900. I don't know if the D70 can do that, but I definitely prefer the way the D300s handles the flash if the D70 has any limitation in FP or shutter speed.
 
The secret of details in an image is:
  1. Right exposure
Exposing to the right will improve signal/noise in particular in shadows. Therefore yes in shadows, otherwise no.

However you did not mention contrast. Contrast is very important to the human perception of details.
Detail is detail.
I am only interested in detail which I can see, and not details that are hidden behind the numbers where nobody can tell if they are there or not.
When you expose right you will hopefully not have overexposed highlights
Which is implied by "exposing to the right"
and underexposed shadows, which will automatically result in maximum detail with regard to the exposure.
Perceptually only in shadows.
I did not mention contrast because contrast does not always increase detail, but yes, to a degree it may give that impression.
Again, I am only interested in what people see when looking at a picture. Contrast always increases perceived detail, often significantly. Sharpening, for instance, works in large part because it increases local contrast around details.
in my opinion it is very important. The right WB gives an impression of better detail as well.
Not really.
  3. Good lens
Aperture is important, maybe even more important. Many not-so-good lenses are actually sharp at f/8. And many excellent lenses have some softness wide open.
At f/8 almost any lens is sort of good, but a good lens at its sweet-spot aperture is always better than a bad lens at its own sweet spot
Very arguable these days, with the progress that has recently been made in the quality of "consumer-grade" lenses.
which is why I did not mention aperture. The lens is much more important than the aperture.
They both matter. But even with an excellent lens, sharpness increases when stopping down.
you must compare lenses at matching apertures
Okay, I compare lenses stopped down. One can very effectively overcome the optical shortcomings of a so-so lens by stopping down.
Regardless of whether the subject moves or you use a tripod, a high shutter speed always results in crisper images and more detail.
Not always. It does not do anything one way or the other if the subject is not moving, the camera is solidly attached to a good tripod, and good technique is used. It doesn't hurt either.
Add sharpening.
Sharpening can be very destructive to detail if not used carefully. A softer image is many times more pleasant than an over-sharpened one where the ISO noise is also sharpened.
Exactly. And that is why one does not want to use more sharpening to compensate for the shortcomings of a strong AA filter.
I don't find any blur in my images,
You don't know what to look for.
I have not found any evidence that the D300s has a "too aggressive AA filter" since there is no camera without an AA filter,
Not true. Some have had the filter removed and reported the results.
I just simply trust Nikon on this one.
The AA filter is a compromise between blurring details and avoiding moiré. Nikon made a choice. The D300 is very impervious to moiré, and that implies a strong AA filter. It is your right to believe that Nikon's choice works for you, just as it is other people's right to believe they'd rather have a weaker filter. It is not a question of trusting Nikon or not.
The D300 can only truly sync at 1/250 sec. At faster shutter speeds it uses a different scheme, referred to as FP sync mode. Here is a very simplified description: say you're at a 1/500 sec exposure, twice the maximum sync speed of 1/250. There are two steps. First, one half of the image is exposed during the first half of the exposure while the other half is obscured. Then comes the second half of the exposure: the part that was initially obscured is now exposed, and the part that was exposed initially is now obscured. This results in a shutter speed of 1/250 (the max the D300 can sync at), but each half of the image got exposed for only 1/500. This is very costly in flash power, and can create artefacts given the wrong conditions, since the two halves of the image did not record the subject at exactly the same time.
I know it is FP after 1/250 and I know how FP works. I also know it is not working the way you describe; in fact it is very different from your simplified description.
It is not very different, even if I've passed on a number of details (as stated in my post, BTW). But the simplifications concern the way the curtains move and how the flash acts as a strobe during the exposure. Still, the simplified explanation describes well the fact that there is a gimmick behind FP sync. It is not true sync. The simplification also gives a sense that the gimmick comes down to exposing with flash only part of the image at a time, and a lot of flash power is wasted during FP syncing (the true mechanism wastes more energy than my simplified description suggests!). In fact FP sync loses power so quickly that it is barely useful past 1/500 on a D300, even if in theory it can go faster than that.
Nevertheless, the D300s and many other cameras can sync (in FP) up to the maximum shutter speed of the camera with an SB-900.
Only in FP sync past 1/250s, so we've agreed all along.
I don't know if the D70 can do that,
The D70 is way superior to the D300(s) here thanks to its electronic shutter. It can sync at 1/500 sec. And one can mask contacts on the flash shoe and the D70 will sync up to its maximum shutter speed. True syncing. No gimmick. In practice the only limitation is the duration of the flash burst (which can be a little longer than 1/1000 sec, depending on the power required from the flash).
but I definitely prefer the way the D300s handles the flash if the D70 has any limitation in FP or shutter speed.
The D300s has limitations. The D70 does not.

--
Thierry
 
