Results -- Testing the Test Targets
See this thread for methodology. To use a Chris Berman cliché, "that's why they play the game."
I followed through with my proposed test and ran the numbers. ImageJ kept running out of memory, which added to the time needed to perform the analysis. To address Marianne's concerns, I also ran a series at a different fine-tune setting. I found that for both targets, contrast at a fine-tune setting of +5 decreased by approximately 3.3% relative to the results at zero. Variance for each target remained the same to a 90%+ confidence level, so a lack of directional data did not mask differences in variance.
The "good" target posed some challenges. Not only was the image not cleanly reproduced in the JPG I received, but its aspect ratio also made it impossible to print at the demanded 5x7" size. So I printed both images at a 7" width, centered them on an 11x8.5" sheet of paper, and let the chips fall where they may. The "good" target also lacked an obvious aiming point, so I did a search and focused on the "right bar chart," but I have to wonder whether the proponent of this chart has actually tried to follow his own instructions.
The "bad target" didn't have an obvious aiming point either, so I chose the "cheek" of the lion.
I measured EV across the target area with a Minolta V light meter. Illumination was slightly uneven, ranging from EV 8.1 to 8.6, but I don't think that mattered for this test, since the AF detection range of the D800 is rated to work down to EV -2. Outside the area of the page, the light level dropped off considerably.
To the results, then. For the "good" target, RMS contrast averaged 97.90% of that available in the Live View reference shot. For the "bad" target it was 97.92%. However, the standard deviation for the "good" target was 0.85%, while that for the "bad" target was 0.58%.
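For anyone who wants to reproduce the measurement, RMS contrast is conventionally the standard deviation of pixel intensities normalized to [0, 1]. The actual analysis here used ImageJ; the sketch below uses synthetic image data purely to show the arithmetic, including the "percent of reference" comparison:

```python
import numpy as np

def rms_contrast(image):
    """RMS contrast: standard deviation of intensities normalized to [0, 1]."""
    img = np.asarray(image, dtype=float) / 255.0
    return img.std()

# Synthetic stand-ins for real captures (a perfectly focused test shot
# would reproduce the Live View reference exactly, giving 100%).
rng = np.random.default_rng(0)
reference = rng.integers(0, 256, size=(100, 100))
test_shot = reference

ratio = rms_contrast(test_shot) / rms_contrast(reference) * 100
print(f"contrast relative to reference: {ratio:.2f}%")
```

A real run would substitute crops of the focus target from each capture; the ratio is what gets averaged across the 12 shots per set.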
An F-test indicated only a 20.5% probability that the difference in variance was due to chance. That is insufficient to show that the "bad" target has less variance than the "good" target, but it is sufficient to reject the opposite hypothesis.
A t-test returned a p-value of 0.956: the observed difference is entirely consistent with the choice of test target making no difference at all in AF performance under the test conditions. This met the stated benchmark. A difference of 0.02% over 12 samples per set is statistically meaningless.
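Both tests can be rerun from the summary statistics alone. A sketch using scipy; the two-tailed convention for the F-test and the use of Welch's t-test are my assumptions, which would account for any small gap from the exact figures quoted:

```python
from scipy import stats

# Summary statistics from the post: 12 shots per target.
n = 12
mean_good, sd_good = 97.90, 0.85  # "good" target, % of Live View contrast
mean_bad, sd_bad = 97.92, 0.58    # "bad" target

# F-test for equality of variances (larger variance on top).
f_stat = (sd_good / sd_bad) ** 2
p_f = 2 * stats.f.sf(f_stat, n - 1, n - 1)  # two-tailed

# Welch's t-test on the means, computed from summary statistics.
t_stat, p_t = stats.ttest_ind_from_stats(
    mean_good, sd_good, n, mean_bad, sd_bad, n, equal_var=False)

print(f"F = {f_stat:.3f}, two-tailed p = {p_f:.3f}")
print(f"t = {t_stat:.3f}, p = {p_t:.3f}")
```

With these numbers the F-test lands near a 20% two-tailed p-value and the t-test near 0.95, matching the conclusions above.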
Conclusion: While in real life an animal behind bars may well be a poor AF test target, the same can't be said for a two-dimensional drawing of that scenario. Although the "bad" target may be fractionally better than the "good" one, either could be used for AF fine tuning to almost 4 deltas of confidence. The target itself is nowhere near as important as controlling the other test conditions.
Such commentary has become ubiquitous on the Internet and is widely perceived to carry no indicium of reliability and little weight. (Digital Media News v. Escape Media Group, May 2014).