So much for the EVF being insufficient for action.....

Started 7 months ago | Discussions thread
Evolution complication: double compensation, double complication
In reply to TrojMacReady, 6 months ago

TrojMacReady wrote:

You already have to compensate for far more than 5 milliseconds. Even prefocused with the fastest DSLRs, the shutter lag is already 40-50 milliseconds. Meaning that by the time you decide to hit the shutter button, the bird still moves for 40-50 milliseconds. Entering AF into that equation adds another 90-250 milliseconds. And let's not overlook the largest delay, the human reaction time, usually being the longest.

That puts a whopping 5-20 milliseconds into perspective.
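To put the quoted figures side by side, here is a rough latency-budget sketch. The shutter-lag and EVF-lag ranges come from the quote above; the 150-300 ms reaction-time range is my own assumption, since the post only says reaction time is "usually the longest":

```python
# Rough latency budget for a prefocused action shot.
# All values in milliseconds; ranges are estimates, not measurements.
reaction_time = (150, 300)  # typical human visual reaction time (assumed)
shutter_lag   = (40, 50)    # fastest DSLRs, prefocused (quoted)
evf_lag       = (5, 20)     # extra EVF display lag (quoted)

def total(*stages):
    """Sum per-stage (min, max) delays into an overall (min, max) range."""
    return tuple(sum(s[i] for s in stages) for i in (0, 1))

ovf_total = total(reaction_time, shutter_lag)           # (190, 350)
evf_total = total(reaction_time, shutter_lag, evf_lag)  # (195, 370)
print(f"OVF path: {ovf_total[0]}-{ovf_total[1]} ms")
print(f"EVF path: {evf_total[0]}-{evf_total[1]} ms")
```

Purely by the arithmetic, the EVF adds only a few percent to the total, which is the "perspective" point being made; the reply below argues the problem is not the size of the number but how the brain has to compensate for it.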

People have evolved to negate their internal lag when observing moving objects. This accommodation comes from the learned discrepancy between sight (what is seen) and reality (what is). The brain does not easily discern between movement seen in real time, through an OVF or with bare eyes, and movement seen on an EVF with a lag, no matter how small that lag is. All other factors being equal (once you use a camera for fast action shooting long enough, your brain also learns to accommodate for the specific lag times of that camera, but that's a relatively slow process), adding another layer of lag complicates things. Now let's see why it complicates things so much:

Learning to add to your prediction because of shutter lag is a learned response directly related to your own reaction time. It uses the tactile sense and relates to past experience the same way you learn to predict where a baseball will be in order to catch it. Hand predicting baseball, finger predicting shutter, muscle memory. One-layer visual processing required.

Having to learn that the image you're seeing is not the image your brain is used to analyzing and basing its calculations on is an order of magnitude more complicated. Either your brain has to learn two different average lag times for the same action with the same muscle memory, or it has to do parallel visual processing: processing visual data it isn't used to, factoring the extra lag into the visual calculations, and retraining the muscle memory to predict a higher lag. That effectively erases the previous acclimatization to any other camera you've used, and creates new processing patterns and a new order of processing from the visual cortex to the movement centers in the brain.

Either way the brain works it out, it is going to be a whole lot more complicated than simply getting used to a new camera, because the fundamentals are different: your brain can no longer process visual information the way it has since you were born. It has to tread unknown territory and find a new way to cope with a known-but-not-perceived visual lag on top of the already-compensated-for, known reaction lag. Compensation on top of compensation, as a new exercise that cannot be built on old experience. Two-layer visual processing required.
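The one-layer vs two-layer distinction can be sketched as a toy lead-prediction model. Everything here is illustrative and assumed, not from the post: the linear "lead distance" model, the subject speed, and the millisecond figures.

```python
# Toy model: to hit a subject moving at speed v (pixels/ms in the
# finder), you must aim ahead of where you SEE it by v * total_lag.
# One layer (OVF): the brain learns a single motor lag (reaction +
# shutter). Two layers (EVF): the seen position is itself already
# display_lag milliseconds stale, so the learned lead undershoots
# until the compensation is retrained on top of the old one.

def required_lead(speed, motor_lag, display_lag=0.0):
    """Pixels ahead of the seen position the subject will be at exposure."""
    return speed * (motor_lag + display_lag)

v = 0.5            # px/ms, hypothetical subject speed in the finder
motor = 200 + 45   # ms: assumed reaction time + shutter lag

ovf_lead = required_lead(v, motor)         # lead learned on an OVF
evf_lead = required_lead(v, motor, 12.5)   # lead actually needed on an EVF
undershoot = evf_lead - ovf_lead           # miss if OVF habit is reused
```

The undershoot itself is small, which is the quoted post's point; the reply's point is that correcting it requires stacking a second, visually invisible compensation on top of the first, rather than simply relearning one motor delay.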
