So much for the EVF being insufficient for action.....

Started Jan 28, 2014 | Discussions
enemjii Senior Member • Posts: 1,889
Re: OVF vs EVF and Bokeh
1

enemjii wrote:

Greg A A wrote:

Many people don't want to look at the world they are photographing through a small TV set. Try replacing someone's glasses with an electronic viewing screen. Why doesn't Google just replace the entire glasses with an electronic screen? This is why many people prefer their OVF.

Theoretically, what you see in the mirror is not what is captured on the sensor, because the sensor is not emulating a mirror. They are two different technologies. You may think you like what you see in a mirror, but when it hits the sensor, it is different. A couple of days back, I was composing a bokeh shot on a Canon EOS 100D/SL1, and I was excited by what I saw in the mirror; I snapped the picture, and when I previewed it, it was completely different.

In other words, I like looking at the mirror, but I would like the electronics companies to quickly come up with an EVF that shows me what the sensor sees and will capture.

Search and ye shall succeed. I did find a picture of what I was talking about: the bokeh in the OVF vs. the EVF. See for yourself and shake your head in disbelief.

 enemjii's gear list:
Sony SLT-A65 Sony a77 II Sony DT 55-300mm F4.5-5.6 SAM Sony DT 35mm F1.8 SAM Sigma 85mm F1.4 EX DG HSM +6 more
enemjii Senior Member • Posts: 1,889
it takes one-tenth of a second for the brain to process what it sees
1

Sante Patate wrote:

Ontario Gone wrote:

Well, in case anybody else hasn't heard, Fuji just dropped a bomb [...] they also ratcheted down the lag. We are looking at a 0.005 sec lag time; not bad, eh?

A bird flying at 60 km/h travels 8 cm in 5 milliseconds. With a 200mm lens on APS-C, giving an AoV of 8 degrees, a bird with a two-metre wingspan fills the frame if it is 11 metres away. Obviously, you want to centre it in the frame. That is not easy even if you have an OVF, where what you see as you move the camera tracks the bird's motion with no delay. But the 5 millisecond lag time corresponds to 0.4 degrees, one twentieth of the frame, so by the time you see it in the centre, it is actually off the far edge.

I am sure it is possible for the human brain to learn to compensate for the viewfinder lag, but in my experience that is a very steep learning curve even if the motion is predictable.
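
For reference, the quoted 0.4-degree figure follows from a small-angle sketch, assuming the stated 60 km/h speed and 11 m distance:

\[
\Delta x = v\,\Delta t = 16.7\ \tfrac{\text{m}}{\text{s}} \times 0.005\ \text{s} \approx 8.3\ \text{cm},
\qquad
\theta \approx \frac{\Delta x}{d} = \frac{0.083\ \text{m}}{11\ \text{m}} \approx 0.0076\ \text{rad} \approx 0.43^{\circ} \approx \tfrac{1}{20} \times 8^{\circ}.
\]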

When human eyes see an object, it takes one-tenth of a second for the brain to process that information, said Gerrit Maus, a postdoctoral fellow in psychology at UC Berkeley.

"The fundamental problem is that our brain doesn't work in real-time," Maus said. "The brain actually works rather slow, compared to some electronics or computers that we have today. Information that the brain receives from the eye is already out of date by the time it gets to the visual cortex."

"The brain does not think the object is in the position where the eye tells us it [that it] is," Maus told LiveScience. "The object is shifted forward in the direction that it's moving, so we're actually predicting where things are going to be."

This means the brain perceives moving objects to be farther along in their trajectory than what a person actually sees with their eyes, he explained.

 enemjii's gear list:
Sony SLT-A65 Sony a77 II Sony DT 55-300mm F4.5-5.6 SAM Sony DT 35mm F1.8 SAM Sigma 85mm F1.4 EX DG HSM +6 more
TrojMacReady Veteran Member • Posts: 8,683
You forgot something. Or should I say 3 things.
2

Sante Patate wrote:

Ontario Gone wrote:

Well, in case anybody else hasn't heard, Fuji just dropped a bomb [...] they also ratcheted down the lag. We are looking at a 0.005 sec lag time; not bad, eh?

A bird flying at 60 km/h travels 8 cm in 5 milliseconds. With a 200mm lens on APS-C, giving an AoV of 8 degrees, a bird with a two-metre wingspan fills the frame if it is 11 metres away. Obviously, you want to centre it in the frame. That is not easy even if you have an OVF, where what you see as you move the camera tracks the bird's motion with no delay. But the 5 millisecond lag time corresponds to 0.4 degrees, one twentieth of the frame, so by the time you see it in the centre, it is actually off the far edge.

I am sure it is possible for the human brain to learn to compensate for the viewfinder lag, but in my experience that is a very steep learning curve even if the motion is predictable.

You already have to compensate for far more than 5 milliseconds. Even prefocused with the fastest DSLRs, the shutter lag is already 40-50 milliseconds. Meaning that by the time you decide to hit the shutter button, the bird keeps moving for another 40-50 milliseconds. Factoring AF into that equation adds another 90-250 milliseconds. And let's not overlook human reaction time, usually the longest delay of all.

That puts a whopping 5-20 milliseconds into perspective.
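
A back-of-the-envelope lag budget makes the point. Here is a minimal sketch in Python using illustrative midpoints of the figures above; the exact values vary by camera and shooter:

```python
# Rough lag budget for a prefocused action shot, in milliseconds.
# All values are illustrative, taken from the ranges quoted above.
lag_budget_ms = {
    "human reaction": 250,  # often the longest link in the chain
    "AF acquisition": 170,  # midpoint of the quoted 90-250 ms range
    "shutter lag": 45,      # midpoint of the quoted 40-50 ms range
    "EVF display": 5,       # the Fuji figure under discussion
}

total = sum(lag_budget_ms.values())
for name, ms in lag_budget_ms.items():
    print(f"{name:>14}: {ms:3d} ms ({100 * ms / total:4.1f}% of total)")
print(f"{'total':>14}: {total:3d} ms")
```

On those assumed numbers, the EVF contributes roughly 1% of the total delay chain.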

Erik Magnuson Forum Pro • Posts: 12,237
Not the mirror but the focus screen

enemjii wrote:

I was composing a bokeh shot on a Canon EOS 100D/SL1, and I was excited by what I saw in the mirror; I snapped the picture, and when I previewed it, it was completely different.

It's not the mirror that's the issue, but the "ground glass" viewing screen. Modern laser-cut screens are optimized for brightness; for more accurate DOF or bokeh you want a fine ground glass.

-- hide signature --

Erik

 Erik Magnuson's gear list:
Canon EOS 5D Mark II Canon EOS 450D Sigma SD10 Sony Alpha NEX-5 Nikon D3200 +28 more
enemjii Senior Member • Posts: 1,889
Re: 5 milliseconds is still too much. - BS

Sante Patate wrote:

Ontario Gone wrote:

Well, in case anybody else hasn't heard, Fuji just dropped a bomb [...] they also ratcheted down the lag. We are looking at a 0.005 sec lag time; not bad, eh?

A bird flying at 60 km/h travels 8 cm in 5 milliseconds. With a 200mm lens on APS-C, giving an AoV of 8 degrees, a bird with a two-metre wingspan fills the frame if it is 11 metres away. Obviously, you want to centre it in the frame. That is not easy even if you have an OVF, where what you see as you move the camera tracks the bird's motion with no delay. But the 5 millisecond lag time corresponds to 0.4 degrees, one twentieth of the frame, so by the time you see it in the centre, it is actually off the far edge.

I am sure it is possible for the human brain to learn to compensate for the viewfinder lag, but in my experience that is a very steep learning curve even if the motion is predictable.

So I took my Sony A65, put one eye to the EVF, kept the other eye open. Guess what?

I had perfect stereo vision. The EVF did not contribute any noticeable lag or make me seasick, as it would have if my two eyes were seeing the action at different times.

So I call BS on your assertions.

 enemjii's gear list:
Sony SLT-A65 Sony a77 II Sony DT 55-300mm F4.5-5.6 SAM Sony DT 35mm F1.8 SAM Sigma 85mm F1.4 EX DG HSM +6 more
Erik Magnuson Forum Pro • Posts: 12,237
Exactly why you don't want an EVF adding extra lag
1

enemjii wrote:

When human eyes see an object, it takes one-tenth of a second for the brain to process that information, said Gerrit Maus, a postdoctoral fellow in psychology at UC Berkeley.

And this is exactly why you don't want an EVF adding extra lag. Your brain has 20-70 years of experience estimating motion based on direct observation. Add extra lag (or, even worse, the stutter of static images that happens with high-FPS shooting) and you are fighting all of that experience.

-- hide signature --

Erik

 Erik Magnuson's gear list:
Canon EOS 5D Mark II Canon EOS 450D Sigma SD10 Sony Alpha NEX-5 Nikon D3200 +28 more
enemjii Senior Member • Posts: 1,889
Re: Not the mirror but the focus screen

Erik Magnuson wrote:

enemjii wrote:

I was composing a bokeh shot on a Canon EOS 100D/SL1, and I was excited by what I saw in the mirror; I snapped the picture, and when I previewed it, it was completely different.

It's not the mirror that's the issue, but the "ground glass" viewing screen. Modern laser-cut screens are optimized for brightness; for more accurate DOF or bokeh you want a fine ground glass.

-- hide signature --

Erik

The simpler answer is to have a better EVF. WYSIWYG.

 enemjii's gear list:
Sony SLT-A65 Sony a77 II Sony DT 55-300mm F4.5-5.6 SAM Sony DT 35mm F1.8 SAM Sigma 85mm F1.4 EX DG HSM +6 more
(unknown member) Senior Member • Posts: 1,324
Evolution complication: double compensation, double complication
2

TrojMacReady wrote:

You already have to compensate for far more than 5 milliseconds. Even prefocused with the fastest DSLRs, the shutter lag is already 40-50 milliseconds. Meaning that by the time you decide to hit the shutter button, the bird keeps moving for another 40-50 milliseconds. Factoring AF into that equation adds another 90-250 milliseconds. And let's not overlook human reaction time, usually the longest delay of all.

That puts a whopping 5-20 milliseconds into perspective.

People have evolved to negate their internal lag when observing moving objects. This accommodation comes from the learned discrepancy between sight (what is seen) and reality (what is). The brain does not easily discern between movement seen in real time through an OVF or with bare eyes and movement seen on an EVF with a lag, no matter how small the lag is. All other factors being equal (once you use a camera for fast-action shooting long enough, your brain also learns to accommodate the specific lag times of that camera, but it's a relatively slow process), adding another layer of lag complicates things. Now let's see why it complicates things very much:

Adding shutter lag to your prediction is a learned response directly related to your own reaction time. It uses the tactile sense and relation to past experience in the same way you learn to predict where a baseball will be in order to catch it. Hand predicting baseball, finger predicting shutter, muscle memory. One-layer visual processing required.

Having to learn that the image you're seeing is not the image your brain is used to analyzing and basing its calculations on is an order of magnitude more complicated. Either your brain has to learn two different lag-time averages for the same action with the same muscle memory, or do parallel visual processing, where it has to process visual data that it's not used to and factor the extra lag into the visual calculations, retraining the muscle memory to predict a higher lag, effectively erasing the previous accommodation to any other camera you've used, and creating new processing patterns and a new order of processing from the visual cortex to the movement centers in the brain.

Either way the brain works it out, it is going to be a whole lot more complicated than simply getting used to a new camera, because the fundamentals are different: your brain can no longer process visual information as it has been used to from the get-go (since you were born). It has to tread unknown territory and find a new way to cope with the known-but-not-perceived visual lag on top of the already-compensated-for, known visual reactionary lag. Compensation on top of compensation, as a new exercise that cannot be built on old experience. Two-layer visual processing required.
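
To put rough numbers on the extra lead the added layer demands, here is a toy model (all delay values are assumptions for illustration; it captures only the magnitude of the compensation, not the relearning problem described above):

```python
# Toy model: how far ahead of the seen position a shooter must aim.
# Every delay value below is an assumption for illustration only.

def lead_required(speed_mps, lags_ms):
    """Distance (metres) the subject moves during the summed delays."""
    return speed_mps * sum(lags_ms) / 1000.0

BIRD_SPEED = 16.7  # m/s, i.e. the 60 km/h bird from upthread

# OVF chain: neural processing + reaction time + shutter lag.
ovf_lead = lead_required(BIRD_SPEED, [100, 200, 45])
# EVF chain: the same delays, plus the display lag under discussion.
evf_lead = lead_required(BIRD_SPEED, [100, 200, 45, 5])

print(f"OVF lead: {ovf_lead:.2f} m")
print(f"EVF lead: {evf_lead:.2f} m")
print(f"extra lead from the EVF: {evf_lead - ovf_lead:.3f} m")
```

The extra lead is a few centimetres on top of several metres; whether the brain treats it as just another delay or as a new kind of lag is exactly what this thread disputes.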

(unknown member) Senior Member • Posts: 1,324
Exactly
1

Erik Magnuson wrote:

enemjii wrote:

When human eyes see an object, it takes one-tenth of a second for the brain to process that information, said Gerrit Maus, a postdoctoral fellow in psychology at UC Berkeley.

And this is exactly why you don't want an EVF adding extra lag. Your brain has 20-70 years of experience estimating motion based on direct observation. Add extra lag (or, even worse, the stutter of static images that happens with high-FPS shooting) and you are fighting all of that experience.

-- hide signature --

Erik

I just stated that in my previous post! I used a few more words.

Erik Magnuson Forum Pro • Posts: 12,237
Different part of the cycle
1

TrojMacReady wrote:

You already have to compensate for far more than 5 milliseconds.

Read the post above about how the brain estimates motion. Extra delay when viewing throws this off.

Even prefocused with the fastest DSLRs, the shutter lag is already 40-50 milliseconds. Meaning that by the time you decide to hit the shutter button, the bird keeps moving for another 40-50 milliseconds.

Exactly - but humans have been tracking objects and predicting their future position based on direct observation for a long time; this is a problem we are well equipped to solve.

Factoring AF into that equation adds another 90-250 milliseconds. And let's not overlook human reaction time, usually the longest delay of all.

I take it you don't do this type of shooting very often. It's not point-and-shoot: you acquire and then follow the target. Like your brain, a good AF system will begin to do predictive tracking. You can then start shooting single or burst as you continue to track the target in the viewfinder. Yes, there is blackout, but again, that's a type of condition we are very well adapted to. (You don't lose the ability to track a moving object when it's behind a picket fence, do you?)

-- hide signature --

Erik

 Erik Magnuson's gear list:
Canon EOS 5D Mark II Canon EOS 450D Sigma SD10 Sony Alpha NEX-5 Nikon D3200 +28 more
enemjii Senior Member • Posts: 1,889
Re: Exactly
1

canonagain123 wrote:

Erik Magnuson wrote:

enemjii wrote:

When human eyes see an object, it takes one-tenth of a second for the brain to process that information, said Gerrit Maus, a postdoctoral fellow in psychology at UC Berkeley.

And this is exactly why you don't want an EVF adding extra lag. Your brain has 20-70 years of experience estimating motion based on direct observation. Add extra lag (or even worse the stutter of static images that happens with high FPS shooting) and you are fighting all of that experience.

-- hide signature --

Erik

I just stated that in my previous post! I used a few more words.

5 milliseconds adds lag to 1/10 sec? I don't know what your eyes are trying to capture, folks.
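
Spelled out against the 100 ms figure quoted earlier in the thread:

\[
\frac{t_{\text{EVF}}}{t_{\text{brain}}} = \frac{5\ \text{ms}}{100\ \text{ms}} = 5\%,
\qquad
100\ \text{ms} + 5\ \text{ms} = 105\ \text{ms}.
\]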

 enemjii's gear list:
Sony SLT-A65 Sony a77 II Sony DT 55-300mm F4.5-5.6 SAM Sony DT 35mm F1.8 SAM Sigma 85mm F1.4 EX DG HSM +6 more
David Hull Veteran Member • Posts: 5,997
The mirrorless (or mindless) troll is back -nt

Ontario Gone wrote:

Well, in case anybody else hasn't heard, Fuji just dropped a bomb, and there will likely be a lot of broken glass, as well as broken hearts. Not only is it a .77x magnification, the biggest of any digital camera, they also ratcheted down the lag. We are looking at a 0.005 sec lag time; not bad, eh? The goodies are listed HERE. I guess now it's just a matter of getting that on-sensor predictive tracking a bit faster and then... well, we know what then. Happy shooting

-- hide signature --

"Run to the light, Carol Anne. Run as fast as you can!"

-- hide signature --
 David Hull's gear list:
Canon EOS 50D Canon EOS 5D Mark III
jtan163 Senior Member • Posts: 2,264
Re: No mention of refresh rate?

coudet wrote:

I do wonder why they are hiding it.

I'm not sure they are hiding it; I'm pretty sure I've seen it mentioned somewhere as 55 Hz.

 jtan163's gear list:
Nikon D750 Nikon D4 Olympus OM-D E-M5 Nikon AF-S Nikkor 50mm f/1.4G Nikon AF-S Nikkor 24-120mm f/4G ED VR +7 more
TrojMacReady Veteran Member • Posts: 8,683
Re: Evolution complication: double compensation, double complication
2

Overly simplified because both AF lag and human lag vary by a much larger amount than even the total (much more consistent) EVF lag.

Having done swallows in flight with a fast OLED EVF, I can also safely say that it didn't take me much time to compensate for the lag compared to my DSLR.

TrojMacReady Veteran Member • Posts: 8,683
Re: Different part of the cycle
2

Erik Magnuson wrote:

TrojMacReady wrote:

You already have to compensate for far more than 5 milliseconds.

Read the post above about how the brain estimates motion. Extra delay when viewing throws this off.

No, as explained above, the AF and human delay vary, and their average variance is already a multiple of the rather consistent EVF delay. I have found compensating for EVF delay no harder to adjust to than the bare shutter delay alone, let alone AF. YMMV.

Even prefocused with the fastest DSLRs, the shutter lag is already 40-50 milliseconds. Meaning that by the time you decide to hit the shutter button, the bird keeps moving for another 40-50 milliseconds.

Exactly - but humans have been tracking objects and predicting their future position based on direct observation for a long time; this is a problem we are well equipped to solve.

We are equally equipped to compensate for an extra few milliseconds. The bird might have changed direction, but that could equally have been the case during the cycle of shutter lag. The difference in shutter lag between different cameras is already larger than the timeframe discussed here.

Factoring AF into that equation adds another 90-250 milliseconds. And let's not overlook human reaction time, usually the longest delay of all.

I take it you don't do this type of shooting very often. It's not point-and-shoot: you acquire and then follow the target. Like your brain, a good AF system will begin to do predictive tracking. You can then start shooting single or burst as you continue to track the target in the viewfinder. Yes, there is blackout, but again, that's a type of condition we are very well adapted to. (You don't lose the ability to track a moving object when it's behind a picket fence, do you?)

You fail to realize that every sequence starts with AF delay, and even during a session of predictive tracking there is delay which can cause predictive failures of the AF system. The EVF delay is a fraction of that. The only difference is that the EVF can visually represent the loss of connection with the subject the moment our own compensation for the EVF delay fails, whereas the possible disconnection during shutter delay, AF delay and reaction time will only show up in the pictures. That means much more delayed feedback, or no direct feedback at all, which in turn hinders improvement of compensation. With the EVF delay continually represented, your brain is offered continuous feedback. Yes, it's great if you are able to keep that bird framed in your OVF without failure over a longer period, but in the end it's about the pictures, and then you can't ignore the rest of the delay cycle, for the reasons mentioned.

I've seen, for example, two users in their 70s and 80s successfully track and frame remote-controlled jets and helicopters doing speeds in excess of 100 mph and unpredictable stunts, all with an EVF that has about 10 ms delay. I'd say most of us shouldn't have many issues learning to adapt, then, in an age where millions of gamers are successfully dealing with input lag, display lag and internet lag that is much greater in fast-paced 3D shooters (the latter two often have effects similar to EVF lag, though internet lag is usually less consistent). Proving this isn't exactly magic.

Lee Jay Forum Pro • Posts: 50,178
Re: Different part of the cycle

25 ms of EVF lag required me to frame a particular fast-moving object at 200mm equivalent, while I could sustain 640mm equivalent on the same object from the same location with an OVF. That's a factor of 10 difference in final pixel count on the target.
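
The factor of 10 follows from squaring the linear focal-length ratio:

\[
\left(\frac{640\ \text{mm}}{200\ \text{mm}}\right)^{2} = 3.2^{2} \approx 10.
\]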
--
Lee Jay

 Lee Jay's gear list:
Canon IXUS 310 HS Canon PowerShot SX260 HS Canon EOS 5D Canon EOS 20D Canon EOS 550D +22 more
Erik Magnuson Forum Pro • Posts: 12,237
Re: Evolution complication: double compensation, double complication
1

TrojMacReady wrote:

Having done swallows in flight with a fast OLED EVF, I can also safely say that it didn't take me much time to compensate for the lag compared to my DSLR.

OK, so you admit there is a difference and want to quibble over how important the difference is. As Lee Jay says, the difference is often how tightly you are able to frame the shots. With swallows that's almost never a factor - try the same thing with larger birds, or even airplanes, where you can almost fill the frame with the subject.

-- hide signature --

Erik

 Erik Magnuson's gear list:
Canon EOS 5D Mark II Canon EOS 450D Sigma SD10 Sony Alpha NEX-5 Nikon D3200 +28 more
TrojMacReady Veteran Member • Posts: 8,683
Re: Evolution complication: double compensation, double complication

Erik Magnuson wrote:

TrojMacReady wrote:

Having done swallows in flight with a fast OLED EVF, I can also safely say that it didn't take me much time to compensate for the lag compared to my DSLR.

OK, so you admit there is a difference and want to quibble over how important the difference is.

That was already pretty clear in my first reply. The notion before that was that 5 ms would make for a steep learning curve.

I say that around 10 ms of delay didn't give me a steep learning curve, considering the much less predictable and much greater delay factors in the whole cycle. And I have extensively shot both.

Here's an example of someone who shoots 300-500mm lenses handheld (!) using manual focus (!) and an OLED finder with about 10 ms delay. We can theorize all day, but if someone can still successfully track and frame small and large flying birds with those factors at play, those who fail with AF, a tripod/monopod, and comparable or better technology should wonder whether it's really the EVF that's the issue here.

As Lee Jay says, the difference is often how tightly you are able to frame the shots. With swallows that's almost never a factor - try the same thing with larger birds, or even airplanes, where you can almost fill the frame with the subject.

I've shot airshows in the past at 504mm equiv. handheld, using an EVF that is ancient by current standards (the FZ18's), with 10 times lower pixel count, up to 4 times lower refresh rate, 40 to 70 ms delay and, last but not least, LCD technology that tended (and still tends) to ghost, unlike the latest OLED viewfinders, which do not suffer from this. And let's for this topic ignore dog-slow CDAF and heavily compromised optics, further challenging the outcome.

And again I can happily say that despite the old tech mentioned above, it didn't turn out to be the near-impossible magic that some make it sound like. I'm sure I could have done much better using today's technology too.

bobhp Forum Member • Posts: 58
Re: The mirrorless (or mindless) troll is back -nt

David Hull wrote:

Ontario Gone wrote:

Well, in case anybody else hasn't heard, Fuji just dropped a bomb, and there will likely be a lot of broken glass, as well as broken hearts. Not only is it a .77x magnification, the biggest of any digital camera, they also ratcheted down the lag. We are looking at a 0.005 sec lag time; not bad, eh? The goodies are listed HERE. I guess now it's just a matter of getting that on-sensor predictive tracking a bit faster and then... well, we know what then. Happy shooting

-- hide signature --

"Run to the light, Carol Anne. Run as fast as you can!"

-- hide signature --

My thoughts exactly, David. One and the same, in my opinion: absolute drivel.

 bobhp's gear list:
Fujifilm XF 23mm F1.4 R
Chikoo Senior Member • Posts: 1,630
Re: Not the mirror but the focus screen

enemjii wrote:

I was composing a bokeh shot on a Canon EOS 100D/SL1, and I was excited by what I saw in the mirror; I snapped the picture, and when I previewed it, it was completely different.

It's not the mirror that's the issue, but the "ground glass" viewing screen. Modern laser-cut screens are optimized for brightness; for more accurate DOF or bokeh you want a fine ground glass.

-- hide signature --

Erik

How about a better EVF than the fine ground glass you are talking about?
