EVF Lag times (continued): For Anders
Anders W wrote:
Michael J Davis wrote:
Anders W said:
Consider the following two scenarios for my shot with the camera used for recording the two screens (with the recording time being 6.25 ms if I use the G1 and 4 ms if I use the EM5).
First scenario:
The frame shown by the computer screen is at the very end of its 16.7 ms display time.
The frame shown by the camera screen was captured at the very beginning of the 16.7 ms display time of the corresponding computer frame.
The frame shown by the camera screen is at the very beginning of its 8.3 ms (120 Hz) or 16.7 ms (60 Hz) display time.
Second scenario:
The frame shown by the computer screen is at the very beginning of its 16.7 ms display time.
The frame shown by the camera screen was captured at the very end of the 16.7 ms display time of the corresponding computer frame.
The frame shown by the camera screen is at the very end of its 8.3 ms (120 Hz) or 16.7 ms (60 Hz) display time.
Add to this the possibility that there is some random error in the computer clock such that it doesn't step forward exactly 16.7 ms for each new computer frame.
Obviously, a higher sampling rate at both ends of the chain is an advantage in the sense that it reduces the sample size we need for a reasonably precise estimate of the average. Regrettably, I have no 240 fps video camera available (the EM5 manages only 30 fps), so I have to sample more than you.
I started to emulate your tests, using a G1 to record both the G1 display and the Stopwatch displayed on my Viewsonic VP201s (not recent but a useful 1600 x 1200 display).
1. I had great difficulty getting both the G1 screen and the monitor in focus. Stopping down increased the exposure times so much that I couldn't read the figures!!
2. So I attempted to see the response of the monitor by merely recording the stopwatch (the program you used).
This is the result at exposure speeds between 1/40th sec and 1/10th:
I was amazed that there is no significant double imaging down to 1/25th (one of the 1/30ths caught the changeover), but worse is that longer shutter speeds only showed the ghosting of two sets of figures. I was expecting, even at UK 50 Hz refreshes, to see at least three images at 1/10 sec. Inspection of these suggests that we can only measure delays of around 1/20th second, i.e. 50 ms, which, when we are comparing 100 ms to 25 ms delays, is much less precise than I thought your experiment suggested.
I know some discussion on this has taken place, and your monitor is no doubt more up to date and responsive, but ISTM that the errors in the process undermine any conclusions.
Just a thought.
Mike
Yes, I noticed the DoF problem too. My solution was to magnify the figures so that they nearly covered the whole screen (zoom level 1000 percent) and then set focus somewhere in between the computer screen and the camera screen. This left both blurry but still sharp enough that I could read the figures, and allowed me to use a fairly wide aperture along with a fast shutter speed if I cranked up the ISO to 3200. The shutter speed I used was 1/1000. Even at this speed, it still takes at least 4 ms on the EM5 and at least 6.25 ms on the G1 to expose the entire frame, since the fastest speed you can use for flash sync is 1/250 and 1/160, respectively. But since the numbers displayed covered only part of the frame vertically, and since I made an effort to keep the computer numbers and the camera numbers at the same height so that they would be passed by the shutter slit at the same time, I probably did a bit better than that.
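As a quick sanity check on those figures, the time the focal-plane shutter slit needs to sweep the frame can be approximated from the fastest flash-sync speed (a rough rule of thumb, not an exact spec):

```python
def slit_traversal_ms(sync_speed_denominator):
    """Approximate time (ms) for the shutter slit to sweep the
    whole frame, taken as the fastest flash-sync exposure time."""
    return 1000.0 / sync_speed_denominator

em5 = slit_traversal_ms(250)  # 1/250 s sync -> 4.0 ms
g1 = slit_traversal_ms(160)   # 1/160 s sync -> 6.25 ms
```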
I repeated your garbling test and found, once I got down to the slowest speeds you used, that there was garbling of three rather than two numbers more often than not. This may be due simply to different performance of our computers on one or other level.
However, regardless of how frequently the on-screen clock is actually updated, this doesn't undermine the conclusions. In the previous thread, I provided an even more extreme example to show this. Suppose the actual EVF lag is 100 ms and the computer clock is updated only once a second. This means that the probability that a single test shot will show a difference of zero is 90 percent and the probability that it will show a difference of one second is 10 percent. Consequently, the expected average is still correct (100 ms), although none of the test shots will show a value close to that average.
At the same time, it would, under the extreme conditions of this example, take a large sample of test shots before we could be reasonably sure that the actual lag time was in the vicinity of 100 ms. The greater the amount of random error, the bigger a sample we need.
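A minimal Python sketch of this extreme example, assuming the shot catches the on-screen clock at a uniformly random phase of its one-second update cycle, illustrates both points: individual shots read only 0 or 1000 ms, yet the mean converges on the true 100 ms lag, and it takes a large sample to get there.

```python
import random

def simulated_reading(true_lag_ms=100.0, clock_step_ms=1000.0):
    # Phase of the clock's update cycle at the moment of capture,
    # assumed uniformly random across test shots.
    phase = random.uniform(0.0, clock_step_ms)
    # The displayed difference is the number of clock updates that
    # occur during the lag interval, times the step size.
    return ((phase + true_lag_ms) // clock_step_ms) * clock_step_ms

random.seed(1)
shots = [simulated_reading() for _ in range(100_000)]
mean_lag = sum(shots) / len(shots)
share_zero = shots.count(0.0) / len(shots)
# Roughly 90% of shots read 0 ms and 10% read 1000 ms,
# but mean_lag still comes out close to the true 100 ms.
```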
To recap a few other things that I already pointed out in the other thread:
How large a sample we need for this or that level of precision can be estimated by means of the sample itself. In the tests I performed, I had standard deviations somewhere between 20 and 25 ms. For the sample size I used (25 cases), this means that the standard error of the mean is about 5 ms and the 95 percent confidence interval roughly plus/minus 10 ms. I should emphasize that these figures are but rough approximations, based on some assumptions that are not strictly met (e.g., that the underlying distribution is normal), but that they are nevertheless rather close to what we would get if we bothered to calculate more precisely.
If you want a more precise estimate without changing the test setup, all you have to do is increase the sample size. For example, with a sample of 100 rather than 25 test shots, the 95 percent confidence interval would shrink from about plus/minus 10 ms to about plus/minus 5 ms (i.e., the confidence interval is cut in half if you quadruple the sample size, since precision is proportional to the square root of the number of cases).
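The arithmetic behind these figures can be sketched in a few lines of Python, using the normal approximation already mentioned (22.5 ms is just the midpoint of the 20-25 ms range of standard deviations):

```python
import math

def ci95_half_width(std_dev, n):
    """Approximate 95% confidence half-width for the sample mean."""
    sem = std_dev / math.sqrt(n)  # standard error of the mean
    return 1.96 * sem

ci_25 = ci95_half_width(22.5, 25)    # about +/- 9 ms
ci_100 = ci95_half_width(22.5, 100)  # about +/- 4.4 ms: half as wide
```

Quadrupling the sample size halves the half-width exactly, since it scales with 1/sqrt(n).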
Yes, sorry, I do understand the statistics - I should have made this clearer, and you are right that adequate sampling does lead us to arrive at the 'correct' solution. I was just trying to repeat your experiment, so I understood it better.
You are the expert here, but I was thinking that to pick up timings that are of the same order of magnitude as the refresh times would require very large sample sizes to get to adequate confidence levels. However, I bow to your statistical prowess!
Once again an interesting discussion - thanks!
Mike
Mike Davis
Photographing the public for over 50 years
www.flickr.com/photos/watchman