Sdaniella: Showing us AF performance in bright, sunny conditions with high-contrast subjects, where the modest aperture diameters of crop-sensor lenses offer deeper DoF, only tells us that the AF never needs to work hard ...
If the lighting conditions were less contrasty and less bright - sundown, indoors, at home, in an arena, etc. - wider apertures would be required, DoF would be shallower, and PDAF would be pushed harder.
And with longer tele lenses at even tighter apertures, not all PDAF points at the edges may work in such low light - only the PDAF points clustered in the center.
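The aperture/DoF effect described above can be put into rough numbers with the standard thin-lens DoF approximation. This is just an illustrative sketch; the focal length, f-numbers and circle-of-confusion value below are example figures, not taken from any camera discussed here:

```python
def depth_of_field(focal_mm, f_number, distance_mm, coc_mm=0.03):
    """Approximate near/far limits of acceptable sharpness (thin-lens model).

    coc_mm: circle of confusion; ~0.03 mm is a common full-frame value.
    Returns (near_mm, far_mm); far is infinite beyond the hyperfocal distance.
    """
    hyperfocal = focal_mm ** 2 / (f_number * coc_mm) + focal_mm
    near = distance_mm * (hyperfocal - focal_mm) / (hyperfocal + distance_mm - 2 * focal_mm)
    if distance_mm >= hyperfocal:
        far = float("inf")
    else:
        far = distance_mm * (hyperfocal - focal_mm) / (hyperfocal - distance_mm)
    return near, far

# Same 50 mm lens focused at 3 m: stopping down from f/1.8 to f/5.6
# roughly triples the zone of acceptable sharpness (~0.38 m vs ~1.24 m),
# which is why wide-open low-light shooting is so much harder on the AF.
near_fast, far_fast = depth_of_field(50, 1.8, 3000)
near_slow, far_slow = depth_of_field(50, 5.6, 3000)
print(far_fast - near_fast, far_slow - near_slow)
```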
My Nikon D750 focuses wide open using PDAF (viewfinder). When you force it to stop down (DoF preview) it refuses to focus at all. Using CDAF (Live View) it focuses at the currently chosen f-stop.
On the other hand Olympus mirrorless cameras focus wide-open using either PDAF or CDAF. But you can force them to focus at a slower f-stop by using DoF preview (where the Nikon refuses to focus at all).
So it's not a matter of DSLR vs. mirrorless, but of specific camera implementations and the exposure/low-light limits of the focus sensors.
Physics is still at work. The on-sensor PDAF sensels are small and, for any given area, relatively few. This in turn means that they can only capture relatively little light compared to the big dedicated PDAF module of a DSLR.
The workaround is to increase exposure of every single Live View frame, which in turn decreases fps of Live View, which then in turn decreases the speed of the AF feedback loop.
At some point PDAF sensor exposure is so low that the camera switches to CDAF, because it can then make use of more sensels (albeit not necessarily the whole sensor area; Olympus uses line skipping during Live View, Samsung's NX1 is said not to, and I have no idea about Sony cameras).
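The exposure/fps trade-off described above can be sketched as a toy model: in dim scenes each Live View frame must be exposed longer to feed the on-sensor PDAF sensels enough light, which caps the achievable frame rate and therefore the speed of the AF feedback loop. All numbers here are made up for illustration:

```python
def live_view_fps(scene_ev, target_exposure_ev=5.0, max_fps=60.0):
    """Toy model of the Live View AF feedback loop described above.

    scene_ev: scene brightness in EV; one EV less light doubles the
    frame exposure time needed to reach the target exposure.
    Returns the achievable Live View frame rate (capped at max_fps).
    All constants are illustrative, not from any real camera.
    """
    # Shutter time (seconds) needed so each frame reaches the target exposure.
    needed_time = (1.0 / 1000.0) * 2.0 ** (target_exposure_ev - scene_ev)
    return min(max_fps, 1.0 / needed_time)

# Bright daylight down to dim indoor light: fps (and thus AF loop speed) drops.
for ev in (10, 5, 0, -2):
    print(ev, round(live_view_fps(ev), 1))
```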
Kids at home = moving subjects at close range in bad light = everyday practice = one reason for me to switch from mirrorless to DSLR until on-sensor AF-C improves considerably!
RPJG: "However, along with 4K capture, the 1D X II includes tools to grab 8.8MP frames from its 4K files: at which point the decision to save every frame as an individual JPEG makes slightly more sense."
Does this mean it makes it *easier* to grab a frame from a video?
Surely you can grab a frame from *any* video, using the right program - or are there limitations with that process?
Sure, software can extract single frames. Still, that "single" frame is a reconstruction from a key-frame and subsequent delta frames, which only record changes. So a full JPEG frame usually wins over a reconstructed one, depending on content and codec.
Yes, it makes it *easier* to grab a frame from video when every single frame is already encoded as a JPEG. Other video codecs only write a full (as in: all information) key-frame every X frames (say, every 15) and then only save the differences between frames.
So in order to fetch a single frame you first have to decode the key-frame and then all subsequent delta frames. Of course this already happens for Live View anyway, but it also means that you have to keep a full-resolution image in memory for every single frame besides the downsized Live View image. I don't know how they do it in practice, but as you can already guess, it's more complicated than just pulling a full-featured JPEG frame.
With fast motion you may also get information deterioration, because those "difference" delta frames are compressed at varying quality levels, too (implementation depends on the codec).
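The key-frame plus delta-frame mechanics described above can be illustrated with a toy decoder. This is a deliberately simplified sketch (real codecs like H.264 use motion-compensated macroblocks, not per-pixel diffs), but the decoding dependency is the same: to get frame N you must walk back to the preceding key-frame and replay every delta, which an all-JPEG (all-intra) stream avoids entirely:

```python
def extract_frame(stream, n):
    """Reconstruct frame n from a toy GOP-style stream.

    stream: a list of ('I', full_pixels) key-frames and ('P', {index: value})
    delta frames that record only the pixels that changed since the prior frame.
    """
    # Find the last key-frame at or before frame n.
    k = max(i for i in range(n + 1) if stream[i][0] == 'I')
    pixels = list(stream[k][1])               # start from the full key-frame
    for kind, delta in stream[k + 1:n + 1]:
        for index, value in delta.items():    # replay each recorded change
            pixels[index] = value
    return pixels

stream = [
    ('I', [0, 0, 0, 0]),   # frame 0: key-frame (complete image)
    ('P', {1: 9}),         # frame 1: one pixel changed
    ('P', {2: 7, 3: 5}),   # frame 2: two more changes
]
print(extract_frame(stream, 2))  # -> [0, 9, 7, 5]
```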
vscd: It's a shame to enter such a picture into a competition, but when I look at professional photography it seems to be getting more and more common. Even the big markets, stock agencies and prize contests, have a problem with faked photos... In this example you can spot it easily, but how do you spot "street photography" with "magic moments" that weren't captured in reality but staged with models or background actors? It's the same kind of fake, just difficult to find out.
You can enter anything you like. It's up to the judges to decide what's real and good. Maybe the judges are currently hiding in shame, that is, the interns who were asked to do a quick and inexpensive social media thingy. :D
Bob Janes: Hmmm, don't quite understand why "made in the USA" is a like....
When the English invented the "Made in Germany" sticker they wanted to make people aware of "cheap copies", quite similar to what would later be said about "Made in China". Meanings change over time... ;)
Well, there is 0:32.
FuhTeng: Touchscreen for video, but you can't use it to select AF? Wouldn't that be useful while recording video?
Also, forgive this mirrorless camera user, but what's the purpose of 14 fps with mirror lock-up? A 20 MP stop-motion video of something that doesn't change focal planes?
Still no focus peaking? I can hardly judge proper manual focus on screen at f/2.8. I want peaking to assist me in nailing that focal plane instead of just getting close.
raztec: What, no pop-up flash? Deal breaker for me!
Of course this was in jest, especially when talking about this very expensive professional body. But when the technology trickles down I hope very much for a pop-up flash.
Nikon's pop-up flash algorithm is great! By default the D750 adds just a single stop of fill flash when you use the pop-up flash. That's one stop less noise on your main subject without the flash dominating the image. You can set the camera so that (non-flash) exposure compensation controls the amount of flash (every stop of negative EC adds a stop of flash). So a single turn of the control wheel lets you change from subtle (higher ISO, less visible flash) to dominating (lower ISO, more visible flash).
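The EC-to-flash mapping described above can be sketched as a tiny function. To be clear, this is a hypothetical model of the behavior as described in the comment, not Nikon's actual algorithm; the base value and clamping are assumptions:

```python
def flash_compensation(exposure_comp_ev):
    """Hypothetical model of the described D750 fill-flash behavior.

    Base fill flash sits one stop below a flash-dominated exposure; every
    stop of negative (ambient) exposure compensation adds one stop of
    flash, shifting the balance from subtle fill to flash-dominated.
    Positive EC leaves the flash at the subtle default.
    """
    base_fill_ev = -1.0                       # default: one stop of subtle fill
    return base_fill_ev + max(0.0, -exposure_comp_ev)

print(flash_compensation(0))    # subtle default fill -> -1.0
print(flash_compensation(-2))   # two stops negative EC -> +1.0 (dominating)
```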
Fujifilm can do subtle, too, but changing to dominating wasn't as easy when I tried last (quite some time ago).
Timur Born: Let it trickle down to D750 territory, especially the AF and sensor tech.
AF fine-tune could be added to current cameras via firmware update. It should have been there long ago. Too bad paid firmware upgrades are still not a reality.
Japanese toilet-seats sometimes seem to have more processor logic than DSLR lenses.
Let the camera (and maybe even the lens) save some numbers for different distances and focal lengths in memory. Even if you add different parts of the image to cover most of the AF sensor area (and thus the middle of the lens area), you end up with a few KB per lens, if that.
If the camera's processor/OS cannot do this, then just transfer the raw numbers to a connected smartphone/computer and let those do the number crunching. Transfer the finished calculations back and be good to go.
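The per-lens calibration table suggested above might look something like the sketch below: a handful of (focal length, correction) pairs per lens plus linear interpolation between them. Everything here, including the lens name and calibration values, is a made-up illustration of why the memory and compute cost would be trivial:

```python
from bisect import bisect_left

def af_correction(table, focal_mm):
    """Look up (or linearly interpolate) an AF fine-tune offset for a zoom.

    table: sorted list of (focal_length_mm, correction) pairs saved per lens.
    A real implementation might also key on focus distance and AF point;
    even then the data would amount to a few KB per lens at most.
    """
    focals = [f for f, _ in table]
    i = bisect_left(focals, focal_mm)
    if i == 0:
        return table[0][1]                # at or below the shortest entry
    if i == len(table):
        return table[-1][1]               # beyond the longest entry
    (f0, c0), (f1, c1) = table[i - 1], table[i]
    t = (focal_mm - f0) / (f1 - f0)       # position between the two entries
    return c0 + t * (c1 - c0)

tamron_24_70 = [(24, -3), (50, 0), (70, 4)]  # hypothetical calibration points
print(af_correction(tamron_24_70, 60))       # -> 2.0 (halfway from 0 to 4)
```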
DSLR manufacturers are lazy on the software/firmware front. As with too many other "current" technological gizmos, this feels like the digital stone age.
Too bad that Samsung didn't make it with the NX1; they seemed to be at the forefront with their processor and software expertise. Hopefully some other company is willing to pay Samsung good money for the tech and makes good use of it.
How will it react to dust and pocket lint? That's the main downfall of my Panasonic LF1 - its sensor is full of it. If a camera is meant to go "into your pocket" then it should be able to handle that situation properly.
fortwodriver: Did I miss something, or is the lens used for some of these tests not listed? The image on the conclusion page is interesting. It was shot with a Sigma lens. That noise (the patterned banding) looks like the noise I used to get on my older 7D with a Sigma lens that wasn't quite right in its head. After I sent it back to Sigma, they swapped out the circuit board and I never had an issue with it again.
I don't know if it's wise to test those sorts of things with out-of-system reverse-engineered components. Canon, out of all the camera companies out there, has been the absolute worst at providing guidance to third-party lens manufacturers. By using a third party lens, you've basically poisoned the test results for some of those images - because Canon never guarantees compatibility with anything from Sigma.
It wouldn't surprise me at all if some of the worst shadow noise you found was caused by that Sigma lens.
It shows as horizontal banding, usually starting from the top of the sensor and then only reaching partly into the image (the amount of which varies). With the 70-200 it happens at a fixed frequency (size and spacing of stripes), with the 24-70 it's either fixed (larger size and spacing) or variable (usually from smaller to larger).
It's easily reproduced in combination with AF-C, harder with AF-S. Usually you need very high ISO to see it (it's sensor noise in the end), but with the 24-70 I can see it starting from about ISO 6400. I could not reproduce it with the few Nikon lenses I had access to yet. No experience with Sigma lenses and no idea if the Tamron lenses do the same thing to Canon sensors.
So when you notice banding noise on a sensor it can be useful to turn off lens-based focus-motor and stabilization features to make sure it's really the camera's fault. Maybe even let the electronics settle (finish communicating) after using manual focus; at least that works well with the Tamrons.
On the 5D tested here the noise pattern may be coming from the camera itself, but yes, lenses can induce that. I see it with Tamron lenses (24-70 & 70-200) on my D750, where it seems to be induced by the AF motor in the lenses. I also saw the same Tamron 70-200-specific pattern from someone shooting a D7200 here on the forum. This is a real bummer and doesn't always seem to be as easily fixed as exchanging a circuit board.
Samsung claimed that the full sensor is read out during Live View, without line skipping. Given their expertise in high-performance processors I trust that they are capable of doing so. But has it ever been confirmed in practice?
Its EVF can do 240 fps or so? That would be 240 times 28 MP in one second, plus processing and resizing to EVF/screen resolution. Quite a feat if true!
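The arithmetic behind that skepticism can be made explicit. Note the 240 fps figure is just the claim questioned above, and the 12 bits per pixel is my own assumption for the raw readout depth:

```python
# Back-of-envelope check of the claimed full 28 MP readout at 240 fps.
megapixels = 28e6
fps = 240
pixels_per_second = megapixels * fps
print(f"{pixels_per_second / 1e9:.2f} Gpixels/s")  # -> 6.72 Gpixels/s

# Assuming 12 bits per pixel, the raw data rate would be roughly:
gigabits_per_second = pixels_per_second * 12 / 1e9
print(f"{gigabits_per_second:.1f} Gbit/s")  # -> 80.6 Gbit/s
```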
Sports Dad: Such an incredible bargain. Every bit as good as an A7R II up to ISO 3200, but at a third of the price. The big difference is the Samsung zooms are f/2-2.8 and f/2.8 vs. f/4, and the Samsung can shoot at 15 fps.
120 FPS video while tracking focus is amazing as well as the 4K video.
And did you know the NX1 has IBIS during video for any lens you put on it?
When the NX1 was on display at Photokina (using preproduction firmware) continuous AF was nothing short of a disaster, even on high-contrast, motionless targets. When I pointed that out to the German person at the stand, he called no fewer than three Korean guys to surround me. I demonstrated the issues and we kind of agreed that this was sub-par performance. They nodded and hopefully went back home with an agenda to improve things.
Since then I never had another NX1 in my hands, so I don't know how much AF-C improved. Most mirrorless cameras are good and fast with AF-S, it's AF-C where they have to compete with DSLRs.
Timur Born: Now the really interesting question is: which cameras do not use line skipping (on sensor) during Live View? When the source image is of lower resolution than the EVF, all the extra EVF resolution is only useful for playback (which is based on the full-resolution image).
For example, the E-M1 does not make much use of its 2.3M-dot EVF during Live View. In practice, during Live View you cannot make out any difference from the 1.44M-dot EVF of the E-M5 (Mk1) when it comes to recognizing fine details of a scene.
Ulfric, this is not about the comparison of E-M1 vs. E-M5, it is about the comparison of Live View vs. playback. As long as Live View lacks sharpness vs. playback it doesn't matter how many pixels you put on the viewfinder. This is true for both the E-M1 and E-M5.
Digimat, in low light the viewfinder is noisier, because it still has to maintain anywhere between 15 and 60 frames/s. So the exposure of each individual frame is usually lower than that of your final image. This is why Olympus cameras lower the frame rate in low light, so that each frame gets a longer exposure.
On top of that, line skipping means less light is used for Live View to begin with. It's processor-intensive to read out and process 16 MP at 15-240 fps, which is why the claim that the NX1 does no line skipping would be a special feature.
Fortunately the advent of 4K (<9 MP) and later 8K video pushes the envelope towards faster processing.
Live View (screen) on the D750 shows more detail in full darkness than on the E-M1.
Regardless of whether the E-M1's EVF is better than the E-M5's, the bottleneck still is *not* the viewfinder. As long as Live View on the same viewfinder is more blurred than playback, the real culprit lies elsewhere. You can increase the resolution all you want; if the source doesn't improve, the EVF resolution is of little interest.
A. Reviews usually don't dig this deep into the specifics of the electronics. How many reviews have ever told you that Tamron lenses cause interference banding lines on Nikon sensors?! It's just not part of a normal review process.
Improved optics, improved contrast, improved downscaling algorithm (more processing power) and maybe improved horizontal resolution (line skipping affects vertical resolution).
In practice I compared the E-M1 directly to the E-M5 and any present differences in EVF resolution alone are hardly visible. What is very visible is the lack of detail in Live View compared to playback view *after* the shot was taken.
In other words, Live View through the Olympus viewfinder isn't tack sharp, and that is not a fault of the EVF (resolution) itself. More EVF resolution will *not* solve this issue.
B. I report that part of the image information is not present during Live View! Scaling down from 8 or 4 MP results in less detail than scaling down from the full 16 MP.
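A tiny one-dimensional illustration of that point: a full readout averaged down retains information from every captured row, while line skipping throws rows away and can alias fine detail into a completely wrong result. The row values below are made up for demonstration:

```python
def downscale_by_average(samples, factor):
    """Average groups of `factor` samples (uses all captured rows)."""
    return [sum(samples[i:i + factor]) / factor
            for i in range(0, len(samples), factor)]

def line_skip(samples, factor):
    """Keep every `factor`-th sample and discard the rest."""
    return samples[::factor]

# A fine alternating pattern, like thin lines one sensor row apart.
fine_detail = [0, 10] * 8

# Full-readout downscaling folds the pattern into the correct mid-gray...
print(downscale_by_average(fine_detail, 2))  # -> [5.0, 5.0, ..., 5.0]
# ...while line skipping sees only the dark rows and aliases the detail
# into a false solid black.
print(line_skip(fine_detail, 2))             # -> [0, 0, ..., 0]
```

Real demosaicing and scaling pipelines are far more sophisticated, but the underlying loss is the same: information skipped at readout can never be recovered downstream.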
No rant, by the way.