Let's drop the first bomb

How good is the actual DFD-based AF and how good can we expect it to be(come)?
 
"It's better than that. Sigma are going to make an EF-L adapter. So put a Sigma EF mount lens on the adapter and it should work perfectly, since Sigma know exactly how they made their own lenses in EF mount."

-sounds EXTREMELY optimistic to me
Why? You don't think Sigma understand their own lenses?
 
How good is the actual DFD-based AF and how good can we expect it to be(come)?
More to the point I wonder why Panny did not go the PD route when they had a clean slate sensor and camera. PD is basically bulletproof whereas DFD has never been very good.
PD reduces IQ.
In theory, yes. But when we are talking a handful of pixels lost to PD sensors out of 24M or 47M, I'm not sure it makes much of an impact. There may be additional sensor noise - the stacked Z dark frame patterns, possibly - but I think it's a little early to tell how bad it is.
 
"It's better than that. Sigma are going to make an EF-L adapter. So put a Sigma EF mount lens on the adapter and it should work perfectly, since Sigma know exactly how they made their own lenses in EF mount."

-sounds EXTREMELY optimistic to me
Why? You don't think Sigma understand their own lenses?
I'm very skeptical of ANY adapter working "perfectly".
 
Depends... If it's just a tube extending the flange and passing contacts... It's just annoying to HAVE to use it.
 
How good is the actual DFD-based AF and how good can we expect it to be(come)?
More to the point I wonder why Panny did not go the PD route when they had a clean slate sensor and camera. PD is basically bulletproof whereas DFD has never been very good.
PD reduces IQ.
In theory, yes. But when we are talking a handful of pixels lost to PD sensors out of 24M or 47M, I'm not sure it makes much of an impact. There may be additional sensor noise - the stacked Z dark frame patterns, possibly - but I think it's a little early to tell how bad it is.
It appears that it is a substantial percentage of pixels. For the Fuji X-T3, it is 2 million out of 26 megapixels, so well over 7%. I'm not sure how much of an impact that makes on the image, but I wouldn't label it as a "handful" of pixels.

Regarding DFD, I kinda assume that Panasonic knows what they are doing. DFD must have real potential, or they would have switched their research to phase detect already.

One way DFD could prove its potential: I assume Panasonic has hooked a camera up to a powerful external computer, free of the limits of today's in-camera chips (which have to be low-powered and not run too hot). Run the DFD algorithms with all the computing power they need and go shoot a soccer game, or a test scene, or whatever. Is DFD very close to phase detect? Maybe better? If so, how much computing power does it take, and how close are they to fitting that in a camera? Are there other limits, like the DFD algorithm asking the lens to move faster or more finely than it can? And if DFD isn't close, then I hope they are considering other approaches like phase-detect pixels.
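
For anyone wondering what "DFD algorithms" actually do in rough terms, here's a toy sketch of the idea (mine, not Panasonic's - the sharpness metric, function names, and step gain are made-up stand-ins for the per-lens blur model real DFD uses). It just compares two frames taken at known focus positions and guesses which way, and roughly how far, to drive the lens:

```python
# Toy sketch of the depth-from-defocus (DFD) idea, NOT Panasonic's actual
# algorithm: real DFD models each lens's out-of-focus blur characteristics,
# while this just compares a generic sharpness metric between two frames
# captured at two known focus positions.
import numpy as np

def local_sharpness(patch: np.ndarray) -> float:
    """Variance of a crude Laplacian response: higher means closer to focus."""
    lap = (-4 * patch[1:-1, 1:-1]
           + patch[:-2, 1:-1] + patch[2:, 1:-1]
           + patch[1:-1, :-2] + patch[1:-1, 2:])
    return float(lap.var())

def dfd_step(patch_a: np.ndarray, patch_b: np.ndarray,
             pos_a: float, pos_b: float) -> float:
    """Suggest the next lens position from the same scene patch captured at
    lens positions pos_a and pos_b."""
    s_a, s_b = local_sharpness(patch_a), local_sharpness(patch_b)
    gain = 0.5  # hypothetical step scale, not a real calibration value
    direction = np.sign(pos_b - pos_a) * np.sign(s_b - s_a)  # move toward more sharpness
    magnitude = abs(s_b - s_a) / max(s_a, s_b, 1e-9)         # crude defocus estimate
    return pos_b + gain * direction * magnitude
```

The interesting question is exactly the one above: how well a loop like this (with a proper blur model) tracks moving subjects once it has to run on a camera's power and thermal budget.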
 
In theory, yes. But when we are talking a handful of pixels lost to PD sensors out of 24M or 47M, I'm not sure it makes much of an impact. There may be additional sensor noise - the stacked Z dark frame patterns, possibly - but I think it's a little early to tell how bad it is.
It appears that it is a substantial percentage of pixels. For the Fuji X-T3, it is 2 million out of 26 megapixels, so well over 7%. I'm not sure how much of an impact that makes on the image, but I wouldn't label it as a "handful" of pixels.
Interesting. I'd read that the PD sencells were 2 pixels in size. The A6000 supposedly has a bit fewer than 500 so that's around 1K pixels lost. I've read specs that talk about focus points which may or may not be sencells. I went looking for an authoritative article but didn't readily come up with one. Do you have a pointer to one? Not arguing here, sincerely want to understand this.
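
For what it's worth, here's the back-of-the-envelope math behind the two figures being compared, using only the numbers quoted in this thread (rounded, and not official specs - especially the A6000 count, which is a recollection):

```python
# Back-of-the-envelope comparison of the two estimates in this thread.
xt3_pd_pixels = 2_000_000        # "2 million" PD pixels, per the X-T3 figure above
xt3_total     = 26_000_000       # ~26MP sensor
a6000_pd_pixels = 500 * 2        # "a bit fewer than 500" sencells x 2 pixels each
a6000_total     = 24_000_000     # ~24MP sensor

print(f"X-T3:  {xt3_pd_pixels / xt3_total:.2%} of pixels")     # ~7.69%
print(f"A6000: {a6000_pd_pixels / a6000_total:.4%} of pixels") # ~0.0042%
```

So the disagreement comes down to whether a PD site costs a couple of pixels or masks a sizeable block of them - the two estimates are three orders of magnitude apart.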
 
How good is the actual DFD-based AF and how good can we expect it to be(come)?
More to the point I wonder why Panny did not go the PD route when they had a clean slate sensor and camera. PD is basically bulletproof whereas DFD has never been very good.
PD reduces IQ.
How though? Can't say I've seen terrible IQ out of Nikon, Sony and all other manufacturers that use PD or am I missing something?
Have a look at this, regarding the Nikon Z7. Link.

Note the mention of other OSPDAF systems having similar IQ issues.
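
In case the mechanism isn't obvious, here's a toy simulation (made-up numbers, not Z7 measurements) of why on-sensor PDAF can show up as banding: the partially masked PDAF pixels collect less light, the camera scales or interpolates them back, and any small calibration error repeats on every PDAF row, which reads as stripes once deep shadows are lifted:

```python
# Toy illustration of PDAF banding: masked rows get under-exposed, are
# corrected with a slightly wrong gain, and the error repeats row-periodically.
import numpy as np

rng = np.random.default_rng(0)
h, w = 64, 64
scene = np.full((h, w), 100.0) + rng.normal(0, 2, (h, w))  # flat grey + noise

pd_rows = np.arange(0, h, 12)     # pretend every 12th row hosts PDAF pixels
raw = scene.copy()
raw[pd_rows] *= 0.5               # masked pixels collect roughly half the light

corrected = raw.copy()
corrected[pd_rows] *= 2.0 * 1.01  # compensation with a 1% calibration error

residual = corrected - scene
print(f"mean error, regular rows: {residual[1].mean():.3f}")
print(f"mean error, PDAF rows:    {residual[pd_rows].mean():.3f}")
# The PDAF rows end up ~1% brighter than their neighbours - invisible at base
# exposure, but a repeating pattern like that shows up when shadows are
# pushed several stops.
```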
 
In theory, yes. But when we are talking a handful of pixels lost to PD sensors out of 24M or 47M, I'm not sure it makes much of an impact. There may be additional sensor noise - the stacked Z dark frame patterns, possibly - but I think it's a little early to tell how bad it is.
It appears that it is a substantial percentage of pixels. For the Fuji X-T3, it is 2 million out of 26 megapixels, so well over 7%. I'm not sure how much of an impact that makes on the image, but I wouldn't label it as a "handful" of pixels.
Interesting. I'd read that the PD sencells were 2 pixels in size. The A6000 supposedly has a bit fewer than 500 so that's around 1K pixels lost. I've read specs that talk about focus points which may or may not be sencells. I went looking for an authoritative article but didn't readily come up with one. Do you have a pointer to one? Not arguing here, sincerely want to understand this.
I'm not an expert and don't have definitive info. I did open a thread in which a few people opined:


The 2 million phase detect pixels number is from the Fuji X-T3 press release:

 
