falconeyes

Lives in Germany
Has a website at falklumo.blogspot.com
Joined on Apr 28, 2008

Comments

Total: 199, showing: 1 – 20
In reply to:

Mike Davis: I suspect the label "Diffraction Correction" exaggerates the effectiveness of this feature. No amount of processing can magically recreate actual subject detail that was lost to diffraction as the light passed through the aperture. It might be able to simulate what appears to be genuine subject detail, but it won't be accurate.

For example, assuming that all other variables affecting resolution are up to the task... If diffraction at a given f-Number is just bad enough to prevent you from discerning the date "2014" on the face of a coin lying on a table several meters from a camera equipped with a normal FL lens when viewing at 100%, "Diffraction Correction" isn't going to reconstruct that data from thin air when the data never got past the aperture in the first place.

You can't make a silk purse from a pig's ear.

Mike Davis and followers seem to be lost in physics, so I'd like to clarify a bit. Diffraction reduces contrast up to a certain spatial frequency and loses information beyond it. With current sensors, the point where information starts to be lost is about F/11. Up to this point, the effect of diffraction can be reverted using deconvolution. This is possible because the PSF of diffraction is known. Something similar was done for the early Hubble telescope. Note that the mathematical properties of the procedure require a clean signal, so low ISO.

So yes, diffraction effects can be corrected for, sometimes. Same holds for other small lens aberrations. DxO does the same in their raw converter.
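As a minimal sketch of the idea (not DxO's actual pipeline): since the PSF is known, the blur can be inverted in the frequency domain with a Wiener filter. Here a small symmetric kernel stands in for the true Airy-pattern PSF, and the noise-to-signal ratio `nsr` is illustrative; a clean low-ISO signal is what justifies a small value.

```python
import numpy as np

def wiener_deconvolve(blurred, psf, nsr=1e-3):
    # Frequency-domain Wiener deconvolution with a known PSF.
    # nsr is the assumed noise-to-signal power ratio; a clean
    # low-ISO signal justifies a small value.
    H = np.fft.fft(psf, n=blurred.size)
    G = np.conj(H) / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft(np.fft.fft(blurred) * G))

# Toy 1-D demo: a step edge softened by a symmetric blur kernel
# (a stand-in for the diffraction PSF, which is really an Airy pattern).
n = 64
signal = (np.arange(n) >= n // 2).astype(float)
psf = np.zeros(n)
psf[[0, 1, -1, 2, -2]] = [0.4, 0.2, 0.2, 0.1, 0.1]  # sums to 1
blurred = np.real(np.fft.ifft(np.fft.fft(signal) * np.fft.fft(psf)))
restored = wiener_deconvolve(blurred, psf)

# The restored edge sits much closer to the original than the blurred one.
err_blur = np.abs(blurred - signal).max()
err_rest = np.abs(restored - signal).max()
```

The contrast lost below the cutoff comes back because the filter divides out the known transfer function; what a larger `nsr` (higher ISO) does is refuse to amplify frequencies where the signal has drowned in noise.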

Direct link | Posted on Jul 23, 2014 at 18:50 UTC
On Nikon 1 V3 First Impressions Review preview (618 comments in total)
In reply to:

AbrasiveReducer: Except for image quality, sounds like a great camera.

> Except for image quality, sounds like a great camera.

Except for image quality, a recent smart phone sounds like an even greater camera...

Direct link | Posted on Jul 17, 2014 at 12:32 UTC
On Drone lighting could be coming soon to your studio article (120 comments in total)

This is what *research* at MIT, the most famous US lab, looks like? Toy projects which would be classified as "product development" in any small engineering company in any developed country?

To remind people what research is about and why it is worth spending public money on it: it is all about pushing the envelope of what is understood, known and feasible. A copter-based flash was feasible years ago.

Direct link | Posted on Jul 17, 2014 at 09:15 UTC as 46th comment | 2 replies
On What is equivalence and why should I care? article (1763 comments in total)
In reply to:

attomole: So f2 = f2 = f2 after all; only if you want to keep quality or DOF constant between formats it isn't (and why would you want to do that). It took hours of poring over the Joseph James article and the three Petapixel videos on the subject, but I finally came back full circle.

The total light stuff was an interesting revelation to me, the discussion on this and the thought process always gets bogged down in the mix of, number of pixels, pixel pitch sensor size and viewing conditions, the concept of total light captured. nicely sidesteps that argument to explain the bulk effects we see regardless of pixel size and number of pixels (almost) and is nicely illustrated in the graphics in this article)

> If I asked you to set the aperture to "F/4 equivalent", what would you do?

This isn't a hypothetical question. In a photo workshop or professional production, you WILL be asked this. And an M43 user had better KNOW to then set his camera to F/2.

On several occasions, I have actually asked for a firmware mode to operate a camera solely in equivalent units for FL, F-stop and ISO. This would be a huge simplification for users operating multiple camera formats, or learning photography.

Actually, many cameras (the RX100 being an example) display FL as an equivalent unit but F-stop and ISO as non-equivalent units. I find this misleading and almost fraudulent.
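The firmware mode asked for above would only need trivial arithmetic. A hedged sketch (the function name and the 35mm full-frame reference point are my own choices):

```python
def to_equivalent(focal_mm, f_number, iso, crop_factor):
    # 35mm-equivalent parameters: focal length and F-number scale
    # with the crop factor, ISO with its square. Shutter speed is
    # unchanged by equivalence.
    return (focal_mm * crop_factor,
            f_number * crop_factor,
            iso * crop_factor ** 2)

# An M43 camera (crop factor 2) at 25mm F/2 ISO 100 is
# "50mm F/4 equivalent" -- so when asked for "F/4 equivalent",
# the M43 shooter sets F/2.
print(to_equivalent(25, 2.0, 100, 2.0))
```

The camera could display either triple; the objection in the comment above is only to mixing the two conventions in one display.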

Direct link | Posted on Jul 9, 2014 at 13:59 UTC
On What is equivalence and why should I care? article (1763 comments in total)
In reply to:

mostlyboringphotog: I guess now I understand why "equivalence" becomes such a heated topic while it's nothing more than manipulating some math values.

Is 2+2 equivalent to 3+1?

What I object to the F-stop equivalence is that I think it buries the understanding of DOF even deeper while creating that DOF is sensor size dependent misconception.

For example, if you crop an image from a FF to a size of crop image, do you now need to crank up to the equivalent exposure using PP to maintain the DOF? Does DOF change if an image is cropped?

DOF is a function of FL, F-stop (EP) AND distance to the subject.
You don't need "equivalent F-stop" if you move closer to the subject with a crop camera to maintain the same DOF of FF.
If you don't want to move, you may change the FL (not equivalent).
And if your lens is not a constant F-stop zoom (or prime), then you can change the F-stop to a DOF calculated F-stop (not equivalent), then adjust ISO or shutter speed or add ND-filter to maintain the same exposure.

@mostlyboringphotog: The reason why I personally like the concept of equivalent parameters so much is this: if you characterize a photo, you normally have to state FL, F-stop, SS, ISO, format, post-production crop and post-production exposure push. Or, alternatively, just equivalent FL/F-stop/ISO and SS. Yes, post-production changes the equivalent values for FL, F-stop and ISO.

It simply is an easier way of communication between photographers when talking about their images: 4 parameters rather than 7.

Of course, one may do whatever fits. The only thing I really object to is mixing real and equivalent parameters.
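Taking the claim above that post-production crop folds into the equivalent values, the bookkeeping can be sketched like this (helper name hypothetical; a crop keeping 1/c of the frame width is treated as shooting on a sensor smaller by that factor):

```python
def equivalent_after_crop(focal_mm, f_number, iso, crop_factor, crop):
    # A post-production crop multiplies the effective crop factor,
    # so it is applied before the equivalence scaling.
    k = crop_factor * crop
    return (focal_mm * k, f_number * k, iso * k ** 2)

# Full frame, 50mm F/4 ISO 400, then a 2x crop in post, matches an
# uncropped "100mm F/8 ISO 1600 equivalent" image:
print(equivalent_after_crop(50, 4.0, 400, 1.0, 2.0))
```

This is how the 4-parameter description stays complete even after cropping: the crop is absorbed into the equivalent FL, F-stop and ISO.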

Direct link | Posted on Jul 8, 2014 at 18:33 UTC
On What is equivalence and why should I care? article (1763 comments in total)
In reply to:

falconeyes: 18 hours ago, David Jacobowitz made an argument that this article should cite work which helped evolve the concept of equivalence (or, as I call it, the equivalence theorem).

To this end, I observed that the concept was missing in internet discussions dated 2007, Jan 11. At that time, Daniel Buck in http://www.fredmiranda.com/forum/topic/544062/ described the Brenizer method on the Fred Miranda forum (actually before Ryan Brenizer "invented" it; he did not). The effect is easily understood using equivalence (stitching effectively creates a larger sensor). Yet, the Fred Miranda discussion fails to recognize this relation and does a poor job of explaining the effect or computing its effective aperture.

Therefore, I think it is safe to assume that the equivalence theorem was discovered after January 2007. Moreover, this is a nice example of how useful the equivalence theorem actually is ...

@pidera, I recognize your early contribution. However, this post did not yet introduce the term "equivalence" which covers all aspects of IQ, including noise and diffraction effects, not just FoV and DoF.

Direct link | Posted on Jul 8, 2014 at 18:21 UTC
On What is equivalence and why should I care? article (1763 comments in total)
In reply to:

falconeyes: 18 hours ago, David Jacobowitz made an argument that this article should cite work which helped evolve the concept of equivalence (or, as I call it, the equivalence theorem).

To this end, I observed that the concept was missing in internet discussions dated 2007, Jan 11. At that time, Daniel Buck in http://www.fredmiranda.com/forum/topic/544062/ described the Brenizer method on the Fred Miranda forum (actually before Ryan Brenizer "invented" it; he did not). The effect is easily understood using equivalence (stitching effectively creates a larger sensor). Yet, the Fred Miranda discussion fails to recognize this relation and does a poor job of explaining the effect or computing its effective aperture.

Therefore, I think it is safe to assume that the equivalence theorem was discovered after January 2007. Moreover, this is a nice example of how useful the equivalence theorem actually is ...

@panos_m: Thanks for the link; it is 13 days older than my Fred Miranda link. Therefore, it may indeed mark the first occurrence of the concept on the internet.

@JA Canon shooter: I try to avoid redundant words. I think it was clear what I meant: two photos where it is impossible to tell whether they are from the same or from two different cameras. In brevity is clarity.

Direct link | Posted on Jul 8, 2014 at 17:25 UTC
On What is equivalence and why should I care? article (1763 comments in total)
In reply to:

falconeyes: 18 hours ago, David Jacobowitz made an argument that this article should cite work which helped evolve the concept of equivalence (or, as I call it, the equivalence theorem).

To this end, I observed that the concept was missing in internet discussions dated 2007, Jan 11. At that time, Daniel Buck in http://www.fredmiranda.com/forum/topic/544062/ described the Brenizer method on the Fred Miranda forum (actually before Ryan Brenizer "invented" it; he did not). The effect is easily understood using equivalence (stitching effectively creates a larger sensor). Yet, the Fred Miranda discussion fails to recognize this relation and does a poor job of explaining the effect or computing its effective aperture.

Therefore, I think it is safe to assume that the equivalence theorem was discovered after January 2007. Moreover, this is a nice example of how useful the equivalence theorem actually is ...

The theorem isn't about the math involved when applying it. Not at all.

The equivalence theorem states that two images taken with two (ideal) cameras using different sized frames can be completely indistinguishable, even under the most advanced analysis methods. The two cameras then belong to a single mathematical equivalence class, hence the term. The proof involves considerations about depth of field, photon shot noise, diffraction and lens aberration, among others. It's not trivial. And if it is a corollary, a corollary to which theorem?

Direct link | Posted on Jul 8, 2014 at 13:32 UTC
On What is equivalence and why should I care? article (1763 comments in total)

18 hours ago, David Jacobowitz made an argument that this article should cite work which helped evolve the concept of equivalence (or, as I call it, the equivalence theorem).

To this end, I observed that the concept was missing in internet discussions dated 2007, Jan 11. At that time, Daniel Buck in http://www.fredmiranda.com/forum/topic/544062/ described the Brenizer method on the Fred Miranda forum (actually before Ryan Brenizer "invented" it; he did not). The effect is easily understood using equivalence (stitching effectively creates a larger sensor). Yet, the Fred Miranda discussion fails to recognize this relation and does a poor job of explaining the effect or computing its effective aperture.

Therefore, I think it is safe to assume that the equivalence theorem was discovered after January 2007. Moreover, this is a nice example of how useful the equivalence theorem actually is ...

Direct link | Posted on Jul 8, 2014 at 12:07 UTC as 157th comment | 18 replies
On What is equivalence and why should I care? article (1763 comments in total)
In reply to:

David Jacobowitz: Important article, but that no credit or mention given to Joseph James is sincerely Not Cool.

He was arguing and explaining equivalence on these very forums years ago, and has had a very comprehensive essay on the subject (more so than this piece) for as long.

I find it highly unlikely that Richard Butler did not even look at that piece:

http://www.josephjamesphotography.com/equivalence/

I don't know if anybody reads comment replies this old. Therefore, I am making my reply a new comment.

Direct link | Posted on Jul 8, 2014 at 11:41 UTC
On What is equivalence and why should I care? article (1763 comments in total)
In reply to:

Tom Caldwell: Useful article. "Equivalence" has its own complications.

Aps-c sensor users of legacy ex-slr lenses are happy with 1.5x equivalence being the crop factor. Aps-c seems to use the FF equivalence as it's standard.

In the case of focal reducers making aps-c roughly 0.71x adjustment then applying 1.5x crop factor or a combined equivalence of 1.5 x 0.71 = 1.07 compared to the nominal focal length of the lens is a standard that most users can relate to. A lens of 100mm used on aps-c will happily be thought of as 107mm equivalent. In most cases reasonably similar to the markings on the lens.

In the case of M4/3 users they look at the same lens as 100 x 0.71 or a 71mm equivalent (to 4/3 sensor use) Then of course we have to multiply by the crop factor of 2x to get yet another equivalent value of approximately 140mm for a 100mm legacy ex-slr lens used on a M4/3 camera with a focal reducer.

How this article applies to focal reducers would be a further interesting exercise.

Focal reducers or teleconverters are a trivial exercise: just apply their factor to the crop factor first.

Note, however, that focal reducers don't work with large apertures, i.e., they clip the aperture, so they cannot deliver an equivalent F/1.4 with sensors smaller than FF.
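The rule can be sketched in a few lines (names are illustrative), assuming the optic's factor simply multiplies the crop factor before equivalence is applied:

```python
def effective_crop(crop_factor, optic_factor):
    # A focal reducer (e.g. 0.71x) or teleconverter (e.g. 1.4x)
    # multiplies the crop factor; equivalence is computed afterwards.
    return crop_factor * optic_factor

k = effective_crop(1.5, 0.71)  # APS-C with a 0.71x focal reducer
# A 100mm legacy lens then frames like a ~106.5mm (call it ~107mm)
# lens on full frame, matching the combined 1.5 x 0.71 = 1.065 figure.
print(f"{100 * k:.1f}mm")
```

The same function handles the M43 case from the quote: `effective_crop(2.0, 0.71)` gives about 1.42, i.e. roughly 142mm equivalent for a 100mm lens.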

Direct link | Posted on Jul 8, 2014 at 11:23 UTC
On What is equivalence and why should I care? article (1763 comments in total)
In reply to:

Sacher Khoudari: For those interested in the math of depth-of-field: I have started an own article on this topic some weeks ago. It's not finished yet and it's only in german (I'm sorry!), but even though you may understand the formulas.

http://craesh.net/Downloads/scharfentiefe.pdf

Feedback is welcome!

Best regards
Sacher

Hi Sacher,

your article is nice. But it covers only a narrow part of what the equivalence theorem encompasses.

The equivalence theorem describes the conditions under which two images from different cameras are indistinguishable (even when measurbating them, and even if the cameras differ in sensor size). Topics besides DoF which must be covered are diffraction, photon shot noise, full well capacity and, as I keep saying, AF accuracy and lens aberrations.

Direct link | Posted on Jul 8, 2014 at 11:17 UTC
On What is equivalence and why should I care? article (1763 comments in total)
In reply to:

David Jacobowitz: Important article, but that no credit or mention given to Joseph James is sincerely Not Cool.

He was arguing and explaining equivalence on these very forums years ago, and has had a very comprehensive essay on the subject (more so than this piece) for as long.

I find it highly unlikely that Richard Butler did not even look at that piece:

http://www.josephjamesphotography.com/equivalence/

I agree DPR should have added a "Further Reading" section. But I welcome this important article nevertheless.

Btw. I am the author of the second entry in the "Related Article" section of Joseph's article ( http://www.falklumo.com/lumolabs/articles/equivalence/ ).

About the content of the article itself: I think DPR discusses "equivalent ISO" in too defensive a manner. It is key to understanding the entire concept, and it is accurate provided the two cameras' sensors use the same silicon design rules.

Direct link | Posted on Jul 7, 2014 at 17:27 UTC

@Damien, I appreciate your article and opinion. However, I think it is a bit premature and exaggerated. E.g.:

- ISO 64 may not be real, only a change of the exposure line. With bigger microlenses and no OLPF, this would mean twice the FWC, and I don't believe this to be possible with an OEM'ed sensor not available elsewhere!

- The effect of the electronic curtain on shutter blur should be measured before writing about it. I did it for the Pentax K-7 vs. K-5, so it can be done ...

- The inconsistency problems with AF fields may have been fixed, or not ...

- The inconsistencies with flash-illuminated AF may have been fixed, or not ...

- The video still uses sensor line skipping which many consider a bug. Even the recent Cybershot RX100m3 got rid of it...

- LV may still look coarse, did you check?

- LV AF may still be slow, compared to other vendors. Did you check?

The D810 may be a great camera, as is the D800. But the difference may be smaller than your hopeful mind let you anticipate.

Direct link | Posted on Jul 1, 2014 at 14:56 UTC as 13th comment | 1 reply
On Get more accurate color with camera calibration article (202 comments in total)
In reply to:

TakePictures: Not sure whether the calibrated photos look better. Colors look more vibrant, sure, but are they also more "natural"?

ColorChecker provides a target in the box; it is a standardized target, and other sources exist. The target image is produced by photographing the target.

Direct link | Posted on May 21, 2014 at 23:05 UTC
On Sony Cyber-shot DSC-RX100 III First Impressions Review preview (2979 comments in total)
In reply to:

Lederhosen: Again with the complaints about the clickless wheel? Get over it, DPR. Some people like it, others don't. Your obsession with it is blinkered, subjective, and completely out of proportion. "The wheel does not click, therefore I cannot love this camera!" Please.

I assume the complaint is less that it is clickless and more that the wheel isn't smooth in operation and the visual feedback has an annoying lag. A clickless wheel could be fun to use, but the one on the RX100m2 isn't.

Direct link | Posted on May 19, 2014 at 17:21 UTC

Impressive. Their existing mobile app is worth a trial too. It works great, with stunning results. And it is free.

Most importantly: they seem to develop their own core SfM (structure from motion) engine and, as far as I can see, it is the only one able to run on an iPhone with short rendering times and real-time keypoint detection.

It may not yield the resolution of some of the server-based "big brothers". But the difference isn't huge.

My kudos to the developers!

Direct link | Posted on May 12, 2014 at 08:51 UTC as 4th comment | 1 reply
On Get more accurate color with camera calibration article (202 comments in total)
In reply to:

TakePictures: Not sure whether the calibrated photos look better. Colors look more vibrant, sure, but are they also more "natural"?

Maybe your monitor would need calibration too ;)

But the point is: color calibration is for RAW shooters, it creates a much better starting point in post processing (calibrated, natural colours) than the converter's default. Moreover, your creative color work becomes camera-independent. BTW, Lightroom supports ColorChecker calibration via its own profile tool.

Direct link | Posted on Apr 28, 2014 at 09:15 UTC
On Lytro announces Illum light field camera article (346 comments in total)
In reply to:

falconeyes: I am going to write a blog article about this. I figured out that Lytro light field and Canon dual pixel are just special cases of a more generic class of subpixel sensor cameras. Neither Canon nor Lytro have hit the sweet spot yet though ...

A side effect may be that Canon's patent on dual pixel AF is void.

Moreover, it probably does mean that Canon's dual pixel AF causes diffraction problems at very high F-stops. Something worth studying ;)

I started a thread about this:

-> http://www.dpreview.com/forums/post/53547381

Direct link | Posted on Apr 23, 2014 at 12:05 UTC
On Lytro announces Illum light field camera article (346 comments in total)

I am going to write a blog article about this. I figured out that Lytro light field and Canon dual pixel are just special cases of a more generic class of subpixel sensor cameras. Neither Canon nor Lytro have hit the sweet spot yet though ...

A side effect may be that Canon's patent on dual pixel AF is void.

Moreover, it probably does mean that Canon's dual pixel AF causes diffraction problems at very high F-stops. Something worth studying ;)

Direct link | Posted on Apr 22, 2014 at 23:41 UTC as 39th comment | 4 replies