falconeyes

Lives in Germany
Has a website at falklumo.blogspot.com
Joined on Apr 28, 2008

Comments

Total: 213, showing: 1 – 20
In reply to:

Ron A 19: Cool! First S-system f2.0? I'm guessing a new Otus 85mm will blow this away though (for less $).

An 85/1.4 Otus would likely be just as expensive, and it wouldn't have AF.

Look at the price difference between conventional 50/1.4 and 85/1.4 ...

Direct link | Posted on Aug 30, 2014 at 23:46 UTC
In reply to:

Krich13: Wait a minute. A 100 mm MF lens whose angle of view is equivalent to that of 80 mm on FF? That means the "crop factor" is only 0.8?
Then the equivalent aperture of this f/2 lens is just f/1.6, how is it a "similar proposition" to 85/1.2 lens?

I confirm, Damien should correct the mistake in his article.

100/2 is equivalent to exactly 80/1.6 and, compared against 85mm lenses, has the same 50mm aperture diameter as an 85/1.7. That makes it much more an 85/1.8 than an 85/1.4, let alone an 85/1.2.

However, w.r.t. optical quality it should perform like the Zeiss Otus, which has only a 39mm aperture.
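
A quick numeric check of the equivalence arithmetic, assuming the 0.8 crop factor from Krich13's comment (a minimal Python sketch):

    # Equivalents scale with the crop factor; the physical aperture diameter f / N does not.
    crop = 0.8
    fl_mm, f_number = 100, 2.0

    print(fl_mm * crop, f_number * crop)  # 80.0 1.6 -> equivalent to an 80/1.6 on FF
    print(fl_mm / f_number)               # 50.0     -> same aperture diameter in mm as an 85/1.7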

Direct link | Posted on Aug 30, 2014 at 23:40 UTC
On Hands-on with the Pentax K-S1 article (276 comments in total)

I hope the market of Asian female dSLR shooters is big enough.

Direct link | Posted on Aug 27, 2014 at 23:16 UTC as 109th comment | 1 reply
On Background blur and its relationship to sensor size article (17 comments in total)

Great article.

Your formula for B can be further simplified:

B = Ø / w1

where Ø = f / N is the physical aperture diameter in mm, i.e., B is the ratio of lens diameter to subject size. It sounds almost trivial when expressed this way and reinforces the sensor-size independence of the argument. It is part of the equivalence theorem too (equivalent cameras produce equal images and have equal Ø).
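
A minimal numeric sketch of the simplified formula in Python, assuming w1 is the field width at the subject plane in mm (my reading of the article's terms):

    # B = Ø / w1, where Ø = f / N is the physical aperture diameter.
    def background_blur(focal_mm, f_number, subject_field_width_mm):
        aperture_diameter = focal_mm / f_number  # Ø in mm
        return aperture_diameter / subject_field_width_mm

    # Same Ø and same framing give the same B, regardless of sensor size:
    print(background_blur(85.0, 1.8, 500.0))   # FF 85/1.8
    print(background_blur(42.5, 0.9, 500.0))   # hypothetical M43 equivalent: same Ø, same B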

Direct link | Posted on Aug 25, 2014 at 14:26 UTC as 1st comment
In reply to:

Jahled: The sad part of this is that the monkey almost certainly didn’t press the button, the photographer did. To generate more interest in the photo it was sold to the media on the idea of it being a “selfie” by the animal, and unfortunately that strategy has somewhat backfired…I would imagine he’s a bit regretful now, but then the photo would not be so well known if he had not taken that approach…

@Albert Silver, I remember the story where a nature photographer had an encounter with a brown bear that took his camera with tripod and detached the lens (imitating the photographer changing the lens). Unfortunately, the brown bear missed the subtle part of pressing the lock button. Sheer force did the job as well ;)

Direct link | Posted on Aug 22, 2014 at 22:56 UTC
In reply to:

Jahled: The sad part of this is that the monkey almost certainly didn’t press the button, the photographer did. To generate more interest in the photo it was sold to the media on the idea of it being a “selfie” by the animal, and unfortunately that strategy has somewhat backfired…I would imagine he’s a bit regretful now, but then the photo would not be so well known if he had not taken that approach…

@Jahled, we are monkeys ourselves. And in many IQ tests, some monkeys outperform humans (like identifying shapes that don't belong to the same class as other shapes shown). As for pressing a camera trigger and chimping, I'd say they perform about the same as a three-year-old human child. It may depend on the species of monkey, of course.

I'd even say it's feasible to train a chimpanzee to do street photography. Which may be an interesting project in itself. And the human trainer / photographer would retain copyright because he staged it (had the concept).

Direct link | Posted on Aug 22, 2014 at 22:47 UTC
In reply to:

falconeyes: I fear the editors at DPR missed the decisive sentence in the 1,222-page document from which all else follows:

>>> The copyright law only protects “the fruits of intellectual labor” that “are founded in the creative powers of the mind. ... Because copyright law is limited to “original intellectual conceptions of the author,” the Office will refuse to register a claim if it determines that a human being did not create the work. <<<

Therefore, the simple question isn't whether a monkey created the photograph. It is whether there is "an original intellectual conception" behind the photograph.

The "monkey taking a photograph" example only applies in the simple case of an accidental image.

Here, one must determine whether the selfie results from "an intellectual conception" by Mr. Slater, i.e., did he plan and evolve the situation in order to obtain selfies?

From what Mr. Slater reported elsewhere, he did not. It was an unplanned, accidental event. No copyright then. But it is the back story that decides.

So many replies, yet most miss a single word: CONCEPTION.

Is it really this difficult to understand? The US copyright office needed 1,222 pages to explain it, and still people are confused.

Slater reported he had no concept of obtaining those images. He could have reported otherwise, but he didn't. Case closed.

Direct link | Posted on Aug 22, 2014 at 22:41 UTC

I fear the editors at DPR missed the decisive sentence in the 1,222-page document from which all else follows:

>>> The copyright law only protects “the fruits of intellectual labor” that “are founded in the creative powers of the mind. ... Because copyright law is limited to “original intellectual conceptions of the author,” the Office will refuse to register a claim if it determines that a human being did not create the work. <<<

Therefore, the simple question isn't whether a monkey created the photograph. It is whether there is "an original intellectual conception" behind the photograph.

The "monkey taking a photograph" example only applies in the simple case of an accidental image.

Here, one must determine whether the selfie results from "an intellectual conception" by Mr. Slater, i.e., did he plan and evolve the situation in order to obtain selfies?

From what Mr. Slater reported elsewhere, he did not. It was an unplanned, accidental event. No copyright then. But it is the back story that decides.

Direct link | Posted on Aug 22, 2014 at 13:30 UTC as 59th comment | 10 replies
On Sony announces Alpha a5100 compact mirrorless camera article (106 comments in total)

This otherwise great product offering highlights pretty well the problems of the interchangeable-lens idea for a very compact design:

The A5100 + 16-50/3.5-5.6 vs. a Sony RX100m3:

- is still heavier (400g vs. 290g)
- is still thicker (~88mm vs. 58mm)
- has the same zoom range but less light gathering (when both are compared in equivalent terms)
- has inferior optical quality (according to reports I read)
- lacks an EVF
- is a tad cheaper ($700 vs. $800)
- has about the same video spec, incl. the missing mic port.

I understand the A5100 would sing with a prime or tele lens. But then portability isn't the primary design goal anymore. If it is, E-mount obviously cannot beat the RX line.

Direct link | Posted on Aug 18, 2014 at 12:11 UTC as 14th comment | 1 reply

The video is available from the "official research page" (the link below the video leads right to it).

SfM, or "structure from motion", is an emerging technology with many applications, one of which is this. There are quite a few research teams in the field, and a few commercial tools. Note, however, that reconstructing an entire video in HD quality will be a very computationally intensive procedure, and prone to artefacts too.

Nevertheless, this work has some very interesting aspects. For instance, it can deal with movement in the reconstructed 3D geometry. That's something other algorithms have severe trouble with.
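
For readers curious what the core of SfM looks like, here is a minimal two-view sketch in Python with OpenCV; the file names and camera intrinsics are hypothetical, and this is a generic illustration, not the reconstruction method of this paper:

    import cv2
    import numpy as np

    # Two frames from a video -- hypothetical file names.
    img1 = cv2.imread("frame1.jpg", cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread("frame2.jpg", cv2.IMREAD_GRAYSCALE)

    # Detect and match features between the frames.
    orb = cv2.ORB_create()
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)

    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # Assumed camera intrinsics (focal length and principal point in pixels).
    K = np.array([[1000., 0., 640.], [0., 1000., 360.], [0., 0., 1.]])

    # Essential matrix -> relative camera rotation R and translation t;
    # triangulating the matched points from here yields the sparse 3D structure.
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)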

Direct link | Posted on Aug 12, 2014 at 07:59 UTC as 42nd comment

Oh boy, never use a programmed trigger to take a photo, because apparently no indirection is allowed if you want to keep the copyright ;)

Direct link | Posted on Aug 7, 2014 at 14:05 UTC as 319th comment | 2 replies
In reply to:

falconeyes: I wish people would stop confusing light field and sensor array cameras.

A light field camera a la Lytro ignores some physics (known as plenoptics) which severely limits the usability of such devices and makes them unsuitable for the consumer market. Unless you are in search of venture capital and an exit scenario ...

A sensor array camera a la Pelican is the de-facto standard for future high-end smartphone cameras and is parameterized to operate in the usable range for the consumer market (about 9-36 lenses, all large enough to deliver sufficient diffraction-limited resolution).

This is like day and night. DPR, please stop calling the Pelican approach light field. It is not, even though a limited range of refocusing is an inherent property of camera arrays (you can refocus, with shallow DoF determined by the array diameter, within the *focused* DoF of a single lens within the array).

Roland, I didn't mean you. I meant the article's author, who seems to think that everything needs to refer to Lytro. ;)

Direct link | Posted on Aug 7, 2014 at 11:09 UTC
In reply to:

falconeyes: I wish people would stop confusing light field and sensor array cameras.

A light field camera a la Lytro ignores some physics (known as plenoptics) which severely limits the usability of such devices and makes them unsuitable for the consumer market. Unless you are in search of venture capital and an exit scenario ...

A sensor array camera a la Pelican is the de-facto standard for future high-end smartphone cameras and is parameterized to operate in the usable range for the consumer market (about 9-36 lenses, all large enough to deliver sufficient diffraction-limited resolution).

This is like day and night. DPR, please stop calling the Pelican approach light field. It is not, even though a limited range of refocusing is an inherent property of camera arrays (you can refocus, with shallow DoF determined by the array diameter, within the *focused* DoF of a single lens within the array).

Lytro uses many more rays per pixel; this has been analyzed by university labs. More importantly, the effective aperture per ray is MUCH smaller than in a camera array. A narrower cone means a more diffraction-blurred pixel, and the cone becomes a ray, simply speaking. I just don't like it when people treat all these things as equal.

Direct link | Posted on Aug 4, 2014 at 21:09 UTC
In reply to:

falconeyes: I wish people would stop confusing light field and sensor array cameras.

A light field camera a la Lytro ignores some physics (known as plenoptics) which severely limits the usability of such devices and makes them unsuitable for the consumer market. Unless you are in search of venture capital and an exit scenario ...

A sensor array camera a la Pelican is the de-facto standard for future high-end smartphone cameras and is parameterized to operate in the usable range for the consumer market (about 9-36 lenses, all large enough to deliver sufficient diffraction-limited resolution).

This is like day and night. DPR, please stop calling the Pelican approach light field. It is not, even though a limited range of refocusing is an inherent property of camera arrays (you can refocus, with shallow DoF determined by the array diameter, within the *focused* DoF of a single lens within the array).

That would be a nice camera (though maybe with too shallow a DoF for most applications when using the full array), but it wouldn't be a plenoptic camera. A plenoptic camera array would need much smaller individual lens apertures to capture the 4D light field, such that diffraction would make people refrain from buying. In simple language, your camera would capture light cones rather than light rays.

Direct link | Posted on Aug 4, 2014 at 18:08 UTC
In reply to:

falconeyes: I wish people would stop confusing light field and sensor array cameras.

A light field camera a la Lytro ignores some physics (known as plenoptics) which severely limits the usability of such devices and makes them unsuitable for the consumer market. Unless you are in search of venture capital and an exit scenario ...

A sensor array camera a la Pelican is the de-facto standard for future high-end smartphone cameras and is parameterized to operate in the usable range for the consumer market (about 9-36 lenses, all large enough to deliver sufficient diffraction-limited resolution).

This is like day and night. DPR, please stop calling the Pelican approach light field. It is not, even though a limited range of refocusing is an inherent property of camera arrays (you can refocus, with shallow DoF determined by the array diameter, within the *focused* DoF of a single lens within the array).

Some consider multi-camera arrays a special case of plenoptics; I do not, thinking this is a misunderstanding of the underlying concepts.

The core idea of plenoptics is capturing the 4D light field.

The core idea of multi-camera arrays is increasing the available lens surface for DSLR-like noise and DoF performance in a flat form factor fitting a smartphone.

While a plenoptic camera could indeed be built from a camera array, this is not what is intended or going to happen. Real multi-array smartphone cameras will have too few lenses to capture the light field, even though they will offer limited support for refocusability.

Direct link | Posted on Aug 3, 2014 at 15:36 UTC

I wish people would stop confusing light field and sensor array cameras.

A light field camera a la Lytro ignores some physics (known as plenoptics) which severely limits the usability of such devices and makes them unsuitable for the consumer market. Unless you are in search of venture capital and an exit scenario ...

A sensor array camera a la Pelican is the de-facto standard for future high-end smartphone cameras and is parameterized to operate in the usable range for the consumer market (about 9-36 lenses, all large enough to deliver sufficient diffraction-limited resolution).

This is like day and night. DPR, please stop calling the Pelican approach light field. It is not, even though a limited range of refocusing is an inherent property of camera arrays (you can refocus, with shallow DoF determined by the array diameter, within the *focused* DoF of a single lens within the array).
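
To put rough numbers on that last point, a sketch using the thin-lens DoF approximation with made-up smartphone-scale values:

    # DoF shrinks with aperture diameter: one tiny array lens has deep DoF,
    # while the synthetic aperture of the whole array is much shallower.
    def dof_mm(focal_mm, aperture_diameter_mm, subject_mm, coc_mm=0.002):
        # Approximate total depth of field (thin lens, subject distance >> focal length).
        n = focal_mm / aperture_diameter_mm  # working f-number
        return 2 * n * coc_mm * subject_mm**2 / focal_mm**2

    f = 4.0  # hypothetical lenslet focal length, mm
    print(dof_mm(f, 1.5, 1000.0))   # single lenslet (1.5 mm diameter): ~667 mm of DoF
    print(dof_mm(f, 10.0, 1000.0))  # full array as synthetic aperture (10 mm): ~100 mm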

Direct link | Posted on Jul 29, 2014 at 11:32 UTC as 12th comment | 10 replies
In reply to:

Mike Davis: I suspect the label "Diffraction Correction" exaggerates the effectiveness of this feature. No amount of processing can magically recreate actual subject detail that was lost to diffraction as the light passed through the aperture. It might be able to simulate what appears to be genuine subject detail, but it won't be accurate.

For example, assuming that all other variables affecting resolution are up to the task... If diffraction at a given f-Number is just bad enough to prevent you from discerning the date "2014" on the face of a coin lying on a table several meters from a camera equipped with a normal FL lens when viewing at 100%, "Diffraction Correction" isn't going to reconstruct that data from thin air when the data never got past the aperture in the first place.

You can't make a silk purse from a pig's ear.

Mike Davis and followers seem to be lost in the physics, so I'd like to clarify a bit. Diffraction reduces contrast up to a certain spatial frequency and loses information beyond it. With current sensors, the point where information starts to be lost is about F/11. Up to that point, the effect of diffraction can be reverted using deconvolution. This is possible because the PSF of diffraction is known. Something similar was done for the early Hubble telescope. Note that the mathematical properties of the procedure require a clean signal, hence low ISO.

So yes, diffraction effects can be corrected for, sometimes. The same holds for other small lens aberrations. DxO does the same in their raw converter.
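
A minimal sketch of the idea in Python, substituting a Gaussian for the true Airy-pattern PSF and Richardson-Lucy deconvolution for whatever the manufacturer actually implements:

    import numpy as np
    from scipy.ndimage import gaussian_filter
    from skimage.restoration import richardson_lucy

    def gaussian_psf(sigma, size=9):
        # Stand-in for the known diffraction PSF at a given f-number.
        ax = np.arange(size) - size // 2
        xx, yy = np.meshgrid(ax, ax)
        psf = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
        return psf / psf.sum()

    rng = np.random.default_rng(0)
    image = rng.random((128, 128))               # stand-in for a clean, low-ISO frame
    blurred = gaussian_filter(image, sigma=1.5)  # simulate diffraction blur

    # Deconvolution with the known PSF restores contrast below the cutoff
    # frequency; it cannot recreate detail that never passed the aperture.
    restored = richardson_lucy(blurred, gaussian_psf(1.5))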

Direct link | Posted on Jul 23, 2014 at 18:50 UTC
On Nikon 1 V3 First Impressions Review preview (624 comments in total)
In reply to:

AbrasiveReducer: Except for image quality, sounds like a great camera.

> Except for image quality, sounds like a great camera.

Except for image quality, a recent smart phone sounds like an even greater camera...

Direct link | Posted on Jul 17, 2014 at 12:32 UTC
On Drone lighting could be coming soon to your studio article (129 comments in total)

This is what *research* at MIT, the most famous US lab, looks like? Toy projects that would be classified as "product development" in any small engineering company in any developed country?

To remind people what research is about and why it is worth spending public money on it: it is all about pushing the envelope of what is understood, known and feasible. A copter-based flash was feasible years ago.

Direct link | Posted on Jul 17, 2014 at 09:15 UTC as 47th comment | 2 replies
On What is equivalence and why should I care? article (2004 comments in total)
In reply to:

attomole: So f2 = f2 = f2 after all; only if you want to keep quality or DOF constant between formats it isn't (and why would you want to do that). It took hours of poring over the Joseph James article and the three Petapixel videos on the subject, but I finally came back full circle.

The total light stuff was an interesting revelation to me. The discussion on this always gets bogged down in the mix of number of pixels, pixel pitch, sensor size and viewing conditions. The concept of total light captured nicely sidesteps that argument to explain the bulk effects we see regardless of pixel size and number of pixels (almost), and is nicely illustrated in the graphics in this article.

> If I asked you to set the aperture to "F/4 equivalent", what would you do?

This isn't a hypothetical question. In a photo workshop or professional production, you WILL be asked this. And an M43 user had better KNOW to then set his camera to F/2.

On several occasions, I have actually asked for a firmware mode to operate a camera solely based on equivalent units for FL, f-stop and ISO. This would be a huge simplification for users operating multiple camera formats, or learning photography.

Actually, many cameras (the RX100 being an example) display FL as an equivalent unit but f-stop and ISO as non-equivalent units. I find this misleading and almost fraudulent.
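
Such a firmware mode would boil down to three multiplications by the crop factor; a minimal sketch (the function and numbers are mine):

    # Convert native settings to their full-frame equivalents via the crop factor.
    def to_ff_equivalent(focal_mm, f_number, iso, crop_factor):
        return (focal_mm * crop_factor,    # equivalent focal length (same angle of view)
                f_number * crop_factor,    # equivalent f-stop (same DoF, same total light)
                iso * crop_factor**2)      # equivalent ISO (same noise, to first order)

    # An M43 shooter asked for "85mm at F/4 equivalent" sets 42.5mm at F/2:
    print(to_ff_equivalent(42.5, 2.0, 200, 2.0))  # -> (85.0, 4.0, 800)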

Direct link | Posted on Jul 9, 2014 at 13:59 UTC