Ron A 19: Cool! First S-system f2.0? I'm guessing a new Otus 85mm will blow this away though (for less $).
An 85/1.4 Otus would likely be just as expensive and not AF.
Look at the price difference between conventional 50/1.4 and 85/1.4 ...
Krich13: Wait a minute. A 100 mm MF lens whose angle of view is equivalent to that of 80 mm on FF? That means the "crop factor" is only 0.8? Then the equivalent aperture of this f/2 lens is just f/1.6; how is it a "similar proposition" to an 85/1.2 lens?
I confirm, Damien should correct the mistake in his article.
With the 0.8 crop factor, 100/2 is equivalent to exactly 80/1.6. Its physical aperture diameter is 50 mm, the same as an 85/1.7 would have, so it is much more an 85/1.8 than an 85/1.4, let alone an 85/1.2.
However, w.r.t. optical quality it should perform like the Zeiss Otus, which has only a 39 mm aperture.
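The equivalence arithmetic above is easy to check in a few lines (a minimal sketch, assuming the 0.8 crop factor of 44×33 mm medium format relative to 36×24 mm full frame):

```python
# Check of the equivalence numbers: a 100/2 lens on a 0.8-crop
# medium-format sensor vs. full frame. The 0.8 factor is an assumption.
crop = 0.8

focal = 100.0  # mm
fstop = 2.0

equiv_focal = focal * crop  # angle-of-view equivalent on FF
equiv_fstop = fstop * crop  # DoF / total-light equivalent
pupil = focal / fstop       # physical aperture diameter, mm

print(equiv_focal, equiv_fstop, pupil)  # 80.0 1.6 50.0
```

The 50 mm pupil is the same as that of an 85/1.7 (85/1.7 ≈ 50 mm), which is why the lens compares to an 85/1.8 rather than an 85/1.2.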
I hope the market of Asian female dSLR shooters is big enough.
Your formula for B can be further simplified:
B = Ø / w1
where Ø = f / N is the physical aperture diameter in mm, i.e., B is the ratio of lens diameter to subject size. It sounds almost trivial expressed this way, and it reinforces the sensor-size independence of the argument. It is part of the equivalence theorem too (equivalent cameras produce equal images and have equal Ø).
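Expressed in code (a sketch; the subject width w1 and the example numbers are illustrative assumptions, not taken from the article):

```python
# B = Ø / w1, with Ø = f / N: the ratio of the physical aperture
# diameter to the subject size. Example numbers are made up.
f = 85.0    # focal length, mm
N = 1.8     # f-number
w1 = 500.0  # subject width, mm (roughly a head-and-shoulders portrait)

diameter = f / N   # physical aperture Ø
B = diameter / w1

print(round(diameter, 1), round(B, 3))  # 47.2 0.094
```

Being a pure ratio of two physical lengths, B indeed carries no reference to sensor size.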
Jahled: The sad part of this is that the monkey almost certainly didn’t press the button, the photographer did. To generate more interest in the photo it was sold to the media on the idea of it being a “selfie” by the animal, and unfortunately that strategy has somewhat backfired…I would imagine he’s a bit regretful now, but then the photo would not be so well known if he had not taken that approach…
@Albert Silver, I remember the story where a nature photographer had an encounter with a brown bear who took his camera with tripod and detached the lens (imitating the photographer changing the lens). Unfortunately, the brown bear missed the subtle part of pressing the lock button. Sheer force did the job as well ;)
@Jahled, we are monkeys ourselves. And in many IQ tests, some monkeys outperform humans (like identifying shapes not belonging to the same class as other shapes shown). As for pressing a camera trigger and chimping, I'd say they perform the same as a three-year-old human child. It may depend on the species of monkey, of course.
I'd even say it's feasible to educate a Chimpanzee to do street photography. Which may be an interesting project in itself. And the human educator / photographer would maintain copyright because he staged it (had the concept).
falconeyes: I fear the editors at DPR missed the decisive sentence in the 1222-page document from which all else follows:
>>> The copyright law only protects “the fruits of intellectual labor” that “are founded in the creative powers of the mind. ... Because copyright law is limited to “original intellectual conceptions of the author,” the Office will refuse to register a claim if it determines that a human being did not create the work. <<<
Therefore, the simple question isn't whether a monkey created the photograph. It is whether there is "an original intellectual conception" behind the photograph.
The "monkey taking a photograph" example only applies in the simple case of an accidental image.
Here, one must determine if the selfie results from "an intellectual conception" by Mr. Slater, i.e., did he plan and evolve the situation in order to obtain selfies.
From what Mr. Slater reported elsewhere, he did not. It was an unplanned accidental event. No copyright then. But it is the back story which decides.
So many replies, yet most miss a single word: CONCEPTION.
Is this so difficult to understand that the US copyright office really needed 1222 pages to explain it, and people are still confused?
Slater reported he had no concept of obtaining those images. He could have reported otherwise, but he didn't. Case closed.
This otherwise great product offering highlights pretty well the problems of the interchangeable lens idea for a very compact design:
The A5100+16-50/3.5-5.6 vs. a Sony RX100m3:
- is still heavier (400 g vs. 290 g)
- is still thicker (~88 mm vs. 58 mm)
- has the same zoom range but less light gathering (when both are compared in equivalent terms)
- has inferior optical quality (according to reports I read)
- lacks an EVF
- is only a tad cheaper ($700 vs. $800)
- has about the same video spec, incl. the missing mic port.
I understand the A5100 would sing with a prime or tele lens. But then portability isn't the primary design goal anymore. If it is, E mount cannot beat the RX line, obviously.
The video is available from the "official research page" (clicking the link below the video, links right to the video).
SfM, or "structure from motion", is an emerging technology with many applications, one of which is this. There are quite a few research teams in the field, and a few commercial tools. Note, however, that reconstructing an entire video in HD quality is a very computationally intense procedure, and prone to artefacts too.
Nevertheless, this work has some very interesting aspects. For instance, it can deal with movements in the reconstructed 3D geometry. That's something other algorithms have severe trouble with.
Oh boy, never use a programmed trigger to take a photo, because no indirection seems to be allowed if you want to keep the copyright ;)
falconeyes: I wish people would stop confusing light field and sensor array cameras.
A light field camera à la Lytro ignores some physics (known as plenoptics), which severely limits the usability of such devices and makes them unsuitable for the consumer market. Unless you are in search of venture capital and an exit scenario ...
A sensor array camera à la Pelican is the de-facto standard for future high-end smartphone cameras and is parameterized to operate in the usable range for the consumer market (about 9-36 lenses, each large enough to deliver sufficient resolution while diffraction limited).
This is like day and night. DPR, please stop calling the Pelican approach light field. It is not, even though a limited range for refocussing is an inherent property of camera arrays (you can refocus with shallow DoF determined by the array diameter within the *focussed* DoF of a single lens within the array).
Roland, I didn't mean you. I meant the article's author who seems to mean that everything needs to refer to Lytro. ;)
Lytro uses many more rays per pixel; it's been analyzed by university labs. More importantly, the effective aperture per ray is MUCH smaller than in a camera array. The narrower the cone, the more blurred the pixel (diffraction), until the cone becomes a ray. Simply speaking. I simply do not like it when people treat all these things as equal.
That would be a nice camera (maybe too shallow DoF for most applications though when using the full array), but wouldn't be a plenoptical camera. A plenoptical camera array would need much smaller individual lens apertures to capture the 4D light field, such that diffraction would make people refrain from buying. In simple language, your camera would capture light cones rather than light rays.
Some consider multi-array cameras a special case of plenoptics; I do not, thinking this is a misunderstanding of the underlying concepts.
The core idea of plenoptics is capturing the 4D light field.
The core idea of multi camera arrays is increasing the available lens surface for DSLR-like noise and DoF performance in a flat form factor fitting a smartphone.
While a plenoptical camera could indeed be built from a camera array, this is not what is intended or going to happen. Real multi-array smartphone cameras will have too few lenses to capture the light field, even though they will offer limited support for refocusability.
Mike Davis: I suspect the label "Diffraction Correction" exaggerates the effectiveness of this feature. No amount of processing can magically recreate actual subject detail that was lost to diffraction as the light passed through the aperture. It might be able to simulate what appears to be genuine subject detail, but it won't be accurate.
For example, assuming that all other variables affecting resolution are up to the task... If diffraction at a given f-Number is just bad enough to prevent you from discerning the date "2014" on the face of a coin lying on a table several meters from a camera equipped with a normal FL lens when viewing at 100%, "Diffraction Correction" isn't going to reconstruct that data from thin air when the data never got past the aperture in the first place.
You can't make a silk purse from a sow's ear.
Mike Davis and followers seem to be lost in physics, so I'd like to clarify a bit. Diffraction reduces contrast up to a certain spatial frequency and loses information beyond it. With current sensors, the point where information starts to be lost is about f/11. Up to this point, the effect of diffraction can be reverted using deconvolution. This is possible because the PSF of diffraction is known. Something similar was done for the early Hubble telescope. Note that the mathematical properties of the procedure require a clean signal, hence low ISO.
So yes, diffraction effects can be corrected for, sometimes. Same holds for other small lens aberrations. DxO does the same in their raw converter.
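A toy illustration of the idea (a sketch only: a 1-D signal, a Gaussian stand-in for the diffraction PSF, and Wiener deconvolution rather than whatever a camera firmware or DxO actually runs):

```python
# Blur a 1-D signal with a KNOWN PSF, then restore it by Wiener
# deconvolution. The Gaussian PSF and all numbers are illustrative.
import numpy as np

rng = np.random.default_rng(0)

n = 256
signal = np.zeros(n)
signal[60], signal[64], signal[150] = 1.0, 0.8, 0.5  # point sources

# Known PSF (stand-in for the diffraction Airy pattern)
x = np.arange(n) - n // 2
psf = np.exp(-(x / 3.0) ** 2)
psf /= psf.sum()

# Blur = convolution with the PSF, done in the Fourier domain
H = np.fft.fft(np.fft.ifftshift(psf))
blurred = np.real(np.fft.ifft(np.fft.fft(signal) * H))
blurred += rng.normal(0, 1e-4, n)  # very low noise, i.e. "low ISO"

# Wiener deconvolution: divide by H, regularized by the noise level
k = 1e-6
restored = np.real(np.fft.ifft(
    np.fft.fft(blurred) * np.conj(H) / (np.abs(H) ** 2 + k)))

# The peak reappears at the original position
print(np.argmax(restored))
```

As the comment says, this only works while the PSF is known and the signal is clean; raise the noise term and the restoration falls apart, because the inverse filter amplifies exactly the frequencies where the signal is weakest.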
AbrasiveReducer: Except for image quality, sounds like a great camera.
> Except for image quality, sounds like a great camera.
Except for image quality, a recent smart phone sounds like an even greater camera...
This is what *research* at the famous US lab MIT looks like? Toy projects that would be classified as "product development" in any small engineering company in any developed country?
To remind people what research is about and why it is worth spending public money on it: it is all about pushing the envelope of what is understood, known, and feasible. A copter-based flash was feasible years ago.
attomole: So f/2 = f/2 = f/2 after all; only if you want to keep quality or DoF constant between formats it isn't (and why would you want to do that?). It took hours of poring over the Joseph James article and the three Petapixel videos on the subject, but I finally came back full circle.
The total light stuff was an interesting revelation to me. The discussion and thought process on this always get bogged down in a mix of pixel count, pixel pitch, sensor size, and viewing conditions; the concept of total light captured nicely sidesteps that argument to explain the bulk effects we see regardless of pixel size and number of pixels (almost), and it is nicely illustrated in the graphics in this article.
> If I asked you to set the aperture to "F/4 equivalent", what would you do?
This isn't a hypothetical question. In a photo workshop or professional production, you WILL be asked to. And an M43 user had better KNOW to then set his camera to F/2.
On several occasions, I have actually asked for a firmware mode to operate a camera solely based on equivalent units for FL, F-stop and ISO. This would be a huge simplification for users operating multiple camera formats, or for those learning photography.
Actually, many cameras (the RX100 being an example) display FL as an equivalent unit but F-stop and ISO as non-equivalent units. I find this misleading and almost fraudulent.
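Such an "equivalent units" firmware mode boils down to a simple conversion (a sketch with the commonly quoted crop factors; the function name is made up):

```python
# Map real shooting parameters to their full-frame equivalents:
# focal length and f-stop scale with the crop factor, ISO with its
# square (so that total captured light, and hence noise, matches).
def to_ff_equivalent(focal_mm, fstop, iso, crop):
    """Return the full-frame equivalent (focal, f-stop, ISO)."""
    return focal_mm * crop, fstop * crop, iso * crop ** 2

# An M43 camera (crop 2.0) at 25 mm, f/2, ISO 100:
print(to_ff_equivalent(25, 2.0, 100, crop=2.0))  # (50.0, 4.0, 400.0)
```

Reading it backwards answers the workshop question: "F/4 equivalent" on M43 (crop 2.0) means setting the lens to F/2.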