
Lytro's Ren Ng sheds some light on the company's ambitions

By Richard Butler on Aug 18, 2011 at 22:42 GMT

Lytro's announcement that it will be launching a plenoptic 'light field' camera, which allows images to be re-focused after they've been taken, was met with equal measures of interest and skepticism. Keen to find out more, we spoke to the company's founder and CEO, Ren Ng, to hear just what he has planned and how close the company is to a product.

The first thing to understand, he stressed, is how the system works: an array of microlenses is placed a short distance in front of the imaging sensor, so that each point of light arriving through the lens is spread across multiple photosites according to the angle it arrived from. This information, captured in a single exposure, makes it possible to render images as if they'd been focused at different distances. The company says it will begin selling such a device before the end of the year (2011). Not only would such a device be able to produce re-focusable images, it also wouldn't need focusing at the point of shooting.
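To make the mechanism a little more concrete, here is a minimal sketch of the 'shift-and-add' style of synthetic refocusing that this kind of capture enables, written in Python with NumPy. The 4D array layout, the `slope` parameter and the function name are illustrative assumptions for the sketch, not Lytro's actual data format or processing engine.

```python
# A minimal sketch of shift-and-add refocusing over a 4D light field.
# lf[u, v, y, x]: angular indices u, v; spatial indices y, x (assumed layout).
import numpy as np

def refocus(lf: np.ndarray, slope: float) -> np.ndarray:
    """Render one 2D image from a 4D light field.

    Each sub-aperture view is translated in proportion to its angular
    offset from the centre of the lens and the results are averaged;
    varying `slope` moves the synthetic focal plane.
    """
    n_u, n_v, height, width = lf.shape
    cu, cv = (n_u - 1) / 2.0, (n_v - 1) / 2.0
    out = np.zeros((height, width), dtype=np.float64)
    for u in range(n_u):
        for v in range(n_v):
            dy = int(round(slope * (u - cu)))   # vertical shift for this view
            dx = int(round(slope * (v - cv)))   # horizontal shift for this view
            out += np.roll(lf[u, v], shift=(dy, dx), axis=(0, 1))
    return out / (n_u * n_v)

# Toy usage: a random 9x9 angular grid of 64x64 views.
lf = np.random.rand(9, 9, 64, 64)
near = refocus(lf, slope=1.5)
far = refocus(lf, slope=-1.5)
```

Varying `slope` sweeps the synthetic focal plane nearer or farther; with `slope=0` the views are simply averaged, reproducing the focus the main lens was set to.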

"Our vision is a product that allows people to
shoot and share very simply"

The first device will be aimed at the consumer end of the market, says Ng, explaining that the company is targeting: 'people who really like to have fun with pictures and share them with friends and family. Our vision is a product that allows people to shoot and share very simply.' And this product is not far from becoming a reality, he says: 'The product will be out in 2011 and priced competitively for a consumer product. It's already in the hands of photographers.' (Of the people shooting the samples on the company's website, only Eric Cheng, its director of photography, is an employee).

Despite the consumer focus for the first product, Ng believes the nature of the technology means this won't just entail people pointing and shooting: 'We're looking at someone really interested in what photography means, who wants to experiment with the capabilities of this new approach, and wants to explore and enjoy the artistic possibilities of working with a new medium.'

Sharing the experience

'Light field photography creates a fundamentally different type of data. When we moved from film to digital it made all sorts of changes to what we could do with photographs, but we were still collecting essentially the same 2D data that we always had been, right back to the days of the daguerreotype. There are opportunities as an artistic process for people to experiment and be creative. The type of data is very resonant with that - you can create an image and invite the viewers to explore the picture. There are opportunities in terms of crafting and posing pictures in a way that gives a sense of discovery to the viewer. A sense of discovering a story for themselves.'

The first product's focus will be on making this capability accessible and easy to share, he says: 'Five years ago, this would have been impossible. It's only the development of web infrastructure, technologies such as Flash and HTML5 that allow us to program the interaction through an internet browser without having to download or install additional software. That's what powers the experience of our product, just as much as the instant shutter, instant focus or any of the other benefits.'

"The end user gets the full 'living picture'
experience without onerous downloads"

'The software to convert the captured 'light field' into an image, which we're calling the Light Field Engine, is in the camera. It is installed on your PC as well and, when you share your images through social networks, mobile devices or all the other places people share images, the Light Field Engine goes with the picture, so that the end user gets the full 'living picture' experience without onerous downloads.'

What about those samples?

He notes the concerns expressed about the samples that have already been shown on the web, explaining that, while they show how shareable the images can be, they are not representative of the camera's full capabilities: 'The ability to focus after-the-fact is fully continuous - you can focus at any depth. There are two factors that make this less apparent in the samples. The first is the tendency in photography for depth to appear compressed, so objects of similar distances appear together [as they do when you shoot a portrait with a long focal length, as an extreme example]. Depending on composition and arrangement of subjects, there may only be two or three significant depths within an image. Also the way we've packaged the data for easy viewing on the internet has an effect. It's not the full light field you're seeing - it's a subset to make it more portable. It's analogous to comparing the Raw data that an enthusiast photographer might take, with the small, compressed JPEG that Facebook might serve up if you view it on your smart phone.'
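As a rough illustration of how a 'subset packaged for easy viewing' could still support click-to-refocus, here is a hypothetical sketch in which an export carries a handful of pre-rendered focal slices plus a per-pixel index into them, so a click simply selects the best-matching slice. The class and field names are invented for the example and do not describe Lytro's actual web format.

```python
# Hypothetical packaged subset: a few focal slices plus a per-pixel depth index.
import numpy as np

class LivingPictureSubset:
    def __init__(self, slices: np.ndarray, depth_index: np.ndarray):
        # slices: (n_slices, height, width) pre-rendered images at fixed focus depths
        # depth_index: (height, width) int array mapping each pixel to the slice
        # in which it appears sharpest
        self.slices = slices
        self.depth_index = depth_index

    def refocus_at(self, y: int, x: int) -> np.ndarray:
        """Return the slice whose focal plane best matches the clicked pixel."""
        return self.slices[int(self.depth_index[y, x])]

# Toy usage: three focal slices of a 120x160 scene.
slices = np.random.rand(3, 120, 160)
depth_index = np.random.randint(0, 3, size=(120, 160))
picture = LivingPictureSubset(slices, depth_index)
shown = picture.refocus_at(60, 80)  # what a click at (60, 80) would display
```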

Also, while he explains that the sample images come from devices taken from the production line, they are not yet final: 'The devices themselves look very close to final on the outside, but the hardware internals, software and image quality are not production standard yet,' he says.

"The 0.1MP resolution we were producing then is not
consumer-ready, so we've come a long way from
there to make a commercializable product"

This is a long way beyond where things stood when dpreview last spoke to him (in 2005), when Ng had adapted a 16MP medium format camera to produce 900,000-pixel images: 'An important thing to note is that at that stage of development, the focus was on: "how do we take a multi-camera array and miniaturize it to a single device?" The results at that time were not anywhere near commercializable. It was a scientific breakthrough we were working towards. The next step we've been working on has been making a commercial breakthrough. The 0.1MP resolution we were producing then is not consumer-ready, so we've come a long way from there to make a commercializable product that can sell in the highest volumes. And doing that has required making a product that makes it easy to share the results on the internet. If you look at the way people use pictures, the vast majority of pictures are on the web.'

More creativity to come

The initial software won't allow a great deal of post-shot editing, he explains: 'At first we'll be making those decisions for the user - so that we can make the process as simple as possible but, further down the line, we'll provide tools to give more control over the final output. It's important to understand that Lytro's camera will record full light fields from day one, and folks will be able to do more and more with those same files as the software grows into the future. It's a bit like DSLR shooters working with the initial Raw formats: the new editing features you could achieve with those Raw files increased over time as software support matured.'

"We're very keen to see light field images develop
through an ecosystem of software"

'We're very keen to see light field images develop through an ecosystem of software, to allow people to share images and edit images, as with normal, 2D images. We're producing a format with an API to provide developer access to the format's capabilities. Kurt Akeley, our Chief Technology Officer, used to work for SGI, where he invented the OpenGL API, so we've got some truly world class experience in this sort of thing.'

'It's not going to just drop into existing software, it's going to require a bit of work - it's richer data with greater possibilities. The light field, when turned into pictures, is 2D, but there are opportunities to work on light fields directly to access their full possibility. Tapping the full potential is a huge opportunity, and paves the road for a great deal of exciting R&D.'

"In the same way that Polaroid changed the market -
it brought an immediacy and shareability to photography"

But, even at his most positive, Ng says he doesn't expect light field photography to replace conventional, 2D photography: 'The folks at dpreview are not going to replace all their cameras with our first product. But, once it comes and opens up all these other capabilities, I think they're going to be enchanted by what it represents for photography. It provides new opportunities - allowing you to create compositions that tell a story in a way you never could before. They're going to keep their existing tools but add this as well. In the same way that Polaroid changed the market - it brought an immediacy and shareability to photography, but that wasn't at the expense of conventional film photography, it was in addition.'

Over time, he believes, people will find additional creative options in the images. For example, some enthusiast photographers discuss the quality of the out-of-focus regions of their photographs, which is influenced by the complexity of the design of the lens used to shoot the image. This could be an area people want to experiment in, Ng proposes: 'At the beginning, the out-of-focus region will look as it would through an optical viewfinder. For people who want to shape their bokeh, this could be the thing that really interests them. The ability to control your bokeh after the fact could be another example of creative control on the editing side. The photographic possibilities will explode as people experiment with this sort of thing.'
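As a speculative sketch of what 'controlling your bokeh after the fact' could mean computationally, the shift-and-add refocusing idea shown earlier can be extended by weighting the angular samples with a synthetic aperture mask; changing the mask's shape changes how the out-of-focus regions render. The function, parameters and mask shapes below are assumptions for illustration, not a description of Lytro's software.

```python
# Refocusing with a weighted synthetic aperture: the mask over the angular
# (u, v) samples plays the role of the lens aperture, so its shape sets the bokeh.
import numpy as np

def refocus_with_aperture(lf: np.ndarray, slope: float, mask: np.ndarray) -> np.ndarray:
    """Like plain shift-and-add refocusing, but each (u, v) view is weighted by mask[u, v]."""
    n_u, n_v, height, width = lf.shape
    cu, cv = (n_u - 1) / 2.0, (n_v - 1) / 2.0
    out = np.zeros((height, width), dtype=np.float64)
    for u in range(n_u):
        for v in range(n_v):
            dy = int(round(slope * (u - cu)))
            dx = int(round(slope * (v - cv)))
            out += mask[u, v] * np.roll(lf[u, v], shift=(dy, dx), axis=(0, 1))
    return out / mask.sum()

# Example: a round synthetic aperture versus a ring (donut-shaped bokeh).
n = 9
uu, vv = np.meshgrid(np.arange(n) - 4, np.arange(n) - 4, indexing="ij")
r = np.hypot(uu, vv)
circular = (r <= 4).astype(float)
ring = ((r >= 2.5) & (r <= 4)).astype(float)
lf = np.random.rand(n, n, 64, 64)
soft = refocus_with_aperture(lf, slope=1.0, mask=circular)
donut = refocus_with_aperture(lf, slope=1.0, mask=ring)
```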

Pushing sensor technology

And, if it achieves the level of success he and the company are hoping for, he says he can envisage light field cameras influencing sensor technology: 'As well as a scientific and commercial breakthrough, this could cause a technological breakthrough. We've got to the stage where we're seeing 14-16MP sensors for compacts and 20-24MP in larger sensors. It's not technological limitations that are defining that figure, it's a marketing-driven progression. When we went from VGA to 1MP to 4MP sensors, that was technology growth.'

"You could in theory make a sensor with
hundreds of millions of pixels"

'Growth in that underlying industry capacity hasn't stopped, there's just no demand for it. With 14MP, for print or web use, those are enormous images, so there's no great pressure to move on from there. But if you applied the technology being developed for mobile phone cameras and applied it to an APS-C sensor, you could in theory make a sensor with hundreds of millions of pixels - an order of magnitude beyond what we're currently seeing. With such a sensor in a light field camera, we'd be able to measure hundreds of millions of rays of light. Light field technology can utilize and re-invigorate amazing growth in density of sensors.'
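A quick back-of-the-envelope check of that 'hundreds of millions of pixels' figure, assuming an APS-C sensor of roughly 23.6 x 15.7 mm and a mobile-phone-class pixel pitch of about 1.4 µm (both values are assumptions for the estimate, not figures from the interview):

```python
# Rough estimate of pixel count if a phone-class pixel pitch were applied to APS-C.
aps_c_width_mm, aps_c_height_mm = 23.6, 15.7   # assumed APS-C dimensions
pixel_pitch_um = 1.4                            # assumed mobile-phone pixel pitch

pixels_wide = aps_c_width_mm * 1000 / pixel_pitch_um
pixels_high = aps_c_height_mm * 1000 / pixel_pitch_um
total_mp = pixels_wide * pixels_high / 1e6

print(f"{pixels_wide:.0f} x {pixels_high:.0f} ~= {total_mp:.0f} MP")
# ~16857 x 11214 ~= 189 MP, i.e. on the order of hundreds of millions of photosites
```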

And the lower output resolution of light field cameras, compared to conventional ones, could be a real benefit: 'Light field technology is inherently more capable in low light - we can shoot wide-open with apertures larger than make any sense for conventional photography. And we're not just trying to make enormous pictures. One dead or noisy pixel in conventional photography is expected to result in one output pixel in the final image. In light field photography it translates to a dead 'ray' which won't have as much impact on the final output - the sensitivity to defects from the sensor will go down.'
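A tiny numerical illustration of that defect-tolerance argument: when each output pixel averages many captured rays, one dead photosite barely moves the result, whereas a one-photosite-per-pixel sensor would lose that pixel entirely. The sample counts below are made up for demonstration.

```python
# One dead sample among many averaged rays has a small effect on the output pixel.
import numpy as np

rays = np.full(81, 0.5)   # e.g. 9x9 angular samples contributing to one output pixel
rays[0] = 0.0             # one dead photosite / dead "ray"
print(np.mean(rays))      # ~0.4938: roughly a 1.2% error in that output pixel
# A conventional sensor mapping one photosite to one output pixel would instead
# record 0.0 at that location: a total loss of that pixel.
```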

Going it alone

Trying to go to market with a product based on a fundamentally different technology may sound ambitious for a camera company nobody had heard of two months ago, but Ng is unfazed by the challenge: 'We feel we're better placed to bring the full benefits of this technology. It's a transformational technology, it needs a transformational product. If you look at most digital cameras, they're very good but they've come about as a result of a series of incremental changes to the previous technology. Trying to do this as an incremental change to an existing technology would rob the consumer of many of the most disruptive benefits.'

"We feel we're better placed to bring the full
benefits of this technology"

There's another reason for producing the camera themselves, Ng says: 'because we can build this kind of company today. Ten years ago [doing all this themselves] would have been impossible but, as with the advances in web infrastructure that make the pictures sharable, there have been great advances in manufacturing and distribution that make it possible for a new company to do this. In the past, to get the message out, you'd have needed to buy an ad during the Super Bowl - which is a very expensive thing to do and doesn't get your message to the right people. The web has made it so much easier in terms of localizing the message. Just the pictures we've posted, spreading out across the web, have generated so much interest.'

Comments

The A-Team
By The A-Team (Aug 24, 2011)

I'm very excited to see where this technology goes, but I have mixed feelings. After all, focus points are core to the challenge and experience of photography! On the other hand, removing this challenge would open a lot of creative avenues. More thoughts here: http://www.aputure.com/blog/?p=2447

1 upvote
lylejk
By lylejk (Aug 23, 2011)

Other companies exist now, but the price of their cameras is outrageous to say the least. How much is this so-called consumer camera going to cost? That's what I want to know. Still, I do believe this is the future of photography; no more lost captures. Also, the ability to create 3D models on the fly and so many other benefits - again, as the article describes, this is a game changer, that's for sure. :)

0 upvotes
falconeyes
By falconeyes (Aug 23, 2011)

A game changer like cold fusion, you mean? ;)

0 upvotes
LaFonte
By LaFonte (Aug 23, 2011)

There are some overly optimistic ideas in the posts, but no, you cannot market this to movie studios, because you are light years from developing such an idea into 2K or 4K video on a full-frame slice.
The idea presented by Ng is interesting, but at this stage of workable densities it is a mere curiosity.
You would need a far higher density of wells on an APS-C sized sensor to keep up with the resolution of today's (or even yesterday's) cameras, which would be beyond operability (you would mostly receive noise due to huge crosstalk between wells).
So we are faced with a problem: we can choose between a much smaller resolution or a huge increase in noise just to be able to focus later... hmmm... maybe neither?

1 upvote
falconeyes
By falconeyes (Aug 22, 2011)

To everybody confused now:

Read http://optics.org/indepth/2/6/3
and http://raytrix.de/index.php/press.html?file=tl_files/downloads/Raytrix_Intro_LightField_100124_web.pdf esp. p.27

Page 27 is interesting as it explains that a plenoptic camera (read: a Lytro camera) is confined to 0.3MP (say, sub-megapixel) and, if you want to push the barriers a little bit by doing a hybrid between a light field and a conventional camera, you'll need Raytrix patents.

After reading these, there isn't much left unknown.

1 upvote
falconeyes
By falconeyes (Aug 22, 2011)

The links again (any way to make them clickable?):

http://optics.org/indepth/2/6/3

http://raytrix.de/index.php/press.html?file=tl_files/downloads/Raytrix_Intro_LightField_100124_web.pdf

0 upvotes
Craig49
By Craig49 (Aug 22, 2011)

I would be pleased to be wrong... but he comes across like a con man. Lots of hand-waving about how fine it will be...

1 upvote
Alollini
By Alollini (Aug 22, 2011)

part1
I think the role of diffraction is not as well understood here as some suggest. There are applications where diffraction data is used in science today: space interferometry, especially when using more than one telescope to simulate a bigger telescope with the diameter of the space between them.

We must not suppose there will be ONE lens in front of the microlens layer.
The image sensor, composed of a plane of light detectors and a plane of microlenses, doesn't need a lens in front, and capturing multiple planes doesn't require multiple physical planes.

The innovation resides in the what and where: instead of just having one microlens for each pixel directly behind it, the technology can measure light along with its angle, using many pixels for one lens, and the same pixels for different lenses.

0 upvotes
Alollini
By Alollini (Aug 22, 2011)

part2
In the articles I studied back in 2000, this was only for astronomy applications, and the focus was on taking one huge light field and being able to re-aperture it (not re-focus), making 10x more Z slices out of the picture of a galaxy, to map x and y at a much higher Z resolution.

0 upvotes
falconeyes
By falconeyes (Aug 22, 2011)

That's my only complaint about Mike Davis' postings: he talks about diffraction. Because Ng could in theory have explored new stuff, I prefer to talk about Heisenberg's uncertainty relation, which can't be circumvented.

An example is an array of lenses. They overcome the obstacles of diffraction-limited lenses but still obey the Heisenberg relation.

Therefore, I want to make sure everybody understands that my argument relied solely on the Heisenberg principle. Ng can only do so much. Which may be nice enough for some. But not more.

0 upvotes
jloup
By jloup (Aug 22, 2011)

I've read all the posts by Mike Davis. I'm impressed by how sure he can be of what he thinks he understands. Has he considered that the camera could work in a fundamentally different way than cameras do now? Actually, this system is not diffraction-limited in the old sense. Indeed, it uses all the information of diffraction as data.

"Multi-plane" light field sensor does NOT mean that multiple exposures are taken on actual multi-plane sensors. It means all information about the "rays" ("the field" is better), and not only the area they light up, is recorded on a single sensor.

For example, the information on the direction of a ray cannot be recovered from the "which-pixel-is-lit-up" information. With a light field this information can be recovered, thus allowing you to trace back where (which plane in the real scene) a ray comes from. Then, selecting a narrow set of planes in the final image creates a narrow depth-of-field picture; selecting a wide set creates a deep DOF picture.

0 upvotes
jloup
By jloup (Aug 22, 2011)

Sure, Mike Davis understands things better than the one who invested millions of dollars in the project (and far more on technology than on marketing).

I'm not saying his arguments are nonsense; they are very sensible as far as classical photography is concerned (by classical, I mean "single-plane"), but they may simply not apply to the kind of information recorded by this new technology. As a clue, imagine how to think "classically" with an array of micro-lenses between the lens and the sensor: 1) the effective "density" is not that of the sensor anymore, but that of the microlens array. 2) Then, multiple sensor pixels are required to get information from a single micro-lens. 3) As a consequence, it is understandable both why there is no such diffraction limitation on the sensor density itself, and why so many pixels are required: a 1-megapixel microlens array * a 1000-pixel sub-sensor per microlens = 1 gigapixel of sensor pixels, for a 1 Mpix resolution only: no diffraction problem.

1 upvote
Ivanaker
By Ivanaker (Aug 21, 2011)

Auto exposure? check
Auto fill flash? check
Auto focus? check
Auto white balance? check
Wait, we don't need auto focus anymore, we don't need to focus at all.

I just don't get one thing: it's the lens that focuses, not the sensor, so how is the sensor going to fix, let's say, 135mm f/2.0 where the subject is 7 feet away and the background is 150 feet? Nothing in the background is even close to focus when it reaches the sensor; there is just not enough data for the computer to work with.

I expect this to end up something like: a 1x1mm 1MP sensor, a 1-4mm f/11 lens, and a lot of sharpening afterwards in the computer at the selected point.

Until we get real samples, not these Flash animations, I must assume that this is all a hoax. And what's with all this secrecy?

0 upvotes
Chronis
By Chronis (Aug 21, 2011)

Who took the jam out of your donuts, Mike Davis?

One can be skeptical about something, but your post is bitter, like you don't want them to succeed...

I say best of luck.....

0 upvotes
Mike Davis
By Mike Davis (Aug 21, 2011)

:-)

I wish Mr. Ng the best of luck, too.

Quoting my opening sentence: "You're a brilliant young man, Mr. Ng, and I am genuinely impressed with your innovative technology - it will no doubt have a lasting impact on photography as we know it - but I'm compelled to point out that you would have us believe the limitations imposed by DIFFRACTION can be ignored."

I went on to quote Mr. Ng's RIDICULOUS statement that "there's just no demand for [APS-C sensors having pixel counts greater than 14 MP.]" For that and his obvious pretense that diffraction isn't an issue, yes, the remainder of my 6-part post was negative.

I want him to succeed, but his assertion that his resolution problem can be cured by using the same or higher diffraction-prone pixel densities as currently used on mobile phones borders on being a fraudulent statement - not a statement made out of ignorance - not from someone with his obvious knowledge of optics.

See falconeyes' post, below - Aug 20, 2011 at 10:44:10 GMT

0 upvotes
Mike Davis
By Mike Davis (Aug 21, 2011)

(Part 1)

You're a brilliant young man, Mr. Ng, and I am genuinely impressed with your innovative technology - it will no doubt have a lasting impact on photography as we know it - but I'm compelled to point out that you would have us believe
the limitations imposed by DIFFRACTION can be ignored:

Quoting from the interview, above:

"'Growth in that underlying industry capacity hasn't stopped, there's just no demand for it. With 14MP, for print or web use, those are enormous images, so there's no great pressure to move on from there. But if you applied the technology being developed for mobile phone cameras and applied it to an APS-C sensor, you could in theory make a sensor with hundreds of millions of pixels - an order of magnitude beyond what we're currently seeing. With such a sensor in a light field camera, we'd be able to measure hundreds of millions of rays of light. Light field technology can utilize and re-invigorate amazing growth in density of sensors."

(Continued below.)

0 upvotes
Mike Davis
By Mike Davis (Aug 21, 2011)

(Part 2)

First, let me point out that your multi-plane 'Light Field' sensor will always have a resolution less than conventional, single-plane sensors because some of your photosites are, by design, blocking the view of any photosites that could reside beneath them. But in your pipe-dream statement, above, you are arguing that your multi-plane sensor technology could ultimately achieve the same resolution as today's conventional, single-plane sensors if only manufacturers would apply the same pixel densities to APS-C (and full frame) sensors as are currently being applied to mobile phone cameras!

(Continued below.)

0 upvotes
Mike Davis
By Mike Davis (Aug 21, 2011)

(Part 3)

I hate to pop your balloon, Mr. Ng, but we will NEVER see pixel densities anywhere near those ultra-high mobile phone pixel densities in conventional APS-C (or full frame) sensors because DIFFRACTION would prevent users from selecting the f-Numbers available with current lens technology while deploying enlargement factors far greater than those we are suffering with current pixel densities. Diffraction is already discouraging conventional APS-C users from shooting at f/16 and f/22 when making unresampled 300 dpi enlargements. Would you have us always shoot wide open with the expensive lenses made for APS-C and full-frame bodies - avoiding f/4.0, f/5.6, f/8, and f/11 in addition to f/16 and f/22?

(Continued below.)

0 upvotes
Mike Davis
By Mike Davis (Aug 21, 2011)

(Part 4)

Diffraction is your enemy Mr. Ng. The 2-inch by 3-inch images seen on our mobile phones are already softened by diffraction thanks to the outrageous enlargement factors involved! It's diffraction that is preventing the use of higher pixel densities in conventional APS-C and full-frame sensors and it will be diffraction that prevents your 'Light Field' sensor from exceeding the resolutions already enjoyed by conventional APS-C and full-frame sensors. You're welcome to use higher densities as necessary in your Light Field sensor to end up at the same resolutions as current technology, but diffraction will prevent you from going any higher. What good is the infinite Depth of Field offered by your technology if it will be accompanied by diffraction that degrades the entire image, independent of subject distance?

(Continued below.)

0 upvotes
Mike Davis
By Mike Davis (Aug 21, 2011)

(Part 5)

Here's a solution to your problem and to the problem we're already up against with conventional small dimension sensors that require huge enlargement factors to make prints that exploit all those pixels: Start working on a camera that not only embodies your Light Field sensor, but also includes an ultra-fast zoom lens - one that incorporates the following f-Numbers:

f/0.125
f/0.177
f/0.250
f/0.354
f/0.500
f/0.707
f/1.000
f/1.414
f/2.000

(Continued below.)

0 upvotes
Mike Davis
By Mike Davis (Aug 21, 2011)

(Part 6)

That would give us nine stops to work with, much like an f/1.4 lens that includes f/22, and they would ALL be diffraction-free at the enlargement factors to which we are currently limited due to diffraction. Such a lens would permit the use of a sensor having pixel densities three times higher than what is currently used in APS-C and full-frame sensors, giving the user a creative choice of several combinations of shutter speed and aperture, instead of forcing us to shoot wide open, at only one aperture, to avoid diffraction - as is the case with some of today's 12 and 14 Megapixel tiny-sensored digicams.

By the way, good luck with controlling all the aberrations you'll suffer designing lenses that operate at those f-Numbers.

Mike Davis

0 upvotes
raoul821
By raoul821 (Aug 20, 2011)

I'll buy one. Too cool

0 upvotes
Dan Tong
By Dan Tong (Aug 20, 2011)

This is exciting stuff and I wish Ng and his company the greatest success in this brand new venture. I'm looking forward to getting my hand on one of these cameras as soon as it is shipping.

0 upvotes
Tee1up
By Tee1up (Aug 20, 2011)

Every time I try to understand what these folks are doing my eyes glaze over. I have a pretty strong technical/physics background but reading comments by the Lytro folks makes me feel like I slept through too many classes. I keep expecting someone to pull back the curtains and shout "Hey, they are shooting everything at f30 and then messing about with gaussian blurs!".

0 upvotes
Craig49
By Craig49 (Aug 22, 2011)

It may be that your eyes glaze over because they aren't really saying much.

0 upvotes
Skyviews
By Skyviews (Aug 20, 2011)

For all the naysayers, this is interesting stuff. Whether it comes to fruition or not is another matter. I was at RIT in the late 70's and the concept of auto focus cameras was ridiculous. Yes, we were taught the old-fashioned way and, not that I can recite the formulas anymore, we could figure out depth of field, shutter speed, ASA and aperture combinations, etc. Today there are many pros shooting auto focus, simply because it is better than their eyes. A little sharpening in Photoshop and you get a beautiful image. I would give this technology a chance. Who knows where it will go, and as for the science and physics behind it... big deal. If someone can change all of these things that we think are written in stone, isn't that great? I say best of luck to him and his company.

0 upvotes
falconeyes
By falconeyes (Aug 20, 2011)

I am still fascinated by the ignorance of Ng with respect to the facts of optical physics.

Photons are elementary particles with a wave nature, and they obey the Heisenberg uncertainty relationship. Speaking about rays and hundreds of millions of pixels at the same time is close to spreading false information: e.g., to capture 170MP on an APS-C sensor (not exactly a P&S specification, btw) you need 1.5µm pixels (ok, if you ignore the Bayer matrix for a second) and an f/2.0 - f/2.2 lens which resolves 800 lp/mm across the entire field (otherwise, you won't be able to refocus outside the image center). The best system camera lenses on earth resolve about 400 lp/mm (primes from Leica or Zeiss) and they only do so in the center and at about f/4. Lenses with a smaller image circle can do better, but this wouldn't solve the light field capture problem.

I guess, it is all ok because US investors typically didn't exactly study physics ;)

1 upvote
Dan Tong
By Dan Tong (Aug 20, 2011)

I'm fascinated by the arrogance of some people who think they understand something and appear to say that what has already been done is impossible.

0 upvotes
falconeyes
By falconeyes (Aug 21, 2011)

You are right and you cannot mean me.
Because I did not say such a thing.
What has been done is possible. And I even provide details where I explain what to expect when full specs become public. All I say is that some of the predictions made by Lytro lack scientific background.

What Lytro does is in no way magical. It is applying rather old physics. What is new is the processing which can now be done in-camera.

0 upvotes
Mike Davis
By Mike Davis (Aug 21, 2011)

Falconeyes has exhibited no arrogance whatsoever in his comment, above. He has only stated the facts. It is Mr. Ng (CEO of Lytro) who is exhibiting arrogance in his attempt to pull the wool over the eyes of investors and consumers alike when he claims that it's only the lack of market interest that has prevented manufacturers from applying the same pixel densities used in mobile phones to APS-C sensors. No, Mr. Ng, it's DIFFRACTION that is already preventing us from using f/16 and f/22 with APS-C sensors when we try to use enlargement factors that exploit the pixel densities of 12 or 14 Megapixel sensors! Mr Ng's technology can do NOTHING about overcoming diffraction. (See my long-form comment above, dated Aug 21, 2011 at 14:01:40 GMT.)

0 upvotes
eyematters
By eyematters (Aug 19, 2011)

The amount of data that would have to be collected to account for all light and angles should be staggering. I find it hard to believe it will all fit in the file size they are talking about. Could it simply be that the picture is taken with a large depth of field (all is in focus) and they then apply a gaussian blur throughout, which is selectively removed as you click on parts of the pictures? Would work if it is also coupled with distance information... Just a thought.

0 upvotes
Sean Clark
By Sean Clark (Aug 19, 2011)

The optical flaws of large aperture lenses are more pronounced at the edges of the lens. If this camera uses micro lenses does the software sample light more from the center of the lens as DOF is increased in post? Does that result in a reduction of Coma, chromatic aberration and increased sharpness? i.e. Does the image improve optically in a similar way to stopping down the lens when increasing the DOF?

0 upvotes
Oliver Loch
By Oliver Loch (Aug 19, 2011)

Here are some simple ideas for how this technology could be used (software permitting):

Adjust focus: Useful if accurate focus is difficult to achieve, for example with moving objects.

Refocus: Useful if you change your mind about what should be in focus.

Selective focus: Like painting with light (aka dodge and burn) you could paint with sharpness and blur.

All in focus: Like focus stacking, great for macro

Non-parallel focus plane: Similar to tilting the lens you could tilt the focus plane. Useful for architecture, portraiture, product shots etc.

Non-planar focus area: Have the surface of a curved or oddly shaped object in focus

Focus masks: Use the depth information to create masks in photoshop, for example to desaturate elements that are further away or to easily select and replace a background.

Displacement maps: Use depth information with photoshop's displacement filter to wrap another picture around the image

(cont'd)

2 upvotes
Oliver Loch
By Oliver Loch (Aug 19, 2011)

Follow focus: killer app if the camera could shoot light field videos

3D from one picture: Use depth information to create (limited) 3d objects similar to what helicon focus can do as a side product of focus stacking.

Stereoscopic light fields: Use two light field cameras for 3D movies with additional depth information (which would allow you to move your head a bit and see a slightly different image) or to render an area sharp depending on what the viewer is looking at.

Lenticulars: Use depth information with lenticular prints

Embosser: Combine the light field camera with a device that creates a relief of the scene. Useful if you want to share pictures with blind people. Or take a portrait, invert the depth information, use it with a 3D printer and put it on the wall to have the person watch you.

0 upvotes
Oliver Loch
By Oliver Loch (Aug 19, 2011)

Relight: Use the raw light field that has the directional information of the light to change the brightness of the light sources or adjust their colors in mixed light situations.

Compress: Use the depth information to compress the perspective as if the picture was taken with a longer lens.

There're probably thousands of other ideas just waiting to be thought of. So I'm much more excited about what could be done with this technology than I'm concerned about what the initial technical shortcomings might be.

2 upvotes
Lightshow
By Lightshow (Aug 19, 2011)

I was hoping he'd share what the final image size will be, but we only get this comment:
"The 0.1MP resolution we were producing then is not consumer-ready, so we've come a long way from there to make a commercializable product"

So 1MP?
If they can get to 4MP that would be a huge improvement.

0 upvotes
falconeyes
By falconeyes (Aug 20, 2011)

It can be computed, plenoptics is very old science. With current sensors and affordable lenses, the limit would be around 0.3 MP.

0 upvotes
BPJosh
By BPJosh (Aug 19, 2011)

old news

0 upvotes
Humboldt Jim
By Humboldt Jim (Aug 19, 2011)

As distance from lens increases, effect decreases.

0 upvotes
treepop
By treepop (Aug 19, 2011)

This technology would be GREAT! in phone cameras! iPhone 5?

0 upvotes
semorg
By semorg (Aug 19, 2011)

They are doing it wrong and he will soon be replaced as CEO.

They need to market this first to Hollywood and movie makers and use their technology in movie studios, allowing the director and editors to place specific focus on a character or object as needed to best tell the story.

They can use this approach first to build a bit of brand recognition. Also, this is where the product has the most use, IMHO. They are making photo sharing more difficult in the world of Instagram and quick mobile photo sharing, and I think the consumer market is not the way to go.

They will sell a few thousand units to a bunch of early adopters, but all their investment and time in the consumer market will end there. I just don't think they can convince enough consumers to buy these things for the company to survive in a very competitive consumer camera market.

1 upvote
nachos
By nachos (Aug 19, 2011)

Ha, I agree. 3D moviemaking has once again about run its course and Hollywood is always in want of some technological magic to throw its huge amounts of money at. If anything, at least it'll earn the engineers a decent salary.

Such technology eventually will trickle down as people see the possibilities, but for that initial cash you want those deep pockets that are always looking for the next big thing.

0 upvotes
olddog99
By olddog99 (Aug 19, 2011)

I'm open to seeing what this is like and, depending on cost, will probably try it and then pass it on to a favored nephew or grandchild. I don't really need a consumer-grade one.

I've seen a lot of new things flop in 50 years in the medium. Some work. Others don't. Others are novel, but so what; e.g. a panorama camera is useful, but there's not always room in the bag.

What I don't expect to see is something that would affect the basic nature of what I like to shoot, although the aspects of low light photography mentioned have some appeal, depending on what Ng means. We're to a degree looking at an elephant with a blindfold, one hand tied behind our backs, sitting in a chair and with no sense of smell.

I don't read too much into the samples, which are, as he says, limited. But he's demonstrating one thing, and the possibility of edge-to-edge focus sharpness could be useful. I prefer to work with prints and this is to some degree aimed at screen viewing.

0 upvotes
StephenSPhotog
By StephenSPhotog (Aug 19, 2011)

What happened to learning a craft and mastering it? I heard someone down in this thread say "Haven't you ever missed focus and wanted to correct it?" Well, yes, I actually have. But you know what I had to do? Learn. All I could do was learn to pay closer attention to focus next time and nail it. If I missed a shot, I missed a shot. It's my fault.

I never shot film. I've only been into photography in the digital age. But even so, I miss film. I would love a film SLR to take around and better teach myself to pay more attention to composition, focus, and lighting.

I don't mean to sound like someone who is afraid of new technology. I love new tech. I eagerly await the next Nikon and Canon cameras. But I can't help but see this as photography being taken over by laziness. Don't be lazy people.

Take a shot and correct focus later? Do they even hear themselves when they say that?

0 upvotes
manas0210
By manas0210 (Aug 19, 2011)

I feel the same way Stephen. Strongly agree!

0 upvotes
Dan Tong
By Dan Tong (Aug 20, 2011)

Don't be lazy, learn to draw and stop using a camera as a crutch to capture images that you wish to share with others :)

0 upvotes
StephenSPhotog
By StephenSPhotog (Aug 21, 2011)

Dan, you obviously see my point but are choosing a silly very exaggerated response that really actually holds no merit at all.

Good try though.

0 upvotes
Cy Cheze
By Cy Cheze (Aug 19, 2011)

Instead of a plenoptic lens, wouldn't it be cheaper and easier to have a camera with four 1/2.3" sensors and four ordinary lenses, each of them set at a different focus? One could then pick the preferred mix of focus from the four layered shots.

Easier still: a single deep-focus shot, followed by application of Gaussian blur when editing.

Some things are charming in the abstract, but come with high costs or constraints. Solar energy, for example, won't re-charge your Volt unless it is mid-summer, you have an acre with $500k worth of panels, and you don't drive very much.

0 upvotes
JackM
By JackM (Aug 19, 2011)

I think this whole idea could be achieved much better with a normal camera and implementing the selective focus in software only. No microlenses, use the full res of the sensor, do the re-focus parlor trick after the fact. How? Just take a deep DOF picture (small aperture) and then apply gaussian blur in a horizontal gradient above and below the point of click? Of course, that will only work in good light.

0 upvotes
Cy Cheze
By Cy Cheze (Aug 19, 2011)

Would a plenoptic lens work any better in low light? Any approach would require slower shutter speed and risk blur unless the camera were very firm. If some folks think plenoptics means "never a bad photo," they are confusing bad focus with blur due to movement of camera or subject.

0 upvotes
binary_eye
By binary_eye (Aug 19, 2011)

Clearly, you've never actually tried applying a horizontal gradient blur to give the illusion of shallow depth of field. It might work if all you're shooting is landscapes from a hot-air balloon, but for anything else it's mostly useless.

0 upvotes
iae aa eia
By iae aa eia (Aug 19, 2011)

Though still far from competing on the same quality level as conventional sensors, I believe it is a great technology; despite the low resolution, double the current figure would already allow wide use and make it sellable. Online media photographers who normally use photos only up to screen resolution would benefit hugely. Imagine photographing sports, or constantly moving subjects at distant social events, without the need for focusing. It would mean fast shooting with much less power drain.

Maybe it'll never be a replacement technology, but surely it'll have its market, and it will not be small.

0 upvotes
HumanTarget
By HumanTarget (Aug 19, 2011)

Even if the hype were all true, I think the samples show pretty well how pointless this technology is except in the case of missed focus, which is the exception, not the rule.

0 upvotes
treepop
By treepop (Aug 19, 2011)

Getting tack-sharp focus is one of my MAIN issues.

0 upvotes
fmian
By fmian (Aug 19, 2011)

Now to let you change shutter speed and aperture after taking the shot.
Then it would be really cool.

0 upvotes
duartix
By duartix (Aug 19, 2011)

Ah, but you can... sort of!
The Panasonic GH2 has an electronic shutter that works at 40fps@4MP.
If you make a 1s capture in that mode and then choose how many frames you stack in software later, it's basically the same as choosing the shutter speed after you took the shot. ;)

0 upvotes
MrRoger
By MrRoger (Aug 19, 2011)

Some wonderful popular science theories here. Let's put up some counter-arguments.

1) The way images are recreated suggests to me that it would be quite possible to populate the sensor plane with multiple smaller chips; the microlens arrangement can ensure there are no gaps in the captured information.

2) 1024x768 is a popular computer display size, but it's only 0.75MP.

3) Each point in a plenoptic camera draws information from multiple sensor points; that in itself reduces noise, so why do we think these cameras are going to be noisy?

4) I am overjoyed by the fact the first product is going to be a consumer product. So I can afford to buy one and try it out. And I hope, I really hope, it is going to work.

1 upvote
Dan Tong
By Dan Tong (Aug 20, 2011)

It's a pleasure to hear an opinion from someone who actually reads the information fully and actually understands the basic facts.

0 upvotes
dragra
By dragra (Aug 19, 2011)

Not a single detail about the camera. Just show us the camera and we'll know whether this is going to be a flop or not. I think they went the wrong way in making a camera; instead they should concentrate on sensor tech, file some patents and later license it to other camera makers. Much hoopla about nothing (yet).

1 upvote
aardvark7
By aardvark7 (Aug 19, 2011)

As with all the other samples, they are very poor and would ordinarily end up 'on the cutting room floor'!

Even when trying to apply the single stated advantage, that of selecting focus point, it is so inaccurate and arbitrary, that the 'gain' is actually a negative. To see what I mean, just try getting the earring in focus on the first sample shot...

Even if this system succeeded to some degree, the thought of having to fool around afterwards with every shot, just to optimise focus, sends shivers down my spine! It's bad enough post processing 500 - 1000 wedding shots when I've pretty well nailed the focus and the exposure is already reasonably good. Add this into the mix and you are looking at nervous breakdown territory!!!

All that said, I'm with Joe on this: it's complete hooey!

0 upvotes
Cy Cheze
By Cy Cheze (Aug 19, 2011)

What makes you think the picture with the earring was taken with a plenoptic lens at all? Basically, the choice is between the face and the flag. That is either "duo-optic" or a montage of two mono-optic photos.

But this would not be for wedding photos. It would be for people to share holiday greetings with little family photos at screen resolution. Pet lovers would share pictures of their pride. Recipients would buy a reader to see the pictures and have a good chuckle. Maybe there would be a market for coffee table digital viewers and touch screen selection of focus. Politicians' PR staff could use it to "airbrush" photos to highlight their employers' favorable features and defocus any nefarious faces nearby.

0 upvotes
Lightshow
By Lightshow (Aug 19, 2011)

Re: the earring shot.
from the interview:
"Also the way we've packaged the data for easy viewing on the internet has an effect. It's not the full light field you're seeing - it's a subset to make it more portable. It's analogous to comparing the Raw data that an enthusiast photographer might take, with the small, compressed JPEG that Facebook might serve up if you view it on your smart phone."

0 upvotes
aardvark7
By aardvark7 (Aug 22, 2011)

So what, might I ask, is the point of the samples?
They purport to show the results, but don't! Merely a feeble impression.
We are told that the majority of use (certainly initially) will be for Facebook et al., but this only demonstrates it is not even usable for that.
The only conclusions then are that the device is even more useless than common sense suggests and far less likely to see the light of day than has been stated.

0 upvotes
mauvan
By mauvan (Aug 19, 2011)

The best and sharpest picture is the portrait of mr. Ren Ng. But there the trick is not working ;)

1 upvote
Kumara
By Kumara (Aug 19, 2011)

I agree with some of the previous comments: first, you do need a very high sensor density to get even a halfway decent picture resolution, or else what you gain by focusing the image details later, you lose through poor definition.
The other point is that as a photographer you should... ahem, FOCUS on your subjects at the shooting INSTANT. Having said that, it may be a useful way to re-work an image, but in that case the matter moves onto a much wider and more controversial footing.

0 upvotes
HBowman
By HBowman (Aug 19, 2011)

None of the pictures presented are sharp. This toy technology will probably end up in phones for Facebook geekery.

1 upvote
Carsten Saager
By Carsten Saager (Aug 19, 2011)

I am getting more and more the impression that this is vaporware. Plenoptic cameras already exist and are in use (mostly industrial). The concept is sound, but to achieve more than web resolution the sensor has to capture an enormous number of pixels. This has consequences for implementing it in small devices (memory, power consumption). Besides the manufacturing problems for the then-needed smaller microlenses, eventually the noise level of the individual pixels will become too high to allow the plenoptic reconstruction - this is a physical limit that cannot be overcome.

0 upvotes
TheEye
By TheEye (Aug 19, 2011)

For me, this potential new "tool," if you will, has very little to do with creative photography. Rather, it seems a convenient way to achieve in retrospect something that I prefer to realize while shooting. I am absolutely not interested in that sort of thing. It's surely great for picture-takers who can't decide where they should focus.

0 upvotes
David Parsons
By David Parsons (Aug 19, 2011)

I am assuming the DOF seen in the photo is due to the aperture of the lens, and that moving the focal point is achieved from the information gathered by the micro lenses? So greater DOF would be possible with a smaller aperture, but with a wide aperture, surely there is the opportunity (now, or later with software development) to do an 'HDR'-like effect on DOF: shoot at f/1.8 but layer up to what would have been achieved at f/8 if that is what you wanted. This would be very useful for low light, if you wanted more of the picture in focus than the aperture needed for the conditions allowed?

0 upvotes
marsbar
By marsbar (Aug 19, 2011)

The "HDR" effect your talking about is somewhat similar to focus stacking, its already something people do mostly for Macro Photography, but can also be used for landscapes and architecture. Software is readily available for cameras with normal lenses to do this, of course this means taking multiple photos at different focus distances, so your pretty much limited to no moving subjects. But of course you can still shot wide open and have each image with less noise. So instead of one f/8 at ISO 1600 you can have say for example 4 or 5 f/2.8 shots at ISO 200 or still at ISO 1600 but a faster, more stabilizing, shutter speed.

Not exactly as versatile as a one shot plenoptic, but i thought i should point out that this is something available to you.

0 upvotes
duartix
By duartix (Aug 19, 2011)

Now all we need is a camera with a global shutter and decent FPS/resolution.
The Panasonic GH2 does 40fps@4MP but I'd like to see a bit more.

0 upvotes
PaulSnowcat
By PaulSnowcat (Aug 19, 2011)

This all seems to work with a tiny (not very small, but tiny) sensor with a tremendous pixel count. And you can see how Ng points out several times - "sharing, sharing, sharing, sharing...". A revolution? Perhaps it is, in cameraphones, but nothing more than a new useless toy for a photographer...

0 upvotes
Michael Ma
By Michael Ma (Aug 19, 2011)

Is it just me or is it that his portrait shot really needs some work? Maybe it's just unnecessarily big for a CEO message article.

0 upvotes
alphacam
By alphacam (Aug 19, 2011)

oh man... there I was clicking on his portrait/photo too to see if any focus point changes :|

0 upvotes
Nazgman
By Nazgman (Aug 19, 2011)

What I'm looking forward to is a matching display for these images: it has the same amount of photo sites and micro-lenses as the camera, and turns the image into something viewable without the need for processing. It would give a 3D effect when viewed from the right range of angles, without the need for 3D glasses.

0 upvotes
Nazgman
By Nazgman (Aug 19, 2011)

I think this has enormous potential from a lens design point of view.

How about field curvature, chromatic aberration, spherical aberration, distortion? All trivially solved with the right algorithm. Some of that is already done today, and light field photography would add a whole new dimension to the possibilities.

0 upvotes
xlynx9
By xlynx9 (Aug 19, 2011)

It becomes more interesting when you combine the two comments:

"wouldn't need focusing at the point of shooting." and

"we can shoot wide-open with apertures larger than make any sense for conventional photography"

0 upvotes
love_them_all
By love_them_all (Aug 19, 2011)

This kind of effect can easily be "copied" by software. Take a picture with the maximum DOF, then in post the software can blur out zones of the picture...

0 upvotes
xlynx9
By xlynx9 (Aug 19, 2011)

But it cannot correct a drastically out-of-focus picture; this can.

1 upvote
kenw
By kenw (Aug 19, 2011)

This camera can not correct a drastically out of focus picture any more than stopping down with a conventional camera. See Joseph's post below. What it will let you do is still have selective focus effects within a range of focus (at a tremendous reduction of resolution). That range, however, is no larger than if you had stopped down for larger DOF to begin with.

1 upvote
BoyOhBoy
By BoyOhBoy (Aug 19, 2011)

Nor is it easy to generate a smooth and believable transition between the sharp and blurred area. However, I find something deeply unsatisfying in the images posted for experimentation. The refocussed area never looks truly sharp, even in the highly downsized images they provide. Good enough for Facebook? Sure, but so is my crappy cell phone camera.

1 upvote
sorinx
By sorinx (Aug 19, 2011)

I think it will have a larger DOF than a normal camera at maximum aperture. In fact it does not need an aperture change for DOF.

In a normal camera, all you have is a plane with a sensor. With this camera you have micro-lenses at different distances from the lens. It is like taking multiple shots at maximum DOF (with different focus points).
The problem, however, is that you lose resolution/sensitivity compared with a normal camera. If, for example, you have 16MP and 4 focus planes, and you want a small DOF image, then you only have 4MP to work with. For more focal planes, even less.

0 upvotes
Cy Cheze
By Cy Cheze (Aug 19, 2011)

xlynx9, how can a plenoptic picture rescue a picture if what you perceive as bad focus was really slow shutter blur or high ISO noise?

0 upvotes
Sean Clark
By Sean Clark (Aug 19, 2011)

Cy, if you have a lens with an aperture so large the DOF is too small for the scene, you could shoot it wide open anyway to gather enough light that you can then increase the shutter speed. Now motion blur is reduced, but your DOF is too shallow. With this technique, you could increase your DOF in post.
High ISO doesn't blur shots, it adds noise. Noise reduction often does add blur, though. There is the potential that this technique could smooth noise in a way similar to picture stack averaging, which could slightly sharpen the image while it reduces noise - as in astrophotography.
Actual 1st gen products may not be capable of either improvement even if technically possible.

0 upvotes
xlynx9
By xlynx9 (Aug 20, 2011)

kenw: If you could correct anywhere within the normal stop-down range, I would say that is very significant! Fully stopped down can give a pretty huge range - from a few cm to infinity. However, I suspect in practice the system will be more limited by the number of photosite positions and the design of lenses to match.

As a side note, not having to focus at all could also help make for a very fast and responsive camera!

Cy: How could the war in Afghanistan kill Elvis if you perceived him to be shot?

0 upvotes
alphacam
By alphacam (Aug 19, 2011)

It sure is fun to click around on the pictures above to change its focus point!!

0 upvotes
WT21
By WT21 (Aug 19, 2011)

What if you want the option of selective focus OR deep DOF - not selective focus on only one or the other subject? Can you do that? Look at, say, just the climber in focus, or all of it in focus?

1 upvote
BoyOhBoy
By BoyOhBoy (Aug 19, 2011)

You can do it if they give you more access to the computation engine. They do say that initially they will take the Apple approach - dumb down everything to the lowest common denominator. Which is why some of us don't touch Apple products.

0 upvotes
Joseph S Wisniewski
By Joseph S Wisniewski (Aug 19, 2011)

I always get a kick out of when Ren Ng makes presentations...

"When we moved from film to digital it made all sorts of changes to what we could do with photographs, but we were still collecting essentially the same 2D data that we always had been, right back to the days of the daguerreotype."

I guess he's never seen:
* A hologram. That's a film technique over half a century old for capturing a light field. It captures a lot more information than Ng's folly.
* A lenticular array, like the one in his camera, used for the analog film processes for which it was designed (baseball cards, advertisements, Cracker Jack box prizes).
* A stereo or higher-order film camera. Remember the four-lens Nimslo?

1 upvote
Richard Butler
By Richard Butler (Aug 19, 2011)

In fairness, Joseph, he doesn't say that nothing but 2D has been done before, just that current digital photography captures the same data as most film did.

0 upvotes
Michael Uschold
By Michael Uschold (Aug 19, 2011)

What Ng should have said is: "For the overwhelming majority of photos taken today, we are still collecting essentially the same 2D data". The important exceptions you note are very small niches compared to mainstream photography. This new device may have the potential to change that.

0 upvotes
Mark Devine
By Mark Devine (Aug 19, 2011)

I have been following this for some time and am excited to see what Lytro releases and how it progresses. Having used DSLRs since their infancy, I think Ng's forecast that the growth in sophistication and power of both Lytro files and tools to work with them will be like the growth for raw files and tools is probably very fair.

I got on the waiting list some time ago, and am very much looking forward to seeing what Lytro releases. Wish I had it for my upcoming hiking trip to the Pacific Northwest!

While a Lytro image's focus may not be as tack sharp as the photo of Mr. Ng at the head of this article, I think that is simply a matter of time. The code jockey in me is fascinated by the whole concept and the photographer just wants to have fun.

Good luck Mr Ng!

0 upvotes
ThePaulmeister
By ThePaulmeister (Aug 19, 2011)

It seems it can't get critical focus here, judging from the examples. I'll wait on this, I think.

0 upvotes
mikiev
By mikiev (Aug 19, 2011)

Yeah,

I only seem to get 2 or 3 'zones' of focus, at most.

Not rushing out to buy, but definitely interested in using it in the future.

0 upvotes
Joseph S Wisniewski
By Joseph S Wisniewski (Aug 19, 2011)

It can get an arbitrary number of zones of focus. But the calculations to do that aren't anywhere near realtime. To get something where you can move sliders and see something happening, you need to calculate a finite number of zones of focus, offline, with a big computer, then put them together in some sort of layered presentation (Flash is ideal for fluff like this).

0 upvotes
xlynx9
By xlynx9 (Aug 19, 2011)

I count 6 zones on the last image.

0 upvotes
Lightshow
By Lightshow (Aug 19, 2011)

I posted this quote near the top,

"Also the way we've packaged the data for easy viewing on the internet has an effect. It's not the full light field you're seeing - it's a subset to make it more portable. It's analogous to comparing the Raw data that an enthusiast photographer might take, with the small, compressed JPEG that Facebook might serve up if you view it on your smart phone."

It sounds like they sampled the data depending on the image used, so if the image has 3 points of interest, they only included those DOF slices plus a few OOF to save on bandwidth.

0 upvotes
Rich Maher
By Rich Maher (Sep 8, 2011)

Who cares. Another conversation piece for a short while.

0 upvotes
The_cheshirecat
By The_cheshirecat (Oct 20, 2011)

Points available to select for focus are 'iffy'. Trying to focus on the girl's teeth, nothing happens. Using her earring as a point of focus does trigger a refocus, but nothing, including the earring, comes into focus.

There are many other aspects which I expect will keep this from becoming mainstream, even among uncritical snapshooters. It is a shame because I was really enthusiastic about this when I first read it.

This is all an interesting concept and I applaud those who have taken it this far, but it has a long way to go before it will be widely accepted by more than a very few.

Please keep working in this direction. While it may not replace pro cameras, there is certainly a place for this in today's world once its many shortcomings have been overcome.

0 upvotes