After Foveon, a new smart sensor is now out

If it can capture all the light field, and you can selectively focus afterwards, then why is any of it out of focus in the first place? I mean the original capture must have enough information to reproduce the whole scene in focus.
Yes --- you are right.
You could have infinite depth of field if you want.
Or rather, a very large one. Not everything is actually in focus, but a wide range is.
And then I assume you could choose your depth of field, and even have two or more depths of field. So it's weird that all the demos show off the concept of post focusing, when that concept of focus is totally eliminated in the technology. It's like they've gone to all this effort to eliminate the concept of focus, then gone ahead and made focus and DOF the main features.
I think they do this because great DOF is already demonstrated by any cheap mobile phone camera - and is therefore not interesting.
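To make the refocusing idea concrete, here is a minimal sketch of the shift-and-add technique, assuming the raw capture has already been unpacked into a 4D array of sub-aperture views (the array layout, names and integer shifts are simplifying assumptions, not Lytro's actual pipeline):

```python
import numpy as np

def refocus(light_field, slope):
    """Shift-and-add refocusing over a 4D light field.

    light_field : array of shape (U, V, Y, X), one sub-aperture
                  view per aperture position (u, v).
    slope       : pixels of shift per unit of aperture offset;
                  each value selects a different focal plane.
    """
    U, V, Y, X = light_field.shape
    out = np.zeros((Y, X))
    for u in range(U):
        for v in range(V):
            # Shift each view in proportion to its offset from the
            # aperture center, then accumulate.
            dy = int(round(slope * (u - U // 2)))
            dx = int(round(slope * (v - V // 2)))
            out += np.roll(light_field[u, v], (dy, dx), axis=(0, 1))
    return out / (U * V)
```

Calling refocus with slope 0 reproduces the plane the main lens was focused on; sweeping the slope moves the synthetic focal plane, which is all that "choosing the focus afterwards" amounts to.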

--
Roland

support http://www.openraw.org/
(Sleeping - so the need to support it is even higher)

X3F tools : http://www.proxel.se/x3f.html
 
It's like they've gone to all this effort to eliminate the concept of focus, then gone ahead and made focus and DOF the main features.
I guess it's clever marketing.

For many subjects, pretty much everything is in focus with compact cameras already. So the point of "infinite" DOF alone wouldn't really stand out that much. But the big deal here is the gathering of spatial information instead of just a two-dimensional image. To avoid looking two-dimensional one has to add depth, and that's exactly what they do; or better, they let you decide where to put this depth.

And being able to "shift focus" just by clicking is simply cool. Never mind that the whole picture is perfectly in focus when captured and the depth is added artificially afterwards.
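As a toy illustration of depth being added back artificially, assuming an all-in-focus image and a per-pixel depth map are already available (a single blur radius is used here; real renderers vary it per pixel):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fake_dof(image, depth, focus_depth, max_sigma=3.0):
    """Blend a sharp all-in-focus image with a blurred copy, with the
    blur weight growing as pixels get farther from the chosen depth."""
    blurred = gaussian_filter(image, sigma=max_sigma)
    # Weight is 0 at the focal plane and approaches 1 far from it
    # (depth and focus_depth assumed normalized to [0, 1]).
    weight = np.clip(np.abs(depth - focus_depth), 0.0, 1.0)
    return (1.0 - weight) * image + weight * blurred
```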
 
I've been shooting Sigma/Foveon since the SD9 and I'm very picky about focus. I'll easily throw away 2/3 of my raw images due to focus issues.

I'll be keeping an eye on this new technology.
--
Obscura
 
I signed up for 'reservations...'

--
Jim
Not me,

Part of photography for me is the challenge of getting the perfect picture right in the camera. I refuse to use a camera that takes that challenge away, which is probably why I like the SD14 so much.
You are joking, right? I can't see this taking any creativity away at all; rather, it helps get everything right in the first phase, when the picture is captured. PP (post-processing) will let you be the artist.

If that's the way you like it, that's fine, but why not go back to film and a darkroom then? Or better yet, a brush and canvas! I'm sure there were many painters/artists who thought film should not or could not be considered art, but we proved them wrong over time.

Me, I'll take one if they don't cost an arm, a leg and a left testicle like the SD1. I'm sure it will, of course, so I'll just keep plugging away like I am with film and digital (SD9/SD14).

JohnW
...Because the whole concept sounds like an April Fools' joke. ;)
 
If they can capture the light field, then they are surely only a step away from a holographic camera.
His paper states that he is capturing intensity and direction.

A hologram captures and reproduces the wave-front being emitted from an object or objects.

An object looks real and 3D because the light coming from the hologram is the same as the light coming from the original object.

Not sure what you would use to view the captured data, but maybe holographic images and even video are on their way.
 
If they can capture the light field, then they are surely only a step away from a holographic camera.
His paper states that he is capturing intensity and direction.

A hologram captures and reproduces the wave-front being emitted from an object or objects.

An object looks real and 3D because the light coming from the hologram is the same as the light coming from the original object.

Not sure what you would use to view the captured data, but maybe holographic images and even video are on their way.
Hmmmm ... actually ... that's more or less true.

It's not holographic. But you can use the result in a similar way. You could, via some computation, create a hologram from the data.

--
Roland

support http://www.openraw.org/
(Sleeping - so the need to support it is even higher)

X3F tools : http://www.proxel.se/x3f.html
 
. . . of the present or likely resolution in the short- to mid-term future. But I thought this line might be "suggestive" of the order-of-magnitude reduction in resolution at present.

"A team at Stanford University used a 16 megapixel camera with a 90,000-microlens array (meaning that each microlens covers about 175 pixels, and the final resolution is 90 kilopixels) to demonstrate that pictures can be refocused after they are taken."

http://en.wikipedia.org/wiki/Plenoptic_camera
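A quick sanity check of those numbers, nothing more than arithmetic:

```python
sensor_pixels = 16_000_000   # 16 megapixel sensor
microlenses = 90_000         # microlens array

print(sensor_pixels / microlenses)  # ~178 pixels behind each microlens
print(microlenses / 1_000_000)      # 0.09 megapixels of final resolution
```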

Best,
--
Ed_S

http://www.pbase.com/ecsquires
 
...Because the whole concept sounds like an April Fools' joke. ;)
It is not a joke. Read Ng's PhD thesis. There is a link on one of their web pages.

It's a combination of focus-dependent scattering by additional microlenses (about 100,000) and image reconstruction by Fourier-transform methods from the resulting pixel pattern of the higher-resolution (about 16 MP) imager.
 
If they can capture the light field, then they are surely only a step away from a holographic camera.
His paper states that he is capturing intensity and direction.

A hologram captures and reproduces the wave-front being emitted from an object or objects.

An object looks real and 3D because the light coming from the hologram is the same as the light coming from the original object.

Not sure what you would use to view the captured data, but maybe holographic images and even video are on their way.
Hmmmm ... actually ... that's more or less true.

It's not holographic. But you can use the result in a similar way. You could, via some computation, create a hologram from the data.
I don't think so. In order to acquire a hologram, you need to capture the relative phases of the light waves coming from different parts and directions of the object. Capturing just intensities doesn't help.

In a hologram, the phases are converted into intensities by interference with a reference beam. One has to do this because film and digital imagers can only record intensities.

I am pretty sure you know this but thought it might be good to add some more detailed information.
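A toy numerical illustration of that point, with unit amplitudes and an on-axis plane reference wave as simplifying assumptions:

```python
import numpy as np

phi = np.linspace(0, 2 * np.pi, 8)  # object-wave phases
obj = np.exp(1j * phi)              # unit-amplitude object wave
ref = 1.0                           # plane reference wave, phase 0

print(np.abs(obj) ** 2)        # intensity alone: all ones, phase lost
print(np.abs(obj + ref) ** 2)  # interference: 2 + 2*cos(phi), phase encoded
```

The intensity of the object wave alone is flat and says nothing about phi; only the interference term recovers it.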

Regards, Frank
 
I don't think so. In order to acquire a hologram, you need to capture the relative phases of the light waves coming from different parts and directions of the object. Capturing just intensities doesn't help.

In a hologram, the phases are converted into intensities by interference with a reference beam. One has to do this because film and digital imagers can only record intensities.

I am pretty sure you know this but thought it might be good to add some more detailed information.
Yeah ... I know how holograms work. And I know the plenoptic image is not holographic. I was just speculating about what could be done with it. And actually the same can be done with any representation from which you can extract the 3D data.

If you have the 3D data from an image you can render a hologram. You then have to compute the interference patterns that a 3D object of that shape would generate. It would be a synthetic hologram computed from a "real" plenoptic image.
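A rough sketch of what that computation could look like: superpose spherical waves from a set of 3D points and interfere them with a reference wave. The function and its parameters are illustrative assumptions, not anything published for this camera:

```python
import numpy as np

def synthetic_hologram(points, amplitudes, wavelength, grid_x, grid_y):
    """Superpose spherical waves from 3D points onto the hologram
    plane (z = 0) and interfere the result with an on-axis plane
    reference wave, returning the recordable intensity pattern."""
    k = 2 * np.pi / wavelength
    X, Y = np.meshgrid(grid_x, grid_y)
    field = np.zeros(X.shape, dtype=complex)
    for (px, py, pz), a in zip(points, amplitudes):
        r = np.sqrt((X - px) ** 2 + (Y - py) ** 2 + pz ** 2)
        field += a * np.exp(1j * k * r) / r  # one spherical wave per point
    reference = 1.0  # unit-amplitude plane wave at normal incidence
    return np.abs(field + reference) ** 2
```

The 3D points would have to come from depth estimated out of the plenoptic data; a real computed hologram would also have to deal with sampling, occlusion and the display hardware.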

--
Roland

support http://www.openraw.org/
(Sleeping - so the need to support it is even higher)

X3F tools : http://www.proxel.se/x3f.html
 
...Because the whole concept sounds like an April Fools' joke. ;)
I assure you - it's for real. It might even be a nice toy to own. I will surely follow this one.

There are two questions:

1. Will it ever be able to produce a high quality photo?

2. Will it be an economical success?

--
Roland

support http://www.openraw.org/
(Sleeping - so the need to support it is even higher)

X3F tools : http://www.proxel.se/x3f.html
 
Roland, the explanation of the technology on the Lytro website begins,

"The team at Lytro is completing the job of a century’s worth of theory and exploration about light fields."

I don't think they're claiming any magic, or denying that they are standing on many decades of research shoulders.

It's an interesting concept that I too had seen before. As with all technical advances, in the end I think our stance readily becomes 'Why not?'. The real key is in the implementations: how much they offer people, compared to their real costs or negative influences.

And taking Jerry's recent statement as a kind of springboard: if we feel we like making artful pictures by other means, that's fine too, isn't it?

Regards, and hope you are doing well there,
Clive
 
On the website you will find the Ph.D. thesis of the CEO, Ren Ng (see "The Science inside", pg. 4, last link).

You will find that he significantly expanded the theory behind the plenoptic camera, focusing on getting practical and useful results, with resolution close to conventional cameras.

However, there is one point that is not handled at all in the thesis: color photography, i.e. how the Bayer pattern affects the computational reconstruction of the image.

So, although I think the company has a good basis, there is still a lot to do.

Greetings,
--
Robert F. Tobler
http://ray.cg.tuwien.ac.at/rft/Photography/
 
However, there is one point that is not handled at all in the thesis: color photography, i.e. how the Bayer pattern affects the computational reconstruction of the image.
I think color is solvable in a fairly straightforward way. I assume they will use an off-the-shelf Bayer CFA imager. Then you have the different light-field samples.

It might be OK to look at the red, green and blue samples separately, as you have lots of different angles on the same image details.

Or it might be better to use an ordinary Bayer CFA reconstruction of the plenoptic image before running the plenoptic algorithms.

Both approaches may work just fine.
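As a sketch of the first step of the per-channel approach, here is how an RGGB Bayer mosaic splits into its four color planes (the RGGB layout is an assumption; sensors differ):

```python
import numpy as np

def split_bayer_planes(raw):
    """Split an RGGB Bayer mosaic into its four color planes, each at
    half the resolution.  For the first approach, the light-field
    processing would then run on each plane independently."""
    return {
        "R":  raw[0::2, 0::2],
        "G1": raw[0::2, 1::2],
        "G2": raw[1::2, 0::2],
        "B":  raw[1::2, 1::2],
    }
```

The demosaic-first approach would instead interpolate full RGB before any of the plenoptic processing.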
So, although I think the company has a good basis, there is still a lot to do.
Probably

--
Roland

support http://www.openraw.org/
(Sleeping - so the need to support it is even higher)

X3F tools : http://www.proxel.se/x3f.html
 
I am doing just fine!

I think this is fun stuff. And it is fun to see that someone wants to make real products from research results. So I will look at this with interest.

The not-so-fun part is that companies almost always put the lid on and are secretive about their solutions. So you are probably not going to be able to search the Internet and read any papers about the advancements they make from now on. You will, for example, probably not get any info on how they solve color photography.

--
Roland

support http://www.openraw.org/
(Sleeping - so the need to support it is even higher)

X3F tools : http://www.proxel.se/x3f.html
 
I think color is solvable in a fairly straightforward way. I assume they will use an off-the-shelf Bayer CFA imager. Then you have the different light-field samples.
The problem is that current demosaicing algorithms take the spatial closeness of pixels into account, and can do a very good job of approaching black-and-white resolution if there is not much color involved. The microlenses of the plenoptic camera may require the development of different demosaicing algorithms to attain the same resolution, and this is where a small company could be seriously challenged.
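To make the spatial-closeness point concrete, here is a deliberately naive bilinear interpolation of the green channel of an RGGB mosaic; production demosaicers are far more sophisticated, but they share this reliance on immediate neighbors:

```python
import numpy as np
from scipy.ndimage import convolve

def bilinear_green(raw):
    """Naive bilinear interpolation of the green channel of an RGGB
    mosaic: every missing green value is the mean of its four
    orthogonal neighbors, all of which are green sites.  This is the
    spatial-closeness assumption that a microlens array would break."""
    green = np.zeros(raw.shape, dtype=float)
    mask = np.zeros(raw.shape, dtype=bool)
    mask[0::2, 1::2] = True  # G sites on R rows
    mask[1::2, 0::2] = True  # G sites on B rows
    green[mask] = raw[mask]
    kernel = np.array([[0.0, 0.25, 0.0],
                       [0.25, 0.0, 0.25],
                       [0.0, 0.25, 0.0]])
    green[~mask] = convolve(green, kernel, mode="mirror")[~mask]
    return green
```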

Greetings,
--
Robert F. Tobler
http://ray.cg.tuwien.ac.at/rft/Photography/
 
I think color is solvable in a fairly straightforward way. I assume they will use an off-the-shelf Bayer CFA imager. Then you have the different light-field samples.
The problem is that current demosaicing algorithms take the spatial closeness of pixels into account, and can do a very good job of approaching black-and-white resolution if there is not much color involved. The microlenses of the plenoptic camera may require the development of different demosaicing algorithms to attain the same resolution, and this is where a small company could be seriously challenged.
Solving such problems is typical university research work. And the company was spawned from university results, so there might be a connection there.

Let's hope that the color research is done in the open.

--
Roland

support http://www.openraw.org/
(Sleeping - so the need to support it is even higher)

X3F tools : http://www.proxel.se/x3f.html
 
I.e. how the Bayer pattern affects the computational reconstruction of the image.
But: Do they use a Bayer pattern sensor for the Lytro camera?

The problem, in any case, is the immense decrease in resolution caused by the microlens array placed on top of the sensor. In the thesis, Ren Ng describes using a Contax 645 medium-format camera with a 16.8-megapixel digital back. And yet the outcome is an image of 296×296 pixels. That's 0.09 megapixels!
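The arithmetic behind that figure:

```python
print(296 * 296)        # 87,616 output pixels
print(296 * 296 / 1e6)  # ~0.09 megapixels, down from 16.8 MP captured
```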

Publicly, Lytro representatives have hinted that the camera will be a point-and-shoot, and somewhere they also state that today's cameras have much more resolution than is needed for a picture on Facebook. As marketing, that makes sense. However, I wonder how they will manage an appropriate price, that is, once they have burned through their cash...
 
But: Do they use a Bayer pattern sensor for the Lytro camera?
I guess they do. Anything else would be very expensive to order from Sony.
The problem, in any case, is the immense decrease in resolution caused by the microlens array placed on top of the sensor.
Yes.
Publicly, Lytro representatives have hinted that the camera will be a point-and-shoot, and somewhere they also state that today's cameras have much more resolution than is needed for a picture on Facebook. As marketing, that makes sense. However, I wonder how they will manage an appropriate price, that is, once they have burned through their cash...
That's the usual problem for start-ups. Someone has to pay, say, 40 million dollars to get it going. Then the company in the best case has a revenue of 0.1 million dollars a year, or in 99 cases out of 100 fails. That does not work. I don't know why they do it, except maybe to be bought out by a big company like Amazon.

But even more strange: I don't think people on Facebook want unsharp pictures. They would rather use a tiny mobile phone camera and get practically infinite sharpness, without any strange cameras, and with more resolution.

This technology is for techno nerds - those who bought 3D glasses for their games in 1998.

--
Roland

support http://www.openraw.org/
(Sleeping - so the need to support it is even higher)

X3F tools : http://www.proxel.se/x3f.html
 
But even more strange: I don't think people on Facebook want unsharp pictures. They would rather use a tiny mobile phone camera and get practically infinite sharpness, without any strange cameras, and with more resolution.
I don't know if you've been paying attention to Facebook and to small-camera image trends in general: people on Facebook and other social sites LOVE unsharp images. An app on the iPhone called "Instagram" kind of kicked off this trend, with a lot of people using filters that produce unsharp edges and various other effects.

I don't think that's the issue. I think the real issue the camera faces is the amount of work the photographer will have to go through to produce an image they can print or share. The success of the camera is probably predicated on the usability of the workflow they have developed, and it will have to be, because Photoshop cannot help them.

--
---> Kendall
http://InsideAperture.com
http://www.pbase.com/kgelner
http://www.pbase.com/sigmadslr/user_home
 
