joe6pack: Interesting concept, but I can't see the application of this. You can do away with the lens and get infinite DOF using a pinhole camera as well.
With a single sensor, the camera needs to poll the sensor each time it modifies the aperture array, which I believe is much slower than reading out a full sensor array once.
A very similar method is already used by astronomers to image at X-ray and gamma-ray energies.
fieldray: This is also known as coded aperture imaging, which has been around for a while. Luckily for us, somebody invented the focusing lens to sample the light field simultaneously and transform it into an angular field map, and somebody else invented a sensor array to also sample that angular field map simultaneously! Very clever. Of course, you still have to sample this field at multiple depth points to get the full image information. Stereo photography is another creative combination of these approaches that provides some depth information (though no 3D wavefront information) while still doing most of the sampling simultaneously.
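To make the coded-aperture idea concrete, here's a toy sketch (my own illustration, not anything from the article or the astronomy instruments mentioned above): a point source casts a shifted shadow of the mask on the detector, and cyclically cross-correlating the detector image with the mask recovers the source position, since a binary mask's autocorrelation peaks at zero shift.

```python
import random

N = 8  # toy detector/scene size

# Pseudorandom binary aperture mask (1 = open hole), seeded for repeatability.
rng = random.Random(42)
mask = [[rng.randint(0, 1) for _ in range(N)] for _ in range(N)]

def encode(scene):
    """Detector reading: each source pixel adds a cyclically shifted mask shadow."""
    det = [[0.0] * N for _ in range(N)]
    for r in range(N):
        for c in range(N):
            if scene[r][c]:
                for u in range(N):
                    for v in range(N):
                        det[(r + u) % N][(c + v) % N] += scene[r][c] * mask[u][v]
    return det

def decode(det):
    """Cyclic cross-correlation of the detector image with the mask."""
    rec = [[0.0] * N for _ in range(N)]
    for i in range(N):
        for j in range(N):
            rec[i][j] = sum(det[(i + u) % N][(j + v) % N] * mask[u][v]
                            for u in range(N) for v in range(N))
    return rec
```

For a single point source at (3, 5), `decode(encode(scene))` attains its global maximum at (3, 5), with peak value equal to the number of open holes in the mask. Real instruments use masks with flatter sidelobes (e.g. MURA patterns) so that extended scenes decode cleanly, but the encode/correlate structure is the same.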
That's what I was thinking too; it seems to be the same idea used in X-ray and gamma-ray astronomy.
Mark Schormann: This makes the most sense in applications where the sensor is incredibly expensive and the aperture array can be made more cheaply.
I could imagine some applications in high-speed optical data transmission where there are multiple endpoints.
It's commonly done in X-ray and gamma-ray astronomy, where it's extremely difficult to make a mirror or lens.
wetsleet: "The composite image is made up of 55 high-resolution images, taken using its MAHLI [2MP CCD] camera"
So how does a 2MP camera take a high resolution image?
Read Mark Ravine's interview linked in this article. The camera sensor choice was a compromise so the same sensor could be used on all the cameras, and the performance requirements were pretty much frozen in the original 2004 proposal.
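The resolution of the composite comes from stitching, not from any single frame: 55 overlapping 2MP frames mosaic into a much larger image. A rough back-of-the-envelope (the 30% overlap figure is my assumption for illustration; the article doesn't state it):

```python
# Mosaic-resolution arithmetic for a stitched composite.
frames = 55
frame_mp = 2.0      # megapixels per MAHLI frame
overlap = 0.30      # ASSUMED fractional overlap between adjacent frames
effective_mp = frames * frame_mp * (1 - overlap)
print(f"~{effective_mp:.0f} MP composite")
```

So even with generous overlap between frames, the stitched result lands in the tens of megapixels, well beyond what any one 2MP exposure captures.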