wilsonlaidlaw: The thing that impresses me is the heat resistance of the landing legs. I assume they must be covered in some sort of ablative material, such as tantalum hafnium carbide, but even so, retaining their structural rigidity after that extreme cooking is amazing. They may be actively cooled by circulating some of the rocket fuel through them.
The legs are placed well away from the engine exhaust. And since this is the 1st stage of a multi-stage rocket, it doesn't go fast enough for aerodynamic heating to become a huge problem.
rfclark: One reason this didn't crash and burn like a lot of our early military test rockets is that this rocket is built with private money! The engineers made sure they knew what they were doing before they lit the fuse!
SpaceX has had its share of failures. The first three launches of the Falcon 1 all failed.
Rich J: Thrust vectoring I understand. What puzzles me is the burn control. That's got to be a solid rocket, a big firework. So first, how do they control the thrust to decelerate to a hover, and then finely control it through the descent and final shutdown?
Most modern space launch vehicles use liquid fuel, though there are a few notable exceptions (e.g. the Shuttle's solid boosters). This one uses kerosene + liquid oxygen, the same propellants as the Saturn V's first stage. A liquid engine can be throttled and shut down on command, which is what makes a controlled descent possible.
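Since the engine can be throttled continuously, the descent becomes a control problem rather than a pyrotechnic one. A back-of-the-envelope sketch of a constant-deceleration landing burn (all numbers below are invented for illustration, not actual Falcon 9 figures):

```python
# Toy landing-burn kinematics. Every number here is an assumption
# chosen for illustration, not a real vehicle specification.
g = 9.81             # m/s^2, gravity
mass = 25_000.0      # kg, assumed stage mass at landing
v0 = 200.0           # m/s, assumed descent speed when the burn starts
h0 = 2_000.0         # m, assumed altitude when the burn starts

# Constant deceleration that reaches v = 0 exactly at h = 0:
#   v0**2 = 2 * a * h0   =>   a = v0**2 / (2 * h0)
decel = v0**2 / (2 * h0)           # 10 m/s^2 with these numbers

# The engine must cancel gravity AND provide the deceleration.
thrust = mass * (g + decel)
hover_thrust = mass * g

print(f"deceleration needed: {decel:.1f} m/s^2")
print(f"landing-burn thrust: {thrust / 1000:.0f} kN")
print(f"hover thrust:        {hover_thrust / 1000:.0f} kN")
```

A real landing uses closed-loop guidance rather than a fixed deceleration, but the arithmetic shows why fine throttle control matters: the required thrust sits in a narrow band just above what is needed to hover.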
joe6pack: Interesting concept, but I cannot see the application of this. You can do away with the lens and get infinite DOF using a pinhole camera as well.
With a single sensor, the camera needs to read out the sensor each time it modifies the aperture array, which I believe is much slower than a single readout.
A very similar method is already used by astronomers to image at X-ray and gamma-ray energies.
fieldray: This is also known as coded aperture imaging, which has been around for a while. Luckily for us, somebody invented the focusing lens to sample the light field simultaneously and transform the field into an angular field map, and somebody else invented the sensor array to sample that angular field map, also simultaneously! Very clever. Of course, you still have to sample this field at multiple depth points to get the full image information. Stereo photography is another creative combination of these approaches that provides some depth information (no 3D wavefront information) but still does most of the sampling simultaneously.
That's what I was thinking too; it seems to be the same idea used in X-ray and gamma-ray astronomy.
Mark Schormann: This makes the most sense in applications where the sensor is incredibly expensive and the aperture array can be made more cheaply.
I could imagine some applications in high-speed optical data transmission where there are multiple endpoints.
It's commonly done in X-ray and gamma-ray astronomy, where it's extremely difficult to make a mirror or lens.
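The decode step these comments allude to can be sketched in one dimension: the detector records the scene convolved with the mask pattern, and because the mask's transfer function is known, the scene can be recovered computationally. A toy sketch, with the sizes, source positions, and mask pattern all invented for illustration (the prime mask length guarantees the binary mask's FFT has no exact zeros, so naive deconvolution works in this noiseless model):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D coded-aperture model: the detector sees the scene circularly
# convolved with a binary open/closed mask (far-field approximation).
n = 61                                    # prime length: mask FFT has no zeros
scene = np.zeros(n)
scene[[5, 20, 41]] = [1.0, 2.0, 0.5]      # a few point sources

mask = rng.integers(0, 2, n).astype(float)
mask[0], mask[1] = 1.0, 0.0               # guarantee the mask isn't constant

# Forward model: circular convolution of scene with mask.
detector = np.real(np.fft.ifft(np.fft.fft(scene) * np.fft.fft(mask)))

# Decode: divide out the mask's known, zero-free transfer function.
recovered = np.real(np.fft.ifft(np.fft.fft(detector) / np.fft.fft(mask)))

print(np.allclose(recovered, scene))      # → True in this noiseless model
```

Real instruments (e.g. the URA/MURA masks used on X-ray telescopes) instead use mask patterns whose correlation with a matched decoding array is a delta function, which behaves far better under photon noise than straight deconvolution.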
wetsleet: "The composite image is made up of 55 high-resolution images, taken using its MAHLI [2MP CCD] camera"
So how does a 2MP camera take a high resolution image?
Read Mike Ravine's interview linked in this article. The sensor choice was a compromise so that the same detector could be used on all of the rover's cameras, and the performance requirements were pretty much frozen in the original 2004 proposal.
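It also helps to note that the composite isn't a single 2 MP frame: 55 overlapping frames are stitched into a mosaic, so the composite resolution is roughly the per-frame pixel count times the frame count, minus overlap. A rough estimate, with the overlap fraction assumed for illustration:

```python
# Rough upper bound on stitched-mosaic resolution.
frames = 55          # number of MAHLI frames in the composite (from the article)
frame_mp = 2.0       # megapixels per frame (MAHLI is roughly a 2 MP sensor)
overlap = 0.3        # assumed fractional overlap between adjacent frames

effective_mp = frames * frame_mp * (1 - overlap)
print(f"~{effective_mp:.0f} MP composite")   # ~77 MP with these assumptions
```

The exact number depends on how much the frames overlap, but the point stands: dozens of small frames stitch into one genuinely high-resolution image.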