Microsoft has publicly released Photosynth, a way of combining conventional images to create 3D scenes. After you upload a set of images, the software analyzes each for similarities to the others, and then uses this data to build a model of where the photos were taken. A viewer can then browse through the resulting scene, navigating smoothly and zooming in on tiny details.

What's the idea?

Humans are able to perceive depth by instinctively calculating how perspective shifts with the offset between their eyes. Conventional digital cameras, with a single viewpoint on the world, cannot do this. Photosynth identifies common features in multiple photographs and uses them to work out how the images relate to one another. It then uses this information to build up a 3D map of how the features in the images, and the positions of the cameras that took them, relate to one another.
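The geometric core of this process can be sketched in a few lines. Once the same feature has been matched in two photographs whose camera positions are known, its 3D location follows by triangulation. This is a minimal illustration of that idea, not Photosynth's actual code; the camera parameters and the feature point are invented for the example.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point seen in two images.
    P1, P2: 3x4 camera projection matrices; x1, x2: 2D pixel coordinates
    of the same matched feature in each image."""
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The homogeneous 3D point is the null vector of A.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # dehomogenize

def project(P, X):
    """Project a 3D point through a camera matrix to pixel coordinates."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Two toy cameras sharing intrinsics K: one at the origin,
# one shifted one unit along the x axis.
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

X_true = np.array([0.5, -0.2, 4.0])  # an invented 'matched feature' in 3D
X_est = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
```

In a full structure-from-motion pipeline like Photosynth's, the camera matrices are unknown and must themselves be estimated from many such matches, then refined jointly with the points; the triangulation step above is just the simplest piece of that machinery.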

The software can combine images shot specifically with the creation of 'Synths' in mind, or mix images taken at different times, dates and resolutions.

It is the first public use of Microsoft's much-hyped and rather strikingly named 'Seadragon' technology. The accompanying Photosynth blog provides some of the background. The team has provided video and PDF instructions for creating your own 'Synths,' including a guide to subjects and photographs that will be considered 'Synthy' and those considered 'Not Synthy.' (It turns out that Venice is distinctive and feature-rich enough to be Synthy, while the Seattle Public Library remains stubbornly resistant to Synthing.)
