Researchers with the University of California, Santa Barbara (UCSB) and NVIDIA have detailed a new technique called 'computational zoom' that can be used to adjust an image's apparent focal length and perspective after it has been taken. The technique is described in a recently published technical paper, as well as a video (above) that shows it in action. With it, photographers can tweak an image's composition during post-processing.

According to UCSB, computational zoom technology can, at times, allow for the creation of 'novel image compositions' that can't be captured using a physical camera. One example is the generation of multi-perspective images featuring elements from photos taken using a telephoto lens and a wide-angle lens.
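One way to picture such a multi-perspective composite is as a per-pixel choice between views: foreground pixels keep the wide-angle rendering while distant pixels take on telephoto compression. The sketch below is a simplified illustration of that idea, not the paper's actual synthesis pipeline (which works from a reconstructed 3D scene); the depth threshold and toy pixel values are assumptions for demonstration.

```python
def multi_perspective_blend(wide, tele, depth, threshold):
    """Pick each pixel from the wide-angle view if it is nearer than
    `threshold`, otherwise from the telephoto view.

    `wide`, `tele`, and `depth` are same-sized 2D lists (rows of
    pixels); pixel values are plain numbers here for simplicity.
    """
    return [
        [w if d < threshold else t
         for w, t, d in zip(wide_row, tele_row, depth_row)]
        for wide_row, tele_row, depth_row in zip(wide, tele, depth)
    ]

# Toy 2x3 frames: the nearby subject (depth 2 m) keeps its wide-angle
# look, while the distant backdrop (depth 40 m) is drawn from the
# telephoto view.
wide  = [[1, 1, 1], [1, 1, 1]]
tele  = [[9, 9, 9], [9, 9, 9]]
depth = [[2, 2, 40], [2, 40, 40]]
print(multi_perspective_blend(wide, tele, depth, threshold=10))
# -> [[1, 1, 9], [1, 9, 9]]
```

The real system does not rely on a hard depth cutoff; it interpolates smoothly between viewpoints, but the per-region selection of perspective is the key idea.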

To use the technology, photographers capture what the researchers call a 'stack' of images, each taken slightly closer to the subject while the focal length remains unchanged. An algorithm then estimates the camera's orientation and position for each image in the stack, and the system uses those estimates to build a 3D rendition of the scene with multiple views.
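Why a fixed-focal-length stack is enough follows from the pinhole camera model: an object's on-sensor size depends only on the ratio of focal length to distance, so images captured at different distances carry the perspective variation that different focal lengths would. A minimal sketch (the specific focal lengths, distances, and object sizes are illustrative assumptions, not values from the paper):

```python
def projected_size_mm(object_height_m, distance_m, focal_length_mm):
    """On-sensor height (mm) of an object under a pinhole camera model:
    size = focal_length * object_height / distance."""
    return focal_length_mm * object_height_m / distance_m

# Hypothetical scene: a 1.8 m subject at 3 m, a 20 m building at 50 m.
wide_subject = projected_size_mm(1.8, 3.0, 35.0)    # subject, 35 mm lens
wide_bg      = projected_size_mm(20.0, 50.0, 35.0)  # backdrop, 35 mm lens

# Step back 3 m and double the focal length: the subject's size is
# unchanged (f/d stays constant), but the background grows -- the
# "dolly zoom" compression that computational zoom lets you dial in
# after capture.
tele_subject = projected_size_mm(1.8, 6.0, 70.0)
tele_bg      = projected_size_mm(20.0, 53.0, 70.0)

print(wide_subject, tele_subject)  # 21.0 21.0 -- identical subject size
print(wide_bg, tele_bg)            # background appears larger in tele view
```

Because the stack samples many camera positions, the reconstructed 3D scene can be re-projected as if it had been shot at any of these focal-length/distance combinations, or even a mixture of them across the frame.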

"Finally," UCSB researchers explain, "all of this information is used to synthesize multi-perspective images which have novel compositions through a user interface."

The end result is the ability to change an image's composition in real time using the software: bringing a photo's background seemingly closer to the subject or pushing it farther away, as well as tweaking the perspective from which the scene is viewed. According to UCSB, computational zoom may make its way into commercial image-editing software; the team hopes to make it available to photographers in the form of plug-ins.