Imagine a computer system automatically cataloging your photographs with vivid descriptions as soon as you download them. This is exactly what Google is working on at its Research Labs in Mountain View, California. The search engine giant's latest algorithm is able to 'produce captions to accurately describe images the first time it sees them'.

Google's new image identification technology produces coherent (and often surprisingly accurate) sentences describing a photo's subjects, rather than individual words or tags. 

The team of four scientists, comprising Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dumitru Erhan, aims to improve object detection, classification, and labeling with its latest advancements. Google is eyeing the technology as a way to help the visually impaired better interpret photographs and to provide captioning in areas of the world where internet connections are limited.

The technology borrows advances from the field of language translation and applies them to photographs. A vision Convolutional Neural Network analyzes an image, and rather than classifying individual objects in the photograph, a second process translates its findings into phrases. Google has provided examples in which the algorithm successfully describes 'two pizzas sitting on top of a stove top oven', or even, 'a group of people shopping at an outdoor market'.
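For readers curious how such a two-stage pipeline fits together, here is a toy sketch in Python. Everything in it is illustrative, not Google's actual model: the "vision CNN" is stood in for by a single fixed projection, the decoder is a tiny untrained recurrent network with random weights, and the vocabulary is invented, so the emitted words are arbitrary. The point is only the data flow: image features seed the decoder's state, which then emits one word at a time.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented toy vocabulary (not from the research).
VOCAB = ["<start>", "two", "pizzas", "on", "a", "stove", "<end>"]
FEAT = 8  # feature/hidden size, chosen arbitrarily

# Stand-in for the CNN encoder: raw pixels -> feature vector.
W_enc = rng.standard_normal((FEAT, 16))

def encode(image_pixels):
    return np.tanh(W_enc @ image_pixels)

# Tiny recurrent decoder with random, untrained weights.
W_h = rng.standard_normal((FEAT, FEAT)) * 0.1
W_x = rng.standard_normal((FEAT, len(VOCAB))) * 0.1
W_out = rng.standard_normal((len(VOCAB), FEAT))

def one_hot(i):
    v = np.zeros(len(VOCAB))
    v[i] = 1.0
    return v

def caption(image_pixels, max_len=6):
    h = encode(image_pixels)            # image features seed the state
    word = VOCAB.index("<start>")
    out = []
    for _ in range(max_len):
        # Each step consumes the previous word and updates the state.
        h = np.tanh(W_h @ h + W_x @ one_hot(word))
        word = int(np.argmax(W_out @ h))  # greedy: pick the likeliest word
        if VOCAB[word] == "<end>":
            break
        out.append(VOCAB[word])
    return " ".join(out)

fake_image = rng.standard_normal(16)    # a pretend 16-pixel "image"
print(caption(fake_image))
```

A trained system would learn all of these weights jointly from millions of captioned photos, which is what turns this arbitrary word stream into sentences like 'two pizzas sitting on top of a stove top oven'.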

What is probably most interesting to photographers is the potential use of this technology by cataloging applications such as Adobe Lightroom. Wedding photographers, for example, might be able to download thousands of images while the algorithm automatically detects which photographs contain shots of the bride and groom. What do you make of it? Let us know in the comments.