Anyone who manages a large image library knows how important keywording and captioning are for categorizing images and keeping them searchable. They also know how time-consuming these tasks can be. That's where artificial intelligence may be able to lend a hand, though: the updated version of Google’s trainable 'Show and Tell' algorithm, which has just been made open source, can now describe the contents of an image with an impressive 93.9% accuracy.

Google's model generates new captions using concepts learned from the pre-captioned images in its training set.

According to an article on the Google Research Blog, the updated algorithm is faster to train and produces more detailed descriptions. The Google researchers trained 'Show and Tell' by showing it pre-captioned images of a specific scene, teaching it to accurately caption similar scenes without any human help. By making 'Show and Tell' open source, Google aims to promote research in the field of image recognition.
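The basic idea behind caption generation of this kind can be illustrated with a toy sketch: a decoder repeatedly combines an image feature vector with the previously emitted word and greedily picks the highest-scoring next word until it emits an end token. This is only a minimal, hand-weighted illustration of the general encoder-decoder captioning approach; the vocabulary, weight matrices, and feature layout below are invented for the example and bear no relation to Google's actual model, which learns its parameters from data.

```python
import numpy as np

# Hypothetical toy vocabulary; a real model learns tens of thousands of words.
VOCAB = ["<start>", "<end>", "a", "dog", "on", "grass", "ball"]

# Hand-set "transition" weights standing in for what training would learn:
# each row says which words tend to follow the previous word.
W_word = np.zeros((7, 7))
W_word[0, 2] = 2   # <start> -> "a"
W_word[2, 3] = 1   # "a" -> "dog" (base score, image feature breaks the tie)
W_word[2, 6] = 1   # "a" -> "ball"
W_word[3, 4] = 2   # "dog" -> "on"
W_word[4, 5] = 2   # "on" -> "grass"
W_word[5, 1] = 2   # "grass" -> <end>
W_word[6, 1] = 2   # "ball" -> <end>

# Toy image features: [dog-ness, ball-ness] mapped onto vocabulary scores.
W_img = np.zeros((2, 7))
W_img[0, 3] = 1    # dog-like images boost the word "dog"
W_img[1, 6] = 1    # ball-like images boost the word "ball"

def greedy_caption(image_feat, max_len=6):
    """Greedily decode a caption: at each step, add the image-feature
    scores to the previous word's transition scores and take the argmax."""
    words = ["<start>"]
    for _ in range(max_len):
        prev = np.eye(len(VOCAB))[VOCAB.index(words[-1])]  # one-hot prev word
        scores = image_feat @ W_img + prev @ W_word        # toy linear decoder
        words.append(VOCAB[int(np.argmax(scores))])
        if words[-1] == "<end>":
            break
    return " ".join(words[1:-1] if words[-1] == "<end>" else words[1:])

print(greedy_caption(np.array([1.0, 0.0])))  # -> "a dog on grass"
print(greedy_caption(np.array([0.0, 1.0])))  # -> "a ball"
```

Note how the same decoder produces different captions for different image features, which is the essence of conditioning language generation on image content; the real system replaces the hand-set linear scores with a deep convolutional encoder feeding a recurrent decoder.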

After the update, the model provides more detailed descriptions and is more likely to mention colors.