Image: Google

We have seen several attempts at automated image assessment from both technical and aesthetic points of view in the past. For example, Google researchers have previously used convolutional neural networks (CNNs) to assess image quality of specific image categories, such as landscapes.

However, these previous approaches could typically only differentiate between low and high image quality in a binary way. Now, a Google research team has developed a methodology that can provide a more granular assessment of the quality of a photograph that is applicable to all types of images.

The NIMA (Neural Image Assessment) model uses a deep CNN trained to predict which images a typical user would rate as technically good or aesthetically pleasing, and uses that information to score an image on a scale of 1 to 10.
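A model like this typically outputs a distribution over the possible ratings rather than a single number, and the final score is the mean of that distribution. As a minimal sketch (the exact NIMA formulation is not spelled out here, so the details are an assumption):

```python
import numpy as np

def mean_score(probs):
    """Collapse a predicted distribution over ratings 1..10 into one score.

    probs: length-10 array of probabilities, one per rating bucket.
    Assumed formulation: the image's score is the distribution's mean.
    """
    probs = np.asarray(probs, dtype=float)
    ratings = np.arange(1, 11)              # rating buckets 1..10
    return float(np.dot(ratings, probs / probs.sum()))

# A distribution peaked around ratings 7-8 yields a score near 7.
print(mean_score([0.0, 0.0, 0.01, 0.04, 0.1, 0.15, 0.3, 0.3, 0.08, 0.02]))
```

Averaging over the whole distribution, rather than taking the most likely rating, lets the score reflect how much raters would disagree about an image.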

To achieve this, it relies on state-of-the-art deep object recognition networks and uses them to develop an understanding of general categories of objects. As a result, NIMA can score images reliably and with high correlation to human perception, which makes it a potentially very useful tool for labor-intensive and subjective tasks, such as automated image editing or image optimization for user engagement.
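Building on a pretrained object-recognition network usually means keeping its learned features and training only a small new output head for the rating task. The sketch below illustrates that transfer-learning pattern in schematic form; the feature extractor here is a hypothetical stand-in, not Google's actual network:

```python
import numpy as np

rng = np.random.default_rng(0)

def backbone_features(image):
    """Hypothetical stand-in for a pretrained object-recognition CNN.

    A real system would run the image through the network and keep the
    activations of its penultimate layer as a feature vector.
    """
    return rng.standard_normal(128)

def softmax(z):
    z = z - z.max()                 # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

# Only this small head would be trained on quality ratings; the backbone's
# object-recognition features are reused as-is (transfer learning).
head_weights = rng.standard_normal((10, 128)) * 0.01
head_bias = np.zeros(10)

def predict_rating_distribution(image):
    """Map an image to a probability distribution over ratings 1..10."""
    feats = backbone_features(image)
    return softmax(head_weights @ feats + head_bias)

probs = predict_rating_distribution(image=None)
print(probs.shape)                  # one probability per rating bucket
```

Reusing features learned for object recognition is what lets a single model generalize across image categories instead of being trained per category, as earlier approaches were.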

The NIMA team says that, in testing, the model's aesthetic ranking of images closely matches the mean scores assigned by human judges. What's more, the technology is still in its infancy; further training and testing should improve the model. As such systems mature, future applications could include image capture with real-time feedback to the photographer, auto-culling, or guidance for image editors to achieve optimized post-processing results.

More details on this fascinating new system are available on the Google Research Blog.