Adobe researchers have developed a neural network that can identify Photoshopped images. The technology was detailed in a newly published study [PDF], which points out that altered parts of an image are often difficult for humans to notice. However, differences between the original image and edited elements typically persist despite attempts to obfuscate them, such as applying a Gaussian blur, and machines can be trained to spot those discrepancies.

Various differences may exist between original and edited image elements, such as different noise patterns and contrast levels. Manual adjustments to these edited elements can make them virtually indistinguishable to the human eye. Adobe's neural network, however, can not only identify these changes, but also determine the type of tampering technique used to edit the image.
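To make the idea concrete, here is a minimal sketch (not from the study) of how a mismatch in noise patterns can be measured: it estimates each patch's noise as the spread of its high-frequency residual, using hypothetical synthetic data in which a spliced-in region has had its noise smoothed away.

```python
import numpy as np

def patch_noise_level(patch):
    """Estimate noise in a patch as the standard deviation of its
    high-frequency residual (the patch minus a 3x3 box-filter mean)."""
    padded = np.pad(patch, 1, mode="edge")
    smooth = sum(
        padded[dy:dy + patch.shape[0], dx:dx + patch.shape[1]]
        for dy in range(3) for dx in range(3)
    ) / 9.0
    return float(np.std(patch - smooth))

rng = np.random.default_rng(0)
# Hypothetical data: a flat background with sensor-like noise (sigma ~ 8)
# containing a spliced-in region whose noise was smoothed away (sigma ~ 1).
image = 128 + rng.normal(0, 8, size=(64, 64))
image[16:48, 16:48] = 128 + rng.normal(0, 1, size=(32, 32))

background = patch_noise_level(image[0:16, 0:16])
spliced = patch_noise_level(image[24:40, 24:40])
print(background > 2 * spliced)  # the spliced patch is markedly "quieter"
```

A human viewer would see two equally flat gray regions; the statistic separates them cleanly, which is the kind of discrepancy the network learns to exploit.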

The system involves a two-stream Faster R-CNN network trained end to end to identify manipulated images. The first stream, called the RGB stream, looks for visual tampering artifacts, including strong contrast differences and unnatural tampered boundaries. The second, called the noise stream, looks for inconsistencies in the image's noise to locate edited elements.
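The study derives the noise stream's input from high-pass filters taken from steganalysis rich models (SRM). As a rough illustration, the sketch below applies one such SRM-style kernel to a grayscale image to produce a noise-residual map; the surrounding Faster R-CNN machinery is omitted entirely.

```python
import numpy as np

# A 5x5 high-pass kernel of the kind used in steganalysis rich models
# (SRM). Its weights sum to zero, so smooth image content cancels out
# and only noise-like high-frequency structure survives.
SRM_KERNEL = np.array([
    [-1,  2,  -2,  2, -1],
    [ 2, -6,   8, -6,  2],
    [-2,  8, -12,  8, -2],
    [ 2, -6,   8, -6,  2],
    [-1,  2,  -2,  2, -1],
], dtype=float) / 12.0

def noise_residual(gray, kernel=SRM_KERNEL):
    """Valid-mode 2-D convolution of a grayscale image with a
    high-pass kernel, yielding a noise-residual map."""
    kh, kw = kernel.shape
    h, w = gray.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(gray[y:y + kh, x:x + kw] * kernel)
    return out

# A perfectly flat region produces a zero residual, since the kernel
# responds only to deviations from the local trend.
flat = np.full((16, 16), 100.0)
print(np.allclose(noise_residual(flat), 0))
```

In the actual system, residual maps like this one feed a convolutional branch, so the network sees noise structure rather than image content.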

In the study, the researchers explain:

We then fuse features from the two streams through a bilinear pooling layer to further incorporate spatial co-occurrence of these two modalities. Experiments on four standard image manipulation datasets demonstrate that our two-stream framework outperforms each individual stream, and also achieves state-of-the-art performance compared to alternative methods with robustness to resizing and compression.
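The bilinear pooling the researchers mention can be sketched in a few lines. This toy version (assumed shapes and random stand-in features, not the paper's in-network implementation) takes the outer product of the two streams' feature vectors at each spatial location, sums over locations, and flattens the result into one fused descriptor.

```python
import numpy as np

def bilinear_pool(rgb_feat, noise_feat):
    """Illustrative bilinear pooling: at each spatial location take the
    outer product of the two streams' feature vectors, then sum over
    locations and flatten.

    rgb_feat:   array of shape (H, W, C1), RGB-stream features.
    noise_feat: array of shape (H, W, C2), noise-stream features.
    Returns a fused vector of length C1 * C2.
    """
    c1 = rgb_feat.shape[2]
    c2 = noise_feat.shape[2]
    # einsum computes sum over h, w of the per-location outer products.
    fused = np.einsum("hwi,hwj->ij", rgb_feat, noise_feat)
    return fused.reshape(c1 * c2)

rng = np.random.default_rng(1)
a = rng.normal(size=(4, 4, 3))   # stand-in for RGB-stream RoI features
b = rng.normal(size=(4, 4, 5))   # stand-in for noise-stream RoI features
v = bilinear_pool(a, b)
print(v.shape)  # (15,)
```

The appeal of this fusion is that every pair of channels across the two modalities interacts at the same spatial position, which is what "spatial co-occurrence" refers to in the quote above.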

Such technology could prove useful for verifying the authenticity of images used in photojournalism, photography contests, and similar situations.