During its Adobe MAX 2019 event, Adobe announced its Content Authenticity Initiative (CAI), the first mission of which is to develop a new standard for content attribution. 'We will provide a layer of robust, tamper-evident attribution and history data built upon XMP, Schema.org and other metadata standards that goes far beyond common uses today,' the company explains in a new white paper about the initiative.

The idea behind Adobe's CAI is that there's no single, simple, and permanent way to attach attribution data to an image, making it hard for viewers to see who owns the image and the context surrounding its subject matter. This paves the way for image theft, as well as the spread of misinformation and disinformation, a growing problem on the modern Internet.

Adobe's new industry standard for digital content attribution, which was announced in collaboration with Twitter and The New York Times, will potentially change this, adding a level of trust in content that may otherwise be modified or presented with an inauthentic context on social media and elsewhere.

Adobe said in November 2019 that it had a technical team:

...exploring a high-level framework architecture based on our vision of attribution, and we are inviting input and feedback from industry partners to help shape the final solution. The goal of the Initiative is for each member to bring its deep technical and business knowledge to the solution. Success will mean building a growing ecosystem of members who are contributing to a long-term solution, adoption of the framework and supporting consumers to understand who and what to trust.

The newly published white paper titled 'The Content Authenticity Initiative: Setting the Standard for Digital Content Attribution' explains how this new digital content attribution system will work.

The team cites a number of 'guiding principles' in the initiative, including the ability for their specifications to fit in with existing workflows, interoperability for 'various types of target users,' respect for 'common privacy concerns,' an avoidance of unreasonable 'technical complexity and cost' and more. Adobe expects a variety of users will utilize its content attribution system, including content creators, publishers and consumers, the latter of which may include lawyers, fact-checkers and law enforcement.

The team provides examples of the potential uses for its authenticity system in various professions. For photojournalists, for example, the workflow may include capturing content at a press event using a 'CAI-enabled capture device,' then importing the files into a photo editing application that has 'CAI functionality enabled.'

Having preserved those details during editing, the photojournalist can then pass on the images to their editor, triggering a series of content verifications and distribution to publications, social media managers and social platforms, all of which will, ideally, support displaying not only the CAI information but also any alterations made to the content (cropping, compression, etc.).

The idea is that at all times during its distribution across the Internet, anyone will be able to view the details about the image's origination, including who created it, what publication originally published the image, when the photo was captured, what modifications may have been made to the image and more.

The white paper goes on to detail other potential creation-to-distribution pipelines for creative professionals and human rights activists.

What about the system itself? The researchers explain that:

The proposed system is based on a simple structure for storing and accessing cryptographically verifiable metadata created by an entity we refer to as an actor. An actor can be a human or non-human (hardware or software) that is participating in the CAI ecosystem. For example: a camera (capture device), image editing software, or the person using such tools.

The CAI embraces existing standards. A core philosophy is to enable rapid, wide adoption by creating only the minimum required novel technology and relying on prior, proven techniques wherever possible. This includes standards for encoding, hashing, signing, compression and metadata.

Each process in the creator's workflow, such as capturing the image and then editing it, produces 'assertions' as part of the CAI system. According to the white paper, these assertions are typically JSON-based data structures that reference declarations made by the actor, which can be a human or a machine, including hardware like cameras and software like Photoshop.
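To make this concrete, an assertion of this sort might look like the following sketch. The field names (`label`, `actor`, `data`) are illustrative assumptions for this article, not the actual CAI schema:

```python
import json

# Illustrative sketch of a JSON-based assertion. The field names here
# are assumptions for illustration, not the actual CAI schema.
assertion = {
    "label": "photo.capture",      # what action the actor performed
    "actor": "ExampleCam X100",    # hypothetical capture device
    "data": {
        "captured_at": "2019-11-04T10:15:00Z",
        "gps": None,               # omitted, reflecting the privacy principle
    },
}

# Serialize deterministically so the same bytes can later be hashed.
serialized = json.dumps(assertion, sort_keys=True, separators=(",", ":"))
print(serialized)
```

Deterministic serialization matters here: hashing only works as a tamper check if every party produces the same bytes for the same assertion.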

The researchers go on to explain that:

Assertions are cryptographically hashed and their hashes are gathered together into a claim. A claim is a digitally signed data structure that represents a set of assertions along with one or more cryptographic hashes on the data of an asset. The signature ensures the integrity of the claim and makes the system tamper-evident. A claim can be either directly or indirectly embedded into an asset as it moves through the life of the asset.
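The hash-and-sign step described above can be sketched as follows, assuming SHA-256 for hashing. An HMAC with a shared key stands in for the digital signature; a real deployment would presumably use asymmetric signatures tied to the actor's identity:

```python
import hashlib
import hmac
import json

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Hypothetical assertions produced during the workflow.
assertions = [
    b'{"label":"photo.capture","actor":"ExampleCam X100"}',
    b'{"label":"photo.edit","actor":"Example Editor 1.0"}',
]
asset_bytes = b"...raw image data..."  # stand-in for the asset itself

# Gather the assertion hashes, plus a hash of the asset data, into a claim.
claim = {
    "assertion_hashes": [sha256_hex(a) for a in assertions],
    "asset_hash": sha256_hex(asset_bytes),
}
claim_bytes = json.dumps(claim, sort_keys=True).encode()

# Sign the claim. HMAC with a shared key is only a stand-in for the
# asymmetric signature a real system would use.
SIGNING_KEY = b"example-key"
signature = hmac.new(SIGNING_KEY, claim_bytes, hashlib.sha256).hexdigest()

# Verification recomputes the signature and compares: any change to the
# claim bytes yields a different value, which is what makes the scheme
# tamper-evident.
recomputed = hmac.new(SIGNING_KEY, claim_bytes, hashlib.sha256).hexdigest()
assert hmac.compare_digest(signature, recomputed)
```

The point of the indirection is that a verifier never needs the original assertions in hand to detect tampering with the claim; altering either an assertion hash or the asset hash invalidates the signature.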

For every lifecycle milestone for the image, such as its creation or publication, the authenticity system will create a new set of assertions and an associated claim, with each claim daisy-chaining off the previous one to create something like a digital paper trail for the work.
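That daisy-chaining can be sketched by having each new claim record the hash of the claim before it, so the full history can be walked and checked link by link. Again, this is an illustrative sketch, not the actual CAI data format:

```python
import hashlib
import json

def make_claim(assertions, prev_claim_bytes=None):
    """Build a claim that chains to the previous claim via its hash."""
    claim = {
        "assertion_hashes": [hashlib.sha256(a).hexdigest() for a in assertions],
        "prev_claim_hash": (
            hashlib.sha256(prev_claim_bytes).hexdigest()
            if prev_claim_bytes else None
        ),
    }
    return json.dumps(claim, sort_keys=True).encode()

# Hypothetical lifecycle milestones, each adding a claim linked to the last.
capture = make_claim([b'{"label":"photo.capture"}'])
edit = make_claim([b'{"label":"photo.edit"}'], prev_claim_bytes=capture)
publish = make_claim([b'{"label":"photo.publish"}'], prev_claim_bytes=edit)

# Walking the chain: each link must hash to the value its successor records.
assert json.loads(publish)["prev_claim_hash"] == hashlib.sha256(edit).hexdigest()
assert json.loads(edit)["prev_claim_hash"] == hashlib.sha256(capture).hexdigest()
```

Because each claim commits to the hash of its predecessor, rewriting any earlier step would break every later link in the chain, which is what gives the paper trail its integrity.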

Of course, there are potential issues with Adobe's vision for content authentication, the most obvious being whether the industry is willing to adopt this system as a new standard. The CAI digital content attribution system will only succeed if major hardware and software companies implement the standard into their products. Beyond that, social media platforms would need to join the effort to ensure these permanent attribution and modification details are accessible to users.

Beyond adoption, Adobe's system will have to achieve its most ambitious goal, which is to be tamper-proof, something that has yet to be demonstrated. Work under the initiative is still underway; interested readers can find all of the technical details in the white paper linked above.