The tools and methods we evaluated are at the frontier of image standards, encryption, and web publishing. Some are mature and gaining adoption; others are still in beta, making us among the first users to test them.
If one thinks of the steps from taking a photo to publishing it as a journey, it becomes easier to understand how each step can form a chain of authenticity from the field to the digital front page of a newspaper website.
Following the Starling Framework, we break this journey into three steps: capture, store, and verify.
From there, photos are published on a variety of platforms. In Reuters’ case, photos are syndicated to publications that reach millions of readers around the world each day.
The challenge for all news organizations, and especially for Reuters as a wire service, is that even when photos carry rich metadata, this data is often stripped out by social media platforms and publishers for security reasons, or buried within a file where users cannot reach it. As photos spread across the web without information about their provenance, important context is lost, giving rise to confusion or manipulation.
Some of that is starting to change.
Over the last year, members of the Starling Lab helped advise the Content Authenticity Initiative (CAI), spearheaded by Adobe, the New York Times, and Twitter. This growing consortium of partners seeks to create a new attribution standard and user interface that allow anyone to access the information behind each step of producing photo and video content.
The CAI standard builds upon long-standing photo metadata projects, like EXIF, IPTC, and XMP. Metadata created about each image is securely stored directly within the photo itself using the new JPEG universal metadata box format (JUMBF).
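To make concrete where this metadata lives, here is a minimal Python sketch that scans a JPEG byte stream for APP11 marker segments, the segments in which JUMBF boxes are typically embedded. The helper name and the toy payload are our own illustrations, not part of the CAI tooling, and a production parser would also handle multi-segment boxes.

```python
import struct

def find_app11_segments(jpeg_bytes):
    """Return the payloads of APP11 (0xFFEB) marker segments,
    where JUMBF-boxed metadata is typically embedded in a JPEG."""
    segments = []
    i = 2  # skip the SOI marker (0xFFD8)
    while i + 4 <= len(jpeg_bytes):
        if jpeg_bytes[i] != 0xFF:
            break  # not a marker; stop scanning
        marker = jpeg_bytes[i + 1]
        if marker in (0xD9, 0xDA):  # EOI or start of scan data
            break
        # segment length is big-endian and includes its own 2 bytes
        length = struct.unpack(">H", jpeg_bytes[i + 2:i + 4])[0]
        payload = jpeg_bytes[i + 4:i + 2 + length]
        if marker == 0xEB:  # APP11
            segments.append(payload)
        i += 2 + length
    return segments
```

Because the metadata rides inside ordinary marker segments, the photo remains a valid JPEG for any viewer, while CAI-aware tools can locate and decode the embedded boxes.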
All the data embedded in the photo is accessed through a lightweight browser-based application that keeps the user in control of the experience. Rather than relying on a cloud-based service that centralizes authority, the CAI standard turns each photo into a trusted, self-sufficient container of information. If users elect, each addition or modification of that information can be embedded in the file, forming a secure chain of custody.
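The chain-of-custody idea can be sketched as a simple hash chain, in which each new entry commits to the hash of the entry before it, so any later tampering is detectable. The field names and JSON encoding below are illustrative assumptions for this sketch, not the CAI's actual claim format, which also involves cryptographic signatures.

```python
import hashlib
import json

def add_entry(chain, action, author):
    """Append a custody entry that commits to the previous entry's hash.
    Field names are illustrative, not the CAI claim format."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    entry = {"action": action, "author": author, "prev": prev_hash}
    # hash the entry body with a canonical (sorted-key) encoding
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()).hexdigest()
    chain.append(entry)
    return chain

def verify(chain):
    """Recompute every hash and link; any edit breaks the chain."""
    prev = "0" * 64
    for entry in chain:
        body = {k: v for k, v in entry.items() if k != "hash"}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev"] != prev or entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True
```

For example, recording a capture and then a crop yields a chain that verifies; silently rewriting the first entry's author afterward makes verification fail.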
In advance of the presidential transition, Adobe gave us early access to technical specs, early web tools, and a private beta of Photoshop that formed the first end-to-end signal flow of the CAI standard and enabled us to test in the field.
Our teams implemented the CAI rapidly by building upon open-source prototypes we had already deployed with Reuters to cover the California presidential primary in March 2020. In total, we tested over half a dozen technologies in this case study, each orchestrated to leverage its strengths and together forming a comprehensive prototype.
The CAI’s whitepaper sets out both the technology and the principles that shape the initiative. In helping draft that document, we saw Starling’s role as representing our lab’s experience working on human rights accountability and supporting journalists. As such, we were pleased to support the CAI in keeping its standards and tools open source, globally accessible, and privacy-preserving.
Most importantly, the CAI standard can integrate a suite of authentication technologies that provide an alternative to the mainstream internet infrastructure of today. These decentralized, open-source platforms, a generation of technology often called the decentralized web or dWeb, gain an important proof point with the CAI.
The standard makes it possible for each photo to include embedded metadata and even fact-checking information to help, as analysts at Amnesty International put it, “lower uncertainty” about the image’s veracity. Together these self-contained nodes form a decentralized knowledge graph of information, ensuring that no single entity can control or manipulate the data.
That’s essentially what is new here. The open-source approach we present works because it enables trust in facts to emerge from a plurality of experts. Decentralization is a potent strategy precisely because it lets the most diverse set of voices create clarity.
Conspiracies break apart when an overwhelming number of different viewpoints dispel them.
The goal of resetting our era of disinformation, therefore, begins by bringing as many different people to the table to advance the cause of transparent and fair reporting.
For our team, the stakes remain high. Image authentication is powerful but also still vulnerable. It can be abused and misused in just as many ways as it can strengthen trust in journalism.
The Starling Lab’s venerable collaborator Witness, an international nonprofit organization that helps people use video and technology to protect and defend human rights, points this out in its path-breaking report on authenticated media: the problems start the moment the technology “becomes an obligation, not a choice.” Image authentication can easily become a tool of surveillance that unmasks the identity of vulnerable sources and photographers, or a way to exclude or chill voices.
The introduction of image authentication technology therefore needs to be carefully considered. As the New York Times found in its News Provenance Project with IBM, the implications, significance, and relevance of metadata are not self-evident or even fully understood.
Practitioners and readers need to be guided through a transition that showcases the promise of the technology without unwinding trust in the brave and accurate news reporting that today relies on unencrypted images. Encrypted and unencrypted images are not necessarily more or less authentic; each has its own distinct integrity and vulnerabilities. Both still rely on human photographers and human publishers to deploy them in good faith. Both require readers to be discerning and to weigh various sources in coming to their own conclusions.