I’m soliciting prompts for discussion. This piece is a part of that series.
It seems like the last 200 years or so, when we could use recorded media (photographs, audio, video) as evidence or proof of anything, may have been a brief, glorious aberration: a detour in the timeline. Barely a blink of an eye, relative to the full history of civilization. Nice while it lasted; maybe it's over now.
What does that mean? If true, how will we adapt? What techniques for evidence and proof from the pre-recorded-media era will we return to? What new techniques will we find, or need?
I’ll start by asking: could we ever really do that? Or to put it another way: were the assumptions we made about the trustworthiness of recorded media ever warranted?
One of the most famous practitioners of photo manipulation to alter the historical record was Stalin, who routinely had people he deemed enemies of the state edited out of photographs. Portraits of the leader that hung in people’s homes were retouched to be more to his liking.
A few years later, the artist Yves Klein staged photographs, most famously Leap into the Void, that appeared to show him hurling himself off a building. Obviously, they weren’t real: his intent was to demonstrate that the theatre of the future could be an empty room; arguably an accurate demonstration of our present.
Later still, a photo of Obama shaking hands with the President of Iran circulated widely on Republican social media — despite the fact that the event never happened.
And there are so many more. As the Guardian wrote a few years ago about Photoshop:
> In fact, the lesson of the earliest fake photos is that technology does not fool the human eye; it is the mind that does this. From scissors and glue to the latest software, the fabrication of an image only works because the viewer wants it to work. We see what we wish to see.
Sometimes, editing wasn’t even necessary. President Roosevelt tried to hide his disability by having the Secret Service rip the film out of the camera of anyone caught photographing him in his wheelchair. Countless shorter men in the public eye (Tom Cruise, for example) have disguised their height on camera by standing on boxes or having their counterparts stand in a hole.
Of course, the latest deepfake technology and generative AI make it cheaper and easier to create this kind of impossible media. The practice isn’t new, but it will become more prolific and more naturalistic than ever before.
The Brookings Institution points out that in addition to the proliferation of disinformation, there will be two more adverse effects:
- Exhaustion of critical thinking: “it will take more effort for individuals to ascertain whether information is true, especially when it does not come from trusted actors.”
- Plausible deniability: genuine recordings can be dismissed as fakes, so accusations of impropriety will be more easily deflected.
Trusted actors, of course, are those we already know and rely on. Most people will not think the New York Times is faking its images. So another adverse effect will be that new sources find it harder to be taken seriously, which will particularly hurt sources from disadvantaged or underrepresented groups. For the same reason, maintaining a list of “approved” sources that we can trust is not a real solution to this problem. Beyond effectively censoring new and underrepresented voices, who could possibly maintain such a list reliably? And what would prevent its maintainers from dismissing factual information they don’t like as disinformation?
Regarding plausible deniability: even without deepfakes, we’re already learning that many forensic evidence techniques were more limited than we were led to believe. Bite-mark analysis, hair comparison, and blood-spatter analysis, all commonly used in criminal cases, have been shown to have a limited scientific basis and to have often been misapplied. An artifact by itself is almost never enough to prove something true; we have to ask more questions.
Context is a useful tool here. If a public figure is shown to have said something, for example, are there corroborating sources? Were there multiple independent eyewitnesses? Does all the surrounding coverage trace back to this one artifact, or are there other, independent stories drawn from separately recorded evidence?
So the real change will need to come in how we analyze sources. We’ve been trained to be consumers of information: to trust what’s on the page or on the screen. As I tried to explain at the beginning, that approach always left us open to exploitation. There is no text that should not be questioned; no source that cannot be critically examined.
Generally, I think the Guardian’s observation holds true: we see what we wish to see. The truth will have plausible deniability. We will need more information.
To be sure, technological solutions are also useful, although they will be part of an arms race. Intel claims to have a deepfake detector that works with 96% accuracy, which will be true until the inferred blood-flow signals it relies on can also be accurately faked (if that hasn’t happened already). Researchers at the University of Florida experimented with detecting audio deepfakes by modeling the human vocal tract. Again, we can expect deepfake technology to improve until it surpasses this kind of detection, and regardless, we still have to worry about the impact of false positives. We should also be wary of any incentive to recreate a situation in which we unquestioningly accept a source just because a detector has cleared it.
> Even if a quiver of detectors can take down deepfakes, the content will have at least a brief life online before it disappears. It will have an impact. […] Technology alone can’t save us. Instead, people need to be educated about the new, nonreality-filled reality.
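For the technically curious, here’s a minimal sketch of the idea behind blood-flow-based detection like Intel’s, written in Python. To be clear, this is not Intel’s FakeCatcher or anyone’s production code: the function names, the region of interest, and every threshold are assumptions made purely for illustration.

```python
# Toy illustration of blood-flow ("remote photoplethysmography") screening.
# Real skin shows a faint periodic colour change at the heart rate; many
# synthetic faces lack a coherent pulse signal. All names and numbers here
# are illustrative assumptions, not a real detector.
import numpy as np

def rppg_signal(frames, roi):
    """Average the green channel over a (hypothetical) skin region in each frame."""
    y0, y1, x0, x1 = roi  # assumed bounding box over the forehead or cheek
    return np.array([frame[y0:y1, x0:x1, 1].mean() for frame in frames])

def looks_like_real_pulse(signal, fps, low_hz=0.7, high_hz=4.0, band_ratio=0.3):
    """Crude check: does a plausible heart-rate band dominate the signal's spectrum?"""
    signal = signal - signal.mean()
    power = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    in_band = (freqs >= low_hz) & (freqs <= high_hz)
    return power[in_band].sum() / (power.sum() + 1e-9) > band_ratio

# Usage sketch: `frames` would be a list of H x W x 3 RGB arrays from a video.
# flagged = not looks_like_real_pulse(rppg_signal(frames, roi), fps=30)
```

The arms-race point follows directly from a sketch like this: anything a detector measures, a sufficiently motivated generator can learn to reproduce.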
We will need to use all the tools at our disposal — contextual, social, and technological — to determine whether something is a true record, representative of the truth, or an outright lie. We always had to do this, but most of us didn’t. Now technology has forced our hand.