A fascinating New York Times (NYT) article on “deepfake” technology published on Sunday made it clear that artificial intelligence (A.I.) is already raising concerns about propaganda, identity theft, and national security.
Even in its infancy, A.I. already has the power to ruin lives and perhaps destabilize nations.
“Deepfakes” are high-quality doctored images and videos produced quickly and easily with the assistance of A.I. technology, typically by training on entire libraries of images of the people or places involved. The premier example of a deepfake program is FakeApp, which was built on Google’s open-source TensorFlow software and appears to have about 120,000 users at present.
Of course, doctored photos and videos have been around for a long time, dating all the way back to the earliest days of photography. The difference is that FakeApp can create very convincing fake photos and videos with only a modicum of artistic or programming skill, and A.I. helps the work get done very quickly.
As a recent study published by Cambridge University warned, the great promise of artificial intelligence is that it works very quickly and requires far less technical knowledge from human users than earlier computer applications did. Veterans of the early days of search-engine technology would be astounded by how today’s Google system can turn sloppy, poorly typed queries into meaningful results. Long before we must grapple with sci-fi concepts such as self-aware machine intelligence, A.I. is changing the world by making it possible for computers to understand what humans mean, rather than literally and precisely interpreting every word they say.
Artificial intelligence is truly a double-edged sword, because it can automate mischief and destruction as well. The Cambridge researchers worried about A.I. systems spreading chaos by manufacturing propaganda and disinformation, and the New York Times piece on deepfakes echoes precisely those concerns:
Deepfakes are one of the newest forms of digital media manipulation, and one of the most obviously mischief-prone. It’s not hard to imagine this technology’s being used to smear politicians, create counterfeit revenge porn or frame people for crimes. Lawmakers have already begun to worry about how deepfakes could be used for political sabotage and propaganda.
Even on morally lax sites like Reddit, deepfakes have raised eyebrows. Recently, FakeApp set off a panic after Motherboard, the technology site, reported that people were using it to create pornographic deepfakes of celebrities. Pornhub, Twitter and other sites quickly banned the videos, and Reddit closed a handful of deepfake groups, including one with nearly 100,000 members.
…
“This is turning into an episode of Black Mirror,” wrote one Reddit user. The post raised the ontological questions at the heart of the deepfake debate: Does a naked image of Person A become a naked image of Person B if Person B’s face is superimposed in a seamless and untraceable way? In a broader sense, on the internet, what is the difference between representation and reality?
The user then signed off with a shrug: “Godspeed rebels.”
For the uninitiated, Black Mirror is a Twilight Zone-style anthology series, currently hosted on Netflix, whose best episodes peer just a little bit into the future to create disturbing tales about the bizarre effects of Information Age technology upon society. Even the most far-fetched episodes have a knack for making the viewer stop and ask, “Is this really so different from what’s happening right now?”
The “Godspeed rebels” Redditor had it exactly right: We’re already cruising into troubled waters with the wind of hypothetical scenarios from half a decade ago filling our sails. Fake photos and videos are a problem we’ve been dealing with for ages. Very good fake photos and videos made with relatively little effort—FakeApp “isn’t simple, but it’s not rocket science, either,” as Kevin Roose wrote for the NYT—could turn an old problem into a new and dangerous one.
It is much like the way letters to newspaper editors turned into comment-board trolling, bot swarms, Twitter outrage mobs, and “Fake News” mania. Letters to the editor existed long before the Internet made such feedback radically easier to produce and to post where large audiences would see it. That shift led to “astroturfing,” the manual production of faked or scripted feedback to simulate grassroots groundswells of opinion. Astroturfing was then weaponized with bots, driving some highly effective social media campaigns with relatively few human users behind them.
A.I. takes all of that to the next level, and propaganda images could be more powerful than words, hashtags, and statistical chicanery like artificially bloated Twitter follower counts. The key to FakeApp’s power is that its A.I. code learns from its mistakes, teaching itself to become a better artist in a process that works better and faster as more computing power is made available. A novice can use it to make an amusing fake video—your face on a movie star’s body!—that would not really fool anyone. An expert can create far more convincing fake imagery in a couple of days. Every variable in that equation will change as the software improves and more powerful computer resources are made available at lower prices.
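That self-teaching process has a publicly documented shape: the face-swap tools popularized on Reddit reportedly pair a single shared encoder, which learns the general structure of both faces, with a separate decoder for each person, and the swap is performed by routing one person’s encoded face through the other person’s decoder. The sketch below illustrates that idea; the network sizes, the random stand-in images, and the use of PyTorch (rather than whatever code FakeApp actually ships) are illustrative assumptions, not the app’s real implementation.

```python
# A minimal sketch of the shared-encoder / dual-decoder face-swap idea
# attributed to FakeApp-style tools. Layer sizes and the random tensors
# standing in for real face crops are assumptions for illustration only.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Shared encoder: compresses a 64x64 RGB face into a latent vector."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Per-person decoder: reconstructs a face from the shared latent code."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 128 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),   # 8 -> 16
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32 -> 64
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 128, 8, 8))

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()  # one decoder per person
optimizer = torch.optim.Adam(
    list(encoder.parameters())
    + list(decoder_a.parameters())
    + list(decoder_b.parameters()),
    lr=1e-4,
)
loss_fn = nn.L1Loss()

# Stand-in batches of aligned face crops; a real run would feed thousands
# of images of each person -- the "entire libraries of images" noted above.
faces_a = torch.rand(8, 3, 64, 64)
faces_b = torch.rand(8, 3, 64, 64)

for step in range(200):  # "learning from its mistakes," one batch at a time
    recon_a = decoder_a(encoder(faces_a))
    recon_b = decoder_b(encoder(faces_b))
    loss = loss_fn(recon_a, faces_a) + loss_fn(recon_b, faces_b)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# The swap: encode Person A's face, then decode it as Person B.
fake_b = decoder_b(encoder(faces_a))
```

The design choice that makes the swap work is the shared encoder: because both decoders learn to read the same latent representation, either one can render a face that was encoded from the other person, and every additional training pass (or faster machine) tightens the illusion.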
It should come as no surprise that deepfake techniques are largely employed by amateur pornographers at the moment, inserting the faces of famous people into porno clips for lowbrow laughs. The creator of FakeApp expressed noble hopes that the software could be used by amateur filmmakers to grace their creations with excellent special effects at minimal cost. In the future, it could be employed to create highly effective political propaganda, ruin reputations, facilitate identity theft, enhance blackmail schemes, and commit other mischief that could undermine social stability, with national security implications. Who wants to see what malevolent state actors with notoriously active cyber-warfare units can do with deepfake technology?
In countries around the world, it is already quite easy to unleash virtual or physical angry mobs and get witch hunts rolling. Deepfakes reduce the cost and time required for high-quality fakery, which could allow propagandists to synchronize their efforts with the flow of news more easily. Words can easily be put in the mouths of high-level political figures. The struggle against deepfakes already looks like a losing battle, even with the adult entertainment industry mobilized to help crack down on unauthorized use of footage. The road from hilarious movie-star parodies to vicious revenge porn and political manipulation is a short one, and on the information superhighway there are few speed limits.