SAN FRANCISCO (AP) — The issue of misleading political messages on social media arose again last week, when President Trump tweeted out an edited video showing Speaker of the House Nancy Pelosi repeatedly tearing up his State of the Union speech as he honored audience members and showed a military family reuniting.
Pelosi did tear the pages of her copy of the speech — but only after it was finished, and not throughout the address as the video depicts.
Pelosi’s office asked Twitter and Facebook to take down the video; both companies declined.
Researchers worry the video’s “selective editing” could mislead people if social media companies don’t step in and properly label or regulate similar videos. And with the proliferation of smartphones equipped with easy editing tools, the altered videos are simple to make and could multiply as the election approaches.
HOW LONG HAS DOCTORED CONTENT BEEN AN ISSUE?
Political campaign ads and candidate messages showing opponents in a negative light have long been a staple of American politics. Thomas Jefferson and John Adams attacked each other in newspaper ads. John F. Kennedy’s campaign debuted an ad that spliced together footage of Richard Nixon sweating and looking weak.
So, to some extent, the video of Pelosi, which appears to have been created by a group affiliated with the conservative organization Turning Point USA, is not novel. What’s different now, said Clifford Lampe, a professor of information at the University of Michigan, is how widely such content can spread in a matter of minutes.
“The difference now is that the campaigns themselves, the president of the U.S. himself, is able to disseminate these pieces of media to the public,” he said. “They no longer have to collaborate with media outlets.”
The Pelosi team has pushed back against doctored online content in the past. A video that circulated last year was slowed down to make it appear the speaker was slurring her words.
WHAT POLICIES FROM SOCIAL MEDIA COMPANIES GOVERN THESE VIDEOS?
Facebook, Google and Twitter have all been emphasizing their efforts to cut down on disinformation on their sites leading up to the election, hoping to avoid some of the backlash generated by rampant misinformation on social media during the 2016 election.
But the video of Pelosi does not violate existing policies, both Twitter and Facebook said. Facebook’s rules prohibit so-called “deepfake” videos, which the company defines as videos that are misleading and that use artificial intelligence technology to make it seem as though someone authentically “said words that they did not actually say.”
Researchers say the Pelosi video is an example of a “cheapfake,” a video that has been altered but without the sophisticated AI used in a deepfake. Cheapfakes are much easier to create and far more prevalent than deepfakes, which have yet to really take off, said Samuel Woolley, director of propaganda research at the Center for Media Engagement at the University of Texas.
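The distinction matters partly because the bar for a cheapfake is so low. As a minimal sketch, assuming Python 3 with the open-source moviepy library installed and a hypothetical input file named speech.mp4, slowing a clip to 75% of its original speed, the same basic manipulation behind the 2019 Pelosi video, takes only a few lines:

    # Minimal cheapfake sketch: slow a clip to 75% speed.
    # Assumes moviepy 1.x (pip install moviepy); "speech.mp4" is a
    # hypothetical file name used purely for illustration.
    from moviepy.editor import VideoFileClip
    from moviepy.video.fx.all import speedx

    clip = VideoFileClip("speech.mp4")

    # A factor below 1 stretches the clip in time. The audio pitch
    # drops too unless it is separately corrected.
    slowed = speedx(clip, factor=0.75)

    slowed.write_videofile("speech_slowed.mp4")

No machine learning is involved, which is precisely what separates a cheapfake from a deepfake.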
That editing is “deliberately designed to mislead and lie to the American people,” Pelosi deputy chief of staff Drew Hammill tweeted on Friday. He condemned Facebook and Twitter for allowing the video to stay up on their sites.
Facebook spokesman Andy Stone replied to Hammill on Twitter saying, “Sorry, are you suggesting the President didn’t make those remarks and the Speaker didn’t rip the speech?” In an interview Sunday, Stone confirmed that the video didn’t violate the company’s policy. To be taken down, the video would have had to use more advanced technology, such as artificial intelligence that made Pelosi appear to say words she did not say.
Twitter did not remove the video either, and pointed to a blog post from early February saying the company plans to start labeling tweets that contain “synthetic and manipulated media.” Labeling will begin March 5.
WHAT DOES THE LAW SAY?
Not much. Social media companies are broadly free to police content on their own sites as they choose. One law, Section 230 of the Communications Decency Act, shields tech platforms from most lawsuits over content posted on their sites, leaving responsibility largely in the companies’ own hands.
Most platforms now ban overtly violent videos and videos that could cause real-world harm, though of course much of that is up to internal company interpretation. Facebook, Twitter and Google’s YouTube have received a significant amount of criticism in recent years about live-streamed and offensive videos that have appeared on the sites. The companies sometimes bend to public pressure and remove videos, but often point to people’s rights to freedom of expression in leaving videos up.
WHAT HAPPENS NEXT?
Misinformation on social media, especially surrounding elections, is a varied and ever-shifting problem. Jennifer Grygiel, an assistant professor at Syracuse University, called for legislation to better regulate social media in cases of political propaganda. It gets tricky, though, she admits, because the “very people who will be regulating them are the same ones using them to get elected.”