The FBI has issued a warning about a surge in extortion schemes involving AI-generated explicit content, commonly referred to as “deepfake sextortion.” According to law enforcement officials, bad actors produce fake explicit content of their targets, then demand money to have it removed from the internet.
The Verge reports that the FBI has warned about an increase in extortion schemes involving AI-generated nudes, also known as “deepfake sextortion.”
Since April of this year, there has been a noticeable increase in reports of these schemes, according to the FBI. The agency describes a disturbing pattern in which malicious actors use cutting-edge AI tools to manipulate innocent images of victims found on social media platforms in order to produce explicit content.
“The photos are then sent directly to the victims by malicious actors for sextortion or harassment,” an FBI spokesperson said. “Once circulated, victims can face significant challenges in preventing the continual sharing of the manipulated content or removal from the internet.”
The FBI goes on to say that the blackmailers typically use the altered material to demand money or real explicit images from their victims. “The key motivators for this are a desire for more illicit content, financial gain, or to bully and harass others,” the spokesperson added.
The agency advises the public to use caution when posting pictures of themselves online. However, because a deepfake can be created from just a few images or videos, no amount of care can truly protect anyone from these extortion tactics short of removing all of their images from the internet.
The deepfakes phenomenon began in 2017, when users on forums such as Reddit used cutting-edge AI research techniques to produce explicit content featuring female celebrities. Despite some efforts to stop the spread of this content online, deepfake nude creation tools and websites remain easily accessible.
The threat posed by such schemes is likely to increase as the technology behind deepfakes develops and becomes more widely available, making it a pressing issue for law enforcement agencies worldwide.
Despite concerns about deepfakes, Silicon Valley continues to profit from the technology. Breitbart News previously reported that Facebook allowed sexual deepfake ads to run on its platforms.
The ad campaign, which ran on Sunday and Monday, rolled out more than 230 advertisements on Facebook, Instagram, and Messenger, the report noted.
While the ads did not feature any actual sex acts, they were suggestive in nature and were made to mimic the beginning of a porn video, complete with Pornhub’s intro track playing in the background.
Among the ads, which swapped celebrities’ faces onto other people’s bodies, 127 featured actress Emma Watson and another 74 featured Scarlett Johansson.
Read more at The Verge here.
Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship. Follow him on Twitter @LucasNolan