Elon Musk’s social media platform X, formerly known as Twitter, is taking steps to strengthen its notorious “trust and safety” team after explicit AI-generated images of pop star Taylor Swift circulated on the platform. Under prior management, the team was responsible for widespread censorship of conservatives.
The New York Post reports that Elon Musk’s social media platform X/Twitter has pledged to hire 100 new content moderators and open a “trust and safety center” in Austin, Texas, to crack down on abusive and explicit content. The announcement comes after AI-generated deepfake nude images of pop star Taylor Swift spread rapidly across X/Twitter last week.
Breitbart News reported that the company was blocking all searches for the pop star shortly after the images went viral, writing:
Searches for “Taylor Swift” and “Taylor Swift AI” on X returned error messages on Saturday and Sunday, though Elon Musk’s platform allowed variations on the search terms, including “Taylor Swift photos AI.”
X confirmed it is deliberately blocking the search phrases for the time being.
“This is a temporary action and done with an abundance of caution as we prioritize safety on this issue,” X’s head of business operations Joe Benarroch said in a statement sent to multiple media outlets.
The Joe Biden administration and the mainstream news media shifted into high gear after the fake Taylor Swift images went viral, seeking to protect the left-wing pop star.
“We are alarmed by the reports of the circulation of the false images,” White House press secretary Karine Jean-Pierre told reporters on Friday, saying social media companies need to do a better job enforcing their own rules.
According to digital threat intelligence group Memetica, the first Swift deepfakes appeared online as early as January 6, though they went viral more recently. The images were created using AI image generators like DALL-E that can produce realistic fakes with simple text prompts.
In an attempt at damage control, Musk’s company emphasized that the “trust and safety” team’s work would focus on protecting children. “X does not have a line of business focused on children, but it’s important that we make these investments to keep stopping offenders from using our platform for any distribution or engagement with CSE content,” Benarroch said.
Researchers say explicit deepfakes have become more common in recent years as AI technology improves and becomes more accessible. Most victims are female celebrities or public figures. The EU’s new Digital Services Act requires platforms like X to curb nonconsensual and abusive content.
Read more at the New York Post here.
Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship.