Facebook Claims to Have Gotten Better at Removing Terrorism Content

Facebook CEO and founder Mark Zuckerberg testifies during a US House Committee on Energy and Commerce hearing (Photo: SAUL LOEB/AFP)

Social media giant Facebook claims it has improved its rate of removing content produced by ISIS, al-Qaeda, and other terrorist groups from its platform.

In a recent blog post titled “Hard Questions: What Are We Doing to Stay Ahead of Terrorists?” Facebook discussed its efforts to remove terrorism-related content from its platform. In the post, Facebook claims to have removed 9.4 million pieces of terrorism-related content during the second quarter of 2018 and another 3 million posts during the third quarter. This marks a significant increase from May, when the company stated that it had removed 1.9 million posts during the first quarter of 2018.

In the blog post, written by Monika Bickert, the company’s Global Head of Policy Management, and Brian Fishman, its head of counterterrorism policy, the two executives stated: “Online terrorist propaganda is a fairly new phenomenon; terrorism itself is not. In the real world, terrorist groups have proven highly resilient to counterterrorism efforts, so it shouldn’t surprise anyone that the same dynamic is true on social platforms like Facebook. The more we do to detect and remove terrorist content, the more shrewd these groups become.”

Discussing the addition of new machine-learning techniques to its platform, the blog post reads:

We now use machine learning to assess Facebook posts that may signal support for ISIS or al-Qaeda. The tool produces a score indicating how likely it is that the post violates our counterterrorism policies, which, in turn, helps our team of reviewers prioritize posts with the highest scores. In this way, the system ensures that our reviewers are able to focus on the most important content first.

In some cases, we will automatically remove posts when the tool indicates with very high confidence that the post contains support for terrorism. We still rely on specialized reviewers to evaluate most posts, and only immediately remove posts when the tool’s confidence level is high enough that its “decision” indicates it will be more accurate than our human reviewers.

At Facebook’s scale neither human reviewers nor powerful technology will prevent all mistakes. That’s why we waited to launch these automated removals until we had expanded our appeals process to include takedowns of terrorist content.

Facebook also claims to have shortened the time that terrorism-related content remains on the platform, from 43 hours in the first quarter of 2018 to 18 hours in the third quarter.

Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship. Follow him on Twitter @LucasNolan or email him at lnolan@breitbart.com
