Facebook will start using artificial intelligence to find and remove “extremist” content from the platform, according to an official blog post.
“In the wake of recent terror attacks, people have questioned the role of tech companies in fighting terrorism online. We want to answer those questions head on,” wrote Facebook on Thursday. “We agree with those who say that social media should not be a place where terrorists have a voice. We want to be very clear how seriously we take this — keeping our community safe on Facebook is critical to our mission.”
“We want to find terrorist content immediately, before people in our community have seen it. Already, the majority of accounts we remove for terrorism we find ourselves,” they continued. “But we know we can do better at using technology — and specifically artificial intelligence — to stop the spread of terrorist content on Facebook. Although our use of AI against terrorism is fairly recent, it’s already changing the ways we keep potential terrorist propaganda and accounts off Facebook. We are currently focusing our most cutting edge techniques to combat terrorist content about ISIS, Al Qaeda and their affiliates, and we expect to expand to other terrorist organizations in due course.”
Facebook’s new technology includes “image matching,” which will detect when someone uploads a previously flagged “propaganda video” or image, and “language understanding,” which will analyze written support and praise of terrorist organizations in an effort to teach its algorithms to recognize similar language. A rough sketch of how re-upload detection of this kind typically works appears below.
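Facebook has not published implementation details, but matching re-uploads against previously flagged media is commonly done with perceptual hashing: near-identical images hash to nearby values, so a small Hamming distance indicates a likely copy. The sketch below uses the open-source Python `imagehash` library; the flagged-hash set and the distance threshold are illustrative assumptions, not Facebook’s actual parameters.

```python
# Minimal sketch of hash-based image matching, the general technique
# behind "re-upload" detection. Facebook's real system is not public;
# the library choice (imagehash/Pillow), the flagged-hash set, and the
# distance threshold below are illustrative assumptions.
from PIL import Image
import imagehash

# Perceptual hashes of previously flagged images (hypothetical data).
FLAGGED_HASHES = {
    imagehash.hex_to_hash("d1d1d5d5c9c9cbf1"),
}

# Hamming-distance tolerance: 0 means an exact perceptual match; a small
# positive value also catches re-encoded or lightly edited copies.
MAX_DISTANCE = 5

def matches_flagged_content(path: str) -> bool:
    """Return True if the upload is perceptually close to flagged media."""
    upload_hash = imagehash.phash(Image.open(path))
    return any(upload_hash - flagged <= MAX_DISTANCE
               for flagged in FLAGGED_HASHES)

if __name__ == "__main__":
    print(matches_flagged_content("upload.jpg"))
```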
The social network will also use technology to detect whether accounts have a large number of “friends” who have previously been flagged for extremism, as well as additional systems to detect fake accounts. The clampdown will extend to Facebook’s sister platforms, WhatsApp and Instagram.
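Facebook did not describe how that friend-graph signal is computed, but the simplest version is a ratio over an account’s friends. In the toy sketch below, the friend graph, the flagged-account set, and the 30 percent review threshold are all assumptions for illustration.

```python
# Toy sketch of the "flagged friends" signal described above. The friend
# graph, the flagged-account set, and the 30% threshold are illustrative
# assumptions, not Facebook's actual parameters.
from typing import Dict, List, Set

def flagged_friend_ratio(account: str,
                         friends: Dict[str, List[str]],
                         flagged: Set[str]) -> float:
    """Fraction of an account's friends previously flagged for extremism."""
    friend_list = friends.get(account, [])
    if not friend_list:
        return 0.0
    return sum(f in flagged for f in friend_list) / len(friend_list)

def needs_review(account: str,
                 friends: Dict[str, List[str]],
                 flagged: Set[str],
                 threshold: float = 0.3) -> bool:
    """Flag the account for human review if too many friends are flagged."""
    return flagged_friend_ratio(account, friends, flagged) >= threshold

# Example: two of alice's four friends were previously flagged.
friends = {"alice": ["bob", "carol", "dave", "erin"]}
flagged = {"carol", "erin"}
print(needs_review("alice", friends, flagged))  # True (ratio 0.5 >= 0.3)
```

Consistent with Facebook’s own caveats about context, a signal like this would plausibly queue accounts for human review rather than trigger automatic removal.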
“AI can’t catch everything. Figuring out what supports terrorism and what does not isn’t always straightforward, and algorithms are not yet as good as people when it comes to understanding this kind of context,” Facebook explained in a section of their post about human efforts to combat extremism. “A photo of an armed man waving an ISIS flag might be propaganda or recruiting material, but could be an image in a news story. Some of the most effective criticisms of brutal groups like ISIS utilize the group’s own propaganda against it. To understand more nuanced cases, we need human expertise.”
“We want Facebook to be a hostile place for terrorists,” the company concluded. “The challenge for online communities is the same as it is for real world communities – to get better at spotting the early signals before it’s too late. We are absolutely committed to keeping terrorism off our platform, and we’ll continue to share more about this work as it develops in the future.”
In March, a group of executives from Facebook, Twitter, and Google appeared before the UK’s Home Affairs Select Committee, where they were criticized for being “soft” on “hate speech.”
The committee cited videos from ex-KKK leader David Duke and called for action against “offensive” material on the companies’ platforms.
It was previously reported that the companies could face sanctions if they didn’t pledge to stop “trolling” and “cyberbullying,” and in 2016, the European Union also threatened action against social networks if they didn’t pledge to remove “hate speech” within 24 hours.
Censorship on Facebook has been erratic in the past, with dozens of harmless pages removed.
Several groups of page owners have revolted against the social network after having their pages removed without explanation, with one page owner even being sanctioned for posting a comedic image of rapper Drake morphed into a Nintendo 64 controller.
In July 2016, a popular Facebook page called “Meninist,” which had nearly 400,000 likes, was permanently suspended, only to be reinstated after Breitbart Tech’s reporting.
In the same month, a meme page mocking Democratic presidential candidate Hillary Clinton was also removed.
Numerous other examples of Facebook censorship have taken place almost daily, including the suspension of gay conservative Lucian Wintrich after he used the word “fag,” the removal of a men’s rights conference page on the day of the conference, the censorship and restriction of WikiLeaks links, the removal of a popular geographical comedy page, and the deletion of anti-Islamist and even anti-ISIS content.
Despite this, some extreme content has been allowed to remain on the platform, including a cartoon posted by the Black Panther Party of Mississippi that portrayed a man in a black robe and mask slitting the throat of a police officer.
Charlie Nash is a reporter for Breitbart Tech. You can follow him on Twitter @MrNashington or like his page on Facebook.