It has been seventeen years since the 9/11 terrorist attacks on New York, Washington, D.C., and the field near Shanksville, Pennsylvania, where United Flight 93 crashed. While technology has advanced rapidly in that time, tech giants still struggle to block terrorist content on their platforms.
Social media companies are continuing their attempts to crack down on terrorism-related content, but as these companies’ reach grows and more users join their platforms, it becomes increasingly hard for the Silicon Valley tech firms to identify and remove terrorist material.
In November 2017, Facebook claimed that its efforts to use artificial intelligence to crack down on terrorism-related content on its platform were beginning to work, saying at the time that the vast majority of ISIS- and Al Qaeda-related terrorist content was removed automatically before being flagged by users.
“We want to find terrorist content immediately, before people in our community have seen it. Already, the majority of accounts we remove for terrorism we find ourselves,” wrote Monika Bickert, Facebook’s director of global policy management, and Brian Fishman, Facebook’s counterterrorism policy manager, in a blog post. “But we know we can do better at using technology — and specifically artificial intelligence — to stop the spread of terrorist content on Facebook. Although our use of AI against terrorism is fairly recent, it’s already changing the ways we keep potential terrorist propaganda and accounts off Facebook. We are currently focusing our most cutting-edge techniques to combat terrorist content about ISIS, Al Qaeda and their affiliates, and we expect to expand to other terrorist organizations in due course.”
While this is a step forward, there are still many issues with tracking down terror-related content on the platform. In another blog post, Bickert and Fishman noted that the same AI system would not work for all terror groups: “A system designed to find content from one terrorist group may not work for another because of language and stylistic differences in their propaganda. [But] we hope over time that we may be able to responsibly and effectively expand the use of automated systems to detect content from regional terrorist organizations too.”
Since then, Facebook has been criticized over its definition of “terrorism,” which U.N. human rights expert Fionnuala Ní Aoláin claims is too broad and actually helps to silence dissent in certain countries. Ní Aoláin wrote to Facebook CEO Mark Zuckerberg on September 3 of this year: “The use of such a sweeping definition is particularly worrying in light of a number of governments seeking to stigmatize diverse forms of dissent and opposition (whether peaceful or violent) as terrorism.”
Ní Aoláin further stated: “Moreover, it is unclear how Facebook determines when a person belongs to a particular group and whether the respective group or person are given the opportunity to meaningfully challenge such determination.” Unfortunately, it seems as if cracking down on terrorists isn’t as simple as Facebook may initially have believed.
Twitter has also struggled with this issue on a number of occasions. In March 2017, the company claimed to have deleted approximately 377,000 terrorism-related accounts from its platform between July 1 and December 31, 2016. According to the company’s transparency report, Twitter had shut down a total of 636,248 accounts for promoting terror or terrorism-related groups since August 2015, which is when the company began actively seeking out accounts linked to terror groups.
The company has also faced multiple lawsuits over its failure to combat terrorism on its platform. The family of a former Florida sheriff who was killed while providing police training in the Middle East sued Twitter, stating that the company “knowingly permitted” ISIS accounts to spread extremist propaganda. Similarly, the father of a victim of the Paris terrorist attacks sued Facebook, Google, and Twitter for allowing terrorists to coordinate via their platforms.
Google has faced issues with extremist content on its platforms as well. A report from the Hill in May of this year stated that “dozens of pages across Google’s social media platform” post ISIS propaganda, “give news updates directly pulled from ISIS media, spread messages of hate towards Jews and other groups or show extremist imagery.” One particularly worrying aspect of this is that the accounts reportedly “did little to hide their affiliation.” One image from 2017 available on Google’s platform calls for Muslims in the West to commit acts of terror if they cannot physically make it to the Islamic State. The image reads: “A message to Muslims sitting in the West. Trust Allah, that each drop of bloodshed there relieves pressure on us here.”
A Google spokesperson stated: “Google rejects terrorism and has a strong track record of taking swift action against terrorist content. We have clear policies prohibiting terrorist recruitment and content intending to incite violence and we quickly remove content violating these policies when flagged by our users. We also terminate accounts run by terrorist organizations or those that violate our policies. While we recognize we have more to do, we’re committed to getting this right.”
Unfortunately, seventeen years after the worst terrorist attacks ever seen in the United States, the spread of terror-related content across social media remains a huge issue. While social media companies appear to be attempting to crack down on these problems, many of them cannot even keep track of their own users’ data accurately. Hopefully, as technology progresses and the Masters of the Universe are held to greater account, terrorism-related content across the internet will be tracked and shut down, rather than the speech of those who disagree politically with Silicon Valley.
Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship. Follow him on Twitter @LucasNolan or email him at lnolan@breitbart.com