Motherboard recently reported that a Twitter employee working on machine learning has claimed that using algorithms and A.I. to detect white supremacy on the platform would also target Republican politicians.

Motherboard recently published an article titled “Why Won’t Twitter Treat White Supremacy Like ISIS? Because It Would Mean Banning Some Republican Politicians Too,” which claims that an employee at Twitter believes an algorithm designed to root out and ban white supremacists would also ban Republican politicians.

The issue reportedly arose when one employee questioned why white supremacist content had not been removed from the platform as effectively as propaganda from terrorist groups such as ISIS. Motherboard writes:

At a Twitter all-hands meeting on March 22, an employee asked a blunt question: Twitter has largely eradicated Islamic State propaganda off its platform. Why can’t it do the same for white supremacist content?

An executive responded by explaining that Twitter follows the law, and a technical employee who works on machine learning and artificial intelligence issues went up to the mic to add some context. (As Motherboard has previously reported, algorithms are the next great hope for platforms trying to moderate the posts of their hundreds of millions, or billions, of users.)

With every sort of content filter, there is a tradeoff, he explained. When a platform aggressively enforces against ISIS content, for instance, it can also flag innocent accounts as well, such as Arabic language broadcasters. Society, in general, accepts the benefit of banning ISIS for inconveniencing some others, he said.

That employee also stated that, for similar reasons, Twitter has not targeted white supremacists on the platform for fear of accidentally banning conservative politicians:

In separate discussions verified by Motherboard, that employee said Twitter hasn’t taken the same aggressive approach to white supremacist content because the collateral accounts that are impacted can, in some instances, be Republican politicians.

The employee argued that, on a technical level, content from Republican politicians could get swept up by algorithms aggressively removing white supremacist material. Banning politicians wouldn’t be accepted by society as a trade-off for flagging all of the white supremacist propaganda, he argued.

There is no indication that this position is official Twitter policy, and the company told Motherboard that this “is not [an] accurate characterization of our policies or enforcement—on any level.” But the Twitter employee’s comments highlight a sometimes overlooked debate surrounding content moderation on tech platforms: are moderation issues purely technical and algorithmic, or do societal norms play a greater role than some may acknowledge?
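To make the employee’s tradeoff argument concrete, here is a minimal, purely illustrative sketch in Python. Every account name, score, and threshold below is invented for the example; this is not Twitter’s actual system or data. It shows how lowering a classifier’s flagging threshold removes more of the targeted content while also sweeping up adjacent accounts:

```python
# Illustrative sketch only: a toy score-threshold filter showing the
# tradeoff described above. All accounts, scores, and labels are
# invented for demonstration; this is not Twitter's system.

# Each account has a model "risk score" in [0, 1] and a ground-truth
# label saying whether it actually posts the banned content.
accounts = [
    ("propaganda_account_1", 0.97, True),
    ("propaganda_account_2", 0.88, True),
    ("news_broadcaster",     0.81, False),  # reports on extremism, doesn't promote it
    ("politician_account",   0.74, False),  # overlapping rhetoric, not banned content
    ("ordinary_user",        0.12, False),
]

def enforce(threshold):
    """Flag every account whose score meets the threshold, then count
    correct removals (true positives) and collateral bans (false positives)."""
    flagged = [(name, is_target) for name, score, is_target in accounts
               if score >= threshold]
    true_pos = sum(1 for _, is_target in flagged if is_target)
    false_pos = len(flagged) - true_pos
    return flagged, true_pos, false_pos

for threshold in (0.9, 0.7):
    flagged, tp, fp = enforce(threshold)
    print(f"threshold={threshold}: removed {tp} target accounts, "
          f"{fp} collateral accounts: {[name for name, _ in flagged]}")

# A strict threshold (0.9) misses some target content; an aggressive
# one (0.7) removes it all but also sweeps up the broadcaster and the
# politician. That collateral damage is the "tradeoff" in the article.
```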

Amarnath Amarasingam, an extremism researcher at the Institute for Strategic Dialogue, discussed why there may be issues with algorithmically removing white supremacist content on Twitter, telling Motherboard:

“Most people can agree a beheading video or some kind of ISIS content should be proactively removed, but when we try to talk about the alt-right or white nationalism, we get into dangerous territory, where we’re talking about [Iowa Rep.] Steve King or maybe even some of Trump’s tweets, so it becomes hard for social media companies to say ‘this content should be removed,’” Amarasingam said.

“There’s going to be controversy here that we didn’t see with ISIS, because there are more white nationalists than there are ISIS supporters, and white nationalists are closer to the levers of political power in the US and Europe than ISIS ever was.”

Read the full article at Motherboard here.

Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship. Follow him on Twitter @LucasNolan or email him at lnolan@breitbart.com