Facebook CEO Mark Zuckerberg responded to a question from Senate Commerce Committee Chairman John Thune (R-SD) on “hate speech” on his social network Tuesday with optimism that an artificial intelligence (A.I.) system will be able to recognize and eliminate the category he refused to define.
“As we discussed in my office yesterday, the line between legitimate political discourse and hate speech can sometimes be hard to identify, especially when relying on artificial intelligence and other technology for the initial discovery,” Thune said at the joint Senate Commerce and Judiciary Committees’ hearing, before asking Zuckerberg what steps Facebook takes in making those evaluations.
The Supreme Court of the United States has never defined or recognized any separate category of speech called “hate speech.” Just last year, the Court unanimously refused to do so once again, even in the context of trademark protection, an area less central to free speech. Other countries, especially in Europe, have used so-called “hate speech laws” to censor right-of-center speech, detain foreign conservatives and ban them permanently, and imprison people for “offending” their fellow citizens.
Zuckerberg offered no definition of hate speech and made no attempt to suggest the kind of “line” to which Thune referred. He did, however, explain that the category is a difficult one for the machine learning systems now essential to the operation of Big Tech to identify. “Some problems lend themselves more easily to A.I. solutions than others,” he told Thune, explaining:
Hate speech is one of the hardest. Determining if something is hate speech is very linguistically nuanced; you need to understand what is a slur, whether something is hateful, [and] not just in English.
…
[With] hate speech, I am optimistic that, over a five-to-ten-year period, we will have A.I. tools that can get into some of the nuances, the linguistic nuances, of different types of content to be more accurate in flagging things for our systems. But today we’re just not there on that.
“We have people look at it, we have policies to try and make it as not subjective as possible but, until we get it more automated, there is a higher error rate than I am happy with,” Zuckerberg said of policing “hate speech” on Facebook.
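Facebook has not disclosed how its classifiers actually work. As a rough illustration of the “linguistic nuance” problem Zuckerberg describes, the minimal sketch below (in Python with scikit-learn, using invented toy posts and labels, not Facebook data) trains a simple bag-of-words classifier. Because the same surface words can appear in hateful and benign posts, such a model scores borderline items with low confidence, which is exactly why they fall back to human reviewers.

```python
# Minimal sketch, NOT Facebook's system: a toy text classifier showing why
# bag-of-words models miss the "linguistic nuance" Zuckerberg describes.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented toy training data; a real system would use millions of labeled posts.
posts = [
    "those people are vermin and should leave",  # hateful
    "I disagree strongly with this policy",      # benign political speech
    "what a hateful thing to say about them",    # benign (talks ABOUT hate)
    "we should welcome everyone to the debate",  # benign
]
labels = [1, 0, 0, 0]  # 1 = flag for review, 0 = leave up

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(posts, labels)

# The same surface words appear in both classes, so the model's confidence
# on a borderline post is middling -- the cases humans end up reviewing.
for post in ["this policy treats people like vermin"]:
    prob = model.predict_proba([post])[0][1]
    print(f"{prob:.2f} probability of violating policy: {post!r}")
```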
“Contrast that, for example, with an area like finding terrorist propaganda, which we’ve actually been very successful at deploying A.I. tools on already,” Zuckerberg said, claiming more than 99 percent of ISIS and other extremist content is screened out automatically by A.I. before the public ever sees it.
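Zuckerberg did not describe the mechanics of that screening, but systems of this kind are commonly built as a confidence threshold over a classifier’s score: matches above the threshold are removed before publication, and borderline scores are routed to human reviewers. A hypothetical sketch, with threshold values invented purely for illustration:

```python
# Hypothetical sketch of threshold-based screening, not Facebook's pipeline:
# high-confidence matches are removed before publication, and borderline
# scores are queued for the human reviewers Zuckerberg describes.

AUTO_REMOVE_THRESHOLD = 0.99   # invented value; tuned per content category
HUMAN_REVIEW_THRESHOLD = 0.50  # invented value

def route(post_text: str, score: float) -> str:
    """Decide what happens to a post given a classifier score in [0, 1]."""
    if score >= AUTO_REMOVE_THRESHOLD:
        return "removed_before_publication"
    if score >= HUMAN_REVIEW_THRESHOLD:
        return "queued_for_human_review"
    return "published"

# Example: a high-scoring match is screened out automatically.
print(route("example extremist propaganda", 0.997))  # removed_before_publication
print(route("ambiguous political post", 0.62))       # queued_for_human_review
print(route("ordinary post", 0.05))                  # published
```

In a design like this, raising the automatic-removal threshold trades fewer mistaken takedowns for a heavier load on the human review queue.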
To solve the more intricate issue of hate speech, Zuckerberg this month proposed a private “Supreme Court” to rule on what is too offensive to be allowed on his social network.
In the meantime, Zuckerberg promised a veritable army of content policers is on hand to root out “hate speech,” however it is defined. “By the end of this year, by the way, we’re going to have more than 20,000 people working on security and content review … when content gets flagged to us, we have those people look at it and, if it violates our policies, then we take it down,” he told Thune.