Facebook accidentally rolled out what appeared to be a crowd-sourced “hate speech” flagging system, in which users were asked whether a post constituted hate speech.

Earlier today, Facebook users began reporting that questionnaires had been attached to innocuous posts, asking whether they considered the content to be “hate speech.”

It quickly became apparent that the feature had been rolled out in an unfinished state.

Facebook VP of product Guy Rosen said that the feature was a “bug” and a “test” that had been incorrectly applied to all posts, including Mark Zuckerberg’s.

The feature appears to be a method for determining what qualifies as “hate speech” based on user feedback, but it is unknown whether Facebook intends to roll it out to all users or only a select group.

A user-driven approach is a radical departure from the “Facebook Supreme Court” that Mark Zuckerberg previously suggested, in which a hand-picked panel of elites would have the final say on what content is banned under the platform’s hate speech rules. Both approaches suffer from subjectivity and bias, as there is no objective definition of hate speech, a point Zuckerberg himself concedes.

We do know that language that attacks a person’s “immigration status” is considered hate speech under Facebook’s rules. Republican congressman Lamar Smith has raised concerns that this will make criticism of illegal immigration impossible on the platform.

Allum Bokhari is the senior technology correspondent at Breitbart News. You can follow him on Twitter and Gab.ai, and add him on Facebook. Email tips and suggestions to allumbokhari@protonmail.com.