Social-media giant Facebook has revised its Community Standards, violations of which can lead to action by the site’s administrators, including the termination of offending pages. In introducing the updated standards, global policy management chief Monika Bickert and deputy general counsel Chris Sonderby explain that the objective is to help users more clearly understand what they’re not allowed to share on their pages:
We have a set of Community Standards that are designed to help people understand what is acceptable to share on Facebook. These standards are designed to create an environment where people feel motivated and empowered to treat each other with empathy and respect.
Today we are providing more detail and clarity on what is and is not allowed. For example, what exactly do we mean by nudity, or what do we mean by hate speech? While our policies and standards themselves are not changing, we have heard from people that it would be helpful to provide more clarity and examples, so we are doing so with today’s update.
There are also times when we may have to remove or restrict access to content because it violates a law in a particular country, even though it doesn’t violate our Community Standards. We report the number of government requests to restrict content for contravening local law in our Global Government Requests Report, which we are also releasing today. We challenge requests that appear to be unreasonable or overbroad. And if a country requests that we remove content because it is illegal in that country, we will not necessarily remove it from Facebook entirely, but may restrict access to it in the country where it is illegal.
That bit about suppressing content based on complaints from various governments is where the business of content restriction gets sticky, especially when it comes to “hate speech.” As a global service of immense popularity, Facebook deals daily with a reality many Americans have not directly experienced: the rest of the world — including the nicer, America-friendly parts — doesn’t understand “freedom of speech” the way we do, or at least the way we used to. Just about every government on Earth has speech controls that would be deemed unconstitutional here. Then you’ve got the authoritarian regimes that flagrantly crush every word of speech they don’t like, with a zeal that suggests they understand perfectly well the dangerous power of mass communication to spread doubleplus ungood ideas through the minds of their subjects.
Generally speaking, Facebook leans toward accommodating these totalitarian regimes, with periodic objections, rather than telling them to file their speech codes where the sun doesn’t shine. They feel it’s better to have some presence in the benighted corners of the world than to walk away entirely. Critics suggest this position is illuminated by a business appetite for revenue from massive speech-controlled markets, such as China or the combined Middle East, rather than by an ideological dedication to shining the filtered light of social-media communication into every corner of the world. The bottom line is that Facebook must deal with speech restrictions Americans would consider intolerable, or else be blocked completely from large potential audiences.
Which brings us to the part about “hate speech” in the revised Community Standards. This is a considerably more difficult topic than nudity or graphic violence, with a great deal of room to impose eye-of-the-beholder subjective standards. Certain groups and regimes have wide, bloodshot eyes that behold a great deal of speech they consider “hateful” and worthy of suppression.
The first part of Facebook’s hate speech standard forbids “content that directly attacks people based on their race; ethnicity; national origin; religious affiliation; sexual orientation; sex, gender, or gender identity; or serious disabilities or diseases.” Something tells me we’re already getting into some tall grass with the “sex, gender, or gender identity” category, where “hatred” is a matter of such subjective judgment that even those who labor with exquisite care to be sensitive and supportive toward every “gender identity” they can think of are denounced as “haters” by members of the identity groups they forgot to salute.
Also, the very next line in the Facebook standards says that “organizations and people dedicated to promoting hatred against these protected groups are not allowed a presence on Facebook.”
What “protected groups”? Every person on Earth has a race, ethnicity, national origin, religious affiliation, sexual orientation, etc. that could conceivably be attacked in a hateful manner. Everyone would belong to a number of “protected groups” if these standards were enforced evenly. If there’s a list of groups that will receive preferential treatment in a hierarchy of hurt feelings, Facebook needs to publish it.
Then comes the really tricky part, where Facebook tries to navigate the troubled waters of ideas labeled as “hateful” by oppressive groups and governments looking to drown them in censorship:
People can use Facebook to challenge ideas, institutions, and practices. Such discussion can promote debate and greater understanding. Sometimes people share content containing someone else’s hate speech for the purpose of raising awareness or educating others about that hate speech. When this is the case, we expect people to clearly indicate their purpose, which helps us better understand why they shared that content.
We allow humor, satire, or social commentary related to these topics, and we believe that when people use their authentic identity, they are more responsible when they share this kind of commentary. For that reason, we ask that Page owners associate their name and Facebook Profile with any content that is insensitive, even if that content does not violate our policies. As always, we urge people to be conscious of their audience when sharing this type of content.
Hmm… I can think of a number of audiences that would demand a particularly high level of “consciousness” on the part of potential offenders. There are audiences that consider virtually all criticism to be “hateful,” long before we even get into the tricky business of judging whether humor and satire are offensive. If you happen to live in the shadow of such a thin-skinned government, you might not be eager to associate your name and Facebook profile with content it deems “insensitive.”
“Hate speech” is difficult to come to terms with, even with the most well-intentioned efforts to establish a polite virtual community where everyone feels comfortable, because it’s highly subjective at the margins. No doubt Facebook has shut down a number of pages, under every version of its community standards, that virtually every observer would agree were hateful and inappropriate, especially when those pages got into the business of explicit or implied threats.
The more ambiguous cases would probably be a source of tension between Facebook users and administrators under any terms of service, not just because there are provocative souls looking to push the bounds of discourse outward, but because there are perpetually offended groups looking to draw them inward. These community guidelines probably seem reassuring to users worried about getting harassed by vicious jerks, or stumbling into pits of online horror, particularly if those users have children. But they read as vague enough to be a bit ominous to those concerned about organized speech-suppression campaigns and totalitarian government crackdowns.