Recently leaked internal memos outlining Facebook’s guidelines for removing content have provided unique insight into what is permissible and what is forbidden on the social media platform.
The Guardian reportedly gained access to Facebook’s internal memos, reviewing more than “100 internal training manuals, spreadsheets, and flowcharts” that apparently outline the social media platform’s rules for removing “hate speech,” disturbing imagery, sexual imagery, credible threats, and a multitude of other categories of unacceptable content. Here are the six most important things we have learned from these internal memos.
1: Some Content Moderators Have As Little As 10 Seconds to Remove Content
According to The Guardian, many of Facebook’s moderators are required to review a huge number of reports daily, with some judging millions of reports on a regular basis. With such a massive volume of content to process every day, time is of the essence.
This has reportedly led to moderators being forced to make challenging decisions about whether or not to delete content in very little time. The moderators often have “just 10 seconds” to decide whether to remove or allow certain content on Facebook. One source said, “Facebook cannot keep control of its content. It has grown too big, too quickly.”
2: Many Moderators Are Confused by Facebook’s Guidelines, Particularly Relating to Sexual Imagery
Many moderators reportedly take issue with some of Facebook’s guidelines, finding them inconsistent and odd, particularly those relating to sexual imagery on the platform. One example is “revenge porn,” the act of sharing nude or sexual imagery of someone without their consent or in an attempt to shame them. The issue for many moderators is that it is very hard to determine whether the person in a photo consented to having their image published, or whether the image itself is even pornographic or sexual.
A screenshot of Facebook’s guidelines relating to revenge porn can be seen below:
3: Threats Against Public Figures, such as President Trump, Are Banned from Facebook
What may shock many readers is that Facebook specifically states that threats against public figures, such as President Trump, are explicitly banned. The social media company very openly states that threats such as “someone shoot Trump” should be actively deleted by moderators; however, a cursory glance across many anti-Trump Facebook pages will reveal a number of similar threats left untouched by the Facebook moderation team.
While Facebook bans direct threats against a protected class or an individual, general disturbing comments such as “To snap a bitch’s neck, make sure to apply all your pressure to the middle of her throat” are apparently acceptable. In one of the leaked memos, Facebook states that “people use violent language to express frustration online” and feel “safe to do so” on the site. It goes on to say, “They feel that the issue won’t come back to them and they feel indifferent towards the person they are making the threats about because of the lack of empathy created by communication via devices as opposed to face to face.”
An example of the sort of “credible violence” that is banned by Facebook can be seen below. A green tick means that the comment should be deleted, while a red X means that the comment should not be deleted.
4: Videos of Suicide or Death Do Not Have to Be Deleted
Facebook’s policy regarding violent videos is another complex issue. In recent months, suicide and murder on Facebook’s live streaming platform have become a hot topic: many young teenagers have used the Facebook Live video streaming platform to broadcast their suicides, while a man nicknamed the “Facebook Killer” used the live video feature to broadcast the murder of an elderly man. In response to many of the livestreamed suicides, Facebook has introduced suicide prevention tools on its streaming platform.
However, Facebook’s official policy is not to automatically delete all of these videos. Videos in which death and violence are celebrated are to be deleted, but it is acceptable to leave such videos on the platform if “they can help create awareness of issues such as mental illness.” Videos of self-harm are also permitted because Facebook “doesn’t want to censor or punish people in distress.” Videos of death are deleted if people in the video or in the comments are celebrating that death.
5: Videos Of Abortions Are Fine – As Long As There is No Nudity
In one of the most bizarre rules that Facebook moderators must follow, video footage of actual abortion procedures is reportedly totally acceptable, as long as no nudity is shown. Under Facebook’s “Graphic Violence” guidelines, the definition of “violent death” includes humans dying in accidents, murders, or suicides; videos of fetuses are not included in this definition.
The leaked slide reads, “Videos of abortion only violate if they contain nudity.” A picture of the slide can be seen below:
6: Videos of Animal Abuse Are Permitted
Facebook permits videos of animal abuse, according to its guidelines, which state, “We allow photos and videos documenting animal abuse for awareness, but may add viewer protections to some content that is perceived as extremely disturbing by the audience.” The guidelines continue, “Generally, imagery of animal abuse can be shared on the site. Some extremely disturbing imagery may be marked as disturbing.”
Overall, these guidelines provide an interesting insight into what Facebook deems acceptable, the contexts in which content is deleted, and how the company may moderate its platform in the future. As Facebook grows larger and increasingly becomes the arbiter of truth in many situations, it is important to understand how the company decides what may be published on its website, so that it can be held accountable when these rules are broken.
Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship. Follow him on Twitter @LucasNolan_ or email him at lnolan@breitbart.com