Leaked Reports: How Facebook Regulates Graphic Content

The Guardian has published a series of reports on Facebook’s internal guidelines for moderating graphic content, offering new insight into how the social media platform regulates what users can post.

The reports’ content

The reports disclose Facebook’s internal rules on credible threats of violence, non-sexual child abuse, graphic violence, and cruelty to animals.

The Guardian says it has studied more than 100 internal “training manuals, spreadsheets and flowcharts” that help the site’s moderators decide what to do when content is reported.

Facebook’s files also reveal the company’s desire to offer a platform for free speech while avoiding real-world harm.

The site uses automated systems to remove content involving child sexual abuse or terrorism. However, these systems don’t do the whole job; whatever they leave undecided is passed on to human moderators.
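To make that division of labour concrete, here is a minimal sketch of such a two-stage pipeline in Python. Everything in it — the Post type, the automated_filter function, and the placeholder signatures — is a hypothetical illustration, not Facebook’s actual system.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Post:
    post_id: int
    text: str

def automated_filter(post: Post) -> bool:
    """Hypothetical stand-in for the automated systems: returns True
    when a post is a clear-cut violation (e.g. matches a known
    child-abuse or terrorism signature) and should be removed outright."""
    banned_signatures = {"<csa-signature>", "<terror-signature>"}  # placeholder signals
    return any(sig in post.text for sig in banned_signatures)

def triage(posts: List[Post]) -> List[Post]:
    """First pass: drop clear-cut violations automatically; everything
    the automated system cannot decide goes to human moderators."""
    review_queue: List[Post] = []
    for post in posts:
        if automated_filter(post):
            continue  # removed automatically, never reaches a human
        review_queue.append(post)  # left for a moderator to judge
    return review_queue
```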

How Facebook’s moderators handle graphic content

In the Credible Violence files, the guideline says that “people commonly express disdain or disagreement by threatening or calling for violence in generally facetious and unserious ways.” It also contains examples of what can and cannot be said.

A statement such as “I’m going to kill you John!” is acceptable, but “I’m going to kill you John, I have the perfect knife to do it!” would be removed, because the added detail makes the threat more credible.

The moderators’ role is to differentiate between an offhand remark and a genuine real-world threat. The guideline lays out rules for determining how serious a post is.
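The reported examples suggest the deciding factor is specificity: a threat paired with a concrete, actionable detail is treated as credible. Below is a toy sketch of that distinction, with an assumed keyword list and function name rather than Facebook’s real criteria.

```python
# Assumed markers of specificity, purely for illustration.
SPECIFIC_DETAILS = ("knife", "gun", "address", "tomorrow", "at work")

def is_credible_threat(statement: str) -> bool:
    """Flag a threat only when violent language is paired with a
    concrete detail suggesting real intent."""
    text = statement.lower()
    violent = "kill you" in text
    specific = any(detail in text for detail in SPECIFIC_DETAILS)
    return violent and specific

print(is_credible_threat("I'm going to kill you John!"))
# False: generic venting, left up
print(is_credible_threat("I'm going to kill you John, I have the perfect knife to do it!"))
# True: specific method named, removed
```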

Additionally, some content is explicitly permitted: “photos and videos documenting animal abuse” are allowed, for example, with the aim of raising awareness.

Footage of users attempting to harm themselves is also permitted, because Facebook “doesn’t want to censor or punish people in distress who are attempting suicide.”