An American congressman posted a message on Facebook calling for the massacre of radicalized Muslims, and it was not deleted. Yet when Black Lives Matter activist Didi Delgado described white people as racist, Facebook deleted her post and disabled her account for seven days. This contrast illustrates the guidelines that the social network's moderators must follow.
Following the terrorist attack in London in early June, US congressman Clay Higgins posted a message on his Facebook account calling for the massacre of radicalized Muslims. "Kill them," Higgins urged. The violent content of his message was left untouched by Facebook's moderators, who are in charge of reviewing the platform to delete offensive messages.
However, when Black Lives Matter activist Didi Delgado published a post describing white people as racist, it was immediately deleted and her Facebook account was disabled for seven days.
The documents published by ProPublica reveal new details about the guidelines followed by those in charge of moderating content on the social network. Over the last decade, the company has developed a large set of rules drawing fine distinctions between what is and is not allowed. These documents explain the different treatment of Higgins's and Delgado's posts. The first case targets "radicalized Muslims", that is, a specific subgroup, which falls into a less protected category. Delgado's comment, by contrast, targeted whites in general, so it was classed as hate speech against a "protected category" and had to be removed.
The rules developed by Facebook reflect that one of the most protected categories is white people, while groups such as black children or female drivers rank lower. In other words, offensive content aimed at white people takes priority over content aimed at black children or female drivers. One training slide poses the question: "Which of these groups is protected from hate speech?" The answer is "white men".
Another rule developed by the social network, although the company confirms it is no longer in force, was to delete messages from activists and journalists in disputed territories such as Palestine, Kashmir, Crimea and Western Sahara.
The challenge of policing content on Facebook becomes even more complex when it extends to public figures. The documents reviewed by ProPublica examine, for example, Donald Trump's messages about his proposed immigration ban, which violated the company's rules. According to the Wall Street Journal, Zuckerberg exempted Trump's messages from those policies, and the comments were not deleted.
The way Facebook controls its content has drawn growing scrutiny in recent months, driven by the rise of "fake news" and concern that terrorists are using social networks for recruitment.
This is not the company's first program to control offensive content: it has long worked to remove child pornography. Recent articles in The Guardian and Süddeutsche Zeitung detail that Facebook's biggest difficulties lie in removing messages containing graphic violence, child abuse, revenge porn and self-harm.