A Facebook spokesman confirmed in an email to Gizmodo that on December 19, local time, the day after Trump became the third president in U.S. history to be impeached, the company updated its criteria for hate speech.
According to the update, the following is a “comparison of people to animals or objects” that Facebook users are no longer allowed to post:
– Black people and apes or ape-like creatures.
– Black people and farm equipment.
– Jews and rats.
– Muslims and pigs.
– Muslims and sexual relations with goats or pigs.
– Mexicans and worm-like creatures.
– Women as household objects, or women as property or “objects.”
– Transgender or non-binary people referred to as “it.”
Blanket “statements of denial” of these marginalized groups’ existence are also banned, meaning that statements such as “transgender people do not exist” will likewise be removed under the new guidelines. Naturally, all of these rules apply not only to text but to content of every kind, which means offending memes are covered as well.
As with all of the company’s community standards, the consequences for posting such material range from minor to severe:
The consequences of violating our Community Standards vary depending on the severity of the violation and the person’s history on the platform. For instance, we may warn someone for a first violation, but if they continue to violate our policies, we may restrict their ability to post on Facebook or disable their profile. We may also notify law enforcement when we believe there is a genuine risk of physical harm or a direct threat to public safety.
Moreover, like the rest of Facebook’s standards, these rules spell out in advance and in specific terms what counts as hate speech, notably because which animal a group is compared to depends heavily on the race being targeted.
That is, the policy uses terms that people can understand, or attempt to evade, when facing Facebook’s much-touted AI. As early as November, for example, the company boasted that its AI had flagged some 7 million pieces of content as potential hate speech, and it has previously scoffed at the idea of organizing the moderators dedicated to the task.
These are not the only updates that flew under the radar. According to a Facebook spokesman, while some of them, such as the company’s blanket ban on census misinformation, made headlines, the bans on live-streaming executions and on mocking survivors of sexual abuse passed largely unnoticed. And in the absence of an RSS feed or any kind of notification system on the Community Standards update page, such changes are likely to keep receiving little attention.
Facebook declined to comment on why some of the policies, like the census announcement, were publicized while others were not.
In addition to the hate speech update, one of the most politically charged topics in years, the company has quietly made eight other changes to its community standards.
The “violence and incitement” policy was expanded to prohibit “misinformation that contributes to impending violence or physical harm” (rather than only misinformation that immediately contributes to such harm).
The “coordinating harm and publicizing crime” standard has been extended to prohibit census fraud, not just voter fraud.
The “fraud and deception” standard has been expanded to prohibit users from engaging in, promoting, or facilitating anything related to the “forgery or manipulation of documents,” such as fake coupons or medical prescriptions. It does the same for “betting manipulation,” “fake fundraising,” and “debt relief or credit repair scams.” Notably, recruiting labor to carry out these scams is also banned.
Policies on the “sexual exploitation of adults” have been extended to include “forced exploitation,” alongside already-prohibited acts such as non-consensual sexual touching. Mocking victims in any of these categories, or admitting involvement, also falls under the new rules.
While sharing revenge porn already violated the previous standards, the update adds that threatening to share such images, or stating an intent to share them, is also a violation, as is offering to provide the images.
The section of the “human exploitation” standard covering private citizens has been amended to include “involuntary minors.”
The “violent and graphic content” policy now prohibits live broadcasts of executions (“capital punishment”).
Policies around “cruel and insensitive” content now spell out a ban on mocked animal suffering in detail:
Imagery depicting real animals that are clearly being ridiculed or mocked while experiencing any of the following:
– Premature death.
– Serious physical injury (including mutilation).
– Physical violence from humans.
In a grim extension of the “memorialization” policy, the company added that the relatives of a Facebook user who dies by suicide may request that images of the weapon used, or anything else related to the death, be removed from the deceased’s profile photo, cover photo, and recent timeline posts. The family of a murdered Facebook user may likewise request removal of images of the attacker, whether convicted or alleged.