Facebook says it is using artificial intelligence (AI) technology to proactively detect more hate speech, media reported. A new transparency report released Thursday details how the company handles hate speech on its platforms after the policy changes it announced earlier this year, though it still leaves some of the big questions unanswered.
Facebook’s quarterly report includes new information about the prevalence of hate speech. The company estimates that between 0.10 and 0.11 percent of the content Facebook users see violates its hate speech rules, equivalent to “10 to 11 pieces of hate speech for every 10,000 content views.” The estimate is based on a random sample of posts and measures how widely violating content is seen, rather than simply counting the number of posts. However, it has not yet been verified by external sources. Guy Rosen, Facebook’s vice president of integrity, told reporters on a call that the company is planning and working on an audit.
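The arithmetic behind the prevalence figure is straightforward: a percentage of views can be restated as a rate per 10,000 views. The sketch below is purely illustrative; the function name and numbers are assumptions for demonstration, not Facebook’s methodology.

```python
def views_per_10k(prevalence_pct: float) -> float:
    """Convert a prevalence percentage into violations per 10,000 views."""
    return prevalence_pct / 100 * 10_000

# Facebook's reported range of 0.10-0.11 percent of views:
low = views_per_10k(0.10)
high = views_per_10k(0.11)
print(f"{low:.0f} to {high:.0f} hate-speech views per 10,000 content views")
```

In other words, the metric weights content by exposure, so one widely seen violating post counts for more than many posts nobody saw.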
Facebook insists it proactively removes most hate speech before users report it. Over the past three months, about 95 percent of the hate speech removed from Facebook and Instagram was taken down proactively, the company said.
That’s a huge leap from its initial efforts — at the end of 2017, it proactively removed only about 24 percent of hate speech. Facebook has also stepped up the volume of removals: about 6.5 million items were taken down in the third quarter of 2020, up from 645,000 in the fourth quarter of 2019. Content from organized hate groups, which is tracked as a separate moderation category, saw a much smaller increase, from 139,900 to 224,700 items.
Facebook said some of the increase in removals was due to improvements in AI. In May, Facebook launched a research competition aimed at developing systems that better detect “hateful memes” online. In its latest report, the company points to its technology for analyzing text and images together, which can catch formats such as image macros.
However, this approach has obvious limitations. As Facebook points out, a new piece of hate speech may not resemble earlier examples because it references a new trend or news story. Detection therefore depends on Facebook’s ability to analyze multiple languages and track trends in specific countries, as well as on Facebook’s definition of hate speech, a category that has changed over time. Holocaust denial, for example, was banned only last month.
In addition, AI doesn’t necessarily help Facebook’s human moderators. Despite recent changes, the COVID-19 pandemic disrupted Facebook’s normal review process, since moderators were not allowed to review highly sensitive content from home. In its quarterly report, Facebook said its enforcement numbers were returning to pre-pandemic levels, partly thanks to AI.
But some employees complained that they were forced to return to work before it was safe, and 200 content moderators signed an open letter demanding better protection against the virus. In that letter, the moderators said automation had failed to solve serious problems. “The AI isn’t up to the job. Important speech gets swept into Facebook’s filters, while dangerous content like self-harm stays up,” they said.
Rosen disagreed with their assessment and said Facebook’s offices meet or exceed the requirements for a safe workspace. “These are incredibly important workers who do an incredibly important part of this job, and our investment in AI is helping us detect and remove this content to keep people safe,” he said.
Facebook’s critics, including U.S. lawmakers, may still argue it isn’t catching enough hateful content. Last week, 15 U.S. senators pressed Facebook to respond to posts attacking Muslims around the world, and asked it to provide more country-specific information about its moderation practices and the targets of hate speech. Facebook CEO Mark Zuckerberg defended the company’s moderation approach at a Senate hearing, suggesting that Facebook might include such data in future reports.