Facebook details how it uses AI to detect COVID-19 fake news and hate speech

In a report released Monday, Facebook details how it uses a combination of human fact-checkers, moderators and artificial intelligence to enforce its community standards more efficiently. The report, called the Community Standards Enforcement Report, contains data and findings from the preceding six months, but the focus is on artificial intelligence.


During the COVID-19 outbreak, Facebook has relied more heavily on technology to help manage moderation. Facebook also reached a $52 million class-action settlement with current and former moderators to compensate them for mental health problems developed on the job, particularly post-traumatic stress disorder, The Verge reported Tuesday. The Verge has reported extensively on the working conditions at the companies Facebook hires to moderate content on its platform.

Guy Rosen, the company’s vice president of integrity, said: “This report only includes data as of March 2020, so it doesn’t reflect the full impact of the changes we made during the pandemic. We expect to see the impact of those changes in our next report, and we will be transparent about them.”

Given the current situation, Facebook’s report does contain new information about how the company uses its AI tools specifically to combat coronavirus-related misinformation and other forms of platform abuse, such as price gouging on Facebook Marketplace.

“Since March 1, we have removed more than 2.5 million items for the sale of masks, hand sanitizers, surface disinfecting wipes and COVID-19 test kits,” the blog post read. “But these are difficult challenges, and our tools are far from perfect. Furthermore, the adversarial nature of these challenges means the work will never be done.”
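Facebook has not described the detection pipeline behind those removals, but as a rough illustration of the task, a first-pass screen over Marketplace listings might look something like the hypothetical keyword filter below. Every pattern and function name here is an assumption for illustration; production systems rely on trained classifiers rather than hand-written rules.

```python
import re

# Hypothetical terms for items banned from sale during the COVID-19
# policy window; a real system would use trained classifiers instead.
BANNED_ITEM_PATTERNS = [
    r"\bn95\b",
    r"\bface\s*masks?\b",
    r"\bhand\s*sanitizer\b",
    r"\bdisinfect(?:ing|ant)\s*wipes?\b",
    r"\bcovid[- ]?19\s*test\s*kits?\b",
]

_COMPILED = [re.compile(p, re.IGNORECASE) for p in BANNED_ITEM_PATTERNS]

def flag_listing(title: str, description: str) -> bool:
    """Return True if a Marketplace listing mentions a banned item."""
    text = f"{title} {description}"
    return any(p.search(text) for p in _COMPILED)

print(flag_listing("N95 masks for sale", "Box of 20, ships fast"))  # True
print(flag_listing("Vintage floor lamp", "Works great"))            # False
```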

Facebook says its warning labels are working: 95 percent of the time, people who are warned that a piece of content contains misinformation decide not to view it. But applying these labels across such a huge platform has proved to be a challenge. For one thing, Facebook is finding that a lot of misinformation and hate speech now appears in images and videos, not just in text or links to articles.

In another blog post, the company wrote: “We have found that a significant proportion of hate speech on Facebook globally appears in photos or videos. Like other content, hate speech can be multimodal: a meme, for example, might use text and images together to attack a particular group of people.”
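To make “multimodal” concrete: such a model has to look at the text and the image together, because neither alone may be hateful. The PyTorch sketch below shows one common pattern, late fusion, in which a text embedding and an image embedding are projected into a shared space, concatenated, and classified. This is a minimal sketch, not Facebook’s architecture; the random tensors stand in for outputs of pretrained language and vision encoders.

```python
import torch
import torch.nn as nn

class MultimodalClassifier(nn.Module):
    """Toy late-fusion model scoring text + image features for policy violations."""

    def __init__(self, text_dim: int = 300, image_dim: int = 512, hidden: int = 128):
        super().__init__()
        self.text_proj = nn.Linear(text_dim, hidden)
        self.image_proj = nn.Linear(image_dim, hidden)
        # Late fusion: concatenate the two modality embeddings, then classify.
        self.classifier = nn.Sequential(nn.ReLU(), nn.Linear(2 * hidden, 1))

    def forward(self, text_feats: torch.Tensor, image_feats: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([self.text_proj(text_feats), self.image_proj(image_feats)], dim=-1)
        return torch.sigmoid(self.classifier(fused))  # probability of a violation

model = MultimodalClassifier()
caption = torch.randn(1, 300)  # stand-in for a text embedding of a meme's caption
image = torch.randn(1, 512)    # stand-in for an image embedding of the meme
print(model(caption, image))
```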

The company acknowledges that this is a tricky challenge for AI. Because of complicating factors such as wording and language differences, trained models not only struggle to parse a meme image or video; the software must also be trained to find duplicates or slightly modified versions of the same content when it resurfaces on Facebook.
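Finding re-uploads of lightly edited content is a near-duplicate matching problem, often attacked with perceptual hashing: visually similar images hash to values within a small Hamming distance, even after resizing or re-encoding. The average-hash sketch below (using Pillow) is a generic illustration of that idea, not Facebook’s own method, which the company describes as far more sophisticated.

```python
from PIL import Image  # pip install Pillow

def average_hash(img: Image.Image, size: int = 8) -> int:
    """Perceptual "average hash": grayscale, shrink, threshold against the mean."""
    small = img.convert("L").resize((size, size))
    pixels = list(small.getdata())
    avg = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (p > avg)  # 1 if the pixel is brighter than average
    return bits

def hamming(a: int, b: int) -> int:
    """Count of differing bits; small values suggest a near-duplicate."""
    return bin(a ^ b).count("1")

# Demo with a synthetic gradient; in practice, a new upload would be
# compared against hashes of images already removed for violations.
original = Image.linear_gradient("L").convert("RGB")
near_copy = original.resize((180, 180))  # a lightly modified re-upload
print(hamming(average_hash(original), average_hash(near_copy)))  # small => match
```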