Facebook has been working for years to develop tools to prevent and eliminate revenge porn on its apps, but apparently that hasn’t stopped bad actors from trying to share such images, CNET reported. Facebook, which also owns the popular apps Instagram, Messenger, and WhatsApp, must evaluate about 500,000 revenge porn reports a month, according to NBC News.
Facebook, the world’s largest social network, launched an artificial intelligence tool earlier this year that detects revenge porn (also known as non-consensual intimate images) before users report it. In 2017, the company also launched a pilot program that allows users to submit private images to Facebook preemptively, to prevent them from being shared on its networks.
However, Radha Plumb, head of product policy research at Facebook, told NBC News that the initial explanation of the pilot program was not clear enough, and that after receiving negative feedback, the company launched a research program in 2018 to explore how best to prevent revenge porn and support victims.
“In hearing how scary the experience of having your image shared is, the product team’s real motivation was to figure out whether we could do something better than just respond to reports,” Plumb said.
Facebook says it now has a 25-person team, not including content moderators, focused on preventing the unauthorized sharing of private photos and videos.
Facebook did not immediately respond to a request for comment.