Deepfakes are no longer a novelty, and while people marvel at their realistic effects, their negative impact is spreading across the globe. To fight malicious Deepfake videos, Facebook launched a challenge in September that combines academic and corporate forces to detect Deepfake videos at low cost.
Participants in the project include Cornell Tech, the Massachusetts Institute of Technology, Oxford University, and the University of California, Berkeley, as well as the non-profit research organization Partnership on AI, whose members include big technology companies such as Google, Apple, Amazon, and IBM.
At this week's NeurIPS 2019 conference in Vancouver, the competition for Deepfake detection tools officially launches; it will run until March 2020. Irina Kofman, Facebook's director of artificial intelligence and one of the leaders of the challenge, says:
It’s inspiring to see partners working together in multiple fields, whether from the corporate or academic worlds, and each of us brings insights from our own field, so we can think more broadly.
Starting today, registered contestants can download a data set to train their own detection tools. Once they have completed their final design, they can submit the detection tool's code for black-box validation, during which the official system scores the tool's effectiveness. Entrants do not have to share their models, but they must agree to open-source their work to be eligible for the Challenge's rewards.
The tech giants have also contributed to the race: Facebook says it has invested more than $10 million to encourage people to compete; Amazon is providing supporting resources for contestants; and Kaggle, Google's data-science and machine-learning platform, will host the Challenge and maintain the leaderboard used to rank Deepfake detection systems.
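The article does not say how Kaggle will score submissions, but binary classification challenges of this kind are commonly ranked by log loss, which heavily penalizes confident wrong predictions. The sketch below is a hypothetical illustration of that kind of metric applied to per-video deepfake probabilities, not the Challenge's official evaluation code:

```python
import math

def log_loss(y_true, y_pred, eps=1e-15):
    """Average binary cross-entropy (log loss) over all videos.

    y_true: 1 if the video is a deepfake, 0 if it is real.
    y_pred: the detector's predicted probability that it is a deepfake.
    Predictions are clipped away from 0 and 1 so one overconfident
    wrong answer cannot produce an infinite penalty.
    """
    total = 0.0
    for t, p in zip(y_true, y_pred):
        p = min(max(p, eps), 1 - eps)
        total += t * math.log(p) + (1 - t) * math.log(1 - p)
    return -total / len(y_true)

# A confident, correct detector scores lower (better) than a
# cautious one that hedges all its answers near 0.5.
confident = log_loss([1, 0, 1, 1], [0.9, 0.1, 0.8, 0.95])
cautious = log_loss([1, 0, 1, 1], [0.6, 0.4, 0.55, 0.6])
```

Under a metric like this, leaderboard position rewards well-calibrated probabilities rather than just correct hard labels, which matters when detectors face videos unlike their training data.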
Mike Schroepfer, Facebook’s chief technology officer, said in a blog post:
Deepfake videos are synthesized with AI, and their authenticity and legitimacy have been called into question. But so far, people have not had a data set or baseline to help detect Deepfake videos. So we hope to find a solution that helps people discover these doctored videos and avoid being misled.
To speed up the creation of the data set, Facebook commissioned researchers to generate realistic Deepfake videos to serve as training material for testing the effectiveness of detection tools. Facebook said the videos feature a "diverse" set of paid actors, about 54 percent women and 46 percent men.
Christian Ferrer, Facebook's AI research manager, said the data set, which contains more than 100,000 videos in total, was trialed at the International Conference on Computer Vision in October. It does not include any user data and is accessible only to teams that have signed the usage agreement.
A tool developed by the University of California, Berkeley and the University of Southern California, released ahead of the challenge, already achieves recognition accuracy above 90%. But Deepfake technology is constantly evolving, making detection increasingly difficult.
"It's like a cat-and-mouse game: if we design a Deepfake detector, it's like handing hackers a new simulator to test against," says Siddharth Garg, an assistant professor of computer engineering at New York University's Tandon School of Engineering.
All in all, winning the "arms race" against Deepfake producers may still be a long way off, but the race to build detection tools is now underway, and forces from all sides will fight malicious Deepfakes together. We have reason to be a little more optimistic.