Facebook has announced the results of its first Deepfake Detection Challenge, an open competition for algorithms that detect videos manipulated by artificial intelligence. The results are promising, but there is still a lot of work to be done before automated systems can reliably spot deepfake content, a problem researchers describe as "unsolved."
Facebook says the winning algorithm can detect real-world deepfake videos with an average accuracy of 65.18 percent, which, while promising, falls well short of the reliability you would want from any automated system.
So far, deepfake video has proved to be an overstated threat to social media. While the technology has raised doubts about the trustworthiness of video evidence, the political impact of deepfakes has been minimal to date. Instead, more direct harm has come from nonconsensual pornography, which is easier for social media platforms to identify and remove.
Mike Schroepfer, Facebook’s chief technology officer, told reporters at a news conference that he was pleased with the results of the challenge, which he said would set a benchmark for researchers and guide their future work. Some 2,114 contestants submitted more than 35,000 detection algorithms to the competition, testing their ability to identify deepfakes in a dataset of about 100,000 short clips. Facebook hired more than 3,000 actors to produce the clips, recording them in conversation in natural settings; some of the clips were then altered with AI to swap other actors’ faces onto their bodies.
The researchers were given access to this data to train their algorithms, and when tested against a public set drawn from the same material, the best achieved an accuracy of 82.56 percent. However, when the same algorithms were tested against a "black box" dataset of unseen footage, they performed much worse, with the best-scoring model reaching only 65.18 percent accuracy. This gap shows that detecting deepfakes in the wild remains a very challenging problem.
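To make the public-set versus black-box gap concrete, here is a minimal sketch of how such accuracy figures are computed. The labels and predictions below are invented toy data, not taken from the challenge; the point is simply that a detector can score well on footage resembling its training data while dropping sharply on unseen material.

```python
# Hypothetical sketch of accuracy scoring: 1 = fake, 0 = real.
# All data below is made up for illustration.

def accuracy(predictions, labels):
    """Fraction of videos whose real/fake prediction matches the label."""
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

# Public test set: footage similar to what the model trained on.
public_labels = [1, 0, 1, 1, 0, 1, 0, 0]
public_preds  = [1, 0, 1, 1, 0, 1, 1, 0]   # 7 of 8 correct

# "Black box" set: unseen footage the model never encountered.
blackbox_labels = [1, 0, 1, 1, 0, 1, 0, 0]
blackbox_preds  = [1, 0, 0, 1, 1, 1, 1, 0]  # 5 of 8 correct

print(f"public set accuracy:    {accuracy(public_preds, public_labels):.2%}")
print(f"black-box set accuracy: {accuracy(blackbox_preds, blackbox_labels):.2%}")
```

The drop between the two printed numbers mirrors, in miniature, the fall from 82.56 percent to 65.18 percent that the winning model showed when moved from familiar to unseen footage.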
Facebook is currently developing its own deepfake detection technology. The company announced earlier this year that it would ban users from posting deepfake videos, though critics argue that the greater disinformation threat comes from so-called "shallow fakes": videos edited with traditional methods. The winning algorithms from the challenge will be released as open source to help other researchers, but Facebook says it will keep its own detection technology secret to prevent it from being reverse-engineered.