Facebook AI Research says it has created a machine learning system to de-identify faces in video, according to media reports. Startups like D-ID, along with earlier academic work, have applied de-identification technology to still images, but this is reportedly the first such technology for video. In initial tests, the method was able to fool a state-of-the-art facial recognition system.
The AI modifies video automatically and can be applied to any video without retraining. It maps a person's face to a slightly distorted version, making it difficult for facial recognition technology to identify the person.
"Face recognition can lead to loss of privacy, and face replacement technology may be misused to create misleading videos," the paper explaining the approach says. "Recent world events concerning advances in, and abuse of, facial recognition technology invoke the need to understand methods that successfully deal with de-identification. Our contribution is the only one suitable for video, including live video, and achieves quality that far surpasses existing methods."
Facebook's approach pairs an adversarial autoencoder with a classifier network. As part of training, the system tries to fool a facial recognition network, Lior Wolf, a Facebook AI Research scientist and professor at Tel Aviv University, told VentureBeat in a phone interview.
"So the autoencoder tries to make it harder for the facial recognition network to recognize the person, and in fact, if you want to create a way to mask someone's voice or online behavior or any other characteristic, you can use this kind of autoencoder," Wolf said.
Like the FaceSwap deepfake software, the AI uses an encoder-decoder architecture to generate images. During training, a person's face is distorted and then fed into the network. The system then generates distorted and undistorted images of the face, which can be embedded in the video.
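The idea behind such a system can be sketched with a toy numerical example. Everything below (vector sizes, the linear "networks", the loss weighting `alpha`) is an illustrative assumption rather than the paper's actual architecture; the point is only the two competing objectives: keep the output close to the input face while pushing a frozen recognizer's identity embedding away from the original's.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins, not Facebook's real networks:
# - a "face" is a flat 16-d feature vector
# - the encoder/decoder are single linear maps
# - the "recognizer" is a frozen linear embedding the system tries to fool
DIM, LATENT = 16, 8
W_enc = rng.normal(scale=0.3, size=(LATENT, DIM))
W_dec = rng.normal(scale=0.3, size=(DIM, LATENT))
W_rec = rng.normal(scale=0.3, size=(DIM, DIM))  # frozen face recognizer

def autoencode(x):
    """Compress a face to a low-dimensional code, then decode it back."""
    return W_dec @ (W_enc @ x)

def recognizer(x):
    """Identity embedding of a face (the network to be fooled)."""
    return W_rec @ x

def de_id_loss(x, x_out, alpha=0.5):
    """Lower is better: stay visually close to the input while pushing
    the recognizer's identity embedding away from the original's."""
    recon = np.sum((x_out - x) ** 2)                          # similarity term
    ident = np.sum((recognizer(x_out) - recognizer(x)) ** 2)  # adversarial term
    return recon - alpha * ident
```

In a real adversarial setup the encoder and decoder weights would be trained by gradient descent on a loss of this shape, with the recognizer held fixed, so the decoder learns outputs that look like the input but land far from it in the recognizer's embedding space.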
A company spokesperson told VentureBeat that Facebook has no plans to apply the technology to any part of the Facebook family of apps, but that the method could enable public speech that remains recognizable to people while being unrecognizable to AI systems.
Anonymized faces in video could also be used for privacy-conscious training of AI systems. In May, Google used Mannequin Challenge videos to train AI systems to improve depth perception in video. Researchers at the University of California, Berkeley have also used YouTube videos as training data to teach AI "agents" to dance or do backflips like humans.
The work will be presented at the International Conference on Computer Vision (ICCV) in Seoul, South Korea, next week.
The news comes after Mike Schroepfer, Facebook's chief technology officer, announced last week that a preview dataset for the Deepfake Detection Challenge is now available, and that Amazon's AWS has joined the challenge initiative launched by Facebook and Microsoft last month. The challenge aims to improve the robustness of deepfake detection systems.
Facebook, which made facial recognition a default setting on its platform earlier this year, is fighting a $35 billion class-action lawsuit over alleged misuse of facial recognition data. This week, the social network also launched a News tab for some users in the United States.