Microsoft Research wants to develop AI technology that makes it easier to change faces

According to media reports, two research papers published jointly by Microsoft Research and Peking University describe a new face-swapping AI alongside a new set of facial-forgery detection tools. Both tools produce higher-quality results than other available services while maintaining comparable performance and requiring less data to do so.

FaceShifter is the face-swapping solution proposed by the two institutions. Like Reflect and FaceSwap, the new tool accounts for variables such as color, lighting, and facial expression. According to the authors of the published paper, however, it differs in that it can also handle differences in pose and angle.

FaceShifter is understood to use a generative adversarial network (GAN), in which a generator that synthesizes the swapped face is trained against a discriminator that tries to distinguish generated images from real ones. The detection side of the work, by contrast, requires no prior knowledge of the face-swapping method used and no human supervision: it simply produces a grayscale image indicating whether the input can be decomposed into a blend of two images from different sources.
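To make the generator-versus-discriminator idea concrete, here is a deliberately tiny sketch of the adversarial training loop on 1-D toy data. Everything in it (the one-parameter generator, the logistic discriminator, the Gaussian data, the learning rate) is invented for illustration and bears no relation to FaceShifter's actual architecture; it only shows the two models being updated against each other.

```python
import numpy as np

rng = np.random.default_rng(0)

def discriminator(x, w, b):
    """Logistic model: probability that a sample is real."""
    return 1.0 / (1.0 + np.exp(-(w * x + b)))

def generator(z, scale, shift):
    """Maps noise z to a sample via a learnable affine transform."""
    return scale * z + shift

# "Real" data ~ N(3, 1); the generator starts producing ~ N(0, 1).
w, b = 1.0, 0.0          # discriminator parameters
scale, shift = 1.0, 0.0  # generator parameters
lr = 0.05

for step in range(2000):
    z = rng.normal(size=64)
    real = rng.normal(3.0, 1.0, size=64)
    fake = generator(z, scale, shift)

    # Discriminator: gradient ascent on log D(real) + log(1 - D(fake)).
    d_real = discriminator(real, w, b)
    d_fake = discriminator(fake, w, b)
    w += lr * (np.mean((1 - d_real) * real) + np.mean(-d_fake * fake))
    b += lr * (np.mean(1 - d_real) + np.mean(-d_fake))

    # Generator: gradient ascent on log D(fake) (non-saturating loss),
    # i.e. it tries to make its samples look real to the discriminator.
    fake = generator(z, scale, shift)
    d_fake = discriminator(fake, w, b)
    shift += lr * np.mean((1 - d_fake) * w)
    scale += lr * np.mean((1 - d_fake) * w * z)
```

After training, the generator's output distribution drifts from its starting mean of 0 toward the real data's mean of 3, because matching the real distribution is the only way to stop the discriminator from telling the two apart.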

In addition to FaceShifter, the papers present Face X-Ray, a tool for detecting fake facial images. Face swapping and image manipulation are frequently put to malicious use, which has led researchers to develop new AI tools that can detect these forged images. Rather than relying on prior knowledge of any particular manipulation technique, the tool is trained on FaceForensics++, a large catalog of videos.
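The core idea behind the detector is that a forged face is usually a composite: a face region from one image blended into another, leaving a subtle boundary. A rough sketch of how such a training pair might be built is below. The image sizes, the soft disc mask, and the use of random arrays as stand-in "faces" are all illustrative assumptions; the boundary map is computed as 4·M·(1−M), which peaks where the blending mask M is 0.5 and vanishes where a pixel comes purely from one source.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in "face images" from two different sources (random here).
background = rng.random((64, 64, 3))
foreground = rng.random((64, 64, 3))

# Soft blending mask M: 1 inside a central disc, feathered down to 0.
yy, xx = np.mgrid[0:64, 0:64]
dist = np.sqrt((yy - 32) ** 2 + (xx - 32) ** 2)
mask = np.clip((24 - dist) / 8.0, 0.0, 1.0)

# Composite: the kind of blended fake used as a training sample.
composite = mask[..., None] * foreground + (1 - mask[..., None]) * background

# Grayscale boundary map: zero where a pixel is purely from one
# source, maximal (1.0) along the blending boundary where M = 0.5.
face_xray = 4.0 * mask * (1.0 - mask)
```

Training a detector to predict this boundary map, rather than to recognize any one forgery tool's artifacts, is what lets the approach generalize to manipulation methods it was never shown.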

The tool has been shown to flag forged images produced by manipulation methods it has not seen before, while also reliably predicting the blended regions. The team notes, however, that this approach may fail on fully synthesized images and can be defeated by adversarial examples.