Facebook has developed a “de-identification” system that lets you go “incognito” in live video

DeepPrivacy: A Generative Adversarial Network for Face Anonymization, a paper from the Norwegian University of Science and Technology, takes on a new, more challenging way of defeating face recognition systems: anonymizing faces without changing the distribution of the original data. Put more colloquially, it generates a realistic new face while leaving the original person’s pose and background untouched.

With this technology in place, face detection systems still function normally, but the original identity is completely unrecognizable, and forgers cannot pass the anonymized face off as someone else to gain access to systems protected by facial recognition.

According to the authors’ tests, face detection on their anonymized images stays close to detection on the originals, with average detection precision dropping by only 0.7% for anonymized images. The identity information contained in the face, meanwhile, is 100% anonymized.
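The objective can be illustrated with a deliberately crude sketch: replace the face region with new content that matches the original pixel statistics, so downstream systems still see a plausible face region, while the pixel-level “identity” is destroyed. DeepPrivacy itself uses a conditional generative network conditioned on pose and background; nothing below is taken from the paper, and all values are invented for illustration.

```python
import numpy as np

# Crude illustration of the anonymization objective: the replacement
# content matches the original pixel distribution (mean/std) but shares
# no per-pixel content with it. DeepPrivacy uses a conditional GAN;
# this toy version just resamples from matching statistics.
rng = np.random.default_rng(1)
face = rng.normal(loc=0.4, scale=0.1, size=(32, 32))  # hypothetical face crop

def anonymize(crop):
    """Replace a crop with fresh samples from the same pixel distribution."""
    return rng.normal(loc=crop.mean(), scale=crop.std(), size=crop.shape)

anon = anonymize(face)
# Pixel statistics are preserved...
print(abs(anon.mean() - face.mean()) < 0.02, abs(anon.std() - face.std()) < 0.02)
# ...but the per-pixel content (the "identity") is uncorrelated.
print(abs(np.corrcoef(face.ravel(), anon.ravel())[0, 1]) < 0.15)
```

The point of the sketch is the two-sided constraint: an anonymizer is only useful if it preserves what detectors rely on while removing what recognizers rely on.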

Using AI to trick AI: a genuinely impressive move.

Facebook, too, has experimented with anti-face-recognition before, and its efforts have recently produced results.

Facebook’s artificial intelligence lab, Facebook AI Research (FAIR), recently developed a “de-identification” system that can trick facial recognition systems into, for example, identifying you as a female celebrity, the media outlet VentureBeat reported.

The technology uses machine learning to alter key facial features of people in video in real time, tricking facial recognition systems into misidentifying their subjects.

The method reportedly pairs the system with a trained face classifier, distorting a person’s face just enough to confuse facial recognition while preserving a natural look, and it works on recorded and even live video.

In fact, this kind of “de-identification” technology has existed before: D-ID, an Israeli provider of automated anti-facial-recognition systems, has developed de-identification technology for still images. There are also so-called adversarial examples, which exploit weaknesses in computer vision systems; by wearing adversarial patterns, people can trick facial recognition systems into seeing things that are not there.

Past techniques, however, were mostly applied to photos and still images, or spoofed pre-selected face recognition systems with adversarial images prepared in advance. FAIR’s research, by contrast, targets real-time video; FAIR says the technology is an industry first and that it holds up against sophisticated facial recognition systems.

Facebook also published a paper explaining its position on the technology. The paper argues that facial recognition can violate privacy and that face replacement technology can be used to produce misleading videos. To help curb abuse of facial recognition, the company introduced a video de-identification method and achieved good results with it.

In addition, Facebook does not intend to use the anti-face-recognition technology in any commercial product, VentureBeat reported, but the research could shape future personal privacy tools. And, as the paper’s emphasis on “misleading videos” suggests, it could also keep personal likenesses from being used to fabricate fake videos.

In fact, anti-face-recognition technology has advanced rapidly in recent years. As early as last year, a team led by Professor Parham Aarabi and graduate student Avishek Bose at the University of Toronto developed an algorithm that can dynamically disrupt face recognition systems.

Simply put, their approach interferes with the face detection algorithm itself in order to hinder recognition. The algorithm changes the detector’s output by perturbing pixels so subtly that the changes are nearly invisible to the human eye; tiny as these pixel modifications are, they are fatal to the detector.
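The idea behind such gradient-based perturbations can be sketched with a toy example. The “detector” below is just a logistic model standing in for a real face detector, and all weights and inputs are invented for illustration; the Toronto team’s actual attack is more sophisticated, but the principle of stepping each pixel slightly against the gradient of the detection score is the same.

```python
import numpy as np

# Toy stand-in for a face detector: a logistic model over a flattened
# 8x8 patch. Weights and inputs here are hypothetical.
rng = np.random.default_rng(0)
w = rng.normal(size=64)            # hypothetical detector weights
x = 0.05 * np.sign(w)              # a patch the detector scores as "face"

def detect(img):
    """Detector's face probability for a flattened patch."""
    return 1.0 / (1.0 + np.exp(-img @ w))

def perturb(img, eps=0.1):
    """FGSM-style step: move each pixel by eps against the gradient of
    the face score, so the per-pixel change is tiny but the score drops."""
    p = detect(img)
    grad = p * (1.0 - p) * w       # d(score)/d(pixel) for the logistic model
    return img - eps * np.sign(grad)

adv = perturb(x)
print(detect(x) > 0.7, detect(adv) < 0.3)   # detection collapses
print(np.max(np.abs(adv - x)))              # per-pixel change bounded by eps
```

Because the attacker needs only the sign of the gradient, each pixel moves by at most `eps`, which is why the perturbation can stay below the threshold of human perception while still flipping the detector’s decision.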

The researchers confirmed the feasibility of this approach in tests on the 300-W dataset, an industry-standard benchmark containing more than 600 face photos spanning multiple ethnicities, lighting conditions, and backgrounds. The results showed that their system could cut the proportion of detectable faces from nearly 100 percent to 0.5 percent.

More striking still, this anti-face-recognition system can learn on its own: as a neural network, it can adapt as face recognition systems evolve.
