The University of Chicago has developed new tools to protect users from facial recognition systems.

Facial recognition is fast becoming one of the biggest threats to privacy, with many startups looking to build businesses around the technology by exploiting the vast numbers of photos users upload, media reported. A solution to this problem now exists, but its effectiveness will ultimately depend on adoption by the large platforms.

Facial recognition technology has developed so rapidly that its use has become a real concern for many consumers, who see it as yet another intrusion into their private lives. Last week, it was revealed that Rite Aid, a drugstore chain, had been using facial recognition to identify suspected shoplifters, and had deployed it disproportionately in low-income and non-white neighborhoods.

Wearing a mask can defeat some of the most advanced facial recognition systems currently in use, but even that protection will soon fade, as vendors retrain their AI on open datasets of masked selfies. And although some companies, such as IBM, have pulled out of the facial recognition business under close scrutiny from regulators and privacy groups, there is reason to believe that new legislation and some form of technical protection are needed to prevent companies from abusing the technology.

Clearview AI, for example, built its facial recognition database by scraping people’s photos from Twitter, Facebook, Google and YouTube. The company never sought users’ permission, and its security practices are reportedly poor, which is why it was recently sued.

Against this backdrop, researchers at the University of Chicago have developed a software tool that gives people a way to protect the images they upload online from unauthorized use.


The tool, called Fawkes, exploits the fact that machine learning models need large numbers of images to identify a specific person’s features with any precision. Fawkes makes tiny pixel-level changes to a user’s photos that are imperceptible to the eye but effectively “cloak” people’s faces, so that facial recognition models trained on them no longer work. The tool is reported to have fooled facial recognition systems from Facebook, Microsoft, Amazon and Megvii.
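To make that concrete, here is a minimal sketch of feature-space cloaking in PyTorch. It is not the Fawkes code itself: the real tool optimizes against a perceptual (DSSIM) budget and chooses target identities carefully, whereas this sketch simply nudges an image’s embedding toward an arbitrary target face under a small per-pixel budget. The names `cloak` and `feature_extractor` are illustrative assumptions; the latter stands in for any pretrained face-embedding model.

```python
# A minimal sketch of feature-space "cloaking" in the spirit of Fawkes.
# Not the project's actual algorithm: here we push the image's embedding
# toward a target identity under a simple L-infinity pixel budget.
import torch
import torch.nn.functional as F

def cloak(image, target_image, feature_extractor,
          epsilon=0.03, steps=100, lr=0.01):
    """Return a visually similar copy of `image` (CHW tensor in [0, 1])
    whose embedding resembles `target_image` rather than the subject."""
    feature_extractor.eval()
    with torch.no_grad():
        target_feat = feature_extractor(target_image.unsqueeze(0))

    delta = torch.zeros_like(image, requires_grad=True)  # perturbation
    opt = torch.optim.Adam([delta], lr=lr)

    for _ in range(steps):
        cloaked = (image + delta).clamp(0, 1)
        feat = feature_extractor(cloaked.unsqueeze(0))
        # Pull the cloaked embedding toward the target identity.
        loss = F.mse_loss(feat, target_feat)
        opt.zero_grad()
        loss.backward()
        opt.step()
        # Keep the change imperceptible: project back into the epsilon ball.
        with torch.no_grad():
            delta.clamp_(-epsilon, epsilon)

    return (image + delta).detach().clamp(0, 1)
```

A model later trained on such cloaked photos learns features of the shifted embedding, not of the real face, which is what defeats recognition of genuine, unmodified photos of the same person.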

Fawkes, however, isn’t perfect: the tool can’t do anything about models already trained on unmodified images, such as those Clearview AI has collected, or photos held by law enforcement agencies. And while cloaked images reportedly remain effective even after compression, Fawkes’ success will depend on widespread adoption among people who upload photos online.
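One hedged way to sanity-check the compression claim is to run a cloaked image through a JPEG round-trip and re-embed it; if the compressed embedding still sits far from the subject’s true embedding, the cloak survived. The helper below is a hypothetical sketch, assuming torchvision and Pillow are available.

```python
# Hypothetical sanity check: does the cloak survive JPEG compression?
import io

import torch
from PIL import Image
from torchvision.transforms.functional import to_pil_image, to_tensor

def embedding_after_jpeg(image, feature_extractor, quality=75):
    """Embed a CHW tensor image after a round-trip through JPEG."""
    buf = io.BytesIO()
    to_pil_image(image).save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    compressed = to_tensor(Image.open(buf).convert("RGB"))
    with torch.no_grad():
        return feature_extractor(compressed.unsqueeze(0))
```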

Interestingly, Hoan Ton-That, CEO of Clearview AI, told the New York Times that Fawkes does not appear to defeat the facial recognition system his company has developed. He also noted that, given the sheer number of unmodified images already available on the internet, “it is almost certainly too late to refine a technology like Fawkes and deploy it on a large scale.”

So far, Fawkes has been downloaded more than 100,000 times from the project’s website.

Professor Ben Zhao, who leads Fawkes’ development, says services like Clearview AI are a long way from holding a comprehensive set of images of everyone on the internet. Zhao also points out that Fawkes operates as a poisoning attack, designed to gradually degrade facial recognition models as the proportion of cloaked images grows over time.
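A toy model of that poisoning dynamic (entirely illustrative, not Fawkes code): imagine a scraper that averages the embeddings of a person’s photos into an identity template. As the cloaked fraction of the scraped photos grows, the template drifts away from the person’s true face, and genuine photos stop matching.

```python
# Toy illustration of poisoning-by-cloaking. All names are assumptions.
import torch
import torch.nn.functional as F

def identity_template(clean_feats, cloaked_feats, cloaked_fraction):
    """Embedding a scraper would learn by averaging a mixed photo set.

    `clean_feats` and `cloaked_feats` are parallel lists of embeddings of
    the same photos, without and with a Fawkes-style cloak applied."""
    n = len(clean_feats)
    k = int(n * cloaked_fraction)          # how many photos were cloaked
    pool = cloaked_feats[:k] + clean_feats[k:]
    return torch.stack(pool).mean(dim=0)

def matches(template, probe_feat, threshold=0.8):
    """Cosine-similarity match of a real (uncloaked) photo's embedding
    against the learned template; this drops as cloaked_fraction rises."""
    sim = F.cosine_similarity(template.unsqueeze(0),
                              probe_feat.unsqueeze(0)).item()
    return sim >= threshold
```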

Zhao’s team would like to see tools like Fawkes cut into the business of companies like Clearview AI, but its members acknowledge that it will be a cat-and-mouse game, with third parties continually finding short-term ways to circumvent tools like Fawkes.