40 organizations have written to independent U.S. agencies calling for a moratorium on face recognition apps

On January 27th, local time, 40 civil society organizations sent a joint letter to the Privacy and Civil Liberties Oversight Board (PCLOB), an independent agency that advises the U.S. government, calling on the government to suspend the use of face recognition technology pending further review.


The joint letter, drafted by the privacy advocacy group Electronic Privacy Information Center, cited a recent investigative report by The New York Times: more than 600 U.S. law enforcement agencies are using the face recognition system of startup Clearview AI, which draws on a database of 3 billion images scraped from major mainstream websites.

“On behalf of leading consumer, privacy and civil liberties groups, we urge PCLOB to recommend to the President and the Secretary of Homeland Security a moratorium on facial recognition systems pending further review,” the letter said.

The 40 organizations include the Consumer Federation of America, the Electronic Frontier Foundation, and the Electronic Privacy Information Center.

The letter argues that face recognition technology is not only less accurate for people of color, but can also be used to “control minority populations and limit dissent.”

The letter notes that a recent National Institute of Standards and Technology study of 189 face recognition algorithms from 99 developers found that the software produced false positive rates up to 100 times higher for Asians and African Americans than for whites.

“While we do not believe that improving the accuracy of face recognition would justify further deployment of face recognition systems, the apparent bias and discrimination of the current systems is another reason why we recommend a full suspension,” the letter said.

The MIT Technology Review called the move one of the biggest efforts to date to stop the use of face recognition technology, noting that the letter shows attitudes toward face recognition are becoming increasingly negative.

At present, the debate over whether face recognition technology should be used in public places is shifting. Initial concerns focused on the technology’s accuracy and bias, but the MIT Technology Review notes that lawmakers at the third congressional hearing on face recognition, held in early January 2020, began asking whether the technology should be used at all, even if it were 100% accurate.

At this stage, attitudes toward the widespread application of face recognition technology differ around the world.

In 2019, the cities of San Francisco, Somerville and Oakland banned the use of facial recognition technology by local government agencies. In early 2020, the EU was reported to be considering a ban on face recognition technology in public places for three to five years. In addition, Washington is considering legislation to regulate face recognition systems deployed by private individuals and companies in public places.

On the other hand, Seoul, South Korea, said in early January 2020 that it plans to install 3,000 AI cameras across the city to detect potential crimes, and London police announced on January 24th that they would deploy face recognition cameras throughout the city.