The European Union’s proposal to temporarily ban facial recognition technology won the support of Sundar Pichai, CEO of Alphabet, Google’s parent company, on Monday, but drew a lukewarm response from Brad Smith, Microsoft’s president and chief legal officer. Mr Pichai noted that the technology could be used for unethical purposes and said its deployment should be paused. Mr Smith, however, said a ban was like using a “meat cleaver” rather than a “scalpel” to solve a potential problem.
“I think it’s important that governments and regulators address this issue as quickly as possible and provide a framework for it,” Mr Pichai told a conference organised by Bruegel, a Brussels think-tank. “This technology is close to being deployed, but it may be worth waiting a while to really think through how to use it.” He added that it was up to governments to chart the direction for the technology’s use.
Mr Smith cited the benefits that facial recognition technology can bring in some cases, such as its use by NGOs to find missing children. “I’m really reluctant to say let’s stop people from using a technology that can help reunite families,” he said. “The second thing I would say is that if you believe there is a reasonable alternative that would allow us to solve this problem with a ‘scalpel’ rather than a ‘meat cleaver’, then you wouldn’t want to ban it.”
Mr Smith said it was important to identify problems first and then craft rules to ensure the technology was not used for mass surveillance. “At the end of the day, there’s only one way to make technology better, and that’s to use it,” he said.
According to a proposal obtained by Reuters, the European Commission will take a tougher stance on artificial intelligence (AI) than the United States, strengthening existing privacy and data-rights regulations. It would, among other measures, suspend the use of facial recognition technology in public areas for up to five years, giving the EU time to work out how to prevent abuse.
The European Commission will publish its proposals in a few days, and Mr Pichai urged regulators to take a “proportionate approach” in drafting the rules.
As companies and law enforcement agencies increasingly adopt artificial intelligence, regulators are struggling to find ways to govern AI that encourage innovation while curbing possible abuse.
Mr Pichai said artificial intelligence certainly needed to be regulated, but that rulemakers should be cautious. “Sensible regulation must also take a proportionate approach, balancing potential harm against social opportunity, especially in high-risk and high-value areas,” he said. Regulators should tailor rules to individual industries, he argued, citing medical devices and self-driving cars as examples of fields that need different rules. He also said governments should align their rules and agree on core values.
Earlier this month, the U.S. government issued guidelines intended to limit authorities’ overreach in regulating artificial intelligence, and urged Europe not to adopt a heavy-handed approach.
Mr Pichai said it was important to recognise that artificial intelligence can go wrong. While the technology is bound to bring huge benefits, the possible negative consequences are a real cause for concern. So-called deepfakes, for example, video or audio clips doctored using artificial intelligence, are worrying. Mr Pichai said Google had released open datasets to help the research community build better tools to detect such fakery.
Alphabet said last month that it was building policy and technical safeguards, and that Google Cloud does not offer a general-purpose facial recognition API (application programming interface).