Is IBM's exit from face recognition moral leadership or commercial opportunism?

Last week, IBM, Amazon and Microsoft each announced that they would stop providing face recognition technology to the police. Amazon said it would suspend sales of its facial recognition tools to police for one year; Microsoft said its ban would hold "until there is a federal law regulating facial recognition technology"; and IBM, the first to move, said it would permanently stop offering its facial recognition software for "mass surveillance or racial profiling."

Reporter: Wu Yangyang

Editor: Chen Rui


Privacy concerns about AI, face recognition included, have a long history, but they were not the direct trigger for the tech companies' announcements.

On May 25, George Floyd, an African-American man, was arrested in Minneapolis, Minnesota, on suspicion of using counterfeit money and died after a police officer knelt on his neck. The incident sparked an anti-racism movement across American society under the slogan "Black Lives Matter" (BLM).

In the United States, IBM, Amazon, Microsoft and other companies have, to varying degrees, supplied facial recognition tools to U.S. law enforcement agencies. With such tools, a police officer on street patrol can photograph someone with a phone camera and quickly match the image against hundreds of thousands of photos in police databases. Tools like these amplify whatever bias the police already bring to their work.

Before the technology companies spoke up, consumer-goods companies known for their values, such as Nike and Starbucks, had already taken a stand: Nike temporarily inverted its slogan "Just do it" into "Don't do it" in an ad, while Starbucks allowed employees to wear BLM-related clothing and began distributing 250,000 BLM T-shirts to employees on June 12.

Why did IBM take the lead?

In the letter to members of Congress, CEO Arvind Krishna stated specifically that IBM would no longer offer general-purpose facial recognition or analysis software.

That wording is seen as ambiguous. As Privacy International, a London-based NGO, put it to the BBC: does it leave room to keep offering custom-built facial recognition technology and analysis software?

IBM's position in the industry is the main reason its motives are questioned: in the field of face recognition, IBM's market share is actually small.


To be precise, artificial intelligence technologies such as face recognition are not stand-alone products. They are underlying technologies that need end devices, such as smart cameras, smartphones and smart doorbells, on the front end, and cloud computing facilities to store and process data on the back end.

In the U.S., technology companies such as Amazon and Microsoft that own the technology sell it mainly as a vehicle for their end products and the accompanying cloud computing services, and IBM holds only a small share in both markets. IBM's flagship face recognition product, Watson Visual Recognition, is used mainly in security, with government and other security agencies as its main customers.

According to the market research firm Gartner, the company with the largest share of the global cloud computing market in 2019 was Amazon (45%), followed by Microsoft (17.9%), Alibaba Cloud (9.1%) and Google (5.3%), with the remaining 22.7% split among "other" companies; IBM belongs to that "other" segment.

IBM has long been frustrated in this field, even though it was once expected to be a bellwether of the artificial intelligence era. Back in 2008, then-CEO Samuel Palmisano proposed the concept of a "Smarter Planet" and suggested that the incoming Obama administration invest in a new generation of smart infrastructure. The concept became the prototype of the later "smart city," whose standard configuration includes smart cameras, sensors, image recognition and face recognition.

In 1997, IBM's artificial intelligence program Deep Blue defeated world chess champion Garry Kasparov.

But IBM has struggled to find real-world scenarios for these technologies and to commercialize them further.

Amazon, by contrast, has gained real market share, from cloud computing to artificial intelligence, by applying new technologies to ever-expanding scenarios. In AI alone, the retailer offers not only Rekognition, face recognition software sold directly to the police, but also the Ring smart doorbell, the Echo smart speaker and other products for ordinary consumers.

The question being avoided:

AI abuse and its invasion of privacy

In the letter, IBM also called for "bias testing" of artificial intelligence systems used in law enforcement. Yet there is no evidence that the police conduct on May 25 had anything to do with George Floyd's criminal record, or that the officers involved used any AI tool at all.

IBM's move was a gamble. It displayed an established technology company's sense of social responsibility about eliminating discrimination, and it pushed the debate beyond face recognition and prejudice toward the heart of the problem: concern about the abuse of AI and its invasion of privacy.

The toughest privacy law in the United States today is the California Consumer Privacy Act, which came into effect on January 1, 2020. Under the law, California consumers not only have the right to know whether their data is sold or transferred to a third party; they also have the right to block such transactions.

But selling is only one part of data's value, and even the "toughest" law does not regulate how technology companies access and use user data in the first place. Enforcement is harder still: in a January op-ed in The New York Times, technology writer Charlie Warzel argued that companies simply claim, falsely, to comply with privacy laws because they know regulators will never scrutinize them.

AI is not neutral

Research into and discussion of AI bias and AI discrimination had been going on for several years before the BLM movement broke out.

AI is not as neutral as people expect; "it reinforces existing prejudice and discrimination," said Maria Axente, an AI ethics expert at PricewaterhouseCoopers.

How far this reinforced bias reaches varies. Using AI to recommend songs on Spotify or films on Netflix has few bad consequences even when its taste is off, but when an AI decides who gets a loan or how a disease is diagnosed, livelihoods and lives are at stake.

In 2017, Joy Buolamwini, founder of the Algorithmic Justice League, tested face recognition algorithms from three companies: IBM, Microsoft and Megvii's Face++. She collected 1,270 photos of faces from three African countries and three European countries and fed them to the three companies' products. The results showed that all three recognized women less accurately than men, and darker-skinned faces less accurately than lighter-skinned ones. IBM's algorithm showed the largest gap, with an error-rate difference of 34.4 percentage points between lighter-skinned men and darker-skinned women.
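The audit itself is conceptually simple: run the same model over a demographically labeled photo set and compare error rates group by group. Below is a minimal sketch of such a disaggregated test in Python; the `model` callable and the labeled sample set are hypothetical stand-ins for a commercial face analysis API and a test collection, not the actual study's code.

```python
from collections import defaultdict

def audit_by_group(model, samples):
    """Compute per-group error rates for a face analysis model.

    `samples` is an iterable of (image, true_label, group) tuples,
    where `group` combines skin type and gender, e.g. "darker_female".
    `model` is any callable mapping an image to a predicted label
    (a hypothetical stand-in for a commercial face recognition API).
    """
    errors = defaultdict(int)
    totals = defaultdict(int)
    for image, true_label, group in samples:
        totals[group] += 1
        if model(image) != true_label:
            errors[group] += 1
    return {group: errors[group] / totals[group] for group in totals}

# The headline figure in an audit like this is the spread between the
# best- and worst-served groups:
#   rates = audit_by_group(predict_gender, labeled_photos)
#   gap = max(rates.values()) - min(rates.values())
```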

The reason for the algorithmic "bias" is simple: mostly insufficient data. In a 2010 study, researchers at the National Institute of Standards and Technology and the University of Texas at Dallas found that algorithms designed and tested in East Asia recognized East Asian faces better, while those designed in Western countries were more accurate on Caucasian faces. The immediate cause of this result is the kind and quantity of data each team is able to collect.
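One rough way to spot this failure mode before training, assuming demographic labels are available at all, is simply to tabulate the training set's composition: a group that barely appears cannot be learned well. A sketch, with a hypothetical `train_set` of (image, group) pairs:

```python
from collections import Counter

def group_shares(train_set):
    """Share of each demographic group in a training set of
    (image, group) pairs. Groups with tiny shares are a warning
    sign of poor downstream accuracy for those groups."""
    counts = Counter(group for _, group in train_set)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}
```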

But algorithmic bias is not a purely technical issue. In 2016, Microsoft launched Tay, an artificial intelligence chatbot, on Twitter, with the goal of interacting with and learning from 18-to-24-year-olds. Within 24 hours, however, Tay was pulled offline for verbally abusing users and making racist remarks. After the accident, Microsoft switched to a content-filtered successor, Zo, but that was no fundamental fix.


The Twitter account of the chatbot Tay has since been withdrawn.

A game that will not stop:

Radicals, moralists and technology neutralists

There is no social consensus on which data counts as private, which must not be touched, and which may be given up.

If attitudes toward privacy were laid out on a spectrum, technology geeks would occupy one end and radical human-rights groups and conservative lawmakers the other, with technology companies, most of them technology neutralists, sitting in the middle.

Margrethe Vestager, the EU’s competition and antitrust commissioner, argues that the government needs to limit face recognition technology before it becomes “ubiquitous”.

In May 2018, the EU's General Data Protection Regulation (GDPR) came into effect. Under the regulation, companies that operate in the EU or process EU residents' data must obtain user consent before collecting it. Companies must also provide users with a "right to be forgotten": deleting their information on request.

An executive at Amazon Web Services told the BBC that whether restrictions should be imposed on AI is for politicians to decide. That statement represents the position of the vast majority of technology companies and their shareholders.

Some Amazon shareholders had tried to block the company's sale of face recognition technology to the police. At the annual meeting in May 2019, shareholders voted on the issue: a proposal to ban government use of Rekognition drew nearly 8.3 million votes in favor, 327 million votes were cast to keep the existing business, and about 5.5 million abstained. Only about 2.4 percent of the shares voted opposed selling Rekognition to the police.

Accept it or not, privacy is fading. Such is business.