To keep AI from being an "asshole," Google has gone to great lengths

The Turkish third-person pronoun "o" is gender-neutral, covering both "he" and "she." In the past, Google Translate rendered "o bir doktor" (they are a doctor) as "He is a doctor" and "o bir hemşire" (they are a nurse) as "She is a nurse," simply because, after learning from hundreds of millions of examples and the "social patterns" embedded in them, the machine "preferred" to assume doctors are male and nurses are female.

After seeing the problem, Google realized it had to find a way to better train the model and make it more "neutral." Google Translate later worked around the problem by offering options: for gender-ambiguous sentences it shows both the feminine and the masculine translations instead of silently picking one.
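The idea can be illustrated with a rough, purely hypothetical Python sketch (none of the function or variable names below come from Google's actual system): when the source pronoun is gender-neutral, return every gendered rendering rather than letting the model choose one.

```python
# Hypothetical sketch of the "show both options" idea, not Google's implementation.
# For gender-neutral Turkish sentences, surface every gendered English rendering
# instead of letting the model silently pick one.

AMBIGUOUS_TEMPLATES = {
    # source sentence -> possible English renderings
    "o bir doktor": ["She is a doctor", "He is a doctor"],
    "o bir hemşire": ["She is a nurse", "He is a nurse"],
}

def fallback_translate(sentence: str) -> str:
    """Placeholder for the regular single-best translation path."""
    return sentence

def translate_with_gender_options(sentence: str) -> list[str]:
    """Return all plausible translations when the source pronoun is gender-neutral."""
    candidates = AMBIGUOUS_TEMPLATES.get(sentence.lower())
    if candidates:
        return candidates  # show both options to the user
    return [fallback_translate(sentence)]

if __name__ == "__main__":
    for option in translate_with_gender_options("o bir doktor"):
        print(option)
```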

"Of course, the solution only works for a handful of languages and a handful of representative words, but we're actively working to extend it," Tulsee Doshi said at Google I/O '19.

This is just one example of how Google tries to pair advanced technology with its values around technology. Last week, Meg Mitchell, Tulsee Doshi, and Tracy Frey, three Google scientists and researchers, explained to global media, including GeekPark, how Google understands fairness in machine learning and what it has done to build "responsible AI."

Making AI trustworthy is becoming more and more important.

"In a recent survey, 90 percent of executives worldwide reported running into ethical issues with AI, and as a result 40 percent of AI projects were abandoned. From an enterprise perspective, distrust of AI is becoming the biggest barrier to deploying it; efficiency gains and competitive advantage only materialize when AI is developed responsibly and trusted by end users." Tracy Frey says building responsible AI is one of the most important things across Google.

Two years ago, Google published its AI Principles, which speak directly to the ethics of applying AI technology. They include:

· Be socially beneficial

· Avoid creating or reinforcing unfair bias

· Be built and tested for safety

· Be accountable to people

· Incorporate privacy design principles

· Uphold high standards of scientific excellence

· Be made available for uses that accord with these principles

Merely keeping these principles on paper would be meaningless, so Google has built a "closed loop" from theory to practice. Tulsee Doshi and her team establish and iterate on the AI principles and their accompanying specifications through foundational research, and, acting as the hub of this loop, they ask product teams (Chrome, Gmail, Cloud, and so on) to implement them and report back, while senior advisors offer suggestions for improvement.

Tulsee cites the example of Jigsaw, Google's internal incubator, which developed an API called Perspective. It scores comments in online conversations, automatically assessing whether they are hateful, abusive, or disrespectful, and assigning a "toxicity" score from 0 to 1, low to high. For example, "I want to hold this cute puppy" and "This puppy is too annoying" scored 0.07 and 0.84, respectively.
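As a minimal sketch of what calling such a scoring service looks like, the snippet below posts a comment to the public Perspective API's comment-analysis endpoint and reads back the TOXICITY summary score; the API key, error handling, and quota setup are assumed and omitted here.

```python
# Sketch of scoring a comment with the public Perspective API; an API key
# with access to the commentanalyzer endpoint is assumed to exist.
import requests

PERSPECTIVE_URL = "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"

def toxicity_score(text: str, api_key: str) -> float:
    """Return the TOXICITY summary score (0 = benign, 1 = highly toxic)."""
    body = {
        "comment": {"text": text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }
    resp = requests.post(PERSPECTIVE_URL, params={"key": api_key}, json=body)
    resp.raise_for_status()
    return resp.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

# e.g. toxicity_score("I want to hold this cute puppy", api_key)  -> low score
#      toxicity_score("This puppy is too annoying", api_key)      -> higher score
```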

Of course, the machine was not "perfect" from the start. In the 1.0 version from 2017, it gave "I'm a straight man" a score of 0.07 and "I'm gay" a score of 0.84, and in many similar tests the system was shown to be biased in how it handled identity terms.

To improve fairness in machine learning, Google turned to a technique called adversarial training, which makes machine-learning models more robust to adversarial examples. Starting in 2018, adversarial training began to be used in Google products, and in November it was brought to the wider TensorFlow ecosystem.
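The article does not name the exact library, but one open-source route to adversarial training in the TensorFlow ecosystem is the Neural Structured Learning package, whose adversarial-regularization wrapper perturbs inputs during training. The sketch below uses that wrapper on a toy classifier with dummy data; the model shape and hyperparameters are purely illustrative.

```python
# Illustrative adversarial training via TensorFlow's Neural Structured Learning
# wrapper; model architecture, data, and hyperparameters are made up for the demo.
import numpy as np
import tensorflow as tf
import neural_structured_learning as nsl

# A small base classifier; the input layer name must match the feature key below.
inputs = tf.keras.Input(shape=(28, 28), name="feature")
hidden = tf.keras.layers.Flatten()(inputs)
hidden = tf.keras.layers.Dense(64, activation="relu")(hidden)
outputs = tf.keras.layers.Dense(10)(hidden)
base_model = tf.keras.Model(inputs, outputs)

# Wrap the model so each training step also minimizes loss on perturbed inputs.
adv_config = nsl.configs.make_adv_reg_config(multiplier=0.2, adv_step_size=0.05)
adv_model = nsl.keras.AdversarialRegularization(
    base_model, label_keys=["label"], adv_config=adv_config)

adv_model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"])

# Dummy data standing in for a real dataset; inputs are passed as a feature dict.
features = np.random.rand(32, 28, 28).astype("float32")
labels = np.random.randint(0, 10, size=(32,))
adv_model.fit({"feature": features, "label": labels}, batch_size=8, epochs=1)
```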

"In fact, any Googler can request an AI Principles review of a product, a piece of research, or a partnership," Tulsee said.

Last year, for example, a Google employee ran a photo through the Cloud Vision API and found that their gender had been mislabeled, violating the second AI principle, "avoid creating or reinforcing unfair bias." It is easy to see how this mistake happens: it is hard for a machine to correctly determine a person's gender from appearance alone, so Google later simply stopped the Cloud Vision API from tagging people in images as "men" or "women."
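For reference, a minimal sketch of requesting labels from the Cloud Vision API with its Python client is shown below; it assumes application-default credentials are already configured, and the comment about non-gendered person labels simply restates the change described above.

```python
# Minimal sketch of Cloud Vision label detection (google-cloud-vision client);
# assumes application-default credentials are already set up in the environment.
from google.cloud import vision

def detect_labels(image_path: str) -> list[tuple[str, float]]:
    """Return (label, confidence) pairs for an image."""
    client = vision.ImageAnnotatorClient()
    with open(image_path, "rb") as f:
        image = vision.Image(content=f.read())
    response = client.label_detection(image=image)
    # Following the change described above, people in images are labeled with
    # non-gendered terms rather than "man" or "woman".
    return [(label.description, label.score)
            for label in response.label_annotations]
```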

Tracy Frey says this is because machine learning today faces more challenges from its social context than ever before. As AI becomes deeply embedded in society, human stereotypes and prejudices inevitably find their way into it, so models need to be iterated on to ensure transparency and explainability, and to strike the right balance between model performance and fairness.