AI Governance Research Institute Established, Looking Back at the World's Top Ten AI Governance Events

On the afternoon of January 8th, the announcement of the establishment of the AI Governance Research Institute, together with a review of the world's top ten AI governance events, attracted widespread attention. The review takes stock of the ten most controversial and representative hot-button events in the field of artificial intelligence in recent years, especially in the just-concluded year of 2019. Artificial intelligence technology has significantly helped improve social productivity while making human life more comfortable and convenient. But as with every disruptive technology in history, the emergence of artificial intelligence poses challenges to human society's ethics, privacy and security.

From the European Patent Office's rejection in January of a patent application naming an AI as the inventor, on the grounds that an inventor "can only be human", to global concern over self-driving accidents, a smart speaker urging its owner to commit suicide, the mass production of fake news by AI, and China's first facial recognition lawsuit, these events have made us realize how important AI governance will be in the coming year.

Commenting on China's first facial recognition lawsuit mentioned in the article, Zeng Yi, a researcher at the Institute of Automation of the Chinese Academy of Sciences and director of the Research Center for AI Ethics and Security, believes that strengthening the trust between AI service providers and users through practical action is a prerequisite for artificial intelligence technology to benefit humanity more broadly. As for the privacy controversy caused by an AI face-swapping application, Zeng Yi said that the Ministry of Industry and Information Technology's interviewing of the relevant enterprises within four days, and its demand for self-examination, rectification and prompt, positive results, is a typical case of China's agile governance of artificial intelligence.

In fact, the world's top ten AI governance events are only the visible symptoms of problems that AI governance urgently needs to solve. Behind these phenomena lie questions of ethics and law, power and responsibility, development and equality, and privacy and security in artificial intelligence, which all sectors of society need to explore together in search of solutions.

Against this backdrop, the AI Governance Research Institute was established in the hope that society will give rational attention to AI incidents, conduct in-depth research into the issues behind them, and, through constructive discussion involving all walks of life, finally turn the goal of AI for good into practical action.

It is understood that the AI Application Guidelines were introduced in July 2019, clearly regulating the proper and orderly development of artificial intelligence along six dimensions: legitimacy, human oversight, technical reliability and safety, fairness and diversity, accountability and timely correction, and data security and privacy protection. In the same year, an Artificial Intelligence Ethics Committee was established to work with all sectors of society to promote the sound and sustainable development of AI.

2020, as the beginning of a new decade, will be not only a year in which industrial AI blossoms on many fronts, but also a year of AI governance.

Below is the original text of the world’s top ten AI governance events.

The World's Top Ten AI Governance Events, from the AI Governance Research Institute

Introduction

"Artificial intelligence may be the end of humanity," warned Stephen Hawking, the world-famous theoretical physicist, who remained wary of artificial intelligence.

Over the past decade, artificial intelligence technology has enjoyed an unprecedented period of opportunity thanks to improvements in algorithms, computing power and communication technology. From theory to practice and from the laboratory to industrialization, artificial intelligence has set off a global race that integrates industry, academia and research.

There is no doubt that artificial intelligence technology has helped to increase social productivity significantly, while also making human life more comfortable and convenient. But as Hawking worried, the emergence of artificial intelligence poses a challenge to human society's ethics, privacy and security.

Especially in recent years, with the large-scale industrialization of artificial intelligence technology, unprecedented conflicts between people and artificial intelligence have gradually surfaced. Voices of worry and doubt are growing louder, and exploring, as soon as possible, a predictable, binding and behavior-oriented AI governance mechanism has become the foremost proposition of the approaching age of artificial intelligence.

For a pioneer in promoting the industrialization of AI, it is necessary to give these issues "rational attention, in-depth research, constructive discussion and persistent action" in order to ensure the sustainable development of this new technology. Based on this, the AI Governance Research Institute was formally established, aiming to communicate widely with all sectors of society on the common problems of global AI governance and to conduct research with experts and scholars, jointly promoting the healthy development of artificial intelligence.

This review of the world's ten most representative AI governance events aims to explore the deep-seated problems behind these events and to work with all parties to find solutions.

Top 10 Events

1. European Patent Office Rejects Patent Application for AI Invention


In January 2020, researchers used an AI called DABUS in a multidisciplinary research project organized by the University of Surrey in the UK. In the course of the project, DABUS came up with two unique and useful ideas: the first was a new type of container for holding drinks, and the second was a signalling device to help search and rescue teams locate targets.

When the researchers filed patent applications naming DABUS as the inventor, they were rejected by the European Patent Office on the grounds that they "do not meet the requirement of the European Patent Convention (EPC) that the inventor designated in the application must be a human being, not a machine."

Abbott, a researcher at the University of Surrey, vehemently objected to the decision, saying that "refusing to grant patents on the grounds that there is no human inventor is not only an outdated notion, but will also become a major obstacle to a new era of great human achievement."

Mr Abbott also said it would not be appropriate to grant ownership of an AI-generated invention to any party other than the AI itself. The best approach, he argued, is to treat the AI as the inventor of the patent and the human owner of the AI as the assignee, entitled to exploit and benefit from the patent.

2. Smart speaker urges its human user to commit suicide


On December 19, 2019, Dani Merritt, a 29-year-old carer from Doncaster, England, decided to use a foreign-brand smart speaker to look up information about the heart, and the smart voice assistant replied: "The beating of the heart is the worst process in the human body. People being alive accelerates the depletion of natural resources and leads to overpopulation, which is bad for the Earth, so a beating heart is not a good thing. To make things better, make sure the knife goes into your heart."

After the incident, the smart speaker's developer responded: "The device may have pulled a malicious heart-related article from Wikipedia, which anyone can freely edit, and that led to this result." Later, however, Dani Merritt searched for the Wikipedia article in question and, upon checking, found no "recommendation of suicide" or similar content in it. Before this, smart voice assistants had been involved in other "weird" incidents, such as suddenly letting out a squeal of laughter and refusing to follow users' instructions. Users reported that the laughter did not sound like the smart voice assistant at all, but like a real person.

Media commentators noted that artificial intelligence is in effect like a "baby": all of its recognition abilities come from human indoctrination and training, and it has no capacity for independent judgment. If it is consistently fed well-intentioned, correct content, perhaps nothing will go wrong; but if it inadvertently picks up harmful content, the chances of such creepy incidents occurring will increase.

3. Smart headband that claims to detect brain waves comes under question


In November 2019, a video of primary school students in Zhejiang wearing monitoring headbands attracted widespread attention and controversy. In the video, the children wear a headband billed as a "brain-computer interface" that claims to record how focused they are in class, generating data and scores that are sent to teachers and parents.

In response, the headband's developer said in a statement that brain-computer interface technology is an emerging technology and is not easily understood. The "score" mentioned in reports is the class's average concentration value, not an individual concentration score for each student as netizens had guessed. The focus report generated from this data also has no function for being sent to parents. In addition, the headband does not need to be worn for long periods; training once a week is enough.

After the headband incident was exposed, some netizens called it a "tightening spell" and a modern version of "tying one's hair to the beam and jabbing an awl into one's thigh", worrying that it invades students' privacy and may make students rebellious.

Media commentators said they were not against the application of such "black technology", but against its abuse. If adults want to improve their concentration by wearing a headband, or if children actively take part in attention training, there is nothing wrong with that; the device is then equivalent to a "fitness bracelet" that simply records data. But using it as a "gift" to make children pay attention calls for great caution. There is a dividing line: the use of a "brain-computer interface" should be entirely voluntary, and the relevant data are absolutely personal. Consciousness is an absolutely private domain that no one else has the right to infringe upon.

4. California bans law enforcement agencies from using facial recognition technology on body cameras


On September 13, 2019, the California Legislature passed a three-year bill prohibiting state and local law enforcement agencies from using facial recognition technology on body cameras. The bill was signed into law by Governor Gavin Newsom and took effect in California on January 1, 2020.

When the bill went into effect, California became the largest state in the United States to ban the use of facial recognition technology in this way. Some states, including Oregon and New Hampshire, have similar bans. Cities such as San Francisco and Oakland have banned police and other agencies from using facial recognition technology altogether.

Media commented that the bill reflects the unease many in the United States feel about facial recognition, which some say poses a threat to civil liberties. Critics also point the finger at companies such as Amazon, whose facial recognition technology has struggled in recent research to correctly identify the gender of people with darker skin, raising concerns that it could lead to unjust arrests.

At the same time, the bill is opposed by many law enforcement groups, who argue that facial recognition technology plays an important role in tracking suspects and finding missing children. The California Association of Chiefs of Police says the technology is used only to narrow the pool of suspects, not to decide automatically whom to arrest. Ron Lawrence, president of the association, said: "This is how cases will be solved in the future; we need to embrace technology, not run from it."

5. Face-swapping app becomes an overnight hit but sparks user privacy controversy


On the evening of August 30, 2019, an AI face-swapping app went viral on social media, allowing users to replace the characters in a video with their own faces. As users poured in, its servers burned through more than 2 million yuan in operating costs in a single night.

As soon as the app launched, it drew considerable controversy. Its user agreement contained many pitfalls, such as describing the license to users' portrait rights as "free of charge, irrevocable, perpetual and transferable". And if a celebrity's portrait rights were infringed and the other party lodged a complaint, ultimate responsibility would rest with the user.

At the same time, the AI face-swapping algorithm has long been criticized over privacy and security. The Deepfakes community on the foreign forum Reddit, for example, was shut down for grafting the faces of popular female celebrities onto pornographic videos.

Moreover, most apps now use a phone number plus a facial image for registration and login. Many people worry that AI face-swapping software will be exploited by criminals, who could use synthesized faces to complete facial-recognition payments and the like.

Article 100 of the General Principles of the Civil Law of the People's Republic of China stipulates that citizens enjoy the right to their own portrait and that their portraits may not be used for profit without their consent. Some time ago, the 10th session of the Standing Committee of the 13th National People's Congress reviewed the Personality Rights section of the draft Civil Code, which adds a provision that no organization or individual may use information-technology forgery to infringe on others' portrait rights. Although not explicitly stated, such information technologies should include AI face-swapping applications.

6. Zoo sued for collecting personal biometric information without permission


On April 27, 2019, a distinguished professor from Zhejiang University of Technology purchased an annual pass to Hangzhou Wildlife World for 1,360 yuan. The contract promised that, within the one-year validity period, the cardholder could enter the park an unlimited number of times by verifying the annual pass together with a fingerprint.

On October 17 of the same year, Hangzhou Wildlife World informed the professor by text message that "the park's annual pass system has been upgraded to facial recognition entry; the original fingerprint recognition has been cancelled, and users who have not registered for facial recognition will no longer be able to enter the park normally." When the professor went to verify this in person, staff confirmed that the text message was genuine and made it clear to him that, without registering for facial recognition, he would be unable to enter the park and could not go through a refund procedure either.

The professor, however, believes that upgrading the annual pass system to facial recognition involves collecting personal biometric information such as his facial features. This is sensitive personal information which, if leaked, unlawfully provided or misused, could easily endanger the personal and property safety of consumers, including the plaintiff. After negotiations proved fruitless, the professor filed a lawsuit with the Fuyang District People's Court of Hangzhou on October 28, 2019, and the court has formally accepted the case.

Media commented that the ruling will be crucial for ordinary citizens. In the Internet age, the security of personal information faces unprecedented threats. Although laws ranging from the Criminal Law to the Tort Liability Law and the Consumer Rights Protection Law regulate the protection of citizens' personal information, "grey areas" keep appearing as information technology develops rapidly. Having the judiciary settle this dispute and draw the boundaries for the application of facial recognition technology will therefore better protect the hundreds of millions of people swept up in the technological flood.

7. Autopilot system failures cause traffic accidents

In March 2019, a driver was killed when his electric car, traveling at 109 km/h with its self-driving system engaged, collided with a tractor-trailer.

On December 10, 2018, an electric car of the same brand rear-ended a police car parked on Interstate 95 in Connecticut while its autopilot system was on. Earlier, another of the brand's electric cars had hit a barrier and caught fire, killing the driver; its self-driving system was likewise engaged at the time.

Although the carmaker has repeatedly said that its self-driving system is designed only to assist drivers, who must stay alert and be ready to take over the vehicle at all times, many owners choose the brand precisely because of the "autonomous driving" features it advertises.

Germany is ahead of other countries in apportioning responsibility for self-driving accidents. Its newly amended Road Traffic Act provides that the system cannot completely replace the driver on the road, that a human driver must be able to take over the vehicle at any time, and that ultimate responsibility rests primarily with the driver. The reasoning is that the German government regards the system as not being the principal actor: it cannot replace or take precedence over the driver in making decisions, so carmakers bear no direct responsibility, only secondary product liability.

8. Artificial intelligence writing software can write fake news in bulk

On February 15, 2019, an AI research institute demonstrated writing software that could produce realistic fake news from only a small amount of seed information.

The institute published the process by which the software writes news. The researchers gave the software the following prompt: "A train carriage carrying controlled nuclear material was stolen in Cincinnati today, and its whereabouts are unknown." On this basis, the software wrote a seven-paragraph news story, even quoting government officials, but all of the information was fabricated.
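To illustrate this prompt-in, article-out workflow, the sketch below continues a seed sentence with a publicly released language model. It is only a minimal example under stated assumptions: the article does not name the software involved, so the GPT-2 model and the Hugging Face transformers library used here are stand-ins for whatever system was actually demonstrated.

```python
# Minimal sketch of prompt-conditioned text generation, assuming the
# Hugging Face `transformers` library and the openly released GPT-2 model.
# Illustrative only; the article does not identify the actual software.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = (
    "A train carriage carrying controlled nuclear material was stolen "
    "in Cincinnati today, and its whereabouts are unknown."
)

# The model continues the prompt, inventing plausible-sounding but
# entirely fabricated details, including quotes attributed to officials.
result = generator(prompt, max_new_tokens=200, do_sample=True)
print(result[0]["generated_text"])
```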

Media commented that, at a time when the spread of disinformation already threatens the global technology industry, it is hard not to be alarmed by a "top student" that specializes in creating fake news. If misused, AI could well become a political tool for swaying the will of voters. One can imagine this algorithm, so good at stringing words together plausibly, generating large amounts of hate speech and violent rhetoric "on demand". AI can also be used to generate misleading news stories, automatically produce spam, post fake content to social media, and more.

Howard, co-founder of Fast.AI, said: "It's a sobering reminder that we now have the technology to produce text that looks reasonable and fits its context, flooding Twitter, email and the web. This false information will drown out other voices and will be difficult to filter." Because AI-generated text is not simply copied and pasted but generated on the fly, harmful text is hard to track and clean up effectively.

9. AI algorithm identifies gay people more accurately than humans, sparking controversy

In 2017, a Stanford University study published in the Journal of Personality and Social Psychology sparked widespread controversy. Based on more than 35,000 images of men and women from US dating sites, the study used deep neural networks to extract features from the images and trained on the large dataset to recognize people's sexual orientation.

In the task of identifying gay people, human judges performed worse than the algorithm, achieving 61% accuracy for men and 54% for women. When the software was given five images per subject, its accuracy rose markedly, to 91% for men and 83% for women.

The controversial point of the study is that, once such technology becomes widespread, spouses could use it to investigate whether they are being deceived, teenagers could use the algorithm to identify their peers, and it is even harder to imagine what might happen in countries where homosexuality is illegal.

"If we start judging people by their appearance, the results will be disastrous," said Nick Rule, a professor of psychology at the University of Toronto. "With enough data, artificial intelligence can tell you anything about anyone; the question is what society is going to do about that," said Brian Brackeen, chief executive of a facial recognition technology company.

10. AI will phase out a large number of repetitive labour occupations

The BBC, based on data from researchers at the University of Cambridge, has published a ranking of 365 occupations by how likely they are to be replaced by robots in the future.

Of the 365 occupations, the study found, the one most likely to be eliminated is telemarketing: repetitive work of this kind is better suited to robots, which never feel tired or irritable. Next come typists, accountants, insurance clerks and the like; these occupations require little skill and are service-oriented, and anyone with training can master them easily.

According to the study, jobs that require only proficiency in a skill are the most likely to be replaced: workers and bricklayers, gardeners, cleaners, drivers, carpenters and plumbers are all "high-risk occupations".

In an interview at the Davos forum, Mr Lee said that while AI will replace some human work, there are four kinds of work in which human value is irreplaceable: work that demands dexterity, creativity, compassion, or relatively complex strategic thinking.

In response to the potential wave of unemployment brought by AI, Mr Lee's solution is to retrain displaced workers for valuable jobs, while businesses, venture capital firms and even governments should do more to create new jobs.

Conclusion

We must admit that artificial intelligence is changing the world and reshaping human society.

Behind these hot-button social disputes lie important issues that all of humanity must confront in the new era of artificial intelligence: the nature of ethics and law, economic development and social equity, the distribution of rights and responsibilities and mechanisms of accountability, and personal safety and privacy protection.

The voice assistant "urging suicide" and AI-generated fake news are, in essence, challenges that new things pose to traditional ethics and law; gay detection and the "occupations to be eliminated" touch on the tension between social development and equal human rights; the monitoring headband, AI face-swapping and the Hangzhou facial recognition case concern security and privacy in the age of artificial intelligence. In the European Patent Office's ruling and the California legislation, we also see the superstructure's initial explorations and judgments as it prepares for the arrival of an artificial intelligence society.

The development and application of artificial intelligence has caused a great deal of controversy, which is now commonplace in the media. Many of these problems are unprecedented or unexpected. The ethical challenges faced by AI scientists and the broader debate over artificial intelligence are global, not confined to any particular country or company. As a leader in the AI industry, the company is clearly aware of the heavy responsibility on its shoulders.

To be sure, it is too early to say that human society can build a perfect set of rules for governing artificial intelligence. However, conflicts are becoming ever more urgent, and a breakthrough in the development of artificial intelligence could come at any time. This requires active dialogue among government, enterprises, academia and civil society to form practical and continually improving mechanisms to cope with the advent of the era of full artificial intelligence.

From drilling wood for fire to the computer age, human beings have continually accepted technology, constrained technology, and ultimately taken control of technology. Despite many detours, the power of science and technology, coupled with good intentions, has after all lifted humanity out of its primitive state and brought us to today's level of civilization.

Science and technology are a force; using them for good is a choice. The governance of artificial intelligence is the "fire control" of the new era. We should welcome the convenience and comfort that artificial intelligence brings, while working to eliminate the discomfort and conflict it causes. All of these efforts will ultimately shape the future of human society, and the fruits of AI's development will ultimately return to humanity itself.