Google CEO Pichai: AI must be regulated, not left to market forces

Sundar Pichai, chief executive of Google and its parent company Alphabet, has written an op-ed arguing that artificial intelligence (AI) is too important not to be regulated and that there are real concerns about its potential negative consequences, media reported. Companies cannot simply build new technology and then let market forces decide how it will be used.


Pichai's article follows in full:

I grew up in India and have always had a passion for technology. Every new invention changed my family's life in meaningful ways. The telephone meant we no longer had to make the long trip to the hospital for test results. The refrigerator meant we could spend less time preparing meals, and television let us watch the world news and cricket matches we had previously only imagined while listening to shortwave radio.

Now, I am honored to help shape new technologies that we hope will change people's lives around the world. Among the most promising of these is AI: just this month there have been concrete examples of how Alphabet and Google are tapping AI's potential. Nature published our research showing that an AI model can help doctors detect breast cancer in mammograms more accurately, and we are testing the use of AI to help reduce flight delays.

However, history offers many examples of technology having unintended negative effects. The internal combustion engine lets people travel long distances, but it has also caused more accidents. The internet makes it possible to connect with anyone and to find information from anywhere, but it also makes it easier to spread misinformation. These lessons teach us that we need to be clear-eyed about the problems that may arise.

There are real concerns about the potential negative consequences of AI, from deepfakes to abusive uses of facial recognition technology. While many companies have done a great deal to address these concerns, there will inevitably be more challenges ahead, and no single company or industry can handle them alone. The European Union and the United States have already begun developing regulatory proposals, and international coordination will be key to making global standards work.

To get there, we need to agree on core values. Companies like ours cannot just build promising new technology and then let market forces decide how it will be used. We also have a responsibility to make sure the technology is used for good and is available to everyone. In my view, there is no question that AI needs to be regulated. The technology is too important not to be.

The only question is how to approach it. That is why Google published its own AI principles in 2018, to help guide the ethical development and use of the technology. These guidelines help us avoid bias, test rigorously for safety, design with privacy first from the outset, and make the technology accountable to people. They also specify areas where we will not design or deploy AI, such as supporting mass surveillance or violating human rights.

Principles that stay on paper are of no practical use. So we have also developed tools to put them into practice, such as testing AI decisions for fairness and conducting independent human rights assessments of new products. We have gone even further and made these tools and related code widely available as open source, which lets others use AI for good. We believe that any company developing new AI tools should adopt similar guidelines and rigorous vetting processes, and that government regulation also needs to play an important role.
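The fairness testing mentioned above can be made concrete with a small sketch. The snippet below illustrates one common check, the demographic parity gap, which compares a model's positive-prediction rates across groups; the function name and data are hypothetical, and this is a generic illustration rather than Google's actual tooling.

```python
# Minimal sketch of one kind of fairness check: comparing a model's
# positive-prediction rates across groups (demographic parity).
# Illustrative only; not any company's real fairness-testing tool.
import numpy as np

def demographic_parity_gap(predictions: np.ndarray, groups: np.ndarray) -> float:
    """Return the largest difference in positive-prediction rate
    between any two groups (0.0 means perfectly equal rates)."""
    rates = [predictions[groups == g].mean() for g in np.unique(groups)]
    return float(max(rates) - min(rates))

# Example: binary predictions for individuals in two groups "a" and "b".
preds = np.array([1, 0, 1, 1, 0, 0, 1, 0])
grps = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])

# Group "a" receives positive predictions 75% of the time, group "b" 25%,
# so the gap is 0.50 -- a signal that the model warrants closer review.
print(f"Demographic parity gap: {demographic_parity_gap(preds, grps):.2f}")
```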

We do not have to start from scratch. Existing rules, such as Europe's General Data Protection Regulation (GDPR), can serve as a strong foundation. A good regulatory framework will consider safety, explainability, fairness, and accountability to ensure we develop the right tools in the right ways. Sensible regulation must also take a proportionate approach, balancing potential harms against benefits, especially in high-risk areas. Regulation can provide broad guidance while allowing for tailored deployment across different sectors.

For some uses of AI, existing frameworks already provide a good regulatory basis, such as regulated medical devices, including AI-assisted heart monitors. For newer areas such as self-driving cars, governments will need to establish appropriate new rules that weigh all the relevant costs and benefits. Google's role starts with recognizing that applying AI requires a principled, regulated approach, but it does not end there.

We want to be a helpful partner to regulators as they grapple with the inevitable tensions and trade-offs. We can offer our expertise, experience, and tools as we work through these issues together. AI has the potential to improve the lives of billions of people, and the biggest risk may be failing to realize that potential. By ensuring that AI is developed responsibly, in a way that benefits everyone, we can inspire future generations to believe in the transformative power of technology as much as I do.