Google engineer throws shade: AlphaGo, which forced Lee Sedol into retirement, really isn’t such a big deal

BEIJING, Dec. 19 (Xinhua) — Measuring the “intelligence” of artificial intelligence is one of the most difficult and most important questions in computer science. If you cannot tell whether today’s technology is smarter than yesterday’s, how do you know the field is making progress? At first glance this may seem like a non-question; “obviously, artificial intelligence is getting smarter” is one answer.

With a flood of money and talent pouring into the field, artificial intelligence has made some landmark advances, such as defeating top human Go players and solving problems, such as image recognition, that were out of reach a decade ago. All of this looks like a great step forward for artificial intelligence.

The other answer is that these advances are not really good indicators of how “intelligent” AI technology is. Beating the top human players at chess and Go is impressive, but what does it mean if the smartest computers still perform worse than toddlers or mice at solving everyday problems?

This is the point made by Francois Chollet, a Google software engineer and a prominent figure in machine learning. Chollet is the creator of Keras, a widely used neural network development library.

In a recently published paper entitled “On the Measure of Intelligence,” Chollet argues that the field of artificial intelligence needs to refocus on what intelligence actually is. If researchers want to make progress toward general artificial intelligence, he says, they need to look past popular benchmarks such as video games and board games and start thinking about broader abilities, such as the ability to generalize and adapt.

In an e-mail interview with The Verge, Chollet explained his thinking on the issue: why he considers the current crop of AI achievements “misleading,” how we should measure the “intelligence” of artificial intelligence in the future, and why he believes scary stories about superintelligence (of the kind told by Elon Musk and others) have an outsized grip on the public imagination. Here’s a slightly edited transcript of the interview:


Francois Chollet, developer of the Keras artificial intelligence framework

Q: In your paper, you describe two different conceptions of “intelligence” in the field of artificial intelligence: one that sees “intelligence” as the ability to excel at a wide variety of tasks, and another that emphasizes generalization and adaptation, that is, the ability of an AI to respond to new challenges. Which conception has been more influential, and what have the consequences been?

A: For the first 30 years of AI’s development, the first conception was the more influential one: intelligence was seen as a set of static, hand-crafted programs and explicit knowledge. Now the pendulum has swung 180 degrees: the AI world’s dominant picture of intelligence is the “blank slate.” Unfortunately, this framework has gone largely unchallenged and even unexamined. These questions have a long intellectual history, going back decades, and the field today seems largely unaware of that history, perhaps because most people currently working in deep learning entered the field after 2016.

This kind of intellectual monoculture is not a good thing, especially when so many of the underlying scientific questions remain unanswered. It limits the questions people ask and the space they have to innovate. I think researchers need to wake up to that fact.

Q: In your paper, you also argue that the field of artificial intelligence needs a better definition of intelligence in order to advance. Right now, you say, researchers focus on performance in static benchmarks, such as beating humans at video games and board games. Why do you think this way of measuring intelligence is inadequate?

A: The problem is that once a metric is selected, people will do everything they can to improve the performance of AI on that metric. For example, if you make chess the measure of machine intelligence (as the field did from the 1970s through the 1990s), you end up with a system that plays chess and nothing else, and that teaches us nothing about human intelligence. Today, developing AI systems that specialize in games such as Dota or StarCraft falls into exactly the same intelligence trap.

This may not be obvious, because for humans, skill and intelligence are closely related. Humans can use their general intelligence to acquire task-specific skills. A person who plays chess well is assumed to be fairly intelligent, because we know their chess skill is not innate: they had to learn it gradually, using general intelligence. They were not built to play chess. We know they could turn that same general intelligence toward efficiently learning the skills needed for other tasks. That is the power of general intelligence.

Machines, however, are different. A machine can be designed specifically to play chess. So the inference we make for humans, “can play chess, therefore must be intelligent,” does not hold. General intelligence can build skills for specific tasks, but not the other way around. For machines, skill is not intelligence. Given unlimited data (or unlimited engineering resources) for a particular task, a machine can acquire the skill to complete that task, and that brings it not one step closer to general intelligence.

The key point is that no single task can give a machine general intelligence, unless that task is a meta-task: acquiring new skills across a large number of previously unknown problems. And that is exactly what I propose we use as the measure of machine intelligence.


Researchers at the DeepMind artificial intelligence lab watch AlphaStar take on human players in StarCraft II

Q: If these benchmarks are not helping the field move toward AI with general intelligence, why are they so widely used?

A: There is no doubt that the push to beat top human players at well-known games is driven mainly by the media coverage it generates. If the public were not fascinated by these flashy “milestones,” researchers would be doing other, more meaningful work.

I find this sad, because scientific research should be about answering open scientific questions, not putting on a show. If I set out to beat top human players at Warcraft III using deep learning, I am quite sure I would succeed, provided I had enough engineering talent and computing power. But even having achieved that, would I have learned anything new about intelligence or generalization? The answer is obviously no; at most, I would have more engineering knowledge about scaling up large deep learning systems. So I do not really consider this scientific research, because it gives us no new knowledge and answers no open questions.

Q: What do you think these projects have actually achieved? How badly are they misunderstood?

A: One outright misconception I keep running into is the idea that these game-playing systems represent real progress toward AI that can cope with the complexity and uncertainty of the real world. That is not the case at all. Take OpenAI Five playing Dota 2: the system was initially unable to cope with the full complexity of the game, so it was trained with a pool of 16 heroes, while the full game has more than 100. And it took the equivalent of a human playing Dota 2 for 45,000 years to train.

If we want AI to one day handle the complexity and uncertainty of the real world, we have to start asking now: what is generalization? How do we measure and maximize generalization in learning systems? That is an entirely different question from training ever-larger neural networks on 10 times more data.

Q: What would be a better measure of intelligence?

A: In short, we need to stop evaluating AI systems on their skill at specific tasks and start evaluating their ability to acquire skills. That means testing them only on new tasks they have never seen before, and measuring how much prior knowledge they start with and how efficiently they use their training samples. The less information (priors and experience) a system needs to reach a given level of skill, the more intelligent it is. By that measure, today’s AI systems are not really very intelligent at all.
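To make the idea concrete, here is a rough sketch in Python (not from Chollet’s paper; the Task class, the fit interface, and the scoring rule are hypothetical stand-ins) of how one might score a learner by how little experience it needs to reach a target skill level on tasks it has never seen before:

# Illustrative sketch only: the Task class, the `fit` interface, and the
# scoring rule are hypothetical stand-ins meant to show the shape of
# "skill-acquisition efficiency" as an evaluation target.
from typing import Callable, List, Tuple

Example = Tuple[object, object]            # (input, expected output)

class Task:
    def __init__(self, demos: List[Example], test: List[Example]):
        self.demos = demos                 # demonstrations the learner may study
        self.test = test                   # held-out pairs used to measure skill

def acquisition_efficiency(
    fit: Callable[[List[Example]], Callable[[object], object]],
    tasks: List[Task],
    target_accuracy: float = 0.9,
    budget: int = 100,
) -> float:
    """Average, over unseen tasks, of how much of the demonstration budget
    is left when the learner first reaches the target accuracy (0 if never)."""
    scores = []
    for task in tasks:
        score = 0.0
        for n in range(1, min(budget, len(task.demos)) + 1):
            predict = fit(task.demos[:n])                  # learn from n demonstrations
            accuracy = sum(predict(x) == y for x, y in task.test) / len(task.test)
            if accuracy >= target_accuracy:
                score = 1.0 - n / budget                   # fewer demos, higher score
                break
        scores.append(score)
    return sum(scores) / len(scores)

The point of a score like this is that a system can only do well by learning quickly from few examples on unfamiliar problems, rather than by being hand-tuned for one benchmark in advance.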

My most recent paper presents a new benchmark dataset, ARC, that looks a lot like an IQ test. ARC consists of a set of reasoning tasks, each explained by a few demonstrations, and the test-taker has to use those demonstrations to solve new instances of the task. Humans can currently solve ARC tasks in full, even without any verbal explanation or prior training, while no AI technique tried so far has been able to pass the test.
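For readers who want to inspect the benchmark themselves, the tasks in the public fchollet/ARC repository are stored as JSON files containing “train” demonstration pairs and “test” pairs of small integer grids. The short sketch below loads one task; the field names and the example file name are assumptions based on that repository, so check them against your own copy.

# Minimal sketch of reading one ARC task, assuming the JSON layout used in
# the public fchollet/ARC repository ("train"/"test" lists of "input"/"output"
# grids of integers 0-9). The path below is just an example file name.
import json

def load_arc_task(path: str):
    with open(path) as f:
        task = json.load(f)
    demos = [(ex["input"], ex["output"]) for ex in task["train"]]   # demonstration pairs
    tests = [(ex["input"], ex["output"]) for ex in task["test"]]    # held-out pairs
    return demos, tests

demos, tests = load_arc_task("ARC/data/training/0a938d79.json")
print(f"{len(demos)} demonstration pairs, {len(tests)} test pairs")
for grid_in, grid_out in demos:
    print(f"input {len(grid_in)}x{len(grid_in[0])} -> output {len(grid_out)}x{len(grid_out[0])}")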


Chollet’s ARC artificial intelligence test dataset

Q: Do you think AI can keep advancing simply by scaling up, by throwing more data and compute at the problem? Some argue that, historically, this has been the most successful way to improve the performance of AI systems. Others argue that if we keep relying on scale alone, we will soon see diminishing returns.

A: If the goal is to solve a specific task, this is absolutely true: pouring more training data and compute into a narrow, vertical task will keep improving performance on that task. But it does nothing to improve the generalization ability of AI systems.

Take self-driving cars: millions of training situations are not enough to produce an end-to-end deep learning model that can drive a car safely. That is why Level 5 self-driving cars are still not available. If deep learning models could truly generalize, Level 5 self-driving would have arrived as early as 2016.


Self-driving cars are developing much more slowly than many people expected.

Q: Given the limitations you see in AI today, I would like to ask about superintelligence: the fear that an exceptionally powerful AI system could threaten humanity in the near future. Do you think that concern makes sense?

A: No, I do not think the superintelligence narrative is well founded. We have never built a fully autonomous intelligent system, and there is no indication that we will be able to build one in the foreseeable future. Current advances in AI are not leading toward such a system. And even if one could be built in the distant future, there is absolutely no way to speculate today about what its characteristics would be.
