Google is using deep learning to help design its next generation of computer chips

Google is trying to speed up the development of dedicated chips with artificial intelligence programs, according to Jeff Dean, who heads the company’s AI research. “We’re using artificial intelligence technology internally in a range of chip design projects,” Dean told ZDNet after his talk at the International Solid-State Circuits Conference (ISSCC) in San Francisco.

Over the past few years, Google has developed its own family of AI hardware, the Tensor Processing Unit (TPU), to accelerate AI processing on its server computers. Using AI to design chips creates a virtuous circle: AI makes the chip better, the improved chip speeds up the AI algorithms, and so on.

In his keynote speech, Dean showed attendees how machine learning programs can be used to determine the circuit layout of a computer chip, producing designs as good as, or even better than, those of a human chip designer.

In the task known as placement, or “wiring,” chip designers use software to determine the layout of the circuits on a chip, somewhat like drawing the floor plan of a building. Finding the best layout means weighing multiple goals at once, including delivering chip performance while avoiding unnecessary complexity that could increase manufacturing costs. Striking this balance has traditionally required a great deal of human heuristic judgment. Now, artificial intelligence algorithms can learn that kind of heuristic thinking themselves.
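
To make the trade-off concrete, here is a minimal Python sketch of how a placement tool can fold several competing goals into a single score to minimize. The metrics, weights, and numbers are illustrative assumptions, not Google’s actual objective.

```python
# Illustrative multi-objective placement cost; all terms and weights are
# assumptions for illustration, not Google's real formulation.
from dataclasses import dataclass

@dataclass
class LayoutMetrics:
    wirelength: float   # total wire length; shorter wires mean speed and power wins
    congestion: float   # crowding in routing channels; raises manufacturing risk
    overlap: float      # cell overlap; a legal layout needs zero overlap

def placement_cost(m: LayoutMetrics,
                   w_wire: float = 1.0,
                   w_cong: float = 0.5,
                   w_over: float = 10.0) -> float:
    """Weighted sum of competing objectives; a placer searches for the layout
    minimizing this. Tuning these weights is exactly the kind of heuristic
    judgment that has traditionally fallen to human designers."""
    return w_wire * m.wirelength + w_cong * m.congestion + w_over * m.overlap

# Two candidate layouts: the second trades extra wirelength for less congestion.
print(placement_cost(LayoutMetrics(120.0, 8.0, 0.0)))  # 124.0
print(placement_cost(LayoutMetrics(130.0, 2.0, 0.0)))  # 131.0
```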

Dean said, for example, that a deep learning neural network needed only 24 hours to solve a problem that takes human designers six to eight weeks, and it produced a better solution, one that reduced the chip’s total wiring and thus increased efficiency.

The deep learning program is similar to AlphaZero, the program Google’s DeepMind unit developed to master the game of Go, and like AlphaZero it is a reinforcement learning program: to reach its goal, it tries various moves and learns which ones lead to better results. But instead of playing a board game, it designs the optimal circuit layout for a chip.

Unlike Go, though, chip placement has a much larger solution “space” (the number of possible wirings) and, as mentioned above, several requirements that must be met at once, not just the single goal of winning a game.
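
To make the try-and-evaluate loop concrete, here is a runnable toy sketch: five blocks are placed on a grid and candidate layouts are scored by total wirelength. It uses simple random search as a stand-in for the learning loop; DeepMind’s actual approach trains a neural policy with reinforcement learning, and the netlist below is a made-up assumption.

```python
# Toy stand-in for "try layouts, keep what scores better"; not DeepMind's method.
import random

NETS = [(0, 1), (1, 2), (2, 3), (3, 4), (0, 4)]  # assumed block connectivity
GRID = 4                                          # place blocks on a 4x4 grid

def wirelength(placement):
    """Total Manhattan distance over all nets; lower is better."""
    return sum(abs(placement[a][0] - placement[b][0]) +
               abs(placement[a][1] - placement[b][1]) for a, b in NETS)

def random_placement():
    cells = random.sample([(x, y) for x in range(GRID) for y in range(GRID)], 5)
    return dict(enumerate(cells))

# "Episodes": propose a layout, score it, keep the best seen so far.
best = random_placement()
for _ in range(1000):
    candidate = random_placement()
    if wirelength(candidate) < wirelength(best):
        best = candidate

print("best wirelength:", wirelength(best))
```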

Dean says the in-house research into these deep learning techniques is still in its early stages. “We’re having our designers experiment with it and see how to start using the program in our workflow. We’re also trying to understand what the program is useful for and where it can be improved.”

Google’s push into AI-assisted design comes amid a boom in chip development aimed at running machine learning faster on dedicated chips of every size. Machine learning scientists believe dedicated AI hardware will make larger, more efficient machine learning software projects possible.

Even as Google expands its AI design program, Dean says, there will still be many AI hardware start-ups, such as Cerebras Systems and Graphcore, bringing diversity to a fast-growing market. And it will be interesting to watch how they fare.

“I’m not sure how many of these start-ups will survive in the market, but it’s interesting because many of them take very different design approaches. Some accelerate only models small enough to fit in on-chip SRAM.” This means the machine learning model must be very small, so it needs no external memory.

“If your model fits in SRAM, it will be very efficient, but if it doesn’t, that’s not the chip you should choose.”
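
A back-of-the-envelope version of the trade-off Dean describes can be written in a few lines. The SRAM size and model sizes below are illustrative assumptions, not figures for any particular chip.

```python
# Does a model's weight footprint fit in on-chip SRAM? If yes, inference
# avoids external DRAM traffic entirely. All sizes here are assumed.

def fits_in_sram(num_params: int, bytes_per_param: int, sram_bytes: int) -> bool:
    """True if all weights fit on chip, so no external memory is needed."""
    return num_params * bytes_per_param <= sram_bytes

SRAM_BUDGET = 24 * 1024 * 1024  # e.g. a hypothetical 24 MB on-chip SRAM

print(fits_in_sram(5_000_000, 1, SRAM_BUDGET))    # 5M int8 params (5 MB)   -> True
print(fits_in_sram(175_000_000, 2, SRAM_BUDGET))  # 175M fp16 params (350 MB) -> False
```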

Google says the machine learning program has produced many novel circuit designs that even human designers had not thought of.

Asked whether chips would settle into a few standard designs, Dean suggested that, at least for now, diversity would continue. “I do think there’s going to be a lot going on, because current research on machine learning is exploding and machine learning is being used to solve all kinds of problems. When there are so many choices, you certainly don’t want to stare at just one; you want five or six, not a thousand, but five or six different design points.”

Dean adds, “It’s interesting to see which design approaches stand out, whether it’s a general approach that solves a lot of problems or one that accelerates a particular aspect.”

Speaking about Google’s work beyond the TPU, Dean said the company is experimenting with more and more dedicated chips. Asked whether Google’s AI hardware might extend beyond its existing offerings, Dean replied: “Oh, yes.”

“There is no doubt that machine learning is being used more and more widely in Google products, both in data-center-based services and in many products on mobile phones,” Dean says. He points to Google Translate, which now supports 70 different languages and can run on a mobile phone even in airplane mode.

Dean points out that Google has already expanded its family of AI chips to cover “different design points”: the Edge TPU, for example, targets low-power applications, while other chips serve high-performance workloads at the core of the data center. Asked whether Google would diversify further, Dean replied: “I think so.”

“Even outside the data center, you’ll see differences among higher-power environments. A self-driving car isn’t necessarily a one-watt environment; it may be 50 or 100 watts,” Dean says. “So you need one approach for that environment and another for the mobile phone environment. There are also ultra-low-power applications, such as agricultural sensors, that can perform some AI processing without sending all their data to the cloud.” With AI support, such a sensor can evaluate the data it collects (camera images, for example) and send only the interesting data points back to the cloud for analysis.
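
The edge-filtering pattern Dean describes can be sketched as follows: a tiny on-device model scores each data point, and only the flagged ones leave the sensor. Everything here is a hypothetical stand-in; a real deployment would use a camera driver and a quantized model rather than random scores.

```python
# Sketch of on-sensor filtering: upload only the data worth analyzing.
import random

def tiny_model_score(frame) -> float:
    """Stand-in for an on-device classifier returning an 'interesting' score."""
    return random.random()  # a real sensor would run a small quantized model

def sensor_loop(frames, threshold: float = 0.8):
    """Yield only the frames worth sending to the cloud for analysis."""
    for frame in frames:
        if tiny_model_score(frame) > threshold:
            yield frame

uploaded = list(sensor_loop(range(100)))
print(f"uploaded {len(uploaded)} of 100 frames")
```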