Are you satisfied with AI in 2020? If you recall conversations with an AI smart speaker that left you between laughing and crying, or standing at an AI face-recognition gate for a long time while it tried to figure out who you are, there is a good chance your answer is no.
AI can improve either through continuous optimization of existing chips and algorithms, or through transformative technologies such as the increasingly well-known neuromorphic computing (also called brain-inspired computing) and quantum computing. Quantum computing currently draws more hype, but Intel's recent releases, Pohoiki Springs, the world's most powerful neuromorphic system, and an olfactory neuromorphic chip demonstration, are sure to raise expectations for the future of neuromorphic computing and AI.
Photo from PCworld
What can the world's most powerful neuromorphic system do?
On Tuesday, the journal Nature Machine Intelligence reported joint work by Intel and scientists at Cornell University in the United States on mathematical algorithms for the chip. Nabil Imam, a senior research scientist in the Neuromorphic Computing Group at Intel Labs, used a dataset from 72 chemical sensors responding to 10 gaseous substances (odors) circulated in a wind-tunnel experiment, including harmful gases such as acetone, ammonia, and methane. The chip could recognize these odors even under strong environmental interference.
In other words, Intel's neuromorphic chip Loihi now has a "sense of smell": the "electronic nose" that the chemical-sensing field has sought for years. In the future, robots equipped with such "smelling" neuromorphic chips have great potential in environmental monitoring, hazardous-material detection, and factory quality control.
Notably, Loihi achieves accuracy above 90 percent after training on only a single sample per odor. To reach the same classification accuracy with traditional methods, including a deep-learning solution, would require more than 3,000 training samples per odor type.
Just two days later, Intel announced that its most powerful new neuromorphic research system, Pohoiki Springs, is ready, providing the computing capacity of 100 million neurons, roughly the number of neurons in the brain of a small mammal.
For comparison: a ladybug brain has about 250,000 to 500,000 neurons, a cockroach brain about 1 million, a zebrafish brain about 10 million, and a hamster brain about 90 million.
The Evolution Of Intel Loihi Systems
Pohoiki Springs, with 10 million more neurons than a hamster's brain, is a data-center rack-mounted system that integrates 768 Loihi neuromorphic research chips into a chassis the size of five standard servers, running at under 500 watts.
Data Center Rack System Pohoiki Springs (Source: Tim Herman/Intel)
Pohoiki Springs is the largest neuromorphic computing system Intel has ever developed and, naturally, the most powerful in the world. Both it and the previously released 8-million-neuron Pohoiki Beach system are still in the research phase; they give researchers a tool to develop and characterize new neuro-inspired algorithms for real-time processing, problem solving, adaptation, and learning.
Behind both of these eye-catching systems is Loihi, Intel's first self-learning neuromorphic chip, introduced in 2017.
Why is neuromorphic computing hard?
Loihi's design is inspired by the human brain: it integrates training and inference on one chip and combines computing and storage. A single chip contains 128 small cores, each with hardware for roughly 1,000 neurons, simulating "logical neurons" and supporting scalable on-chip learning across multiple learning modes, enabling a range of neural-network breakthroughs.
The advantages of this design are clear: it allows Loihi to handle certain demanding workloads 1,000 times faster than conventional processors, and 10,000 times more efficiently.
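As a rough intuition for what one of these "logical neurons" computes, here is a minimal leaky integrate-and-fire (LIF) neuron simulated in Python. This is a textbook model of spiking-neuron dynamics, not Intel's actual hardware design, and the parameter values are purely illustrative.

```python
# Minimal leaky integrate-and-fire (LIF) neuron: a textbook model of the
# kind of "logical neuron" a neuromorphic core simulates in hardware.
# Parameters are illustrative, not Loihi's actual values.

def lif_neuron(input_current, threshold=1.0, leak=0.9):
    """Simulate one LIF neuron over a sequence of input currents.

    Returns the list of time steps at which the neuron spiked.
    """
    v = 0.0          # membrane potential
    spikes = []
    for t, i in enumerate(input_current):
        v = leak * v + i          # leak a little, then integrate the input
        if v >= threshold:        # threshold crossed: emit a spike
            spikes.append(t)
            v = 0.0               # reset after spiking
    return spikes

# A steady input of 0.3 per step charges the membrane until it fires.
print(lif_neuron([0.3] * 10))  # → [3, 7]
```

The key property is that the neuron only communicates at the discrete moments it spikes, which is what makes the event-driven hardware described below possible.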
Intel Loihi Neuromorphic Chip
Even with such significant advantages, neuromorphic chips remain rare. Only a handful of large companies worldwide, such as Intel and IBM, plus a few start-ups are developing them. This status quo is closely tied to the history and technology of neuromorphic computing: the concept was proposed in the 1980s, later than the concept of quantum computing.
Faced with an entirely new concept, and given humanity's limited scientific understanding of the brain, only a few universities and institutions studied neuromorphic computing. And with no applications in sight, neuromorphic computing researchers had little incentive to tackle its technical challenges.
The goal of neuromorphic computing is to understand the efficient working mechanisms by which the brain processes complex information in real time while consuming very little energy, and to bring these mechanisms into chips, including fine-grained parallelism, neural dynamics, time-domain coding, and time-based information processing.
Neuromorphic computing therefore requires a bottom-up rethink of computer architecture. Loihi's bottom-up design gives each chip 128 small cores, each containing hardware for 1,000 neurons that both compute and store, along with network management, to simulate logical neurons.
Architectural innovation alone is not enough: a neuromorphic chip's power consumption would still be a problem if it were built with the widely used synchronous circuit design. CPUs, GPUs, and FPGAs all use synchronous circuits, in which a single shared clock drives the chip's compute and storage units in lockstep. This design avoids the errors that large-scale integrated circuits are prone to, but it is not efficient enough.
To address this, Loihi uses a novel asynchronous, spike-driven approach with multiple independent clocks: depending on the application's needs, only the units with work to do are active while the rest stay on standby, allowing Loihi's power consumption to reach the milliwatt level.
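The efficiency argument can be illustrated in software: a clock-driven loop visits every unit on every tick, while an event-driven loop touches a unit only when a spike actually arrives. This is a conceptual sketch, and the unit and spike counts below are invented for illustration, not measurements of Loihi.

```python
# Contrast clock-driven vs event-driven processing. In a synchronous design
# every unit is evaluated on every tick; in an event-driven (spike-based)
# design, a unit does work only when an event actually targets it.

def clock_driven(num_units, num_ticks):
    """Count evaluations when every unit runs on every clock tick."""
    work = 0
    for _ in range(num_ticks):
        for _ in range(num_units):   # every unit evaluated every tick
            work += 1
    return work

def event_driven(events):
    """Count evaluations when only spiking events trigger work."""
    work = 0
    for _tick, _unit in events:      # only units receiving a spike run
        work += 1
    return work

# 1,000 units over 100 ticks, but only 50 spikes actually occur.
sparse_spikes = [(t, t % 1000) for t in range(50)]
print(clock_driven(1000, 100))      # → 100000 evaluations
print(event_driven(sparse_spikes))  # → 50 evaluations
```

When activity is sparse, as it is in spiking networks, the event-driven scheme does orders of magnitude less work, which is where the milliwatt-level power figures come from.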
But the asynchronous spike design raises new problems when many Loihi chips are interconnected into systems such as Pohoiki Beach and Pohoiki Springs with the goal of linear performance scaling: delivering a neuron's message across hundreds of chips within a single time step is a challenge.
In an interview, Song Jiqiang, director of Intel Labs China, said that time steps can be used to handle the timing problem in neuromorphic computation: it is easy to solve in a small asynchronous circuit, but larger systems pose challenges, and multi-chip interconnection is something the industry had never done before.
Clearly, in scaling from Loihi's 130,000 neurons to today's 100-million-neuron Pohoiki Springs system, Intel was not merely integrating systems but also solving the key timing problems introduced by asynchronous circuits, and on top of the hardware, this requires software-layer support.
"To support interconnected computing, distributed computing, and flexible partitioning of neuromorphic systems while remaining easy to use, software is needed to hide differences in hardware connectivity as much as possible. No one has done this before, not in academia; Intel pioneered this experiment in industry," Song Jiqiang said.
After exploring this no-man's-land of neuromorphic computing and achieving results, Intel's next step is to support developers with a tool chain for dynamic planning and optimization experiments. This is also key to turning Intel's neuromorphic computing into a new general-purpose architecture that serves both front-end perceptual computing and large-scale computing needs, promoting the adoption of neuromorphic computing.
When will neuromorphic computing change the world?
For a new technology to develop and reach large-scale application, finding the right applications is crucial, and neuromorphic computing is no exception. On the one hand, Intel's strong team of technical experts provides a better hardware and software tool chain for neuromorphic computing; on the other, Intel is working with more partners to advance the new technology.
To that end, Intel established the Intel Neuromorphic Research Community (INRC) in 2018, giving researchers access to its Loihi cloud systems and to Kapoho Bay, a Loihi-based USB form-factor device, for practical applications.
INRC started with only a few dozen members; in 2019 it added its first group of corporate members, including Accenture, Airbus, General Electric, and Hitachi, alongside leading universities, government laboratories, and neuromorphic start-ups. The community now has close to 100 members.
"INRC is an open community, but joining requires a proposal first. Once we have determined that the Loihi chip and system can help the proposer, they can join and receive our follow-up technical support," Song Jiqiang said in an interview with Lei Feng Net.
Examples of promising and highly scalable algorithms currently being developed for Loihi include constraint satisfaction, graph and pattern search, and optimization problems.
Song Jiqiang explained: "Today's social networks, and the coming AIoT, will rely on large-scale graph search: quickly finding a good path in these large graphs, or finding solutions that satisfy several constraints at once. Conventional high-performance computing spends enormous computing resources on this, while neuromorphic computing now has some very effective algorithms that can explore candidates concurrently, with much better results than before. This is also how large companies in our community are applying it."
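As a point of reference for the graph-search workloads described above, here is the conventional baseline: a breadth-first shortest-path search on a small graph. This is the classical CPU-style approach, not a neuromorphic algorithm, and the graph is made up for illustration.

```python
from collections import deque

# Classical breadth-first search: the conventional baseline for the
# shortest-path queries described above. The graph below is illustrative.
def shortest_path(graph, start, goal):
    """Return one shortest path from start to goal, or None if unreachable."""
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == goal:
            return path
        for neighbor in graph.get(node, []):
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(path + [neighbor])
    return None

graph = {
    "A": ["B", "C"],
    "B": ["D"],
    "C": ["D", "E"],
    "D": ["F"],
    "E": ["F"],
}
print(shortest_path(graph, "A", "F"))  # → ['A', 'B', 'D', 'F']
```

This search explores candidate paths one queue entry at a time; the appeal of the neuromorphic algorithms Song describes is that candidate paths can propagate through the network as spikes concurrently rather than sequentially.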
Typical applications also include behavior recognition in time-series scenarios, synchronized visual control of robots, and olfactory scenarios, which universities and start-ups like to work on.
In this process, Intel needs to keep doing substantial work at the software level to connect hardware and applications, and developers need to build better SNNs (spiking neural networks).
Neuromorphic chips such as Loihi are designed for the high performance, low power, low cost, and stronger continuous and online learning capabilities required by the next generation of AI. Converting today's popular DNN and CNN models to run on a neuromorphic system is possible, as is building architectures for reinforcement learning, but to get the most out of neuromorphic computing, SNN models need to be developed.
This new kind of network is also key to applications like Loihi's single-sample, high-accuracy odor recognition. Unlike DNNs and CNNs, which draw on a highly abstracted model of neurons in the human brain, SNNs more closely simulate how biological neurons actually work. DNN and CNN networks have many intermediate layers that require abundant, well-labeled training data; an SNN model needs only a small amount of data to initialize the network, but it is harder to design than a DNN or CNN.
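One reason SNNs can learn from so little data is that they use local, timing-based learning rules rather than global backpropagation over large labeled datasets. A common such rule is spike-timing-dependent plasticity (STDP), sketched below. This is a generic textbook formulation with illustrative constants, not Loihi's specific on-chip learning rule.

```python
import math

# Minimal spike-timing-dependent plasticity (STDP) update: a common SNN
# learning rule in which a synapse strengthens when the presynaptic spike
# precedes the postsynaptic one, and weakens otherwise. Constants are
# illustrative textbook values, not taken from Loihi.
def stdp_delta(t_pre, t_post, a_plus=0.1, a_minus=0.12, tau=20.0):
    """Weight change for one pre/post spike pair (times in ms)."""
    dt = t_post - t_pre
    if dt > 0:    # pre fired before post: causal pairing, strengthen
        return a_plus * math.exp(-dt / tau)
    else:         # post fired first: anti-causal pairing, weaken
        return -a_minus * math.exp(dt / tau)

# Pre spike at 10 ms, post at 15 ms: causal, so the weight increases.
print(round(stdp_delta(10.0, 15.0), 4))   # → 0.0779
# Post spike at 5 ms, before the pre spike at 10 ms: weight decreases.
print(round(stdp_delta(10.0, 5.0), 4))    # → -0.0935
```

Because each synapse updates from the timing of its own two neighbors' spikes, learning can happen on-chip, online, and from single presentations, which is the property behind the one-sample odor training described earlier.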
Song said Intel has built connections between the community and the major international SNN research teams, most of which are in Europe and the United States.
With continued innovation in the Loihi architecture, continuous optimization of the upper-layer software SDK and high-level language interfaces, advanced semiconductor processes, and the active push of Intel and its ecosystem partners, we can expect neuromorphic computing to change the world in the near future.
Song Jiqiang's view: "It's hard to say which applications will be the killer applications for neuromorphic computing; perhaps in another year we will have clearer ideas."
He also stressed that neuromorphic chips can coexist well with existing chips; there is no substitution relationship. Neuromorphic chips today primarily address front-end perception at comparatively low power, along with applications that require continuous learning to improve recognition, such as constraint satisfaction and graph search. For applications such as image classification and labeling, existing chips already do the job well.
Lei Feng Net Summary
Neuromorphic computing and quantum computing are both seen as keys to the new era of AI, and Intel, a leader in the chip industry, continues to lead in both. In neuromorphic computing, from the introduction of Loihi in 2017 to the establishment of INRC in 2018, and from the 510,000-neuron Wolf Mountain system at the end of 2017 to today's 100-million-neuron Pohoiki Springs, Intel has not only demonstrated leadership with increasingly powerful and versatile neuromorphic systems, but also worked with partners to build a sound ecosystem that drives the commercialization of neuromorphic computing.
Intel also maintains leadership in quantum computing, a field of heavyweight players: Horse Ridge, the cryogenic control chip for quantum computing announced with QuTech in December, is a milestone toward commercially viable quantum computing.
Having taken the lead in both neuromorphic computing and quantum computing, what unexpected surprises will Intel bring us in the future?