On November 12, 2019, Intel held its AI Summit in San Francisco, where it showcased a range of new AI products and developments, including the training-oriented (NNP-T1000) and inference-oriented (NNP-I1000) Intel Nervana Neural Network Processors (NNP), as well as a new generation of Movidius Myriad vision processing units.
Naveen Rao, Intel Vice President and General Manager of the Artificial Intelligence Products Group, said:
With the further development of artificial intelligence, both computing hardware and memory will reach a critical point. Dedicated hardware such as the Intel Nervana NNP and Movidius Myriad VPU is essential if significant progress is to be made in this area. With more advanced system-level artificial intelligence, we will move from the “data-to-information conversion” phase to the “information-to-knowledge transition” phase.
Nervana NNP in production and shipping
For Intel, the Nervana NNP is an important neural network processor product, arguably the company’s first commercial AI chip, and it has gone through a long cycle from announcement and testing to mass production and deployment.
The new generation of Nervana NNP was first unveiled in May 2018. At the Intel AI Developer Conference (AIDevCon 2018), Naveen Rao introduced a neural network processor (NNP) chip designed for machine learning, describing it as Intel’s first commercial NNP chip, one that would be available beyond a small circle of partners and would ship in 2019.
By August 2019, Intel had disclosed more details about the NNP chips at the Hot Chips conference: they come in training and inference variants, the Nervana NNP-T and Nervana NNP-I, respectively.
The Nervana NNP-T, code-named Spring Crest, is built on TSMC’s 16nm FF+ process, packs 27 billion transistors into a 680-square-millimeter die, and supports the TensorFlow, PaddlePaddle, and PyTorch training frameworks, as well as deep learning software libraries and the nGraph compiler.
The Nervana NNP-I, code-named Spring Hill, is an inference chip designed for large data centers. Built on Intel’s 10nm process with Ice Lake cores and developed in Haifa, Israel, the chip, Intel claims, can handle heavy workloads with minimal energy, delivering ResNet-50 efficiency of 4.8 TOPS/W within a power envelope of 10 W to 50 W.
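Those two quoted figures imply a rough throughput range, shown in the back-of-envelope sketch below. Note the assumption (not stated by Intel) that the 4.8 TOPS/W efficiency holds across the whole power envelope:

```python
# Back-of-envelope throughput implied by Intel's quoted NNP-I figures:
# 4.8 TOPS/W ResNet-50 efficiency over a 10 W to 50 W power range.
# Assumes the efficiency figure holds at both ends of the range,
# which Intel did not state.
EFFICIENCY_TOPS_PER_W = 4.8

for watts in (10, 50):
    print(f"{watts} W -> ~{EFFICIENCY_TOPS_PER_W * watts:.0f} TOPS")
# 10 W -> ~48 TOPS
# 50 W -> ~240 TOPS
```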
Officially, the Intel Nervana Neural Network Processor for Training (Intel Nervana NNP-T) balances computing, communication, and memory, allowing near-linear, energy-efficient scaling from small clusters up to the largest pod supercomputers. The Intel Nervana Neural Network Processor for Inference (Intel Nervana NNP-I) is energy-efficient and low-cost, and its flexible form factor makes it ideal for running intensive multimodal inference at real-world scale. Both products are targeted at leading-edge AI customers such as Baidu and Facebook, and have been tailored to their AI processing needs.
At the 2019 Intel AI Summit, Intel announced that the new Intel Nervana Neural Network Processors (NNP) are now in production and being delivered to customers. Misha Smelyanskiy, director of AI system co-design at Facebook, said:
We are excited to be working with Intel to deploy faster and more efficient inference computing using the Intel Nervana Neural Network Processor for Inference (NNP-I). Our latest deep learning compiler, Glow, will also support the NNP-I.
Separately, Baidu AI researcher Kenneth Church noted at the summit that in July, Baidu and Intel had announced a partnership around the Nervana NNP-T, using hardware-software co-design to train increasingly complex models with maximum efficiency. Church also announced that Intel’s NNP-T has been brought to market as part of Baidu’s X-Man 4.0.
Next-generation Movidius VPU coming next year
At the summit, Intel unveiled a new generation of Movidius VPU.
The next-generation Intel Movidius VPU, code-named Keem Bay, is built specifically for edge AI, focusing on deep learning inference, computer vision, and media processing. It features a new high-performance architecture and is accelerated by Intel’s OpenVINO toolkit. According to Intel’s figures, it is four times faster than the Nvidia TX2 and 1.25 times faster than Huawei HiSilicon’s Ascend 310, while also far exceeding these competitors in power consumption and size.
Intel says the next-generation Movidius, scheduled for release in the first half of 2020, offers industry-leading performance thanks to its unique, efficient architecture: more than 10 times the inference performance of the previous-generation VPU, and up to 6 times the power efficiency.
Intel introduced the Movidius Myriad X vision processing unit (VPU) in August 2017, a low-power SoC manufactured on TSMC’s 16nm process and used primarily for deep learning and AI algorithm acceleration in vision-based devices such as drones, smart cameras, and VR/AR headsets.
In addition to the next-generation Movidius, Intel released the new Intel DevCloud for the Edge, which works with the Intel Distribution of OpenVINO toolkit to address a major developer pain point: the ability to try, deploy, and test AI solutions across a wide range of Intel processors before purchasing hardware.
Intel also described the AI progress of its Intel Xeon Scalable processors.
Intel says that advancing deep learning inference and applications requires extremely complex data, models, and techniques, so architectural choices must be weighed accordingly. In fact, most organizations in the industry deploy artificial intelligence on Intel Xeon Scalable processors. Intel will continue to improve the platform through features such as Intel Vector Neural Network Instructions (VNNI) and Intel Deep Learning Boost (DL Boost), improving AI inference performance in data centers and edge deployments.
Intel stressed that for many years to come, Intel Xeon Scalable processors will continue to be a strong pillar of AI computing.
At the 2019 Intel AI Summit, Intel also presented its overall AI solution portfolio. Intel’s advantage in AI is not limited to breakthroughs in the AI chips themselves; more important is Intel’s ability to holistically consider compute, memory, storage, interconnect, packaging, and software to maximize efficiency and programmability, and to ensure the critical ability to scale deep learning across thousands of nodes.
Beyond that, Intel is able to leverage its existing market position to bring its AI capabilities to market and commercialize them. Notably, Intel announced at the summit that its portfolio of AI solutions has been further strengthened and is on track to generate more than $3.5 billion in revenue in 2019.
With this, Intel has taken another confident step toward bringing AI technology to commercial deployment.