On May 13, Rockchip officially announced that its AI chips RK1808 and RK1806 have been adapted to the PaddlePaddle open-source deep learning platform and are fully compatible with its lightweight inference engine, Paddle Lite. This cooperation between Rockchip and Baidu aims to enable more application scenarios across the AI industry and accelerate the landing of AI products.
Baidu PaddlePaddle and Rockchip compatibility certification
In the AI era, deep learning frameworks play a role similar to operating systems, connecting chips with applications. Backed by powerful AI chips, AI technology can become far more widely available.
The NPU Era Arrives: Combining Software and Hardware for Performance Optimization
Rockchip's AI chips RK1808 and RK1806 feature a built-in independent NPU (neural processing unit) delivering INT8 computing power of up to 3.0 TOPS. Built on a 22nm FD-SOI process, they reduce power consumption by about 30% at the same performance level compared with mainstream 28nm products, and perform excellently across compute, performance, and power-consumption metrics. Measurements show that the Rockchip AI chip runs MobileNet V1 under Paddle Lite in only 6.5 ms, a frame rate of up to 153.8 FPS, demonstrating both full compatibility and efficient, stable operation.
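As a quick sanity check, the quoted frame rate follows directly from the measured latency; a minimal sketch in Python (the 6.5 ms figure is the measurement cited above):

```python
# Converting the measured per-frame latency into the quoted frame rate.
latency_ms = 6.5              # measured MobileNet V1 inference time on the RK NPU
fps = 1000.0 / latency_ms     # frames per second = 1000 ms / per-frame latency in ms
print(round(fps, 1))          # → 153.8
```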
PaddlePaddle is built on Baidu's years of deep learning research and industrial practice. It integrates a core training and inference framework, basic model libraries, end-to-end development kits, tool components, and service platforms, and was officially open-sourced in 2016, making it an influential, fully open-source, technologically leading, full-featured industrial-grade deep learning platform in China. Paddle Lite is its versatile, easy-to-use, high-performance lightweight inference engine, supporting a wide range of hardware and platforms with key features such as lightweight deployment and high-performance execution.
Rockchip RK18xx Series Chips Adapted to Paddle Lite
As the measured results in the figure below show, compared with the mainstream domestic and foreign CPUs commonly used in mobile phones, the NPU in the RK18 series excels in MobileNet V1 inference time. This demonstrates that in AI-related fields such as image classification, object detection, and voice interaction, dedicated AI chips deliver better results.
Rockchip RK18xx series chips outperform mainstream CPUs on MobileNet V1
Through adaptation to the PaddlePaddle open-source deep learning platform, Rockchip can better serve the business needs of domestic users and provide strong computing power for edge-side AI. The integration of the two will fully leverage the advantages of combining software and hardware, speeding up development and deployment and driving more AI applications to land.
Domestic Chip Cooperation Upgraded: A Detailed Practical Tutorial
Detailed instructions for running Rockchip AI chips with PaddlePaddle can be found in the Paddle Lite documentation, which covers supported chips, verified device lists, supported Paddle models and operators, and reference examples.
(Search path: search Baidu for "Paddle-Lite documentation", select version release-v2.6.0 at the lower left, then open the deployment example section "PaddleLite uses RK NPU for inference deployment".)
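To illustrate how latency and FPS figures like those above are typically measured, here is a minimal benchmarking sketch in plain Python. The `run_inference` callable is a hypothetical stand-in for a single predictor call; for the actual Paddle Lite deployment API, refer to the documentation described above.

```python
import time

def benchmark(run_inference, warmup=10, iters=100):
    """Return (average single-frame latency in ms, FPS) for a callable inference step."""
    for _ in range(warmup):          # warm-up runs absorb caching / lazy initialization
        run_inference()
    start = time.perf_counter()
    for _ in range(iters):
        run_inference()
    elapsed = time.perf_counter() - start
    latency_ms = elapsed / iters * 1000.0
    return latency_ms, 1000.0 / latency_ms

# Example with a dummy workload standing in for the model call:
lat, fps = benchmark(lambda: sum(range(1000)))
print(f"latency: {lat:.3f} ms, fps: {fps:.1f}")
```

Averaging over many iterations after a warm-up phase is the standard way to obtain a stable per-frame latency figure such as the 6.5 ms reported for MobileNet V1.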
Test device (RK1808 EVB)
In addition to the RK1808 and RK1806 chip solutions, Rockchip's other NPU-equipped AI chips will also be upgraded to adapt to Baidu PaddlePaddle, further deepening the cooperation between the two sides and jointly helping to build China's independently controlled AI ecosystem.