Intel has announced the Level Zero interface specification for oneAPI, which provides bare-metal access to accelerators and complements oneAPI's API-based and direct programming models. oneAPI is Intel's open programming model for heterogeneous systems, released by Intel in mid-November.
The goal of oneAPI is to provide a unified programming model for cross-architecture applications where performance matters: code can be reused across architectures, eliminating the complexity of maintaining separate code bases, tool chains, and workflows. The beta version of oneAPI was launched in November and will be used on the Aurora exascale supercomputer.
oneAPI is based on industry standards and open specifications, and it comprises both an industry consensus specification and Intel's own implementation of that specification. The specification, open to all hardware vendors, defines a direct programming language, Data Parallel C++ (DPC++), built on C++ and the cross-platform abstraction layer SYCL, as well as API-based programming through libraries that accelerate domain-focused functionality. Many of these components are open source; for example, the software developer Codeplay has announced that it is building Nvidia GPU support for oneAPI.
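To give a sense of the direct programming model, the sketch below shows a DPC++-style vector addition written against the SYCL abstraction layer described above. It is illustrative only and assumes a DPC++/SYCL toolchain is installed; the kernel is the same regardless of whether the queue targets a CPU, GPU, or FPGA.

```cpp
// Hedged sketch of a DPC++/SYCL vector addition (assumes a SYCL
// compiler such as Intel's DPC++; not tied to any specific device).
#include <CL/sycl.hpp>
#include <vector>

int main() {
    const size_t n = 1024;
    std::vector<float> a(n, 1.0f), b(n, 2.0f), c(n, 0.0f);

    // A queue bound to a default-selected device (CPU, GPU, or FPGA).
    sycl::queue q;
    {
        // Buffers wrap host memory for use on the device.
        sycl::buffer<float, 1> A(a.data(), sycl::range<1>(n));
        sycl::buffer<float, 1> B(b.data(), sycl::range<1>(n));
        sycl::buffer<float, 1> C(c.data(), sycl::range<1>(n));

        q.submit([&](sycl::handler& h) {
            auto pa = A.get_access<sycl::access::mode::read>(h);
            auto pb = B.get_access<sycl::access::mode::read>(h);
            auto pc = C.get_access<sycl::access::mode::write>(h);
            // One work-item per element: c[i] = a[i] + b[i].
            h.parallel_for<class vadd>(sycl::range<1>(n),
                [=](sycl::id<1> i) { pc[i] = pa[i] + pb[i]; });
        });
    } // Buffer destruction synchronizes results back to the host vectors.
    return 0;
}
```

The single-source style shown here is the point of DPC++: host and device code live in one C++ file, and the same kernel can be retargeted across the hardware the specification aims to cover.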
Intel's oneAPI development kit includes new tools such as a CUDA compatibility tool, Intel's own Python distribution, FPGA add-on tools, and debugging tools. The toolkit currently supports Intel's Core, Xeon, and Atom processors, Intel integrated graphics, and Arria FPGAs.
According to Phoronix, complementing these direct programming and API-based library models is Level Zero, a low-level, direct-to-metal interface for accelerator hardware, released this week.
The Level Zero API serves a dual purpose. Although it provides fine-grained access to low-level hardware features, most applications do not need such precise control. Level Zero is therefore also designed to sit beneath higher-level runtime APIs and libraries, which can build on it rather than on the hardware directly.
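As a rough illustration of the fine-grained access described above, the following sketch uses the Level Zero C API to initialize the driver stack and count the available drivers. It assumes the Level Zero loader and headers from the oneAPI specification are installed, and it needs supported accelerator hardware to return anything useful; it is a sketch of the call pattern, not a verified program.

```c
// Hedged sketch of Level Zero device discovery (assumes the
// level_zero loader/headers from the oneAPI specification).
#include <level_zero/ze_api.h>
#include <stdio.h>

int main(void) {
    // Initialize the Level Zero driver(s); 0 means no special flags.
    if (zeInit(0) != ZE_RESULT_SUCCESS) {
        fprintf(stderr, "Level Zero initialization failed\n");
        return 1;
    }

    // First call with NULL queries how many driver handles exist.
    uint32_t driverCount = 0;
    zeDriverGet(&driverCount, NULL);
    printf("Level Zero drivers found: %u\n", driverCount);
    return 0;
}
```

Enumerating drivers and devices by hand like this is exactly the kind of explicit control most applications skip, which is why the specification positions Level Zero primarily as a foundation for runtimes and libraries.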