Intel Xe Discrete Graphics Supports Int8 Integers: Enhanced Deep Learning Computing

Intel recently released DNNL 1.2, the latest version of its high-performance deep learning optimization library, confirming a new capability for the upcoming Xe-architecture discrete GPU: support for the int8 integer data type. Formerly known as MKL-DNN, DNNL is the deep neural network (DNN) branch of Intel's Math Kernel Library (MKL). Until now it has not supported int8 execution on the GPU, because Intel's current integrated graphics architecture lacks that capability.
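To illustrate what int8 support means in practice, here is a minimal, hypothetical sketch of symmetric int8 quantization, the general technique that lets deep learning libraries run inference on 8-bit integers instead of 32-bit floats. This is not DNNL's actual API; the function names and the per-tensor scaling scheme are illustrative assumptions.

```python
import numpy as np

def quantize_int8(x):
    """Symmetric per-tensor quantization: map float32 values to int8.

    Illustrative helper, not part of DNNL. The scale maps the largest
    absolute value in the tensor to 127, the int8 maximum.
    """
    scale = np.abs(x).max() / 127.0
    q = np.clip(np.round(x / scale), -128, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float32 values from int8 codes."""
    return q.astype(np.float32) * scale

x = np.array([0.5, -1.2, 3.4, 0.0], dtype=np.float32)
q, scale = quantize_int8(x)
x_hat = dequantize(q, scale)
# Each element is recovered to within scale/2, the rounding error bound.
```

The appeal for hardware is that int8 values are a quarter the size of float32, so a GPU can move and multiply four times as many of them per cycle, at the cost of the small rounding error shown above.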

Now, DNNL 1.2 adds int8 support on the GPU side, apparently in anticipation of the new Xe architecture. The move again highlights Intel's ambition to build a portfolio of synergistic architectures spanning CPUs, GPUs, FPGAs, and AI accelerators, with high-performance discrete GPUs boosting overall computing power.

The Intel Xe architecture will span several product lines. Xe HPC, aimed at high-performance computing, is the first to be revealed, in a product code-named Ponte Vecchio that features a built-in matrix engine (similar to NVIDIA's Tensor Cores) supporting 32x int8 operations.

It is not yet clear whether Xe HP, aimed at the mainstream and high-end consumer markets, and Xe LP, aimed at the entry-level market, will also support int8; they are likely to offer at least limited support in order to enable AI capabilities, an area Intel is currently very focused on.

In addition, DNNL 1.2 supports the AVX-512 instruction set and the DL Boost deep learning acceleration built into the Cascade Lake 2nd Generation Xeon Scalable processors, which improves int8 performance on the CPU.
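The pattern behind both DL Boost on the CPU (the AVX-512 VNNI instructions) and int8 matrix engines on GPUs is a fused int8 multiply with accumulation into a wider int32 register, since products of int8 values overflow int8 immediately. A rough sketch of the arithmetic, using NumPy rather than real intrinsics:

```python
import numpy as np

# Two int8 vectors, as a matrix engine or VNNI instruction would consume.
a = np.array([127, -128, 100, 50], dtype=np.int8)
b = np.array([127, 127, -100, 2], dtype=np.int8)

# A single int8*int8 product can reach 16384 in magnitude, far outside
# the int8 range [-128, 127], so hardware accumulates in int32.
acc = np.dot(a.astype(np.int32), b.astype(np.int32))
# acc holds 127*127 - 128*127 - 100*100 + 50*2 = -10027
```

This widening multiply-accumulate is why int8 support is a hardware feature rather than just a software one: without dedicated instructions, each 8-bit product must be widened and summed in separate steps, losing the throughput advantage.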