oneAPI as a Catalyst for Open Innovation

Intel started its oneAPI journey in 2019. From the beginning, we committed to creating an open ecosystem for developing performant cross-architecture applications for CPUs, GPUs, and other accelerators. We believe that an open approach wins over proprietary systems in the long run because it leverages the power of a large community and allows developers to choose what works best for them, from top to bottom in the software and hardware stack.

We pursued openness by adopting existing open-source technologies, contributing new projects where gaps existed, and opening up the design and specification process. I am the editor of the oneAPI specification and helped organize Intel's oneAPI open-source activity. This blog focuses on the open-source projects.

SYCL is the heart of oneAPI, enabling cross-platform data-parallel programming in modern C++. SYCL is a Khronos standard with broad participation across research institutions and companies. Intel contributes to the development of SYCL by participating in the SYCL standard committee and by contributing SYCL support to the LLVM project. That SYCL support, combined with LLVM's SPIR-V, PTX, and CPU backends, enables targeting SYCL programs at a wide variety of CPUs and accelerators. SYCL is augmented by the oneAPI DPC++ Library (oneDPL), which provides STL-like capabilities for programming accelerators. When you need high-performance math, the oneAPI Math Kernel Library (oneMKL) includes BLAS, LAPACK, FFTs, and random number generation. There are already many good math libraries, and open-source oneMKL provides a common SYCL-based interface that lets you integrate low-level proprietary and open-source libraries.
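To give a flavor of the programming model, here is a minimal sketch of a SYCL 2020 vector addition using unified shared memory. This is an illustration, not production code; the `#if` guard supplies a plain-loop fallback so the sketch also compiles where no SYCL toolchain is installed.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>
#if __has_include(<sycl/sycl.hpp>)
  #include <sycl/sycl.hpp>
  #define HAVE_SYCL 1
#endif

// Element-wise vector addition, c = a + b (assumes a and b have equal size).
std::vector<float> vector_add(const std::vector<float>& a,
                              const std::vector<float>& b) {
    const std::size_t n = a.size();
    std::vector<float> c(n);
#ifdef HAVE_SYCL
    // SYCL path: run the kernel on whatever device the default
    // selector picks (GPU, CPU, or another accelerator).
    sycl::queue q;
    float* da = sycl::malloc_shared<float>(n, q);
    float* db = sycl::malloc_shared<float>(n, q);
    float* dc = sycl::malloc_shared<float>(n, q);
    std::copy(a.begin(), a.end(), da);
    std::copy(b.begin(), b.end(), db);
    q.parallel_for(sycl::range<1>{n}, [=](sycl::id<1> i) {
        dc[i] = da[i] + db[i];
    }).wait();
    std::copy(dc, dc + n, c.begin());
    sycl::free(da, q);
    sycl::free(db, q);
    sycl::free(dc, q);
#else
    // Fallback so the sketch still runs without a SYCL implementation.
    for (std::size_t i = 0; i < n; ++i) c[i] = a[i] + b[i];
#endif
    return c;
}
```

The same kernel lambda runs unchanged on any device a SYCL backend can reach, which is what makes the single-source model attractive.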

Beyond C++, Intel and the community are enabling open-source implementations of Python, Julia, and Java for oneAPI. Intel developed oneAPI-accelerated, drop-in replacements for the very popular NumPy, pandas, and scikit-learn packages and enabled writing accelerator kernels directly in Python with Numba. You can write accelerator kernels in Julia thanks to Julia Computing, and in Java through the efforts of the University of Manchester's Advanced Processor Technologies group.

These languages all sit on top of the Level Zero runtime, which provides services for loading and executing programs, allocating memory, and more. Looking forward, we are leveraging our experience with Python, Julia, and Java to provide better language-runtime support in Level Zero.
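For a taste of what sitting on Level Zero looks like, here is a minimal sketch that initializes the driver stack and asks the loader how many drivers it can see, using the C API from `<level_zero/ze_api.h>`. As above, the guard is only there so the sketch compiles on systems without the Level Zero loader; a real language runtime would go on to create contexts, allocate memory, and launch kernels through the same API.

```cpp
#include <cstdint>
#if __has_include(<level_zero/ze_api.h>)
  #include <level_zero/ze_api.h>
  #define HAVE_LEVEL_ZERO 1
#endif

// Sketch: report how many Level Zero drivers the loader can find.
// Returns 0 where the loader (or any driver) is unavailable.
int level_zero_driver_count() {
#ifdef HAVE_LEVEL_ZERO
    // Initialize the driver stack; 0 means "all driver types".
    if (zeInit(0) != ZE_RESULT_SUCCESS)
        return 0;
    // First call with a null handle array just queries the count.
    std::uint32_t count = 0;
    zeDriverGet(&count, nullptr);
    return static_cast<int>(count);
#else
    // No Level Zero loader on this system.
    return 0;
#endif
}
```

The count-then-fill query pattern shown for `zeDriverGet` recurs throughout the Level Zero API.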

Open-source deep learning frameworks such as TensorFlow and PyTorch continue to be a focus for oneAPI. The computationally intensive parts of these frameworks are usually implemented with low-level libraries. Over the years, Intel has successfully integrated the oneAPI Deep Neural Network Library (oneDNN) into the leading frameworks to deliver optimal performance on its CPUs, and more recently extended that capability to GPUs. It can be very challenging to change a framework to take advantage of new hardware capabilities. To solve this problem, we collaborated with the TensorFlow community to develop a pluggable interface that makes it easy to add accelerator support, and it has become the flagship example for pluggable interfaces. We are excited about the opportunities this opens up for more aggressive hardware-specific optimization in TensorFlow; for example, we are using the pluggable interface to enable graph-level optimization in oneDNN.

The integration of oneDNN into these frameworks, together with its open-source license, makes it an attractive shortcut for porting deep learning frameworks to new platforms, and has spurred ports of oneDNN to new architectures. Fujitsu, for example, took the open-source oneDNN library and ported it to its own Arm-based CPUs. That library is currently running on Fugaku, the world's fastest supercomputer.

We are tirelessly improving and augmenting open-source software for the oneAPI platform, ensuring that it integrates into popular frameworks, and working with an ever-growing community of developers. The future is bright for truly open accelerator development tools.

Robert Cohn, Sr. Principal Engineer, Intel Development Tools Software

No license (express or implied, by estoppel or otherwise) to any intellectual property rights is granted by this document, with the sole exception that code included in this document is licensed subject to the Zero-Clause BSD open source license (0BSD), http://opensource.org/licenses/0BSD.

© Intel Corporation. Intel, the Intel logo, and other Intel marks are trademarks of Intel Corporation or its subsidiaries. Other names and brands may be claimed as the property of others.