Intel oneAPI helps developers quickly and correctly build performant code with a complete set of cross-architecture libraries and tools for heterogeneous computing.
As the industry moves to an open, unified, cross-architecture programming model for accelerator architectures – which include CPUs, GPUs and FPGAs – Intel and other organizations are cooperating to support the oneAPI industry initiative.
Based on standards, the initiative provides a common programming model that simplifies software development and increases performance for accelerated compute, without requiring proprietary hardware. The initiative also enables developers to integrate legacy code. This approach lets them choose the best architecture for the specific problem they need to solve, while avoiding the need to dedicate resources to rewriting software for additional architectures and platforms.
The bottom line: the oneAPI Specification empowers software developers to write code once and then tune it for multiple accelerator platforms used in heterogeneous computing.
In addition to investing in Intel’s own implementation of the oneAPI Spec, the company is also investing in research, technologies, and education. Much of that effort will occur at 11 new Intel oneAPI Centers of Excellence (CoEs). These CoEs will concentrate on HPC, AI, and graphics, helping to deliver strategic code ports and hardware support as well as new technologies, services, and curricula that can broaden adoption of the oneAPI ecosystem.
The Intel CoEs are located throughout the United States as well as in the United Kingdom, Europe, and now China – with the latest announcement of a strategic collaboration between Intel and the Institute of Computing Technology of the Chinese Academy of Sciences. Intel is also expanding its Intel Graphics Visualization Institutes of Xellence (Intel GVI) and aligning them under the oneAPI Center of Excellence umbrella. This includes institutions such as the Scientific Computing and Imaging Institute (SCI) at the University of Utah; the Texas Advanced Computing Center (TACC) at the University of Texas at Austin with Kitware, Inc.; and the Visualization Institute of the University of Stuttgart (VISUS).
Intel oneAPI Centers of Excellence (CoE) are located at some of the world’s most prestigious educational institutions.
For instance, as a new Intel oneAPI CoE, the Department of Electrical Engineering and Computer Sciences (EECS) at the University of California, Berkeley launched its Center for Energy Efficient Deep Learning (CEEDL). The center will focus on producing energy-efficient algorithms and implementations for computationally intensive deep learning workloads.
“We increasingly see computing take up large portions of an organization’s energy budget. Despite improvements in energy efficiency in the data center, demand for data-centric computing, especially for compute-intensive workloads, is expected to grow at an even greater rate,” said Joe Curley, Vice President and General Manager of Software Products & Ecosystem at Intel. “In establishing a oneAPI Center of Excellence in partnership with the UC Berkeley Center for Energy Efficient Deep Learning, we will explore energy-efficient algorithms for natural language understanding and training recommendation systems – with the hope of making data-centric computing even more energy-efficient.” CEEDL will use the oneAPI Deep Neural Network Library (oneDNN) and the oneAPI Collective Communications Library (oneCCL) to optimize its work.
More efficient algorithms alone are not enough: their implementations must also run on a wide range of computational platforms to drive adoption and deliver a significant return on investment. The oneAPI Spec and unified heterogeneous programming simplify the development of portable implementations across numerous architectures: CPUs, GPUs, FPGAs, and other accelerators.
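The write-once, retarget-anywhere model described above can be sketched in a few lines of SYCL, the open standard at the heart of the oneAPI programming model. This is a minimal vector-addition sketch for illustration only (not Intel sample code), and it assumes a SYCL 2020 toolchain such as Intel's DPC++ compiler is installed:

```cpp
#include <sycl/sycl.hpp>
#include <iostream>
#include <vector>

int main() {
  // The default selector picks the "best" device available at run time:
  // a GPU or other accelerator if present, otherwise the CPU. The kernel
  // below is written once; retargeting is done by the runtime, not the source.
  sycl::queue q{sycl::default_selector_v};
  std::cout << "Running on: "
            << q.get_device().get_info<sycl::info::device::name>() << "\n";

  std::vector<float> a(1024, 1.0f), b(1024, 2.0f), c(1024, 0.0f);
  {
    // Buffers hand the host data to the runtime, which manages any
    // host-device transfers the chosen device requires.
    sycl::buffer bufA{a}, bufB{b}, bufC{c};
    q.submit([&](sycl::handler& h) {
      sycl::accessor A{bufA, h, sycl::read_only};
      sycl::accessor B{bufB, h, sycl::read_only};
      sycl::accessor C{bufC, h, sycl::write_only};
      // One data-parallel kernel, valid on CPU, GPU, or FPGA targets.
      h.parallel_for(1024, [=](sycl::id<1> i) { C[i] = A[i] + B[i]; });
    });
  } // buffer destructors synchronize and copy results back to the host

  std::cout << "c[0] = " << c[0] << "\n";
  return 0;
}
```

Because device selection happens at run time, the same binary can be tuned for, and deployed across, the different accelerator platforms the article describes.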
Professors Kurt Keutzer and Joey Gonzalez will help run the Intel oneAPI CoE at the Department of Electrical Engineering and Computer Sciences (EECS) at the University of California, Berkeley.
“It’s great that Intel is playing a leadership role in the development of oneAPI, a much-needed standard to enable smooth deployment across diverse computational platforms,” said Professor Kurt Keutzer of the Department of Electrical Engineering and Computer Sciences (EECS) at the University of California, Berkeley. Keutzer joined Berkeley’s EECS faculty in 1998 after fifteen years in industry, including roles as CTO and Senior Vice President at Synopsys.
Keutzer will also join the oneAPI Collective Communications Library (oneCCL) and oneAPI Deep Neural Network Library (oneDNN) Technical Advisory Boards to provide insights into extending these libraries and other oneAPI compilers and tools to help support researchers and developers around the world. “Our center will use oneAPI to enable the easy migration of natural language and recommendation system workloads,” said Keutzer.
Keutzer’s EECS colleague at UC Berkeley, Professor Joey Gonzalez, adds, “Transitioning to oneAPI will significantly reduce the overhead of porting from one platform to another, which is a challenge of proprietary programming approaches, and will enable greater innovation in both hardware and software systems for machine learning.”
oneAPI 2022 – the Intel oneAPI Implementation Gets a New Update and 900 Added Features
Intel’s latest version of its oneAPI implementation, oneAPI 2022, will be released this quarter. With more than 900 features added since the original implementation shipped in December 2020, the new version brings cross-architecture development capabilities for CPUs and GPUs through the first unified C++/SYCL/Fortran compiler and Data Parallel Python.
The release also expands Intel Advisor’s accelerator performance modeling, adds a VTune Flame Graph for visualizing performance hot spots, and improves productivity through extended Microsoft Visual Studio Code integration and Microsoft WSL 2 support.
Get Involved and Review the oneAPI Specification
Learn about the latest oneAPI updates, industry initiative, and news. Check out our videos and podcasts. Visit our GitHub repo to review the spec and give feedback, or join the conversation happening now on our Discord channel. Then get inspired, network with peers, and participate in oneAPI events.