How to Install oneAPI Products and Run Data Parallel C++ Code Samples

By Loc Q Nguyen, published on December 31, 2019

Introduction to oneAPI Products

A modern platform can include diverse types of hardware such as CPUs, GPUs, FPGAs, AI processors, and other accelerators. Each architecture requires a different programming model to achieve maximum performance. The Intel® oneAPI Toolkits, including the Intel® oneAPI Base Toolkit (Base Kit) and specialty add-on toolkits, enable programmers to write a single portable program that can be reused across hardware targets in a heterogeneous platform.

Figure 1. oneAPI components for cross-architecture code development

Feature highlights of the oneAPI initiative include:

  • The Intel® oneAPI DPC++ Compiler for direct programming. The Data Parallel C++ (DPC++) language it compiles is an evolution of C++ that incorporates SYCL* (DPC++ is an implementation of SYCL with extensions). It allows code reuse across hardware targets and enables high productivity and performance across CPU, GPU, and FPGA architectures while permitting accelerator-specific tuning.
  • Libraries for API-based programming, including the following libraries in the Base Kit beta:
    • Intel® oneAPI DPC++ Library
    • Intel® oneAPI Deep Neural Network Library
    • Intel® oneAPI Math Kernel Library
    • Intel® oneAPI Threading Building Blocks
    • Intel® oneAPI Video Processing Library
    • Intel® oneAPI Data Analytics Library
    • Intel® oneAPI Collective Communications Library
  • Advanced analysis and debug tools, including Intel® VTune™ Profiler, Intel® Advisor, and GDB*.

The Intel oneAPI Base Toolkit will serve the majority of your development needs, but for specialized workloads, oneAPI products include seven domain-specific toolkits. These include:

  • The Intel® oneAPI HPC Toolkit is used to build, analyze, optimize and scale HPC applications.
  • The Intel® oneAPI IoT Toolkit is used to build smart, connected devices for healthcare, smart homes, aerospace, security, and more.
  • The Intel® oneAPI Rendering Toolkit is used to build high-fidelity visualization applications that require massive amounts of raw data to be quickly rendered into rich, realistic visuals.
  • The Intel® oneAPI DL Framework Developer Toolkit is used to build or customize existing deep learning frameworks.
  • The Intel® Distribution of OpenVINO™ toolkit is used to accelerate deep learning inference applications.
  • The Intel® AI Analytics Toolkit is used to achieve end-to-end performance for AI workloads.
  • The Intel® System Bring-up Toolkit is used to strengthen system reliability and optimize system power and performance.

Hardware Support for the Beta Release

The oneAPI toolkits (beta) support the following target hardware platforms:

  • CPU: Intel® Core™ processor family, Intel® Xeon® processor family, and Intel® Xeon® Scalable processors
  • GPU: 9th generation Intel® Core™ processor graphics
  • Intel® Programmable Acceleration Card with Intel® Arria® 10 GX FPGA

Before installing the Base Kit, verify the system CPU:

$ lscpu | grep "Model name"
Model name: Intel(R) Core(TM) i7-6770HQ CPU @ 2.60GHz

To determine GPU availability:

$ lspci | grep VGA
00:02.0 VGA compatible controller: Intel Corporation Iris Pro Graphics 580 (rev 09)

In this example system, the output shows that Intel® Iris™ Pro Graphics 580 is available. Note that Intel Iris Pro Graphics 580 is a version of Intel® Processor Graphics Gen9.

Install oneAPI Toolkits on Linux*

This section describes how to install oneAPI toolkits on a Linux system. The example is shown on an Intel® NUC Kit NUC6i7KYK with Intel Iris Pro Graphics 580, installed with Ubuntu* 18.04.3. The Installation Guide for Intel® oneAPI Toolkits provides complete toolkit installation details.

The general process for installing oneAPI toolkits is as follows:

  1. Go to the Intel® oneAPI Toolkits website.
  2. Click on Get the Base Kit in the “Start with the Intel oneAPI Base Toolkit” section and select your desired operating system.
  3. To install the Base Kit, use either the online installer or local installer. This article uses the local installer.
  4. After downloading the Base Kit, extract the .tar.gz file.
    $ tar xvzf l_BaseKit_b_<version>_offline.tar.gz
  5. Launch the GUI installation as root.
    $ cd l_BaseKit_b_<version>_offline
    $ sudo ./ 
  6. Install additional packages depending on your needs. In this example, the Intel oneAPI HPC Toolkit is installed using the CLI mode.
    $ tar xvzf l_HPCKit_b_<version>_offline.tar.gz
    $ cd l_HPCKit_b_<version>_offline

    To install using the CLI mode, edit the silent.cfg file, set the parameter ACCEPT_EULA=accept, and then run
    $ sudo ./ --silent ./silent.cfg
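The CLI install in step 6 relies on the silent.cfg file shipped inside the extracted package. For a default unattended install, only the EULA line named above needs editing; all other keys can keep the values the package ships with. A minimal edit looks like:

```
ACCEPT_EULA=accept
```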

Add Users to the Video Group

By default, non-root users do not have access to GPU devices. Add each non-root user to the video group so they can access the GPU as follows:

$ sudo usermod -a -G video <username>

Try Out oneAPI Code Samples

To become familiar with DPC++ code, download the code sample package.

The following example shows how to compile the vector-add code sample in the Base Kit. First, set up the environment variables:

$ source /opt/intel/inteloneapi/

You can list the OpenCL™ platforms and devices available on your system:

$ clinfo -l
Platform #0: Intel(R) OpenCL
 `-- Device #0: Intel(R) Core(TM) i7-6770HQ CPU @ 2.60GHz
Platform #1: Intel(R) OpenCL HD Graphics
 `-- Device #0: Intel(R) Gen9 HD Graphics NEO
Platform #2: Intel(R) FPGA Emulation Platform for OpenCL(TM)
 `-- Device #0: Intel(R) FPGA Emulation Device

In the above example, the system has three available OpenCL devices: a CPU (Intel® Core™ i7-6770HQ), a GPU (Intel® Gen9 HD Graphics), and an FPGA emulation device (Intel® FPGA Emulation Platform for OpenCL™ software).

You can navigate to the vector-add code sample in the Base Kit and then compile and run the code. By default, the runtime selects a preferred OpenCL device:

$ cd <Path to oneapi-toolkit>/DPC++Compiler/vector-add
$ make all
dpcpp -o vector-add.exe src/vector-add.cpp -lOpenCL -lsycl
$ make run
Device: Intel(R) Gen9 HD Graphics NEO

The above example shows that the device code runs on the GPU.

By setting the environment variable SYCL_DEVICE_TYPE=GPU, you force the kernel to execute on the GPU:

$ SYCL_DEVICE_TYPE=GPU ./vector-add.exe
Device: Intel(R) Gen9 HD Graphics NEO

Alternatively, you can select the CPU to run the device code by setting the environment variable SYCL_DEVICE_TYPE=CPU as follows:

$ SYCL_DEVICE_TYPE=CPU ./vector-add.exe
Device: Intel(R) Core(TM) i7-6770HQ CPU @ 2.60GHz

You can also execute the kernel on the Intel FPGA Emulation Platform for OpenCL software by setting the environment variable SYCL_DEVICE_TYPE=ACC:

$ SYCL_DEVICE_TYPE=ACC ./vector-add.exe
Device: Intel(R) FPGA Emulation Device


Summary

This article introduced the Intel oneAPI Toolkits and showed how to download and install them on a Linux* system. To try DPC++ code samples, download the code sample package and run the samples as shown in this article.


Appendix: Installing Intel® CPU Runtime for OpenCL™ Devices and Intel® GPU Runtime

Optionally, you can execute OpenCL kernels directly on Intel processors as OpenCL target devices. Go to OpenCL Runtimes for Intel processors and download the drivers as follows:

  • For systems with an Intel Xeon processor or Intel Core processor, click the Download button at “Intel® CPU Runtime for OpenCL™ Applications 18.1 for Linux* OS (64bit only)”, then extract and install the package:
  $ tar -xvzf l_opencl_p_18.1.0.015.tgz
  $ cd l_opencl_p_18.1.0.015
  $ sudo ./ --ignore-signature

By default, the Intel® CPU Runtime for OpenCL™ Applications driver is installed in /opt/intel/opencl:

   $ clinfo -l
   Platform #0: Intel(R) CPU Runtime for OpenCL(TM) Applications
   `-- Device #0: Intel(R) Core(TM) i7-6770HQ CPU @ 2.60GHz
  • For systems that have Intel® Processor Graphics Gen9, click Manual Download and Install at “Intel Graphics Technology Runtimes, Linux* OS.” This brings you to the Intel compute-runtime GitHub* page.

The following example shows how to install release 19.38:

$ wget
$ wget
$ wget
$ wget
$ wget
$ sudo dpkg -i *.deb
$ apt list --installed | grep intel                                                      
WARNING: apt does not have a stable CLI interface. Use with caution in scripts.

intel-gmmlib/now 19.2.4 amd64 [installed,local]
intel-igc-core/now 1.0.11-2500 amd64 [installed,local]
intel-igc-opencl/now 1.0.11-2500 amd64 [installed,local]
intel-microcode/bionic-updates,bionic-security,now 3.20190618.0ubuntu0.18.04.1 amd64 [installed]
intel-ocloc/now 19.38.14237 amd64 [installed,local]
intel-opencl/now 19.38.14237 amd64 [installed,local]
libdrm-intel1/bionic-updates,now 2.4.97-1ubuntu1~18.04.1 amd64 [installed]
xserver-xorg-video-intel-hwe-18.04/bionic-updates,now 2:2.99.917+git20171229-1ubuntu1~18.04.1 amd64 [installed]

$ clinfo -l
Platform #0: Intel(R) CPU Runtime for OpenCL(TM) Applications
 `-- Device #0: Intel(R) Core(TM) i7-6770HQ CPU @ 2.60GHz
Platform #1: Intel(R) OpenCL HD Graphics
 `-- Device #0: Intel(R) Gen9 HD Graphics NEO


Product and Performance Information


Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors. These optimizations include SSE2, SSE3, and SSSE3 instruction sets and other optimizations. Intel does not guarantee the availability, functionality, or effectiveness of any optimization on microprocessors not manufactured by Intel. Microprocessor-dependent optimizations in this product are intended for use with Intel microprocessors. Certain optimizations not specific to Intel microarchitecture are reserved for Intel microprocessors. Please refer to the applicable product User and Reference Guides for more information regarding the specific instruction sets covered by this notice.

Notice revision #20110804