Intel® oneAPI Base Toolkit(Beta)
Essential Kit for Diverse Workloads
The Intel® oneAPI Base Toolkit is a core set of tools and libraries for building and deploying high-performance, data-centric applications across diverse architectures.
It features the Data Parallel C++ (DPC++) language, an evolution of C++ that:
- Allows code reuse across hardware targets—CPUs, GPUs, and FPGAs†
- Permits custom tuning for individual accelerators
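To make the single-source model concrete, here is a minimal sketch of what a DPC++ program looks like: one vector-addition kernel that runs unmodified on whichever device the runtime selects. It is written against the SYCL-style API of the Beta-era DPC++ compiler and requires that compiler (dpcpp) to build; the kernel name `vadd` is an arbitrary illustrative label.

```cpp
#include <CL/sycl.hpp>
#include <iostream>
#include <vector>

namespace sycl = cl::sycl;

int main() {
  constexpr size_t N = 1024;
  std::vector<float> a(N, 1.0f), b(N, 2.0f), c(N, 0.0f);

  // The default selector picks an available device: CPU, GPU, or FPGA.
  sycl::queue q{sycl::default_selector{}};
  {
    // Buffers manage data movement between host and device.
    sycl::buffer<float, 1> A(a.data(), sycl::range<1>(N));
    sycl::buffer<float, 1> B(b.data(), sycl::range<1>(N));
    sycl::buffer<float, 1> C(c.data(), sycl::range<1>(N));

    q.submit([&](sycl::handler &h) {
      auto ra = A.get_access<sycl::access::mode::read>(h);
      auto rb = B.get_access<sycl::access::mode::read>(h);
      auto wc = C.get_access<sycl::access::mode::write>(h);
      // One work-item per element; the same kernel source targets any device.
      h.parallel_for<class vadd>(sycl::range<1>(N), [=](sycl::id<1> i) {
        wc[i] = ra[i] + rb[i];
      });
    });
  } // Buffer destructors synchronize and copy results back to c.

  std::cout << "c[0] = " << c[0] << '\n';
  return 0;
}
```

Per-device tuning then happens on top of this shared source, for example by querying device properties or specializing work-group sizes for a particular accelerator.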
Domain-specific libraries and the Intel® Distribution for Python* provide drop-in acceleration across relevant architectures. Enhanced profiling, design assistance, and debug tools complete the kit.
For specialized workloads, additional toolkits are available that complement the Intel oneAPI Base Toolkit. For example:
- For award-winning Intel® Fortran and Intel® C++ Compilers with OpenMP* or MPI support, also download the Intel® oneAPI HPC Toolkit.
- For C++ compilers, performance libraries, and analyzers for IoT software, download the Intel® oneAPI IoT Toolkit.
- For rendering and ray-tracing libraries optimized for visualization, download the Intel® oneAPI Rendering Toolkit.
†FPGA development requires the Intel® FPGA Add-On for oneAPI Base Toolkit, which is available as a separate, optional download.
Develop, Test, and Run Your oneAPI Code in the Cloud
Get what you need to build and optimize your oneAPI projects for free. With an Intel® DevCloud account, you get 120 days of access to the latest Intel® hardware—CPUs, GPUs, FPGAs—and Intel oneAPI tools and frameworks. No software downloads. No configuration steps. No installations.
Download the Toolkit
What's Included
- Intel® oneAPI Collective Communications Library: Implement optimized communication patterns to distribute deep learning model training across multiple nodes.
- Intel® oneAPI Data Analytics Library: Boost machine learning and data analytics performance.
- Intel® oneAPI DPC++ Compiler: Compile and optimize DPC++ code for CPU, GPU, and FPGA target architectures.
- Intel® oneAPI DPC++ Library: Speed up data-parallel workloads with key productivity algorithms and functions.
- Intel® oneAPI Deep Neural Network Library: Develop fast neural networks on Intel® CPUs and GPUs with performance-optimized building blocks.
- Intel® oneAPI Math Kernel Library: Accelerate math processing routines, including matrix algebra, fast Fourier transforms (FFT), and vector math.
- Intel® oneAPI Threading Building Blocks: Simplify parallelism with this advanced threading and memory-management template library.
- Intel® oneAPI Video Processing Library: Deliver fast, high-quality, real-time video decoding, encoding, transcoding, and processing for broadcasting, live streaming and VOD, cloud gaming, and more.
- Intel® Advisor: Design code for efficient vectorization, threading, and offloading to accelerators.
- Intel® Distribution for Python*: Achieve fast performance on math-intensive data science and machine learning workloads without code changes.
- Intel® DPC++ Compatibility Tool: Migrate legacy CUDA code to multi-platform DPC++ code with this assistant.
- Intel® Integrated Performance Primitives: Speed up imaging, signal processing, data compression, and more.
- Intel® VTune™ Profiler: Find and optimize performance bottlenecks across CPU, GPU, and FPGA systems.
- Intel® FPGA Add-On for oneAPI Base Toolkit: Program these reconfigurable hardware accelerators to speed specialized, data-centric workloads. Requires installation of the Intel® oneAPI Base Toolkit.
- Intel® Distribution for GDB: Enable deep, system-wide debug of DPC++, C, C++, and Fortran code.
Supported Hardware and Languages
- CPU: Intel and compatible processors
- GPU: Intel® Processor Graphics Gen9
- FPGA: Intel® Arria® 10 FPGAs
- Language: Data Parallel C++ (DPC++)
Your success is our success. Access these support resources when you need assistance.
- Intel® oneAPI Base Toolkit
- Intel® oneAPI DPC++ Compiler and Intel® DPC++ Compatibility Tool
- Intel® oneAPI Data Analytics Library
- Intel® oneAPI DPC++ Library
- Intel® oneAPI Threading Building Blocks
- Intel® Advisor
- Intel® Integrated Performance Primitives
- Intel® Distribution for Python*
- Intel® VTune™ Profiler
For additional help, see our general oneAPI Support.