Intel® oneAPI Deep Neural Network Library
Increase Deep Learning Framework Performance on CPUs and GPUs
Building Blocks to Optimize AI Applications
The Intel® oneAPI Deep Neural Network Library (oneDNN) helps developers improve productivity and enhance the performance of their deep learning frameworks. Use the same API to develop for CPUs, GPUs, or both. Then implement the rest of the application using Data Parallel C++. This library is included in both the Intel® oneAPI Base Toolkit and Intel® oneAPI DL Framework Developer Kit.
The library is built around three concepts:
- Primitive: Any low-level operation from which more complex operations are constructed, such as convolution, data format reorder, and memory
- Engine: A hardware processing unit, such as a CPU or GPU
- Stream: A queue of primitive operations on an engine
Top benefits:
- Supports key data types, including 16- and 32-bit floating point, bfloat16, and 8-bit integers
- Implements rich operators, including convolution, matrix multiplication, pooling, batch normalization, activation functions, recurrent neural network (RNN) cells, and long short-term memory (LSTM) cells
- Accelerates inference performance with automatic detection of Intel® Deep Learning Boost technology
Develop in the Cloud
Get what you need to build, test, and optimize your oneAPI projects for free. With an Intel® DevCloud account, you get 120 days of access to the latest Intel® hardware—CPUs, GPUs, FPGAs—and Intel oneAPI tools and frameworks. No software downloads. No configuration steps. No installations.
Download the Library
oneDNN is included as part of the Intel® oneAPI Base Toolkit.
Documentation & Code Samples
Get Started
Code Samples
Learn how to access oneAPI code samples from the command line or an IDE.
Specifications
Processors:
- Intel Atom® processors with Intel® Streaming SIMD Extensions
- Intel® Core™ processors
- Intel® Xeon® processors
- Intel® Xeon® Scalable processors
GPUs:
- Intel® Processor Graphics Gen9 and above
- Xe architecture
Host & target operating systems:
- Linux*
- Windows*
- macOS*
Languages:
- Data Parallel C++ (DPC++)
  Note: Requires the Intel oneAPI Base Toolkit.
- C and C++
Compilers:
- Intel® oneAPI DPC++/C++ Compiler
- Intel® C++ Compiler Classic
- GNU C++ Compiler
- Clang*
For more information, see the system requirements.
Threading runtimes:
- Intel® oneAPI Threading Building Blocks
- OpenMP*
- Data Parallel C++ (DPC++)
Get Help
Your success is our success. Access these forum and GitHub* resources when you need assistance.
Open-Source Version
Intel oneAPI Deep Neural Network Library is available as an open-source library.
Product and Performance Information
Performance varies by use, configuration, and other factors. Learn more at www.Intel.com/PerformanceIndex.