
Get Started with the Intel® oneAPI DL Framework Developer Toolkit for Linux*

ID 758906
Date 12/04/2020
Public


Follow These Steps for the Intel® oneAPI DL Framework Developer Toolkit:

The following instructions assume you have installed the Intel® oneAPI software. Please see the Intel oneAPI Toolkits page for installation options.

  1. Configure your system.
  2. Build and run a sample project from the command line.
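
Before building a sample from the command line, the oneAPI environment script must be sourced. A minimal sketch, assuming the default installation prefix `/opt/intel/oneapi` (set `ONEAPI_ROOT` first if you installed to a custom location):

```shell
# Hedged sketch: configure the oneAPI build environment in the current shell.
# Assumes the default install prefix; adjust ONEAPI_ROOT for custom installs.
ONEAPI_ROOT="${ONEAPI_ROOT:-/opt/intel/oneapi}"
if [ -f "$ONEAPI_ROOT/setvars.sh" ]; then
    # setvars.sh adds compiler and library paths for all installed components
    . "$ONEAPI_ROOT/setvars.sh"
    echo "oneAPI environment configured"
else
    echo "setvars.sh not found under $ONEAPI_ROOT (is the toolkit installed?)"
fi
```

Note that the script only affects the shell it is sourced in, so it must be re-run in each new terminal session.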

Introduction

To run the oneDNN and oneCCL samples, you must install the Intel® oneAPI Base Toolkit. The Base Kit contains all Intel® oneAPI DL Framework Developer Toolkit (DLFD Kit) components along with their required dependencies.

If you only want to use the DLFD Kit libraries and do not plan to try the provided samples, installing the DLFD Kit alone is sufficient. Otherwise, install the Intel® oneAPI Base Toolkit.

This toolkit is a suite of development libraries for building or optimizing deep learning frameworks that take full advantage of the latest Intel® processors. It gives framework developers flexible options for achieving high performance on both CPUs and GPUs.

Included in this toolkit are:

  • Intel® oneAPI Deep Neural Network Library
  • Intel® oneAPI Collective Communications Library

Intel® oneAPI Deep Neural Network Library

The Intel® oneAPI Deep Neural Network Library is an open-source performance library for deep learning applications. It provides basic building blocks for neural networks, optimized for Intel® Architecture Processors and Intel® Processor Graphics. The library is intended for developers of deep learning applications and frameworks who want to improve performance on Intel CPUs and GPUs. Many popular deep learning frameworks are integrated with this library.

Intel® oneAPI Collective Communications Library

The Intel® oneAPI Collective Communications Library provides an efficient implementation of the communication patterns used in deep learning.

  • Built on top of the Intel® MPI Library, while also allowing the use of other communication libraries.
  • Optimized to drive scalability of communication patterns.
  • Works across various interconnects: Intel® Omni-Path Architecture, InfiniBand*, and Ethernet.
  • Common API to support deep learning frameworks (Caffe*, Theano*, Torch*, etc.)
  • The package comprises the Intel® MLSL Software Development Kit (SDK) and the Intel® MPI Library Runtime components.