Get Started

  • Version 0.08
  • 06/18/2020

Get Started with the Intel® oneAPI DL Framework Developer Toolkit (Beta)

Follow These Steps for the Intel® oneAPI DL Framework Developer Toolkit:

The following instructions assume you have installed the Intel® oneAPI software. Please see the Intel oneAPI Toolkits page for installation options.
  1. Build and run a sample project using the Command Line.

Introduction

If you wish to try the oneDNN and oneCCL samples, you must install the Intel® oneAPI Base Toolkit. The Base Kit contains all Intel® oneAPI DL Framework Developer Toolkit (DLFD Kit) components along with their required dependencies. If you only want to use the DLFD Kit libraries without the provided samples, installing the DLFD Kit alone is sufficient; otherwise, install the Intel® oneAPI Base Toolkit.
This toolkit is a suite of development libraries that makes it fast and easy to build or optimize a deep learning framework that gets every last ounce of performance out of the newest Intel® processors. It gives deep learning framework developers flexible options, including optimal performance on a CPU or GPU.
Included in this toolkit are:
  • Intel® oneAPI Deep Neural Network Library
  • Intel® oneAPI Collective Communications Library

Intel® oneAPI Deep Neural Network Library

The Intel® oneAPI Deep Neural Network Library (oneDNN) is an open-source performance library for deep learning applications. The library includes basic building blocks for neural networks optimized for Intel® Architecture Processors and Intel® Processor Graphics. It is intended for deep learning application and framework developers interested in improving application performance on Intel CPUs and GPUs. Many popular deep learning frameworks are integrated with this library.

Intel® oneAPI Collective Communications Library

The Intel® oneAPI Collective Communications Library (oneCCL) is a library providing an efficient implementation of the communication patterns used in deep learning. A minimal usage sketch follows the feature list below.
  • Built on top of the Intel® MPI Library, and allows use of other communication libraries.
  • Optimized to drive scalability of communication patterns.
  • Works across various interconnects: Intel® Omni-Path Architecture, InfiniBand*, and Ethernet.
  • Provides a common API to support deep learning frameworks (Caffe*, Theano*, Torch*, etc.).
  • The package comprises the Intel® MLSL Software Development Kit (SDK) and the Intel® MPI Library Runtime components.
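As a hedged illustration (not taken from this guide), the sketch below shows one way a host application might perform an allreduce with the oneCCL C++ API, using MPI only to exchange the key-value-store address during setup. The code follows the oneapi/ccl.hpp interface of later oneCCL releases; the exact header and entry points in the beta toolkit may differ, so treat names such as ccl::create_main_kvs and ccl::create_communicator as assumptions to verify against your installed version.

#include <mpi.h>
#include <vector>
#include "oneapi/ccl.hpp"   // assumed header name; may differ in the beta

int main() {
    ccl::init();
    MPI_Init(nullptr, nullptr);

    int size = 0, rank = 0;
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    // Rank 0 creates the main key-value store and broadcasts its address;
    // the other ranks attach to it.
    ccl::shared_ptr_class<ccl::kvs> kvs;
    ccl::kvs::address_type addr;
    if (rank == 0) {
        kvs = ccl::create_main_kvs();
        addr = kvs->get_address();
        MPI_Bcast(addr.data(), addr.size(), MPI_BYTE, 0, MPI_COMM_WORLD);
    } else {
        MPI_Bcast(addr.data(), addr.size(), MPI_BYTE, 0, MPI_COMM_WORLD);
        kvs = ccl::create_kvs(addr);
    }

    auto comm = ccl::create_communicator(size, rank, kvs);

    // Each rank contributes (rank + 1); after the allreduce every element
    // on every rank holds the sum 1 + 2 + ... + size.
    const size_t count = 128;
    std::vector<float> send_buf(count, static_cast<float>(rank + 1));
    std::vector<float> recv_buf(count, 0.0f);

    ccl::allreduce(send_buf.data(), recv_buf.data(), count,
                   ccl::reduction::sum, comm).wait();

    MPI_Finalize();
    return 0;
}

After sourcing the oneAPI environment, a program like this would typically be launched across ranks with mpirun (for example, mpirun -n 2 ./sample); exact launch options depend on your cluster setup.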

Product and Performance Information

Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors. These optimizations include SSE2, SSE3, and SSSE3 instruction sets and other optimizations. Intel does not guarantee the availability, functionality, or effectiveness of any optimization on microprocessors not manufactured by Intel. Microprocessor-dependent optimizations in this product are intended for use with Intel microprocessors. Certain optimizations not specific to Intel microarchitecture are reserved for Intel microprocessors. Please refer to the applicable product User and Reference Guides for more information regarding the specific instruction sets covered by this notice.

Notice revision #20110804