Get Started with the Intel® AI Analytics Toolkit (Beta)

Version 0.09 | 09/21/2020

The following instructions assume you have installed the Intel® oneAPI software. Please see the Intel oneAPI Toolkits page for installation options.
Follow these steps for the Intel® AI Analytics Toolkit (AI Kit):

Migrating Existing Projects

No special modifications to your existing projects are required to start using them with this toolkit.

Components of This Toolkit

The AI Kit includes:
  • Intel® Optimization for PyTorch: The Intel® oneAPI Math Kernel Library for Deep Neural Networks (Intel® MKL-DNN) is included in PyTorch as the default math kernel library for deep learning. See this article on the Intel® Developer Zone for more details. (A verification sketch follows this list.)
  • Intel® Optimization for TensorFlow: This version integrates primitives from the Intel® oneAPI Math Kernel Library for Deep Neural Networks (Intel® MKL-DNN) into the TensorFlow runtime for accelerated performance.
  • Intel® Distribution for Python: Get faster Python application performance right out of the box, with minimal or no changes to your code. This distribution is integrated with Intel® Performance Libraries such as the Intel® oneAPI Math Kernel Library and the Intel® oneAPI Data Analytics Library. It also includes daal4py, a Python module integrated with the Intel® oneAPI Data Analytics Library (a daal4py sketch appears at the end of this section), as well as the Python Data Parallel Processing Library (PyDPPL), a lightweight Python wrapper for Data Parallel C++ and SYCL that provides a data parallel interface and abstractions to efficiently tap into device management features of CPUs and GPUs running on Intel® Architecture. Standard Python installations are fully compatible with the AI Kit, but the Intel® Distribution for Python is preferred.
  • Intel® Distribution of Modin: Seamlessly scale preprocessing across multiple nodes with this intelligent, distributed dataframe library, which has an API identical to pandas. For more information, see Installing the Intel® AI Analytics Toolkit with the Conda* Package Manager. (A drop-in usage sketch follows this list.)
  • Model Zoo for Intel® Architecture: Access pretrained models, sample scripts, best practices, and step-by-step tutorials for many popular open source machine learning models optimized by Intel to run on Intel® Xeon® Scalable processors.
  • Low Precision Optimization Tool: Provides a unified, low-precision inference interface across multiple deep learning frameworks optimized by Intel with this open-source Python library.
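As a quick sanity check, you can confirm from Python that your PyTorch build exposes MKL-DNN (oneDNN) support. This minimal sketch uses PyTorch's public torch.backends.mkldnn API and assumes it is run from the AI Kit's PyTorch environment:

    # Minimal sketch: verify that this PyTorch build can use MKL-DNN (oneDNN).
    # Assumes it is run inside the AI Kit's PyTorch environment.
    import torch

    print("PyTorch version:", torch.__version__)
    # Reports whether MKL-DNN kernels are usable on this machine.
    print("MKL-DNN available:", torch.backends.mkldnn.is_available())

Because the Intel® Distribution of Modin keeps the pandas API, adopting it is typically a one-line change to the import. A minimal sketch (the DataFrame contents are illustrative):

    # Minimal sketch: Modin as a drop-in replacement for pandas.
    # Only the import line changes relative to plain pandas code.
    import modin.pandas as pd

    df = pd.DataFrame({"x": range(10), "y": [v * 0.5 for v in range(10)]})
    print(df.describe())  # same API and semantics as pandas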
Although not required to run projects, additional programming options and instructions specific to other programming languages are available through the Intel® Data Analytics Acceleration Library.
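To give a feel for how daal4py surfaces Data Analytics Library algorithms in Python, here is a hedged K-Means sketch following the usage pattern in the daal4py documentation; the synthetic data, cluster count, and iteration cap are illustrative assumptions, not toolkit defaults:

    # Illustrative sketch: K-Means clustering with daal4py.
    # The data, nClusters, and maxIterations values are arbitrary examples.
    import numpy as np
    import daal4py as d4p

    data = np.random.rand(1000, 3)  # 1000 synthetic observations, 3 features

    # Choose starting centroids, then run the clustering itself.
    init = d4p.kmeans_init(nClusters=4, method="plusPlusDense").compute(data)
    result = d4p.kmeans(nClusters=4, maxIterations=100).compute(data, init.centroids)

    print(result.centroids)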

Product and Performance Information

1. Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors. These optimizations include SSE2, SSE3, and SSSE3 instruction sets and other optimizations. Intel does not guarantee the availability, functionality, or effectiveness of any optimization on microprocessors not manufactured by Intel. Microprocessor-dependent optimizations in this product are intended for use with Intel microprocessors. Certain optimizations not specific to Intel microarchitecture are reserved for Intel microprocessors. Please refer to the applicable product User and Reference Guides for more information regarding the specific instruction sets covered by this notice.

Notice revision #20110804