Intel® AI Analytics Toolkit Release Notes

By Hung-Ju Tsai and Preethi Venkatesh

Published: 12/05/2020   Last Updated: 03/11/2021

Overview

This document provides details about new features and known issues for the Intel® AI Analytics Toolkit. The toolkit includes the following components:

  • Intel® Optimization for TensorFlow*

  • Intel® Optimization for PyTorch*

  • Intel® Distribution for Python*

  • Intel® Low Precision Optimization Tool

  • Model Zoo for Intel® Architecture

  • Intel® Distribution for Modin*

Version History

Date       Version   Major Change Summary
Mar 2021   2021.2    Bug fixes and improvements
Dec 2020   2021.1    Initial release

Where to Find the Release

Please check the release page for more information on how to acquire the package.

Compatibility Notes

  • Intel® Optimization for TensorFlow* is compatible with version 2.3

  • Intel® Optimization for PyTorch* is compatible with version 1.7

  • Intel® Distribution for Python* is compatible with CPython version 3.7

  • Intel® Distribution for Modin* is compatible with version 0.8.2
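
To verify that an installed environment matches the versions listed above, a quick check such as the one below can be run from the toolkit's Python environment. This is a minimal sketch; it assumes the components are importable under their usual module names (tensorflow, torch, modin).

    import sys

    import tensorflow as tf   # Intel® Optimization for TensorFlow*
    import torch              # Intel® Optimization for PyTorch*
    import modin              # Intel® Distribution for Modin*

    print("Python     :", sys.version.split()[0])  # expected 3.7.x
    print("TensorFlow :", tf.__version__)          # expected 2.3.x
    print("PyTorch    :", torch.__version__)       # expected 1.7.x
    print("Modin      :", modin.__version__)       # expected 0.8.2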

What's New

  • Intel® Optimization for TensorFlow* 

  • Intel® Optimization for PyTorch*

    • PyTorch* 1.7.0 is now supported by Intel® Extension for PyTorch*.

    • The device name was changed from DPCPP to XPU (see the sketch at the end of this section).

    • Enabled the launcher for end users.

    • Improved INT8 optimization with a refined automatic mixed precision API.

    • More operators are optimized for INT8 inference and BFloat16 training of key workloads such as Mask R-CNN, SSD-ResNet34, DLRM, and RNN-T.

    • New custom operators: ROIAlign, RNN, FrozenBatchNorm, nms.

    • Performance improvements for several operators (tanh, log_softmax, upsample, embedding_bag); INT8 linear fusion is now enabled.

    • Bug fixes

  • Intel® Model Zoo

    • Several new TensorFlow* and PyTorch* models added to the Intel® Model Zoo
    • Ten new TensorFlow* workload containers and model packages available on the Intel® oneContainer Portal
    • Two new PyTorch* workload containers and model packages available on the Intel® oneContainer Portal
    • Three new TensorFlow* Kubernetes* packages available on the Intel® oneContainer Portal
    • A new Helm chart to deploy TensorFlow* Serving on a Kubernetes* cluster
    • Bug fixes and documentation improvements
  • Intel® Low Precision Optimization Tool

    • Preview support for new backends (PyTorch/IPEX, ONNX Runtime)
    • Added built-in industry datasets/metrics and custom registration
    • Preliminary input/output node auto-detection for TensorFlow* models
    • New INT8 quantization recipes: bias correction and label balance
    • Validated more than 30 out-of-box (OOB) models
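
The sketch below illustrates the PyTorch* device-name change called out above: tensors and modules are now placed on the extension's device with the "xpu" device string instead of the old "dpcpp" string. The import name (intel_pytorch_extension) and the availability of the "xpu" device depend on the installed Intel® Extension for PyTorch* build, so treat this as an illustrative sketch rather than the definitive usage; see the extension's documentation for the exact API.

    import torch
    import intel_pytorch_extension as ipex  # assumed import name; importing registers the extension's device

    # Any torch.nn.Module can be moved the same way; a toy layer keeps the example short.
    model = torch.nn.Linear(4, 2)

    # Older releases addressed the extension device as "dpcpp"; this release uses "xpu".
    device = "xpu"
    model = model.to(device)
    data = torch.randn(8, 4).to(device)

    with torch.no_grad():
        output = model(data)
    print(output.shape)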

System Requirements

Please see the system requirements.

How to Start Using the Tools

Please refer to the usage guides for each of the included tools.

Known Limitations

  • Intel® Optimization for PyTorch* 
    • Multi-node training may still encounter hang issues after several iterations. A fix will be included in the next official release.

Notices and Disclaimers

Intel technologies may require enabled hardware, software or service activation.

No product or component can be absolutely secure.

Your costs and results may vary.

© Intel Corporation. Intel, the Intel logo, and other Intel marks are trademarks of Intel Corporation or its subsidiaries. Other names and brands may be claimed as the property of others.

No license (express or implied, by estoppel or otherwise) to any intellectual property rights is granted by this document.

The products described may contain design defects or errors known as errata which may cause the product to deviate from published specifications. Current characterized errata are available on request.

Intel disclaims all express and implied warranties, including without limitation, the implied warranties of merchantability, fitness for a particular purpose, and non-infringement, as well as any warranty arising from course of performance, course of dealing, or usage in trade.

Product and Performance Information

Performance varies by use, configuration and other factors. Learn more at www.Intel.com/PerformanceIndex.