Published: 08/09/2017 | Last Updated: 04/03/2020
TensorFlow* is a widely used machine learning framework in the deep learning arena that demands efficient utilization of computational resources. To take full advantage of Intel® architecture and extract maximum performance, the TensorFlow framework has been optimized using oneAPI Deep Neural Network Library (oneDNN) primitives, a popular performance library for deep learning applications. For more information on the optimizations as well as performance data, see the blog post TensorFlow* Optimizations on Modern Intel® Architecture.
Anaconda* has made it convenient for the AI community to enable high-performance computing in TensorFlow. Starting with TensorFlow v1.9, Anaconda has built, and will continue to build, TensorFlow using oneDNN primitives to deliver maximum performance on your CPU.
This install guide features several methods to obtain Intel® Optimization for TensorFlow, from off-the-shelf packages to building from source, conveniently categorized into Binaries, Docker Images, and Build from Source.
Intel® Optimization for TensorFlow is now also available as part of the Intel® AI Analytics Toolkit. Download and install it to get separate conda environments optimized with Intel's latest AI accelerations. Code samples to help you get started are available here.
*Supports Python 3.6 and 3.7
Available for Linux*, Windows*, MacOS*
TensorFlow* version: 2.2.0
Installation instructions:
If you don't have the conda package manager, download and install Anaconda.
On Linux* and Windows*, open the Anaconda prompt and use the following instruction:
conda install tensorflow
In case the anaconda channel is not the highest-priority channel by default (or you are not sure), use the following command to make sure you get the right TensorFlow with Intel optimizations:
conda install tensorflow -c anaconda
On macOS*, open the Anaconda prompt and use the following instruction instead:
conda install tensorflow-mkl
(or)
conda install tensorflow-mkl -c anaconda
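To confirm which build conda actually resolved, you can inspect the installed package's build string; on the anaconda channel the Intel-optimized builds carry an "mkl" tag in the build string (the version and hash shown in the comment are illustrative, not exact):
conda list tensorflow
# illustrative output; the mkl_ prefix in the build string indicates the oneDNN-enabled build
# tensorflow    2.2.0    mkl_py37h66bb152_0    anaconda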
Besides the install methods described above, Intel® Optimization for TensorFlow is distributed as wheels, Docker images, and a conda package on the Intel channel. Follow one of the installation procedures below to get Intel-optimized TensorFlow.
Note: All binaries distributed by Intel were built against the TensorFlow v2.2.0 tag in a CentOS* container with GCC 4.8.5 and glibc 2.17, with the following compiler flags (shown below as passed to Bazel*):
--cxxopt=-D_GLIBCXX_USE_CXX11_ABI=0 --copt=-march=corei7-avx --copt=-mtune=core-avx-i --copt=-O3 --copt=-Wformat --copt=-Wformat-security --copt=-fstack-protector --copt=-fPIC --copt=-fpic --linkopt=-znoexecstack --linkopt=-zrelro --linkopt=-znow --linkopt=-fstack-protector
Available for Linux*
TensorFlow* version: 2.2.0
Installation instructions:
Open the Anaconda prompt and use the following instruction. Available for Python 3.6 and 3.7:
conda install tensorflow -c intel
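If you prefer to keep this build isolated from your base environment, a minimal sketch using a dedicated conda environment (the environment name intel_tf is arbitrary):
# create a fresh environment with the Intel-optimized build, then switch to it
conda create -n intel_tf -c intel python=3.7 tensorflow
conda activate intel_tf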
Available for Linux*
TensorFlow* version: 2.2.0
Installation instructions:
Multiple options are provided to download the Intel® AI Analytics Toolkit, including conda, online/offline installers, repositories, and containers.
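For example, if you choose the conda route, a hedged sketch of installing the TensorFlow variant of the toolkit into its own environment (the intel-aikit-tensorflow metapackage name and the environment name are assumptions; check the toolkit documentation for the exact package):
# assumed metapackage name; verify against the AI Analytics Toolkit docs
conda create -n aikit-tf -c intel intel-aikit-tensorflow
conda activate aikit-tf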
Available for Linux*
TensorFlow version: 2.3.0
Installation instructions:
Note:
For TensorFlow versions 1.13, 1.14, and 1.15 with pip > 20.0, if you experience an "invalid wheel" error, try downgrading pip to a version below 20.0. For example:
python -m pip install --force-reinstall pip==19.0
Run the instruction below to install the wheel into an existing Python* installation, preferably Intel® AI Analytics Toolkit. Supported Python versions are 3.5, 3.6, 3.7, and 3.8.
pip install intel-tensorflow
If your machine supports the AVX-512 instruction set, use the package below for better performance.
pip install intel-tensorflow-avx512
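As with any pip package, you can pin a specific release and install into an isolated virtual environment; a minimal sketch (the environment name tf_env is arbitrary):
# create and activate an isolated environment, then pin the release documented above
python -m venv tf_env
source tf_env/bin/activate
pip install intel-tensorflow==2.3.0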
Pip packages are posted on Google Cloud and AWS for easy customer access.
Note: If your machine supports the AVX-512 instruction set, download and install the wheel file with AVX-512 as the minimum required instruction set from the table above.
Note: If you run into the following warning about ISAs above AVX2, download and install the wheel file with AVX-512 as the minimum required instruction set from the table above.
I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 AVX512F FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
Note: If you run a release with AVX-512 as the minimum required instruction set on a machine without AVX-512 support, you will run into an "Illegal instruction (core dumped)" error.
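On Linux*, a quick way to check whether your CPU supports AVX-512 before picking a package is to inspect the flags in /proc/cpuinfo:
# lists the AVX-512 feature flags; no output means AVX-512 is unsupported
grep -o 'avx512[a-z0-9]*' /proc/cpuinfo | sort -u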
Note that for the 1.14.0 release we have fixed a few vulnerabilities: we identified new CVE issues from curl and GCP support in the previous PyPI package release, so we introduced a new set of fixed packages on PyPI.
Starting with version 1.14, Google has released DL containers for TensorFlow on CPU that are optimized with oneDNN by default. TensorFlow v1.x CPU container names follow the format "tf-cpu.<framework version>", TensorFlow v2.x CPU container names follow the format "tf2-cpu.<framework version>", and both support Python 3. Below are sample commands to download the Docker image locally and launch the container for TensorFlow 1.14 or TensorFlow 2.3. Use only one of the following commands at a time.
# TensorFlow 1.14
docker run -d -p 8080:8080 -v /home:/home gcr.io/deeplearning-platform-release/tf-cpu.1-14
# TensorFlow 2.3
docker run -d -p 8080:8080 -v /home:/home gcr.io/deeplearning-platform-release/tf2-cpu.2-3
This command starts the TensorFlow 1.14 or TensorFlow 2.3 container with oneDNN enabled in detached mode, binds the running Jupyter server to port 8080 on the local machine, and mounts the local /home directory to /home in the container. The running JupyterLab instance can be accessed at localhost:8080.
To launch an interactive bash instance of the Docker container, run one of the commands below.
# TensorFlow 1.14
docker run -v /home:/home -it gcr.io/deeplearning-platform-release/tf-cpu.1-14 bash
# TensorFlow 2.3
docker run -v /home:/home -it gcr.io/deeplearning-platform-release/tf2-cpu.2-3 bash
You can find all supported docker tags/configurations here.
These Docker images are published on http://hub.docker.com under the intel/intel-optimized-tensorflow and intel/intel-optimized-tensorflow-avx512 namespaces and can be pulled with the following commands:
# intel-optimized-tensorflow
docker pull intel/intel-optimized-tensorflow
# intel-optimized-tensorflow-avx512
docker pull intel/intel-optimized-tensorflow-avx512:latest
For example, to run the data science container directly, simply run:
# intel-optimized-tensorflow
docker run -it -p 8888:8888 intel/intel-optimized-tensorflow
# intel-optimized-tensorflow-avx512
docker run -it -p 8888:8888 intel/intel-optimized-tensorflow-avx512:latest
Then open http://localhost:8888/ in your browser.
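To verify oneDNN is active inside the image without starting Jupyter, you can reuse the verification call shown later in this guide as a one-liner (assuming the image's default tag carries TensorFlow 2.x, where this module path applies, and python is on the image's PATH):
docker run --rm intel/intel-optimized-tensorflow python -c "from tensorflow.python import _pywrap_util_port; print('MKL enabled:', _pywrap_util_port.IsMklEnabled())"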
For those who prefer to browse, all supported Docker tags/configurations for intel-optimized-tensorflow and intel-optimized-tensorflow-avx512 can be found on their respective Docker Hub pages.
To get the latest release notes for Intel® Optimization for TensorFlow*, refer to this article.
More containers for Intel® Optimization for TensorFlow* can be found at the Intel® oneContainer Portal.
Building TensorFlow from source is not recommended. However, if the instructions provided above do not work because of an unsupported ISA, you can always build from source.
Building TensorFlow from source code requires a Bazel installation; refer to the instructions here: Installing Bazel.
Installation instructions:
git clone https://github.com/tensorflow/tensorflow
cd tensorflow
git checkout r2.3
PATH can be changed to point to a specific version of the GCC compiler:
export PATH=/PATH//bin:$PATH
LD_LIBRARY_PATH can also be set to the new location:
export LD_LIBRARY_PATH=/PATH//lib64:$LD_LIBRARY_PATH
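After adjusting these variables, you can confirm which GCC the build will pick up:
which gcc
gcc --version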
# Build targeting the instruction set of the build machine
bazel build --config=mkl -c opt --copt=-march=native //tensorflow/tools/pip_package:build_pip_package
# Build matching the flags Intel uses for its released binaries (broadly compatible)
bazel build --config=mkl --cxxopt=-D_GLIBCXX_USE_CXX11_ABI=0 --copt=-march=sandybridge --copt=-mtune=ivybridge --copt=-O3 //tensorflow/tools/pip_package:build_pip_package
# Build with AVX, AVX2, FMA, and AVX-512 instructions enabled
bazel build --config=mkl -c opt --copt=-mavx --copt=-mavx2 --copt=-mfma --copt=-mavx512f --copt=-mavx512pf --copt=-mavx512cd --copt=-mavx512er //tensorflow/tools/pip_package:build_pip_package
The flags set above add AVX, AVX2, and AVX-512 instructions, which will result in "Illegal instruction" errors on older CPUs. If you want to build for older CPUs, set the instruction flags accordingly.
3. Install the optimized TensorFlow wheel
bazel-bin/tensorflow/tools/pip_package/build_pip_package ~/path_to_save_wheel
pip install --upgrade --user ~/path_to_save_wheel/<wheel_name.whl>
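A quick import check confirms the wheel installed cleanly, before running the fuller oneDNN verification described later in this guide:
python -c "import tensorflow as tf; print(tf.__version__)"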
Prerequisites
Install the Visual C++ 2015 build tools from https://visualstudio.microsoft.com/vs/older-downloads/
Installation
set PATH=%PATH%;output_dir\external\mkl_windows\lib
3. Run the Bazel build with the "mkl" config and the "output_dir" to use the right mkl libs:
bazel --output_base=output_dir build --config=mkl --config=opt //tensorflow/tools/pip_package:build_pip_package
4. Install the optimized TensorFlow wheel
bazel-bin\tensorflow\tools\pip_package\build_pip_package C:\temp\path_to_save_wheel
pip install C:\temp\path_to_save_wheel\<wheel_name.whl>
Prerequisites
Select pip as an optional feature and add it to your %PATH% environment variable.
pip3 install six numpy wheel
pip3 install keras_applications==1.0.6 --no-deps
pip3 install keras_preprocessing==1.0.5 --no-deps
pacman -S zip unzip patch diffutils git
set PATH=%PATH%;<path to the Bazel binary>
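You can confirm Bazel is reachable from the prompt:
bazel version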
Install Visual C++* Build Tools 2019. It comes with Visual Studio* 2019 but can be installed separately. Go to Visual Studio Downloads, then download and install the following:
Microsoft Visual C++ 2019 Redistributable from Other Tools and Frameworks
Microsoft Build Tools 2019 from Tools for Visual Studio 2019
Installation
Set the following environment variables:
BAZEL_SH: C:\msys64\usr\bin\bash.exe
BAZEL_VS: C:\Program Files (x86)\Microsoft Visual Studio
BAZEL_VC: C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC
Note: To reduce compile time, set:
set TF_VC_VERSION=16.6
More details can be found here.
Add the following locations to your %PATH% environment variable:
python path, e.g. C:\Program Files\Python36
oneDNN runtime lib location that will be created during the build process, e.g. D:\output_dir\external\mkl_windows\lib
the Bazel path, e.g. C:\Program Files\Bazel-3.2.0
MSYS2 path, e.g. C:\msys64;C:\msys64\usr\bin
Git path, e.g. C:\Program Files\Git\cmd;C:\Program Files\Git\usr\bin
set PATH=%PATH%;C:\Program Files\Python36;D:\output_dir\external\mkl_windows\lib;C:\Program Files\Bazel-3.2.0;C:\msys64;C:\msys64\usr\bin;C:\Program Files\Git\cmd;C:\Program Files\Git\usr\bin
git clone https://github.com/Intel-tensorflow/tensorflow.git
cd tensorflow
git checkout r2.3-windows
python ./configure.py
set OneDNN_DIR=<path-to-oneDNN-output-dir>\one_dnn_dir
set PATH=%OneDNN_DIR%;%PATH%
bazel --output_base=%OneDNN_DIR% build --announce_rc --config=opt ^
--config=mkl ^
--action_env=PATH="<user is expected to expand the system path here>" ^
--define=no_tensorflow_py_deps=true ^
tensorflow/tools/pip_package:build_pip_package
Note: Based on Bazel issue #7026, we set --action_env=PATH=<value>. Open cmd.exe, run echo %PATH%, and copy the output into the value of --action_env=PATH=<value>. If any folder names contain white space, wrap them in single quotes.
Once Intel-optimized TensorFlow is installed, running the snippet below should print "True" if oneDNN optimizations are present.
import tensorflow as tf
major_version = int(tf.__version__.split(".")[0])
if major_version >= 2:
    from tensorflow.python import _pywrap_util_port
    print("MKL enabled:", _pywrap_util_port.IsMklEnabled())
else:
    print("MKL enabled:", tf.pywrap_tensorflow.IsMklEnabled())
To disable the oneDNN optimizations at runtime, set the following environment variable:
export TF_DISABLE_MKL=1
However, note that this flag only disables oneDNN calls, not MKL-ML calls.
Although oneDNN is responsible for most optimizations, certain ops, including matmul and transpose, are optimized by the MKL-ML library. Disabling MKL-ML calls is not supported by the TF_DISABLE_MKL flag at present, and Intel is working with Google to add this functionality.
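Because TF_DISABLE_MKL is read at runtime, you can also set it for a single invocation; a sketch (my_script.py is a placeholder for your own workload):
# disable oneDNN calls for this run only, without changing the shell environment
TF_DISABLE_MKL=1 python my_script.py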
Intel-optimized TensorFlow sets the KMP_BLOCKTIME and OMP_PROC_BIND environment variables when it is imported. To remove these settings after import:
import tensorflow # this sets KMP_BLOCKTIME and OMP_PROC_BIND
import os
# delete the existing values
del os.environ['OMP_PROC_BIND']
del os.environ['KMP_BLOCKTIME']
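If you are not sure the variables were actually set, a safer variant of the deletion above uses os.environ.pop:
# pop() returns None instead of raising KeyError when the variable is absent
os.environ.pop('OMP_PROC_BIND', None)
os.environ.pop('KMP_BLOCKTIME', None)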
If you have further questions or need support with your workload optimization, please submit your queries at the TensorFlow GitHub issues with the label "comp:mkl" or on the Intel AI Frameworks forum.
Version | Wheels (Python 2.7, 3.5, 3.6)
---|---
1.6 | https://anaconda.org/intel/tensorflow/1.6.0/download/tensorflow-1.6.0-cp27-cp27mu-linux_x86_64.whl (or) */tensorflow-1.6.0-cp35-cp35mu-linux_x86_64.whl (or) */tensorflow-1.6.0-cp36-cp36mu-linux_x86_64.whl
1.9 | https://storage.googleapis.com/intel-optimized-tensorflow/tensorflow-1.9.0-cp27-cp27mu-linux_x86_64.whl
1.10 |
1.11 |
Performance varies by use, configuration and other factors. Learn more at www.Intel.com/PerformanceIndex.