Preview TensorFlow on 3rd-Generation Intel® Xeon® Scalable Processors

Published: 04/08/2021

Pull Command

docker pull intel/intel-optimized-tensorflow:mpi-horovod
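
To confirm the image works on your machine, you can start a throwaway container and print the TensorFlow* version. This is a minimal sketch; the exact version string depends on the image build:

# Run the image, import TensorFlow, print its version, then remove the container.
docker run --rm intel/intel-optimized-tensorflow:mpi-horovod \
  python -c "import tensorflow as tf; print(tf.__version__)"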

Tags & Pull Commands for Other Versions

OS | Target | Version | Size | Updated | Pull Command
--- | --- | --- | --- | --- | ---
Linux* | x86_64 | 2.5 | 742.3 MB | 2/2021 | docker pull intel/image-recognition:tf-r2.5-icx-b631821f-resnet50v1-5-fp32-training
Linux | x86_64 | 2.5 | 533.83 MB | 2/2021 | docker pull intel/image-recognition:tf-r2.5-icx-b631821f-resnet50v1-5-int8-inference
Linux | x86_64 | 2.5 | 670.88 MB | 2/2021 | docker pull intel/image-recognition:tf-r2.5-icx-b631821f-resnet50v1-5-fp32-inference
Linux | x86_64 | 2.5 | 1.9 GB | 2/2021 | docker pull intel/object-detection:tf-r2.5-icx-b631821f-ssd-resnet34-int8-inference
Linux | x86_64 | 2.5 | 2.12 GB | 2/2021 | docker pull intel/object-detection:tf-r2.5-icx-b631821f-ssd-resnet34-fp32-inference
Linux | x86_64 | 2.5 | 1.33 GB | 2/2021 | docker pull intel/object-detection:tf-r2.5-icx-b631821f-ssd-mobilenet-int8-inference
Linux | x86_64 | 2.5 | 1.37 GB | 2/2021 | docker pull intel/object-detection:tf-r2.5-icx-b631821f-ssd-mobilenet-fp32-inference
Linux | x86_64 | 2.5 | 499.25 MB | 2/2021 | docker pull intel/image-recognition:tf-r2.5-icx-b631821f-mobilenet-v1-int8-inference
Linux | x86_64 | 2.5 | 519.5 MB | 2/2021 | docker pull intel/image-recognition:tf-r2.5-icx-b631821f-mobilenet-v1-fp32-inference
Linux | x86_64 | 2.5 | 8.01 GB | 2/2021 | docker pull intel/language-modeling:tf-r2.5-icx-b631821f-bert-large-int8-inference
Linux | x86_64 | 2.5 | 9.59 GB | 2/2021 | docker pull intel/language-modeling:tf-r2.5-icx-b631821f-bert-large-fp32-inference

Description

The main pull command retrieves the official 2.5 release of Intel® Optimization for TensorFlow* with Open MPI* and Horovod*. The images in the "Other Versions" table are for customer preview only.

3rd generation Intel® Xeon® Scalable processors, code-named Ice Lake, deliver industry-leading, workload-optimized platforms with built-in AI acceleration, providing a seamless performance foundation to help speed data’s transformative impact. Enhanced Intel® Deep Learning Boost, with the industry’s first x86 support of the Brain Floating Point 16-bit (bfloat16) numeric format and Vector Neural Network Instructions (VNNI), brings improved artificial intelligence inference and training performance: up to 1.93X more AI training performance and 1.87X more AI inference performance for image classification vs. the prior generation.¹
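
To exercise the bfloat16 support mentioned above from TensorFlow*, the standard Keras mixed-precision API is one entry point. The following is a minimal sketch run inside the container pulled above; it only sets and echoes the global policy, and actual bfloat16 kernels are used once a model runs on hardware with this support:

# Set the Keras mixed-precision policy to bfloat16 and print it back.
docker run --rm intel/intel-optimized-tensorflow:mpi-horovod \
  python -c "import tensorflow as tf; \
tf.keras.mixed_precision.set_global_policy('mixed_bfloat16'); \
print(tf.keras.mixed_precision.global_policy())"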

Intel® Optimization for TensorFlow* with Open MPI* and Horovod* is a binary distribution of TensorFlow* built with Intel® oneAPI Deep Neural Network Library (oneDNN) primitives, a popular performance library for deep learning applications. TensorFlow* is a widely used machine learning framework that demands efficient utilization of computational resources. To take full advantage of Intel® architecture and extract maximum performance, the framework has been optimized using oneDNN primitives.
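
One way to see the oneDNN primitives at work is oneDNN's verbose logging. The following is a minimal sketch, assuming the build honors the standard ONEDNN_VERBOSE environment variable (older oneDNN releases use DNNL_VERBOSE):

# Each oneDNN primitive that executes (e.g. the matmul) is logged to stdout.
docker run --rm -e ONEDNN_VERBOSE=1 intel/intel-optimized-tensorflow:mpi-horovod \
  python -c "import tensorflow as tf; \
x = tf.random.uniform([256, 256]); \
print(tf.linalg.matmul(x, x).shape)"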

The image includes the Python* 3 interpreter, and the following wheels and libraries are pre-installed:
Intel® Optimization for TensorFlow*
Open MPI*
Horovod*
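
Because the image bundles Open MPI* and Horovod*, a quick smoke test is to launch two Horovod processes on the local host. This is a minimal sketch, assuming horovodrun is on the container's PATH:

# Start two local Horovod workers; each prints its rank out of the world size.
docker run --rm intel/intel-optimized-tensorflow:mpi-horovod \
  horovodrun -np 2 -H localhost:2 \
  python -c "import horovod.tensorflow as hvd; hvd.init(); \
print('rank', hvd.rank(), 'of', hvd.size())"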


Documentation and Sources

Get Started
Docker Hub*
GitHub* Repository
README

Code Sources
Sources



Legal Notice

By accessing, downloading or using this software and any required dependent software (the “Software Package”), you agree to the terms and conditions of the software license agreements for the Software Package, which may also include notices, disclaimers, or license terms for third party software included with the Software Package. Please refer to Intel Simplified Software License (Version February 2020) for additional details.


Related Containers and Solutions

Intel® Optimization for TensorFlow* with Open MPI* and Horovod*
ResNet50* V1.5 FP32 Training TensorFlow* Container
ResNet50* V1.5 Int8 Inference TensorFlow* Container
ResNet50* V1.5 FP32 Inference TensorFlow* Container
ResNet34* SSD Int8 Inference TensorFlow* Container
ResNet34* SSD FP32 Inference TensorFlow* Container
MobileNet* SSD Int8 Inference TensorFlow* Container
MobileNet* SSD FP32 Inference TensorFlow* Container
MobileNet V1 Int8 Inference TensorFlow* Container
MobileNet V1 FP32 Inference TensorFlow* Container
BERT Large FP32 Inference TensorFlow* Container


Product and Performance Information

¹ Performance varies by use, configuration and other factors. Learn more at www.Intel.com/PerformanceIndex.