TensorFlow* on 3rd Gen Intel® Xeon® Scalable Processors

Published: 04/08/2021

Pull Command

docker pull intel/intel-optimized-tensorflow:tf-r2.5-icx-b631821f-mpi-horovod
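
Once pulled, the image can be started with an interactive shell (a minimal sketch; the -it and --rm flags and the bash invocation are illustrative, not requirements of the image):

docker run -it --rm intel/intel-optimized-tensorflow:tf-r2.5-icx-b631821f-mpi-horovod /bin/bash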

Tags & Pull Commands for Other Versions

OS    | Target | Version | Size     | Updated | Pull Command
Linux | x86_64 | 2.5     | 742.3MB  | 2/2021  | docker pull intel/image-recognition:tf-r2.5-icx-b631821f-resnet50v1-5-fp32-training
Linux | x86_64 | 2.5     | 533.83MB | 2/2021  | docker pull intel/image-recognition:tf-r2.5-icx-b631821f-resnet50v1-5-int8-inference
Linux | x86_64 | 2.5     | 670.88MB | 2/2021  | docker pull intel/image-recognition:tf-r2.5-icx-b631821f-resnet50v1-5-fp32-inference
Linux | x86_64 | 2.5     | 1.9GB    | 2/2021  | docker pull intel/object-detection:tf-r2.5-icx-b631821f-ssd-resnet34-int8-inference
Linux | x86_64 | 2.5     | 2.12GB   | 2/2021  | docker pull intel/object-detection:tf-r2.5-icx-b631821f-ssd-resnet34-fp32-inference
Linux | x86_64 | 2.5     | 1.33GB   | 2/2021  | docker pull intel/object-detection:tf-r2.5-icx-b631821f-ssd-mobilenet-int8-inference
Linux | x86_64 | 2.5     | 1.37GB   | 2/2021  | docker pull intel/object-detection:tf-r2.5-icx-b631821f-ssd-mobilenet-fp32-inference
Linux | x86_64 | 2.5     | 499.25MB | 2/2021  | docker pull intel/image-recognition:tf-r2.5-icx-b631821f-mobilenet-v1-int8-inference
Linux | x86_64 | 2.5     | 519.5MB  | 2/2021  | docker pull intel/image-recognition:tf-r2.5-icx-b631821f-mobilenet-v1-fp32-inference
Linux | x86_64 | 2.5     | 8.01GB   | 2/2021  | docker pull intel/language-modeling:tf-r2.5-icx-b631821f-bert-large-int8-inference
Linux | x86_64 | 2.5     | 9.59GB   | 2/2021  | docker pull intel/language-modeling:tf-r2.5-icx-b631821f-bert-large-fp32-inference
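
The model containers above are launched the same way; for example, to open a shell in the ResNet50v1.5 FP32 inference image (an illustrative invocation; each container's actual entrypoint, benchmark scripts, and expected environment variables are documented in its README, linked below):

docker run -it --rm intel/image-recognition:tf-r2.5-icx-b631821f-resnet50v1-5-fp32-inference /bin/bash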

Description

These images are for customer preview only. The optimizations in these containers will be officially available in the 2.5 release of Intel® Optimizations for TensorFlow*.

3rd Gen Intel® Xeon® Scalable processors, code-named Ice Lake, deliver industry-leading, workload-optimized platforms with built-in AI acceleration, providing a seamless performance foundation to help speed data’s transformative impact. Enhanced Intel® Deep Learning Boost, with the industry’s first x86 support of the Brain Floating Point 16-bit (bfloat16) numeric format and Vector Neural Network Instructions (VNNI), brings enhanced artificial intelligence inference and training performance: up to 1.93X more AI training performance and up to 1.87X more AI inference performance for image classification vs. the prior generation.1
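
Inside the container, bfloat16 support can be exercised through TensorFlow*'s standard Keras mixed-precision API. A minimal sketch (the model and hyperparameters are placeholders, not part of this distribution):

import tensorflow as tf
from tensorflow.keras import layers, mixed_precision

# Compute in bfloat16 while keeping variables in float32; on 3rd Gen
# Intel Xeon Scalable processors this math can map to AVX-512 BF16
# (bfloat16) instructions.
mixed_precision.set_global_policy('mixed_bfloat16')

model = tf.keras.Sequential([
    layers.Dense(128, activation='relu', input_shape=(784,)),
    # Keep the final layer in float32 for numerically stable outputs.
    layers.Dense(10, activation='softmax', dtype='float32'),
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')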

Intel® Optimizations for TensorFlow* with Open MPI* and Horovod* is a binary distribution of TensorFlow* built with Intel® oneAPI Deep Neural Network Library (Intel® oneDNN) primitives, a popular performance library for deep learning applications. TensorFlow* is a widely used machine learning framework that demands efficient utilization of computational resources. To take full advantage of Intel® architecture and extract maximum performance, the TensorFlow* framework has been optimized using Intel® oneDNN primitives.
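
The Open MPI*/Horovod* packaging targets multi-process, data-parallel training. A minimal Horovod-with-Keras sketch (the model, optimizer, and learning rate are placeholders):

import tensorflow as tf
import horovod.tensorflow.keras as hvd

# One Horovod process per worker; hvd.init() wires the processes
# together over MPI.
hvd.init()

model = tf.keras.Sequential([tf.keras.layers.Dense(10, input_shape=(784,))])

# Scale the learning rate by the number of workers, then wrap the
# optimizer so gradients are allreduced across all processes.
opt = tf.keras.optimizers.Adam(0.001 * hvd.size())
opt = hvd.DistributedOptimizer(opt)
model.compile(optimizer=opt, loss='sparse_categorical_crossentropy')

# Broadcast rank 0's initial weights so every worker starts in sync.
callbacks = [hvd.callbacks.BroadcastGlobalVariablesCallback(0)]

Such a script is then launched with, e.g., horovodrun -np 4 python train.py, or an equivalent mpirun invocation.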

The image includes the Python 3 interpreter, and the following wheels and libraries are pre-installed:
intel-tensorflow
Open MPI*
Horovod*
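
To confirm that oneDNN primitives are actually being used at runtime, one option is oneDNN's verbose tracing (a sketch; ONEDNN_VERBOSE is the oneDNN convention in recent versions, while older builds respond to DNNL_VERBOSE instead):

docker run --rm -e ONEDNN_VERBOSE=1 intel/intel-optimized-tensorflow:tf-r2.5-icx-b631821f-mpi-horovod python -c "import tensorflow as tf; a = tf.random.uniform((256, 256)); print(tf.matmul(a, a).shape)"

If the optimized build is active, oneDNN logs each executed primitive (lines beginning with onednn_verbose or dnnl_verbose, depending on the version) alongside the TensorFlow* output.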


Documentation and Sources

Get Started
DockerHub
GitHub Repo
README

Code Sources
Sources

Legal Notice

By accessing, downloading or using this software and any required dependent software (the “Software Package”), you agree to the terms and conditions of the software license agreements for the Software Package, which may also include notices, disclaimers, or license terms for third party software included with the Software Package. Please refer to Intel Simplified Software License (Version February 2020) for additional details.


Related Containers and Solutions

Intel® Optimizations for TensorFlow* with Open MPI* and Horovod*
ResNet50v1.5 FP32 Training TensorFlow* Container
ResNet50v1.5 Int8 Inference TensorFlow* Container
ResNet50v1.5 FP32 Inference TensorFlow* Container
SSD-ResNet34 Int8 Inference TensorFlow* Container
SSD-ResNet34 FP32 Inference TensorFlow* Container
SSD-MobileNet Int8 Inference TensorFlow* Container
SSD-MobileNet FP32 Inference TensorFlow* Container
MobileNetV1 Int8 Inference TensorFlow* Container
MobileNetV1 FP32 Inference TensorFlow* Container
BERT Large Int8 Inference TensorFlow* Container
BERT Large FP32 Inference TensorFlow* Container


Product and Performance Information

1. Performance varies by use, configuration and other factors. Learn more at www.Intel.com/PerformanceIndex.