docker pull intel/intel-optimized-tensorflow:tf-r2.5-icx-b631821f-mpi-horovod
Tags & Pull Commands for Other Versions
These images are for customer preview only. Optimizations in these containers will be available officially in the 2.5 release of Intel® Optimizations for TensorFlow*.
3rd Gen Intel® Xeon® Scalable processors, code-named Ice Lake, deliver industry-leading, workload-optimized platforms with built-in AI acceleration, providing a seamless performance foundation to help speed data’s transformative impact. Enhanced Intel® Deep Learning Boost, with the industry’s first x86 support of the 16-bit Brain Floating Point (bfloat16) numeric format and Vector Neural Network Instructions (VNNI), improves artificial intelligence inference and training performance, delivering up to 1.93X more AI training performance and 1.87X more AI inference performance for image classification vs. the prior generation.
Intel® Optimizations for TensorFlow* with Open MPI* and Horovod* is a binary distribution of TensorFlow* built with primitives from the Intel® oneAPI Deep Neural Network Library (Intel® oneDNN), a popular performance library for deep learning applications. TensorFlow* is a widely used machine learning framework in the deep learning arena, and its workloads demand efficient utilization of computational resources. To take full advantage of Intel® architecture and extract maximum performance, the TensorFlow* framework has been optimized using Intel® oneDNN primitives.
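As a quick sketch, the image can be pulled and smoke-tested with Docker. The tag below is the one listed at the top of this page; the in-container python check assumes TensorFlow* and Horovod* are on the image's default Python path, which may differ in other releases:

```shell
# Pull the preview image (tag taken from this page).
docker pull intel/intel-optimized-tensorflow:tf-r2.5-icx-b631821f-mpi-horovod

# Run a throwaway container and confirm that TensorFlow* and Horovod*
# import correctly. hvd.init()/hvd.size() verify that the Horovod runtime
# starts; size() reports 1 when launched without mpirun.
docker run -it --rm \
  intel/intel-optimized-tensorflow:tf-r2.5-icx-b631821f-mpi-horovod \
  python -c "import tensorflow as tf, horovod.tensorflow as hvd; hvd.init(); print(tf.__version__, hvd.size())"
```

For multi-process distributed training, the same container would typically be launched under mpirun (Open MPI* is included in this distribution), with one process per socket or device.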
Documentation and Sources
By accessing, downloading or using this software and any required dependent software (the “Software Package”), you agree to the terms and conditions of the software license agreements for the Software Package, which may also include notices, disclaimers, or license terms for third party software included with the Software Package. Please refer to Intel Simplified Software License (Version February 2020) for additional details.
Related Containers and Solutions
Intel® Optimizations for TensorFlow* with Open MPI* and Horovod*
ResNet50v1.5 FP32 Training TensorFlow* Container
ResNet50v1.5 Int8 Inference TensorFlow* Container
ResNet50v1.5 FP32 Inference TensorFlow* Container
SSD-ResNet34 Int8 Inference TensorFlow* Container
SSD-ResNet34 FP32 Inference TensorFlow* Container
SSD-MobileNet Int8 Inference TensorFlow* Container
SSD-MobileNet FP32 Inference TensorFlow* Container
MobileNetV1 Int8 Inference TensorFlow* Container
MobileNetV1 FP32 Inference TensorFlow* Container
BERT Large Int8 Inference TensorFlow* Container
BERT Large FP32 Inference TensorFlow* Container
Product and Performance Information
Performance varies by use, configuration and other factors. Learn more at www.Intel.com/PerformanceIndex.