docker pull intel/intel-optimized-tensorflow:mpi-horovod
Tags & Pull Commands for Other Versions
The pull command above retrieves the official 2.5 release of Intel® Optimization for TensorFlow*. The images in the "Other Versions" table are provided for customer preview only.
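As a sketch, pulling the release image and starting an interactive session inside it might look like the following. The tag is the one shown above; the `/workspace` mount point is an illustrative choice, not something mandated by the image.

```shell
IMAGE="intel/intel-optimized-tensorflow:mpi-horovod"

# Only attempt the Docker commands if Docker is installed on this host.
if command -v docker >/dev/null 2>&1; then
    # Fetch the image from Docker Hub.
    docker pull "${IMAGE}"
    # Open an interactive shell with the current directory mounted at
    # /workspace (an illustrative mount point, adjust as needed).
    docker run -it --rm -v "${PWD}":/workspace -w /workspace "${IMAGE}" /bin/bash
fi
```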
3rd generation Intel® Xeon® Scalable processors, code-named Ice Lake, deliver industry-leading, workload-optimized platforms with built-in AI acceleration, providing a seamless performance foundation to help speed data's transformative impact. Enhanced Intel® Deep Learning Boost, with the industry's first x86 support for the Brain Floating Point 16-bit (bfloat16) numeric format and vector neural network instructions (VNNI), brings enhanced artificial intelligence inference and training performance, with up to 1.93X more AI training performance and 1.87X more AI inference performance for image classification vs. the prior generation.
Intel® Optimization for TensorFlow* with Open MPI and Horovod* is a binary distribution of TensorFlow* built with Intel® oneAPI Deep Neural Network Library (oneDNN) primitives, a popular performance library for deep learning applications. TensorFlow* is a widely used machine learning framework whose deep learning workloads demand efficient use of computational resources. To take full advantage of Intel® architecture and extract maximum performance, the TensorFlow* framework has been optimized using oneDNN primitives.
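Inside the container, a multi-process data-parallel job is typically launched with Horovod's `horovodrun` wrapper, which drives Open MPI under the hood. A minimal sketch, assuming a hypothetical training script `train.py` that calls Horovod's `hvd.init()`:

```shell
# Number of worker processes; one per socket or per node is a common choice.
NP=4

# Only launch if horovodrun is on the PATH (it is inside this image).
if command -v horovodrun >/dev/null 2>&1; then
    # Start NP copies of the (hypothetical) training script; Horovod
    # averages gradients across the workers over Open MPI.
    horovodrun -np "${NP}" python train.py
fi
```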
Documentation and Sources
By accessing, downloading or using this software and any required dependent software (the “Software Package”), you agree to the terms and conditions of the software license agreements for the Software Package, which may also include notices, disclaimers, or license terms for third party software included with the Software Package. Please refer to Intel Simplified Software License (Version February 2020) for additional details.
Related Containers and Solutions
Intel® Optimization for TensorFlow* with Open MPI* and Horovod*
ResNet50* V1.5 FP32 Training TensorFlow* Container
ResNet50* V1.5 Int8 Inference TensorFlow* Container
ResNet50* V1.5 FP32 Inference TensorFlow* Container
ResNet34* SSD Int8 Inference TensorFlow* Container
ResNet34* SSD FP32 Inference TensorFlow* Container
MobileNet* SSD Int8 Inference TensorFlow* Container
MobileNet* SSD FP32 Inference TensorFlow* Container
MobileNet V1 Int8 Inference TensorFlow* Container
MobileNet V1 FP32 Inference TensorFlow* Container
BERT Large FP32 Inference TensorFlow* Container
Product and Performance Information
Performance varies by use, configuration and other factors. Learn more at www.Intel.com/PerformanceIndex.