TensorFlow* Performance Comparison Jupyter* Notebooks

Pull Command

docker pull intel/intel-optimized-tensorflow:tf-2.3.0-imz-2.1.1-jupyter-performance

Description

This is a container with Jupyter* notebooks and pre-installed environments for analyzing the performance benefit from using Intel® Optimizations for TensorFlow* with Intel® oneAPI Deep Neural Network Library (Intel® oneDNN). There are two different analysis types:

  • For the "Stock vs. Intel® Optimizations for TensorFlow*" analysis type, users can measure the performance difference between stock TensorFlow* and Intel® Optimizations for TensorFlow*
  • For the "FP32 vs. BFloat16 vs. Int8" analysis type, users can measure the performance differences among the FP32, BFloat16, and Int8 data types on Intel® Optimizations for TensorFlow*
The notebooks for each analysis type are:

  • Stock vs. Intel® Optimizations for TensorFlow*
    1. benchmark_perf_comparison: Compare performance between stock and Intel® Optimizations for TensorFlow* among different models.
    2. benchmark_perf_timeline_analysis: Analyze the performance benefit from Intel® oneDNN among different layers by using the TensorFlow* Timeline.
  • FP32 vs. BFloat16 vs. Int8
    1. benchmark_data_types_perf_comparison: Compare Intel® Model Zoo benchmark performance among different data types on Intel® Optimizations for TensorFlow*.
    2. benchmark_data_types_perf_timeline_analysis: Analyze the BFloat16/Int8 data type performance benefit from Intel® oneDNN among different layers by using the TensorFlow* Timeline.

How to Run the Notebooks

  1. Launch the container with:

    docker run \
        -d \
        -p 8888:8888 \
        --env LISTEN_IP=0.0.0.0 \
        --privileged \
        intel/intel-optimized-tensorflow:tf-2.3.0-imz-2.1.1-jupyter-performance

    Most of the notebook functionality works without a real dataset (by using synthetic data), but if you want to mount a dataset, use an option like:

    -v <host path to dataset>:<container path to dataset>

    If your machine is behind a proxy, you will need to pass proxy arguments to the run command. For example:

    --env http_proxy="http://proxy.url:proxy_port" --env https_proxy="https://proxy.url:proxy_port"
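
    Putting these options together, a complete launch command might look like the following sketch, where the host path /home/user/datasets and the container path /tmp/datasets are placeholders for your own locations:

    docker run \
        -d \
        -p 8888:8888 \
        --env LISTEN_IP=0.0.0.0 \
        --env http_proxy="http://proxy.url:proxy_port" \
        --env https_proxy="https://proxy.url:proxy_port" \
        -v /home/user/datasets:/tmp/datasets \
        --privileged \
        intel/intel-optimized-tensorflow:tf-2.3.0-imz-2.1.1-jupyter-performance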


  2. Display the container logs with docker logs <container_id>, copy the Jupyter* service URL, and paste it into a browser window.
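
    For example (the container ID and token below are illustrative; a freshly started Jupyter* server prints a tokenized URL of this general form):

    docker ps                    # note the container ID
    docker logs <container_id>   # look for a URL like http://127.0.0.1:8888/?token=...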

  3. Click the first notebook file (benchmark_perf_comparison.ipynb or benchmark_data_types_perf_comparison.ipynb) for your chosen analysis type.

    Note: For the "Stock vs. Intel® Optimizations for TensorFlow*" analysis type, change your Jupyter* notebook kernel to either "stock-tensorflow" or "intel-tensorflow", depending on which TensorFlow* installation you want to benchmark.

    Note: For the "FP32 vs. BFloat16 vs. Int8" analysis type, select "intel-tensorflow" as your Jupyter* notebook kernel.
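
    To confirm which kernels the container provides, you can list them from the host. This is a sketch that assumes the jupyter CLI is on the container's PATH:

    docker exec <container_id> jupyter kernelspec list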

  4. Run the cells of the notebook one by one, in order.

    NOTE: For the "Stock vs. Intel® Optimizations for TensorFlow*" analysis type, to compare stock and Intel® Optimizations for TensorFlow* results, you must run all cells before the comparison section twice: once with the stock-tensorflow kernel and once with the intel-tensorflow kernel.

  5. Click the second notebook file (benchmark_perf_timeline_analysis.ipynb or benchmark_data_types_perf_timeline_analysis.ipynb) for your chosen analysis type.

  6. Run the cells of the notebook one by one, in order, to get the analysis result.

    NOTE: There is no kernel requirement for the second notebook; either kernel can be used for the detailed performance analysis.
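
    The timeline notebooks write traces in Chrome trace format (JSON). To inspect a trace outside the container, you can copy it to the host and open it in a trace viewer such as chrome://tracing. This is a sketch; the container path depends on where the notebook saves its output:

    docker cp <container_id>:<container path to timeline JSON> ./timeline.json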

Documentation and Sources

Get Started
Docker Repo
Main GitHub
Readme
Release Notes
Get Started Guide

Code Sources
Dockerfile
Report Issue


License Agreement

LEGAL NOTICE: By accessing, downloading or using this software and any required dependent software (the “Software Package”), you agree to the terms and conditions of the software license agreements for the Software Package, which may also include notices, disclaimers, or license terms for third party software included with the Software Package. Please refer to the license file for additional details.


Related Containers and Solutions

BERT Large FP32 Inference TensorFlow* Container
ResNet50 FP32 Inference TensorFlow* Container
ResNet50 Int8 Inference TensorFlow* Container
ResNet50v1.5 FP32 Inference TensorFlow* Container
ResNet50v1.5 Int8 Inference TensorFlow* Container
ResNet50v1.5 BFloat16 Inference TensorFlow* Container
ResNet50v1.5 FP32 Training TensorFlow* Container

Product and Performance Information

1. Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors. These optimizations include SSE2, SSE3, and SSSE3 instruction sets and other optimizations. Intel does not guarantee the availability, functionality, or effectiveness of any optimization on microprocessors not manufactured by Intel. Microprocessor-dependent optimizations in this product are intended for use with Intel microprocessors. Certain optimizations not specific to Intel microarchitecture are reserved for Intel microprocessors. Please refer to the applicable product User and Reference Guides for more information regarding the specific instruction sets covered by this notice.

Notice revision #20110804