SSD-ResNet34 Int8 Inference TensorFlow* Model

Download Command

wget https://storage.googleapis.com/intel-optimized-tensorflow/models/v2_3_0/ssd-resnet34-int8-inference.tar.gz

Description

This document has instructions for running SSD-ResNet34 Int8 inference using Intel® Optimizations for TensorFlow*.

SSD-ResNet34 uses the COCO dataset for accuracy testing.

Download and preprocess the COCO validation images using the instructions here. After the script that converts the raw images to TF records completes, rename the TF records file:

mv ${OUTPUT_DIR}/coco_val.record ${OUTPUT_DIR}/validation-00000-of-00001

When running the accuracy test, set DATASET_DIR to the folder that contains the validation-00000-of-00001 file. Note that the inference performance test uses a synthetic dataset.
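As a quick sanity check before running the accuracy test, you can confirm the renamed file is where the quickstart script expects it. The paths below are placeholders for your environment, not paths mandated by the model package:

```shell
# Placeholder paths -- substitute the directories used in your environment.
export OUTPUT_DIR=/tmp/coco_output
export DATASET_DIR=${OUTPUT_DIR}

# The accuracy test expects the renamed TF records file inside DATASET_DIR.
if [ -f "${DATASET_DIR}/validation-00000-of-00001" ]; then
  echo "Dataset ready: ${DATASET_DIR}/validation-00000-of-00001"
else
  echo "Missing validation-00000-of-00001 in ${DATASET_DIR}" >&2
fi
```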

Quick Start Scripts

Script name          Description
int8_inference.sh    Runs inference using synthetic data and outputs performance metrics.
int8_accuracy.sh     Tests accuracy using the COCO dataset in the TF records format.

Bare Metal

To run on bare metal, the following prerequisites must be installed in your environment:

  • Python* 3
  • intel-tensorflow
  • numactl
  • git
  • libgl1-mesa-glx
  • libglib2.0-0
  • numpy==1.17.4
  • Cython
  • contextlib2
  • pillow>=7.1.0
  • lxml
  • jupyter
  • matplotlib
  • pycocotools
  • horovod==0.20.0
  • tensorflow-addons==0.8.1
  • opencv-python
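The list above mixes system packages and Python packages. A possible installation sketch on a Debian/Ubuntu system is shown below; the apt package names are assumptions based on common distro naming, and you should use your distribution's equivalents where they differ:

```shell
# System-level prerequisites (Debian/Ubuntu package names assumed)
sudo apt-get update && sudo apt-get install -y numactl git libgl1-mesa-glx libglib2.0-0

# Python prerequisites, using the pinned versions from the list above
python3 -m pip install intel-tensorflow numpy==1.17.4 Cython contextlib2 \
  'pillow>=7.1.0' lxml jupyter matplotlib pycocotools horovod==0.20.0 \
  tensorflow-addons==0.8.1 opencv-python
```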

In addition to the libraries above, SSD-ResNet34 uses the TensorFlow* models and TensorFlow* benchmarks repositories. Clone each repository at the commit ID specified below, and set TF_MODELS_DIR to point to the folder where the models repository was cloned:

# Clone the TensorFlow models repo
git clone https://github.com/tensorflow/models.git tf_models
cd tf_models
git checkout f505cecde2d8ebf6fe15f40fb8bc350b2b1ed5dc
export TF_MODELS_DIR=$(pwd)
cd ..

# Clone the TensorFlow benchmarks repo
git clone --single-branch https://github.com/tensorflow/benchmarks.git ssd-resnet-benchmarks
cd ssd-resnet-benchmarks
git checkout 509b9d288937216ca7069f31cfb22aaa7db6a4a7
cd ..

After installing the prerequisites and cloning the required repositories, download and untar the model package. The model package includes the SSD-ResNet34 Int8 pretrained model and the scripts needed to run inference.

wget https://storage.googleapis.com/intel-optimized-tensorflow/models/v2_3_0/ssd-resnet34-int8-inference.tar.gz
tar -xzf ssd-resnet34-int8-inference.tar.gz
cd ssd-resnet34-int8-inference

Set an environment variable OUTPUT_DIR to the path where log files will be written. If you are running the accuracy test, also set DATASET_DIR to point to the folder where the COCO dataset validation-00000-of-00001 file is located. Once the environment variables are set up, run a quickstart script.

To run inference using synthetic data:

export OUTPUT_DIR=<directory where log files will be written>

quickstart/int8_inference.sh

To test accuracy using the COCO dataset:

export DATASET_DIR=<path to the coco directory>
export OUTPUT_DIR=<directory where log files will be written>

quickstart/int8_accuracy.sh

License Agreement

LEGAL NOTICE: By accessing, downloading or using this software and any required dependent software (the “Software Package”), you agree to the terms and conditions of the software license agreements for the Software Package, which may also include notices, disclaimers, or license terms for third party software included with the Software Package. Please refer to the license file for additional details.


Related Containers and Solutions

SSD-ResNet34 Int8 Inference TensorFlow* Container
