Use the Model Downloader and Model Optimizer for the Intel® Distribution of OpenVINO™ Toolkit on Raspberry Pi*


Overview


"The OpenVINO™ toolkit support on Raspberry Pi only includes the inference engine module of the Intel® distribution of OpenVINO™ toolkit."

The Model Downloader and Model Optimizer are not officially supported on this platform, but they do work.

The Inference Engine requires optimized models. Models processed through the Model Optimizer are available in the full desktop version of the toolkit.

These models consist of a .bin and .xml file.

There are multiple ways you can obtain the optimized models. 

Requires:

  • OpenVINO™ toolkit installed on a supported system with a GNU*/Linux* distro
  • The latest OpenVINO™ toolkit inference engine installed on Raspbian*

Download Pre-Trained Models

Download pre-trained models from the Intel® Open Source Technology Center. 

Select the link that corresponds to your release.

Click on open_model_zoo to browse the pre-trained models.

Use Pre-Trained Models From A Full Install

Use the pre-trained models from a full Intel® Distribution of OpenVINO™ toolkit install on one of the supported platforms.

For more information about the location of the pre-trained models in a full install, visit the Pre-trained Models webpage.

Transfer Downloader and Optimizer + Run on Pi

Download and optimize the models on Raspbian* using the Model Downloader and Model Optimizer:

Transfer the Model Optimizer and Model Downloader to the Pi from a full install.

Run the Model Downloader and Model Optimizer on the Pi (see Get Started below).

Run Optimizer on Desktop + Transfer to Pi

Run the Model Optimizer on the desktop (a full install), then transfer the resulting files to the Raspberry Pi.

Model Downloader and Model Optimizer

Model Downloader

What is the Model Downloader?


Where It Is

For the latest full install, the Downloader is located here:

/opt/intel/openvino/deployment_tools

The Model Downloader includes a collection of pre-trained models and displays the download location of each model it fetches, so you can easily find the downloaded files.
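If you are unsure of the exact path in your release, you can search for the downloader script from the deployment_tools directory (a quick sketch; the directory layout varies between releases):

cd /opt/intel/openvino/deployment_tools
find . -name downloader.py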

What It Is

The Model Downloader provides a command line interface for developers to download various publicly available open source pre-trained Deep Neural Network (DNN) models, in a variety of problem domains.

For more information about the Model Downloader, visit the Model Downloader Essentials article.

Note: Although the "Model Downloader" is written in Python*, you can use the models that you download with any of the programming languages supported by the Intel® Distribution of OpenVINO™ toolkit.                                           

Model Optimizer

What is the Model Optimizer?

Where It Is

For the latest full install, the Optimizer is located here:

/opt/intel/openvino/deployment_tools

What It Is

Model Optimizer is a cross-platform command-line tool that facilitates the transition between the training and deployment environment, performs static model analysis, and adjusts deep learning models for optimal execution on end-point target devices.

Deep Learning Framework

The Model Optimizer process assumes you have a network model trained using a supported deep learning framework. The visual below illustrates the typical workflow for deploying a trained deep learning model.
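As a concrete sketch of that workflow (the file names here are placeholders, not files shipped with the toolkit):

# 1. Train a model in a supported framework (Caffe*, TensorFlow*, MXNet*, Kaldi*, ONNX*),
#    producing, for example, model.caffemodel and deploy.prototxt.
# 2. Convert it with the Model Optimizer into an Intermediate Representation (IR):
python3 mo.py --input_model model.caffemodel --data_type FP16
#    This produces model.xml (topology) and model.bin (weights).
# 3. Load the IR with the Inference Engine on the target device (for example, -d MYRIAD).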

How It Works

Model Optimizer workflow visual

For more detailed information about the Model Optimizer, visit the Model Optimizer Guide.

Get Started

Equipment Needed to Get Started

Hardware:

  • Raspberry Pi 3B+ or 3A+
  • Intel® Neural Compute Stick 2

Software:

  • Raspbian 9 (Stretch) OS
  • OpenVINO™ inference engine software from the Intel® Open Source Technology Center - https://download.01.org/opencv/2019/openvinotoolkit/
  • Completed full or host install of OpenVINO™ toolkit

Prerequisites

To run the converted/optimized models downloaded in this article with the benchmark_app, build the benchmark_app first.

ARMv7

Create a build directory and change into it:

mkdir -p /opt/intel/openvino/deployment_tools/inference_engine/samples/build
cd /opt/intel/openvino/deployment_tools/inference_engine/samples/build

Build the benchmark app on the Pi:

cmake -DCMAKE_BUILD_TYPE=Release -DCMAKE_CXX_FLAGS="-march=armv7-a" ..

Build the sample:

make -j2 benchmark_app

benchmark_app is placed in the samples build directory:

../samples/build/armv7l/Release/

Transfer Model Downloader and Model Optimizer to Pi

From a completed full or host install of the OpenVINO™ toolkit, transfer the directories to the Pi.

There are several ways you can do this:

  • You could create an archive.
  • You might use Nextcloud* to transfer the files.
  • Your preference may be scp to copy the files (see the example below).
  • You could use the Transfer Files option in your VNC Viewer. RealVNC* is used here.

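For example, with scp (a minimal sketch; it assumes the directories live under /opt/intel/openvino/deployment_tools on the host and that the Pi is reachable as pi@raspberrypi.local - adjust the user, host, and paths to match your setup):

scp -r /opt/intel/openvino/deployment_tools/model_optimizer pi@raspberrypi.local:~/
scp -r /opt/intel/openvino/deployment_tools/model_downloader pi@raspberrypi.local:~/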

Once the model_downloader and model_optimizer directories are transferred to the Pi, you are ready to begin!

The Model Downloader does not just download models to convert with the Model Optimizer; it also includes pre-trained models. The download location of each model is displayed as it downloads.

Model Downloader

Use the Model Downloader (downloader.py) included with the OpenVINO™ toolkit, found in the model_downloader directory:

cd ~/model_downloader

A list of all the models can be displayed by running:

./downloader.py --print_all
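The full list is long. In this release --print_all prints one model name per line, so you can filter it with grep (a small sketch):

./downloader.py --print_all | grep squeezenet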

Troubleshooting: If you receive a No module named 'yaml' error, try 

pip install pyyaml or pip3 install pyyaml

Troubleshooting: If you need pip, try

apt install python-pip

Details about the models are found in:

list_topologies.yml

Fields include but are not limited to:
  • name 
  • description
  • model
  • model_hash
  • weights
  • weights_hash
  • model_optimizer_args
  • framework

 

Downloader Example 1

SqueezeNet 1.1

The list_topologies.yml entry for squeezenet1.1:
# PUBLIC TOPOLOGIES
#
- name: "squeezenet1.1"
  description: "SqueezeNet v1.1 (https://github.com/DeepScale/SqueezeNet/tree/master/SqueezeNet_v1.1)"
  model: https://raw.githubusercontent.com/DeepScale/SqueezeNet/a47b6f13d30985279789d08053d37013d67d131b/SqueezeNet_v1.1/deploy.prototxt
  model_hash: d041bfb2ab4b32fda4ff6c6966684132f2924e329916aa5bfe9285c6b23e3d1c
  weights: https://github.com/DeepScale/SqueezeNet/raw/a47b6f13d30985279789d08053d37013d67d131b/SqueezeNet_v1.1/squeezenet_v1.1.caffemodel
  weights_hash: 72b912ace512e8621f8ff168a7d72af55910d3c7c9445af8dfbff4c2ee960142
  output: "classification/squeezenet/1.1/caffe"
  old_dims: [10,3,227,227]
  new_dims: [1,3,227,227]
  model_optimizer_args: "--framework caffe --data_type FP32 --input_shape [1,3,227,227] --input data --mean_values data[104.0,117.0,123.0] --output prob --input_model <squeezenet1.1.caffemodel> --input_proto <squeezenet1.1.prototxt>"
  framework: caffe
  license: https://github.com/DeepScale/SqueezeNet/blob/master/LICENSE

Run this command from the model_downloader directory:

sudo ./downloader.py --name squeezenet1.1

You may need to enter your password:

[sudo] password for vino-r4:

Download begins

###############|| Start downloading models ||###############
...100%, 9 KB, 18879 KB/s, 0 seconds passed 
========= squeezenet1.1.prototxt ====> /opt/intel/openvino/model_downloader/classification/squeezenet/1.1/caffe/squeezenet1.1.prototxt

Downloads weights:

###############|| Start downloading weights ||###############                                                                            
 ...100%, 4834 KB, 4061 KB/s, 1 seconds passed  
========= squeezenet1.1.caffemodel ====> /opt/intel/openvino/model_downloader/classification/squeezenet/1.1/caffe/squeezenet1.1.caffemodel

Downloads topologies in tarballs

Post Processing

###############|| Start downloading topologies in tarballs ||###############
                                                                                                
###############|| Post processing ||###############                
                                                                         
========= Changing input dimensions in squeezenet1.1.prototxt =========


Model Optimizer - Convert to Floating Point 16 (FP16)

Use the Model Optimizer to produce an Intermediate Representation (IR) of the network, which can be read, loaded, and inferred with the Inference Engine. The Inference Engine offers a unified API across a number of supported Intel® platforms. The Intermediate Representation is a pair of files describing the model: 

  • .xml - Describes the network topology.
  • .bin - Contains the weights and biases binary data.


These IR files are produced in Single Precision (FP32) by default; add the --data_type FP16 flag to produce Half Precision (FP16), which the MYRIAD plugin for the Intel® NCS 2 requires. Change to the model_optimizer directory on your system.
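For example (a sketch; model.caffemodel is a placeholder input model):

# Default: produces an FP32 IR
python3 mo.py --input_model model.caffemodel
# FP16 IR, suitable for the Intel® NCS 2 (MYRIAD):
python3 mo.py --input_model model.caffemodel --data_type FP16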

Optimizer Example 1

SqueezeNet 1.1

Install prerequisites

install_prerequisites.sh is the main file, called by the individual framework scripts:

install_prerequisites.sh

install_prerequisites_caffe.sh

install_prerequisites_tf.sh

install_prerequisites_mxnet.sh

install_prerequisites_kaldi.sh

install_prerequisites_onnx.sh

Use install_prerequisites.sh for all of the following:

  • caffe
  • tf
  • mxnet
  • kaldi
  • onnx
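The framework-specific scripts are thin wrappers; in this release install_prerequisites.sh also accepts the framework name as an argument (a sketch; verify against your copy of the script):

sudo ./install_prerequisites.sh caffe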

Edit install_prerequisites.sh

Open a terminal or editor to edit this file.

Before

Line 53. Change lsb-release to os-release

Line 54. Change ubuntu to raspbian

After

Line 71. Change ubuntu to raspbian

Exit the editor (ctrl+x in nano; ctrl+x ctrl+c in Emacs) and save when prompted.

Hint: If using nano, use the command ctrl+w to search for the word ubuntu.
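If you prefer to make the edits non-interactively, here is a sed sketch (it assumes the line numbers above match your copy of install_prerequisites.sh; check with a quick grep first):

sed -i '53s/lsb-release/os-release/' install_prerequisites.sh
sed -i '54s/ubuntu/raspbian/' install_prerequisites.sh
sed -i '71s/ubuntu/raspbian/' install_prerequisites.sh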

Run this command from the install_prerequisites directory:

sudo ./install_prerequisites_caffe.sh

Now, you are ready to use the model optimizer. This is the command format.

python3 mo.py --input_model <INPUT_MODEL>.caffemodel --data_type=FP16

Example command

python3 mo.py --input_model /opt/intel/openvino/model_downloader/classification/squeezenet/1.1/caffe/squeezenet1.1.caffemodel --data_type=FP16
Results
root@raspberrypi:/ python3 mo.py --input_model /opt/intel/openvino/model_downloader/classification/squeezenet/1.1/caffe/squeezenet1.1.caffemodel --data_type=FP16
Model Optimizer arguments:
Common parameters:
    - Path to the Input Model:     /opt/intel/openvino/model_downloader/classification/squeezenet/1.1/caffe/squeezenet1.1.caffemodel                                          
    - Path for generated IR:     /opt/intel/openvino/model_optimizer/.
    - IR output name:   squeezenet1.1
    - Log level:        ERROR
    - Batch:            Not specified, inherited from the model
    - Input layers:     Not specified, inherited from the model
    - Output layers:    Not specified, inherited from the model
    - Input shapes:     Not specified, inherited from the model
    - Mean values:      Not specified
    - Scale values:     Not specified
    - Scale factor:     Not specified
    - Precision of IR:  FP16
    - Enable fusing:    True
    - Enable grouped convolutions fusing:      True
    - Move mean values to preprocess section:  False
    - Reverse input channels:          False
Caffe specific parameters:
    - Enable resnet optimization:      True
    - Path to the Input prototxt:     /opt/intel/openvino/model_downloader/classification/squeezenet/1.1/caffe/squeezenet1.1.prototxt
    - Path to CustomLayersMapping.xml:  Default
    - Path to a mean file:         Not specified
    - Offsets for a mean file:     Not specified
Model Optimizer version:     1.5.12.49d067a0
Please expect that Model Optimizer conversion might be slow. You are currently using Python protobuf library implementation.
However you can use the C++ protobuf implementation that is supplied with the OpenVINO toolkit or build protobuf library from sources.
Navigate to "install_prerequisites" folder and run: python -m easy_install protobuf-3.5.1-py($your_python_version)-win-amd64.egg
set PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=cpp

 For more information please refer to Model Optimizer FAQ (<INSTALL_DIR>/deployment_tools/documentation/docs/MO_FAQ.html), question #80.

[ SUCCESS ] Generated IR model.
[ SUCCESS ] XML file: /opt/intel/openvino/model_optimizer/./squeezenet1.1.xml
[ SUCCESS ] BIN file: /opt/intel/openvino/model_optimizer/./squeezenet1.1.bin
[ SUCCESS ] Total execution time: 51.59 seconds.
root@raspberrypi:/opt/intel/openvino/model_optimizer#

You can move the optimized models or leave them in place; just note where they are, because you need the path when running the application.
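For example, to collect the IR pair in one place (a sketch using the output paths from the run above; the destination directory is arbitrary):

mkdir -p ~/models
cp /opt/intel/openvino/model_optimizer/squeezenet1.1.xml ~/models/
cp /opt/intel/openvino/model_optimizer/squeezenet1.1.bin ~/models/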

Run an application using the optimized model and Intel® NCS 2

Use the Benchmark App

 

./path/to/benchmark_app -m /path/to/optimized/model -i /path/to/image -d MYRIAD

./armv7l/Release/benchmark_app -m /opt/intel/openvino/deployment_tools/inference_engine/samples/squeezenet1.1.xml -d MYRIAD -i /home/pi/Desktop/car.png

After running the app with the optimized model, 8 steps are completed, with results for latency and throughput. Your results may vary.


Troubleshooting: If you ran the build samples script, make sure to use the applications built in the build directory under /opt, not those in the inference_engine_samples_build directory.

Additional Example: Download Pre-Trained Models on Raspbian* using the Model Downloader

Downloader

Example 2 

Vehicle License Plate Detection Barrier FP16

Run this command to download the FP16 pre-trained model for vehicle license plate detection:

sudo ./downloader.py --name vehicle-license-plate-detection-barrier-0106-fp16
 
###############|| Start downloading models ||###############
...100%, 93 KB, 1527 KB/s, 0 seconds passed
========= vehicle-license-plate-detection-barrier-0106-fp16.xml ====>  /opt/intel/openvino/model_downloader/Security/object_detection/barrier/0106/dldt/vehicle-license-plate-detection-barrier-0106-fp16.xml
 
###############|| Start downloading weights ||###############
...100%, 1256 KB, 4470 KB/s, 0 seconds passed 
========= vehicle-license-plate-detection-barrier-0106-fp16.bin ====> /opt/intel/openvino/model_downloader/Security/object_detection/barrier/0106/dldt/vehicle-license-plate-detection-barrier-0106-fp16.bin

###############|| Start downloading topologies in tarballs ||############### 
                                      
###############|| Post processing ||###############                                                             
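Because this model is already distributed as an FP16 IR (an .xml and .bin pair), no Model Optimizer step is needed; it can be passed straight to the benchmark_app (a sketch using the download paths from the log above; the test image is a placeholder):

./armv7l/Release/benchmark_app -m /opt/intel/openvino/model_downloader/Security/object_detection/barrier/0106/dldt/vehicle-license-plate-detection-barrier-0106-fp16.xml -d MYRIAD -i /home/pi/Desktop/car.png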