Install the Intel® Distribution of OpenVINO™ Toolkit for Raspbian* OS

NOTE: The Intel® Distribution of OpenVINO™ toolkit was formerly known as the Intel® Computer Vision SDK.

Introduction

This guide applies to 32-bit Raspbian* 9 OS, which is an official OS for Raspberry Pi* boards.

IMPORTANT:
- All steps in this guide have been validated with Raspberry Pi 3.
- All steps in this guide are required unless otherwise stated.
- The Intel® Distribution of OpenVINO™ toolkit for Raspbian* OS includes the MYRIAD plugin only. You can use it with the Intel® Movidius™ Neural Compute Stick (Intel® NCS) or the Intel® Neural Compute Stick 2 plugged into one of the USB ports.

Your installation is complete when you have finished these steps:

  1. Install the Intel® Distribution of OpenVINO™ toolkit.
  2. Set the environment variables.
  3. Add USB rules.
  4. Run the Object Detection Sample and the Face Detection model (using the OpenCV* API) to validate your installation.

About the Intel® Distribution of OpenVINO™ Toolkit

The Intel® Distribution of OpenVINO™ toolkit quickly deploys applications and solutions that emulate human vision. Based on Convolutional Neural Networks (CNN), the toolkit extends computer vision (CV) workloads across Intel® hardware, maximizing performance. The Intel Distribution of OpenVINO toolkit includes the Intel® Deep Learning Deployment Toolkit (Intel® DLDT).

Included in the Installation Package

The Intel Distribution of OpenVINO toolkit for Raspbian OS is an archive with pre-installed header files and libraries. The following components are installed by default:

  • Inference Engine: The engine that runs the deep learning model. It includes a set of libraries for easy integration of inference into your applications.
  • OpenCV* version 4.0: OpenCV* community version compiled for Intel® hardware.
  • Sample Applications: A set of simple console applications demonstrating how to use the Inference Engine in your applications.

System Requirements

Hardware:

  • Raspberry Pi* board with ARMv7-A CPU architecture
  • One of Intel® Movidius™ Visual Processing Units (VPU):
    • Intel® Movidius™ Neural Compute Stick
    • Intel® Neural Compute Stick 2

Operating Systems:

  • Raspbian* Stretch, 32-bit
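
If you are not sure whether your board and OS meet these requirements, you can check them from a terminal. On a Raspberry Pi 3 running 32-bit Raspbian 9, the expected outputs are armv7l and "9 (stretch)" (this is a quick sanity check, not part of the official steps):

uname -m
grep VERSION /etc/os-release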

Installation Steps

The guide assumes you downloaded the Intel Distribution of OpenVINO toolkit for Raspbian OS. If you do not have a copy of the toolkit package file, download the latest version here and then return to this guide to proceed with the installation.

NOTE: The Intel Distribution of OpenVINO toolkit for Raspbian OS is distributed without an installer, so you need to perform extra steps compared to the Intel® Distribution of OpenVINO™ toolkit for Linux* OS.

Install the Package

  1. Open the Terminal* or your preferred console application.
  2. Go to the directory in which you downloaded the Intel Distribution of OpenVINO toolkit. This document assumes this is your ~/Downloads directory. If not, replace ~/Downloads with the directory where the file is located.
    cd ~/Downloads/

    By default, the package file is saved as l_openvino_toolkit_ie_p_<version>.tgz.

  3. Unpack the archive:
    tar -xf l_openvino_toolkit_ie_p_<version>.tgz
  4. Modify the setupvars.sh script by replacing <INSTALLDIR> with the absolute path to the installation folder:
    sed -i "s|<INSTALLDIR>|$(pwd)/inference_engine_vpu_arm|" inference_engine_vpu_arm/bin/setupvars.sh
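
To make sure the placeholder was replaced, you can check that <INSTALLDIR> no longer appears in the script (a quick sanity check; if the command prints nothing, the path was substituted correctly):

grep '<INSTALLDIR>' inference_engine_vpu_arm/bin/setupvars.sh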

Now the Intel Distribution of OpenVINO toolkit is ready to be used. Continue to the next sections to configure the environment and set up USB rules.

Set the Environment Variables

You must update several environment variables before you can compile and run Intel Distribution of OpenVINO toolkit applications. Run the following script to temporarily set the environment variables:

source inference_engine_vpu_arm/bin/setupvars.sh
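
If you want to confirm that the variables were exported, you can look for values that now point into the toolkit folder (this assumes the default folder name from the unpacked archive; the exact variable names may differ between toolkit releases):

env | grep inference_engine_vpu_arm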

(Optional) The Intel Distribution of OpenVINO environment variables are removed when you close the shell. You can instead set the environment variables permanently as follows:

  1. Open the .bashrc file in <user_directory>:
    vi <user_directory>/.bashrc
  2. Add this line to the end of the file:
    source ~/Downloads/inference_engine_vpu_arm/bin/setupvars.sh
  3. Save and close the file: press Esc and type :wq.
  4. To test your change, open a new terminal.
    You will see the following:
    [setupvars.sh] OpenVINO environment initialized
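
If you prefer not to edit the file in an editor, you can append the same line from step 2 on the command line instead (this assumes the archive was unpacked in ~/Downloads; adjust the path if you unpacked it elsewhere):

echo "source ~/Downloads/inference_engine_vpu_arm/bin/setupvars.sh" >> ~/.bashrc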

Add USB Rules

  1. Add the current Linux user to the users group:
    sudo usermod -a -G users "$(whoami)"

    Log out and log in for it to take effect.

  2. To perform inference on the Intel® Movidius™ Neural Compute Stick or Intel® Neural Compute Stick 2, install the USB rules as follows:
    sh inference_engine_vpu_arm/install_dependencies/install_NCS_udev_rules.sh
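
With the stick plugged in and the rules installed, you can check that the OS sees the device. The USB vendor ID 03e7 belongs to Intel Movidius hardware; the exact product string differs between the Intel® NCS and the Intel® NCS 2, so treat this as a rough check rather than an official step:

lsusb | grep 03e7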

Build and Run Object Detection Sample

Follow these steps to run the pre-trained Face Detection network using a sample from the Intel Distribution of OpenVINO toolkit:

  1. Go to the folder with samples source code:
    cd inference_engine_vpu_arm/deployment_tools/inference_engine/samples
  2. Create a build directory and enter it:
    mkdir build && cd build
  3. Build the Object Detection Sample:
    cmake .. -DCMAKE_BUILD_TYPE=Release -DCMAKE_CXX_FLAGS="-march=armv7-a"
    make -j2 object_detection_sample_ssd
  4. Download the pre-trained Face Detection model or copy it from a host machine:
    • To download the .bin file with weights:
      wget --no-check-certificate https://download.01.org/openvinotoolkit/2018_R4/open_model_zoo/face-detection-adas-0001/FP16/face-detection-adas-0001.bin
    • To download the .xml file with the network topology:
      wget --no-check-certificate https://download.01.org/openvinotoolkit/2018_R4/open_model_zoo/face-detection-adas-0001/FP16/face-detection-adas-0001.xml
  5. Run the sample, specifying the path to the model and an input image (a concrete example follows this list):
    ./armv7l/Release/object_detection_sample_ssd -m face-detection-adas-0001.xml -d MYRIAD -i <path_to_image>
  6. Optional: Build all samples:
    • Navigate to samples directory:
      cd ~/Downloads/inference_engine_vpu_arm/deployment_tools/inference_engine/samples
    • Run the script that builds the samples and places them in $HOME/inference_engine_samples_build/armv7l/Release:
      ./build_samples.sh
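
For example, with a test photo saved as ~/Downloads/faces.jpg (a hypothetical path; use any image that contains faces), the command from step 5 would look like this:

./armv7l/Release/object_detection_sample_ssd -m face-detection-adas-0001.xml -d MYRIAD -i ~/Downloads/faces.jpg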

Run Face Detection Model Using OpenCV* API

To validate the OpenCV* installation, you can run OpenCV's deep learning module with the Inference Engine backend. Here is a Python* sample that works with the Face Detection model:

  1. Download the pre-trained Face Detection model or copy it from a host machine:
    • To download the .bin file with weights:
      wget --no-check-certificate https://download.01.org/openvinotoolkit/2018_R4/open_model_zoo/face-detection-adas-0001/FP16/face-detection-adas-0001.bin
    • To download the .xml file with the network topology:
      wget --no-check-certificate https://download.01.org/openvinotoolkit/2018_R4/open_model_zoo/face-detection-adas-0001/FP16/face-detection-adas-0001.xml
  2. Create a new Python file named openvino_fd_myriad.py and copy the following script into it:
    import cv2 as cv
    
    # Load the model 
    net = cv.dnn.readNet('face-detection-adas-0001.xml', 'face-detection-adas-0001.bin') 
    
    # Specify target device 
    net.setPreferableTarget(cv.dnn.DNN_TARGET_MYRIAD)
          
    # Read an image 
    frame = cv.imread('/path/to/image')
          
    # Prepare input blob and perform an inference 
    blob = cv.dnn.blobFromImage(frame, size=(672, 384), ddepth=cv.CV_8U)
    net.setInput(blob)
    out = net.forward()
          
    # Draw detected faces on the frame 
    for detection in out.reshape(-1, 7): 
        confidence = float(detection[2]) 
        xmin = int(detection[3] * frame.shape[1]) 
        ymin = int(detection[4] * frame.shape[0]) 
        xmax = int(detection[5] * frame.shape[1]) 
        ymax = int(detection[6] * frame.shape[0])
    
        if confidence > 0.5:
            cv.rectangle(frame, (xmin, ymin), (xmax, ymax), color=(0, 255, 0))
    
    # Save the frame to an image file 
    cv.imwrite('out.png', frame) 
    
  3. Run the script:
    python3 openvino_fd_myriad.py

In this script, OpenCV* loads the Face Detection model in the Intermediate Representation (IR) format along with an input image, runs inference on the MYRIAD device, and saves the image with detected faces drawn on it.
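
If you would like to run the same model on a live camera feed instead of a single image, the sketch below is a minimal variation of the script above. It assumes a camera is available at index 0 (for example, a USB webcam) and that a display is attached, since it shows the annotated frames in a window; the model loading and post-processing are the same as in the script above.

import cv2 as cv

# Load the model and target the MYRIAD device, as in the script above
net = cv.dnn.readNet('face-detection-adas-0001.xml', 'face-detection-adas-0001.bin')
net.setPreferableTarget(cv.dnn.DNN_TARGET_MYRIAD)

cap = cv.VideoCapture(0)  # assumes a camera at index 0, e.g. a USB webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break

    # Prepare input blob and perform an inference
    blob = cv.dnn.blobFromImage(frame, size=(672, 384), ddepth=cv.CV_8U)
    net.setInput(blob)
    out = net.forward()

    # Draw detected faces on the frame
    for detection in out.reshape(-1, 7):
        confidence = float(detection[2])
        if confidence > 0.5:
            xmin = int(detection[3] * frame.shape[1])
            ymin = int(detection[4] * frame.shape[0])
            xmax = int(detection[5] * frame.shape[1])
            ymax = int(detection[6] * frame.shape[0])
            cv.rectangle(frame, (xmin, ymin), (xmax, ymax), color=(0, 255, 0))

    cv.imshow('faces', frame)
    if cv.waitKey(1) == 27:  # press Esc to stop
        break

cap.release()
cv.destroyAllWindows()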


Comments
Gupta, Anchal:

Hi,

Can you please share how to install opencv-openvino?

Keyes, Mike:

Olesya -- Thanks for putting this out there. I'm very excited to be working with the NCS on a Pi. I'm working with the Python Face Detection sample and I'm having a bit of trouble. Using your script I was able to substitute an IR generated from the Tensorflow faster_rcnn_inception_v2_coco_2018_01_28 model directly from the Model Zoo. That worked great! However, when I substitute an IR of a model I've retrained from that faster_rcnn model I get this error:

File "openvino_fd_myriad.py", line 15, in <module>
    out = net.forward()
cv2.error: OpenCV(4.0.1-openvino) /home/jenkins/workspace/OpenCV/OpenVINO/build/opencv/modules/dnn/src/op_inf_engine.cpp:553: error:

(-215:Assertion failed) Failed to initialize Inference Engine backend: AssertionFailed: newDims[newPerm[i]] == 1

in function 'initPlugin'

 

Any ideas what would be causing that?

Moeed, Abdul:

I'm having an issue loading the IR (cv.dnn.readNet) in Python 2, with the following error:

error: (-2:Unspecified error) Build OpenCV with Inference Engine to enable loading models from Model Optimizer. in function 'readFromModelOptimizer'

Note that the same script works perfectly fine for Python 3.

I've also tried reinstalling OpenCV with the following flags, as suggested by some other article:

cmake -DWITH_INF_ENGINE=ON -DENABLE_CXX11=ON

But the issue persists. Any help in the matter would be appreciated. As the rest of my code base is in Python 2, it would be ideal for OpenVino to run in the same environment. Thanks.

Hauchecorne, Léo Flaventin:

Hey,

I'm already using a customized OpenCV. Is there any way to get the source of the distributed OpenCV version so that I can integrate my modifications into it?

Price, Shaun:

For anybody with a Hardkernel Odroid XU4 running Ubuntu 18.04, you can get OpenVino running on it by copying or renaming the raspbian_9 folder in the directory /inference_engine_vpu_arm/deployment_tools/inference_engine/lib/ to a folder called ubuntu_18.04. All the other instructions are the same.

so:
/inference_engine_vpu_arm/deployment_tools/inference_engine/lib/raspbian_9
becomes:
/inference_engine_vpu_arm/deployment_tools/inference_engine/lib/ubuntu_18.04

I've tested it with both the NCS and NCS 2 sticks.

Wood, Joe:

Hi Pieter,

For the issue with line 13, move net.setInput(blob) into the next line or put ; in front of it.

# Prepare input blob and perform an inference
blob = cv.dnn.blobFromImage(frame, size=(672,384), ddepth=cv.CV_8U)
net.setInput(blob)
out = net.forward()
 

Just in case...

Replace /path/to/image with the actual path to your image.

Ps. Adrenayova is a surname.

Lucny, Andrej:

The fix for example 2, line 13, is easy:

blob = cv.dnn.blobFromImage(frame, size=(672, 384), ddepth=cv.CV_8U) 
net.setInput(blob) 

instead of
blob = cv.dnn.blobFromImage(frame, size=(672, 384), ddepth=cv.CV_8U) net.setInput(blob)

(and cmake is called once, but make should be used for each other project)

(all stuff is working fine for me with Movidius neural compute stick)

Geelen, Pieter:

Hi Adrenayova,

Thank you for your post! I got the first example running immediately. The second example has an issue, however: the Python code you posted has a problem in line 13 and will not run. Could you check what went wrong there? (I am not great at OpenCV, so I can't tell.) Furthermore, I wanted to understand the samples a little better; I noticed that there are a lot of samples available. Do I still need to build them with CMake?

 

Thank you for your time :)

Regards, Pieter

Isaac's, Paul:

I'm trying to get OpenVino/NCS2 combination running on a PINE64 SoPine Clusterboard ( 7 x 4-core Arm64 each with 2GB RAM running Ubuntu 18 Bionic). Combatting all the little gotchas in getting the SoPine (SoDimm format) single unit to load OpenVino right to the point of:

root@pine64:~/Developer/inference_engine_vpu_arm/deployment_tools/inference_engine/samples/build# make -j2 object_detection_sample_ssd
Scanning dependencies of target format_reader
Scanning dependencies of target ie_cpu_extension
[...progress output for the format_reader and ie_cpu_extension objects...]
[ 24%] Linking CXX shared library ../../intel64/Release/lib/libformat_reader.so
/home/ubuntu/Developer/inference_engine_vpu_arm/opencv/lib/libopencv_imgcodecs.so.4.0.1: error adding symbols: File in wrong format
clang: error: linker command failed with exit code 1 (use -v to see invocation)
make[3]: *** [intel64/Release/lib/libformat_reader.so] Error 1
make[2]: *** [common/format_reader/CMakeFiles/format_reader.dir/all] Error 2
make[2]: *** Waiting for unfinished jobs....
[...more progress output...]
[ 80%] Linking CXX shared library ../intel64/Release/lib/libcpu_extension.so
/home/ubuntu/Developer/inference_engine_vpu_arm/deployment_tools/inference_engine/lib/ubuntu_18.04/libinference_engine.so: error adding symbols: File in wrong format
clang: error: linker command failed with exit code 1 (use -v to see invocation)
make[3]: *** [intel64/Release/lib/libcpu_extension.so] Error 1
make[2]: *** [ie_cpu_extension/CMakeFiles/ie_cpu_extension.dir/all] Error 2
make[1]: *** [object_detection_sample_ssd/CMakeFiles/object_detection_sample_ssd.dir/rule] Error 2
make: *** [object_detection_sample_ssd] Error 2

Which basically means that obviously trying to link 32-bit arm7-a libraries to 64-bit aarch64 architecture doesn't work...

Please release the source for these libraries so I can recompile for aarch64... Please Intel!

I at least got as far as step 3 of Build and Run Object Detection Sample.
