Install the Intel® Distribution of OpenVINO™ toolkit for Linux*

This guide applies to Ubuntu*, CentOS*, and Yocto* OSes. If you are using the Intel® Distribution of OpenVINO™ toolkit on Windows* OS, see the Installation Guide for Windows*. If you are using the Intel® Distribution of OpenVINO™ toolkit with Support for FPGA, see the Installation Guide for Intel® Distribution of OpenVINO™ toolkit with Support for FPGA.

Introduction

Important:
- All steps in this guide are required unless otherwise stated.
- In addition to the downloaded package, you must install dependencies and complete configuration steps.

Your installation is complete when you have finished all of the following:

  1. Installed the external software dependencies.
  2. Installed the Intel® Distribution of OpenVINO™ toolkit core components.
  3. Set the Intel Distribution of OpenVINO toolkit environment variables and (optional) updated .bashrc.
  4. Configured the Model Optimizer.
  5. Ran two demos.
  6. Optional: Installed software or drivers for Intel® Processor Graphics (GPU), the Intel® Movidius™ Neural Compute Stick, or the Intel® Neural Compute Stick 2.

    About the Intel® Distribution of OpenVINO™ toolkit

    The Intel® Distribution of OpenVINO™ toolkit quickly deploys applications and solutions that emulate human vision. Based on Convolutional Neural Networks (CNN), the toolkit extends computer vision (CV) workloads across Intel® hardware, maximizing performance. The Intel Distribution of OpenVINO toolkit includes the Intel® Deep Learning Deployment Toolkit (Intel® DLDT).

    The Intel Distribution of OpenVINO toolkit for Linux*:

    • Enables CNN-based deep learning inference on the edge
    • Supports heterogeneous execution across a CPU, Intel® Integrated Graphics, the Intel® Movidius™ Neural Compute Stick (NCS), and the Intel® Neural Compute Stick 2
    • Speeds time-to-market via an easy-to-use library of computer vision functions and pre-optimized kernels
    • Includes optimized calls for computer vision standards, including OpenCV*, OpenCL™, and OpenVX*

    Included with the installation

    The following components are installed by default:

    • Model Optimizer: This tool imports, converts, and optimizes models that were trained in popular frameworks to a format usable by Intel tools, especially the Inference Engine. For details, see the Model Optimizer Developer Guide.
      Note: Popular frameworks include Caffe*, TensorFlow*, MXNet*, and ONNX*.
    • Inference Engine: This is the engine that runs a deep learning model. It includes a set of libraries for easy inference integration into your applications. For details, see the Inference Engine Developer Guide.
    • Drivers and runtimes for OpenCL™ version 2.1: Enables OpenCL on the GPU/CPU for Intel® processors.
    • Intel® Media SDK: Offers access to hardware-accelerated video codecs and frame processing.
    • OpenCV* version 4.0: OpenCV* community version compiled for Intel® hardware. Includes PVL libraries for computer vision.
    • OpenVX* version 1.1: Intel's implementation of OpenVX* 1.1 optimized for running on Intel® hardware (CPU, GPU, IPU).
    • Pre-trained models: A set of Intel's pre-trained models for learning and demo purposes or for developing deep learning software.
    • Sample Applications: A set of simple console applications demonstrating how to use Intel's Deep Learning Inference Engine in your applications. For information about building and running the samples, see the Inference Engine Developer Guide.

    System requirements

    This guide covers the Linux version of the Intel Distribution of OpenVINO toolkit that does not include FPGA support. For the toolkit that includes FPGA support, see Installing the Intel Distribution of OpenVINO toolkit for Linux with FPGA Support.

    Hardware

    • 6th-8th Generation Intel® Core™
    • Intel® Xeon® v5 family
    • Intel® Xeon® v6 family
    • Intel® Pentium® processor N4200/5, N3350/5, N3450/5 with Intel® HD Graphics
    • Intel® Movidius™ NCS 
    • Intel® Neural Compute Stick 2

    Processor Notes:

    Operating Systems

    • Ubuntu* 16.04 long-term support (LTS), 64-bit
    • CentOS* 7.4 or higher, 64-bit
    • Yocto Project* Poky Jethro* v2.0.3, 64-bit (for target only)

    Installation Steps

    This guide assumes you downloaded the Intel Distribution of OpenVINO toolkit for Linux* OS. If you do not have a copy of the toolkit package file, download the latest version and then return to this guide to proceed with the installation.

    Install the Intel® Distribution of OpenVINO™ toolkit core components 

    1. Open the Terminal* or your preferred console application and go to the directory in which you downloaded the Intel® Distribution of OpenVINO™ toolkit. This document assumes this is your ~/Downloads directory. If not, replace ~/Downloads with the directory where the file is located:
      cd ~/Downloads/
      By default, the package file is saved as l_openvino_toolkit_p_<version>.tgz
    2. Unpack the .tgz file you downloaded:
      tar -zxf l_openvino_toolkit_p_<version>.tgz
      The files are unpacked to a directory named l_openvino_toolkit_p_<version>
    3. Go to the l_openvino_toolkit_p_<version> directory:
      cd l_openvino_toolkit_p_<version>
    4. If you have a previous version of the toolkit installed, rename or delete these two directories:
      • /home/<user>/inference_engine_samples
      • /home/<user>/openvino_models
    5. Choose one of the installation options below and run the related script with root or regular user privileges. The default installation directory path depends on the privileges you choose for the installation.
      You can use either a GUI installation wizard or command-line instructions. The only difference is that the command-line installer is text-based: instead of clicking options in a GUI, you respond to prompts on a text screen.

      Use only one of these options:
      • Option 1: GUI Installation Wizard:
        sudo ./install_GUI.sh
      • Option 2: Command-line Instructions:
        sudo ./install.sh
    6. Follow the instructions on your screen. Watch for informational messages such as the following in case you must complete additional steps:

      OpenVINO Installation Prerequisites screen
       
    7. If needed, change the components you want to install or the installation directory. Pay attention to the installation directory, because you will need this information later. If you select the default options, the Installation summary GUI screen looks as follows:

      OpenVINO Installation Summary screen 
       
      • If you used root privileges to run the installer, it installs the Intel Distribution of OpenVINO toolkit in this directory: /opt/intel/computer_vision_sdk_<version>/

        For simplicity, a symbolic link to the latest installation is also created: /opt/intel/computer_vision_sdk/

      • If you used regular user privileges to run the installer, it installs the Intel Distribution of OpenVINO toolkit in this directory: /home/<user>/intel/computer_vision_sdk_<version>/

        For simplicity, a symbolic link to the latest installation is also created: /home/<user>/intel/computer_vision_sdk/

    8. A Complete screen indicates the installation is finished. Write down the version number of the software beginning with the year. You will need this information later in the document.

      OpenVINO Installation Complete screen
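
    If you want to confirm where the toolkit was installed before you continue, you can list the installation directory. This quick check assumes a root installation in the default location; adjust the path for a regular-user installation:

      ls -l /opt/intel/
      # The computer_vision_sdk symbolic link should point to the versioned
      # computer_vision_sdk_<version> directory created by the installer.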

       

    The core components are installed. Continue to the next section to install additional dependencies.

    Install external software dependencies

    1. Go to the install_dependencies directory:
      cd /opt/intel/computer_vision_sdk/install_dependencies
    2. Run the script that automatically downloads and installs the external software dependencies. These dependencies are required by the Intel-optimized version of OpenCV, the Deep Learning Inference Engine, and the Deep Learning Model Optimizer tools; install them before you use these toolkit components:
      sudo -E ./install_cv_sdk_dependencies.sh

      As an option, you can install all the dependencies manually instead of running install_cv_sdk_dependencies.sh. In this case, use the list of dependencies at System Requirements.

    The dependencies are installed. Continue to the next section to set the environment variables.

    Note: The Model Optimizer has additional prerequisites that are addressed later in this document.

    Set the environment variables 

    You must update several environment variables before you can compile and run OpenVINO™ applications. Run the following script to temporarily set your environment variables:

    source /opt/intel/computer_vision_sdk/bin/setupvars.sh

    (Optional) The Intel Distribution of OpenVINO toolkit environment variables are removed when you close the shell. As an option, you can permanently set the environment variables as follows:

    1. Open the .bashrc file in <user_directory>:
      vi <user_directory>/.bashrc
    2. Add this line to the end of the file:
      source /opt/intel/computer_vision_sdk/bin/setupvars.sh
    3. Save and close the file: press the Esc key and type :wq.
    4. To test your change, open a new terminal. You will see [setupvars.sh] OpenVINO environment initialized
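
    As an alternative to steps 1 through 3, you can append the line with a single command. This is only a sketch; it assumes a bash shell and the default root installation path:

      echo "source /opt/intel/computer_vision_sdk/bin/setupvars.sh" >> ~/.bashrc
      # Open a new terminal (or run "source ~/.bashrc") and check that
      # [setupvars.sh] OpenVINO environment initialized is printed.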

    The environment variables are set. Continue to the next section to configure the Model Optimizer.

    Configure the Model Optimizer 

    Important: This section is required. You must configure the Model Optimizer for at least one framework. The Model Optimizer will fail if you do not complete the steps in this section.

    The Model Optimizer is a key component of the Intel Distribution of OpenVINO toolkit. You cannot do inference on your trained model without running the model through the Model Optimizer. When you run a pre-trained model through the Model Optimizer, your output is an Intermediate Representation (IR) of the network. The IR is a pair of files that describe the whole model:

    • .xml: Describes the network topology
    • .bin: Contains the weights and biases binary data

    The Inference Engine reads, loads, and infers the IR files, using a common API across the CPU, GPU, or VPU hardware.

    The Model Optimizer is a Python*-based command line tool (mo.py), which is located in /opt/intel/computer_vision_sdk/deployment_tools/model_optimizer.

    Use this tool on models trained with popular deep learning frameworks such as Caffe*, TensorFlow*, MXNet*, and ONNX* to convert them to an optimized IR format that the Inference Engine can use.
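
    For example, once the Model Optimizer is configured as described below, converting a Caffe model to IR looks roughly like the following sketch; the input model path and output directory are placeholders, and the exact options accepted can vary between toolkit versions:

      cd /opt/intel/computer_vision_sdk/deployment_tools/model_optimizer
      # <path_to_model>/squeezenet1.1.caffemodel is a hypothetical input file
      python3 mo.py --input_model <path_to_model>/squeezenet1.1.caffemodel --output_dir ~/openvino_models/ir
      # The result is an IR pair: squeezenet1.1.xml (topology) and squeezenet1.1.bin (weights)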

    This section explains how to use scripts to configure the Model Optimizer either for all of the supported frameworks at the same time or for individual frameworks. If you want to manually configure the Model Optimizer instead of using scripts, see the using manual configuration process section in the Model Optimizer Developer Guide.

    For more information about the Model Optimizer, see the Model Optimizer Developer Guide.

    Model Optimizer configuration steps

    You can either configure the Model Optimizer for all supported frameworks at once, or for one framework at a time. Choose the option that best suits your needs. If you see error messages, make sure you installed all dependencies.

    Note: If you did not install the Intel Distribution of OpenVINO toolkit to the default installation directory, replace /opt/intel/ with the directory where you installed the software.

    Option 1: Configure the Model Optimizer for all supported frameworks at the same time:

    1. Go to the Model Optimizer prerequisites directory:
      cd /opt/intel/computer_vision_sdk/deployment_tools/model_optimizer/install_prerequisites
    2. Run the script to configure the Model Optimizer for Caffe, TensorFlow, MXNet, Kaldi*, and ONNX:
      sudo ./install_prerequisites.sh

    Option 2: Configure the Model Optimizer for each framework separately:

    1. Go to the Model Optimizer prerequisites directory:
      cd /opt/intel/computer_vision_sdk/deployment_tools/model_optimizer/install_prerequisites
    2. Run the script for your model framework. You can run more than one script:
      • For Caffe:
        sudo ./install_prerequisites_caffe.sh
      • For TensorFlow:
        sudo ./install_prerequisites_tf.sh
      • For MXNet:
        sudo ./install_prerequisites_mxnet.sh
      • For Kaldi:
        sudo ./install_prerequisites_kaldi.sh
      • For ONNX:
        sudo ./install_prerequisites_onnx.sh
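
    After running a script, you can optionally confirm that the Model Optimizer starts. This is only a quick sanity check:

      cd /opt/intel/computer_vision_sdk/deployment_tools/model_optimizer
      python3 mo.py --help
      # A list of conversion options is printed if the prerequisites installed correctly.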

    The Model Optimizer is configured for one or more frameworks. You are ready to use two short demos to see the results of running the Intel Distribution of OpenVINO toolkit and to verify your installation was successful. The demo scripts are required since they perform additional configuration steps. Continue to the next section.

    If you want to use a GPU or VPU, read through the Optional Steps section. 

     

    Use the Demo Scripts to Verify Your Installation 

    Important: This section is required. In addition to confirming that your installation was successful, the demo scripts perform additional steps, such as setting up your computer to use the Model Optimizer samples.

    Note: To run the demo applications on Intel® Processor Graphics, Intel® Movidius™ Neural Compute Stick or Intel® Neural Compute Stick 2, make sure you completed the Additional Installation Steps first.

    To learn more about the demo applications, see README.txt in /opt/intel/computer_vision_sdk/deployment_tools/demo/.

    For detailed description of the pre-trained object detection and object recognition models, go to /opt/intel/computer_vision_sdk/deployment_tools/intel_models/ and open index.html.
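
    On a desktop system you can open that page directly from the terminal. This assumes the xdg-open utility, which is present on standard Ubuntu desktops:

      xdg-open /opt/intel/computer_vision_sdk/deployment_tools/intel_models/index.html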

    Note:
    The paths in this section assume you used the default installation directory to install the Intel Distribution of OpenVINO toolkit. If you installed the software to a directory other than /opt/intel/, update the directory path with the location where you installed the toolkit to.
    If you installed the product as a root user, you must switch to the root mode before you continue: sudo -i

    Run the image classification demo

    1. Go to the Inference Engine demo directory:
      cd /opt/intel/computer_vision_sdk/deployment_tools/demo
    2. Run the Image Classification demo: 
      ./demo_squeezenet_download_convert_run.sh
      The Image Classification demo uses the Model Optimizer to convert a SqueezeNet model to .bin and .xml Intermediate Representation (IR) files. The Inference Engine component uses these files.

      For a brief description of the Intermediate Representation .bin and .xml files, see Configuring the Model Optimizer.

      This demo creates the directory /home/<user>/inference_engine_samples/.

      This demo uses car.png in the demo directory. When the demo completes, you will see the label and confidence for the top-10 categories:

      command screen output
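
    If you want to rerun the classification step yourself after the demo finishes, first locate the files the demo created; the exact locations vary by toolkit version, so the paths below are assumptions to adjust:

      # Find the built sample binary and the IR files produced by the demo
      find ~/inference_engine_samples -name "classification_sample*" -type f
      find ~/openvino_models -name "squeezenet1.1.xml"
      # Then run the sample against the demo image, substituting the paths found above, for example:
      # <path_to_sample>/classification_sample -i /opt/intel/computer_vision_sdk/deployment_tools/demo/car.png -m <path_to_ir>/squeezenet1.1.xml -d CPU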

    This demo is complete. Continue to the next section to run the Inference Pipeline demo.

    Run the inference pipeline demo

    1. While still in /opt/intel/computer_vision_sdk/deployment_tools/demo/, run the Inference Pipeline demo:
      ./demo_security_barrier_camera.sh
      This demo uses car.png in /opt/intel/computer_vision_sdk/deployment_tools/demo/ to show an inference pipeline. This demo uses three pre-trained models. The demo uses vehicle recognition in which vehicle attributes build on each other to narrow in on a specific attribute. The demo works as follows:
      1. An object is identified as a vehicle.
      2. This identification is used as input to the next model, which identifies specific vehicle attributes, including the license plate.
      3. The attributes identified as the license plate are used as input to the third model, which recognizes specific characters in the license plate.
      For more information about the demo, see the Security Camera Sample.​
    2. When the demo completes, two windows are open:
      • A console window that displays information about the tasks performed by the demo
      • An image viewer window that displays a picture similar to the following:

        car identified by AI
    3. Close the image viewer screen to end the demo.

    In this section, you saw a preview of the Intel Distribution of OpenVINO toolkit capabilities.

    You have completed all the required installation, configuration, and build steps to work with your trained models using the CPU.

    If you want to use Intel® Processor graphics (GPU), Intel® Movidius™ Neural Compute Stick or Intel® Neural Compute Stick 2 (VPU), read through the next section for additional steps.  

    Note: If you are migrating from the Intel® Computer Vision SDK 2017 R3 Beta version to the Intel Distribution of OpenVINO toolkit, read this information about porting your applications.

    Read the Summary for your next steps.

    Optional Steps 

    Use these steps to prepare your computer to use Intel® Processor Graphics (GPU), the Intel® Movidius™ Neural Compute Stick, or the Intel® Neural Compute Stick 2.

    Additional installation steps for processor graphics (GPU) 

    Note: These steps are only required if you want to enable the toolkit components to utilize processor graphics (GPU) on your system.

    1. Go to the install_dependencies directory:
      cd /opt/intel/computer_vision_sdk/install_dependencies/
    2. Enter the super user mode:
      sudo -E su
    3. Check your kernel version. The minimum supported kernel is 4.14:
      uname -r

      Note: You can use a kernel at or above 4.14.

    4. If your kernel version is below 4.14, update it to the minimum of 4.14:
      ./install_4_14_kernel.sh
    5. Install the Intel® Graphics Compute Runtime for OpenCL™ Driver components required to use the GPU plugin and write custom layers for Intel® Integrated Graphics:
      ./install_NEO_OCL_driver.sh

      Note: Two command-line suggestions display:
      — Add OpenCL user to video group
      — Run script to install the 4.14 kernel script
      Both suggestions are incorrect. Disregard them and continue.

    6. Reboot the machine:
      reboot
    7. (Optional) Install the header files to allow compiling new code. You can find the header files at KhronosGroup OpenCL-Headers.
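
    After the reboot, you can optionally check that an OpenCL GPU device is visible. This assumes the clinfo utility, which is not part of the toolkit and may need to be installed separately (for example, sudo apt install clinfo on Ubuntu):

      clinfo | grep -i "device name"
      # An entry for Intel Graphics indicates the OpenCL driver is working.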

    Additional installation steps for the Intel® Movidius™ Neural Compute Stick and Intel® Neural Compute Stick 2  

    Note: These steps are only required if you want to perform inference on Intel® Movidius™ NCS powered by the Intel® Movidius™ Myriad™ 2 VPU or Intel® Neural Compute Stick 2 powered by the Intel® Movidius™ Myriad™ X VPU. See also the Get Started page for Intel® Neural Compute Stick 2: https://software.intel.com/en-us/neural-compute-stick/get-started

    1. Add the current Linux user to the users group:
      sudo usermod -a -G users "$(whoami)"
      
    2. To perform inference on Intel® Movidius™ Neural Compute Stick and Intel® Neural Compute Stick 2, install the USB rules as follows:
      cat <<EOF > 97-usbboot.rules
      SUBSYSTEM=="usb", ATTRS{idProduct}=="2150", ATTRS{idVendor}=="03e7", GROUP="users", MODE="0666", ENV{ID_MM_DEVICE_IGNORE}="1"
      SUBSYSTEM=="usb", ATTRS{idProduct}=="2485", ATTRS{idVendor}=="03e7", GROUP="users", MODE="0666", ENV{ID_MM_DEVICE_IGNORE}="1"
      SUBSYSTEM=="usb", ATTRS{idProduct}=="f63b", ATTRS{idVendor}=="03e7", GROUP="users", MODE="0666", ENV{ID_MM_DEVICE_IGNORE}="1"
      EOF
      
      sudo cp 97-usbboot.rules /etc/udev/rules.d/
      sudo udevadm control --reload-rules
      sudo udevadm trigger
      sudo ldconfig
      rm 97-usbboot.rules
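
    With the rules in place and a stick plugged in, you can optionally confirm that the device is visible over USB; 03e7 is the vendor ID used in the rules above:

      lsusb | grep 03e7
      # A line containing "ID 03e7:..." is printed if the device is detected.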

     

    Summary 

    In this document, you installed the Intel Distribution of OpenVINO toolkit and the external dependencies. In addition, you might have installed software and drivers that will let you use GPU or VPU to infer your models.

    After the software was installed, you ran two demo applications to compile the extensions library and configured the Model Optimizer for one or more frameworks.

    You are now ready to learn more about converting models trained with popular deep learning frameworks to the Inference Engine format, following the links below, or you can move on to running the sample applications.

    To learn more about converting models, see the Model Optimizer Developer Guide.

    Additional Resources

    Intel Distribution of OpenVINO Toolkit home page

    Intel Distribution of OpenVINO Toolkit documentation

    Intel Distribution of OpenVINO Toolkit Hello World Activities, see the Inference Tutorials for Face Detection and Car Detection Exercises.

    Intel® Neural Compute Stick 2 Get Started page

    For more complete information about compiler optimizations, see our Optimization Notice.