
Run OpenVINO™ Sample Applications in Docker* Container

Run the Sample Application

  1. Go to the AMR_containers folder:
    cd <edge_insights_for_amr_path>/Edge_Insights_for_Autonomous_Mobile_Robots_<version>/AMR_containers
  2. Run the command below to start the Docker container:
    ./run_interactive_docker.sh amr-ubuntu2004-full-flavour-sdk:<TAG>
  3. Set up the OpenVINO™ environment:
    source /opt/intel/openvino/bin/setupvars.sh
  4. Run Inference Engine object detection on a pretrained network using the SSD method. Run the detection demo application on CPU (a minimal Python sketch of this inference flow is given after this list):
    object_detection_demo -i /data_samples/media_samples/plates_720.mp4 -m /opt/intel/openvino/deployment_tools/open_model_zoo/tools/downloader/intel/vehicle-license-plate-detection-barrier-0106/FP32/vehicle-license-plate-detection-barrier-0106.xml -d CPU -at ssd
    You should see a video with at least one license plate of a car recognized by the Neural Network.
  5. Run the detection demo application for GPU:
    object_detection_demo -i /data_samples/media_samples/plates_720.mp4 -m /opt/intel/openvino/deployment_tools/open_model_zoo/tools/downloader/intel/vehicle-license-plate-detection-barrier-0106/FP32/vehicle-license-plate-detection-barrier-0106.xml -d GPU -at ssd
    You should see the same video as in the previous step, with at least one license plate of a car recognized by the Neural Network.
    There is a known issue: if you run the object_detection_demo application with the -d MYRIAD option, a core dump error is thrown when the demo ends.
    To use the -d MYRIAD option, start Docker as root with the following command:
    docker run -it --rm --network=host --env="USER=root" --env "DISPLAY=$DISPLAY" --env="QT_X11_NO_MITSHM=1" --security-opt apparmor:unconfined --volume /tmp/.X11-unix:/tmp/.X11-unix --volume "${HOME}"/.Xauthority:/home/"$(whoami)"/.Xauthority:rw --volume "${HOME}":/home/"$(whoami)":rw --volume "${HOME}"/.cache:/.cache:rw --volume /run/user:/run/user --volume /var/run/nscd/socket:/var/run/nscd/socket:ro --volume /dev:/dev --volume /lib/modules:/lib/modules --volume /tmp:/tmp:rw --privileged amr-ubuntu2004-full-flavour-sdk:<TAG>
    If errors occur, remove the following file and try again:
    rm -rf /tmp/mvnc.mutex
  6. Use the Model Optimizer to convert a TensorFlow Neural Network model (a sketch that inspects the generated IR is given after this list):
    python3 /opt/intel/openvino/deployment_tools/model_optimizer/mo_tf.py --reverse_input_channels --input_model /data_samples/shared_box_predictor/frozen_inference_graph.pb --transformations_config /opt/intel/openvino/deployment_tools/model_optimizer/extensions/front/tf/ssd_v2_support.json --tensorflow_object_detection_api_pipeline_config /data_samples/shared_box_predictor/pipeline.config --output_dir /data_samples/shared_box_predictor_ie
    Expected output:
    [ SUCCESS ] Generated IR version 10 model.
    [ SUCCESS ] XML file: /data_samples/shared_box_predictor_ie/frozen_inference_graph.xml
    [ SUCCESS ] BIN file: /data_samples/shared_box_predictor_ie/frozen_inference_graph.bin
    [ SUCCESS ] Total execution time: 32.58 seconds.
    [ SUCCESS ] Memory consumed: 1207 MB.
  7. After the conversion is done, run the converted Neural Network with the Inference Engine for CPU:
    object_detection_demo -i /data_samples/media_samples/plates_720.mp4 -m /data_samples/shared_box_predictor_ie/frozen_inference_graph.xml -d CPU -at ssd
    You should see a video with cars that are recognized by the Neural Network.
    Expected output:
    [ INFO ] InferenceEngine:
             API version ......... 2.1
             Build ........... 2021.2.0-1877-176bdf51370-releases/2021/2
    [ INFO ] Parsing input parameters
    [ INFO ] Reading input
    [ INFO ] Loading Inference Engine
    [ INFO ] Device info:
    [ INFO ] CPU
             MKLDNNPlugin version ......... 2.1
             Build ........... 2021.2.0-1877-176bdf51370-releases/2021/2
    Loading network files
    [ INFO ] Batch size is forced to 1.
    [ INFO ] Checking that the inputs are as the demo expects
    [ INFO ] Checking that the outputs are as the demo expects
    [ INFO ] Loading model to the device
    To close the application, press CTRL+C here or switch to the output window and press ESC or the q key.
    To switch between min_latency and user_specified modes, press the TAB key in the output window.
  8. Run the Neural Network again with the Inference Engine for integrated GPU:
    object_detection_demo -i /data_samples/media_samples/plates_720.mp4 -m /data_samples/shared_box_predictor_ie/frozen_inference_graph.xml -d GPU -at ssd
    You should see a video with cars that are recognized by the Neural Network.
    Expected output:
    [ INFO ] InferenceEngine:
             API version ......... 2.1
             Build ........... 2021.2.0-1877-176bdf51370-releases/2021/2
    [ INFO ] Parsing input parameters
    [ INFO ] Reading input
    [ INFO ] Loading Inference Engine
    [ INFO ] Device info:
    [ INFO ] GPU
             clDNNPlugin version ......... 2.1
             Build ........... 2021.2.0-1877-176bdf51370-releases/2021/2
    Loading network files
    [ INFO ] Batch size is forced to 1.
    [ INFO ] Checking that the inputs are as the demo expects
    [ INFO ] Checking that the outputs are as the demo expects
    [ INFO ] Loading model to the device
    To close the application, press CTRL+C here or switch to the output window and press ESC or the q key.
    To switch between min_latency and user_specified modes, press the TAB key in the output window.
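
If you want to double-check what the Model Optimizer produced in step 6 before running a demo, the short script below reads the generated IR and prints the available devices plus the network inputs and outputs. This is a minimal sketch, not part of the shipped samples: the file name check_ir.py is hypothetical, and it assumes you have already sourced setupvars.sh inside the container (so the OpenVINO™ 2021 Python bindings, openvino.inference_engine, are importable) and that the IR files exist at the paths used above.

    # check_ir.py -- hypothetical helper; run inside the container after setupvars.sh
    from openvino.inference_engine import IECore

    MODEL_XML = "/data_samples/shared_box_predictor_ie/frozen_inference_graph.xml"
    MODEL_BIN = "/data_samples/shared_box_predictor_ie/frozen_inference_graph.bin"

    ie = IECore()

    # List the inference devices visible inside this container (CPU, GPU, ...).
    print("Available devices:", ie.available_devices)

    # Read the IR produced by the Model Optimizer and print its tensors, so you can
    # confirm the conversion created the SSD input/output layout the demo expects.
    net = ie.read_network(model=MODEL_XML, weights=MODEL_BIN)
    for name, info in net.input_info.items():
        print("Input :", name, info.input_data.shape)
    for name, data in net.outputs.items():
        print("Output:", name, data.shape)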
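
The demo commands in steps 4, 5, 7, and 8 hide the underlying API calls. The sketch below shows, under the same assumptions as above (setupvars.sh sourced, IR files present, OpenCV available as cv2 in the container), roughly what a single SSD detection pass looks like with the Inference Engine Python API: it reads one frame from the sample video, runs it through the converted network, and prints the detections. The file name detect_one_frame.py and the 0.5 confidence threshold are illustrative choices, not part of the product; change DEVICE to "GPU" to mirror steps 5 and 8.

    # detect_one_frame.py -- hypothetical sketch of a single SSD inference pass
    import cv2
    from openvino.inference_engine import IECore

    MODEL_XML = "/data_samples/shared_box_predictor_ie/frozen_inference_graph.xml"
    MODEL_BIN = "/data_samples/shared_box_predictor_ie/frozen_inference_graph.bin"
    VIDEO = "/data_samples/media_samples/plates_720.mp4"
    DEVICE = "CPU"  # use "GPU" to target the integrated GPU instead

    ie = IECore()
    net = ie.read_network(model=MODEL_XML, weights=MODEL_BIN)
    input_blob = next(iter(net.input_info))
    output_blob = next(iter(net.outputs))
    n, c, h, w = net.input_info[input_blob].input_data.shape

    # Compile the network for the chosen device (this is what -d selects in the demo).
    exec_net = ie.load_network(network=net, device_name=DEVICE)

    # Grab a single frame from the sample video with OpenCV.
    cap = cv2.VideoCapture(VIDEO)
    ok, frame = cap.read()
    cap.release()
    if not ok:
        raise RuntimeError("Could not read a frame from " + VIDEO)

    # Resize to the network input size and reorder HWC -> NCHW.
    blob = cv2.resize(frame, (w, h)).transpose((2, 0, 1)).reshape((n, c, h, w))
    result = exec_net.infer(inputs={input_blob: blob})

    # SSD output has shape [1, 1, N, 7]:
    # [image_id, label, confidence, x_min, y_min, x_max, y_max] per detection.
    for detection in result[output_blob][0][0]:
        confidence = float(detection[2])
        if confidence > 0.5:  # arbitrary threshold for this sketch
            print("label %d, confidence %.2f" % (int(detection[1]), confidence))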

Summary and Next Steps

In this tutorial, you learned how to run Inference Engine object detection on a pretrained network using the SSD method, and how to run the detection demo application on both CPU and GPU. You also learned how to use the Model Optimizer to convert a TensorFlow Neural Network model and, once the conversion is done, how to run the converted network with the Inference Engine on CPU and GPU.
