The Intel® Distribution of OpenVINO™ toolkit includes many demo vision applications intended to teach developers how to design and integrate their own applications with the toolkit. The demos range from simple image classification to human emotion detection; whatever your use case, you can find valuable information in these demos.
Note: The Inference Engine demos are covered under the Apache* 2.0 license, giving you the freedom to modify them for your purposes. Be aware that other parts of the Intel® Distribution of OpenVINO™ toolkit are covered by different licenses. More information can be found in this file: C:\Program Files (x86)\IntelSWTools\openvino\licensing\readme.txt.
One of the demos included is the Human Pose Estimation demo, which showcases multi-person 2D human pose estimation. Its purpose is to predict the current body pose of every recognizable person in a video.
Note: More information about the demo can be found in the README distributed with the demo in the demo folder.
The demo uses a pre-trained pose estimation network named human-pose-estimation-0001. This model is available in the Open Model Zoo and can be fetched using the Model Downloader included in the toolkit. The network is a pre-trained multi-person 2D pose estimation network with a tuned MobileNet* v1 as a feature extractor. The demo takes a video stream (a camera, a network stream, or a video file) as input and generates keypoints and the connections between them, building a skeleton that reflects the current pose.
This article walks through setting up and running the demo on Windows, using both your existing Intel® Core™ processor and the Intel® Neural Compute Stick 2 (Intel® NCS 2). Before we begin, make sure that you meet the prerequisites.
Make sure you have completed the following steps. Many of them may already have been completed during the installation of the Intel® Distribution of OpenVINO™ toolkit, but verify that everything is installed.
Once all of the prerequisites are met, you can build the demos. The demos ship as source code, giving you the ability to study and modify them for your uses. To build the demos and their Visual Studio solutions, a script named build_demos_msvc.bat is provided in C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\open_model_zoo\demos\. Run it in an elevated command prompt to build the demos using the command-line Build Tools for Visual Studio. If you’ve already built the demos, you can skip this step.
Note: This article assumes you’ve installed Intel® Distribution of OpenVINO™ toolkit into the default install directory, located at C:\Program Files (x86)\IntelSWTools\openvino\. If you’ve changed the installation directory, make sure to change your paths.
After the build completes, the built demos and their solutions are placed in %USERPROFILE%\Documents\Intel\OpenVINO\omz_demos_build\. The master Visual Studio solution (.sln) is located in this folder, and the individual project files are located in their respective subfolders. The application binaries are in intel64\Release\.
You’ll need to follow some steps to set up the proper environment variables and ensure that you have the right network models. Follow the instructions below:
To begin, open an elevated command prompt and scope to the OpenVINO installation directory. Run the setupvars.bat script in the \bin\ folder to set the environment variables for your current session.
cd "C:\Program Files (x86)\IntelSWTools\openvino\bin"
setupvars.bat
Note: You need to run this script every time you open a new shell. Alternatively, you can add the environment variables to your system so that they are set every time a new command prompt is opened.
The model that we will be using for this demo is the human-pose-estimation-0001 network available in the Open Model Zoo. You can fetch this model using the Model Downloader, a Python script distributed with the Intel® Distribution of OpenVINO™ toolkit, located at C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\tools\model_downloader\. The following example command fetches human-pose-estimation-0001 and places it in a subfolder named Transportation in the same folder as the Model Downloader:
python downloader.py --name human-pose-estimation-0001
You can use the -o flag to change the output directory if desired.
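As an illustration, combining the model name with a custom output directory looks like this (the C:\models path below is hypothetical; substitute your own):

```shell
:: Download the model, placing it under a custom folder instead of the
:: downloader's default location. The downloader still creates its
:: category subfolders (e.g. Transportation\...) underneath this path.
python downloader.py --name human-pose-estimation-0001 -o C:\models
```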
Note: If you also have Python 2.7 installed on your system, the python command may point to that version. In that case, use python3 to access Python 3.
You can also fetch the model files directly from the Open Model Zoo repository.
Download the FP16 version for inference on the Intel® NCS 2. Make sure you download both the .bin file and the .xml file and place them in the same folder.
The Intel® Neural Compute Stick 2 requires an FP16 model, that is, a model with a floating-point precision of 16 bits. FP16 models allow inference with nearly the same precision as classical FP32 models, but with lower computational and memory overhead. OpenVINO 2019 R2 supports FP16 models with every plugin, including the MYRIAD plugin, which supports the Intel® NCS 2.
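To make the precision trade-off concrete, the standalone snippet below (an illustration only, not part of the demo or the toolkit) round-trips a value through IEEE 754 half precision using nothing but the Python standard library:

```python
import struct

def to_fp16(x):
    # Pack as IEEE 754 half precision ('e' format, Python 3.6+),
    # then unpack back to a Python float, exposing the rounding
    # that FP16 storage introduces.
    return struct.unpack('e', struct.pack('e', x))[0]

print(to_fp16(1.0))  # 1.0 -- exactly representable in FP16
print(to_fp16(0.1))  # 0.0999755859375 -- only ~3 decimal digits survive
```

The 10-bit FP16 mantissa keeps roughly three significant decimal digits, which is why network weights usually survive the FP32-to-FP16 conversion with only a small accuracy loss.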
With your model and your demo video, you’re ready to run the demo. If you’ve closed your command prompt since the earlier steps, rerun setupvars.bat in the OpenVINO installation directory to set the proper environment variables as above. Then change to the folder that contains the demo binaries.
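Based on the build output location mentioned earlier, that step looks like this (assuming the default build directory):

```shell
:: Change to the folder containing the built demo executables
cd "%USERPROFILE%\Documents\Intel\OpenVINO\omz_demos_build\intel64\Release"
```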
The demos are command-line programs that use flags as options for running. The full list of options for the demo can be seen by running it with the -h flag:
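For example:

```shell
:: Print the full list of command-line options for the demo
human_pose_estimation_demo.exe -h
```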
The demo can work with an active camera such as the integrated webcam in your development laptop or monitor.
human_pose_estimation_demo.exe -m "C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\tools\model_downloader\Transportation\human_pose_estimation\mobilenet-v1\dldt\FP16\human-pose-estimation-0001.xml" -i cam -d MYRIAD
Note: The cam input tells OpenCV to look for a connected camera. OpenCV will find the first available camera device; for simplicity, make sure your desired camera is the only one activated.
The demo takes the camera input and displays it while estimating human poses. The MYRIAD device selector activates the MYRIAD plugin, which loads networks onto the Intel® NCS 2 and manages inference. An FP16 model is required for use with the MYRIAD plugin. The Open Model Zoo provides FP32 and FP16 versions of compatible networks, such as the one we are using here.
These demos can also be run on any computer with at least a 6th Generation Intel® Core Processor.
Scope to the location of the demo, changing <current user> to reflect your Windows user.
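Using the build output location from earlier, the command looks like this (replace <current user> with your Windows user name):

```shell
cd "C:\Users\<current user>\Documents\Intel\OpenVINO\omz_demos_build\intel64\Release"
```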
Finally, use the following command to run the demo using the CPU and your integrated webcam.
human_pose_estimation_demo.exe -i cam -m "C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\tools\model_downloader\Transportation\human_pose_estimation\mobilenet-v1\dldt\FP16\human-pose-estimation-0001.xml" -d CPU
We encourage you to explore the human_pose_estimation project to see how the code interacts with the network and the Inference Engine, and to learn the best ways to integrate your application with the Intel® Distribution of OpenVINO™ toolkit.
Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors. These optimizations include SSE2, SSE3, and SSSE3 instruction sets and other optimizations. Intel does not guarantee the availability, functionality, or effectiveness of any optimization on microprocessors not manufactured by Intel. Microprocessor-dependent optimizations in this product are intended for use with Intel microprocessors. Certain optimizations not specific to Intel microarchitecture are reserved for Intel microprocessors. Please refer to the applicable product User and Reference Guides for more information regarding the specific instruction sets covered by this notice.
Notice revision #20110804