Smart Video and Artificial Intelligence (AI) Workload Reference Implementation

Published: 04/22/2021



This reference implementation helps you quickly set up, and then tune via a configuration file, a core concurrent video analysis workload so you can obtain the best video codec, post-processing, and inference performance from an integrated Intel® Graphics Processing Unit (GPU). You can use the sample application, video_e2e_sample, for runtime performance evaluation or as a reference when debugging core video workload issues.

Select Configure & Download to download the reference implementation and the software listed below.  

Configure & Download

Time to Complete: 20 minutes

Available Software:

  • Intel® Media SDK 20.3.0 
  • Intel® Distribution of OpenVINO™ toolkit 2021.1 

Recommended Hardware
The hardware below is recommended for use with this reference implementation. See the Recommended Hardware page for other suggestions. 

Target System Requirements

  • 7th - 10th generation Intel® Core™ processors.
  • Ubuntu* 18.04.02. 
  • Intel® platforms supported by the Intel® Media SDK 20.3.0 and Intel® Distribution of OpenVINO™ toolkit 2021.1. 
  • See GitHub* for major platform dependencies for the back-end media driver. 
  • 250 GB Hard Drive. 
  • At least 16 GB RAM. 

How It Works

This reference implementation includes a concurrent end-to-end video-based media and AI workload along with the source code and software dependencies.  

An end-to-end workload is built easily by combining components such as the following:

  • Retrieving video streams from the network or local storage  
  • Decoding  
  • Post-processing  
  • Transcoding  
  • Video stream forwarding  
  • Composition  
  • Screen display 
  • AI inferencing 

This can significantly accelerate the evaluation and implementation cycle of edge video appliances (smart Network Video Recorder [NVR], video wall controller, video conferencing, AI box) on heterogeneous platforms consisting of Intel CPUs, integrated GPUs, and VPUs.

Figure 1: Overview Diagram
Figure 2: Architecture Diagram

Get Started

NOTE: Make sure you have a system with a user interface (UI) to run the application. You can execute the installation steps remotely, but you need a UI on the target system to verify and run the application. 

Step 1: Install the Reference Implementation 

Select Configure & Download to download the reference implementation and then follow the steps below to install it.  

Configure & Download


  1. Open a new terminal, go to the download folder, and unzip the downloaded reference implementation package (the archive name may vary depending on the configuration you selected):  
    unzip smart_video_and_ai_workload_reference_implementation.zip

  2.  Go to the smart_video_and_ai_workload_reference_implementation directory:
    cd smart_video_and_ai_workload_reference_implementation
  3. Change permission of the executable edgesoftware file: 
    chmod 755 edgesoftware
  4. Run the command below to install the Reference Implementation: 
    ./edgesoftware install


  5. During the installation, you will be prompted for the Product Key. The Product Key is contained in the email you received from Intel confirming your download. 
    Figure 3: Product key


  6. When the installation is complete, you see the message “Installation of package complete” and the installation status for each module. 
    Figure 4: Install Success

  7. Once the installation is successfully completed, open a new terminal to proceed with verification and running the reference implementation.

Step 2: Verify Application Dependency

Follow the steps below to verify the installation.

1. Go to the application directory:

cd Smart_Video_and_AI_workload_Reference_Implementation_2021.1/Smart_Video_and_AI_workload_Reference_Implementation/SMART-VIDEO-AND-AI-WORKLOAD

2. Set the Intel® Distribution of OpenVINO™ toolkit Environment Variables: 

source /opt/intel/openvino_2021/bin/setupvars.sh

3. If the installation is successful, run vainfo; you will see output similar to the following:

$ vainfo 
error: can't connect to X server! 
libva info: VA-API version 1.9.0 
libva info: User environment variable requested driver 'iHD' 
libva info: Trying to open /usr/lib/x86_64-linux-gnu/dri/ 
libva info: Found init function __vaDriverInit_1_9 
libva info: va_openDriver() returns 0 
vainfo: VA-API version: 1.9 (libva 2.9.0) 
vainfo: Driver version: Intel iHD driver for Intel(R) Gen Graphics - 20.3.0 (dcc5f0e) 
vainfo: Supported profile and entrypoints 
 VAProfileNone : VAEntrypointVideoProc 
 VAProfileNone : VAEntrypointStats 
 VAProfileMPEG2Simple: VAEntrypointVLD 
 VAProfileMPEG2Simple: VAEntrypointEncSlice 
 VAProfileMPEG2Main: VAEntrypointVLD 
 VAProfileMPEG2Main: VAEntrypointEncSlice 
 VAProfileH264Main: VAEntrypointVLD 
 VAProfileH264Main: VAEntrypointEncSlice 
 VAProfileH264Main: VAEntrypointFEI 
 VAProfileH264Main: VAEntrypointEncSliceLP 
 VAProfileH264High: VAEntrypointVLD 
 VAProfileH264High: VAEntrypointEncSlice 
 VAProfileH264High: VAEntrypointFEI 
 VAProfileH264High: VAEntrypointEncSliceLP 
 VAProfileVC1Simple: VAEntrypointVLD

NOTE: If you do not see the output above, use the command below to check if there are any missing libraries: 

$ ldd ./bin/video_e2e_sample | grep "not found"

NOTE: If any libraries are not found, the installation did not complete. Contact your Intel account manager and include the output of the command above in an email.
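The dependency check above can be scripted to print a clear pass/fail result. The sketch below uses /bin/sh as a stand-in binary so it runs anywhere; on the target system, substitute ./bin/video_e2e_sample:

```shell
# Sketch: report missing shared libraries for a binary.
# /bin/sh is a stand-in; use ./bin/video_e2e_sample on the target system.
BIN=/bin/sh
missing=$(ldd "$BIN" | grep "not found" || true)
if [ -z "$missing" ]; then
    echo "all shared libraries resolved"
else
    echo "missing libraries:"
    echo "$missing"
fi
```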

Run the Application to Visualize the Output

1. Go to the application directory. (Skip this step if you are already in the directory.)

cd Smart_Video_and_AI_workload_Reference_Implementation_2021.1/Smart_Video_and_AI_workload_Reference_Implementation/SMART-VIDEO-AND-AI-WORKLOAD

2. Set the Intel® Distribution of OpenVINO™ toolkit Environment Variables: 

source /opt/intel/openvino_2021/bin/setupvars.sh

3. Two sample videos are provided with the application for testing: svet_intro.h264 and car_1080p.h264. These files are in the application directory. They are in “.h264” format because they are the elementary streams extracted from their base files. The svet_intro.h264 file is used to test the n16_face_detection_1080p.par file, and the car_1080p.h264 file is used to test the n4_vehicel_detect_1080p.par file. Learn more about these par files in Step 4. 

Optional: If you are using a custom .mp4 video file (e.g., classroom.mp4), use the command below to extract the elementary stream from the MP4 file, and then use the par files for inferencing: 

ffmpeg -i classroom.mp4 -vcodec copy -an -bsf:v h264_mp4toannexb classroom.h264


NOTE: classroom.mp4 is a sample video provided with the package, so you will find it inside the application directory. Instead of classroom.mp4, you can place any MP4 file (for example, sample.mp4) in the directory and run the command above with the file names adjusted to extract its elementary stream. The extracted stream is written to the same directory as the source MP4 file; for classroom.mp4, it is named classroom.h264.
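The extracted stream keeps the source file's base name with an .h264 extension. As a small sketch, the extraction command for any MP4 can be derived from its file name (classroom.mp4 is the sample shipped with the package; swap in your own):

```shell
# Derive the extraction command for an arbitrary MP4 file name.
src=classroom.mp4
out="${src%.mp4}.h264"   # same base name, .h264 extension
echo "ffmpeg -i $src -vcodec copy -an -bsf:v h264_mp4toannexb $out"
```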

4. From the current working directory, navigate to the inference directory, which contains par files for different models and different numbers of channels. 

cd par_file/inference/

NOTE: For example, here we use the face detection model with 16 channels for the sample video svet_intro.h264. To use it, choose the file named n16_face_detection_1080p.par and follow the instructions from Step 5. If you are using car_1080p.h264, choose the file named n4_vehicel_detect_1080p.par and continue with Step 5.

5. Open the par file n16_face_detection_1080p.par in any editor (vi/vim). Each line beginning with “-i::h264” represents a video channel. In every line of the par file, replace the video path after “-i::h264” with the absolute path of the converted video or elementary stream, then follow Step 6. 

vi n16_face_detection_1080p.par


NOTE: n16_face_detection_1080p.par is used here as an example. Depending on your use case, you can use any of the par files listed there. Make sure you use the absolute path of the elementary stream extracted from the sample video clip in every line, so it is rendered to that channel. (In this case, give the absolute path of svet_intro.h264 [e.g., /home/user/video/svet_intro.h264] in each line wherever /<path>/<filename>.h264 appears.) 
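Editing 16 lines by hand is error-prone, so the path substitution can also be done with a single sed command. The snippet below is a sketch on a two-line demo file; the real n16_face_detection_1080p.par has 16 channel lines, and the other per-channel options are elided here:

```shell
# Create a two-channel demo par file (the real file has 16 such lines
# plus additional options on each line, elided for this sketch).
cat > demo.par <<'EOF'
-i::h264 /<path>/<filename>.h264
-i::h264 /<path>/<filename>.h264
EOF

# Replace the placeholder path on every channel line with the absolute
# path of the extracted elementary stream:
sed -i 's#/<path>/<filename>.h264#/home/user/video/svet_intro.h264#g' demo.par
cat demo.par
```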


6. There are two ways to run the application: using -rx11 or using -rdrm-DisplayPort.

  • Run application with -rx11 
    • Use this method if you want to run video_e2e_sample as a normal user or with an X11 display. 
    • After making the changes in the par file, move to the last line, replace “-rdrm-DisplayPort” with “-rx11”, and save the file. Then follow the instructions from Step 7. 

NOTE: The -rdrm-DisplayPort method below requires switching your system to text mode, so do not use it on a remote system; you will not be able to see the output screen.

  • Run application with -rdrm-DisplayPort  

NOTE: Be physically present in front of the system, because the output from this method is shown on the monitor of the system where you installed the application. If there are live VNC (Virtual Network Computing) sessions, close them first. The “-p” option preserves the current user's environment variable settings.

  • By default, the par files come with this option. 
    • Make necessary changes with the chosen par file and make sure the last line contains “-rdrm-DisplayPort” where it is mentioned and save the file. 
    • To run with “-rdrm-DisplayPort” in the par file, you must switch Ubuntu to text mode by using Ctrl + Alt + F3.  
    • After switching to text mode, login with your username [if prompted]. 
    • Move to the application directory [path mentioned in Step 1]. 
    • Source Intel® Distribution of OpenVINO™ toolkit environment as mentioned in Step 2. 
    • Use the command “su -p” to switch to the root user while preserving the environment settings; DRM direct rendering requires root permission and no running X clients.  
    • If you are using this method, since you are already in the application directory, skip Step 7 and proceed with Step 8.  
    • You can see output on the monitor. 
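Switching between the two render targets is a one-line change at the end of the par file, which can also be scripted. The sketch below uses a minimal demo file; real par files carry the full pipeline options before the render flag:

```shell
# Demo: the last line of a par file carries the render target
# (real par files have more options on that line, elided here).
cat > demo.par <<'EOF'
-i::h264 /home/user/video/svet_intro.h264
-rdrm-DisplayPort
EOF

# Switch from DRM rendering to X11 rendering:
sed -i 's/-rdrm-DisplayPort/-rx11/' demo.par
tail -n 1 demo.par
```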

7. After making the changes, navigate to the application directory.  

cd ../../

8. Execute the following command to run the application using the par file:

./bin/video_e2e_sample -par par_file/inference/n16_face_detection_1080p.par


   You will see the output shown below: 

Figure 5: Output from Running Par File


NOTE: If you want to stop the application, press Ctrl + c in the bash shell. 

To learn more about running other par files, refer to GitHub.  

Summary and Next Steps 

This reference implementation supports core video analytic workloads for digital surveillance and retail use cases. 

Learn More 

To continue learning, see the following guides and software resources: 

Support Forum 

If you're unable to resolve your issues, contact the Support Forum.

Product and Performance Information


Performance varies by use, configuration and other factors. Learn more at