Intelligent Traffic Management Reference Implementation

Overview

Intelligent Traffic Management is designed to detect and track vehicles and pedestrians and to estimate a safety metric for an intersection. Object tracking recognizes the same object across successive frames, making it possible to estimate the trajectories and speeds of the objects. The reference implementation also detects collisions and near-miss collisions. A real-time dashboard visualizes the intelligence extracted from the traffic intersection along with the annotated video stream(s).

This collected intelligence can be used to adjust traffic lights to optimize the traffic flow of the intersection, or to evaluate and enhance the safety of the intersection by allowing emergency services notifications, e.g., 911 calls, to be triggered by collision detection, reducing emergency response times.

Table 1
Time to Complete: 30 - 40 minutes
Programming Language: C++
Software: Intel® Distribution of OpenVINO™ toolkit 2020 Release

 

Target System Requirements

  • Ubuntu* 18.04.3 LTS
  • 6th to 10th Generation Intel® Core™ processors with Iris® Pro graphics or Intel® HD Graphics
  • USB webcam (optional)

Recommended Development Kits

How It Works

The application uses DLStreamer, included in the Intel® Distribution of OpenVINO™ toolkit. Initially, the pipeline is executed with the provided input video feed and models. DLStreamer preprocesses the input and performs inference according to the pipeline settings. The inference results are parsed using a callback function and fed to the tracking, collision, and areas-of-interest functions. The sections below explain the flow and features in detail.
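Below is a minimal sketch, not the reference implementation's actual source, of how such a DLStreamer pipeline could be launched from C++ with gst_parse_launch (the file paths, element ordering, and sink choice are illustrative assumptions):

#include <gst/gst.h>

int main(int argc, char *argv[]) {
    gst_init(&argc, &argv);

    // Illustrative pipeline: decode the input video, run inference with
    // gvadetect, overlay the results, and display them.
    GError *error = NULL;
    GstElement *pipeline = gst_parse_launch(
        "filesrc location=../data/video1_640x320.mp4 ! decodebin ! "
        "gvadetect model=../models/person-vehicle-bike-detection-crossroad-0078/"
        "FP16/person-vehicle-bike-detection-crossroad-0078.xml "
        "model-proc=../model_proc/person-vehicle-bike-detection-crossroad-0078.json "
        "device=CPU ! gvawatermark ! videoconvert ! autovideosink",
        &error);
    if (!pipeline) {
        g_printerr("Failed to build pipeline: %s\n", error->message);
        g_error_free(error);
        return 1;
    }

    // Run until end-of-stream or error.
    gst_element_set_state(pipeline, GST_STATE_PLAYING);
    GstBus *bus = gst_element_get_bus(pipeline);
    GstMessage *msg = gst_bus_timed_pop_filtered(
        bus, GST_CLOCK_TIME_NONE,
        (GstMessageType)(GST_MESSAGE_ERROR | GST_MESSAGE_EOS));
    if (msg)
        gst_message_unref(msg);
    gst_object_unref(bus);
    gst_element_set_state(pipeline, GST_STATE_NULL);
    gst_object_unref(pipeline);
    return 0;
}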

Detection

Detections are performed using the gvadetect plugin element in the DLStreamer pipeline. The pipeline consists of a detection model, a model proc file, a target device, and an input stream. The object Regions of Interest (ROIs) are obtained from DLStreamer callback functions and appended to a result vector, which is used for further processing.
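gvadetect attaches standard GstVideoRegionOfInterestMeta to each buffer, so the ROIs can be collected in a callback. The sketch below shows one way to do that; the probe placement, Detection struct, and result vector are illustrative assumptions, not the RI's actual code:

#include <gst/gst.h>
#include <gst/video/gstvideometa.h>
#include <vector>

struct Detection { guint x, y, w, h; const gchar *label; };
static std::vector<Detection> results;  // result vector used for further processing

// Pad probe callback: collect every ROI attached to the buffer by gvadetect.
static GstPadProbeReturn on_buffer(GstPad *, GstPadProbeInfo *info, gpointer) {
    GstBuffer *buf = GST_PAD_PROBE_INFO_BUFFER(info);
    gpointer state = NULL;
    GstMeta *meta;
    while ((meta = gst_buffer_iterate_meta_filtered(
                buf, &state, GST_VIDEO_REGION_OF_INTEREST_META_API_TYPE))) {
        GstVideoRegionOfInterestMeta *roi = (GstVideoRegionOfInterestMeta *)meta;
        results.push_back({roi->x, roi->y, roi->w, roi->h,
                           g_quark_to_string(roi->roi_type)});
    }
    return GST_PAD_PROBE_OK;
}

// The probe would be attached to the gvadetect element's source pad, e.g.:
//   GstPad *pad = gst_element_get_static_pad(detect_element, "src");
//   gst_pad_add_probe(pad, GST_PAD_PROBE_TYPE_BUFFER, on_buffer, NULL, NULL);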

Tracking

Once the detections are obtained, the Region of Interest (ROI) results are added to the tracker system, which begins tracking the objects over successive frames. The results are updated every frame to feed the tracker with new information (a simplified sketch follows this list):

  1. If there is a new object, add it to the tracking system.

  2. If the tracker lost an object that is detected again, re-add it.

  3. If the ROI of an object has moved away from the original object, reset the tracker information for that object.

  4. If a detected object is about to exit the scene, remove the object.

  5. If a tracked object is lost (its detection missed because of obstacles or a model miss), remove the object.
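The simplified sketch below illustrates how these rules could be applied each frame; the data structures and the drift and missed-frame thresholds are assumptions, not the RI's source:

#include <cmath>
#include <map>
#include <set>
#include <vector>

struct ROI { int id; double x, y, w, h; };

class Tracker {
    std::map<int, ROI> tracked_;   // object id -> last known ROI
    std::map<int, int> missed_;    // object id -> consecutive missed frames
    static constexpr int kMaxMissed = 10;        // assumed limit before removal
    static constexpr double kDriftLimit = 50.0;  // assumed drift threshold (pixels)

    static bool near_border(const ROI &r, int w, int h) {
        return r.x <= 0.0 || r.y <= 0.0 || r.x + r.w >= w || r.y + r.h >= h;
    }

public:
    void update(const std::vector<ROI> &detections, int frame_w, int frame_h) {
        std::set<int> seen;
        for (const ROI &det : detections) {
            seen.insert(det.id);
            auto it = tracked_.find(det.id);
            if (it == tracked_.end()) {
                tracked_[det.id] = det;   // rules 1-2: new or re-found object
            } else if (std::hypot(det.x - it->second.x,
                                  det.y - it->second.y) > kDriftLimit) {
                it->second = det;         // rule 3: ROI moved, reset tracker info
            } else {
                it->second = det;
            }
            missed_[det.id] = 0;
            if (near_border(det, frame_w, frame_h)) {
                tracked_.erase(det.id);   // rule 4: object about to exit the scene
                missed_.erase(det.id);
            }
        }
        // Rule 5: remove tracked objects the detector has missed for too long
        // (hidden behind obstacles or missed by the model).
        for (auto it = tracked_.begin(); it != tracked_.end();) {
            if (!seen.count(it->first) && ++missed_[it->first] > kMaxMissed) {
                missed_.erase(it->first);
                it = tracked_.erase(it);
            } else {
                ++it;
            }
        }
    }
};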

Collision Detection

With object tracking enabled, every object's current and past positions can be retrieved. Object locations are averaged through a sliding window (width of 5) to filter the noise of the detection models. The velocity is derived by calculating the difference between object locations at two consecutive positions. The speed is normalized by a factor of 1/y (where y is the vertical position of the object in the image), because objects moving closer to the camera appear to move faster (in terms of pixels/frame) than they really do. Once the velocities of each tracked object are obtained, the acceleration is calculated analogously.
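A minimal sketch of that computation, assuming pixel coordinates and one sample per frame (the class and member names are illustrative):

#include <algorithm>
#include <cmath>
#include <deque>
#include <utility>

struct Point { double x, y; };

class Kinematics {
    static const size_t kWindow = 5;   // sliding-window width
    std::deque<Point> window_;
    Point prev_avg_{0.0, 0.0};
    double prev_speed_ = 0.0;
    bool first_ = true;

public:
    // Feed the newest detected position; returns {speed, acceleration}.
    std::pair<double, double> update(const Point &p) {
        window_.push_back(p);
        if (window_.size() > kWindow)
            window_.pop_front();

        // Average the positions in the window to filter detector noise.
        Point avg{0.0, 0.0};
        for (const Point &q : window_) { avg.x += q.x; avg.y += q.y; }
        avg.x /= window_.size();
        avg.y /= window_.size();

        if (first_) { prev_avg_ = avg; first_ = false; }

        // Velocity: difference of two consecutive (averaged) positions,
        // normalized by 1/y because objects near the camera cover more
        // pixels per frame without actually moving faster.
        double dx = avg.x - prev_avg_.x;
        double dy = avg.y - prev_avg_.y;
        double speed = std::sqrt(dx * dx + dy * dy) / std::max(avg.y, 1.0);

        // Acceleration: computed analogously from consecutive speeds.
        double accel = speed - prev_speed_;

        prev_avg_ = avg;
        prev_speed_ = speed;
        return {speed, accel};
    }
};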

A vehicle is assumed to be in a dangerous situation when a sudden spike of acceleration is detected. A spike is defined as the difference between the current acceleration and the average of the previous three. If this difference is larger than an empirically defined value (which can be tuned), it is considered a possible collision. If the ROI of an object flagged as being in a dangerous situation overlaps with any other object's ROI, it is considered a near miss. If the other object is also flagged as being in a dangerous situation, it is considered a collision.
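A sketch of the spike test and the overlap-based classification (the threshold handling and data layout are illustrative assumptions):

#include <numeric>
#include <vector>

struct Box { double x, y, w, h; };

// Axis-aligned overlap test between two ROIs.
static bool overlaps(const Box &a, const Box &b) {
    return a.x < b.x + b.w && b.x < a.x + a.w &&
           a.y < b.y + b.h && b.y < a.y + a.h;
}

// Spike: current acceleration minus the average of the previous three
// samples, compared against an empirically tuned threshold.
static bool acceleration_spike(double current, const std::vector<double> &history,
                               double threshold) {
    if (history.size() < 3)
        return false;
    double avg = std::accumulate(history.end() - 3, history.end(), 0.0) / 3.0;
    return current - avg > threshold;
}

enum class Event { None, NearMiss, Collision };

// Classify an overlap between two tracked objects.
static Event classify(bool a_flagged, bool b_flagged, const Box &a, const Box &b) {
    if (!overlaps(a, b))
        return Event::None;
    if (a_flagged && b_flagged)
        return Event::Collision;   // both objects in a dangerous situation
    if (a_flagged || b_flagged)
        return Event::NearMiss;    // only one object flagged
    return Event::None;
}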

Near misses are an extension of collisions. The application applies thresholds to speed and acceleration and searches for other vehicles in the neighborhood of the offender to detect near misses. The following is an example of a near-miss scenario:

Cars going through an intersection (in perpendicular directions) suddenly decrease their velocity and change their direction to avoid a collision.

Areas of Interest

The Areas of Interest feature lets the user define the boundaries of the space where object detection takes place, so that all objects that pass through or stay within the area are properly identified and tracked. The user can select a specific rectangular region of interest on the first frame of the video. The areas involved are streets, sidewalks, and crosswalks. Based on these areas of interest, rules are defined for detecting dangerous scenarios, i.e., a car going from the street to the sidewalk, a car with zero velocity in a crosswalk, or cars in a crosswalk with pedestrians on it.
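A simplified sketch of these rules, under an assumed zone model (the enum, struct, and pedestrian flag are illustrative, not the RI's actual data model):

enum class Zone { Street, Sidewalk, Crosswalk, None };

struct TrackedVehicle {
    Zone zone;        // zone the vehicle currently occupies
    Zone prev_zone;   // zone it occupied on the previous frame
    double speed;     // normalized speed from the tracker
};

// Returns true when the vehicle violates one of the dangerous-scenario rules.
bool dangerous(const TrackedVehicle &v, bool pedestrian_in_same_crosswalk) {
    // Rule 1: car moving from the street onto the sidewalk.
    if (v.prev_zone == Zone::Street && v.zone == Zone::Sidewalk)
        return true;
    // Rule 2: car with zero velocity in a crosswalk.
    if (v.zone == Zone::Crosswalk && v.speed == 0.0)
        return true;
    // Rule 3: car in a crosswalk while pedestrians are on it.
    if (v.zone == Zone::Crosswalk && pedestrian_in_same_crosswalk)
        return true;
    return false;
}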

The data obtained from the features above is stored in InfluxDB for analysis and visualized on a Grafana dashboard.
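As a hedged illustration (the database name, measurement, and fields are assumptions, not the RI's actual schema), a data point can be written to InfluxDB 1.x through its HTTP line-protocol write API, which the Grafana dashboard then queries:

#include <cstdio>
#include <curl/curl.h>

int main() {
    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL *curl = curl_easy_init();
    if (curl) {
        // Line protocol: <measurement>,<tags> <fields>
        // Measurement "traffic" and database "itm" are illustrative names.
        const char *point =
            "traffic,intersection=cam0 vehicles=12i,collisions=1i,near_misses=2i";
        curl_easy_setopt(curl, CURLOPT_URL, "http://localhost:8086/write?db=itm");
        curl_easy_setopt(curl, CURLOPT_POSTFIELDS, point);
        CURLcode res = curl_easy_perform(curl);
        if (res != CURLE_OK)
            std::fprintf(stderr, "InfluxDB write failed: %s\n",
                         curl_easy_strerror(res));
        curl_easy_cleanup(curl);
    }
    curl_global_cleanup();
    return 0;
}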

Figure 1: Architecture Diagram

 

Get Started

Install the Reference Implementation

Follow the steps below to install the Reference Implementation.

NOTE: If the host system already has Docker images and containers, you might encounter errors while building the reference implementation packages. If you do encounter errors, refer to the Troubleshooting section at the end of this document before starting the reference implementation installation.

1. Open a new terminal, go to the downloaded folder and unzip the downloaded RI package.

unzip intelligent_traffic_management.zip

2. Go to intelligent_traffic_management/ directory.

cd intelligent_traffic_management/

3. Change permission of the executable edgesoftware file.

chmod 755 edgesoftware

4. Run the command below to install the Reference Implementation:

./edgesoftware install

5. During the installation, you will be prompted for the Product Key. The Product Key is contained in the email you received from Intel confirming your download.

6. When the installation is complete, you see the message “Installation of package complete” and the installation status for each module.

 

NOTE: If you encounter any issues, please refer to the Troubleshooting section at the end of this document. Installation failure logs will be available at path /var/log/esb-cli/Intelligent_Traffic_Management_<version>/<Component_name>/install.log.

7. Go to the working directory:

cd Intelligent_Traffic_Management_<version>/Intelligent_Traffic_Management/intelligent-traffic-management/

NOTE: <version> is the Intel® Distribution of OpenVINO™ toolkit version downloaded.

Build the Docker Image

1. From the working directory, i.e., intelligent-traffic-management, run the command below to build the smart city image:

sudo docker build --no-cache -t smart_city .

NOTE: If network calls fail during the image build on a corporate network, e.g., an ‘apt-get update’ error, please refer to the Troubleshooting section at the end of this document.

2. To check the smart_city Docker image, run the command below:

sudo docker images  

3. If the Docker image was built successfully, the output will be similar to the following:

Run the Application with Test Video

The steps below use a test video included with the Reference Implementation. For instructions on how to run the application with feed from a camera or how to use other videos, see the Optional Steps section.

To run the application, use the command below:

NOTE: By default, the Tracking and Collision features are enabled in the run.sh file.

chmod +x run.sh
./run.sh

 

NOTE: For errors such as ‘could not open display (null)’, please run the command below and re-run the application.

xhost +

Run the Application with Other Features

 

1. To Enable Tracking:

Run the command with the --tracking argument:

sudo docker run -it --net=host --env DISPLAY=$DISPLAY --memory 500m --health-cmd='stat /etc/passwd || exit 1' --volume $HOME/.Xauthority:/root/.Xauthority:rw smart_city /bin/bash -c "source /opt/intel/openvino/bin/setupvars.sh && ./smart_city --input ../data/video1_640x320.mp4 --vp_model ../models/person-vehicle-bike-detection-crossroad-0078/FP16/person-vehicle-bike-detection-crossroad-0078.xml --vp_proc ../model_proc/person-vehicle-bike-detection-crossroad-0078.json --tracking"  

 

Tracking

2. To Enable Collisions:

Run the command with the --tracking and --collision arguments:

sudo docker run -it --net=host --env DISPLAY=$DISPLAY --memory 500m --health-cmd='stat /etc/passwd || exit 1' --volume $HOME/.Xauthority:/root/.Xauthority:rw smart_city /bin/bash -c "source /opt/intel/openvino/bin/setupvars.sh && ./smart_city --input ../data/video82.mp4 --vp_model ../models/person-vehicle-bike-detection-crossroad-0078/FP16/person-vehicle-bike-detection-crossroad-0078.xml --vp_proc ../model_proc/person-vehicle-bike-detection-crossroad-0078.json --tracking --collision --threshold 0.7"   

 

Collisions

 

3. To Draw Areas of Interest:

Run the command with the --show_selection argument:

sudo docker run -it --net=host --env DISPLAY=$DISPLAY --memory 500m --health-cmd='stat /etc/passwd || exit 1' --volume $HOME/.Xauthority:/root/.Xauthority:rw smart_city /bin/bash -c "source /opt/intel/openvino/bin/setupvars.sh && ./smart_city --input ../data/video82.mp4 --vp_model ../models/person-vehicle-bike-detection-crossroad-0078/FP16/person-vehicle-bike-detection-crossroad-0078.xml --vp_proc ../model_proc/person-vehicle-bike-detection-crossroad-0078.json --tracking --collision --show_selection"  

After executing the above command, the video will freeze, and a window will open prompting you to specify the rectangular area of interest.

Crop the Area of Interest: Draw a rectangle by clicking and dragging with the mouse. Press F to select the entire frame.

Cropped Frame

 

Draw Region of Interest: After cropping the area of interest, a window opens for drawing the regions of interest that add information to the near-miss logic.

  • Press S to draw streets. This is enabled by default. Use the left mouse button to draw the contours of the regions (with at least 3 points).

  • To draw a new area, press N.

  • In particular, when drawing streets (after pressing N), a window prompts you to define an orientation (n, s, e, w). Below is a sample image of street selection.

Streets

 

  • Press W to draw the sidewalk areas of interest. To define a new area, press N. Below is a sample image of sidewalk selection.
Sidewalks

 

  • Press Z to draw the crosswalk areas of interest. To define a new area, press N. Below is a sample image of crosswalk selection.
Crosswalks

 

  • Once all the areas of interest are specified, press F.
Output with all areas of interest specified

 

After specifying the areas of interest, the cropped area, which is considered for detection and tracking, is slightly highlighted. If any of the rules below are violated, the corresponding area is highlighted in red.

  1. Car moving from the street to the sidewalk

  2. Car with zero velocity in a crosswalk

  3. Car in a crosswalk while pedestrians are on it

Detection

Optional Steps 

Step 1: Test with Custom Input Video

A test video is already included. If you want to run the application with other video files, copy them to the <path_to_intelligent_traffic_management>/app/data directory and specify the path of the video file in run.sh as shown below:

INPUT1="../data/<name_of_the_video_file>"

Step 2: Test with USB Camera

If you want to test with a USB camera, specify the device index in the run.sh file. On Ubuntu, to list all available video devices, use the following command:

ls /dev/video*

For example, if the output of the above command is /dev/video0, then update the INPUT1 variable and the command that runs the application in the run.sh file as shown below:

INPUT1="/dev/video0"
sudo docker run -it --net=host --env DISPLAY=$DISPLAY --memory 500m --health-cmd='stat /etc/passwd || exit 1' --volume /dev:/dev --volume $HOME/.Xauthority:/root/.Xauthority:rw smart_city /bin/bash -c "source /opt/intel/openvino/bin/setupvars.sh && ./smart_city --input $INPUT1 --vp_model $PERSON_DETECTOR_MODEL --vp_proc $PERSON_DETECTOR_PROC --tracking --collision"

 

Step 3: Create the Model Proc File for Other Models

JSON files such as pedestrian-detection-adas-0002.json and person-vehicle-bike-detection-crossroad-0078.json from the <path-to-intelligent-traffic-management>/app/model_proc directory have input_preproc and output_postproc keys, which contain information about the model, for example, labels, the color format of the model, and the name of the output layer. If you want to customize these model proc JSON files, refer to the two sections below.

input_preproc section:
  • color_format: Fill this field with the input format of the model (BGR, RGB, etc.). Below is the sample format for the input_preproc section:

"input_preproc": [
  {
    "color_format": "BGR"
  }
]
output_postproc section:
  • layer_name: This parameter defines the output layer of the model from which to get the results.
  • labels: This field contains the labels that the model was trained on. Fill in the labels in list format.
  • converter: Converts the model output to ROIs or labels, depending on the type of the model.

Below is the sample format for the output_postproc section:

"output_postproc": [
       {
      "layer_name": "detection_out",
      "labels": [
        "",
        "person",
                   "vehicle",
                   "bike"
               ],
      "converter": "tensor_to_bbox_ssd"
   }
 ]

Run on Different Hardware

The application loads only one DL model into the pipeline at a time and specifies the hardware accelerator for the model of a particular use case. For example, the user can specify one of the parameters below as the target device in the command line arguments:

  • vp_device: Target device for the Person Vehicle Bike network (CPU, GPU, MYRIAD).

  • p_device: Target device for the Pedestrian detection network (CPU, GPU, MYRIAD).

  • v_device: Target device for the Vehicle detection network (CPU, GPU, MYRIAD).

To run the application with the pedestrian-detection-adas-0002 model on GPU, use the command below:

sudo docker run -it --device /dev/dri:/dev/dri --net=host --env DISPLAY=$DISPLAY --memory 500m --health-cmd='stat /etc/passwd || exit 1' --volume $HOME/.Xauthority:/root/.Xauthority:rw smart_city /bin/bash -c "source /opt/intel/openvino/bin/setupvars.sh && ./smart_city --input ../data/video1_640x320.mp4 --p_model ../models/pedestrian-detection-adas-0002/FP16/pedestrian-detection-adas-0002.xml --p_proc ../model_proc/pedestrian-detection-adas-0002.json --tracking --collision --p_device GPU"  

 

Similarly, to run the application with the vehicle-detection-adas-0002 model, use the --v_model, --v_proc, and --v_device arguments for the detection model, model proc, and target device, respectively.

NOTE:
- GPU works with both FP16 and FP32 precision.
- Intel® Neural Compute Stick 2 and HDDL work with FP16 precision.
- The application also supports other models, such as pedestrian-detection-adas-0002 and vehicle-detection-adas-0002.
- To filter false detections, adjust the threshold value using the --threshold command line argument. By default, the threshold value is set to 0.5.

 

To run on CPU:

sudo docker run -it --net=host --env DISPLAY=$DISPLAY --memory 500m --health-cmd='stat /etc/passwd || exit 1' --volume $HOME/.Xauthority:/root/.Xauthority:rw smart_city /bin/bash -c "source /opt/intel/openvino/bin/setupvars.sh && ./smart_city --input ../data/video82.mp4 --vp_model ../models/person-vehicle-bike-detection-crossroad-0078/FP16/person-vehicle-bike-detection-crossroad-0078.xml --vp_proc ../model_proc/person-vehicle-bike-detection-crossroad-0078.json --tracking --collision --vp_device CPU "       


To run on GPU:

sudo docker run -it --device /dev/dri:/dev/dri --net=host --env DISPLAY=$DISPLAY --memory 500m --health-cmd='stat /etc/passwd || exit 1' --volume $HOME/.Xauthority:/root/.Xauthority:rw smart_city /bin/bash -c "source /opt/intel/openvino/bin/setupvars.sh && ./smart_city --input ../data/video1_640x320.mp4 --vp_model ../models/person-vehicle-bike-detection-crossroad-0078/FP16/person-vehicle-bike-detection-crossroad-0078.xml --vp_proc ../model_proc/person-vehicle-bike-detection-crossroad-0078.json --tracking --collision --vp_device GPU"  


To run on MYRIAD:

sudo docker run -it --device-cgroup-rule='c 189:* rmw' -v /dev/bus/usb:/dev/bus/usb --net=host --env DISPLAY=$DISPLAY --memory 500m --health-cmd='stat /etc/passwd || exit 1' --volume $HOME/.Xauthority:/root/.Xauthority:rw smart_city /bin/bash -c "source /opt/intel/openvino/bin/setupvars.sh && ./smart_city --input ../data/video1_640x320.mp4 --vp_model ../models/person-vehicle-bike-detection-crossroad-0078/FP16/person-vehicle-bike-detection-crossroad-0078.xml --vp_proc ../model_proc/person-vehicle-bike-detection-crossroad-0078.json --tracking --collision --vp_device MYRIAD"    

 

 

Data Visualization on Grafana


1. Navigate to localhost:3000 in your browser.
2. Log in with user as admin and password as admin.
3. Click Home and select Intelligent_Traffic_Management.

 

Dashboard

Summary and Next Steps

This application successfully leverages the Intel® Distribution of OpenVINO™ toolkit plugins for detecting and tracking vehicles and pedestrians and estimating a safety metric for an intersection.

As a next step, the reference implementation can be extended to support a feed from a network stream (RTSP camera). For example, an existing independent traffic controller could easily receive the metadata feed via Ethernet or by polling the database, and optimize the traffic flow of the intersection in real time with a minimal software upgrade. This makes it possible to upgrade the traffic controller with cutting-edge visual intelligence without replacing it.

Learn More

To continue your learning, see the following guides and software resources:

Known Issues

Applications such as Grafana and InfluxDB can have port conflicts when they are running on both the host system and in the Docker image. To prevent this, stop the Grafana and InfluxDB services on the host system. To check whether the services are running, use the commands below:

sudo systemctl status grafana-server.service
sudo systemctl status influxdb.service

Below are examples of what you will see when the InfluxDB and Grafana services are active.

 

If any of the services are active as shown above, follow the steps below to stop the services and rerun the images.

  • To stop the InfluxDB and Grafana containers, make sure that you are in the <path_to_intelligent-traffic-management> directory and run the command below:
sudo docker-compose down
  • Stop the Grafana and InfluxDB services in the host system by running the commands below:
sudo systemctl stop grafana-server.service
sudo systemctl stop influxdb.service
  • Rerun the Grafana and InfluxDB images:
cd <path-to-intelligent_traffic_management>/Intelligent_Traffic_Management_<version>/Intelligent_Traffic_Management/intelligent-traffic-management
sudo docker-compose up -d

 

Troubleshooting

Installation Failure

If the host system already has Docker images and containers running, you may have issues during the RI installation. You must stop or force-stop the existing containers and images.

  • To remove all stopped containers, dangling images, and unused networks:

sudo docker system prune --volumes
  • To stop Docker containers:

sudo docker stop $(sudo docker ps -aq)
  • To remove Docker containers:

sudo docker rm $(sudo docker ps -aq)

 

  • To remove all Docker images:

sudo docker rmi -f $(sudo docker images -aq)

System Reboot

1. If the system is rebooted, run the commands below to start the InfluxDB and Grafana images:

cd $HOME/intelligent_traffic_management/Intelligent_Traffic_Management_<version>/Intelligent_Traffic_Management/intelligent-traffic-management
sudo docker-compose up -d

2. Then follow the steps under Run the Application with Test Video section.

Docker Image Build Failure

If the Docker image build fails on a corporate network, follow the steps below.

1. Get the DNS server using the command:

nmcli dev show | grep 'IP4.DNS'

2. Configure Docker to use the server. Paste the lines below into the /etc/docker/daemon.json file:

{
    "dns": ["<dns-server-from-above-command>"]
}

3. Restart Docker:

sudo systemctl daemon-reload && sudo systemctl restart docker

Product and Performance Information

1

Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors. These optimizations include SSE2, SSE3, and SSSE3 instruction sets and other optimizations. Intel does not guarantee the availability, functionality, or effectiveness of any optimization on microprocessors not manufactured by Intel. Microprocessor-dependent optimizations in this product are intended for use with Intel microprocessors. Certain optimizations not specific to Intel microarchitecture are reserved for Intel microprocessors. Please refer to the applicable product User and Reference Guides for more information regarding the specific instruction sets covered by this notice.

Notice revision #20110804