Intelligent Traffic Management Reference Implementation

Published: 08/12/2020

Overview

Intelligent Traffic Management is designed to detect and track vehicles as well as pedestrians and to estimate a safety metric for an intersection. Object tracking recognizes the same object across successive frames, giving the ability to estimate trajectories and speeds of the objects. The reference implementation also detects collisions and near misses. A real-time dashboard visualizes the intelligence extracted from the traffic intersection along with annotated video stream(s).   

This collected intelligence can be used to adjust traffic lights to optimize the traffic flow of the intersection, or to evaluate and enhance the safety of the intersection by allowing emergency services notifications, such as 911 calls, to be triggered by collision detection, reducing emergency response times.  

Table 1
Time to Complete: 20 - 30 minutes
Programming Language: Python 3*
Software: Intel® Distribution of OpenVINO™ toolkit Release

 

Target System Requirements

  • Ubuntu* 18.04.3 LTS
  • 6th to 11th Generation Intel® Core™ processors with Iris® Pro Graphics or Intel® HD Graphics
  • USB webcam (optional)

Recommended Development Kits

How It Works

The application uses the DL Streamer included in the Intel® Distribution of OpenVINO™ toolkit. Initially, the pipeline is executed with the provided input video feed and models. DL Streamer preprocesses the input and performs inference according to the pipeline settings. The inference results are parsed using a callback function and fed to the detection, tracking, and collision functions. The sections below explain the flow and features in detail.
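
The sketch below is a minimal illustration (not the RI's actual source) of how such a DL Streamer pipeline might be assembled and started from Python; the file paths, element ordering, and sink are assumptions.

# Minimal sketch, assuming GStreamer plus the DL Streamer (GVA) plugins are installed.
# The file paths below are placeholders, not the RI's actual resources.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

pipeline_description = (
    "filesrc location=resources/sample_video.mp4 ! decodebin ! "
    "videoconvert ! video/x-raw,format=BGRx ! "
    "gvadetect model=resources/detection_model.xml "
    "model-proc=resources/model_proc.json device=CPU ! "
    "queue ! fakesink sync=false"
)

pipeline = Gst.parse_launch(pipeline_description)
pipeline.set_state(Gst.State.PLAYING)

# Block until end-of-stream or an error, then shut the pipeline down.
bus = pipeline.get_bus()
bus.timed_pop_filtered(Gst.CLOCK_TIME_NONE,
                       Gst.MessageType.EOS | Gst.MessageType.ERROR)
pipeline.set_state(Gst.State.NULL)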

The application also has multi-channel support for multiple input video feeds. The camera feeds can be accessed via their geographic coordinates on the MapUI.

Detection

Detections are performed using the gvadetect plugin element in the DL Streamer pipeline. The pipeline consists of the detection model, model-proc file, target device, and input stream. The object Regions of Interest (ROIs) are obtained from DL Streamer callback functions and appended to a result vector, which is used for further processing.
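
As a minimal illustration (not the RI's actual code), a callback of this kind could be written with the gstgva Python bindings that ship with DL Streamer; the result_vector name and the dictionary layout are assumptions.

# Illustrative sketch, assuming DL Streamer's gstgva Python bindings.
# The result_vector name and dictionary layout are hypothetical.
from gstgva import VideoFrame

result_vector = []  # per-frame detections handed to the tracking and collision logic

def process_frame(frame: VideoFrame) -> bool:
    detections = []
    for roi in frame.regions():        # one RegionOfInterest per detected object
        rect = roi.rect()              # bounding box: x, y, w, h
        detections.append({
            "label": roi.label(),
            "confidence": roi.confidence(),
            "bbox": (rect.x, rect.y, rect.w, rect.h),
        })
    result_vector.append(detections)
    return True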

Tracking

Once the detections are obtained, the Region of Interest (ROI) results are added to the tracker system, which begins tracking the objects over successive frames. The results are updated every frame to feed the new information back to the tracker (see the sketch after this list):

  1. If there is a new object, add it to the tracking system.

  2. If the tracker lost an object, add it again.

  3. If the ROI of an object has moved away from its original position, reset the tracker information for that object.

  4. If a detected object is about to exit the scene, remove the object.

  5. If a tracked object is lost (detection missed due to obstacles or missed by the model), remove the object.
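
The sketch below is a hypothetical, simplified rendering of these rules; the Track class, the IoU-based matching, and all thresholds are assumptions rather than the RI's actual tracker implementation.

# Hypothetical sketch of the per-frame tracker update rules listed above.
from dataclasses import dataclass, field

def iou(a, b):
    """Intersection-over-union of two (x, y, w, h) boxes."""
    ix = max(0, min(a[0] + a[2], b[0] + b[2]) - max(a[0], b[0]))
    iy = max(0, min(a[1] + a[3], b[1] + b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union else 0.0

@dataclass
class Track:
    roi: tuple                  # (x, y, w, h) of the tracked object
    missed: int = 0             # frames since the detector last confirmed it
    history: list = field(default_factory=list)

def update_tracks(tracks, detections, frame_w, frame_h,
                  match_iou=0.3, exit_margin=10, max_missed=30):
    matched = set()
    for det in detections:
        best = max(tracks, key=lambda t: iou(t.roi, det), default=None)
        if best is None or iou(best.roi, det) == 0.0:
            tracks.append(Track(roi=det, history=[det]))     # 1. new object
            continue
        if best.missed > 0:
            best.missed = 0                                   # 2. re-add a lost object
        if iou(best.roi, det) < match_iou:
            best.history = []                                 # 3. ROI drifted: reset the tracker
        best.roi = det
        best.history.append(det)
        matched.add(id(best))

    for track in list(tracks):
        if id(track) not in matched:
            track.missed += 1
        x, y, w, h = track.roi
        if (x < exit_margin or y < exit_margin or
                x + w > frame_w - exit_margin or y + h > frame_h - exit_margin):
            tracks.remove(track)                              # 4. about to exit the scene
        elif track.missed > max_missed:
            tracks.remove(track)                              # 5. lost object: stop tracking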

Collision Detection

With object tracking enabled, every object's current and past positions can be retrieved. Object locations are averaged over a sliding window (width of 5) to filter out the noise of the detection models.
The velocity is derived by calculating the difference between an object's locations in two consecutive frames. The speed is normalized with a factor of 1/y (where y is the vertical position of the object in the image), because objects closer to the camera appear to move faster in pixels per frame than they really do. Once the velocities of each tracked object are obtained, the acceleration is calculated analogously.
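
A minimal sketch of this motion estimation is shown below; numpy is assumed, and the helper names are hypothetical (the window width of 5 is the value described above).

# Illustrative sketch of the smoothing, velocity, and acceleration steps described above.
import numpy as np

WINDOW = 5  # sliding-window width used to filter detector noise

def smoothed_positions(centers):
    """Average the (x, y) object centers over a sliding window of width 5."""
    pts = np.asarray(centers, dtype=float)
    return np.array([pts[max(0, i - WINDOW + 1):i + 1].mean(axis=0)
                     for i in range(len(pts))])

def velocities(positions):
    """Difference of consecutive smoothed positions, normalized by 1/y so that
    objects close to the camera (large y) do not appear artificially fast."""
    vels = []
    for prev, cur in zip(positions[:-1], positions[1:]):
        y = max(cur[1], 1.0)    # avoid division by zero near the top of the image
        vels.append((cur - prev) / y)
    return np.array(vels)

def accelerations(vels):
    """Acceleration is calculated analogously from consecutive velocities."""
    return np.diff(vels, axis=0)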

A vehicle is assumed to be in a dangerous situation when a sudden spike in acceleration is detected. A spike is defined as the difference between the current acceleration and the average of the previous three. If this difference is larger than an empirically defined value (which can be tuned), it is considered a possible collision. If the ROI of an object flagged as being in a dangerous situation overlaps with any other object's ROI, it is considered a near miss. If the other object is also flagged as being in a dangerous situation, it is considered a collision.
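
The sketch below illustrates this decision logic; the SPIKE_THRESHOLD value and the object attributes (accels, roi) are assumptions, not values taken from the RI.

# Illustrative sketch of the spike, near-miss, and collision classification described above.
import numpy as np

SPIKE_THRESHOLD = 0.5   # hypothetical, empirically tunable value

def has_acceleration_spike(accels):
    """Compare the current acceleration with the average of the previous three."""
    if len(accels) < 4:
        return False
    current = np.linalg.norm(accels[-1])
    previous = np.mean([np.linalg.norm(a) for a in accels[-4:-1]])
    return (current - previous) > SPIKE_THRESHOLD

def rois_overlap(a, b):
    """True if two (x, y, w, h) ROIs intersect."""
    return (a[0] < b[0] + b[2] and b[0] < a[0] + a[2] and
            a[1] < b[1] + b[3] and b[1] < a[1] + a[3])

def classify_event(obj, others):
    """Return 'collision', 'near miss', or None for a tracked object."""
    if not has_acceleration_spike(obj.accels):
        return None
    for other in others:
        if rois_overlap(obj.roi, other.roi):
            return "collision" if has_acceleration_spike(other.accels) else "near miss"
    return None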

Near misses are an extension of collisions. The application applies thresholds to speed and acceleration and searches for other vehicles in the proximity of the offending vehicle to detect near misses. The following is an example of a near-miss scenario:

Two cars going through an intersection (in perpendicular directions) suddenly decrease their velocity and change direction to avoid a collision.

 

Figure 1: Architecture Diagram

 

Get Started

Install the Reference Implementation

After you have downloaded the reference implementation, follow the steps below to install it. 

1. Open a new terminal, go to the downloaded folder and unzip the downloaded RI package.

unzip intelligent_traffic_management.zip

2. Go to intelligent_traffic_management/ directory.

cd intelligent_traffic_management/

3. Change permission of the executable edgesoftware file.

chmod 755 edgesoftware

4. Run the command below to install the Reference Implementation:

./edgesoftware install

5. During the installation, you will be prompted for the Product Key. The Product Key is contained in the email you received from Intel confirming your download.

Figure 2: Product Key

 

6. When the installation is complete, you see the message “Installation of package complete” and the installation status for each module.

Figure 3: Successful Installation

 

7. Go to the working directory:

cd Intelligent_Traffic_Management_<version>/Intelligent_Traffic_Management/intelligent-traffic-management

NOTE: <version> is the Intel® Distribution of OpenVINO™ toolkit version downloaded.

 

Build the Docker Images

NOTE: Run the commands mentioned below from the working directory (intelligent-traffic-management), unless otherwise specified.

1. Run the command below to build the Docker images:

sudo -E docker-compose build

 

Run the Application 

NOTE: The steps below use sample videos included with the Reference Implementation. For instructions on how to run the application with feed from a camera or how to use other videos, see the Optional Steps section.

Set the HOST_IP environment variable using: 

export HOST_IP=$(hostname -I | cut -d' ' -f1)

To start the application, run the command below:

sudo -E docker-compose up -d

To check if the containers are running properly, run the command below:

sudo -E docker-compose ps

To check the smartcity container logs:

sudo docker logs -f itm_smartcity
Figure 4: Status of Containers

Visualize the MapUI on Grafana

1. Navigate to localhost:3000 in your browser.

NOTE: If accessing remotely, go to http://<host_system_ip>:3000. Get the host system IP using:

hostname -I | cut -d' ' -f1


2. Log in with user as admin and password as admin.
3. Click Home and select ITM to open the main dashboard.

Figure 5: Grafana Home Screen

 

Figure 6: Grafana Dashboard List

 

Dashboard

Figure 7: Grafana Main Dashboard - ITM

 

The blue drop pins on the map mark the geographic coordinates of the cameras. Clicking on a pin opens a small window with the camera feed and the detection results, as shown in the figure below.

Figure 8: Detection Results on MapUI

 

To open the Grafana Dashboard for a particular camera with the detection results and other data metrics, click on the camera feed in the small window, as shown in the figure below.

NOTE: To close the small window with camera feed, click the close button (X) on the top left corner of the window. 

Figure 9: Grafana Dashboard of an Individual Camera Feed

 

To view the detection results of all the configured camera feeds, click View All Streams in the top right corner of the MapUI on the main Grafana Dashboard, i.e., ITM (refer to Figure 7: Grafana Main Dashboard - ITM).

Figure 10: Detection results of all the configured camera feeds

 

NOTE: To open the combined streams in a full tab, go to http://localhost:5000/get_all_streams.

 

Optional Steps 

Configure the Input  

The camera_config.json file in the working directory contains all the necessary configuration, including the path to the default input video. If you wish to change the input, edit the camera_config.json file and add the required information. The elements of the camera_config.json file are:

  • address: Name of the camera’s geographic location. Must be a non-empty alphanumeric string. 
  • latitude: Latitude of the camera’s geographic location. 
  • longitude: Longitude of the camera’s geographic location. 
  • analytics: Attribute to be detected by the model. 

NOTE: The default model supports pedestrian, vehicle, and bike detection. You can select the desired attributes from these, e.g., "analytics": "pedestrian vehicle detection".

  • device: Hardware device to be used for inferencing, e.g., CPU.

NOTE: Supported hardware devices are CPU, GPU, HDDL, and MYRIAD. Combinations of these are also supported using MULTI, e.g., MULTI:CPU,HDDL.

  • path: Path to the input video. 

NOTE: Input videos should always be placed in the /resources folder.
To use a camera stream instead of a video file, replace the video file name with /dev/video0.
To use an RTSP stream instead of a video file, replace the video file name with the RTSP link.
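
For illustration only, a single camera entry using the elements above might look like the sample below. The surrounding structure (a top-level cameras list) and all sample values are assumptions; refer to the camera_config.json shipped with the RI for the exact layout.

{
    "cameras": [
        {
            "address": "Main Street and 1st Avenue",
            "latitude": 45.521,
            "longitude": -122.677,
            "analytics": "pedestrian vehicle bike detection",
            "device": "CPU",
            "path": "video1.mp4"
        }
    ]
}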

NOTE: If any device value in camera_config.json is set to MYRIAD, add the following line to the smartcity service in the docker-compose.yml file:

privileged: true


To validate the config file after changes (optional):

cd app/ && python3 validate_config.py && cd ../


To restart the smartcity container:

sudo docker restart itm_smartcity

Stop the Application

To stop all the containers:

sudo docker-compose down

 

Create the Model Proc File for Other Models

Model proc files contain information about the model, for example, the labels, the color format of the model input, the name of the output layer, etc. DL Streamer parses the model proc file for inference. There are two important sections in the model proc file, namely input_preproc and output_postproc.

input_preproc section:
  • color_format: Fill this field with the input color format of the model (BGR, RGB, etc.). Below is the sample format for the input_preproc section:
"input_preproc": [
    {
      "color_format": "BGR"
    }
]
output_postproc section:
  • layer_name: This parameter defines the output layer of the model from which the results are obtained.
  • labels: This field contains the labels that the model is trained on. Fill in the labels in list format.
  • converter: Converts the model output to ROIs or labels, depending on the type of the model.

Below is the sample format for the output_postproc section:

"output_postproc": [
       {
      "layer_name": "detection_out",
      "labels": [
        "",
        "person",
        "vehicle",
        "bike"
       ],
      "converter": "tensor_to_bbox_ssd"
   }
 ]

 

View the available model proc file <path-to-intelligent-traffic-management>/resources/model_proc.json for reference. 

To validate the config file after changes (optional): 

cd app/ && python3 validate_config.py && cd ../

Restart the smartcity container: 

sudo docker restart itm_smartcity

 

Summary and Next Steps

This application successfully leverages the Intel® Distribution of OpenVINO™ toolkit and its DL Streamer plugins for detecting and tracking vehicles and pedestrians and for estimating a safety metric for an intersection.

 

Learn More

To continue your learning, see the following guides and software resources:

 

Known Issues

Limitation in Distance Between Individual Cameras 

The distance between individual cameras, which is configured using their geographic coordinates, should not exceed the range of hundreds of kilometers. If the distance exceeds this limit, some map drop pins may not be displayed on the home screen.

 

Troubleshooting

Address Already in Use 

If running the application results in Error starting userland proxy: listen tcp4 0.0.0.0:3000: bind: address already in use, use the following command to check and force stop the process: 

sudo kill $(pgrep grafana)

NOTE: If the issue persists, it may be that Grafana is running in a Docker container. In that case, stop the container using:

sudo docker stop $(sudo docker ps -q --filter expose=3000)

Support Forum 

If you're unable to resolve your issues, contact the Support Forum.  

Product and Performance Information


Performance varies by use, configuration and other factors. Learn more at www.Intel.com/PerformanceIndex.