Wireless Network-Ready Intelligent Traffic Management Reference Implementation

Published: 08/11/2020

Edge Software Hub   /   Wireless Network-Ready Intelligent Traffic Management  /  Documentation

Overview

Wireless Network-Ready Intelligent Traffic Management is designed to detect and track vehicles and pedestrians and to provide the intelligence needed to estimate a safety metric for an intersection. In addition, the Open Network Edge Services Software (OpenNESS) toolkit included in the reference implementation can be used to host a 5G radio access network (RAN) on the same edge device. 

Vehicles, motorcyclists, bicyclists, and pedestrians are detected and located in video frames by object detection deep learning modules. Object tracking recognizes the same object across successive frames, making it possible to estimate the trajectories and speeds of the objects. The reference implementation automatically detects collisions and near-miss collisions. A real-time dashboard visualizes the intelligence extracted from the traffic intersection along with the annotated video stream(s). 
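To make the tracking-to-speed step concrete, here is a minimal sketch (not the reference implementation's actual code; the function names and pixel-based units are assumptions): speed can be estimated from the displacement of a tracked bounding box's centroid between consecutive frames, scaled by the frame rate.

```python
# Illustrative sketch: per-object speed from tracked bounding-box
# centroids across consecutive frames. Units are pixels/second;
# converting to real-world speed requires camera calibration (omitted).
def centroid(box):
    """Center point of an (x1, y1, x2, y2) bounding box."""
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2, (y1 + y2) / 2)

def speed_px_per_s(box_prev, box_curr, fps):
    """Centroid displacement between consecutive frames, in pixels/second."""
    (px, py), (cx, cy) = centroid(box_prev), centroid(box_curr)
    return ((cx - px) ** 2 + (cy - py) ** 2) ** 0.5 * fps

# A box shifting 12 px between frames at 30 fps:
# speed_px_per_s((100, 100, 140, 160), (112, 100, 152, 160), 30) -> 360.0
```

Trajectories follow the same idea: accumulating the per-frame centroids of one tracked object gives its path through the scene, which is what collision and near-miss detection reason over.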

This collected intelligence can be used to adjust traffic lights to optimize the traffic flow of the intersection in near real time, or to evaluate and enhance the safety of the intersection. For example, emergency services notifications (e.g., 911 calls) could be triggered by collision detection, reducing emergency response times; or intersections with higher numbers of collisions and near-miss detections could be flagged for authorities' attention as high-risk intersections. 

The data from the traffic cameras at the intersection can be easily routed using the OpenNESS high-speed data plane for near-real-time video analytics in the field. Further, OpenNESS helps you build and manage the infrastructure to deploy, monitor, and orchestrate virtualized applications across multiple edge devices. 

Select Configure & Download to download the reference implementation and the software listed below.  

Configure & Download

Time to Complete: 45 - 60 minutes

Programming Language: Python*

Available Software:
  • Intel® Distribution of OpenVINO™ toolkit 2021 Release
  • OpenNESS 20.12

Target System Requirements

WARNING: For OpenNESS 20.12 and earlier, the network on which the Edge Nodes and Edge Controller are installed is limited to networks that are NOT 192.168.1.x/24. Attempts to install on a 192.168.1.x/24 network will produce erratic results or complete failures of pods to run. This may be corrected in future releases.

Edge Nodes 

  • One of the following processors: 
    • Intel® Xeon® Scalable processor. 
    • Intel® Xeon® processor D. 
  • At least 64 GB RAM. 
  • At least 256 GB hard drive. 
  • An Internet connection. 
  • CentOS* 7.8.2003
  • IP camera or pre-recorded video(s) 

Edge Controller 

  • One of the following processors: 
    • Intel® Core™ processor. 
    • Intel® Xeon® processor. 
  • At least 32 GB RAM. 
  • At least 256 GB hard drive. 
  • An Internet connection. 
  • CentOS* 7.8.2003.  

How It Works

The application uses the Inference Engine and DL Streamer included in the Intel® Distribution of OpenVINO™ toolkit. The solution detects and tracks vehicles and pedestrians, and uses the Intel® Open Network Edge Services Software (OpenNESS) toolkit to host and manage the workload at the edge.

Figure 1: How it Works

 

The Wireless Network-Ready ITM application consists of the application pod, a database, and a visualizer. Once installation succeeds, the application is ready to be deployed using Helm. After deployment, the application pod takes in the virtual/real RTSP stream addresses, performs inference, and sends the metadata for each stream to the InfluxDB database. In parallel, the visualizer shows the analysis of that metadata, such as pedestrians detected, observed collisions, and the processed video feed. 
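The pod-to-InfluxDB flow above amounts to shaping each frame's detection results into time-series points. The sketch below is hypothetical: the measurement, tag, and field names are assumptions, not the reference implementation's actual schema; the commented write uses the `influxdb` Python client's `write_points()` call.

```python
# Hypothetical sketch of the metadata flow: shape one frame's detection
# results into the point format accepted by the influxdb Python client.
# The measurement, tag, and field names here are assumptions.
def to_point(camera_id, frame_ts, pedestrians, vehicles, collisions):
    return {
        "measurement": "itm_metadata",   # assumed measurement name
        "tags": {"camera": camera_id},
        "time": frame_ts,                # RFC3339 timestamp string
        "fields": {
            "pedestrians": pedestrians,
            "vehicles": vehicles,
            "collisions": collisions,
        },
    }

# Against a live InfluxDB instance:
#   from influxdb import InfluxDBClient
#   client = InfluxDBClient(host="<Controller_IP>", port=8086)
#   client.write_points([to_point("cam0", "2020-08-11T00:00:00Z", 3, 5, 0)])
```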

The application can run inference on up to 20 channels. In addition, the visualizer can show each feed separately, as well as all the feeds at the same time, using Grafana*. You can view the output remotely in a browser, provided the browser is on the same network. 

Figure 2: Architecture Diagram

 

In the Architecture Diagram, the following OpenNESS abbreviations are used:

  • EAA:  Edge Application Agent - Implements the Edge Application APIs and Edge Application Authentication APIs for service registration, discovery and availability, communication, persistent storage, and more
  • ELA:  Edge Lifecycle Agent - Implements the Edge Lifecycle Management API for managing platform configuration, application rules and requirements, lifecycle support, etc.
  • DNS:  Edge DNS - Domain Name System Server for apps deployed on Edge; for apps not on the Edge, the server behaves as a DNS forwarder
  • EDA:  Edge Dataplane Agent - Establishes routing among applications, networks, services, etc.
  • EVA:  Edge Virtualization Agent - Implements the Edge Virtualization Infrastructure API to manage virtualized resources
  • GW:  OpenNESS Gateway - API gateway for Edge Platform and controller communication
  • VIM:  Virtualized Infrastructure Manager 

Get Started

Prerequisites 

Make sure the following conditions are met to ensure a smooth installation process. 

  1. Hardware Requirements 
    Make sure you have a fresh CentOS 7.8.2003 installation with the Hardware specified in the Target System Requirements section. 
  2. Proxy Settings 
    If you are behind a proxy network, please ensure that proxy addresses are configured in the system. 
    export http_proxy=<proxy-address>:<proxy-port> 
    export https_proxy=<proxy-address>:<proxy-port>
  3. Date & Time  
    Make sure that the Date & Time are in sync with current local time.
  4. IP Address Conflict 
    Make sure that the Edge Controller IP is not conflicting with OpenNESS reserved IPs. For more details, please refer to IP address range allocation for various CNIs and interfaces in the Troubleshooting section.

Step 1: Install the Reference Implementation 

NOTE: The following sections may use <Controller_IP> in a URL or command. Make note of your Edge Controller’s IP address and substitute it in these instructions. For a single-node installation, the Controller_IP is the same as your device IP address. To verify this, you can get the INTERNAL-IP of the node(s) using the command: kubectl get node -o wide 

Select Configure & Download to download the reference implementation and then follow the steps below to install it.  

 

NOTE: The reference implementation may already be installed with the Converged Edge Insights package. If so, you can skip to “Step 2: Deploy to Kubernetes”. Look for the implementation in /root/converged_edge_insights/Converged_Edge_Insights_<version>/Reference_Implementation__Wireless_NetworkReady_Intelligent_Traffic_Management 

 

1. Make sure that the Target System Requirements are met properly before proceeding further.  

  • For single-device mode, only one machine is needed. (Both controller and edge node will be on same device.) 
  • For multi-device mode, make sure you have at least two machines (one for controller and other for Edge Node). 
    NOTE: Multi-device mode is not supported in the current release. 

2. Open a new terminal as a root user and move the downloaded zip package to /root folder. 

sudo passwd
su
mv <path-of-downloaded-directory>/wireless_network_ready_intelligent_traffic_management.zip /root 

 

NOTE: If the operating system was pre-installed on your hardware, refer to your manufacturer’s documentation for the root password. 

3. Go to /root directory using the following command and unzip the RI.  

cd /root 
unzip wireless_network_ready_intelligent_traffic_management.zip 

4. Go to wireless_network_ready_intelligent_traffic_management/ directory. 

cd wireless_network_ready_intelligent_traffic_management 

5. Change permission of the executable edgesoftware file to enable execution. 

chmod 755 edgesoftware 

6. Run the command below to install the Reference Implementation: 

./edgesoftware install 

7. During the installation, you will be prompted for the Product Key. The Product Key is contained in the email you received from Intel confirming your download. 

NOTE: Installation logs are available at: /var/log/esb-cli/Wireless_NetworkReady_Intelligent_Traffic_Management_<version>/install.log 

Figure 3: Product Key

 

8. During the installation, you will be prompted to configure a few things before installing OpenNESS. Refer to the screenshot below to configure.
    NOTE: Multi Device is not supported in this release. Select Single Device when prompted to select the type of installation.     

Figure 4: OpenNESS Configuration

 

    NOTE: If you are using an Azure instance, enter the Private IP address as the IP address of the controller. 

9. When the installation is complete, you will see the message “Installation of package complete” and the installation status for each module. 

Figure 5: Successful Installation

 

10. If OpenNESS is installed successfully, running the following command should show output similar to the image below. All the pods should have a status of either Running or Completed. 

kubectl get pods -A 
Figure 6: Status of Pods
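If you prefer to script the status check above, the helper below is illustrative (not part of the package): it parses `kubectl get pods -A --no-headers` output and reports whether every pod has reached Running or Completed.

```python
# Illustrative helper: parse `kubectl get pods -A --no-headers` output
# and report whether every pod is in the Running or Completed state.
def pods_settled(listing: str) -> bool:
    for line in listing.strip().splitlines():
        parts = line.split()
        if len(parts) < 4:          # skip blank or malformed lines
            continue
        status = parts[3]           # columns: NAMESPACE NAME READY STATUS ...
        if status not in ("Running", "Completed"):
            return False
    return True

# Usage against a live cluster:
#   import subprocess
#   out = subprocess.check_output(
#       ["kubectl", "get", "pods", "-A", "--no-headers"], text=True)
#   print("all pods settled:", pods_settled(out))
```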

Step 2: Deploy to Kubernetes* 

1. Open a new terminal as a root user, and move to the working directory. 

NOTE: If you are using a Converged Edge Insights package or Development package, the path to the Wireless_NetworkReady_Intelligent_Traffic_Management reference implementation might be different.

cd /root/converged_edge_insights/Converged_Edge_Insights_<version>/Reference_Implementation__Wireless_NetworkReady_Intelligent_Traffic_Management

NOTE: <version> is the version selected before download. 

2. Run the command below on the controller to go to the application directory. This directory has all the deployment configuration files:

cd WNR_ITM/deploy 

3. Create the monitoring namespace, where the InfluxDB and Grafana deployments will run. You might see an error message if the namespace already exists.

kubectl create namespace monitoring

4. Run the commands below to deploy Grafana and InfluxDB containers: 

NOTE: Make sure to execute the following commands as root user. 

helm install grafana  ./grafana 
helm install influxdb  ./influxdb


5. Deploy the ITM application container using the following command:

helm install itm ./itm --set hostIP=<IP-address-of-controller> 

6. Check that the pods are running by using the commands below: 

kubectl get pods  
kubectl get pods -n monitoring 

 

Figure 7: Status of Grafana and Influxdb Pods 

NOTE: If the pods have a status of ContainerCreating, please wait for some time, since Kubernetes pulls the images from the local registry before deploying them. This happens only the first time the containers are deployed, and the wait time depends on the available network bandwidth. 

Data Visualization on Grafana

1. Navigate to <Controller_IP>:30800 in your browser. 

2. Log in with admin as both the user name and the password. Set up a new password on your first login. 

3. Click Home and select ITM to open the main dashboard.


Figure 8: Grafana Home Screen

 

Figure 9: Grafana Dashboard List

An example of an ITM dashboard:

Figure 10: Grafana Main Dashboard - ITM

 

The above dashboard shows the numbers of vehicles, pedestrians, and collisions detected on the left side. These can be used to adjust traffic lights or to call emergency services if collisions are detected.

The blue drop pins on the map mark the geographic coordinates of the cameras. Clicking a pin opens a small window with the camera feed and the detection results, as shown in the figure below.

Figure 11: Detection Results on MapUI

 

To open the Grafana dashboard for a particular camera, with the detection results and other data metrics, click the camera feed in the small window, as shown in the figure below. 

NOTE: To close the small window with camera feed, click the close button (X) on the top left corner of the window. 

Figure 12: Grafana Dashboard of an Individual Camera Feed


To view the detection results of all the configured camera feeds, click View All Streams in the top right corner of the MapUI on the main Grafana dashboard (ITM). Refer to Figure 10: Grafana Main Dashboard – ITM. 

Figure 13: Detection Results of all the Configured Camera Feeds

 

NOTE: To open the combined streams in a full tab, go to: http://<Controller_IP>:30300/get_all_streams 

 

Optional Steps 

Configure the input  

The camera_config.json file in the working directory contains all the necessary configuration, including the path to the default input video. If you wish to change the input, edit the camera_config.json file and add the required information. The roles of the elements in the camera_config.json file are: 

  • address: Name of the camera’s geographic location. Must be a non-empty alphanumeric string. 
  • latitude: Latitude of the camera’s geographic location. 
  • longitude: Longitude of the camera’s geographic location. 
  • analytics: Attributes to be detected by the model. 
    NOTE: The default model supports pedestrian, vehicle and bike detection. You can select the desired attributes from these. (E.g.: "analytics": "pedestrian vehicle detection") 
  • path: Path to the input video. 
    NOTE: Input videos should always be placed in the /resources folder. 
    To use a camera stream instead of a video file, replace the video file name with /dev/video0.
    To use an RTSP stream instead of a video file, replace the video file name with the RTSP link. 
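For illustration, an entry with the fields above could be generated as follows. The top-level "cameras" list and all sample values are assumptions, not the reference implementation's guaranteed schema — match the structure of the camera_config.json shipped with the package rather than this sketch.

```python
# Illustrative only: write a camera_config.json entry using the fields
# documented above. The "cameras" wrapper and sample values are
# assumptions, not the RI's guaranteed schema.
import json

entry = {
    "address": "MainAndFirst",       # non-empty alphanumeric location name
    "latitude": 45.5231,             # camera latitude
    "longitude": -122.6765,          # camera longitude
    "analytics": "pedestrian vehicle bike detection",
    "path": "sample_video.mp4",      # video placed in the /resources folder
}

with open("camera_config.json", "w") as f:
    json.dump({"cameras": [entry]}, f, indent=2)
```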

Stop the Application

To remove the deployment of this reference implementation, run the following commands.

NOTE: The following commands will remove all the running pods along with the data and configuration stored on the device.

helm delete itm 
helm delete grafana 
helm delete influxdb  

Summary and Next Steps

This application successfully implements Intel® Distribution of OpenVINO™ toolkit plugins for detecting and tracking vehicles and pedestrians and estimating a safety metric for an intersection. It can be extended further to provide support for a feed from a network stream (RTSP or camera device).

As a next step, you can experiment with accuracy/throughput trade-offs by substituting object detector models, tracking and collision detection algorithms with alternative ones.

In addition, on an appropriate platform with supporting RAN hardware, you can onboard a third-party 5G RAN implementation, making it easy to host a private or public 5G small cell. To perform video analytics, wireless IP cameras can be connected through the small cell, and the video traffic from the cameras can be routed via the high-speed OpenNESS data plane to the visual intelligence container. With the 5G RAN and visual intelligence workloads hosted on a single system, the solution benefits from faster data transfers between the workloads and a reduced total cost of ownership.

Learn More

To continue your learning, see the following guides and software resources:

 

Troubleshooting 

Pods status check 

Verify that the pods are Ready as well as in the Running state using the command below: 

kubectl get pods -A

If they are in the ImagePullBackOff state, pull the images manually: 

docker login 
docker pull <image-name>

If any pods are not in Running state, use the following command to get more information about the pod state: 

kubectl describe -n <namespace> pod <pod_name>

Docker Pull Limit Issue 

If a Docker pull limit error is observed, log in with your Docker premium account. 

If the Harbor pods are not in the Running state, log in using the command below: 

docker login

If the Harbor pods are in the Running state, log in using the commands below: 

docker login 
docker login https://<Machine_IP>:30003  
<Username – admin> 
<Password – Harbor12345> 

Installation Failure 

If the OpenNESS installation fails while pulling the OpenNESS namespace pods (such as Grafana, Telemetry, TAS, etc.), reboot the system and then execute the following commands: 

reboot 
su  
swapoff -a  
systemctl restart kubelet  # Wait till all pods are in “Running” state.
./edgesoftware install 

Pod status shows “ContainerCreating” for long time 

If a pod’s status shows ContainerCreating, Error, or CrashLoopBackOff for 5 minutes or more, run the following commands: 

reboot 
su  
swapoff -a  
systemctl restart kubelet  # Wait till all pods are in “Running” state. 
./edgesoftware install 

Subprocess:32 issue 

If you see any error related to subprocess32, run the command below: 

pip install --ignore-installed subprocess32==3.5.4 

Grafana Dashboard Not Showing on Browser 

Run the following commands: 

helm delete itm 
helm install itm ./itm --set hostIP=<IP-address-of-controller>

IP Address Range Allocation for Various CNIs and Interfaces 

The OpenNESS Experience Kits deployment allocates and reserves a set of IP address ranges for different CNIs and interfaces. The server or host IP address should not conflict with this default allocation. If there is a critical need to use a server IP address that falls within the ranges used by the OpenNESS default deployment, the default addresses used by OpenNESS must be modified. 

The following files specify the CIDRs for the CNIs and interfaces. These are the IP address ranges allocated and used by default, listed here for reference. 

flavors/media-analytics-vca/all.yml:19:vca_cidr: "172.32.1.0/12" 
group_vars/all/10-default.yml:90:calico_cidr: "10.243.0.0/16" 
group_vars/all/10-default.yml:93:flannel_cidr: "10.244.0.0/16" 
group_vars/all/10-default.yml:96:weavenet_cidr: "10.32.0.0/12" 
group_vars/all/10-default.yml:99:kubeovn_cidr: "10.16.0.0/16,100.64.0.0/16,10.96.0.0/12" 
roles/kubernetes/cni/kubeovn/controlplane/templates/crd_local.yml.j2:13:  cidrBlock: "192.168.{{ loop.index0 + 1 }}.0/24"

The 192.168.x.y range is used for SR-IOV and interface service IP address allocation in the Kube-OVN CNI. The server IP address must not fall within this range, or it will conflict and cause erratic behavior. Avoid the entire 192.168.0.0/16 address range for the server IP address. 

If the server/host IP address absolutely must be in the 192.168.x.y range used for SR-IOV interfaces in OpenNESS, then the cidrBlock range in the roles/kubernetes/cni/kubeovn/controlplane/templates/crd_local.yml.j2 file must be changed to something like 192.167.{{ loop.index0 + 1 }}.0/24 (or another non-conflicting address range) to reconfigure the IP segment used for SR-IOV interfaces.
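A quick pre-install sanity check against these reserved ranges can be scripted. The helper below is illustrative and includes only a subset of the CIDRs listed above — add the ranges for your chosen CNI.

```python
# Illustrative pre-install check: flag a host IP that falls inside an
# address range reserved by the default OpenNESS deployment. Only a
# subset of the reserved CIDRs is listed here.
import ipaddress

RESERVED_CIDRS = [
    "192.168.0.0/16",  # Kube-OVN SR-IOV / interface service range
    "10.243.0.0/16",   # calico_cidr
    "10.244.0.0/16",   # flannel_cidr
    "10.32.0.0/12",    # weavenet_cidr
]

def conflicts(host_ip: str) -> list:
    """Return the reserved CIDRs that contain host_ip (empty if none)."""
    ip = ipaddress.ip_address(host_ip)
    return [c for c in RESERVED_CIDRS if ip in ipaddress.ip_network(c)]

# conflicts("192.168.1.10") -> ["192.168.0.0/16"]
# conflicts("10.0.0.5")     -> []
```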

 

Support Forum 

If you're unable to resolve your issues, contact the Support Forum.  

Product and Performance Information

1

Performance varies by use, configuration and other factors. Learn more at www.Intel.com/PerformanceIndex.