Wireless Network-Ready Intelligent Traffic Management Reference Implementation


Wireless Network Ready Intelligent Traffic Management is designed to detect and track vehicles and pedestrians, and provides the intelligence required to estimate a safety metric for an intersection. In addition, the Open Network Edge Services Software (OpenNESS) toolkit included in the reference implementation can be used to host a 5G radio access network (RAN) on the same edge device. 

Vehicles, motorcyclists, bicyclists, and pedestrians are detected and located in video frames via deep learning object detection modules. Object tracking recognizes the same object across successive frames, making it possible to estimate object trajectories and speeds. The reference implementation automatically detects collisions and near-miss collisions. A real-time dashboard visualizes the intelligence extracted from the traffic intersection along with the annotated video stream(s). 
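As a rough illustration of how a tracking-based speed estimate works (the RI's exact algorithm is not shown here), the displacement of a tracked centroid between two frames divided by the frame interval gives a per-object speed:

```shell
# Toy sketch (not the RI's actual algorithm): estimate an object's speed in
# pixels per second from two tracked centroid positions one frame apart,
# assuming a 30 fps stream. All numbers are made up for illustration.
awk 'BEGIN {
  x1 = 100; y1 = 200;                    # centroid in frame N
  x2 = 106; y2 = 208;                    # centroid in frame N+1
  fps = 30;
  d = sqrt((x2 - x1)^2 + (y2 - y1)^2);   # displacement in pixels
  printf "%.1f px/s\n", d * fps
}'
```

Mapping pixel speed to real-world speed additionally requires camera calibration, which is outside the scope of this sketch.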

This collected intelligence can be used to adjust traffic lights to optimize traffic flow through the intersection in near real time, or to evaluate and enhance the safety of the intersection. For example, emergency services notifications (e.g., 911 calls) could be triggered by collision detection, reducing emergency response times; or intersections with a higher number of detected collisions and near-miss collisions could be flagged for the authorities' attention as high-risk intersections. 

The data from the traffic cameras in the intersection can be routed easily through the OpenNESS high-speed data plane for near-real-time video analytics in the field. Further, OpenNESS helps build and manage the infrastructure to deploy, monitor, and orchestrate virtualized applications across multiple edge devices. 

Table 1

Time to Complete: 20 - 30 minutes
Programming Language: C++








Target System Requirements

Edge Nodes 

  • One of the following processors: 
    • Intel® Xeon® scalable processor. 
    • Intel® Xeon® processor D. 
  • At least 32 GB RAM. 
  • At least 256 GB hard drive. 
  • An Internet connection. 
  • CentOS* 7.6.1810. 
  • IP camera or pre-recorded video(s) 

Edge Controller 

  • One of the following processors: 
    • Intel® Core™ processor. 
    • Intel® Xeon® processor. 
  • At least 16 GB RAM. 
  • At least 256 GB hard drive. 
  • An Internet connection. 
  • CentOS* 7.6.1810.  

How It Works

The application uses the Inference Engine and DL Streamer included in the Intel® Distribution of OpenVINO™ toolkit. The solution is designed to detect and track vehicles and pedestrians. Built on the Intel® Open Network Edge Services Software (OpenNESS) toolkit included in the package, the application follows a producer-consumer architecture and uses the Edge Application Agent (EAA) service for communication between the two.

Figure 1: How it Works


The producer is a Go application that tells the consumer what to do (e.g., tracking or collision detection) and which hardware to use, such as a CPU, an integrated Intel GPU, an Intel Movidius Myriad X VPU, or Intel HDDL accelerators. The producer publishes these requests to the OpenNESS EAA service, an agent that receives data and forwards it to the registered application.

The consumer application talks to the EAA and requests the producer-generated data, which drives inference inside the consumer. Because Kubernetes platform limitations prevent the output from being shown directly in a window, a Flask server is set up inside the consumer, running on port 5000 of the Kubernetes cluster; it takes each output frame in real time and hosts it. Since the server runs inside the cluster network, it cannot be accessed from the host unless its port is exposed to a host port, which is done during deployment.

InfluxDB and Grafana come with persistent storage (previous inference data is not lost unless the persistent volume claim is deleted) and a preconfigured data source and dashboard, respectively. They run as separate deployments and communicate over Kubernetes' internal network, as do the consumer and InfluxDB. Grafana then reads the data to visualize the output. The flow is as follows: Producer > EAA > Consumer > InfluxDB > Grafana.
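For context on the Consumer > InfluxDB hop, InfluxDB ingests points in its line protocol. Below is a hedged sketch of what one write might look like; the measurement, tag, field names, and database name are illustrative assumptions, not taken from the RI source:

```shell
# Build one InfluxDB line-protocol point for a frame's inference results.
# Measurement "traffic", tag "camera", and the field names are hypothetical.
point="traffic,camera=cam1 vehicles=12,pedestrians=4,collisions=0"
echo "$point"
# Inside the cluster, the consumer could POST it to the InfluxDB service, e.g.:
# curl -XPOST 'http://influxdb:8086/write?db=smart_city' --data-binary "$point"
```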

Figure 2: Architecture Diagram


Get Started

Step 1: Install the Reference Implementation 

Follow the steps below to install the Reference Implementation. 

1. Make sure the Target System Requirements are met before proceeding further. 

  • For multi-device mode, make sure you have at least two machines (one for the controller and the other for the Edge Node). 
  • For single-device mode, only one machine is needed. (Both the controller and the edge node will be on the same device.) 

2. Open a new terminal as a root user, and move the downloaded zip package to /root folder. 

mv <path-of-downloaded-directory>/wireless_network_ready_intelligent_traffic_management.zip /root 

3. Go to the /root directory using the following command and unzip the RI. 

cd /root 
unzip wireless_network_ready_intelligent_traffic_management.zip 

4. Go to wireless_network_ready_intelligent_traffic_management/ directory. 

cd wireless_network_ready_intelligent_traffic_management 

5. Change permission of the executable edgesoftware file. 

chmod 755 edgesoftware 

6. Run the command below to install the Reference Implementation: 

./edgesoftware install 

7. During the installation, you will be prompted for the Product Key. The Product Key is contained in the email you received from Intel confirming your download. 



8. When the installation is complete, you will see the message “Installation of package complete” and the installation status for each module. 

NOTE: Installation failure logs will be available at path - /var/log/esb-cli/Wireless_NetworkReady_Intelligent_Traffic_Management_<version>/<Component_name>/install.log 

9. If OpenNESS is installed, running the following command should show output similar to the image below. All pods should be in either the Running or Completed state. 

kubectl get pods -A 
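If you want to script this check, the STATUS column of `kubectl get pods -A` can be filtered for anything that is not Running or Completed. The helper below is exercised on sample output; on a live cluster you would pipe `kubectl get pods -A` into the awk filter instead:

```shell
# Print pods whose STATUS is neither Running nor Completed.
# Columns of `kubectl get pods -A`: NAMESPACE NAME READY STATUS RESTARTS AGE.
sample='NAMESPACE     NAME          READY   STATUS              RESTARTS   AGE
kube-system   coredns-abc   1/1     Running             0          5m
openness      eaa-123       0/1     ContainerCreating   0          1m'
not_ready=$(printf '%s\n' "$sample" | awk 'NR>1 && $4!="Running" && $4!="Completed" {print $2}')
echo "$not_ready"
```

An empty result means every pod is in a healthy state.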

Step 2: Deploy to Kubernetes* 

1. Open a new terminal as a root user, and move to the working directory. 

NOTE: If you are using a Converged Edge Insights package or Development package, the path for the Wireless_NetworkReady_Intelligent_Traffic_Management reference implementation might be different.

cd /root/converged_edge_insights/Converged_Edge_Insights_<version>/Reference_Implementation__Wireless_NetworkReady_Intelligent_Traffic_Management

<version> is the version selected before download. Check the readme file for version information if needed. 

2. Run the command below inside the controller and go to the application directory. This directory has all the deployment configuration files:

cd smart_city_cera/deploy 

3. Create the monitoring namespace where the InfluxDB and Grafana deployments will run. You might receive an error message if the namespace already exists.

kubectl create namespace monitoring

4. Create a config map for Grafana so that it can load the data source configuration and the dashboard on startup.

  • The smart_city.json file is the dashboard configuration, while influxdb-datasource.yaml is the data source configuration.
  • The config map is passed to the Grafana deployment when it is created.
kubectl create configmap grafana-config -n monitoring  --from-file=influxdb-datasource.yml=influxdb-datasource.yaml  --from-file=grafana-dashboard-provider.yml=grafana-dashboard-provider.yaml  --from-file=smart_city.json=smart_city.json

5. Create a persistent volume claim for InfluxDB. This retains data even after the InfluxDB deployment stops or restarts; data is preserved unless the persistent volume claim is deleted.

kubectl create -f influxdb_pvc.yaml
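You can confirm the claim was bound before moving on. The check below is shown against sample `kubectl get pvc` output; the claim name `influxdb-pvc` and the `monitoring` namespace are assumptions, so use the name actually defined in influxdb_pvc.yaml:

```shell
# Verify that a persistent volume claim reports STATUS "Bound".
# On a live cluster, replace the sample with: kubectl get pvc -n monitoring
sample='NAME           STATUS   VOLUME     CAPACITY   ACCESS MODES   AGE
influxdb-pvc   Bound    pvc-42ab   10Gi       RWO            1m'
status=$(printf '%s\n' "$sample" | awk '$1=="influxdb-pvc" {print $2}')
echo "$status"
```

A STATUS of Pending instead of Bound usually means no persistent volume satisfied the claim.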

6. To deploy InfluxDB in Kubernetes, run the following command. This configuration file creates the deployment as well as a service, which lets other deployments on the same cluster network communicate with it:

kubectl create -f influxdb.yaml

7. Apply a similar configuration for Grafana:

kubectl create -f grafana.yaml

8. Deploy the producer and consumer application containers using the following commands:

kubectl create -f smart-city-prod-app.yaml 
kubectl create -f smart-city-cons-app.yaml 

9. Check that the pods are running by using the commands below: 

kubectl get pods  
kubectl get pods -n monitoring 


NOTE: If the pods are in the “ContainerCreating” state, please wait for some time, since Kubernetes must pull the images from the local registry before deploying them. This happens only on the first deployment, and the wait time depends on the available network bandwidth. 

10. Expose the Flask service running inside the consumer application using the following command: 

kubectl port-forward deployment/smart-city-cons-app 8000:5000 

NOTE: Here “8000” is the host port and “5000” is the Flask port inside the cluster. The output feed is thus redirected to port 8000 so that the output can be viewed. If this step is not completed, Flask hosts the output on an internal network port, which cannot be accessed from an external system's browser. The port forward exposes the internal port 5000 on the system's port 8000. 

11. Keep the terminal for port forwarding open and continue with the next steps. 
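With the port forward active, a quick smoke test from another terminal confirms the stream is reachable on the host; serving on the root path "/" is an assumption about the Flask route:

```shell
# Host-side URL for the forwarded Flask service (host port 8000 -> cluster port 5000).
url="http://localhost:8000"
echo "$url"
# With the consumer running, this should return an HTTP 200:
# curl -s -o /dev/null -w '%{http_code}\n' "$url"
```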

Data Visualization on Grafana

1. To obtain the CLUSTER-IP of Grafana, open a new terminal, log in as the root user (su root), and run the following command. Copy the Grafana cluster IP.

kubectl get svc -n monitoring 
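If you prefer to grab the IP programmatically, the CLUSTER-IP column can be extracted from the service listing. The snippet below runs against sample output; the service name `grafana` is an assumption, so match it to the name shown by `kubectl get svc -n monitoring`:

```shell
# Pull the CLUSTER-IP for the grafana service from `kubectl get svc` output.
# On a live cluster, replace the sample with: kubectl get svc -n monitoring
sample='NAME       TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
grafana    ClusterIP   10.104.123.45   <none>        3000/TCP   5m
influxdb   ClusterIP   10.104.67.89    <none>        8086/TCP   6m'
ip=$(printf '%s\n' "$sample" | awk '$1=="grafana" {print $3}')
echo "http://$ip:3000"
```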

2. Navigate to <Grafana-CLUSTER-IP>:3000 in your browser.

3. Log in with admin as both the user and password. You will be asked to set a new password on your first login.

4. Click on Home.


5. Select Smart_City.

An example of the Smart City dashboard:


The dashboard shows the number of vehicles, pedestrians, and collisions detected on the left side, which can be used to adjust traffic lights or to call emergency services when collisions are detected. The visual output on the right side is the live stream of inference running on the input camera feed or video.

NOTE: If the visual output takes time to load and stream, this is a known limitation: the streaming path requires further performance optimization and is still in beta.

Stop the Application

To remove the deployment of this reference implementation, run the following commands.

NOTE: This will remove all the running pods, as well as the data and configuration stored on the device.

kubectl delete -f smart-city-cons-app.yaml  
kubectl delete -f smart-city-prod-app.yaml  
kubectl delete -f grafana.yaml  
kubectl delete -f influxdb.yaml  

NOTE: If you are using a Converged Edge Insights package or Development package, the path for the Wireless_NetworkReady_Intelligent_Traffic_Management reference implementation might be different.

cd /root/converged_edge_insights/Converged_Edge_Insights_<version>/Reference_Implementation__Wireless_NetworkReady_Intelligent_Traffic_Management

<version> is the version selected before download. Check the readme file for version information if needed.

Remove Grafana Configuration and Persistent Storage

NOTE: Running the commands below will completely delete the previous inference data. Perform this step only if necessary.

kubectl delete -f influxdb_pvc.yaml  
kubectl delete configmaps grafana-config -n monitoring

Summary and Next Steps

This application successfully implements Intel® Distribution of OpenVINO™ toolkit plugins to detect and track vehicles and pedestrians and to estimate a safety metric for an intersection. It can be extended further to support feeds from a network stream (RTSP camera).

As a next step, you can experiment with accuracy/throughput trade-offs by substituting alternative object detection models, tracking algorithms, and collision detection algorithms.

In addition, you can onboard a 3rd party 5G RAN implementation that will make it easy to host a private or public 5G small cell. To perform video analytics, wireless IP cameras can be connected through the small cell, and the video traffic from the cameras can be routed via high speed OpenNESS data plane to the visual intelligence container. With the 5G RAN and visual intelligence workloads hosted in a single system, the solution benefits from faster data transfers between the workloads and reduced total cost of ownership.

Learn More

To continue your learning, see the following guides and software resources:



Active Internet Connection

Make sure you have an active internet connection during the full installation. If you lose Internet connectivity at any time, the installation might fail. 

CentOS* Linux installation

Make sure you are using a fresh CentOS* Linux installation. Previously installed software, especially Docker*, Docker Compose*, and Kubernetes*, can cause issues. 

OpenNESS Failures 

  • If you get a build error at TASK [telemetry/tas: build TAS], run the following two commands and start the installation process again: 

yum remove git* -y 
yum install -y --enablerepo=ius-archive git2u-all 

  • If you get a build error at TASK [install docker-compose], run the command below on the target system and start the installation process again. Make sure you know which device the command is failing on (Edge Node/Edge Controller). 

yum install libffi-devel -y 

  • If you get the error CryptographyDeprecationWarning.. SyntaxError: invalid syntax, run the command below on the target system and start the installation process again: 

pip install bcrypt==3.1.7 

  • If you get an error like end the playbook: either ovs-ovn pod did not start or the socket was not created, restart the installation process. Sometimes pods take extra time to start. 

Invalid or Duplicate Hostnames

If you get the error hostname of this machine changed, exit the active terminal, open a new terminal, and start the installation process again; the installation might fail otherwise. Invalid or duplicate hostnames are not allowed. 

NTP Server Error

If you get an NTP server related error, verify the NTP server availability: 

ntpdate -qu <ntp-server-address> 

For example: 

ntpdate -qu 0.pool.ntp.org 

Known Issues 

  • Grafana UI is unable to load on some devices with DNS configured. 

    • Fix: Remove the DNS entry, reboot the device, and redeploy the applications.  

Support Forum

If you're unable to resolve your issues, contact the Support Forum.

Product and Performance Information

Performance varies by use, configuration, and other factors. Learn more at www.Intel.com/PerformanceIndex.