ASTRO - The Robot solution for a Safer and more Productive Workplace

1. Robots and ASTRO

Robots are becoming a ubiquitous part of our society, helping with a wide array of tasks such as cleaning our homes, delivering room service or guiding us to our departure gate at the airport. Adopting robot solutions is becoming a sensible choice, if not a priority, for a growing number of businesses.

ASTRO by Helios Vision 3D Visualization

ASTRO is a versatile robot designed to improve productivity and safety in the workplace. Its name is an acronym for Assistance, Safety and Telepresence Robot Officemate (A.S.T.R.O.).

ASTRO Prototype - 3D Avatar

1.1. What does ASTRO do?

ASTRO uses Artificial Intelligence to keep people safe, informed and connected, integrating a powerful set of features into a highly functional package.

Quick access to important information

ASTRO has conversational intelligence that allows it to talk to colleagues and assist them with their queries. It is compatible with APIs such as IBM Watson, Amazon Alexa and Microsoft Cognitive Services, and builds upon OpenCV* to recognize faces and interpret facial cues.

UVC-compliant depth camera

Keeping people safe, healthy and comfortable

The well-being of people working in the office is a top priority for ASTRO. Using its sensor suite, ASTRO makes sure everything from temperature to air quality is optimal within the office environment.

Making virtual presence simple and efficient

Great communication is essential for distributed teams working together across different cities and time zones. ASTRO adds a new layer of flexibility to remote video conferencing and provides a sense of being truly present in meetings with colleagues and clients.

ASTRO Prototype - Telepresence

1.2. What makes ASTRO special?

Advanced Human-Machine Interaction

ASTRO introduces a comprehensive approach to human-machine interaction: an AI-driven 3D avatar that can see and understand its interlocutor and provide useful answers to queries.

In a nutshell, ASTRO possesses Audiovisual Conversational Intelligence and features a suite of cognitive functions such as Face Detection, Emotion Recognition, Advanced Speech and Language Understanding.

ASTRO Prototype - Smart Assistant

Forward-looking visual interface

A very important step in developing the 3D avatar for ASTRO was researching the most popular robots on the market and analyzing their strengths and weaknesses with regard to visual expression.

We noticed a trend toward user interface over-simplification and conservative design. We believe the main reasons behind this are:

  • Fear of the “uncanny valley”
  • Technical challenges implied by more complex visual interfaces

In a world where 3D experiences, gaming and technologies like VR and AR are bridging the gap between the virtual space and reality, interacting with a computer-generated avatar is steadily becoming the norm rather than the exception.

We believe that our visual interface approach for ASTRO blazes a new trail for robot interaction with an intelligent 3D avatar that is aware, responsive and relatable.

ASTRO Visual Interface (*Contains sugar, spice and everything nice)

2. Software

ASTRO’s built-in Artificial Intelligence interprets natural language in order to provide useful information or execute tasks such as:

  • Booking a meeting with a fellow colleague or client
  • Getting real-time feedback from its sensor readings
  • Patrolling the office after work hours

ASTRO Environment Monitoring 3D Visualization

2.1. Navigation and sensing

ASTRO understands its surrounding space and discovers all the accessible areas within the office. By combining navigation with a robust sensor suite, ASTRO can collect environment data, identify problem areas and prevent possible hazards.

References to the sensor documentation and the Arduino 101 .ino file for monitoring air quality, CO2, temperature and light are available on our GitHub - https://github.com/heliosvision/arduino101

Rviz - ROS 3D Visualization Tool

ASTRO uses ROS - http://www.ros.org/ - short for Robot Operating System - for navigation. ROS is composed of several open source software libraries and tools that make building robot applications much easier. It supports a wide array of hardware platforms, components and sensors, not to mention state-of-the-art computer vision software and powerful developer tools.

During development we had the opportunity to explore a number of built-in ROS capabilities, and we decided to help newcomers bootstrap their robot development by writing a step-by-step tutorial on how to set up and use ROS with a Turtlebot and a UVC-compliant depth camera for SLAM (see chapter 4).

2.2. Cognition

Contemporary advances in Cognitive Science, Natural Language Processing, Computer Vision and Machine Learning have brought robot conversation, vision and decision-making capabilities closer to human-level.

With outstanding open libraries and machine learning platforms, such as OpenCV and TensorFlow, research is accelerated and time to market is reduced. Furthermore, toolkits like the Intel Computer Vision SDK provide platform-specific optimizations for both OpenCV and TensorFlow and boost inference performance.

AI cloud frameworks and services like IBM Watson, Amazon Alexa or Microsoft Cognitive Services have also become very popular, as they provide accessible means of adding robot intelligence, with comprehensive developer APIs and robust tools for building custom interaction models.

Developers can integrate AI functionality such as language, speech and vision into new products simply by accessing RESTful web services. Sample code for using the Azure Cognitive Services Text Analytics API to recognize the sentiment and key phrases from text input can be found on our GitHub page - https://github.com/heliosvision/AzureCognitiveServicesSamples
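
As an illustration, the sketch below calls the Text Analytics sentiment endpoint with curl; the region, API version and subscription key shown here are placeholders and may differ from the ones used in our samples.

# Minimal sketch of a sentiment request to the Text Analytics REST API.
# "westus", the v2.0 API version and YOUR_SUBSCRIPTION_KEY are placeholders -
# use the endpoint and key from your own Cognitive Services resource.
curl -X POST "https://westus.api.cognitive.microsoft.com/text/analytics/v2.0/sentiment" \
     -H "Ocp-Apim-Subscription-Key: YOUR_SUBSCRIPTION_KEY" \
     -H "Content-Type: application/json" \
     -d '{"documents": [{"id": "1", "language": "en", "text": "ASTRO booked the meeting room and the air quality looks great today."}]}'
# The response contains a sentiment score between 0 (negative) and 1 (positive);
# posting the same payload to the /keyPhrases endpoint returns key phrases instead.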

2.3. Security

Security represents a major concern in the world of connected devices. We opted for Intel IoT Gateway Technology to add an extra layer of security for storing sensor data in the cloud and communicating with ASTRO from outside the office.

A step-by-step tutorial for connecting the Intel IoT Gateway to popular cloud platforms, such as Amazon Web Services (AWS)*, Google Cloud Platform*, IBM Watson IoT* and Microsoft Azure*, can be found here: https://software.intel.com/en-us/getting-started-with-intel-iot-gateways-and-iotdk.

3. Hardware

ASTRO is powered by an Intel-enabled small form factor computing hardware stack including the UP Squared kit and Intel IoT Gateway.

3.1. Head module

While the current ASTRO prototype Visual Intelligence runs on a Windows tablet, for the next iteration we are aiming to step up performance by upgrading to an Intel Core m5 Compute Stick and a UVC-compliant depth camera.

3.2. Robot base

An UP Squared IoT board and a UVC-compliant depth camera are at the core of ASTRO’s navigation capabilities. The UP Squared board is one of the most powerful maker boards available at this time, sporting a quad-core x86 CPU clocked at 2.5 GHz and 8 GB of RAM, and providing support for air quality, CO2, temperature and light sensors.

4. SLAM with ASTRO and ROS - A step-by-step guide

Software and hardware prerequisites

The ASTRO base is powered by an UP Squared IoT board. It runs the LTS base ROS version - Kinetic Kame - on top of Ubuntu 16.04 LTS and uses a UVC-compliant depth camera for map building and autonomous navigation - ROS SLAM.

The ASTRO visualization workstation is powered by an Intel Core i7 NUC and runs the full LTS ROS version (Desktop-Full), which sports a processing-intensive 2D/3D computer vision toolset.

ROS base - UP Squared IoT board

For the following setup walkthrough, we assume Ubuntu 16.04 is installed on the UP Squared, and that a UVC-compliant depth camera and a Turtlebot 2 (Kobuki) are connected and powered up. Complete tutorials on how to get started with the UP Squared kit and Turtlebot 2 are available at https://software.intel.com/en-us/iot/hardware/up-squared-grove-dev-kit and at http://learn.turtlebot.com.

Installation

Set sources

sudo sh -c 'echo "deb http://packages.ros.org/ros/ubuntu $(lsb_release -sc) main" > /etc/apt/sources.list.d/ros-latest.list'

Set keys

sudo apt-key adv --keyserver hkp://ha.pool.sks-keyservers.net:80 --recv-key 421C365BD9FF1F717815A3895523BAEEB01FA116

Update Debian package index

sudo apt-get update

Install ROS - Base

sudo apt-get install ros-kinetic-ros-base

Init rosdep

sudo rosdep init
rosdep update

Set environment

echo "source /opt/ros/kinetic/setup.bash" >> ~/.bashrc
source ~/.bashrc
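
To confirm the environment is set correctly, the installed distribution can be checked (an optional sanity check, not part of the original walkthrough):

rosversion -d        # should print "kinetic"
printenv | grep ROS  # lists the ROS_* environment variables exported by setup.bash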

Enable Kernel sources

wget -O enable_kernel_sources.sh http://bit.ly/en_krnl_src
bash ./enable_kernel_sources.sh

Install UVC-compliant depth camera

sudo apt-get install 'ros-*-realsense-camera'

Load the uvcvideo kernel module

sudo modprobe uvcvideo
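
To verify the module loaded (optional check):

lsmod | grep uvcvideo   # should list the uvcvideo module if it loaded successfully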

Install Turtlebot

sudo apt-get install ros-kinetic-turtlebot ros-kinetic-turtlebot-apps ros-kinetic-turtlebot-interactions ros-kinetic-turtlebot-simulator ros-kinetic-kobuki-ftdi ros-kinetic-ar-track-alvar-msgs

Udev rules Kobuki

. /opt/ros/kinetic/setup.bash
rosrun kobuki_ftdi create_udev_rules

Set UVC-compliant depth camera

export TURTLEBOT_3D_SENSOR=YOUR_CAMERA_TYPE
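
YOUR_CAMERA_TYPE is a placeholder for one of the sensor names recognized by the installed turtlebot packages. As an assumed example, an Intel RealSense R200 camera would use r200, and the setting can be persisted for new shells:

export TURTLEBOT_3D_SENSOR=r200                      # example value only - use the name matching your camera
echo "export TURTLEBOT_3D_SENSOR=r200" >> ~/.bashrc  # persist the setting across sessions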

Source setup

echo "source /opt/ros/kinetic/setup.bash" >> ~/.bashrc

Setup SSH

sudo apt-get install openssh-server

Network configuration

echo export ROS_MASTER_URI=http://localhost:11311 >> ~/.bashrc
echo export ROS_HOSTNAME=IP_OF_UP_SQUARED >> ~/.bashrc
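
IP_OF_UP_SQUARED is a placeholder for the board's actual address on the office network, which can be looked up before editing ~/.bashrc:

hostname -I   # prints the board's IP address - substitute it for IP_OF_UP_SQUARED above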

Mapping demo

Start roscore in new terminal

roscore

Launch the turtlebot in new terminal

roslaunch turtlebot_bringup minimal.launch

Launch the mapping in new terminal

roslaunch turtlebot_navigation gmapping_demo.launch
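
Once gmapping is running, the mapping topics can be inspected from any terminal on the same ROS network (a quick check, assuming the default topic names):

rostopic list        # should include /map and /scan among the published topics
rostopic info /map   # /map is published by the gmapping node as a nav_msgs/OccupancyGrid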

Save current map (optional)

rosrun map_server map_saver -f /tmp/my_map
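
map_saver writes two files - the map image and its metadata - which can be confirmed with:

ls /tmp/my_map.*   # expect my_map.pgm (occupancy grid image) and my_map.yaml (resolution and origin)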

 

ROS workstation - i7 NUC

For the following setup walkthrough, we assume Ubuntu 16.04 is installed on the NUC.

Installation

Set sources

sudo sh -c 'echo "deb http://packages.ros.org/ros/ubuntu $(lsb_release -sc) main" > /etc/apt/sources.list.d/ros-latest.list'

Set keys

sudo apt-key adv --keyserver hkp://ha.pool.sks-keyservers.net:80 --recv-key 421C365BD9FF1F717815A3895523BAEEB01FA116

Update Debian package index

sudo apt-get update

Install ROS - Desktop Full

sudo apt-get install ros-kinetic-desktop-full

Init rosdep

sudo rosdep init
rosdep update

Set environment

echo "source /opt/ros/kinetic/setup.bash" >> ~/.bashrc
source ~/.bashrc

Network configuration

echo export ROS_MASTER_URI=http://IP_OF_UP_SQUARED:11311 >> ~/.bashrc
echo export ROS_HOSTNAME=IP_OF_NUC >> ~/.bashrc
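
With the master URI pointing at the UP Squared, the connection can be verified from the NUC before launching any visualization (assuming roscore and the Turtlebot bringup are already running on the base):

source ~/.bashrc
rostopic list   # should list the topics published on the UP Squared, e.g. /odom and /scan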

Mapping demo

Start visualization in new terminal

roslaunch turtlebot_rviz_launchers view_navigation.launch

Remote keyboard control in new terminal

roslaunch turtlebot_teleop keyboard_teleop.launch --screen

Save current map (optional)

rosrun map_server map_saver -f /tmp/my_map

 

About the developers

Leveraging a background in Computer Vision, Artificial Intelligence and Internet of Things, Helios Vision develops forward-thinking projects such as HELIOS and ASTRO.
