Deep Learning Brings Touch to Robots | AI News

We are constantly trying to make robots more human-like, and one important thing scientists have tried to master is how to make robots understand their environment. Learn how deep learning can bring "touch" to robots!

Learn about Somatic

Read about SenseNet: 3D Objects Database and Tactile Simulator

Look into OpenAI Gym

Learn about Intel’s Reinforcement Learning Coach

Read the full article

Subscribe to the Intel Software YouTube Channel

AI News YouTube Playlist

Robots. Stephanie has said it before. 

Robots, they're the future. 

Indeed they are. This time, we talk about an unlikely AI advancement: robots that have tactile feedback. I'm David Shaw, and in this episode of AI News, we look at how deep learning is bringing touch to robots.

One aspect of robots that makes them so cool is that they're a reflection of us. We are constantly trying to make robots more and more human-like. One thing scientists have tried to master is how to make robots understand their environment. While many advances have been made in the realm of computer vision, this is just one aspect of understanding. 

A richer understanding can lead to more advanced systems with enhanced capabilities. To advance further, the industry can benefit from adding robotic touch to the picture; to put it more scientifically, tactile sensing and haptic behavior. So how can we make this possible? Through deep learning algorithms. Deep learning creates many opportunities for developing robotics applications, and bringing touch to robots is becoming a huge industry driver.

Jason Toy, an AI enthusiast, sought to make more headway in this emerging technology. He embarked on a project to train AI systems to interact with their environment based on haptic input. The research focuses on adding neural systems and tactile feedback to robotic systems, expanding their mapping of the environment beyond visual imagery to include contours, textures, shapes, hardness, and object recognition by touch.

Jason's ongoing project, SenseNet: 3D Objects Database and Tactile Simulator, can be used as a reinforcement learning environment. He used Intel's Reinforcement Learning Coach to help accelerate training and testing of reinforcement learning algorithms. Read the full article to learn about Jason's process and how AI is expanding the boundaries of robotics. Thanks for watching, and I'll see you next week.
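
For readers who want a concrete picture of what "using SenseNet as a reinforcement learning environment" means, here is a minimal sketch of an OpenAI Gym-style interaction loop. The environment ID "SenseNet-v0", the interpretation of the observations, and the random-action placeholder agent are assumptions for illustration only; consult the SenseNet and Reinforcement Learning Coach documentation for the actual environment names and training setup.

# Minimal sketch of a Gym-style reinforcement learning loop.
# "SenseNet-v0" is a hypothetical environment ID; substitute whatever ID
# the SenseNet tactile simulator registers in your installation.
import gym

env = gym.make("SenseNet-v0")  # hypothetical tactile-simulation environment

num_episodes = 5
for episode in range(num_episodes):
    observation = env.reset()   # e.g. touch-sensor readings for the current object
    done = False
    total_reward = 0.0
    while not done:
        action = env.action_space.sample()  # random policy standing in for a trained agent
        observation, reward, done, info = env.step(action)
        total_reward += reward
    print(f"episode {episode}: total reward = {total_reward}")

env.close()

In practice the random policy above would be replaced by a learning agent; a framework such as Intel's Reinforcement Learning Coach can manage that training loop and the agent's algorithm once the environment exposes this Gym-style interface.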