Using Unity to expand SAW's obstacle detection into navigation direction


Hi everyone,

Having learned the "NavMesh" path-finding system in the Unity game engine (which uses AI to help an object work out how to steer around obstacles in its path in real time), I spent a few days thinking about how a NavMesh might be used to expand upon SAW's vibration-based imminent-collision detectors.

The design idea below is not exactly a making-of guide, as I do not have an R200 camera to test it with.  Instead, I use my experience with the F200 camera and knowledge of Unity's workings to speculate how a Unity R200 application that can give 'sat-nav' instructions to the SAW wearer might work.


Create an object in Unity that will act as a master linkage point for the objects in this project.  I recommend using a type of object called an Empty GameObject that has no solid form.  This is created by going to the 'GameObject' menu of Unity and selecting the sub-option 'Create Empty.'
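For completeness, the same empty master object can also be created from script rather than the menu.  This is just a sketch; the object name "Master" is my own choice and is not required by anything else in this article.

#pragma strict

// Sketch: creates an empty master GameObject by code, equivalent
// to choosing GameObject > Create Empty in the editor.
function Start () {

   var master : GameObject = new GameObject("Master");

}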

Next, create an object representing the human body's height and width.  This could just be a simple cube stretched into a rectangle.  Since I already had an existing RealSense-powered avatar though, I chose to use that as my object representing the human wearer.

Select your avatar object in the Hierarchy panel of Unity and drop it onto the Empty GameObject that you first created, in order to make the avatar object a sub-object "child" of the Empty GameObject.

It is important to emphasize that the "avatar" object will never be seen by the human wearer.  The ultimate purpose of the Unity application that it is running in is to generate audio "sat-nav instruction" outputs for the SAW wearer.  So although we are using a realistic avatar in our example simply because we already had it pre-made, a simple plain rectangle would serve just as well as your avatar.


Next, we need to give our avatar object a Unity "NavMesh Agent" component so that it is able to use AI to calculate a route around a detected obstacle and then carry out that route, adjusting to any new obstacles in its path in real time.  (In Unity's terminology, the "NavMesh" is the baked walkable surface of the scene, and the NavMesh Agent is the component that plans and follows paths across it.)

The NavMesh Agent should be added to the 'master parent' object of your avatar's object hierarchy (the object at the very top of the hierarchy.)  If you are just using a plain stretched cube then simply place the agent on that cube, since you have no other sub-objects attached at this point.
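As a rough sketch of what driving the agent might look like in Unity JavaScript (untested, since I have no R200 to test the full system with; the 'waypoint' variable is a hypothetical target object that you would assign in the Inspector):

#pragma strict

// Sketch: a NavMesh Agent steering the avatar toward a waypoint.
// The agent plots a route across the baked NavMesh and re-plans
// when obstacles (our activated pads) appear in its path.
var waypoint : Transform;  // hypothetical target, assigned in the Inspector

private var agent : NavMeshAgent;

function Start () {

   agent = GetComponent.<NavMeshAgent>();

}

function Update () {

   agent.SetDestination(waypoint.position);

}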


Create a set of eight 'Empty GameObject' type objects (objects that have no solid form) and equip them with 'Box' type collider fields.  (Note: for an obstacle that appears at run-time to affect the NavMesh path-finding, Unity generally also requires a 'Nav Mesh Obstacle' component on the object, so consider adding one alongside each collider.)  Size the colliders into small thin pad shapes and position them in front of your avatar object at positions that roughly correspond to the positions of the real-world vibrators on the SAW user's body.

Position the colliders so that they are just ahead of the avatar object rather than close to the skin.  The reason for this will be explained a little later in this article.

Drag and drop the Empty GameObject pads onto the master parent Empty GameObject to make the pads child-linked to that master.
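If you prefer to do the child-linking from script rather than by drag-and-drop, a sketch of the equivalent code is below (the 'master' variable is assigned in the Inspector by dragging the master object onto it):

#pragma strict

// Sketch: makes the object this script lives on a child of the
// master linkage object, same as dragging it in the Hierarchy panel.
var master : Transform;  // assign the master Empty GameObject here

function Start () {

   transform.parent = master;

}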

Because the pads and the avatar object are both childed to the same parent, they will be able to move independently of one another.  In other words, if the avatar object moves then the pads will not move with it, because they are on the same level of the object hierarchy as the avatar object rather than attached children of it.  If the pads were attached to the avatar then they would have no independence of movement, as they would be compelled to move wherever the avatar went.  

This would make it impossible for the avatar to collide with the pads, as the pads would always remain at exactly the same distance from the avatar.  This is why the avatar and pads need to be able to move separately: so that the avatar can be steered around the pad obstacles by the navigation AI inside the avatar.


Set the collider pads to be inactive by default.


Place a pair of scripts inside each of the Empty GameObjects that will enable or disable the collider field, depending on which of the two scripts has an activation signal sent to it.

Here is the code for such a pair of scripts, written in Unity JavaScript.


#pragma strict

// Script 1: switches this pad's collider off.
function Start () {

   GetComponent.<Collider>().enabled = false;

}


#pragma strict

// Script 2: switches this pad's collider on.
function Start () {

   GetComponent.<Collider>().enabled = true;

}


Write a script that will turn on the collider of a specific Empty GameObject pad when the real-world vibrator on the SAW user's body that corresponds to that pad's body position is activated, and keep the collider active for the duration that the vibrator is buzzing.  When that vibrator ceases buzzing, the script should turn that Empty GameObject's collider off again.

I wish that I could provide pre-made code for such a script but as I said at the start of this article, I do not have an R200 camera and so this is a guide about general operating principles of a possible SAW sat-nav system rather than a detailed making-of guide.
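Purely to illustrate the principle, though, a hypothetical sketch is given below.  The function 'VibratorIsBuzzing' does not exist anywhere - it is an imaginary stand-in for whatever mechanism your project uses to read the real vibrator's state - so treat this as pseudocode rather than working code.

#pragma strict

// Hypothetical sketch: keeps this pad's collider switched on only
// while the matching real-world vibrator is buzzing.
// VibratorIsBuzzing() is an imaginary placeholder function.
var vibratorIndex : int;  // which of the eight vibrators this pad mirrors

function Update () {

   GetComponent.<Collider>().enabled = VibratorIsBuzzing(vibratorIndex);

}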

It is at this point that we can explain the reason for positioning the Empty GameObject pads away from the surface of the body.  When the collider field is activated by the actuation of its matching vibrator in the real world, the navigation AI inside the avatar object will recognize that a physical obstacle (represented by the collider field) has suddenly appeared right in front of it, similar to how a visually impaired person may only see an approaching obstacle at the last moment, if at all.

The navigation AI will then try to work out a movement path for the avatar object that steers it around the activated collider, which represents the real-world obstacle whose detection triggered the vibrator buzzing and, in turn, the pad's collider field.  

If the obstacle is too close then the avatar object will be unable to plot a route around it and will not try to.  This is why we place the colliders away from the body - to give the navigation AI time to react when the collider suddenly appears on its radar and to plot a path around it.


Create a set of three large Empty GameObjects that are taller and wider than the avatar object, and give each one a 'Box' type collider with its 'Is Trigger' option ticked so that the avatar can pass through it.  Place one to the left of the avatar object, one to the right and one in front.  As we did with the pads, child-link the three Empty GameObjects to the master linkage object so that the avatar can move independently of them instead of the boxes moving with the avatar.


Create a set of three trigger scripts that will activate when the avatar steps within their boundaries.  Place one of the scripts in each of the three Empty GameObject boxes surrounding the avatar.  Give them an 'OnTriggerStay' type function, so that the script will loop continuously for as long as the avatar is within the boundaries of the collider field of the Empty GameObject hosting that script and then cease running when the avatar moves outside of the field.

Program each of the three scripts to play an audio clip to the SAW wearer saying "Step left", "Step right" or "Step forward."  Place the 'left' script in the large box on the left-hand side of the avatar, the right-script in the right-hand box and the forward-script in the box in front of the avatar.
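A sketch of what one of these trigger scripts might look like is below (assuming an AudioSource component holding the relevant voice clip is attached to the same box, and that the avatar object has been given the tag 'Avatar'):

#pragma strict

// Sketch: plays this box's voice clip ("Step left" etc.) for as
// long as the avatar remains inside the box's trigger collider,
// without restarting the clip on every frame.
function OnTriggerStay (other : Collider) {

   if (other.CompareTag("Avatar") && !GetComponent.<AudioSource>().isPlaying) {

      GetComponent.<AudioSource>().Play();

   }

}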


Here's how the system design in this article works.

1.  When a vibrator on the user's real body buzzes, it causes the Empty GameObject pad associated with that body area to switch its collider on so that it registers as an obstacle to the navigational NavMesh component inside the avatar object.

Because the collider field is only active so long as the vibrator on the real-life body is buzzing, this means that the avatar object will keep trying to steer away from the pad until the user in the real world has cleared the obstacle and the vibrator stops buzzing, at which point the pad's collider turns off and the avatar stops trying to walk away from the obstacle.

2.  As the avatar object steers away from the pad, its travel causes it to move inside one of the large Empty GameObject boxes at the sides and front of the avatar.   Stepping into the box will cause its scripting to play a voice clip telling the SAW user to step left, right or forwards.

Because the avatar keeps moving for as long as one of the real-life vibrators is buzzing - with the obstacle in front of the user generating a collider field on the corresponding pad that the avatar is trying to plot a route away from - the scripts in the large boxes will keep looping, continuously playing the 'Step left', 'Step right' or 'Step forward' audio clip for as long as the avatar is steering through one of the fields.

This potentially means that the system can deal with a real-world object of any size, even though the pad itself is only small.  This is because the vibrator will keep buzzing and compelling the avatar's navigation AI to steer away in a particular direction until the user has cleared the real-world obstacle and the vibrator buzzing ceases.


Due to the general-principles approach of this guide (not a detailed how-to), it is intended as a way to inspire thinking about expanding the capabilities of SAW beyond simple obstacle detection.  The reader is free to adapt any of the ideas outlined in this article for their own projects.  Best of luck!


How is it possible that no one is working on this? 

I have noticed in recent times that non-RealSense products with alternative approaches to obstacle sensing for the vision-impaired are being developed.  I suspect that as time goes by and the RealSense product line evolves (e.g. the rumored forthcoming RealSense augmented-reality headset), ways will be found to use RealSense for obstacle vision that don't require the vibrating body-pads.
