SAW - my first thoughts

Hi everyone,

Having had an initial read of the Intel RealSense Spatial Awareness Wearable (SAW) website featured in the Intel Developer Zone newsletter, I thought I would share some thoughts on it.  First though, here's a link to that site if you haven't seen it.

https://software.intel.com/en-us/spatial-awareness-wearable

Some background information: in my RealSense project, I built a full-body avatar that had to work around the problem of not being able to track a walking person with a static F200 camera.  Instead of true leg tracking, it uses an auto-walking animation triggered by lifting the head.  The technicalities of real-time leg tracking (possible with the F200, though impractical) are not entirely relevant to a discussion about the R200-based SAW, as the R200 lacks the hand-joint tracking that makes leg detection feasible with the F200.  I mention this simply to illustrate that I very much understand the issues and challenges involved in tracking a moving person with RealSense.

Throughout my project, a feature high on my dream wish-list - if true walking had been practical - was a way of detecting obstacles in the path of the walker in a limited-size room.

A compromise system I used to give physicality to virtual objects in a Unity game environment was C# scripting that turns off the RealSense tracking scripts when an avatar body part touches the collider field surrounding another object, then switches tracking back on when the body part separates from the object's surface.  This gives the user the visual impression that the body part has been stopped by contact with the surface of the object.  This could conceivably be expanded to work with real objects that the camera identifies in a room using Scene Perception.
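In outline, the script logic looks something like the sketch below (simplified for illustration: the "Obstacle" tag is a stand-in for however solid objects are labeled, and the tracking script is looked up by name rather than using my project's exact wiring).

using UnityEngine;

// Attach to an avatar body part that has a trigger collider. When the part
// enters another object's collider field, the RealSense tracking script is
// paused so the part appears stopped by the surface; tracking resumes when
// the part leaves the surface.
public class SurfaceStopper : MonoBehaviour
{
    private Behaviour tracking;

    void Start()
    {
        // The RealSense Unity Toolkit's TrackingAction drives this body part;
        // looking it up by name keeps the sketch namespace-agnostic.
        tracking = (Behaviour)GetComponent("TrackingAction");
    }

    void OnTriggerEnter(Collider other)
    {
        if (other.CompareTag("Obstacle"))
            tracking.enabled = false;   // freeze this part at the surface
    }

    void OnTriggerExit(Collider other)
    {
        if (other.CompareTag("Obstacle"))
            tracking.enabled = true;    // resume following the camera data
    }
}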

The HTC Vive, developed with Valve, provides obstacle warnings to its headset wearers with its Lighthouse system, where a pair of base stations are placed in two corners of the room.  These rapidly sweep lasers across the room, and the player's position is calculated from when and where the beams strike sensors on the headset.  When the player approaches a physical object such as a wall or piece of furniture, a visual and audio warning is given in the headset.  Even though there is no way for the headset to actually prevent the wearer from walking into a physical obstacle, it has been demonstrated that the feedback alone is enough to make the player change direction or stop walking forward.

The RealSense SAW wearable approaches this problem by converting sensor readings about obstructions into vibrations on an actuator device on the wearable.  It is reminiscent of the secret-finding mechanic in the classic Nintendo 64 game 'The Legend Of Zelda: Ocarina Of Time', where walking near a wall with a secret hidden behind a breakable spot would make the control pad rumble in the hands.
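As a rough illustration of that principle (my guess, not the actual SAW code - the ranges below are invented), the conversion from obstacle distance to vibration strength could be as simple as a linear ramp:

// Hypothetical sketch: map an obstacle distance from the depth camera to a
// vibration strength between 0 (far away / no obstacle) and 1 (touching).
// The thresholds are illustrative, not taken from the SAW design.
public static class VibrationMapper
{
    const float MaxRange = 3.0f;   // metres beyond which obstacles are ignored
    const float MinRange = 0.3f;   // metres at which vibration saturates

    public static float IntensityFor(float obstacleDistanceMetres)
    {
        if (obstacleDistanceMetres >= MaxRange) return 0f;
        if (obstacleDistanceMetres <= MinRange) return 1f;
        // Linear ramp: the closer the obstacle, the stronger the vibration.
        return (MaxRange - obstacleDistanceMetres) / (MaxRange - MinRange);
    }
}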

Other thoughts that come to mind when reviewing the SAW website include:

*  One of the challenges that the developers of the Vive system have had to deal with is how to miniaturize the system.  Beta versions of the equipment had the headset attached to a computer via a weighty worn pack and tethering wires that restricted the wearer's mobility.  It was intended that the final consumer version would be lightweight and wireless.

Whilst SAW is already wireless, it does require an Ultrabook computer.  If it uses IP addresses and TCP, then I guess the laptop could be located anywhere in the world, so long as it could connect to the wearable over the internet via wi-fi.  Normally, one would need to be near a free public wi-fi connection, like those in restaurants and airports, to use such equipment outdoors.  SAW, however, comes with a portable wireless router of its own.

This aspect of the technology would benefit from a clearer explanation, as it is not obvious whether SAW's router is a wireless "dongle" that connects to a nearby wireless internet connection (like the restaurant / airport example above) or whether the router somehow generates its own connection to the internet (like 3G / 4G cellphone internet connections).  The sense that I get from the website is that the portable router attaches to the laptop, and SAW uses a wi-fi component to pass data to and from the laptop-attached router.  This would suggest that users are limited to being in the same room as the laptop, similar to the Vive system.

If this is the case, the system could still conceivably be made outdoors-portable by carrying the laptop in a backpack; the weight may not be too much of a problem if it is an Ultrabook.  It is also conceivable that data generated by the SAW could be transferred to other people via an internet connection (whether public wi-fi or a 3G / 4G cellphone data connection), so that they can experience what the SAW wearer is experiencing.  They could also interact with the wearer by sending data back along that connection to the SAW, adding to the wearer's experience and expanding the potential applications of the device.

*  Perhaps a couple of vibration motors could be added to the wearer's hands via gloves (similar to the 'repulsor beams' on the palms of Iron Man's suit), so that when the palm is in proximity to an object, the hand receives a vibration.

I'll add more thoughts as I think of them!

Hi Marty,

Thank you for your post. The work has been ongoing, and we have shared many of the thoughts that you expressed. 

In the next update to the SAW code base and tutorial, you will see a version that addresses some of your concerns and suggests directions that we think are promising. Regardless, we encourage the community to build systems, experiment, and change things as they see fit. This is meant to be a seed for much more than we can accomplish alone. We should also be clear about what some of our early constraints were (both due to technology and due to self-imposed limitations).  We were thinking about a practical wearable from the beginning. So, while our initial prototypes are large and less elegant than the wearables we'd expect to be produced for end-users, we aim, even in our prototype models, to minimize the visual impact on the wearer and the power challenges of daily operation.

We're intrigued by the F200 work that you mention. We'd love to hear more about that or see some video if you think that would help us and the larger community understand what you're doing there. Of course, our system is not visual display based, but there may be relevant overlaps that prove intriguing. 

We are aware of some of the work with HMDs (head-mounted displays) that you mention. Of course, for our problem space an HMD is undesirable in many cases (though it may hold some other interesting opportunities for those with RP). I think the audio feedback component is the thing that you are suggesting as possibly relevant in this case. We came to the vibration motors out of hesitancy to interfere with the user's hearing of environmental sounds. As I'm sure you are aware, for many people with diminished sight, hearing is critical to normal day-to-day life. Much of the information that those of us with highly functioning visual systems rely on from sight is, for them, garnered from sound. We wanted to see how much we could accomplish as augmentation that did not interfere with existing methods for getting on in the world. That was quite consciously a self-imposed constraint. We are very interested in possible solutions that can augment sound without interfering with normal hearing (a sort of AR of sound).

To answer your questions concerning the wireless: SAW does not need an internet connection. The wireless network is used primarily so that the R200-based sensing, connected to a compute unit, can communicate with the feedback motors. The constraint is that the body-mounted camera doing the sensing needs to be connected to compute.
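As a purely illustrative sketch of that kind of local link (the address, port, and message format below are invented for the example, not our actual protocol), the compute unit could push motor intensities to the feedback unit over the local network like this:

using System.Net.Sockets;
using System.Text;

// Illustrative only: the compute unit attached to the R200 sends motor
// commands to the feedback unit across the SAW's own wi-fi network.
public class MotorLink
{
    private readonly UdpClient client = new UdpClient();
    private const string FeedbackUnitAddress = "192.168.0.50"; // assumed local address
    private const int Port = 9000;                             // assumed port

    public void SendIntensity(int motorId, float intensity)
    {
        // e.g. "2:0.75" means motor 2 at 75% strength
        byte[] message = Encoding.ASCII.GetBytes($"{motorId}:{intensity:F2}");
        client.Send(message, message.Length, FeedbackUnitAddress, Port);
    }
}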

Our original prototypes involved a backpack; these are the units outlined on the website. We will have future versions released that use other form factors. We are currently testing a belt version in which the compute, the power, and the R200 are all belt-mounted. It is a large and very visible belt, but it moves us towards the idea of smaller, more wearable, and lower power. That system uses a MinnowBoard in place of the Ultrabook. There are challenges here, but these are expected with general-purpose hardware being used as we attempt to reduce the power, complexity, and size. We have even tested a very early proof of concept using one of the phablet systems (http://androidcommunity.com/hands-on-intel-tango-phablet-with-realsense-20150819/). Systems such as these provide the compute, camera, and wireless in a single, small, low-power unit capable of communicating with the existing feedback-motor framework. We hope to make more code and documentation available as such systems become available to the public.

We are very excited that folks like yourself are engaging this space. Keep in mind that we are not a team dedicated to this project. It is a very personal project for us that lies outside our day to day activities here. We've been lucky enough that Intel has seen the opportunity to share this work and encourage others to grow it in the research as well as the business/product spaces. We've been encouraged to share the work without restriction in order to accomplish work of which we are not capable on our own. Thank you for engaging it! We hope that we can provide some small piece that might assist you and others to do more than we can accomplish on our own.

Please, share your work if you can! We'd love to see it and we're hoping that through this space, folks might find possible collaborative opportunities.

Thank you very much for your long and detailed reply, Robert.  I appreciate it.

The portability of the Ultrabook was not really a concern to me.  It was more an observation of how hardware products that start off bulky in their beta phase tend to shrink down to a smaller, neater package in their final release.  *smiles*  You are also constrained by the development kit available to you at the time.  As R200 support arrives for further platforms like the Intel Atom Cherry Trail and Android, incorporating SAW into such compact mobile devices will likely become feasible (as with the Project Tango link that you provided).

I imagine that the vibration motors could also be used with blind wheelchair users.  Perhaps they could be attached to the chair itself at the points where the user has physical contact with it: a vibration through the left and right wheels that is felt through the tires as the user grips them (or through the joystick if it's a motorized chair), in the foot-plate for frontal obstacle detection, and in the back of the chair for rear-approaching objects (like the vibrating backpacks once popular with light-gun games).

Thanks too for your interest in my F200 work.  I will endeavor to put together a new video and demo containing our latest advances, such as batch toggling of the on-off state of TrackingAction scripts, and avatar arms that reach and poke forward when the user's hand reaches forward.

Regarding VR head wearables: most headset developers have been aiming to get their visors small enough to mimic the visor of blind Star Trek TNG crew member Geordi La Forge, and they're starting to get there now.  After that, they'll likely seek to shrink them further to the size of spectacles (like the AR Google Glass) and then to VR contact lenses (perhaps powered by sunlight or the kinetic movement of the wearer's body).

It's interesting to see how Star Trek TNG envisioned the future of visor tech back in the late '80s and early '90s.  Geordi's visor (actually called a VISOR) saw in a kind of infra-red similar to the IR that RealSense uses now, albeit in color.

http://i.imgur.com/qLZ4A.jpg

Speaking of color IR: I was recently discussing with a dentist friend who owns a practice the potential use of RealSense as an affordable dental tool for reading the health of a patient's gum blood flow from the color hue of the blood in the IR image (similar to how an early RealSense application in 2014 apparently used blood flow in the face to estimate heart rate).  They were excited about the potential, but said that a color image rather than B&W would be needed for such an application.

We haven't yet reached the point where visual information can be input directly into the brain via forehead ports like the ones that Geordi's visor clips on to.  But light field tech that projects an image directly onto the retina of the eye, like Magic Leap, is a step towards that.

Regarding audio that does not interfere with normal hearing: "bone conduction" might be a way to do that.  Basically, the sound is passed as small vibrations through the bones of the skull to the inner ear, so it is heard inside the user's head instead of coming in via the outer ears.

It started out as a kind of gimmick tech (kids' lollipops that play a pop song in your head when you bite down on them) and then became popular in hearing aids and earphones.  This past week, it was announced that the UK defense giant BAE is going to use bone conduction in its latest military helmets.  If it's inexpensive enough for kids' lollipops, it's probably a cost-effective solution for assistive technology as well.

http://www.digitaltrends.com/cool-tech/new-army-helmets-to-feature-bone-...

Regarding belt-based RealSense solutions: I'd already had an idea about that myself.  I envisioned a belt with two F200 cameras on it, one that looked down to the feet and one that looked up to the hands.  

As I had developed an avatar with full arm motion just by tracking hand joints, I realized that this technique could probably be converted to controlling mechanical motorized joints instead of virtual in-game ones.  A movement of the hand (assuming a disabled person has at least a little hand movement) could activate motors strapped onto the shoulders and elbows, lifting their real arm when the hand moves slightly.

The same goes for the feet: since the F200 camera can perceive toes as finger joints, moving the toes could activate motors strapped around the legs and knees to move them.
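In sketch form, the mapping might look like this (illustrative only: SendAngleToMotor stands in for whatever serial or Bluetooth link would drive a real actuator, and the ranges are invented):

using UnityEngine;

// Convert a small tracked hand movement into a larger motor angle for an
// elbow brace. The joint position would come from the F200 hand tracking.
public class ElbowAssist : MonoBehaviour
{
    public Transform trackedHand;     // fed by the RealSense hand tracking
    public float restHeight = 0.0f;   // hand height (m) with the arm at rest
    public float liftRange = 0.15f;   // small hand lift (m) mapped to full motion
    public float maxElbowAngle = 90f; // degrees of assisted elbow bend

    void Update()
    {
        float lift = Mathf.Clamp01((trackedHand.position.y - restHeight) / liftRange);
        SendAngleToMotor(lift * maxElbowAngle);
    }

    void SendAngleToMotor(float degrees)
    {
        // Placeholder: a real rig would write this to the motor controller.
        Debug.Log($"Elbow motor target: {degrees:F0} degrees");
    }
}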

I'm incorporating this concept into my current game project with an amateur Batman-type vigilante character (the father of the game's main character) who uses RealSense AR in his mask but ends up paralyzed by a villain, then uses RealSense cameras in a home-made kit to construct limb movers that give him some mobility again.  It's my hope that expressing the concept visually in a game will be a cost-effective way to inspire real-world engineers to pursue such hardware solutions.

Thanks again for your excellent engagement with me and the wider RealSense developer community!

Edit: I've added an image of the current prototype model of the in-game mask.  

It utilizes an F200 camera on the inside of the mask and an R200 on the outside, both connected to a smartphone screen and battery (presumably removed by the inventor from a RealSense-powered mobile).  The R200 on the outside provides a video feed of the environment to the smartphone screen in front of the wearer's eyes and adds AR information.  The inside F200 camera, meanwhile, reads the wearer's eye expressions and projects a digital representation of them onto screen "eyes" on the front of the mask; these have a different eye color from the wearer's, disguising their true identity.

The mask opens for putting on and for maintenance access to the electronics: twisting the nose causes the upper half of the mask to slide upward on runner rods.

Edit 2: I did a quick mock-up of a belt-based limb movement system by adding motorized bracer pads to the joint sections of our avatar.

I am working on technologies to assist people who have no vision. The sensor you are making would be a great help to us. I have been working on Project Tango, and I maintain the public wiki on Project Tango:

http://projectango.wikidot.com/start

I have met with Daniel Kish, founder of http://www.worldaccessfortheblind.org/, and we have submitted our proposal to Google [x] (solveforx.com).

Everything we make will be made open-source, to assist the visually impaired.

Thank you,

Kris Kitchen

Qieon Research Laboratories 

"Never doubt that a small group of thoughtful, committed citizens can change the world; indeed, it's the only thing that ever has." - Margaret Mead

Very useful thread. Thank you, everyone.

The forum is broken and I cannot contact anyone.
