RealSense Dev Lab and an interview with the winner of the lab contest, "CutMod"

I would like to thank all involved for staging an informative and fun RealSense DevLab.

Living in Los Angeles, I never get anywhere before 10 AM, but at 8:30 AM on November 5th I was at the first Intel RealSense Dev Lab in Los Angeles. I was a little wary of showing up, as I am not a hard-core hacker but more of a visualist/3D artist/animator. I confessed this when I got there and was very graciously told by the Intel team of Eric Mantion, Bob Duffy, and Rick Blacker that I was welcome. They even let me have the breakfast, the snacks, and the free lunch that arrived on a food truck {there *is* such a thing as a free lunch}!

The purpose of the lab was to encourage developers to create apps for the camera and to disseminate information about the camera. There was a contest to see who could develop the best RealSense app by the end of the day.

The team did a very good job of giving examples and breaking down the code for different RealSense apps. The Intel site has a lot of useful information for would-be developers of RealSense camera apps.

All of the attendees were given RealSense cameras, but I will have to wait until I get Windows 8 on one of my computers before I can use mine.

Trying out the camera at the event, I was impressed with the refinement of the gesture tracking; both hand and facial tracking seemed to work well, with the promise that they will only improve. The range right now is from 8 inches to about 4 feet, widening into a cone as you move out. I was told that a longer range for the camera is being worked on. This would be very useful for me for the mapping and interactivity in the performances I do. Another feature of the camera was also very intriguing to me: because of its advanced depth sensing, the camera will be able to do 3D scanning, capturing a digital 3D version of objects in front of it, including your face.

The camera can also be used in Unity3D; an interface for it has already been developed. I have long wanted to learn Unity, and this, along with its ability to create Oculus Rift content, will definitely encourage me to do so. I use TouchDesigner a lot, so I am hoping that program will get a dedicated RealSense node. Right now the camera's depth information can be brought in with the Video Device In TOP, as in the sketch below.
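For anyone who wants to try that, here is a minimal sketch in TouchDesigner's Python scripting, assuming a recent build; the network path and operator name are illustrative only, and the RealSense depth stream still has to be selected on the TOP's Device parameter in the parameter dialog.

# Minimal sketch: run from a Text DAT inside a TouchDesigner project.
# '/project1' and 'realsense_depth' are placeholder names for this example.
container = op('/project1')

# Create a Video Device In TOP; point its Device parameter at the
# RealSense depth stream (menu entries depend on the installed driver).
depth = container.create(videodeviceinTOP, 'realsense_depth')

# Once the stream is live, the depth image can be inspected like any other TOP.
print(depth.width, depth.height)                               # incoming resolution
print(depth.sample(x=depth.width // 2, y=depth.height // 2))   # center pixel value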

Several friends and people I work with also came to the Dev Lab. One of them was William Michaelsen, otherwise known as "CutMod". Will is working with me on the interactivity and mapping for my next dome performance, "Robot Prayers". We are hoping to be able to use the RealSense camera in it. Sample animations from "Robot Prayers" can be seen on my Vimeo site.

Will, "CutMod", was the winner of the Dev Lab contest. Here is a tweet including a picture of what Will did. I thought I would ask him a few questions; his answers are below.

Interview with William Michaelsen

1. Will, what potential do you see for the RealSense camera in the future? Which features of the camera interest you the most?

RealSense has the potential to become a ubiquitous platform for real-space interactive design. Applications ranging from gaming to cooking will be controlled by natural gestures, and developers can tailor experiences to suit user moods as perceived in facial expressions. I'm most interested in leveraging the depth camera and fine-grained facial/hand tracking to control audiovisual synthesis and FX. I'd like to see a longer range of depth tracking, but I understand that the technology is geared more toward closer interaction in a personal computing context. Hopefully there will be sufficient market demand to create a longer-range peripheral device for more robust gaming and commercial interactive applications.

2. Any comments about the RealSense DevLab you went to?

The Dev Lab was a great experience. Interspersed with breaks for delicious catering, we got a rundown of the core RealSense modalities and saw demos of more advanced projects. We then had free rein over a fleet of laptops with RealSense cameras and a goal of integrating the new technology into our own applications. A variety of high- and low-level programming flavors were represented by both staff and attendees - from C# to JavaScript to Processing to Unity and beyond.

3. You won the laptop prize for creating the best use of the camera at the DevLab. Can you describe what you did?

My project was interactive promotional signage for use in retail or special event displays. My concept was for branded content to be displayed in a storefront window or trade show booth - as is standard practice - but with a twist. When people walk by the signage, their motion activates the branded content. In the timeframe of the hackathon, I implemented a fluid surface deformation effect based on depth camera silhouette isolation. In the future, I intend to implement more nuanced interactive scenes that utilize skeleton and facial expression tracking to drive real-time animation based on real-space interaction.


I built my interactive sign in TouchDesigner (a node-based graphical programming environment by Derivative) because it is my strongest suit on Windows. It was easy to access the depth camera stream from RealSense and mask out the background with a simple chroma key. With the foreground isolated from the background, and a bit of filtering, anyone in near proximity to the sign splashes luminosity into a feedback system that subsequently serves as a displacement map for the branded content. In this way user silhouettes liquify the signage, which smoothly morphs back into shape when users exit the scene. While this early version uses the RealSense depth sensor as what amounts to a fancy webcam, I look forward to a full implementation of the RealSense tracking data in TouchDesigner, as Derivative has done for Leap Motion and Kinect devices. RealSense has great potential for near-field interactive installations, and I'm excited to explore the many possibilities as the technology and associated software packages mature.
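To give a sense of what a network along the lines Will describes might look like, here is a rough Python sketch for TouchDesigner. It is my own illustration, not Will's actual patch: the operator class names, the Feedback TOP's "Target TOP" parameter name, and the Displace TOP input order are assumptions based on current builds, and in practice a network like this would normally be wired by hand in the node editor.

# Rough sketch of a depth-silhouette -> feedback -> displacement chain.
# All names below ('/project1', 'realsense_depth', etc.) are placeholders.
container = op('/project1')

depth = container.create(videodeviceinTOP, 'realsense_depth')  # RealSense depth stream
key   = container.create(chromakeyTOP,     'silhouette')       # key out the flat background
blur  = container.create(blurTOP,          'soften')           # soften the silhouette edges
comp  = container.create(compositeTOP,     'accumulate')       # fresh silhouette + previous frame
fb    = container.create(feedbackTOP,      'trail')            # feeds the composite back in
logo  = container.create(moviefileinTOP,   'branded_content')  # the signage artwork
disp  = container.create(displaceTOP,      'liquify')          # displace artwork by the trail

# Wire the silhouette chain: depth -> chroma key -> blur.
key.inputConnectors[0].connect(depth)
blur.inputConnectors[0].connect(key)

# Feedback loop: the composite blends the fresh silhouette with its own previous
# output, which the Feedback TOP reads back (parameter name 'top' is an assumption).
comp.inputConnectors[0].connect(blur)
comp.inputConnectors[1].connect(fb)
fb.par.top = comp.name

# Use the accumulated trail as the displacement map for the branded content.
# (Displace TOP input order is an assumption; swap inputs if map and source are reversed.)
disp.inputConnectors[0].connect(logo)
disp.inputConnectors[1].connect(fb)

Fading the trail slightly on each pass, for example with a Level TOP inside the feedback loop, is one way to get the "morphs back into shape" behavior Will describes once people leave the frame.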
 
