Archived - Intel® RealSense™ Blog Series: Virtual Reality and More with Developer Geoffrey Subileau

The Intel® RealSense™ SDK has been discontinued. No ongoing support or updates will be available.

The RealSense™ App Challenge 2014 is an exciting competition meant to spur developers from all over the world to forge new ground in the field of perceptual computing. Voice control, gesture recognition, facial analysis, and augmented reality are just some of the ideas we’re looking forward to in this challenge. An evolution of the 2013 Perceptual Computing Challenge, the competition tasks developers with designing innovative new ways to integrate RealSense™ technology into our computing experiences, using the Intel® RealSense™ SDK for Windows Beta and Developer Kit as well as the Intel 3D gesture camera.

This makes for some exciting developer possibilities. These technologies are changing the way we develop apps; for example, a user could open a YouTube video simply by using a hand gesture, or post a new Facebook status with a blink of an eye and voice recognition. With the release of the Intel® RealSense™ SDK for Windows Beta and Developer Kit, and the Creative* Interactive Gesture Camera Developer Kit with RGB and 3D depth-capture capabilities bundling this technology together, developers have a unique opportunity to usher in a whole new era of human-computer interaction.

Perceptual Computing Challenge 2013 Pioneer Award Winner Geoffrey Subileau graciously took some time out of his busy schedule to talk to us about his experience with that competition, his thoughts on perceptual computing, and his development plans for the future. Here’s a closer look at his winning entry, titled PopPop!

Tell us about your development background.

I have 10+ years of experience designing and developing world-leading immersive virtual reality products and experience containers targeting many industries and domains, with dual expertise in UX design and engineering.

Currently, I’m the Lead Converging Technologies User Experience Designer and R&D Manager at Immersive Virtuality Lab, Dassault Systèmes, and the co-founder and Chief Experience Officer at Pixel Potato.

What got you interested in coding, particularly perceptual computing?

My motivation is to evangelize the body- and context-aware paradigm: the more you know about the user, the more user-friendly and ubiquitous the UX will be. And that is what perceptual computing is about.

Tell us about your history with the PerC Challenge. What were the highlights for you in 2013?

I had already made a demo of a game for the Kinect and Leap Motion devices, and I wanted to adapt it for the Intel camera. Highlights for me:

  • The depth map could be displayed, which means better user feedback (Leap Motion could not do that at the time)
  • Price
  • Short range
  • Hand and finger tracking

What were the key takeaways/lessons you learned in 2013?

Interacting with hands is kind of magical, but it is not as natural as we would expect; there’s tiredness, false positive gesture detection, lack of affordance, and ethnographic mismatches.

But this helped me understand the best practices. When it's correctly designed for the right use case, it's terribly efficient. I think in the future the perfect UI will take the best of each modality and mix them: voice recognition, gaze/head/hand tracking, facial recognition, touch. And I think that is what Intel RealSense is targeting.

What Intel developer tools did you use?

The PerC SDK.

Any particular categories or markets which you are interested in developing apps for? What’s your motivation?

Games, because of the fun aspect of free-hand interactions. It's engaging.

Collaboration: I started my career working on a European research project about collaborative virtual environments. Computer devices like the mouse and keyboard are not designed for collaboration (that's why it's called a Personal Computer). Hand/head/voice recognition helps a lot with collaboration, because you can point at things on the screen and say to your colleague: "This thing here is too small." The computer will know what you’re pointing at and will highlight it so your colleague sees what you mean.
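
To make that pointing-plus-voice idea concrete, here is a minimal, hypothetical C++ sketch (not his actual implementation): intersect the tracked pointing ray with the screen plane, find the widget under it, and highlight it when a voice command arrives. All structures, names, and values are illustrative assumptions.

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

// Hypothetical sketch: combine hand pointing with a voice command to
// highlight the on-screen element a user is referring to. The types and
// values here are assumptions, not the RealSense SDK API.

struct Vec3 { float x, y, z; };

struct Widget {
    const char* name;
    float xMin, yMin, xMax, yMax;  // bounds on the screen plane (z = 0)
};

// Intersect the pointing ray (fingertip + direction) with the screen plane z = 0.
bool PointedScreenPosition(const Vec3& tip, const Vec3& dir, float& sx, float& sy) {
    if (std::fabs(dir.z) < 1e-6f) return false;  // ray parallel to the screen
    float t = -tip.z / dir.z;                    // distance along the ray
    if (t < 0.0f) return false;                  // pointing away from the screen
    sx = tip.x + t * dir.x;
    sy = tip.y + t * dir.y;
    return true;
}

int main() {
    std::vector<Widget> widgets = {
        {"OK button", 0.1f, 0.1f, 0.3f, 0.2f},
        {"Chart",     0.4f, 0.3f, 0.9f, 0.8f},
    };

    // Pretend these came from the hand tracker and the speech recognizer.
    Vec3 fingertip = {0.5f, 0.5f, 0.6f};
    Vec3 direction = {0.0f, 0.0f, -1.0f};  // pointing straight at the screen
    bool voiceCommandHeard = true;          // e.g., "this thing here"

    float sx, sy;
    if (voiceCommandHeard && PointedScreenPosition(fingertip, direction, sx, sy)) {
        for (const Widget& w : widgets) {
            if (sx >= w.xMin && sx <= w.xMax && sy >= w.yMin && sy <= w.yMax)
                std::printf("Highlighting '%s' for the remote colleague\n", w.name);
        }
    }
    return 0;
}
```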

Tell us about the overall development experience: did everything go smoothly? What challenges did you come across, and how did you solve them?

With PerC SDK 2013, everything was fine. I developed with Unity 3D.

With RealSense 2014, it's OK. We've been using it since the beta release, and we adapt our feature roadmap to the RealSense feature release roadmap.

Since the last Challenge, tell us what you’ve been working on. How did the Challenge help you in your professional or personal development projects?

Since the 2013 challenge I've worked on several personal VR projects. One involved the Intel PerC camera: I developed a proof of concept of a holographic video conferencing app. The Intel camera was supposed to provide "eye contact" capabilities, but so far the head tracking in 3D space doesn't perform well enough to achieve this.
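
One reason 3D head tracking struggles to deliver convincing eye contact is jitter in the raw head position; a common mitigation is exponential smoothing, which trades jitter for lag, and lag hurts the illusion too. A minimal, hypothetical C++ sketch of that filter, with all names and values assumed:

```cpp
#include <cstdio>

// Hypothetical sketch: jittery 3D head tracking is often smoothed with an
// exponential moving average before driving a viewpoint. The names and
// values are illustrative assumptions, not code from the actual PoC.

struct Vec3 { float x, y, z; };

// Blend the previous estimate with the new measurement; a smaller alpha
// means smoother but laggier tracking.
Vec3 Smooth(const Vec3& prev, const Vec3& measured, float alpha) {
    return { prev.x + alpha * (measured.x - prev.x),
             prev.y + alpha * (measured.y - prev.y),
             prev.z + alpha * (measured.z - prev.z) };
}

int main() {
    Vec3 estimate = {0.0f, 0.0f, 0.6f};
    Vec3 samples[] = {  // noisy head positions, as a tracker might report
        {0.01f, 0.00f, 0.62f}, {-0.02f, 0.01f, 0.58f}, {0.00f, -0.01f, 0.61f},
    };
    for (const Vec3& s : samples) {
        estimate = Smooth(estimate, s, 0.3f);
        std::printf("head at (%.3f, %.3f, %.3f)\n", estimate.x, estimate.y, estimate.z);
    }
    return 0;
}
```

Tuning alpha is the hard part: too low and the viewpoint lags behind the viewer's real head; too high and the jitter comes back.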

Are you competing in the current Intel RealSense App Challenge? What was your motivation to enter the contest?

To see if the new camera and SDK do better in terms of head tracking in 3D space.

What do you think will be different about this time around?

Much of the core technology of the camera is different and seems to work even better. I also assume the SDK is more mature.

What tools do you plan on using?

JavaScript for WebGL and WebRTC, and Visual Studio for some C++ parts.

How would you define what RealSense technology is?

To me, RealSense technology aims at better knowing what the user is doing in the real world so that the application can better understand what she intends to do. And the more the application knows about the user and her context of interaction, the better the user experience developers can offer. The application gains some artificial intelligence.

Were there new capabilities enabled through the latest RealSense software or hardware solutions that piqued your interest and/or got you inspired?

  • True 3D head tracking
  • Higher depth map resolution
  • RealSense cameras will be integrated into tablets, laptops and All-in-one screens

What advice would you give your fellow Challengers?

Better to design two or three functionalities that work very well than a dozen unpolished ones.

Constant user feedback: the user should always know what the system understands about them (basically, what it is tracking).

Gestures: should be used with caution, at the right time, and be efficient for the user. Gestures can be tiring (when you raise your hand above your heart, your blood pressure starts decreasing). According to ethnographic studies, the same gesture pattern doesn't have the same meaning in different countries. And gesture-recognition-based interactions lack affordance: the first time you use the application, you don’t know what gesture to make or what it does, unless someone told you or you saw a tutorial.

Regarding gesture recognition, false positive detections are a pain to deal with, and user feedback is usually poor because the user has to complete the entire gesture pattern before the application can display any recognition feedback. I would recommend gesture UIs based on 1:1 direct manipulation (like controlling a cursor), triggering actions with visual, progressive gestures (like the iPhone’s “slide to unlock” widget).
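
A minimal, hypothetical C++ sketch of that recommendation, assuming a hand tracker that reports a horizontal position each frame (none of this is from the SDK): the gesture is a 1:1 slide whose progress is displayed continuously, so the user gets feedback well before the action fires.

```cpp
#include <algorithm>
#include <cstdio>

// Hypothetical sketch of a "progressive gesture" in the spirit of the
// slide-to-unlock widget: the hand's x position maps 1:1 to a progress
// value the UI can draw every frame. Names and thresholds are assumptions.

struct SlideGesture {
    float startX = 0.0f;
    bool active = false;

    // Returns progress in [0, 1]; the action fires only at full progress.
    float Update(bool handVisible, float handX, bool& triggered) {
        triggered = false;
        if (!handVisible) { active = false; return 0.0f; }  // reset on tracking loss
        if (!active) { active = true; startX = handX; }
        float progress = std::clamp((handX - startX) / 0.25f, 0.0f, 1.0f);
        if (progress >= 1.0f) { triggered = true; active = false; }
        return progress;  // draw this as a filling slider for feedback
    }
};

int main() {
    SlideGesture gesture;
    float handX[] = {0.10f, 0.15f, 0.22f, 0.30f, 0.38f};  // tracked positions
    for (float x : handX) {
        bool fired;
        float p = gesture.Update(true, x, fired);
        std::printf("progress %.0f%%%s\n", p * 100.0f, fired ? "  -> action!" : "");
    }
    return 0;
}
```

Because the mapping is direct, a false start costs nothing: the user simply stops sliding and the gesture resets, instead of an action firing from a misrecognized pattern.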

Were there any Intel developer tools and/or resources that you found or find particularly helpful?

  • Unity 3D samples
  • Record/replay in ipdev. Recording and replaying a user session is very useful when developing: you don't have to repeat the same movements a hundred times (see the sketch below).
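
To illustrate why record/replay helps, here is a minimal, hypothetical C++ sketch that saves tracker frames to a file and replays them; the frame format is an assumption for illustration, not the SDK's recording format.

```cpp
#include <cstdio>
#include <vector>

// Hypothetical sketch of the record/replay idea: capture tracker frames
// once, then feed the recording back into the app during development so
// you don't have to re-perform the gestures on every run.

struct HandFrame {
    double timestamp;  // seconds since session start
    float x, y, z;     // palm position reported by the tracker
};

void Save(const std::vector<HandFrame>& session, const char* path) {
    if (FILE* f = std::fopen(path, "wb")) {
        std::fwrite(session.data(), sizeof(HandFrame), session.size(), f);
        std::fclose(f);
    }
}

std::vector<HandFrame> Load(const char* path) {
    std::vector<HandFrame> session;
    if (FILE* f = std::fopen(path, "rb")) {
        HandFrame frame;
        while (std::fread(&frame, sizeof(HandFrame), 1, f) == 1)
            session.push_back(frame);
        std::fclose(f);
    }
    return session;
}

int main() {
    // Record once (faked here), then replay on every later run.
    Save({{0.00, 0.10f, 0.20f, 0.50f}, {0.03, 0.12f, 0.20f, 0.50f}}, "session.bin");
    for (const HandFrame& f : Load("session.bin"))
        std::printf("t=%.2fs palm=(%.2f, %.2f, %.2f)\n", f.timestamp, f.x, f.y, f.z);
    return 0;
}
```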

Where do you see perceptual computing going in the next five years?

In terms of human-computer interaction, perceptual computing makes the computer more intelligent about how it can interact with you and understand what you want to do.

In terms of the technology itself, depth cameras will offer a lot of new services, two major ones being indoor positioning and tracking, and object and space reconstruction and recognition.

How would you envision Intel RealSense and perceptual computing for other types of projects?

I would love to try an Intel RealSense camera mounted on an Oculus Rift HMD.

How would you encourage other developers who might be interested in Intel RealSense?

Using this technology is a little bit of science fiction and magic. And the human-computer interaction domain is fascinating. It’s a new kind of experience.

Thanks again to Geoffrey Subileau for participating, and we wish you the best of luck in this year’s Intel RealSense App Challenge!
