Intel® RealSense™ Blog Series: Puppetry and New Media with Developer Priyanka Borar

The RealSense™ App Challenge 2014 is an exciting competition meant to spur developers from all over the world to forge new ground in the field of perceptual computing. Voice control, gesture recognition, facial analysis, and augmented reality are just some of the ideas we're looking forward to seeing in this challenge. An evolution of the 2013 Perceptual Computing Challenge, the competition tasks developers with designing innovative new ways to integrate RealSense™ technology into our computing experiences, using the Intel® RealSense™ SDK for Windows Beta and Developer Kit along with the Intel 3D gesture camera.

This definitely makes for some exciting developer possibilities. These technologies are changing the way we develop apps; for example, a user could open a YouTube video with a simple hand gesture, or post a new Facebook status with a blink of an eye and a voice command. With the release of the Intel® RealSense™ SDK for Windows Beta and Developer Kit, and the Creative* Interactive Gesture Camera Developer Kit bundling RGB and 3D depth-capture capabilities together, developers have a unique opportunity to usher in a whole new era of PC interaction.

Perceptual Computing Challenge 2013 Pioneer Award Winner Priyanka Borar graciously took some time out of her busy schedule to talk to us about her experience with that competition, her thoughts on perceptual computing, and her development plans for the future. Here's a closer look at her winning entry in the Creative User Experience category, entitled Puppetree:

Tell us about your development background.

As a new media artist by profession, I use code as a medium of expression. I formed my first relationship with code at BITS Pilani, India, where I pursued a Bachelor's in Computer Science. Immediately after that I joined Oracle India Pvt. Ltd., where I worked with Java, JavaScript, and XML as part of a Business Intelligence product team. To find deeper meaning in my work and cultivate a creative outlet with code, I decided to pursue New Media Design at the National Institute of Design, Ahmedabad, India, where I was exposed to very different perspectives on code, computing, and user experience. I have mostly worked with Processing and Unity since then, and I have now begun exploring WebGL.

What got you interested in coding, particularly perceptual computing?

Our daily experiences are mediated through computing technologies, be it mobile phones, tablets, or PCs. These experiences shape how we perceive our natural environment. To make sense of our relationship with technology, I feel the need to understand the nature of code and to craft experiences for the digital world through it myself.

Perceptual computing opens up a whole new set of interactions and lays the ground for creating more human experiences with computers. Adding senses to the computer is a huge step toward seamlessly weaving technologies into our natural environment.

Tell us about your history with the PerC Challenge. What got you interested in this?

Within the PerC Challenge, the Creative User Experience category is what got me interested. The brief was much in line with my design interests:

“Create ways for users to interact with apps using any combination of perceptual computing usage modes in order to create an innovative user experience.”

Of the four usage modes, the most appealing to me is Close-Range Depth Tracking, which includes recognition of hand poses. The detection of hand poses brings the possible interactions very close to our natural environment. We use our hands to touch, grab, and act on objects around us. This micro-control exercised through our hands can now be translated to virtual environments thanks to the depth feature, and that breaks the flatness of the screen.

What were the highlights for you in 2013?

In 2013, I was engaged in my final-semester design project at school. The central area of investigation was performing arts, and the motive was to gain an understanding of experience design by studying aesthetic experience in the performing arts.

The Intel PerC Challenge came along at the right time, giving me an opportunity to translate my research into a concrete outcome. The traditional art of puppetry, combined with Intel's Close-Range Depth Tracking technology, resulted in Puppetree, a digital puppetry platform that allowed me to further my investigation into the realm of experience design and opened up new possibilities for me.

What were the key takeaways/lessons you feel you learned through this experience?

Puppetree served as a ground for further research in experience design. It helped me build a perspective for analysing experiences in mediated environments, especially interactive virtual worlds. The agency of the user in an experience has become the highlight of my understanding. Translating the user's hand movement from physical space to virtual space lets the user take control of objects in the virtual world and impart life to them, rather than embodying an avatar in the digital space. Instead of traversing to a different space through the screen, this technology allows the physical space to extend into the virtual.

What Intel developer tools did you use to build this project? Was there anything Intel did to help you in your app development beyond the software tools such as Hackathons, forums, events, etc.?

The app was built completely in Unity3D, using the Unity plugin supplied with the SDK.

The design philosophy and guidelines supplied with Intel's developer kit helped me shape my perspective, understand how to work with the new technology, and maximise its potential. The Unity3D developer community also proved quite helpful, given that I was a beginner with both perceptual computing and Unity3D.

Any particular categories or markets which you are interested in developing apps for? What’s your motivation?

Free play is the primary and simplest motivation behind anything I engage in. As an artist, I find myself most spontaneous and natural in environments that offer free space for creative energy to flow and the freedom to experiment. Sensitivity is also an important part of forming a conducive space. For this reason I have identified kids and performing artists as my two main audiences. This allows enough flexibility to engage in research, installation art, games, new media performances, and the like.

Tell us about the overall development experience: did everything go smoothly? What challenges did you come across, and how did you solve them?

The central technique in bringing the Puppetree platform to life was mapping the finger data to the limbs of the puppet to get the hands and legs moving. Initially I tried to map it directly to the hand and leg points of the 3D model, but this approach did not work out well, since direct translation was not giving the desired results. I then approached the problem the way traditional puppeteers do, using a controller and strings. I designed a 3D wooden controller in Unity3D, similar in principle to the ones used in traditional puppetry, and attached a script that generates strings between the joints of the puppet and the movable controller. This worked well.
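That controller-and-strings idea maps naturally onto Unity's physics joints. The sketch below is a minimal illustration of the approach, not the project's actual script: the component name, the spring values, and the index-based pairing of anchors to limbs are all our own assumptions, and it presumes each puppet limb and each anchor point on the controller carries a Rigidbody.

```csharp
using UnityEngine;

// Illustrative sketch, not the original Puppetree script: a SpringJoint
// stands in for each string, so a limb hangs from its anchor and follows
// the controller with a natural lag instead of snapping to it rigidly.
public class PuppetStrings : MonoBehaviour
{
    public Rigidbody[] controllerAnchors; // anchor points on the wooden controller
    public Rigidbody[] puppetLimbs;       // matching joints on the puppet

    void Start()
    {
        for (int i = 0; i < puppetLimbs.Length; i++)
        {
            SpringJoint s = puppetLimbs[i].gameObject.AddComponent<SpringJoint>();
            s.connectedBody = controllerAnchors[i]; // tie the "string" to the controller
            s.spring = 50f; // string stiffness (illustrative value)
            s.damper = 5f;  // damping so limbs settle rather than oscillate
        }
    }
}
```

Driving the limbs through joints rather than setting their positions directly is what produces the dangling, puppet-like motion that the direct mapping lacked.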

Another problem I faced was the accuracy of the finger-point data, which was also making it hard to get the right effect. At that stage, I decided not to use the finger data and instead used the palm centre to control the position of the controller. I also tweaked the flexibility of the 3D model to exaggerate movement. I am hoping the new versions of the SDK provide more accurate finger data.
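A rough sketch of what that fallback might look like in Unity follows. It is an assumption-laden illustration: GetPalmCenter() is a hypothetical stand-in for whatever palm-tracking call a given version of the SDK's Unity plugin exposes, and the smoothing and scaling values are invented. Exponential smoothing of this kind is a common way to tame jittery tracking data.

```csharp
using UnityEngine;

// Illustrative sketch: drive the puppet controller from the palm centre
// instead of per-finger points, smoothing the noisy tracked position.
public class ControllerDriver : MonoBehaviour
{
    public float smoothing = 10f;     // higher = snappier, lower = smoother
    public float movementScale = 2f;  // exaggerates hand movement (illustrative)

    void Update()
    {
        // Hypothetical call; the real app would read the palm-centre
        // position from the Perceptual Computing SDK's Unity plugin.
        Vector3 palm = GetPalmCenter();

        // Exponential smoothing filters out frame-to-frame jitter.
        Vector3 target = palm * movementScale;
        transform.position = Vector3.Lerp(
            transform.position, target, smoothing * Time.deltaTime);
    }

    Vector3 GetPalmCenter()
    {
        return Vector3.zero; // placeholder for the tracking input
    }
}
```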

Since the last Challenge, tell us what you’ve been working on. How did the Challenge help you in your professional or personal development projects?

Since the last Challenge, I have become more interested in theories of interaction and have started framing my own around distance, perception, and the agency of the user. The Challenge served as a push towards experimenting with interactive virtual worlds. I am working on game concepts in light of my findings on experience design.

Do you have future plans for this project?

Yes. I want to build Puppetree into a performance platform for interactive storytelling that can be used by artists for live performances and by educators in schools.

What did you wish you knew before you got started – and what would you do differently?

I picked up Unity3D with this Challenge. If I had been more comfortable with the software before starting out, I would have achieved better results and been able to work on multiple iterations of the platform. I could also have done a much better job of playing with the 3D space. What I have managed in this time is a basic skeleton that shows the possibilities but doesn't yet manifest the platform's true potential.

What advice would you give your fellow Challengers?

To fellow Challengers, I'd say: thoroughly enjoy whatever you are making. Imagine yourself as the user and design the app in the most natural and intuitive way for yourself. Think of the various scenarios in which the app could be used, to broaden its scope and possibilities.

Where do you see perceptual computing going in the next five years?

Perceptual computing is adding senses to computing. This is the next paradigm in human-computer interaction, marking a shift from functional to more human modes of dealing with computing technologies. If the technology can be compressed to fit phones and tablets, it will revolutionise the way we work, communicate, search, and analyse information. These devices will become windows that extend our space into three dimensions. The possibilities are endless.

What do you find most interesting or exciting about perceptual computing?

To me, the most interesting part of perceptual computing is the depth aspect. It is instrumental in breaking the flatness of the screen and setting the ground for interaction environments that resemble our physical environment. Other technologies, like the Kinect and the Wii, have introduced motion and depth sensing before, but the scale of perceptual computing is different: instead of using the whole body, it translates hand and face gestures to the virtual environment. So rather than transcending into a virtual space, it allows an extension of my physical space. This slight difference in arrangement changes the user's perception of self and of their environment in many ways. Interactions driven by a hand movement in front of the screen automatically evoke ideas of environments composed of objects meant to be held and acted upon. This was my inspiration for designing a puppetry platform, as puppetry has long been known to us as the act of bringing inanimate objects to life.

Thanks again to Priyanka for taking the time to share! You can see more of Priyanka's innovative projects here. For more information about RealSense, we invite you to explore the Intel RealSense developer resources.
