Interview with Developer Ina Yosun Chang

Note: If you're interested in Intel® RealSense™, then you'll want to check out the 2014 Intel® RealSense™ App Challenge, a contest intended to encourage innovative apps that take advantage of everything that perceptual computing has to offer. Using the 2014 Intel® Software Development Kit (SDK) and the brand-new 3D gesture camera, developers will be able to show off their ideas, spark future imagination, and maybe even take home a few prizes from the $1 million worth of cash and promotions offered. Interested? Check out the official 2014 Intel® RealSense™ App Challenge page to get started!

Ina Yosun Chang is an independent developer who is a regular fixture at various hackathons, code challenges, and other developer-centric events. She graciously took time out of her very busy schedule to answer a few questions about her development background, what she's currently doing, and her work with perceptual computing (Intel® RealSense™).

Tell us about your development background.

I started playing around with QBASIC when I was 10, on an old 386 that I literally found dumpster diving around turn-of-the-century Silicon Valley (the happiest place on earth for the sort of urban exploration that could also feel like the ultimate shopping spree for a tech geek). I found some books from the public library that had lines of BASIC, some of which didn't work in QBASIC. I didn't really know what I was doing, so when I say that it felt like reciting magic incantations, the mystique was in discovering the unknown - an experiential hack of substituting into the lines things I knew, from trial and error, actually worked.

When I was 12, I started auditing and taking programming courses at the local college in the summer. I also started my own web design company, because there seemed to be incredible demand and a market for it (this was in the late '90s, before the field of web development became saturated).

I made hacks and such in everything throughout high school, then limited myself to scientific computing during college (bioengineering, physics, and philosophy) and grad school (physics). After leaving academia in 2006, I returned to web design, Flash, and 3D graphics. In 2007, I delved into virtual worlds and content creation within such a platform - and became an addict in this space where I could create anything, and even be the director of a global Shakespearean theatre.

In 2009, I started iPhone development (funds well needed after being a virtual world addict) and computer vision. In 2010, I started playing around with Unity and Android* development. In the last four years, I've had the chance to make hacks on all kinds of cool technologies while they're still burning hot - like clay modeling on mobile devices using multitouch and embedded 9-DOF sensors as input (ClayAR, 2011), the original Sony SmartWatch as a wearable input medium for the Oculus (2012), and more HUD-based hacks, such as the gyroFIRE platform for Google Glass (2013). And, of course, Perceptual Computing and RealSense. :)

What got you interested in coding, particularly perceptual computing?

When I was thirteen, I realized that I wanted to make things, but I had difficulty explaining all my crazy ideas. This was in the late nineties - before Processing and Canvas, and way before there were real app markets. I made interactive fractals and fun visualizations in C, but people didn’t want to download executable apps. So, I stuck with making cool hacks in Flash - the web was kind of like the app store of the late 90s. The medium also allowed for robust explorations in new user interfaces - some of which screamed for a new user input device. 

Inspired by Molyneux's strong-AI proposition with Milo and Kate, I started Kinect hacking in 2010, back when it was still a bit more esoteric, so I've had a lot of time to think about motion-gesture user experiences. When perceptual computing came along, I had already experimented with many of its potential use cases on the Kinect - it was then a matter of adapting the user interface motifs to a laptop form factor.

What are you currently working on?

I’m always working on a bunch of cool hacks. I specialize in making creative apps that work well on new form factors and input media, from new sensors like the PrimeSense Capri (Occipital)/Google’s Project Tango and Myo to the slew of wearables becoming more mainstream this year.

The main consumer product (my startup) I’m working on currently is ARKitty, a mobile augmented reality pet with AI strong enough to also help children learn STEM in a fun, natural, and immersive way.

Something that hits the visionary point a bit more is a project I’ve code-named Wanderlust - a way to share experiences, not just of a song or a photo or a video, but the actual immersive combination created by the interval of time that spawned that emotion. It involves poignant, detail-filled interactive scenes in virtual environments and a multiuser server that records and can play back every single action performed and generated in the world. Because the exact configuration can be shared, the problem this product tries to solve is bridging the gap in human communication that arises when the memory of an occurrence is lost or misconstrued through changing perspectives.

I like to believe that I make creative software to help the human condition. 

Being scientific in inclination, I also continually work on R&D, particularly with cute algorithms. :) After becoming intimately familiar with a particular library, I have a tendency to just scrap it completely, mess with things, and build it back up from first principles, defining the evolution of a single hack.

What are your future plans for your projects?

I’ve hacked together hundreds of apps that could each potentially become its own startup, but I’m curious to see what would happen if I take ARKitty all the way - beyond just hacking the app together, to polishing and marketing it. Also, it’s kind of like working on a bunch of tiny hacks in training a virtual cat to play a range of STEM games - from lattice multiplication magic squares to chess to interactive geometry proofs to designing a simulation microcosm for open-ended engineering and science.

I’m also considering starting a reverse incubator that might be considered gold for nontech founders - as an insomnia-driven, prolific hacker, I’ve built tons of products that could each be its own startup. (I guess this might be nontrivial - it’s like discovering the semi-glue for Post-it notes, without knowing that Post-it notes would be the ultimate use of this adhesive.)

I like to think that I’m bringing value to “EdTech” by applying cutting-edge 3D/VR/AR technology appropriately, in a humanly fun way.

I’m also curious about implementing a variant of an old model for monetization - “free to play for good” - capitalizing not on wasting people’s time with arbitrary tasks that don’t do them any long-term good, but on empowering whales with the ability to see the world as a STEM professional does. It’s applying the power of addiction to solving a properly presented learning task. People are thinking machines, and it’s only the pity of the industrial age’s limited vision in allocating talent that they’re plagued with roles such as factory worker. In other words, I plan to revolutionize STEM education, to make it not only accessible, but actually taught, the right way. :)

Are there any upcoming developer events you’re planning on attending?

I’m thinking of taking a break from hackathons and the like to focus on my own projects. I might attend a few out of peripheral interest, but - this year - instead of pivoting every week to a new, hackathon-inspired project, I should try building a lasting product.

However, I’m also looking forward to the Intel RealSense Challenge - though I am disappointed that ideas require pre-approval. The last time around, all three of my winning entries were completely last-minute ideas, arrived at after applying the Occam’s razor of “What actually works?” to minimize the products. The nature of the RealSense SDK as an evolving medium makes it difficult not to pivot at the last minute, when a documented SDK feature turns out to be non-functional or omitted.

Where do you see perceptual computing going in the next five years?

I’m excited about Intel’s initiative to embrace this new kind of user input medium, and its potential to help advance the next era of computing.

If interest continues to develop and Intel continues to fine-tune this technology, it’d be really exciting to see perceptual computing become a main form of user interaction - working well under all lighting conditions, including sunlight. This would also allow for placement in more robust form factors, such as ubiquitous wearables.

What excites you about perceptual computing – what is your vision for this technology?

Perceptual computing is an input medium that’s never really had an analogue in a historical device - other than having a team of servants at your beck and call, it was never possible to simply wave your hands - gesture - to command an action. I think it’ll take some time for mature perceptual computing apps to really take shape, given the lack of a real analogue among traditional user interfaces. For example, the keyboard replaced the typewriter, and the mouse replaced pointer stones. Computing has advanced to the stage where real-time processing of rich signals beyond a keystroke or screen point is possible - on the surface, at least, we’re getting closer to a reality where your computer could be as cognizant of your being as a real person.

Thanks again to Ina Yosun Chang for taking the time to give this interview. To follow her adventures in development, follow her on Twitter at @Yosun, or visit http://hacks.yosun.me/ or http://talks.yosun.me/. For more about RealSense and the Intel RealSense Challenge, please visit the following links:
