This post originally appeared on the Soma Games blog and is printed here with permission.
If you were watching CES, you may have seen Intel unveil their RealSense initiative. This is really an evolution of the Perceptual Computing initiative they pushed a year earlier, but now with (vastly) improved hardware and software. We’ve been involved with this program for a while now, but wearing our Code-Monkeys hats, and we’ve even won a couple of awards. While we’ve written in the past about the tech itself, I wanted to share a few thoughts about what we see in the future.
Hardware-free interfaces like RealSense and Kinect are undeniably going to become more and more common in the coming years, and for many reasons, though maybe not the reasons that seem most obvious. That said, the experience of building this kind of UI also exposed weaknesses that were a little surprising. Take Tom Cruise in the iconic scene from Minority Report. Strike a pose like Tom did and hold it. How long before your arms wear out and fall to your side from flaming deltoids? The limit of physical endurance took us totally by surprise when we started this, though of course it should have been obvious. While we found it to be a very limiting factor with existing control schemes, it forced us to think differently about how we controlled these games, specifically aiming toward schemes that were more autonomous: systems that coasted, needing only occasional input instead of constant input.
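To make the "coasting" idea concrete, here is a minimal sketch of what such a control scheme might look like in code. This is purely illustrative, not our actual game code; the class and method names (`CoastingShip`, `command`, `update`) are hypothetical. The key property is that the per-frame `update` runs on its own, and the player’s gesture only needs to arrive occasionally to set a new course.

```python
# Hypothetical sketch of a "coasting" control scheme: the ship steers
# itself toward the last commanded heading, so the player only needs to
# gesture now and then instead of holding a pose. Names are illustrative.

class CoastingShip:
    def __init__(self, heading=0.0, turn_rate=45.0):
        self.heading = heading        # current heading, in degrees
        self.target = heading         # last commanded heading
        self.turn_rate = turn_rate    # max degrees turned per second

    def command(self, new_heading):
        """Called only when the tracker sees a deliberate gesture."""
        self.target = new_heading % 360.0

    def update(self, dt):
        """Called every frame; the ship coasts toward the target on its own."""
        # Shortest signed angular difference in [-180, 180)
        diff = (self.target - self.heading + 180.0) % 360.0 - 180.0
        # Clamp the turn to what the ship can manage this frame
        step = max(-self.turn_rate * dt, min(self.turn_rate * dt, diff))
        self.heading = (self.heading + step) % 360.0

ship = CoastingShip()
ship.command(90.0)            # one brief gesture sets a new course...
for _ in range(60):           # ...and the ship turns there over the
    ship.update(1.0 / 30.0)   # next two seconds of frames, hands down
```

The point of the design is that the player’s arms spend most of their time at rest: input is an event, not a sustained state.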
Related to this was the matter of latency. No matter how ninja I get, moving my arm takes an astonishing amount of time compared to twitching my thumb. Ergo, any control scheme or game mechanic that required twitch controls was a non-starter with meat-space controls.
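The latency argument can be boiled down to a simple check: a mechanic is only playable if its timing window is wider than the time the input method needs to register an action. The numbers below are rough illustrations, not measurements, and the function name is hypothetical.

```python
# Hypothetical illustration: a gross arm movement takes far longer to
# register than a thumb twitch, so any mechanic with a tight timing
# window has to be redesigned. Latency figures are illustrative only.

THUMB_LATENCY_S = 0.10   # rough reaction time + button press
ARM_LATENCY_S = 0.45     # rough reaction time + full arm gesture

def accepts_input(window_s, input_latency_s):
    """A mechanic is playable only if its timing window exceeds
    the time the input method needs to deliver an action."""
    return window_s > input_latency_s

# A classic twitch dodge with a 0.25 s window works with a thumb...
assert accepts_input(0.25, THUMB_LATENCY_S)
# ...but not with an arm gesture; the window must be widened instead.
assert not accepts_input(0.25, ARM_LATENCY_S)
assert accepts_input(0.60, ARM_LATENCY_S)
```

In practice this means either stretching the timing windows or, as above, moving to mechanics that do not depend on reaction time at all.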
These are a couple of the limitations we saw, but what I’m really excited about is how those challenges led to exciting epiphanies!
RealSense and technologies like it invite us to consider a very different way of approaching our games, our data, and all of our virtual interactions. The magic of it all is in the appealing ability to treat these virtual worlds the way we treat the real world: using our hands, our voices, and our well-honed ability to recognize spatial relations. Input schemes can move increasingly away from buttons, joysticks, and drill-down menus (after all, these were always mechanical metaphors for physical actions anyway) into modalities more like dancing or conducting a symphony. Our virtual spaces can operate and be organized just like our real spaces, and screens become more like windows into other worlds than flat representations of flatland spaces, or even a compression interface into three-dimensional but largely inaccessible worlds.
So if it’s not clear – we’re very, very excited about where this tech is going and working with it in its infancy has been kinda mind-blowing.
For practical purposes, expect to see us deploying RealSense technology in Stargate SG1 Gunship (under the Code-Monkeys label), F: The Storm Riders, and Redwall: The Warrior Reborn. It’s too soon, of course, to rely on this input being available, but we will definitely build the games to use this tech where it makes sense. (We considered a RealSense version of G, but it feels like a poor fit.)
We’ll be at GDC in a couple of weeks, and if this is something you’re interested in, stop by; we’d love to talk to you!