Embracing The "I" in "UI" :: Ultimate Coder Challenge Week 2

It's been said that the iPhone single-handedly created what some are calling the UI Revolution. Five years ago, "UX" wasn't even a word, and "User Interface" was on the Strangelove fringe of design conversations. Now good UX designers - who these days are artists instead of engineers - are some of the most sought-after and highly paid professionals in the tech world. Insofar as that revolution has led to a software landscape where people actually like using software for the experience itself and not just the function it provides, it's been a good change for all of us.

But look past the sea change to the cause of this revolution and something even more interesting emerges. In fairness, Apple had been shipping good interface design for decades but remained a distant second in the desktop ecosystem. In that time, good UX was a luxury, and most people were unwilling to pay the extra price. What changed that equation fundamentally was not better interface design, but a radically different input system: a touch screen.

Touch screens weren't really new; they'd been around for years. What was new - brand new, 'where have you been all my life' new - was a system totally recreated to account for a radically different input method. Core assumptions about how to interact with a device, the kind of "cut the end off the roast" stuff that is really, really hard to perceive in ourselves, were entirely replaced with things like gestures, shaking, multiple synchronous points of focus and haptic feedback. What Apple did was scrap everything that had built up over decades of mouse & keyboard thinking, even their own cutting-edge UI, and then create something that made instant sense to a three-year-old.

Perceptual computing stands to do the exact same thing, for better or worse, all over again.

Lofty as that goal is, it's in no way a sure thing. The Wii and Kinect have been out in the wild for years now (along with the fairy wands that shall not be named), but for the most part they have failed to catch on as revolutionary changes. Instead they remain primarily novelties. The games that use them continue to feel gimmicky. Why is that?

The problem is ultimately the same thing that kept touch screens in the gimmick category for decades - nobody really knows how to use them or takes the time to start from scratch, so we wind up with kludgy 'ports' of input methods and conventions that typically fail to translate. Nobody has (yet) taken the interface itself as a starting point and built software that makes sense given that input experience. Think tank projects like Ultimate Coder are awesome and with any luck will lead to some genuine insights into spatial computing best practices. More likely, though, until the hardware is cheap and easily accessible and somebody invests several zeros in a whole new interface paradigm, Minority Report style inputs will stay where they are today - in our imagination.