The Future of Perceptual Computing: Interview with The Game Creators CEO Lee Bamber

Lee Bamber of The Game Creators is a man on a coding mission, as evidenced by his astonishing body of work: co-founder of The Game Creators, which develops and publishes PC game creation software; participant in numerous challenges and contests, including not one but two Intel Ultimate Coder Challenges; and driver of other development projects, including App Game Kit (AGK), a suite of game creation tools that lets developers create one game and port it to as many different platforms as possible.

Lee graciously agreed to sit down and talk with us about his background in software development, as well as give us his thoughts on perceptual computing and where this exciting technology might be headed.

Tell us about your development background.

LB: I started programming at the age of nine, and I've always had a passion for creating software. I studied software engineering and business for five years at college, then spent five years working in the games industry before starting my own development company (The Game Creators) in 1999. I'm familiar with many languages, some obsolete, some still in active use, and I have a good grasp of cross-platform development for PC, Mac, iOS, Android and a few lesser-known platforms too. I'm responsible for brands such as DarkBASIC, FPS Creator, The 3D Gamemaker and most recently AGK (App Game Kit).

What got you interested in coding, particularly perceptual computing?

LB: Ever since I could play computer games I wanted to make them. It was later in life, when I built my own joystick for the BBC Micro, that I discovered my fascination with creating original (if wacky) experiences through insane software ideas, cutting-edge peripherals and untried technologies. As an example, I was one of the first owners of the Raspberry Pi, Ouya and Oculus Rift. Not to play them, but to see what could be created for the very first time!

The Ultimate Coder Challenge: Perceptual Computing. What got you interested in this? What were the highlights for you?

LB: As a spectacular loser of the first ever Ultimate Coder Challenge, I was invited back to pit my wits against more top developers, and the weapons we were given to wield were gesture cameras. My awareness of Perceptual Computing, and my subsequent learning, began when the camera arrived on my doorstep. The highlight for me was when I converted myself into a 3D virtual avatar using the depth information returned from the camera; it was a wild thing seeing myself encased in liquid carbonite!
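
For readers curious how a depth frame becomes a 3D avatar, the core step is back-projecting each depth pixel into camera space. Below is a minimal sketch of that idea, assuming a simple pinhole camera model; the intrinsics (fx, fy, cx, cy) and frame format are illustrative stand-ins, not the actual gesture camera's calibration or Lee's code.

```python
import numpy as np

def depth_to_point_cloud(depth_mm, fx, fy, cx, cy):
    """Back-project a depth frame (millimetres) into camera-space 3D points.

    depth_mm: (H, W) array of depth values; zeros mark invalid pixels.
    fx, fy, cx, cy: pinhole intrinsics (illustrative values, not a real
    camera's calibration).
    Returns an (N, 3) array of [x, y, z] points in metres.
    """
    h, w = depth_mm.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_mm.astype(np.float32) / 1000.0       # mm -> metres
    valid = z > 0                                  # drop holes in the depth map
    x = (u - cx) * z / fx                          # standard pinhole back-projection
    y = (v - cy) * z / fy
    return np.stack([x[valid], y[valid], z[valid]], axis=1)

# Example with a fake 320x240 frame and made-up intrinsics.
frame = np.full((240, 320), 800, dtype=np.uint16)  # everything 0.8 m away
cloud = depth_to_point_cloud(frame, fx=280.0, fy=280.0, cx=160.0, cy=120.0)
print(cloud.shape)                                 # (76800, 3)
```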

Tell us about the app you created for the Challenge. What are your future plans for this app?

LB: I created an app called PerceptuCam, a virtual conferencing system which allows two users to connect over the internet and enter a virtual conference room. Each user is digitised by the gesture camera and rendered in 3D around the conference table. The app also used voice recognition to control menus, and gesture detection to issue commands while in the conference. Due to other commitments, and the fact that completing the app would have required substantial additional development, the project was shelved. Such an app must compete with long-standing, well-developed solutions such as Google Talk, Skype and FaceTime, so the current version of the app and the associated blog material stand to demonstrate the possibilities. It is hoped that other developers will pick up the baton and run with the idea, which has the potential to offer a low-cost conferencing solution to businesses worldwide.
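
To give a flavour of the plumbing such a conferencing app needs, here is a minimal sketch of shipping a depth frame between two clients. The length-prefixed message layout is invented purely for illustration; it is not PerceptuCam's actual protocol.

```python
import struct
import zlib
import numpy as np

def pack_depth_frame(depth_mm: np.ndarray, user_id: int) -> bytes:
    """Pack a 16-bit depth frame into a length-prefixed network message.

    Layout (invented for illustration): user_id, height, width,
    payload length, then a zlib-compressed block of raw uint16 depths.
    """
    payload = zlib.compress(depth_mm.astype("<u2").tobytes())
    h, w = depth_mm.shape
    header = struct.pack("<IHHI", user_id, h, w, len(payload))
    return header + payload

def unpack_depth_frame(message: bytes):
    """Inverse of pack_depth_frame: recover user_id and the depth frame."""
    user_id, h, w, n = struct.unpack("<IHHI", message[:12])
    depth = np.frombuffer(zlib.decompress(message[12:12 + n]), dtype="<u2")
    return user_id, depth.reshape(h, w)

# Round-trip check with a random frame.
frame = np.random.randint(400, 1200, size=(240, 320)).astype(np.uint16)
uid, restored = unpack_depth_frame(pack_depth_frame(frame, user_id=7))
assert uid == 7 and np.array_equal(frame, restored)
```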

See a video demonstration of Lee’s entry into the Challenge below:

What other contests or perceptual computing efforts are you currently involved in?

LB: I am part of a Perceptual Computing Committee, which confers every other month on the various topics and challenges at the bleeding edge of Perceptual Computing development. I also like to tinker with Perceptual Computing at weekends, combining it with other technologies to see what terrific Frankenstein apps I can conjure.

Where do you see perceptual computing going in the next five years?

LB: I see two main advances. Firstly, improvement of the camera hardware: increased resolution in both colour and depth capture, improved bandwidth for higher-speed capture, and an increased field of view so objects can be tracked through 170 degrees. A significant leap will be the introduction of a 'rear camera' which can read depths from 'behind' the user, in order to build a complete picture of the target objects very quickly.

The second, and more significant, will be the introduction of powerful feedback functions in the SDK which turn raw data into meaningful, super-accurate and ultra-reliable information: perfect upper-body tracking; neural learning to detect, remember and track people, not just their faces but what they wear; behavioural information such as whether the user is restless, unhappy, distracted, bored, uncomfortable or away from the computer altogether.

Add to that predictive algorithms which can work out the orientation of the fingers of both hands based on historical data, even if the fingers are temporarily hidden from the camera. All these ideas are merely extensions of what we already have, and significant steps will be made in the next 12 months to further the capabilities of the present hardware.

What excites you about perceptual computing – what is your vision for this technology?

LB: I see Perceptual Computing as the input method which will truly free the user from communicating with the computer on its terms. Even touch relies on the fact that the human has to understand the concept of a 'button' in order to perform a function. Humans did not leave the cave one morning looking for buttons! When the user can sit down at a desk, say a few words, look at something and nod at it, then we'll have a communication medium that resembles the way we want to interact. Perceptual Computing is the only medium thus far that comes close to enabling this reality.

What was your experience with the SDK?

LB: The SDK is well documented, with plenty of useful, lightweight examples to get you started. Some of the deeper functionality can be missing, or the documentation thin on the ground in certain complicated areas, but this is to be expected of such a young technology. All the basics are covered, and full access to the raw camera data is provided, which is all anyone needs to pioneer their own interface methods. I personally found myself abandoning many of the built-in features, such as head tracking and finger detection, in favour of writing my own versions from the raw depth data. This is not a reflection on the SDK, merely my own approach to solving problems and squeezing every last ounce of performance from the technology. I am also aware of many new functions being developed and scheduled for release through the SDK, so it can only get better!
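
As an illustration of the roll-your-own approach Lee describes, here is a minimal sketch that pulls a crude fingertip candidate straight out of a raw depth frame by taking the nearest valid pixel. The frame layout, noise threshold and function names are assumptions made for the sake of the example, not the SDK's actual API.

```python
import numpy as np

def nearest_point(depth_mm, min_valid=150):
    """Find the closest valid pixel in a raw depth frame.

    With the hand held toward the camera, the nearest pixel is a crude
    fingertip candidate -- the sort of from-scratch heuristic one can
    build once the SDK hands over raw depth data.
    depth_mm: (H, W) uint16 depth frame in millimetres; 0 = no reading.
    min_valid: ignore readings closer than this (sensor noise floor).
    """
    masked = np.where(depth_mm >= min_valid, depth_mm, np.iinfo(np.uint16).max)
    v, u = np.unravel_index(np.argmin(masked), masked.shape)
    return (u, v), int(depth_mm[v, u])  # pixel coordinates and depth in mm

# Fake frame: background at 1.5 m with a 'fingertip' poked in at 0.4 m.
frame = np.full((240, 320), 1500, dtype=np.uint16)
frame[100, 200] = 400
print(nearest_point(frame))  # ((200, 100), 400)
```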

Perceptual Computing is more important than the introduction of the mouse and pointer. It may not seem like it now, but 10 years from now, when you're conversing with your PC as naturally as if you were talking to your neighbour, you'll look back and mark 2013 as the year this technology was placed in the hands of the everyday developer for the first time. Imagine ten micro gesture cameras in every room of your house, constantly watching you and predicting what you're going to do next, obeying commands as subtle as a frown and instantly smoothing out all the little ripples life throws at you. Are you ready for the day when your house becomes another member of the family? It's coming!

Thank you, Lee, for giving us your valuable insights on perceptual computing development, and we wish you every success! For those of you who are working with, or are interested in working with, perceptual computing technology, please give us your thoughts below on what excites you about this technology, where you see it going, and what you think of Lee's thoughts on the subject.

 
