Interview with developer Peter O’Hanlon

You might already know about developer Peter O’Hanlon from his work at Confessions of a Coder or his astonishing record at The Code Project. Most recently, Peter made quite a splash in the field of perceptual computing with his pioneering image application created for the Ultimate Coder Challenge: Going Perceptual, a groundbreaking contest that pitted seven developers against one another for seven weeks to create apps using the latest Ultrabook convertible hardware along with the Intel Perceptual Computing SDK and camera to build the ultimate app prototype. Peter’s app, Huda, used WPF to create a voice- and gesture-based photo editing app, and he personally provided some of the most insightful and educational blog posts of the challenge.

Peter graciously agreed to sit down and answer a few questions about his coding adventures, particularly in the field of perceptual computing.

Tell us about your development background.

This is actually a difficult one to answer. I’ve been coding for over 30 years now, in various industries, with a whole slew of technologies. I started off as a hobbyist developer with the usual mix of BASIC and Forth, before becoming a professional developer working initially in C, then C++ (with a whole host of other languages thrown into the mix). Since 2001 I’ve primarily concentrated on the .NET stack, and I’ve been a WPF developer for the last five or six years.

What got you interested in coding, particularly perceptual computing?

I suppose I’m your typical geek. Technology has always held a fascination for me, and while my day job was the standard corporate client gig, I’ve been lucky enough to have the opportunity to investigate technologies that didn’t necessarily fit with the day-to-day business.

Like many geeks, I was fascinated when the Wii came out because it represented a new way of interacting with computers. Then, of course, came the Kinect. I’m not a great one for playing games, but it was obvious to me that we were seeing a complete change in how we would be working with computers. It was exciting to see that the standard PC/keyboard/mouse combination was evolving and being replaced, from smartphones and tablets through to form factors such as the Ultrabook.

Perceptual computing represented a logical step forward for me and my clients. It makes incredible sense for all sorts of interactions. How could I not want to get involved?

The Ultimate Coder Challenge: Going Perceptual. What got you interested in this? What were the highlights for you?

Ahh, the Ultimate Coder Challenge. This is probably the proudest moment of my coding career. When I was approached about taking part in the challenge, the requirements were vague. I had an idea for an application I’d been wanting to write as an article for The Code Project, but I hadn’t considered making it perceptual. The more I thought about it, though, the more I thought, “Why not?”

Then I was lucky enough to get in contact with Bob Duffy and Wendy Boswell at Intel. They were incredibly supportive and made the whole start-up process of the competition remarkably smooth. People don’t realise how much help you need with a process like this, and they made the whole thing straightforward and pleasurable. The hard part was knowing that I’d been chosen and then not being able to tell anyone until the official launch of the challenge.

Once we started building our applications, we faced all sorts of challenges and hurdles. This was a huge step for all of us – even those competitors who’d been involved in the perceptual space previously. Intel had made huge strides with their perceptual SDK, but we were still in largely uncharted territory. That’s where the most surprising part of the contest came to the fore – rather than being competitors and keeping things to ourselves, we shared ideas and issues. We kept up regular communication because we really were interested in pushing the platform forwards. This was a massive boost to me because I was learning so much from the other teams.

You can’t talk about the contest without talking about the judges. We spent so much time reading their comments and watching their videos. I gained so much respect for them that I now follow them on Twitter, G+, Facebook, their blogs and so on. They provided so much insight that they really were shaping my application – they may not have realised it, but I listened to everything that they said and the final shape of the application reflected their thoughts.

Tell us about the app you created for the Challenge. What are your future plans for it?

Okay, my application is called Huda and it’s a photo editing application with a difference. Typically, when you edit photos, those edits are destructive. In other words, once you apply a filter, that’s it – you’ve lost the original image. Huda is different – it stores the edits in a separate location from the actual image and reapplies them whenever you open the image. This allows you to remove edits, move filters around and so on, without losing anything underneath. Video demo below:

(A quick note - and an answer to a challenge I threw out): The name Huda came about because I was watching an old Dr. Who episode when I found out that I'd been selected for the contest, just as Matt Smith said "Who da man?" Hence, Huda.
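To make the idea concrete, the edit pipeline Peter describes can be sketched roughly like this. This is a minimal Python illustration of the concept only – the sidecar-file format, class name and filter names are invented for the example and are not Huda’s actual WPF implementation:

```python
import json
from pathlib import Path

class EditList:
    """Non-destructive edit pipeline: the original image file is never modified.
    Edits live in a sidecar file and are replayed whenever the image is opened."""

    def __init__(self, image_path):
        self.image_path = Path(image_path)
        # Hypothetical sidecar file sitting next to the original photo.
        self.sidecar = Path(str(self.image_path) + ".edits.json")
        self.edits = []  # ordered list of {"filter": name, "params": {...}}
        if self.sidecar.exists():
            self.edits = json.loads(self.sidecar.read_text())

    def add(self, filter_name, **params):
        self.edits.append({"filter": filter_name, "params": params})
        self._save()

    def remove(self, index):
        del self.edits[index]          # the original pixels are untouched
        self._save()

    def reorder(self, old_index, new_index):
        self.edits.insert(new_index, self.edits.pop(old_index))
        self._save()

    def _save(self):
        self.sidecar.write_text(json.dumps(self.edits, indent=2))

    def render(self, load_image, apply_filter):
        """Re-apply every stored edit to a freshly loaded copy of the original."""
        image = load_image(self.image_path)
        for edit in self.edits:
            image = apply_filter(image, edit["filter"], **edit["params"])
        return image
```

Because render() always starts from a freshly loaded copy of the original, removing or reordering entries simply changes what gets replayed – nothing destructive ever touches the source file.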

Possibly the biggest difference, though, is that it can be controlled almost entirely through gestures and voice commands – and it will even tell you what it just did. This is the power of the Perceptual SDK at its best: being able to shake your hand and have the application add a blur filter to a photo, or say something like “Add red filter” to sharpen the reds in a photo.
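In the same spirit, routing recognised gestures and voice phrases to edit operations could look something like the sketch below, building on the EditList example above. The gesture and phrase labels and the speak callback are purely illustrative assumptions, not the Perceptual SDK’s actual event names or Huda’s real command set:

```python
# Illustrative dispatch table: recognised input -> edit operation.
COMMANDS = {
    ("gesture", "hand_shake"):     lambda edits: edits.add("blur", radius=2),
    ("voice",   "add red filter"): lambda edits: edits.add("red_boost", amount=0.3),
    ("voice",   "undo"):           lambda edits: edits.remove(-1),
}

def handle_input(kind, label, edits, speak):
    """Called whenever the camera or microphone reports a recognised event."""
    action = COMMANDS.get((kind, label.lower()))
    if action is None:
        return                      # ignore anything we don't understand
    action(edits)
    # Huda also confirms out loud what it just did; here that is a stub callback.
    speak(f"Applied: {label}")
```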

Huda was built as an entry for the contest, and it doesn’t really represent what I would expect if I were releasing an application like this commercially. It only features a few filters, standard features such as red-eye removal aren’t present, and the perceptual features were limited by time constraints. Since the competition, I have been revamping the core of Huda so that it will have more standard photo editing features, as well as being able to save the edited photos for viewing elsewhere. I’ve also been lucky enough to have contact with Intel and am looking at how I can add their cloud facilities. I have been highly impressed by Intel’s efforts in the application space, so I want to leverage this as much as possible; their cloud and Perceptual teams have been hugely supportive and very, very open to communication.

What other contests or perceptual computing efforts are you currently involved in?

I’m writing a new form of music synthesizer – basically, the app will use 3D space to control pitch and volume. Watch for some surprises in this. 
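One plausible way such a mapping could work – purely a sketch, with assumed axis assignments and ranges rather than anything Peter has described – is a simple linear mapping from normalised hand coordinates to frequency and volume:

```python
def lerp(value, out_lo, out_hi):
    """Map a normalised value in [0, 1] onto [out_lo, out_hi], clamping at the ends."""
    t = max(0.0, min(1.0, value))
    return out_lo + t * (out_hi - out_lo)

def hand_to_synth(x, y, z):
    """x, y, z: normalised hand coordinates from the depth camera, 0.0 to 1.0.
    Returns (frequency_hz, volume) - one possible mapping, purely illustrative."""
    frequency = lerp(x, 110.0, 880.0)   # left to right sweeps three octaves, A2 to A5
    volume = lerp(1.0 - y, 0.0, 1.0)    # raising the hand increases the volume
    return frequency, volume            # z is left free for another parameter
```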

What was your experience with the SDK?

The major issues right now are the speed and accuracy of detection. Sometimes gestures are missed or misreported. This is being worked on continuously and will only improve. Right now, there’s a real innovation battle going on, with Microsoft, LeapMotion and Intel coming out with amazing new features. As perceptual computing becomes a standard feature in Ultrabooks, we’ll see a sea change in the way we use our applications.

The biggest issue, by far, is that we don’t have a set of standards for interacting through perceptual devices. There’s no accepted standard for Okay or Cancel, for instance. User interfaces will evolve to be more natural, so that features like buttons will become less relevant. This represents a huge opportunity for anyone getting into the game right now.

The biggest surprise for me is that Apple and Google haven’t got involved yet, though. They are missing out on something I think is going to represent the next major innovation, and they have a lot of catching up to do.

What advice would you give to other developers looking to create something around perceptual computing?

Just get involved with it. Try it out and try to forget about the application form factors we’re currently used to. Ditch dropdown menus and buttons, and embrace complementary perceptual technologies. We can make applications truly accessible so that everyone can interact naturally with our software. Imagine how great it will be if blind people have the same easy and natural interaction with our programs as we do, or if our software is as easy to use for quadriplegics as it is for you or me. That, to me, is one of the most exciting things about PerC. I can’t wait.

 
