Perceptual Computing: Apps, Hackathons, Demos, Oh My!

It is truly amazing what developers all over the world are coming up with in the field of perceptual computing. Perceptual computing is something Intel® is putting a lot of time, energy, and resources into, as seen in the recently concluded Ultimate Coder: Going Perceptual competition and the ongoing Intel® Perceptual Computing Challenge, which kicks off its second phase on May 6th. To support further advances in this exciting field, Intel has released the Intel® Perceptual Computing SDK, a free framework developers can use to create apps, input methods, interfaces, and anything else they can dream up.

Perceptual computing isn’t just about one product, one platform, or one operating system. It’s about exploring what computers are truly capable of accomplishing:

“We’re not trying to replace anything. We’re just trying to augment existing modes of interaction,” says Barry Solomon, product planner and strategist at Intel. “We’re adding senses to the computer’s brain so it can perceive their surroundings, who’s interacting with it, and make those interactions more intuitive.” – Intel Free Press, “Perceptual Computing Making Computers More Human”

In addition to the great work on display in Intel-sponsored contests, there are plenty of interesting developments going on in perceptual computing, from wink-activated photography to teaching sign language through gesture-activated input. In this article, we’re going to take a quick tour of what’s currently happening in the field.

Perceptual computing apps

More and more apps are coming out that integrate perceptual computing features. A new Google Glass app lets users take photos merely by winking – and of course, the app is called “Winky”. It works by hooking into the wink-detection gesture already built into the glasses, which is disabled by default on the device. More from the developer who came up with this intriguing idea:

"I wound up decompiling some of the code that was on the Glass devices and investigating there. I found that the wink gesture is disabled by default and only engineering or test builds seem like they would have the option to enable it.” – Mike DiGiovanni, ArsTechnica.com

Of course, you can also take a picture with Google Glass by tapping a button or speaking a command, but winking is a more intuitive gesture that works within the boundaries of the device, letting the user document what they’re doing with minimal effort. A video of Winky can be seen here; the developer has also made the source code available on GitHub.
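
To make the idea concrete, here is a minimal sketch of the kind of logic a wink-to-shoot feature involves. This is not Winky’s code or the Glass API: eye_openness_stream (per-frame openness values for each eye) and take_photo are hypothetical stand-ins for whatever eye-tracking and camera hooks a device exposes.

```python
import time

WINK_THRESHOLD = 0.25   # eye-openness ratio below which an eye counts as closed
WINK_MAX_MS = 400       # a wink is a brief closure; anything longer is ignored
COOLDOWN_MS = 1500      # keep a single wink from firing a burst of photos

def watch_for_winks(eye_openness_stream, take_photo):
    """Call take_photo() when exactly one eye closes briefly and then reopens."""
    closed_since = None
    last_shot = 0.0
    for left, right in eye_openness_stream:       # openness values in [0, 1], one pair per frame
        now = time.time() * 1000
        one_eye_closed = (left < WINK_THRESHOLD) != (right < WINK_THRESHOLD)
        if one_eye_closed and closed_since is None:
            closed_since = now                    # a wink may have started
        elif not one_eye_closed and closed_since is not None:
            held_ms = now - closed_since
            closed_since = None
            if held_ms <= WINK_MAX_MS and now - last_shot > COOLDOWN_MS:
                take_photo()
                last_shot = now
```

The important design point is debouncing: a deliberate wink is brief and one-sided, which is what separates it from ordinary blinking and keeps the camera from firing accidentally.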

Intel recently sponsored a hackathon at Sacramento’s Hacker Lab, where participants used the Intel Perceptual Computing SDK to build apps and compete for awards in several categories. Here’s an excerpt from one of the developer teams:

“Stashed away in the upper chambers of Sacramento’s Hacker Lab, fueled by cola, beer, and chips, BOGWAC (Bunch of Guys Working at Concordus) pushed the limits of Intel’s Perceptual Computing SDK. We started the evening with a plan to build a “Math Blasters”-style game for teaching American Sign Language finger spelling. We were going to use Unity3D to build the game and the SDK to integrate Creative’s Interactive Gesture Camera for sign recognition. Two points need to be made here: 1) No one on our team had ever used Unity3D and 2) No one on our team had ever seen or used the SDK. Were we ambitious? Maybe.” – Intel Perceptual Computing Hackathon “Best Educational” Winner
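
Recognizing finger spelling with a gesture camera usually comes down to comparing tracked hand-joint positions against stored reference poses for each letter. The sketch below shows one simple nearest-neighbor approach under that assumption; it is not the team’s Unity3D code or the Intel SDK API, and letter_templates (a dictionary mapping letters to reference joint arrays) is an assumed input.

```python
import numpy as np

def normalize_pose(joints):
    """Center the joints and scale by hand size so matching ignores position and camera distance."""
    joints = np.asarray(joints, dtype=float)       # shape: (num_joints, 3)
    centered = joints - joints.mean(axis=0)
    scale = np.linalg.norm(centered, axis=1).max() or 1.0
    return centered / scale

def classify_letter(joints, letter_templates, max_distance=0.35):
    """Return the closest finger-spelled letter, or None if no template is close enough."""
    pose = normalize_pose(joints)
    best_letter, best_dist = None, float("inf")
    for letter, template in letter_templates.items():
        dist = np.linalg.norm(pose - normalize_pose(template))
        if dist < best_dist:
            best_letter, best_dist = letter, dist
    return best_letter if best_dist <= max_distance else None
```

In a “Math Blasters”-style game, the classified letter would then be checked against the letter the player is being asked to sign.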

A demo of the winning app can be seen below:

Speaking of cool perceptual computing apps, another one just surfaced in the Intel AppUp store called FastAccess Anywhere. This app offers a number of perceptual computing integrations, including the ability to log into websites using facial recognition:


- Quickly Log Into Windows & Websites with Your Face
- Face-Based User Switching
- Industry-Leading Photo Rejection
- Web Login Sync Across All Your Windows, Android, & Apple iOS Devices
- Face-Based Power Savings
- Superior Low-Light Performance
- Automatic Face Learning
- Improved Security
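
Under the hood, face-based login generally follows an enroll-then-compare pattern: store a reference representation of the user’s face, then unlock only when a live capture matches it closely enough. Here is a minimal sketch of that flow, not the FastAccess implementation itself; compute_face_embedding (turning a camera frame into a feature vector) and unlock are hypothetical stand-ins for the recognition backend and the credential store.

```python
import numpy as np

MATCH_THRESHOLD = 0.8   # cosine similarity required before the face is accepted

def cosine_similarity(a, b):
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def enroll_user(frames, compute_face_embedding):
    """Average embeddings from several enrollment frames into one reference vector."""
    return np.mean([compute_face_embedding(f) for f in frames], axis=0)

def try_face_login(frame, reference, compute_face_embedding, unlock):
    """Unlock (e.g. fill in saved website credentials) only when the live face matches the enrolled one."""
    matched = cosine_similarity(compute_face_embedding(frame), reference) >= MATCH_THRESHOLD
    if matched:
        unlock()
    return matched
```

Features such as photo rejection would typically add liveness checks on top of this matching step, so a printed picture of the user cannot pass.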

Perceptual computing seems to be tailor-made for gaming, especially gesture control. Intel displayed a demo of Portal 2 at CES with gesture controls, using technology developed by Sixense, the winner of the recently closed Ultimate Coder: Going Perceptual challenge. A demo can be seen below:

Microsoft’s Envisioning Center includes a futuristic model built on the principles of perceptual computing, combining voice, gesture, and touch. The model includes a TV with a built-in Kinect, a Surface tablet interacting with a giant touchscreen that resembles an all-in-one computer, SmartGlass software, and a touch-enabled kitchen wall that users can interact with. NFC also makes an appearance, letting appliances talk to other devices to share pictures, obey commands, transfer content, and more. You can see a demo of this technology below:

In another demo, from TAT's Open Innovation experiment, touchscreens are driven by gestures, voice, and tactile feedback. They appear on mirrors, computers, tablets, and phones, with content flowing freely between them all with little regard to OS or device. Watch this amazing demo:

SDK

The Intel Perceptual Computing SDK gives developers the head start they need to start creating apps that push the boundaries of computing as we know it:

“Once the technology has been embedded into the increasingly slimmer form factors of an Ultrabook™, these perceptual computing features will become standards that will change how we use computers. Intel has a fascinating vision for the future of interacting with the PC, and this combination of technologies creates a functional gap between smartphones and PCs.” – Chris Sewell, Geek.com

Download the Intel Perceptual Computing SDK for free here. Intel has also provided a basic getting-started guide (PDF).
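
Most apps built on a perceptual computing SDK share the same basic shape: grab frames from the depth/RGB camera, run the built-in analysis (gestures, face, voice), and route the resulting events to application code. The sketch below illustrates that pattern only; the actual SDK exposes C++, C#, and Unity interfaces with its own pipeline classes, so the camera object and its methods here are purely hypothetical placeholders.

```python
def run_perceptual_loop(camera, handlers):
    """Poll frames and dispatch recognized events (gestures, faces, voice commands) to app-level handlers."""
    while camera.has_frames():
        frame = camera.next_frame()                  # color + depth data for one frame
        for event in camera.detect_events(frame):    # e.g. "swipe_left", "thumbs_up", "face_detected"
            handler = handlers.get(event.name)
            if handler:
                handler(event)

# Hypothetical usage: map event names to app actions.
# run_perceptual_loop(camera, {"swipe_left": previous_page, "thumbs_up": confirm_selection})
```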

Just getting started

While this article has presented some pretty astonishing technology, it’s clear that we’re barely scratching the surface of what is possible in perceptual computing:

“Perceptual computing has limitless possibilities. What I find so fascinating about this technology is that we don’t know what this will become. We don’t know what the innovations will be over the next three to five years; it’s going to be fascinating to watch this play out and how people are innovating on it… I’m optimistic that perceptual computing will take off quickly. I’m optimistic that once developers see how easy it is to get access to these capabilities, and how readily they can produce interesting experiences, it is going to cause an explosion in the ecosystem and consumers are going to start getting access to this and find some really interesting, fun things they enjoy, and I think that is going to be extremely powerful.” – Craig Hurst, Director of Visual Computing Products at Intel

Where do you see perceptual computing going in the next few months and years? Give us your thoughts in the comments below.

 

 
