The Role of Sensors in Redefining Human-Computer Interaction

Introduction

Intel’s Ultrabooks, Atom and Core-based tablets, and handheld devices incorporate a wide range of hardware sensors. These sensors enable high-definition image processing, audio processing, motion detection, environmental sensing, geographical and proximity location detection, and more, creating multi-channel, multi-dimensional connections between the user and the device. Combined with high computing performance and low energy consumption, they provide the foundation for innovative changes in the interfaces between people and their devices, and in the interactions between people and the world through those devices.

Perceptual Computing

For decades, we have relied on mice, keyboards, and touch screens as the main ways to “command” computers. These input methods are not always intuitive, and they require direct physical contact with the machine.

In his recent keynote speech at the Intel Developer Forum (IDF) 2012, Intel executive vice president Dadi Perlmutter presented perceptual computing as the next wave of technology that will redefine human-computer interaction.

Perceptual computing is a technology that uses voice commands, facial recognition, and gesture controls to interact with a computer. The computer and its applications “perceive” the user’s intentions based on the sensor data they collect.

Here are some perceptual computing use cases:

· You use voice commands to ask your computer to start the eReader app and open your favorite book from the digital library, then use hand gestures to turn the pages without touching the screen (sketched in code below).

· In a 3D car-racing video game, the app detects how the player’s hands grip and tilt an imaginary steering wheel and translates those gestures into the vehicle’s movements.

Intel formally announced the Perceptual Computing SDK 2013 Beta during the IDF.
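To make the eReader scenario above concrete, here is a minimal Python sketch of the intent-mapping layer such an application might contain. The EReader class, the handle_event function, and the simulated voice and gesture events are all hypothetical; a real application would receive recognition events from a toolkit such as the Perceptual Computing SDK rather than from the hard-coded list below.

# A minimal sketch of the "perceive intent" layer, assuming a hypothetical
# recognizer that emits voice-command strings and gesture labels.
# All names here are illustrative, not part of any real SDK.

class EReader:
    """Toy eReader driven by perceived voice and gesture events."""

    def __init__(self, pages):
        self.pages = pages
        self.current = 0

    def open_book(self, title):
        print(f"Opening '{title}' at page 1")
        self.current = 0

    def turn_page(self, direction):
        # Clamp the page index to the valid range.
        self.current = max(0, min(len(self.pages) - 1, self.current + direction))
        print(f"Now on page {self.current + 1}: {self.pages[self.current]}")


def handle_event(reader, kind, value):
    # Map perceived inputs (voice or gesture) to application intents.
    if kind == "voice" and value.startswith("open "):
        reader.open_book(value[len("open "):])
    elif kind == "gesture" and value == "swipe_left":
        reader.turn_page(+1)   # next page
    elif kind == "gesture" and value == "swipe_right":
        reader.turn_page(-1)   # previous page


if __name__ == "__main__":
    reader = EReader(pages=["Intro", "Chapter 1", "Chapter 2"])
    # Simulated recognizer output; a real app would receive these events
    # from the SDK's voice and gesture recognition pipelines.
    for event in [("voice", "open My Favorite Book"),
                  ("gesture", "swipe_left"),
                  ("gesture", "swipe_left")]:
        handle_event(reader, *event)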

Augmented Reality  

If perceptual computing is about the interactions between people and computers, augmented reality is about the interfaces between people and the world through computers. The ideas and applications of augmented reality have been around for decades, and interest surged again after Google announced its “Google Glass” project earlier this year.

Augmented reality (AR) is a real-time view of the physical world whose elements are augmented by computer-generated information. It is an extension of virtual reality: AR associates real-world objects with sensory inputs such as images, video, sound, and location data. On mobile devices, the implementation of an augmented reality application depends heavily on the device’s sensors, such as the video camera, orientation sensor, compass, GPS, and proximity sensor.

Here are some augmented reality use cases:

· A vehicle navigation app overlays road names, landmarks, and turn-by-turn instructions on the phone’s live camera view.

· You point your phone’s camera at a restaurant on the street, and the screen shows today’s menu and customer reviews (see the sketch after this list).
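To illustrate the restaurant scenario, here is a minimal Python sketch of how an AR app might combine the GPS position and compass heading to decide whether a point of interest falls inside the camera’s field of view. The coordinates, heading, and field-of-view values are made up for illustration; a production app would read them from the device’s sensors.

# A minimal sketch of combining GPS and compass data to place an AR overlay.
# All numeric values below are illustrative assumptions.
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from (lat1, lon1) to (lat2, lon2), in degrees."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = (math.cos(phi1) * math.sin(phi2)
         - math.sin(phi1) * math.cos(phi2) * math.cos(dlon))
    return math.degrees(math.atan2(y, x)) % 360

def poi_offset_in_view(device_lat, device_lon, heading, poi_lat, poi_lon, fov=60.0):
    """Return the POI's angular offset from screen center, or None if off-screen."""
    bearing = bearing_deg(device_lat, device_lon, poi_lat, poi_lon)
    # Signed difference between POI bearing and compass heading, in [-180, 180).
    offset = (bearing - heading + 180) % 360 - 180
    return offset if abs(offset) <= fov / 2 else None

# Device near Times Square facing roughly north-east (heading 45 degrees),
# checking a nearby restaurant; values are hypothetical.
offset = poi_offset_in_view(40.7580, -73.9855, 45.0, 40.7614, -73.9776, fov=60.0)
if offset is not None:
    print(f"Draw overlay {offset:+.1f} degrees from screen center")
else:
    print("POI is outside the camera's field of view")

In practice, the heading would come from the compass (corrected by the orientation sensor), the device position from GPS, and the field of view from the camera’s specifications; the offset would then be mapped to a horizontal pixel position for drawing the overlay.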


Summary

This article discusses perceptual computing and augmented reality at a high level. The major elements that provide the foundation for redefining human-computer interaction include:

· Advanced silicon technology that delivers high computing performance with low energy consumption.

· Sensors that provide accurate, real-time data.


Refer to our Optimization Notice for more information regarding performance and optimization choices in Intel software products.