Programming Considerations for Sensors on Ultrabook™ Notebooks, Convertibles, and Tablets

Sample code: UltrabookFeatureDetectionApp.zip [ZIP 368KB]

Ultrabook™ devices are available in a variety of form factors, from ultra-portable notebooks to tablets and convertibles. One unique feature of Ultrabook devices is the assortment of sensors, including touch, compass, and accelerometer, but not all devices have all sensors. This document discusses how to detect which sensors a device has and how to utilize them.


Introduction


The world of Windows* 8 running on an Intel®-powered Ultrabook™ device opens up opportunities for applications to offer new user experiences and innovative functionality. The Ultrabook system brings new hardware capabilities, such as a wealth of built-in sensors and communication devices. Windows 8 makes it easy to take advantage of these capabilities: you can create applications with intuitive touch and gesture interactions that also integrate information about their surroundings through location data and communication with nearby devices.

You can create user interfaces that allow interaction with your applications in a natural way by taking advantage of the Ultrabook touch-sensitive input devices. Utilize familiar gestures like touch, tap, drag, swipe, pinch and spread. In addition, Ultrabook devices contain a variety of sensors that can detect orientation, direction, and movement. Use these sensors to orient a map application relative to the compass, increase readability of the screen under various lighting scenarios, or tilt and rotate the Ultrabook device to interact with a game.

Mobile devices are always on-the-go, and applications can now be location-aware. Windows 8 uses a combination of GPS, IP address, and coordinate triangulation to determine the device’s location and report that information to applications. With this information, your application can show its location on a map, provide directions to a specified location, or report items of interest nearby.

Many of the sensors in an Ultrabook have to do with movement or orientation of the device. For example, the accelerometer measures acceleration in three different directions (X, Y, Z), which your program can use to detect when someone shakes the device. In a similar vein, the gyrometer returns angular velocity values with respect to the X, Y, and Z axes. If your application needs to know its angles of rotation around the X, Y, and Z axes, the Ultrabook's inclinometer gives pitch, roll, and yaw data.

The OrientationSensor is very useful for game developers because it returns a rotation matrix and a Quaternion that an application can use to adjust the user’s perspective. Finally, the compass, of course, returns a heading with respect to True North.

Sensors not related to motion include the LightSensor, which detects ambient light. Using it, your application can adjust the display to make it easier to read in the current light conditions. The ProximityDevice enables your application to publish messages to nearby devices or subscribe to messages from proximate devices over a distance of 3-4 cm. This is an example of Near Field Communication (NFC).

Since not all Windows 8 devices have all sensors, this paper covers how to determine which sensors are available and demonstrates how to utilize touch/gesture input, recognize whether the user is using touch or a stylus, have an application react to movement of the device and the ambient light level, recognize the device’s location and compass direction, and communicate using NFC. These concepts are demonstrated using both a Windows 8 desktop application written using WPF (Windows Presentation Foundation) and an application written in HTML5 and JavaScript*.

Ultrabook Sensors

  1. Multi-Touch
  2. Ink vs. touch input
  3. Accelerometer
  4. Ambient light detector
  5. Gyroscope
  6. Compass
  7. GPS
  8. Near Field Communications (NFC)
  9. Device orientation
  10. Inclinometer

Development Environment


To get the most out of this demonstration, you need a working installation of Microsoft Visual Studio* 2012 Express for Windows Desktop and a standard WPF solution already created.

Determining What Sensors are Present

This section describes the steps needed to perform run-time sensor discovery using WinRT for both Windows Store apps and Windows 8 desktop applications.

Sensor detection in Windows 8 desktop applications written using WPF

The first step in sensor discovery is to enable our new desktop application to see and use WinRT assemblies. The trickiest part is making Visual Studio’s Reference Manager tool see the Windows assembly. To do that, we need to edit the .csproj file, located in the application’s source directory, and add the following line (Notepad should do just fine to edit the file):

 <TargetPlatformVersion>8.0</TargetPlatformVersion>
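The element typically sits inside the project's main <PropertyGroup>, alongside the properties the project already defines. A minimal sketch (your existing properties will differ):

<PropertyGroup>
  <!-- existing properties such as OutputType and RootNamespace stay as they are -->
  <TargetPlatformVersion>8.0</TargetPlatformVersion>
</PropertyGroup>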



Figure 1: Visual Studio* .csproj file

Save the file and make sure to reload your solution. If all goes well, you should now be able to select the Windows assembly using Visual Studio’s Reference Manager tool.



Figure 2: Visual Studio* Reference Manager

With the Windows assembly referenced, you can use WinRT namespaces, including the Windows.Devices.Sensors namespace. The bad news is that the solution will now fail to build and run once you try to use any of the WinRT-provided functionality, because of a missing DLL reference.

But fixing this is as easy as adding a reference to System.Runtime.InteropServices.WindowsRuntime.dll (C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework\.NETCore\v4.5).



Figure 3: Adding a reference

The general rule when using WinRT from a desktop application is to stay away from any objects found in UI-dependent namespaces.

With WinRT referenced and working, we are home free and can proceed to use the Windows.Devices.Sensors namespace. Now the only thing you need to do in order to find out if a sensor is indeed available is to retrieve the sensor default instance using the following code:

//Get the default accelerometer instance
_accelerometer = Accelerometer.GetDefault();
//Register a reading changed handler if accelerometer present
if (_accelerometer != null)
   _accelerometer.ReadingChanged += _accelerometer_ReadingChanged;

The procedure is identical for all sensor types exposed through WinRT.
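For example, the same null-check pattern applies to any other sensor, such as the ambient light sensor (the field and handler names here are illustrative):

//Get the default light sensor instance
_lightSensor = LightSensor.GetDefault();
//Register a reading changed handler if a light sensor is present
if (_lightSensor != null)
   _lightSensor.ReadingChanged += _lightSensor_ReadingChanged;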

Sensor detection in Windows Store Apps using HTML5 and JavaScript

When working with Windows Store apps, we do not need to worry about referencing WinRT (obviously), so sensor capability discovery is as simple as acquiring the default sensor instance:

var accelerometer = Windows.Devices.Sensors.Accelerometer.getDefault();
var compass = Windows.Devices.Sensors.Compass.getDefault();
var gyrometer = Windows.Devices.Sensors.Gyrometer.getDefault();
var inclinometer = Windows.Devices.Sensors.Inclinometer.getDefault();
var lightSensor = Windows.Devices.Sensors.LightSensor.getDefault();
var orientationSensor = Windows.Devices.Sensors.OrientationSensor.getDefault();

To confirm that a specific sensor is actually available, check whether getDefault() returned a valid instance (it returns null when the sensor is not present):

if (accelerometer) {
       //Indicate sensor available
} else {
       //Indicate sensor unavailable
}
if (compass) {
       //Indicate sensor available
} else {
       //Indicate sensor unavailable
}
if (gyrometer) {
       //Indicate sensor available
} else {
       //Indicate sensor unavailable
}
if (inclinometer) {
       //Indicate sensor available
} else {
       //Indicate sensor unavailable
}
if (lightSensor) {
       //Indicate sensor available
} else {
       //Indicate sensor unavailable
}
if (orientationSensor) {
       //Indicate sensor available
} else {
       //Indicate sensor unavailable
}

Input (pointer) capability detection

If you only want to query touch capabilities, the easiest way is to use the TouchCapabilities class found in the Windows.Devices.Input namespace. An instance of this class can be used to query whether the device has touch input available and, if so, the number of input points supported. The drawback of using the TouchCapabilities class is that it does not provide specific information on the number of touch devices, and the returned contact point count represents the maximum number of points supported by any available device. To get more specific input device information, you have to use the PointerDevice.GetPointerDevices() method and iterate over the returned PointerDevice collection.

//Get all of the pointer devices available on this system
var pointers = PointerDevice.GetPointerDevices();
ObservableCollection<PointerStateObject> pStates = new
   ObservableCollection<PointerStateObject>();
foreach (PointerDevice device in pointers) {
	//Check device type
	if (device.PointerDeviceType == PointerDeviceType.Mouse) {
		//do something
	} else if (device.PointerDeviceType == PointerDeviceType.Pen) {
		//do something
	} else if (device.PointerDeviceType == PointerDeviceType.Touch) {
		//do something
	}

	//Get the device's contact point count
	uint pointCount = device.MaxContacts;
	//do something
	pStates.Add(new
		PointerStateObject(device.PointerDeviceType, device.MaxContacts));
}
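For comparison, the simpler TouchCapabilities query described above might look like the following sketch (the variable names are illustrative):

//Query basic touch support without enumerating individual devices
//(TouchCapabilities also lives in Windows.Devices.Input)
TouchCapabilities touchCapabilities = new TouchCapabilities();
//TouchPresent is non-zero when a touch digitizer is available
bool touchAvailable = touchCapabilities.TouchPresent != 0;
//Contacts reports the maximum number of contact points supported
uint maxContactPoints = touchCapabilities.Contacts;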

Working with Touch and Stylus Input

A few notes on touch-smart applications

The first thing to realize about a touch interface is that using touch is not the same as using a mouse—just because your application supports mouse input does not mean it is touch friendly. The biggest difference between a mouse and touch is that users cannot target objects as accurately using touch. Consequently, you can't expect users to tap or manipulate small objects.

Also, a touch point is not a single point. When a user touches a touch device, the touch is registered as an area, not a single point. In fact, that area will move slightly as the user presses the device or moves his or her hand. Therefore, your application cannot count on users selecting a single pixel using touch.

Using touch requires a larger “target” for users to touch. Interactive controls must be large enough to be easily touchable—at least 11mm (40 pixels square) with 2mm (10 pixels) or more between targets. You may need to adjust these rules based upon the style of control as well. Some controls may be usable slightly smaller, while others will need to be larger—try them and see. Note, too, that targets near the edge of the display can be difficult to touch.

Users will expect support of all relevant gestures including panning, zoom, rotation, two-finger tap, press and tap, etc. Incorporating these gestures into your application opens opportunities to expand its usability. Zooming in or out, which used to be handled with a mouse and a small slider, can now be accomplished with a two-finger spread or pinch. For all gestures, make sure your program provides smooth, responsive visual feedback, and incorporate momentum, inertia, and friction to give a natural feel to the interaction.

While Ultrabook notebooks today have a fixed keyboard below a touch screen, do not assume this will always be the case; convertibles and tablets may not have a keyboard at all and may not always be used in landscape orientation. A good application will work well in portrait as well as landscape modes. Switching to portrait mode may affect the touch design of your application, such as the location and size of touch targets.

Touch can enhance an application and potentially make difficult user interactions easy and natural to perform, but it should make sense and not be used “just because you can.”  Using a two-finger gesture to rotate something that needs to be rotated with precise accuracy may not make sense unless the application provides touch controls to enable that accuracy. Above all, tasks need to be forgiving, allow users to correct mistakes easily, and handle inaccuracy with touching and dragging.

Touch and stylus input in applications written using WPF

When working with WPF, touch input gets handled using UIElement’s touch events: TouchDown, TouchMove, and TouchUp. Every touch event that occurs comes with a touch device instance that we can use for identification, for example to assign a specific color.

//Touch down event handler
private void Canvas_TouchDown(object sender, TouchEventArgs e)
{
  //Make the TouchCanvas element capture touch events from the given
  //touch device
  TouchCanvas.CaptureTouch(e.TouchDevice);
  //Check if we are yet to assign a color to this touch device
  if (!_pointerColors.ContainsKey(e.TouchDevice.Id))
  {
     //Create a new System.Drawing.Color from the argb values of a
     //palette item's color
     System.Drawing.Color c = System.Drawing.Color.FromArgb( 
				_palette[_pointerColors.Count].PaletteColor.A,
				_palette[_pointerColors.Count].PaletteColor.R,
				_palette[_pointerColors.Count].PaletteColor.G,
				_palette[_pointerColors.Count].PaletteColor.B);
     //Associate the color with a touch device id
     _pointerColors.Add(e.TouchDevice.Id, c);
  }
  //mark event as handled
  e.Handled = true;
  this.InputTypeDisplay.Content = "Touch";
}

With a color assigned to a particular touch device, we can now go ahead and use touch move events to draw a point onto a pixmap. Getting the input point’s position relative to the drawing surface is as simple as calling the event’s GetTouchPoint method.

//Touch move event handler
private void Canvas_TouchMove(object sender, TouchEventArgs e)
{
  //Get the touch point relative to the TouchCanvas element
  TouchPoint p = e.GetTouchPoint(TouchCanvas);
  //Get the color associated with the touch device id
  System.Drawing.Color c = _pointerColors[e.TouchDevice.Id];
  //Translate touch point location into floating-point space
  System.Drawing.PointF drawPoint = new
	 System.Drawing.PointF((float)p.Position.X, (float)p.Position.Y);

  //Create a new ellipse item and position it on the touch point
  DrawItem item = new Ellipse(drawPoint, c, 15.0f, 15.0f);
  //Mark the event as handled
  e.Handled = true;
  //Add item to the touch draw items list
  _itemLock.WaitOne();
  _touchDrawItems.Add(item);
  _itemLock.ReleaseMutex();
  this.InputTypeDisplay.Content = "Touch";
}

Notice that because we do our drawing at regular intervals, we just store ellipse items in the move event handler. The touch up event is simply used to release the drawing surface's touch capture.

//Touch up event handler
private void Canvas_TouchUp(object sender, TouchEventArgs e)
{
  //Release touch capture from the given touch device
  TouchCanvas.ReleaseTouchCapture(e.TouchDevice);
  //Mark the event as handled
  e.Handled = true;
  this.InputTypeDisplay.Content = "Touch";
}
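The periodic drawing pass itself belongs to the demo application rather than the snippets above. As a rough, hypothetical sketch of what it could look like (DrawItem.Draw() and the _drawingSurface field are assumptions here, not actual demo API):

//Hypothetical render timer tick: drain the stored items under the mutex
private void RenderTimer_Tick(object sender, EventArgs e)
{
  _itemLock.WaitOne();
  //Copy the pending items so the lock is held as briefly as possible
  var pending = _touchDrawItems.ToArray();
  //Items already committed to the pixmap do not need to be redrawn
  _touchDrawItems.Clear();
  _itemLock.ReleaseMutex();

  //Draw each stored ellipse onto the drawing surface (assumed method)
  foreach (DrawItem item in pending)
    item.Draw(_drawingSurface);
}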

As with touch, pen input gets handled with stylus events: StylusDown, StylusMove, and StylusUp.

TouchCanvas.StylusDown += TouchCanvas_StylusDown;
TouchCanvas.StylusMove += TouchCanvas_StylusMove;
TouchCanvas.StylusUp += TouchCanvas_StylusUp;

The catch here is that once we mark a stylus event as handled, we will no longer get a touch event, even if the event’s source indeed comes from a touch device. Fortunately for us, stylus events provide the same data that touch events do, with the addition of a pressure property. You can learn more about this by reading Disable the RealTimeStylus for WPF Applications.

//Stylus down event handler
private void TouchCanvas_StylusDown(object sender, StylusDownEventArgs e)
{
  //Make the TouchCanvas element capture all stylus events
  TouchCanvas.CaptureStylus();
  //Mark the event as handled
  e.Handled = true;
  if (!_pointerColors.ContainsKey(e.StylusDevice.Id))
  {
    //Create a new System.Drawing.Color from the argb values of
    //a palette item's color
    System.Drawing.Color c = System.Drawing.Color.FromArgb( 
			_palette[_pointerColors.Count].PaletteColor.A,
			_palette[_pointerColors.Count].PaletteColor.R,
 			_palette[_pointerColors.Count].PaletteColor.G,
 			_palette[_pointerColors.Count].PaletteColor.B);
    //Associate the color with the stylus device id
    _pointerColors.Add(e.StylusDevice.Id, c);
  }
}

private void TouchCanvas_StylusMove(object sender, StylusEventArgs e)
{
  //Get all event stylus points
  foreach (StylusPoint p in e.GetStylusPoints(TouchCanvas))
  {
     //Translate touch point location into floating-point space
     System.Drawing.PointF drawPoint = new System.Drawing.PointF(
				(float)p.X, (float)p.Y);

     System.Drawing.Color c = _pointerColors[e.StylusDevice.Id];

     //Create a new ellipse item, position it on the touch point and
     //resize it accordingly to the pressure applied
     DrawItem item = new Ellipse(drawPoint, c, 15.0f * p.PressureFactor,
				15.0f * p.PressureFactor);
     //Add item to the touch draw items list
     _itemLock.WaitOne();
     _touchDrawItems.Add(item);
     _itemLock.ReleaseMutex();
  }
  //Mark the event as handled
  e.Handled = true;
}

private void TouchCanvas_StylusUp(object sender, StylusEventArgs e)
{
  //Make the TouchCanvas element release the stylus capture
  TouchCanvas.ReleaseStylusCapture();
  //Mark the event as handled
  e.Handled = true;
}

Touch and stylus input in applications written using HTML5/WinJS

While working with Windows Store apps, we have the benefit of working with pointer events, which encapsulate input events from touch, pen, and mouse devices.

//Register touch event handlers
target.addEventListener("MSPointerDown", pointerDownEvent, false);
target.addEventListener("MSPointerUp", pointerUpEvent, false);
target.addEventListener("MSPointerMove", pointerMoveEvent, false);

As with WPF applications, we still work with the same old down and up events. The difference is that we get the same event for every device type and are able to determine the device type from the event arguments.

function pointerDownEvent(e) {
  //Check pointer device type
  if (e.pointerType == e.MSPOINTER_TYPE_TOUCH) {
    //Get the point coordinates 
    var pPoint = e.getCurrentPoint(target);
    //Get the pointer point id
    var pid = pPoint.pointerId;
    //Get a free color for the pointer id
    if (pktColorStore.freeColors.length > 0) {
       pktColorStore.usedColors[pid] = pktColorStore.freeColors.pop();
    }
    //Stop event propagation
    e.stopPropagation();
    InputType.innerText = "Touch";
  } else if (e.pointerType == e.MSPOINTER_TYPE_PEN) {
    //Stop event propagation
    e.stopPropagation();
    InputType.innerText = "Pen";
  }
}

function pointerUpEvent(e) {
  //Check pointer device type
  if (e.pointerType == e.MSPOINTER_TYPE_TOUCH) {
    //Get the point coordinates 
    var pPoint = e.getCurrentPoint(target);
    //Get the pointer point id
    var pid = pPoint.pointerId;
    //Return the color used by this pointer id to the free color store
    pktColorStore.freeColors.push(pktColorStore.usedColors[pid]);
    //Delete the dictionary entry for this pointer id
    delete pktColorStore.usedColors[pid];
    //Stop event propagation
    e.stopPropagation();
    InputType.innerText = "Touch";
  } else if (e.pointerType == e.MSPOINTER_TYPE_PEN) {
    //Stop event propagation
    e.stopPropagation();
    InputType.innerText = "Pen";
  }
}

function pointerMoveEvent(e) {
  //Check pointer device type
  if (e.pointerType == e.MSPOINTER_TYPE_TOUCH
   || e.pointerType == e.MSPOINTER_TYPE_PEN) {
    //Get the point coordinates
    var pPoint = e.getCurrentPoint(target);
    //Get the pointer point id
    var pid = pPoint.pointerId;
    //Create a new touch point instance
    var pkt = new TouchPkt(GraphicsScene, pid);
    //Decide the pointer device type
    if (e.pointerType == e.MSPOINTER_TYPE_PEN) {
      //Change the pen pressure value from 0-255 to 0-1
      var pressureFloat = e.pressure / 255.0;
      //Set up the touch point size to represent the pen's pressure
      pkt.Width = 15 * pressureFloat;
      pkt.Height = 15 * pressureFloat;
      //Use the pen color from the touch color store
      pkt.Color = pktColorStore.penColor;
      InputType.innerText = "Pen";
    } else {
      //Points read from a touch device are always the same size
      pkt.Width = 15;
      pkt.Height = 15;
      //Get the color associated with the pointer point id
      pkt.Color = pktColorStore.usedColors[pid];
      InputType.innerText = "Touch";
    }
    //Move the point to the recorded input device location
    pkt.MoveTo(pPoint.position.x, pPoint.position.y, 0);
    //Make the scene draw all its new items
    GraphicsScene.Update();
    //Stop event propagation
    e.stopPropagation();
  }
}

With the differences mentioned above, the rest of the flow is identical to the WPF application.

Touch Gesture Handling

Gesture handling in applications written using WPF

The simplest way to get gesture support in WPF applications is to use the UIElement's built-in manipulation events. The downside here is that we are limited to pinch, rotate, and drag gestures.

The manipulation events are:

  • ManipulationStarting - Raised every time a new user manipulation interaction starts.
  • ManipulationDelta - Raised on every manipulation update and inertia update.
  • ManipulationInertiaStarting - Raised after the user interaction finishes; used to set up inertia.

To receive them, the element must have IsManipulationEnabled set to true:
<Image x:Name="photo" Source="/Assets/animals-bear.png" IsManipulationEnabled="True" Width="500" />
	...
//Register for manipulation events
    photo.ManipulationStarting += m_rect_ManipulationStarting;
    photo.ManipulationDelta += m_rect_ManipulationDelta;
    photo.ManipulationInertiaStarting += m_rect_ManipulationInertiaStarting;

In the ManipulationStarting event handler, we simply set the manipulation’s container object, which becomes the root object for all manipulation calculations.

void m_rect_ManipulationStarting(object sender, ManipulationStartingEventArgs e)
{
  //Set the manipulation container to this.
  //the container is used as the relative object for all the calculations.
  e.ManipulationContainer = this;
}

The sole purpose of the inertia starting event is to set up the post-manipulation inertia behavior.

void m_rect_ManipulationInertiaStarting(object sender, ManipulationInertiaStartingEventArgs e)
{
  //Set the manipulations inertia values
  e.TranslationBehavior = new InertiaTranslationBehavior()
  {
    InitialVelocity = e.InitialVelocities.LinearVelocity,
    DesiredDeceleration = 10.0 * 96.0 / 1000000.0
  };

  e.ExpansionBehavior = new InertiaExpansionBehavior()
  {
    InitialVelocity = e.InitialVelocities.ExpansionVelocity,
    DesiredDeceleration = 10.0 * 96.0 / 1000000.0
  };

  e.RotationBehavior = new InertiaRotationBehavior()
  {
    InitialVelocity = e.InitialVelocities.AngularVelocity,
    DesiredDeceleration = 720.0 / 1000000.0
  };
  e.Handled = true;
}
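The deceleration values are expressed in device-independent pixels (1/96th of an inch) per millisecond squared. For example, 10.0 * 96.0 / 1000000.0 corresponds to a deceleration of 10 inches per second squared: 10 inches/s² times 96 DIPs per inch, divided by 1,000,000 to convert seconds squared into milliseconds squared. Likewise, 720.0 / 1000000.0 slows the rotation at 720 degrees per second squared.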

In the ManipulationDelta event handler, we extract all of the delta values and apply those to the target object’s render transformation matrix.

void m_rect_ManipulationDelta(object sender, ManipulationDeltaEventArgs e)
{
  var rect = e.Source as FrameworkElement;
  if (rect != null)
  {
    //Get the manipulation's delta
    var delta = e.DeltaManipulation;
    //Get the source element's current transformation matrix
    var matrix = ((MatrixTransform)rect.RenderTransform).Matrix;

    double oldXOffset = matrix.OffsetX;
    double oldYOffset = matrix.OffsetY;

    //Get the transformed center point
    Point rectCenter = new Point(rect.ActualWidth * 0.5,
			     rect.ActualHeight * 0.5);
    rectCenter = matrix.Transform(rectCenter);

    //Adjust the element's scale, rotation, and translation
    matrix.ScaleAt(delta.Scale.X, delta.Scale.Y, rectCenter.X, rectCenter.Y);
    matrix.RotateAt(delta.Rotation, rectCenter.X, rectCenter.Y);
    matrix.Translate(delta.Translation.X, delta.Translation.Y);

    e.Handled = true;

    if (e.IsInertial)
    {
      //Get the containing element's size rect
      Rect containingRect = new Rect(((FrameworkElement)
					  e.ManipulationContainer).RenderSize);
      //Get the transformed element's new bounds
      Rect shapeBounds = rect.RenderTransform.TransformBounds(new
					  Rect(rect.RenderSize));
      //If the element falls out of bounds
      if (!containingRect.Contains(shapeBounds))
      {
        //Report boundary feedback
        e.ReportBoundaryFeedback(e.DeltaManipulation);
        //Stop any further inertia
        e.Complete();
      }
      double halfWidth = (this.ActualWidth - this.photo.ActualWidth) * 0.5;
      double halfHeight = (this.ActualHeight - this.photo.ActualHeight) * 0.5;

      //Restore the previous offsets if the element would leave the view
      if (matrix.OffsetX < -halfWidth || matrix.OffsetX > halfWidth)
        matrix.OffsetX = oldXOffset;
      if (matrix.OffsetY < -halfHeight || matrix.OffsetY > halfHeight)
        matrix.OffsetY = oldYOffset;
    }

    //Update the element's render transformation
    rect.RenderTransform = new MatrixTransform(matrix);
  }
}

The thing to note here is that we limit all inertia manipulations to keep the manipulated item inside the view’s scope.

Gesture handling in applications written using HTML5/WinJS

As with touch and pen input, gesture recognition is quite different when working with Windows Store apps. The most obvious difference is that we do not get any manipulation or gesture events. Instead we need to create an instance of a dedicated GestureRecognizer object and feed it with the raw pointer points that we get from an object’s pointer events.

The gesture recognizer exposes a number of events:

  • manipulationstarted - Raised every time a pinch, rotate, or move gesture starts.
  • manipulationupdated - Raised on every pinch, rotate, or move update.
  • manipulationended - Raised when a pinch, rotate, or move gesture ends.
  • manipulationinertiastarting - Raised when the user interaction ends and inertia begins; used to set up inertia behavior.
  • crosssliding - Raised on every cross-sliding gesture.
  • holding - Raised on every holding gesture.
  • righttapped - Raised on every right-tap gesture.
  • dragging - Raised on every dragging gesture.

The demo registers a handler for each event that the supplied gesture handler implements:
//Register gesture recognizer manipulation event handlers
if (gestureHandler.ManipulationStart) {
  this.gestureRecognizer.addEventListener("manipulationstarted",
  		gestureHandler.ManipulationStart, false);
}
if (gestureHandler.ManipulationUpdate) {
  this.gestureRecognizer.addEventListener("manipulationupdated",
  		gestureHandler.ManipulationUpdate, false);
}
if (gestureHandler.ManipulationCompleted) {
  this.gestureRecognizer.addEventListener("manipulationended",
  		gestureHandler.ManipulationCompleted, false);
}
if (gestureHandler.ManipulationInertiaStart) {
  this.gestureRecognizer.addEventListener("manipulationinertiastarting",
  		gestureHandler.ManipulationInertiaStart, false);
}
if (gestureHandler.CrossSliding) {
  this.gestureRecognizer.addEventListener("crosssliding",
  		gestureHandler.CrossSliding, false);
}
if (gestureHandler.Holding) {
  this.gestureRecognizer.addEventListener("holding",
  		gestureHandler.Holding, false);
}
if (gestureHandler.RightTapped) {
  this.gestureRecognizer.addEventListener("righttapped",
  		gestureHandler.RightTapped, false);
}
if (gestureHandler.Dragging) {
  this.gestureRecognizer.addEventListener("dragging",
  		gestureHandler.Dragging, false);
}

But before we get any of those events, we need to let the recognizer know which gestures we are interested in. In the example below, we just go ahead and register for all gesture types.

//Create gesture settings
var settings = 0;
settings |= Windows.UI.Input.GestureSettings.manipulationRotate;
settings |= Windows.UI.Input.GestureSettings.manipulationRotateInertia;
settings |= Windows.UI.Input.GestureSettings.manipulationScale;
settings |= Windows.UI.Input.GestureSettings.manipulationScaleInertia;
settings |= Windows.UI.Input.GestureSettings.manipulationTranslateX;
settings |= Windows.UI.Input.GestureSettings.manipulationTranslateY;
settings |= Windows.UI.Input.GestureSettings.manipulationTranslateInertia;
settings |= Windows.UI.Input.GestureSettings.crossSlide;
settings |= Windows.UI.Input.GestureSettings.doubleTap;
settings |= Windows.UI.Input.GestureSettings.drag;
settings |= Windows.UI.Input.GestureSettings.hold;
settings |= Windows.UI.Input.GestureSettings.rightTap;
settings |= Windows.UI.Input.GestureSettings.tap;

gestureRecognizer.gestureSettings = settings;

The demo application also comes with a gesture helper library that takes care of all the grunt work of setting up a gesture recognizer and passing on pointer points. First, we create a new GestureHandler instance, making sure to set up manipulation handlers and manipulation bounds.

//Create a new GestureManipulation.GestureHandler subclass
var HandlerClass = WinJS.Class.derive(GestureManipulation.GestureHandler,
  //CTOR
  function () {
    //Store an instance pointer
    var instance = this;
    //A manipulation handler method
    function handleManipulation(e) {
      //Check if a manipulation delta is available
      if (e.delta != undefined)
        //Apply the manipulation delta to the current item transformation
        instance.ApplyDelta(e.position, e.delta);
    }
    //Inertia start handler
    function handleInertiaStart(e) {
    }
    //Overload manipulation event handlers with the local handler functions
    this.ManipulationStart = handleManipulation;
    this.ManipulationUpdate = handleManipulation;
    this.ManipulationCompleted = handleManipulation;
    this.ManipulationInertiaStart = handleInertiaStart;
    this.KeepInBounds = true;
    this.MaxDxBounds = parent.scrollWidth;
    this.MaxDyBounds = parent.scrollHeight;
  },
  //PUBLIC
  {},
  //PUBLIC STATIC
  {}
);

Then we create an instance of the new GestureHandler subclass and use that to initiate a new GestureManipulator instance. We also set up the translation inertia deceleration to make sure it’s less violent than the default.

//Create a new HandlerClass instance
var handler = new HandlerClass();
            
	...

//Create a new GestureManipulation.Manipulator instance
//see /js/gesturemanipulation.js for more details
var manipulator = new GestureManipulation.Manipulator(target, parent);
//Initiate the GestureManipulation.Manipulator instance with the
//new local handler and settings
manipulator.Init(handler, settings);

//Limit Translation inertia
manipulator.GestureRecognizer().inertiaTranslationDeceleration = 384.0 *
	96.0 / 1000000.0;

Near Field Communication

Near Field Communication (NFC) is designed to work over short distances, establishing a connection between two devices whose NFC transmitters are within 3-4 cm of each other. This means you need to know where the two transmitters are located and get them into close physical proximity, ideally touching. Some device pairings make this easier than others; for example, an Ultrabook device and an NFC-enabled phone are easier to align than two notebooks. It may take some trial and error to find the correct orientation.

NFC in applications written using WPF

To work with NFC, we need to reference some RT libraries as discussed at the beginning of this document. Once the RT libraries are linked, we can get the default proximity device instance. Then we can register for the device-arrived and device-departed events. We also subscribe for the writable tag message so we can detect when a writable tag comes in range.

//NFC
using Windows.Networking.Proximity;

_nfc = ProximityDevice.GetDefault();
//Register event handlers if an NFC proximity device is present
if (_nfc != null)
{
    _nfc.DeviceArrived += _nfc_DeviceArrived;
    _nfc.DeviceDeparted += _nfc_DeviceDeparted;
    tagMsgSubscription = _nfc.SubscribeForMessage("WriteableTag",
				 _nfc_WriteableTagMessage);
} 

When we get the writable tag message, we simply notify the user by setting the device label and set the tag flag.

void _nfc_WriteableTagMessage(Windows.Networking.Proximity.ProximityDevice sender,
	Windows.Networking.Proximity.ProximityMessage msg)
{
  this.Dispatcher.InvokeAsync(() =>
  {
    this.deviceLabel.Content = "Nfc writeable tag device found";
    isTag = true;
  });
}

In the device arrived event handler, we just set the device label to indicate we found an active NFC device.

void _nfc_DeviceArrived(ProximityDevice sender)
{
  this.Dispatcher.InvokeAsync(() =>
  {
    this.deviceLabel.Content = "Nfc device found";
    isTag = false;
  });
}

Once we get the device departed event, we clear the device label and remove the tag flag just in case.

void _nfc_DeviceDeparted(ProximityDevice sender)
{
  this.Dispatcher.InvokeAsync(() =>
  {
    this.deviceLabel.Content = "No Device";
    isTag = false;
  });
}

Now that we have handled device arrival and departure, it’s time to take care of message publishing. To publish a message, we first need to store it in a buffer using a DataWriter instance. Next, we can decide if we are working with a tag so we can use the proper method.

void Nfc_Button_Click(object sender, RoutedEventArgs e)
{
  if (this.UrlInput.Text.Length == 0) return;
  using (var writer = new DataWriter { UnicodeEncoding =
    Windows.Storage.Streams.UnicodeEncoding.Utf16LE })
  {
    this.NfcButton.IsEnabled = false;
    this.StopNfcButton.IsEnabled = true;
    writer.WriteString(this.UrlInput.Text);
    var buff = writer.DetachBuffer();
    if (isTag)
      publishedMid = _nfc.PublishBinaryMessage("WindowsUri:WriteTag", buff,
				this.MessageTransmitted);
    else
      publishedMid = _nfc.PublishBinaryMessage("WindowsUri", buff,
				this.MessageTransmitted);

  }
}

Once we start publishing a message, it will be sent to any device that comes into proximity. To avoid that, we register for the message transmitted event, which we use to stop message publication.

void MessageTransmitted(ProximityDevice sender, long mId)
{
  _nfc.StopPublishingMessage(mId);
  this.Dispatcher.InvokeAsync(() =>
  {
      this.NfcButton.IsEnabled = true;
      this.StopNfcButton.IsEnabled = false;
  });
  publishedMid = -1;
}

We also let the user cancel message publication just in case we never connect to a suitable device.

void StopNfc_Button_Click(object sender, RoutedEventArgs e)
{
  if (publishedMid != -1)
  {
      _nfc.StopPublishingMessage(publishedMid);
      this.NfcButton.IsEnabled = true;
      this.StopNfcButton.IsEnabled = false;
  }
}
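The demo only publishes messages, but for context, a peer application could receive the published URI by subscribing for the same message type. A minimal sketch (the handler body is illustrative):

//Subscribe for WindowsUri messages published by a nearby device
long uriSubscriptionId = _nfc.SubscribeForMessage("WindowsUri",
  (ProximityDevice device, ProximityMessage message) =>
  {
    //DataAsString exposes the message payload as a string
    string receivedUri = message.DataAsString;
    //do something with the received URI
  });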

NFC in applications written using HTML5/WinJS

Because both applications use RT, they end up being almost identical. First, we need to add a proximity capability, then we get the default proximity device, as usual, and then register all necessary event listeners.

var nfc = Windows.Networking.Proximity.ProximityDevice.getDefault();
if (nfc) {
  nfc.addEventListener("devicearrived", function (device) {
    deviceLabel.innerText = "Nfc device found";
    NFC.IsTag = false;
  });
  nfc.addEventListener("devicedeparted", function (device) {
    deviceLabel.innerText = "No Device";
    NFC.IsTag = false;
  });
  nfc.subscribeForMessage("WriteableTag", function (device, msg) {
    deviceLabel.innerText = "Nfc writeable tag device found";
    NFC.IsTag = true;
  });
}
function MessageTransmitted(device, mid) {
  nfc.stopPublishingMessage(mid);
  publishButton.disabled = false;
  stopNfcButton.disabled = true;
  mId = -1;
}
stopNfcButton.disabled = true;
publishButton.addEventListener("click", function () {
  if (urlInput.value.length == 0) return;
  if (nfc) {
    publishButton.disabled = true;
    stopNfcButton.disabled = false;
    var dataWriter = new Windows.Storage.Streams.DataWriter();
    dataWriter.unicodeEncoding = Windows.Storage.Streams.UnicodeEncoding.utf16LE;
    dataWriter.writeString(urlInput.value);
    if (NFC.IsTag)
      mId = nfc.publishBinaryMessage("WindowsUri:WriteTag",
		dataWriter.detachBuffer(), MessageTransmitted);
    else
      mId = nfc.publishBinaryMessage("WindowsUri",
		dataWriter.detachBuffer(), MessageTransmitted);
  }
});
stopNfcButton.addEventListener("click", function () {
  if (mId != -1) {
    nfc.stopPublishingMessage(mId);
    publishButton.disabled = false;
    stopNfcButton.disabled = true;
    mId = -1;
  }
});

Accelerometer and Gyroscope

Accelerometer and gyroscope in applications written using WPF

Just like the other sensors, the code for handling both the accelerometer and gyroscope uses RT libraries.

The first thing we do is get the default sensor instance.

m_acc = Accelerometer.GetDefault();
m_gyro = Gyrometer.GetDefault();

As the demo relies on periodic sensor readings rather than reading-changed events, we use a timer instead of event handlers.

m_timer = new DispatcherTimer();
m_timer.Interval = new TimeSpan(0, 0, 0, 0, 45);
m_timer.Tick += m_timer_Tick;
m_timer.Start();

void m_timer_Tick(object sender, EventArgs e)
{
  move();
  Canvas.SetLeft(photo, m_photoLeft);
  Canvas.SetTop(photo, m_photoTop);
}

The move helper function simply reads the current sensor values and updates the photo’s left and top position.

private void move()
{
  double accX = 0.0;
  double accY = 0.0;
  double gyroX = 0.0;
  double gyroY = 0.0;

  if (m_acc != null)
  {
     AccelerometerReading r = m_acc.GetCurrentReading();
     accX = r.AccelerationX;
     accY = -r.AccelerationY;
  }

  if (m_gyro != null)
  {
    GyrometerReading r = m_gyro.GetCurrentReading();
    gyroX = r.AngularVelocityY/64.0;
    gyroY = -r.AngularVelocityX/64.0;
  }

  m_photoLeft += (m_d * accX) + (m_d * gyroX);
  m_photoTop += (m_d * accY) + (m_d * gyroY);

  if (m_photoLeft < 0.0) m_photoLeft = 0.0;
  else if (m_photoLeft > m_leftMax) m_photoLeft = m_leftMax;
  if (m_photoTop < 0.0) m_photoTop = 0.0;
  else if (m_photoTop > m_topMax) m_photoTop = m_topMax;
 }

Notice that we check to see if we have any sensor instances to work with, then simply multiply the current reading by a constant value and add the result to the current photo’s offset. The last step involves making sure that the photo remains inside the view’s scope.
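For example, if m_d were 10 pixels per unit (the actual value lives in the demo source), an acceleration reading of 0.5 g would shift the photo 5 pixels per tick, or roughly 110 pixels per second at the 45 ms timer interval used above.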

Accelerometer and gyroscope in applications written using HTML5/WinJS

As both demo apps use RT libraries to handle sensors, the only difference here is due to the different programming languages. As usual, the first thing we do is get the default sensor instances.

var m_acc = Windows.Devices.Sensors.Accelerometer.getDefault();
var m_gyro = Windows.Devices.Sensors.Gyrometer.getDefault();

Again, because we need readings at regular intervals, we choose a timer instead of reading-changed event handlers.

setInterval(frame, 45);

function frame() {
  move();
  var transform = new MSCSSMatrix().translate(m_photoLeft, m_photoTop);
  photo.style.transform = transform;
}

Again, we simply use the helper move function and adjust the photo’s render transformation.

function move() {
  var accX = 0.0;
  var accY = 0.0;
  var gyroX = 0.0;
  var gyroY = 0.0;

  if (m_acc != null) {
    var r = m_acc.getCurrentReading();
    accX = r.accelerationX;
    accY = -r.accelerationY;
  }

  if (m_gyro != null) {
    var r = m_gyro.getCurrentReading();
    gyroX = r.angularVelocityY/64.0;
    gyroY = -r.angularVelocityX/64.0;
  }

  m_photoLeft += (m_d * accX) + (m_d * gyroX);
  m_photoTop += (m_d * accY) + (m_d * gyroY);

  if (m_photoLeft < 0.0) m_photoLeft = 0.0;
  else if (m_photoLeft > m_leftMax) m_photoLeft = m_leftMax;
  if (m_photoTop < 0.0) m_photoTop = 0.0;
  else if (m_photoTop > m_topMax) m_photoTop = m_topMax;
}

With the language differences aside, the move function is identical to the one used in the WPF version.

Ambient Light Sensor

Ambient light sensor in applications written using WPF

As usual, the first thing we do is get the default sensor instance and register a reading-event handler.

m_lightSensor = LightSensor.GetDefault();
if (m_lightSensor != null)
{
    m_lightSensor.ReadingChanged += m_lightSensor_ReadingChanged;
    adjustLight(m_lightSensor.GetCurrentReading());
}

void m_lightSensor_ReadingChanged(LightSensor sender, LightSensorReadingChangedEventArgs args)
{
    this.Dispatcher.InvokeAsync(() =>
    {
      adjustLight(args.Reading);
    });
}

In the event handler, we just pass on the new reading to the adjustLight helper method, making sure it runs in the UI (user interface) thread as it makes some UI manipulations.

void adjustLight(LightSensorReading reading)
{
  double normalizedLux = Math.Log10(reading.IlluminanceInLux) / 5.0;
  this.lightOn.Opacity = normalizedLux;
}

In the adjustLight helper, we normalize the reading value and use the normalized value as the opacity of the light on the image.
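Because illuminance spans several orders of magnitude, the logarithm compresses the range: roughly 100,000 lux (direct sunlight) gives log10(100,000) / 5 = 1.0, fully opaque, while a dim room at about 100 lux gives log10(100) / 5 = 0.4.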

Ambient light sensor in applications written using HTML5/WinJS

Again, as the applications both use RT libraries, the mechanics are virtually identical. As always, we get the default instance and register a reading-changed event listener.

var lightSensor = Windows.Devices.Sensors.LightSensor.getDefault();
if (lightSensor) {
    lightSensor.addEventListener("readingchanged", function (e) {
      adjustLight(e.reading);
    });
  adjustLight(lightSensor.getCurrentReading());
}

Again, we use the adjustLight helper function to consume the reading. Notice there is no need to worry about the execution thread when working with WinJS applications.

function log10(val) {
  return Math.log(val) / Math.LN10;
}

function adjustLight(reading) {
  var normalizedLight = log10(reading.illuminanceInLux) / 5.0;
  lightOn.style.opacity = normalizedLight;
}

We normalize the reading value and set it as the light-on image’s opacity.

Compass

Compass in applications written using WPF

As with all sensors, we handle the compass using the RT libraries, first getting the default compass instance. We also get a simple orientation sensor instance, which we need in order to compensate for UI rotations caused by changes in the device's orientation.

m_compass = Compass.GetDefault();
if (m_compass != null)
  m_compass.ReadingChanged += m_compass_ReadingChanged;
m_orientationSensor = SimpleOrientationSensor.GetDefault();
if (m_orientationSensor != null)
{
  m_orientationSensor.OrientationChanged += m_orientationSensor_OrientationChanged;
  updateOrientation(m_orientationSensor.GetCurrentOrientation());
}

In addition to registering for the compass and orientation sensors’ reading-changed events, we make sure to pass the current orientation sensor reading in to the updateOrientation helper.

void updateOrientation(SimpleOrientation or)
{
  if (or == SimpleOrientation.NotRotated)
    orientationRotation = 0.0;
  else if (or == SimpleOrientation.Rotated90DegreesCounterclockwise)
    orientationRotation = 90.0;
  else if (or == SimpleOrientation.Rotated180DegreesCounterclockwise)
    orientationRotation = 180.0;
  else if (or == SimpleOrientation.Rotated270DegreesCounterclockwise)
    orientationRotation = 270.0;
}

In the updateOrientation helper, we just check the current orientation enum and set the orientationRotation variable.

void m_orientationSensor_OrientationChanged(SimpleOrientationSensor sender, SimpleOrientationSensorOrientationChangedEventArgs args)
{
    this.Dispatcher.InvokeAsync(() => {
        updateOrientation(args.Orientation);
        updateRotation();
    });
}

void m_compass_ReadingChanged(Compass sender,
  CompassReadingChangedEventArgs args)
{
  this.Dispatcher.InvokeAsync(() =>
  {
    if (args.Reading.HeadingTrueNorth.HasValue)
      headingRotation = -args.Reading.HeadingTrueNorth.Value;
    else
      headingRotation = -args.Reading.HeadingMagneticNorth;
    updateRotation();
  });
}

Notice that we run the event handler code inside a dispatcher invoke statement so that it executes on the UI thread. Both sensors' change events also call the updateRotation helper method, which applies the new transformation to the compass image.

void updateRotation()
{
  var matrix = new Matrix();
  matrix.RotateAt(headingRotation + orientationRotation,
    this.CompassImg.Width * 0.5, this.CompassImg.Height * 0.5);
  this.CompassImg.RenderTransform = new MatrixTransform(matrix);
}

Compass in applications written using HTML5/WinJS

The Windows Store app is almost identical to the WPF application. The main difference is there is no need to monitor UI orientation changes. Instead, we can configure the application to keep the user interface untransformed, regardless of the device's orientation. As always, we start by getting the default sensor instance.

var compassSensor = Windows.Devices.Sensors.Compass.getDefault();
if (compassSensor) {
  compassSensor.addEventListener("readingchanged", function (e) {
    var reading = e.reading;
    if (reading.headingTrueNorth != null
        && !isNaN(reading.headingTrueNorth))
      applyTransformation(-reading.headingTrueNorth);
    else
      applyTransformation(-reading.headingMagneticNorth);
  });
}

function applyTransformation(rotation) {
  var transform = new MSCSSMatrix().rotate(rotation);
  CompassImage.style.transform = transform;
}

The reading-changed event handler is almost identical to the one in the WPF application. We simply take the new reading and pass it to the applyTransformation helper, which constructs a new matrix transformation and applies it to the compass image element.

Closing

Ultrabook notebooks, convertibles, tablets, and similar current-day Windows devices have introduced a tremendous set of sensor capabilities. End users expect applications to take full advantage of their system’s features. Fortunately, it is not that difficult for developers to add intelligence to their applications so that they use the sensors available and respond meaningfully to those that are missing. Whether the application is developed using WPF or HTML5/WinJS, the APIs enable the developer to detect the sensors and use those present to their fullest extent. As this paper demonstrates, it is not all that tricky.

Any software source code reprinted in this document is furnished under a software license and may only be used or copied in accordance with the terms of that license.

Intel, the Intel logo, and Ultrabook are trademarks of Intel Corporation in the U.S. and/or other countries.
Copyright © 2013 Intel Corporation. All rights reserved.
*Other names and brands may be claimed as the property of others.

For more information about compiler optimizations, see the optimization notice.