Intro to Touch on Ultrabook: Touch Messages from the Windows 7 interface

Last time, I started looking at the road to Touch on Ultrabook. Here, we'll look at the Touch messages from the Windows 7 interface.

How does Touch work on Windows 7 (and Windows 8 desktop)?
Let’s look at the Windows 7 interface first.

First, you’ll want to understand your touch device at startup. Use the GetSystemMetrics call with SM_DIGITIZER to check whether you have a touch device and to learn more details about it. The call reports whether the device uses touch or pen, whether it’s integrated or external, and whether it supports multi-touch.
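A minimal startup probe might look like the sketch below (assumes windows.h; the NID_* bit flags and SM_MAXIMUMTOUCHES are documented Win32 values):

```cpp
#include <windows.h>

void ProbeTouchHardware()
{
    int digitizer = GetSystemMetrics(SM_DIGITIZER);

    // Touch vs. pen, and integrated vs. external, are separate bits.
    bool hasTouch     = (digitizer & (NID_INTEGRATED_TOUCH | NID_EXTERNAL_TOUCH)) != 0;
    bool hasPen       = (digitizer & (NID_INTEGRATED_PEN   | NID_EXTERNAL_PEN))   != 0;
    bool isMultiTouch = (digitizer & NID_MULTI_INPUT) != 0;
    bool isReady      = (digitizer & NID_READY) != 0;

    if (isMultiTouch)
    {
        // How many simultaneous contacts the hardware supports.
        int maxTouches = GetSystemMetrics(SM_MAXIMUMTOUCHES);
    }
}
```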

Now that you know a few things about your input device, it’s time to decide which messages you’ll use. As with other kinds of input devices, input from touch devices arrives in your app as Windows messages. Simple touch contacts are turned into legacy messages, like the WM_LBUTTONDOWN message for a single tap. More complex touch contacts and gestures are sent to your app as either WM_TOUCH or WM_GESTURE messages. You must pick which one your app will use; WM_GESTURE is the default, but you may switch to WM_TOUCH by calling RegisterTouchWindow for each window.

The two messages let you receive touch input in different ways. If you use WM_TOUCH, you’ll get separate touch messages: one for every point of contact, continued contact, movement, hover (if your hardware supports it), and removal of contact. With WM_GESTURE, the separate touch events are put together for you, so you only get messages for whole gestures. Use WM_TOUCH when you need the most control over your input or want to use custom gestures; otherwise use WM_GESTURE for simpler coding.

Normally, you won’t need to know whether a message came from mouse, touch, or pen input. If you do, you can check the result of the GetMessageExtraInfo call when you’re examining a message. Mask its return value with 0xFFFFFF00. The message came from pen or touch input if the result equals 0xFF515700; otherwise it came from the mouse.
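The check is simple enough to factor into a small helper; here it’s written as a pure function over the extra-info value so the bit logic is clear (in a real handler you’d pass in GetMessageExtraInfo()):

```cpp
#include <cstdint>

// 0xFF515700 is the documented pen/touch signature; the low byte varies,
// so mask it off with 0xFFFFFF00 before comparing.
constexpr bool IsFromTouchOrPen(std::intptr_t extraInfo)
{
    return (extraInfo & 0xFFFFFF00) == 0xFF515700;
}
```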

WM_TOUCH basics
If you’re going to use WM_TOUCH, enable it for each of your windows with RegisterTouchWindow (your window will now stop receiving any WM_GESTURE messages). Normally, you won’t use any special arguments for the call, but you can change its behavior if you need to.
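For example (a sketch; hWnd is assumed to be your top-level window handle):

```cpp
// Opt this window into WM_TOUCH. Passing 0 for the flags gives the default
// behavior; TWF_WANTPALM disables palm-rejection delays for lower latency.
if (!RegisterTouchWindow(hWnd, 0))
{
    // Registration failed; fall back to WM_GESTURE handling.
}
```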

Add an entry to your WndProc to catch WM_TOUCH messages. If you fully handle a message, close its input handle with CloseTouchInputHandle; if you don’t handle it, forward it to DefWindowProc, which cleans up the handle for you.

You’ll receive sequences of WM_TOUCH messages that you can combine to form gestures. When a WM_TOUCH message arrives, check the low word of wParam to find the number of contact points, and call GetTouchInputInfo to fill an array of TOUCHINPUT structs, one per contact. In each struct, you’ll find important details like the physical screen coordinates, a touch ID (so you can keep track of which finger this is between contacts), and dwFlags to show whether it’s a touch down/up/move/etc. (e.g. TOUCHEVENTF_DOWN). If there are multiple touches, you can also check which contact was the first, or primary, with TOUCHEVENTF_PRIMARY.

You’ll need to decide what to do with each set of touch contacts. Earlier, you figured out which gestures your app will use; here you need to write the code for them. A gesture begins with a TOUCHEVENTF_DOWN and ends with a TOUCHEVENTF_UP, perhaps with a series of TOUCHEVENTF_MOVE events in between. At the very least, you'll need to keep track of how each contact changes over time; you'll use the touch ID to correlate contacts across multiple messages.
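Putting these pieces together, a WM_TOUCH handler might look like the sketch below. OnContactDown/Move/Up are hypothetical helpers you’d supply; the window is assumed to have been registered with RegisterTouchWindow:

```cpp
case WM_TOUCH:
{
    UINT numInputs = LOWORD(wParam);            // number of contact points
    std::vector<TOUCHINPUT> inputs(numInputs);

    if (GetTouchInputInfo((HTOUCHINPUT)lParam, numInputs,
                          inputs.data(), sizeof(TOUCHINPUT)))
    {
        for (const TOUCHINPUT& ti : inputs)
        {
            // Coordinates arrive in hundredths of a screen pixel.
            POINT pt = { ti.x / 100, ti.y / 100 };
            ScreenToClient(hWnd, &pt);

            if (ti.dwFlags & TOUCHEVENTF_DOWN)
                OnContactDown(ti.dwID, pt);     // dwID identifies the finger
            else if (ti.dwFlags & TOUCHEVENTF_MOVE)
                OnContactMove(ti.dwID, pt);
            else if (ti.dwFlags & TOUCHEVENTF_UP)
                OnContactUp(ti.dwID, pt);
        }
        CloseTouchInputHandle((HTOUCHINPUT)lParam);
        return 0;
    }
    break;  // couldn't read the input; let DefWindowProc clean up
}
```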

Using WM_TOUCH can get pretty complex, but it gives you fine-grained control over how gestures are recognized. Next, let’s examine the same kind of code when Windows recognizes the gestures for you.

WM_GESTURE basics
If you use WM_GESTURE, congratulations! It’s enabled by default, so no registration call is needed.

Pick the gestures that your app should recognize, and set them via SetGestureConfig. By default, most gestures are enabled, but you must add any that aren’t (e.g. rotate). To add them, set the dwWant and dwID fields of the GESTURECONFIG structs that you’ll pass to SetGestureConfig. Add an entry to your WndProc to catch WM_GESTURE messages, and make sure you forward the message to DefWindowProc when you’re done with it.
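As a sketch, the configuration call might look like this (hWnd is assumed to be your window; GC_ALLGESTURES is the catch-all alternative if you want everything):

```cpp
// Enable zoom and pan (on by default) plus rotate (off by default).
GESTURECONFIG config[] = {
    { GID_ZOOM,   GC_ZOOM,   0 },
    { GID_PAN,    GC_PAN,    0 },
    { GID_ROTATE, GC_ROTATE, 0 },   // rotate must be requested explicitly
};
SetGestureConfig(hWnd, 0, ARRAYSIZE(config), config, sizeof(GESTURECONFIG));
```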

Within your message handler for WM_GESTURE, check the dwID field to see which gesture is active. Write your own code to handle the messages you want, like zoom or pan. You’ll examine the start and end points of the gestures, and scale the effect by its size.
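A handler along those lines might look like the sketch below; HandleZoom and HandlePan are hypothetical helpers you’d write:

```cpp
case WM_GESTURE:
{
    GESTUREINFO gi = { 0 };
    gi.cbSize = sizeof(GESTUREINFO);

    if (GetGestureInfo((HGESTUREINFO)lParam, &gi))
    {
        switch (gi.dwID)
        {
        case GID_ZOOM:
            // ullArguments carries the distance between the two contacts;
            // compare it across messages to compute the zoom factor.
            HandleZoom(LODWORD(gi.ullArguments), gi.ptsLocation);
            CloseGestureInfoHandle((HGESTUREINFO)lParam);
            return 0;
        case GID_PAN:
            HandlePan(gi.ptsLocation);
            CloseGestureInfoHandle((HGESTUREINFO)lParam);
            return 0;
        }
    }
    break;  // unhandled gestures fall through to DefWindowProc
}
```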

Simple events that don’t translate into full gestures (e.g. a single-finger tap) arrive as legacy mouse messages like WM_LBUTTONDOWN instead of gestures.

If you want to see a full code sample using WM_GESTURE, check out this Windows 7 Touch sample.

Next time
In the next post, I'll look at Windows 8 support for touch via the WinRT interface, for apps that run in the Windows 8 user interface.
