We’ve heard quite a bit about touch design principles from the always eloquent Luke W (you can catch up on the first two installments of his video series here and here). In this third video, Luke covers common touch gestures and how developers can build discoverability into their apps so users can actually find these touch features.
Touch simply adds to input options
In the first two touch design videos, we were given a thorough overview of touch gestures: what they are and how developers can incorporate them into their apps. In this third video, Luke takes a detailed look at gestures and discoverability. Put simply, gestures are how people get things done on a touch interface. Combined with a keyboard and a mouse (as in the next generation of Ultrabooks), they add to our input options and give us more ways to get things done on our computers.
Commonly used touch gestures
Different gestures are available for touchscreens across several operating systems, with reasonable consistency of gesture types across platforms (which is very convenient for any developer looking to add touch to their apps).
If you’re a developer wanting to optimize a desktop application for touch, it’s a good idea to familiarize yourself with the most commonly used gestures across multiple platforms: tap, double-tap, press, swipe, flick, pinch, and rotate.
In his study of touch gestures across different platforms (not just the Ultrabook), Luke found that most operating systems implement these common touch gestures (with the exception of rotate) quite consistently. It’s likely, as input controls continue to evolve, that people will be using touch across more than one operating system or platform. And as more devices support a touch-optimized experience, people will naturally reach for touch more often.
Knowing the basic touch gestures is just half the battle. The next step is to figure out how these gestures align with what people are trying to do on their computers. In a survey of several touch-enabled operating systems, Luke found that most people used touch gestures in three broad areas of computing: navigation, manipulation of objects, and management of interfaces. Using his social media application Tweester (see the previous videos for an early discussion of this storm-chaser application), Luke shows us a few common gestures and how they work in the wild.
There are basic touch gestures used for system tasks. For example, the “press” gesture can be used to change between modes and bring up contextual information. The “double-tap” gesture is used to open items and to zoom in and out.
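To make the distinction between these system gestures concrete, here is a minimal sketch of how an app might classify raw pointer-down/up timing into tap, double-tap, and press. The threshold values and class names are invented for illustration; real thresholds come from each OS's touch guidelines.

```python
# Hypothetical thresholds -- real values come from the OS touch documentation.
PRESS_HOLD_SECONDS = 0.5   # held longer than this => "press" (press-and-hold)
DOUBLE_TAP_WINDOW = 0.3    # second tap within this window => "double-tap"

class TapClassifier:
    """Toy classifier mapping pointer-down/up timing onto the system
    gestures described above: tap, double-tap, and press."""

    def __init__(self):
        self._down_at = None
        self._last_tap_at = None

    def pointer_down(self, now):
        self._down_at = now

    def pointer_up(self, now):
        held = now - self._down_at
        self._down_at = None
        if held >= PRESS_HOLD_SECONDS:
            return "press"        # e.g. switch modes / show contextual info
        if (self._last_tap_at is not None
                and now - self._last_tap_at <= DOUBLE_TAP_WINDOW):
            self._last_tap_at = None
            return "double-tap"   # e.g. open an item or zoom in/out
        self._last_tap_at = now
        return "tap"
```

In practice the platform's gesture recognizer does this work for you; the sketch only illustrates why timing thresholds make these gestures feel consistent across apps.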
Each operating system sets its own rules, so developers would do well to familiarize themselves with what these might be for whatever OS they are developing for.
Touch gestures can easily manipulate objects. The “swiping” gesture commonly brings up the option to delete something or move it to another area of the screen. In Windows-specific touch language, swiping a short distance can select an object in a list and allow you to take action on it.
Touch gestures may align most naturally with navigation, as we intuitively move between programs. Scrolling with one or two fingers can be much easier than using a mouse, and the “flick” gesture is great for really fast scrolling and manipulation. Luke mentions that Microsoft’s touch documentation suggests using the “pinch” gesture to stretch lists of content as well as images, and he also cautions developers about where they place app controls, since some placements can conflict with system-level touch interactions, making for a very frustrating end-user experience.
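The pinch gesture mentioned above boils down to simple geometry: the zoom factor is the ratio of the current finger spread to the starting spread. This is a sketch of that usual math, not any particular OS API; the function name and point format are assumptions.

```python
import math

def pinch_scale(p1_start, p2_start, p1_now, p2_now):
    """Return the zoom factor implied by a two-finger pinch.

    Points are (x, y) tuples. A result > 1 means the fingers spread
    apart (zoom in / stretch); < 1 means they pinched together
    (zoom out)."""
    start_spread = math.dist(p1_start, p2_start)
    current_spread = math.dist(p1_now, p2_now)
    return current_spread / start_spread
```

For example, two fingers that start 100 px apart and end 200 px apart imply a 2x zoom.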
Adapting an existing desktop app to take advantage of Ultrabook touch capabilities
With a touch-optimized Ultrabook, users have many choices when it comes to input controls: the mouse, the keyboard, a touchpad, up and down keys, and touch. A well-designed app lets users decide which kind of input is most convenient for them. Support each option equally well in your application and users will naturally gravitate to whatever works best for their unique needs.
If we think about touch gestures the way we do keyboard shortcuts, we can start to see where touch could enhance an app’s existing design, or even offer another way to access common actions or information. For example, what would the touch equivalent of CTRL+P (the keyboard shortcut for print) look like? Of course, it’s crucial to always maintain non-touch support: touch should never be the only way to accomplish something, and people always need an alternative.
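One way to keep touch as an addition rather than a replacement is to route every input path through the same action registry, so a keyboard shortcut, a menu item, and a gesture all reach the same command. This is a minimal sketch under assumed names; the bindings (including the gesture chosen for print) are made up for illustration.

```python
# Hypothetical action registry: each action stays reachable from several
# input paths, so touch augments the mouse and keyboard instead of
# replacing them.

actions = {}

def register(action_name, handler):
    actions[action_name] = handler

# Several bindings, one action. The gesture binding is an invented example.
bindings = {
    ("key", "ctrl+p"): "print",
    ("menu", "file/print"): "print",
    ("gesture", "two-finger-tap"): "print",
}

def dispatch(source, event):
    """Look up the bound action for an input event and run it."""
    name = bindings.get((source, event))
    if name is None:
        return None
    return actions[name]()

register("print", lambda: "printing...")
```

The design point is that the gesture is just another row in the bindings table: removing it (or the keyboard shortcut) never strands the user.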
How to help people discover touch within an app
One of the most intriguing topics in this third video is discoverability. Touch gestures are implemented without on-screen control prompts (arrows, icons, etc.). They are, quite literally, invisible. How can we help people discover the touch options available to them within programs? There’s no visible interface element saying “hey, click on me and I’ll help you do something!”
There are actually several ways to make touch-based interactions within an app more discoverable. First, remove other options: when in doubt, people will try things that seem like they might work. Give them immediate feedback on their actions, letting content follow their fingers and aligning with natural motions. People will guess at what they need to do, and most of the time their intuition will serve them correctly.
Developers can also integrate intuitive visual cues. In a full-screen image gallery, for example, faded edges or an animation that peeks at additional images can signal that there’s more to explore. Here again, Luke stresses the need to consult the operating system’s touch documentation: certain gestures (like swiping) might invoke app controls rather than advance a gallery slideshow. It’s best to stick with the OS standard for touch design and build the app accordingly.
Another way to introduce touch gestures within an app is “just in time” educational pop-ups. Basically, these are text cues that are triggered by a user’s action. A call-out that tells a user that there’s a touch action possible is a very user-friendly way to make touch more accessible to those who haven’t used it before, as well as remind seasoned users that there’s another way to accomplish a task.
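A “just in time” call-out can be as simple as tracking which hints have already been shown and surfacing each one only the first time the relevant action happens without touch. This sketch invents the class, storage, and message text for illustration:

```python
# Sketch of "just in time" touch education: the first time a user performs
# an action via mouse or keyboard that also has a touch equivalent, show a
# one-time call-out; afterwards, stay quiet. All names are illustrative.

class HintManager:
    def __init__(self):
        self._shown = set()   # actions whose hint has already appeared

    def on_action(self, action, via):
        """Return a hint string on the first non-touch use of `action`,
        otherwise None."""
        if via != "touch" and action not in self._shown:
            self._shown.add(action)
            return f"Tip: you can also {action} with a touch gesture."
        return None
```

Triggering the hint off the user’s own action (rather than a tutorial at startup) is what makes it “just in time”: the cue appears exactly when it is relevant.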
Gestures and discoverability – the bottom line
Developers should rely on the common gestures supported by all operating systems when building touch-enabled applications. However, each OS is different, so it’s imperative to familiarize yourself with its specifics so there are no unpleasant surprises later on. Luke encourages developers not to be afraid to experiment, since touch design is still in its early stages and there is plenty of room for innovation. Finally, be mindful of ways to help consumers actually discover what touch has to offer within their favorite desktop apps redesigned for touch, and help them find it through content teases and visual design cues.