By Benjamin A. Lieberman, Ph.D.
As infants, we know instinctively to reach out with our bodies to explore the new and exciting world around us. Our first fumbling attempts allow us to learn the advantages and limitations of our “built-in” pointing tools—our hands and fingers. Soon, however, we learn that to have a more effective hold on our world, we need other tools that provide greater precision than our fingers, such as pens for drawing and writing. A similar discovery (or perhaps a rediscovery) is now occurring in the mobile computing space with the introduction of touch-responsive screens and stylus-based control.
Although touch screens have been in use for decades, only with the recent explosion of hand-held mobile devices has touch truly gone global. However, just as we did when we were children, we have discovered that the finger makes for a poor precision instrument. And so we see the reintroduction of an old friend: the pen. In a mobile computing environment, though, our tools must adapt to our new needs for digital ink, and so the pen reverts to an earlier form—the stylus.
Intel® processor-based Ultrabook™ devices with a stylus are available on the market today, and software applications that use the stylus in appropriately creative ways will have a competitive advantage. A recent worldwide study by Dr. Daria Loi of Intel provides key insights into consumer behaviors with these technologies and user preferences for active stylus and effective stylus-centric app design.
Human Interaction with Our Environment
As most three-year-olds know, our fingers are useful for artistic expression. A nice, clean page and some finger paints allowed an infinite variety of Jackson Pollock–esque creations. The control, immediacy, and tactile feedback our fingers provide led to a rapid understanding of the correct amount of pressure to apply to the canvas, the different viscosities of the paints, and even how using different paints on individual digits made a really interesting series of parallel lines. Seemingly, the only drawback was that every line was exactly finger width, which made fine edges and shading a bit difficult.
As we got older, we began to refine tool use into specialized functions, one of which—the pencil—allowed us to learn written language. The pencil was simple to use: Place the pointed end on a piece of paper and make a mark. We all rapidly adapted ourselves to the single level of indirection that a writing instrument introduced. We didn’t make the mark with our finger, but instead used the much more accurate pencil. Artistic expression also gained much by this approach.
And so over time we came full circle, turning these tools into commonly encountered items that require little or no thought to use effectively. If we need to open a door, we use fingers. If we need to drop a note to the spouse, we use a pen. No additional training required.
Feeling Our Way Through a Mobile Computing World
Then we invented computing devices. Although there is no question that computers have revolutionized the way humans produce and consume information, for many years, these devices were expensive, complex, and difficult to use effectively. In the early days of computing, the only way for a human to communicate with a computer system was via the equivalent of a typewriter wired to an oscilloscope. Needless to say, this interface was not intuitive. Over time, we have begun to move back to our earlier, more familiar world view.
The mouse is still a popular method of control for computer systems, and software has been highly optimized for mouse use. However, a mouse has inherent problems as an input device. Detecting motion mechanically requires moving the mouse across a smooth surface, and even trackballs require extra desk space to operate. As devices grow smaller and users want to be unleashed from their desktops, requiring a mouse for control becomes a significant limitation for mobile devices. Something more basic is needed.
With the introduction of the touch-sensitive screen, we are able to take direct control over our tools by using nothing more than a finger. There is no question that for phones and tablets, finger-based input has found wide acceptance. But as we discovered earlier with the finger-painting example, it is difficult to have a precise interaction using only the blunt finger tip. Once again, the answer to the problem is the stylus. However, we have learned some lessons from earlier attempts at reintroducing a stylus—we need to change this simple tool to better fit a mobile computing environment. Effective stylus design, software integration, and industrial design factors are key to bringing the stylus back into mainstream usage. So, what design considerations will we need to take into account?
Effective Stylus Design for the Mobile Computing Device
In 2011, Dr. Daria Loi, user experience innovation manager in Intel’s PC Client Group, conducted a study on how users interact with touch-enabled clamshell devices running the Windows* 8 operating system. This research provided quite a few counterintuitive insights, such as the general acceptance of a touch screen deployed on a clamshell laptop computer. Based on this research, engineers at Intel were encouraged to move forward with a general release of touch screens integrated with standard Ultrabook devices, with much success.
Famously, Steve Jobs of Apple Computer claimed that the general public would reject a vertical touch screen because of the effort required to lift your hand and arm forward to the screen. This so-called “gorilla-arm” position was not observed in practice during the study. Instead, users rested their hands on the sides of the screen, with their elbows on the table surface or alongside their bodies. In some cases, the user would even rest one hand on the top of the screen and use the thumb to scroll the screen! So the arguments against touch interactions on a vertical screen do not seem to hold true based on direct observation. As Dr. Loi stated, “They basically told me, ‘Nobody’s obliging me to be on the mouse for 8 hours in a row. Nobody’s obliging me to lift my arm to touch the screen for 8 hours in a row. I am in charge. I do what I want. Here, you give me one extra option.’”
Early in 2012, Dr. Loi conducted another user study focused on Windows 8 usage on multiple form factors, some of which were equipped with a stylus in addition to the standard touch-enabled screen. She observed that the users had a different approach to controlling the Windows 8 software when a stylus was available. These observations sparked a series of follow-up research questions:
- How will users respond to the introduction of a stylus into the Ultrabook computing environment?
- What type of stylus technology would be best (e.g., passive or active)?
- Which elements of the operating system enhance or detract from the stylus user experience?
Figure 1: Tablet with an active stylus.
Given the discussion above, there is clear value in the use of a stylus in a computing environment, but how well will that translate into a laptop situation? What design considerations should be made to accommodate this modality?
Dr. Loi and team decided to pursue this question in a similar way to the 2011 study—that is, in multiple markets, prompting users to perform tasks with Ultrabook devices, and observing and recording their actions. This design method, based on direct interaction with systems instead of an indirect method such as a questionnaire or interview, greatly contributed to how senior executives responded when research results from the 2011 study on touch were shared. As Dr. Loi notes, “I found myself sitting in meetings with senior executives from different companies and literally seeing ‘aha’ moments on their faces. I would show them the research results, and then I had a five-minute video of users I interviewed telling what they thought about touch on a clamshell device.”
So after the 2011 touch study and the 2012 Windows 8 study, a new hands-on, international study was organized in three locations (the United States, the United Kingdom, and China), with a focus on stylus use. These locations were selected for specific reasons based on market and cultural differences in each location. For example, in China, the style of writing is different in both form and character construction—more like what would be considered calligraphy in the West. Therefore, they have a different response to using a stylus. “As a user, you really need to be able to use and try a device in practice. As a researcher, you need to be next to the person, observe his or her behavior, and ask questions based on what you observe. It’s really behavior- and observation-driven research, very practical,” says Dr. Loi.
The research approach was divided into two parts. The first part used a passive stylus, and the second part used active stylus technology. A passive stylus is one that interacts with a modern touch screen in much the same way as your finger—that is, via capacitance. This stylus form has a blunt tip and works with existing touch screens. By contrast, an active stylus requires that an extra physically responsive layer be added to the screen. This technology provides a stylus that is much more pressure sensitive and has a smaller, harder tip. Each of these stylus forms was seen to have advantages and drawbacks for the different user groups as they were prompted to execute a specific set of computing tasks. Users were provided with a varied set of interaction tools, including touch screens, active and passive styluses, and touch pads, and allowed to explore multiple input mechanisms. Users’ behaviors and choices were carefully recorded, with some surprising results (see Table 1).
Table 1. Summary of key findings from studies conducted by Dr. Loi
- When using a stylus, the palm was often held on the screen to provide support. Rejection of extraneous palm-touch events against the screen was therefore an issue.
- Passive stylus users applied more pressure than active stylus users. With a passive stylus, the extra pressure against the touch screen can tip the device.
- Users did not complain about lifting an arm to touch the screen. Occasional arm lifting and reaching is as acceptable as mouse use (which was also noted to cause discomfort).
- The active stylus was preferred over the passive stylus. The active, pressure-sensitive stylus was preferred for its accuracy of line and motion; however, some users liked the feel of the soft-tipped passive stylus.
- Users preferred multiple interaction options. Stylus, touch pad, touch screen, and mouse were all used interchangeably as the needs of the user dictated; personal preference was a strong motivator.
- Users liked the ability to take direct control over system behavior. They felt more in control of the device when provided with an active stylus and a touch-sensitive screen.
- Users strongly preferred personalization options. They enjoyed selecting a stylus that best fit their personal needs (weight, balance, surface finish, pointing tip, etc.).
- With touch interactions, users showed no hand preference. As opposed to typical mouse use (which tends to drive selecting one hand), touch users switched interacting hands freely.
- If a stylus is provided with the device, it must be integrated into the design. The stylus must be “garaged” in the body of the device.
- Different cultures respond in unique ways to the introduction of stylus technology. Cultural differences, such as reactions to the sound of an active stylus on a screen, have a dramatic effect on utilization and acceptance.
One key difference between touch-based interactions and stylus-based interactions was the innate tendency to brace the writing hand against the screen. This is much the same behavior you find when writing on paper: the fine motor skills required to hold and manipulate a pen accurately require the larger arm muscles to be relaxed. So stylus users attempting to write on a screen had to anchor their arm (elbow on the table surface or braced against the body) and their palm to use the stylus effectively. A screen that does not let users rest a palm against it thwarts this natural human tendency, leading to frustration and dissatisfaction. The consequence for a touch-sensitive screen is the absolute necessity to engineer palm-rejection algorithms into the hardware sensors.
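To make the palm-rejection requirement concrete, the following is a minimal, hypothetical sketch of the idea: real touch controllers combine contact size, shape, timing, and pen proximity, but even a simple size-and-shape heuristic illustrates how a driver can distinguish a resting palm from an intentional finger or stylus tip. All thresholds here are illustrative assumptions, not values from any actual sensor firmware.

```python
# Hypothetical palm-rejection heuristic: classify each touch contact by the
# size and elongation of its contact ellipse. Large or strongly elongated
# contacts (a resting palm, the side of the hand) are discarded; small,
# round contacts (fingertip or stylus tip) are kept.
# Thresholds are illustrative assumptions only.

import math
from dataclasses import dataclass

@dataclass
class Contact:
    x: float          # contact position in mm
    y: float
    major_mm: float   # major axis of the contact ellipse
    minor_mm: float   # minor axis of the contact ellipse

PALM_AREA_MM2 = 120.0     # assumed cutoff: anything larger reads as a palm
PALM_ECCENTRICITY = 2.5   # elongated blobs are usually the edge of the hand

def is_palm(c: Contact) -> bool:
    """Heuristic: reject contacts that are too large or too elongated."""
    area = math.pi * (c.major_mm / 2) * (c.minor_mm / 2)
    elongated = c.major_mm / max(c.minor_mm, 0.1) > PALM_ECCENTRICITY
    return area > PALM_AREA_MM2 or elongated

def filter_contacts(contacts):
    """Keep only contacts that look like intentional finger/stylus input."""
    return [c for c in contacts if not is_palm(c)]
```

A fingertip-sized contact (roughly 8 mm across) passes the filter, while a palm-sized blob (40 mm by 25 mm) is rejected, so a user can rest a hand on the screen while writing without spraying spurious touch events into the application.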
Another key finding was that acceptance of stylus-based input was driven by personal preference for the form factor. Users tended to be specific on certain physical aspects of the stylus, such as the surface finish, weight, and tip construction. For example, with the passive stylus, out of 15 different models, users tended toward just two types, including the Wacom Bamboo stylus, based on the finish and heft, which matched a high-quality standard pen. A link to a review of the top passive styluses can be found in the section, “For More Information.”
As Dr. Loi noted, “I was impressed by how specific they were with design recommendations around the stylus. They were very precise in articulating why they liked one versus the other and what they expected to be the ideal stylus. They were talking very specifically about weight balance, proportions, size, finishes, look, and feel. Many people also commented about different tips, to be able to interchange and add different kinds of tips.”
Along with storage preferences, many subjects strongly recommended integrating the stylus directly into the body of the device. They did not want a stylus that was easy to lose, nor did they want to carry a second device, as they must with a mouse. They also wanted a stylus that matched the design characteristics of the associated device: the stylus could not be an “afterthought” but had to be integral to the industrial design concept of the device. As Dr. Loi noted, “They really wanted something that has the same kind of elegance or quality or look and feel of the device that they choose to purchase.”
Along with these findings, additional surprises came out of the research. A strong sense of familiarity was expressed when using the stylus, as opposed to the learned behavior of a mouse. The direct, tactile sense that holding a pen and writing produces was a pleasant surprise to many, especially given that we have moved so far away from handwritten notes. Email, text messages, Tweets, ubiquitous cell phone coverage—all have combined to “depersonalize” our interactions with one another. The reintroduction of direct handwritten notes was seen to add a more human, personal touch to the communications. Technology users are looking for something both practical and expressive over which they have complete control.
Another unexpected finding was the importance of sound when using the stylus. The key here was that the passive stylus had a soft, broad tip—essentially like a tiny finger tip. The active stylus, by contrast, had a solid tip that is pressure sensitive. This means that there was a distinct “click” sound when users touched the stylus to the screen. In some locations, such as Europe and the United States, this was seen as a positive feedback that solid contact had been made. In other places, such as China, users expressed irritation over the sound and even concern that the tool was possibly damaging the screen. Clearly, such cultural differences must be considered when designing a stylus for general use.
So, why now? The stylus as a method of computer interaction has been around for decades. Why is there a resurgence in popularity for this age-old tool now? Well, partially it is because we are only now developing the necessary computing ecosystems in terms of touch-centric operating systems and stylus-enabled applications that will allow users the freedom to choose the most effective method of interaction. As the Chinese test groups noted, the previous types of stylus were those associated with old-style PDAs—thin, cheaply made, and easy to lose. This was considered outdated technology and therefore of no interest to them.
However, with the advent of sensitive, touch-enabled screens and the software to take effective advantage of finger-based control, the market is ready to accept stylus-enabled devices. With an active, pressure-sensitive stylus, adoption is poised to become as widespread as that of the touch-enabled handheld mobile computer (e.g., Apple iPhone*, Google Android*, and associated tablets). The analogy is similar to the upsurge in e-books—previous attempts failed because the marketplace was not yet ready. Now, it is. As Dr. Loi put it: “We were not ready technologically. We were not ready from a communication perspective, and we were not ready from an interaction perspective as a culture. This is why, now, we’ve got Windows 8, we’ve got touch screens that are all over the place, we’ve got millions of applications. It’s a different planet.”
The final set of observations centered on how application software responded to the presence of a stylus and how disappointed users were that they could not do all of the things you would expect when holding a pen. For example, touch-enabled operating systems, such as Apple iOS*, Android, or Windows 8, support navigation with a pen (e.g., button clicks, swipe moves) but do not directly support handwriting or character recognition. In fact, the version of Microsoft Office 2013 used in the testing was partly touch enabled, but users were surprised that they couldn’t just write on the screen with the stylus as they expected to. This disconnect between expected and actual system function is a major limiting factor on acceptance of stylus technology. As Dr. Loi noted, “They would look at me and say, ‘Why doesn’t it work?’ And I would say, ‘Well, it hasn’t been implemented. You can’t quite do that.’ They were like, ‘Why?’ Which is a very, very good question: Why?”
Recommendations for industrial design of stylus-enabled devices included:
- To use a stylus effectively on a flat surface—vertical or horizontal—it is necessary to engineer palm-rejection algorithms into the hardware touch sensors.
- An active stylus was preferred over passive for its accuracy and responsiveness, but users liked the tactile sensation that the softer tip of a passive stylus against the screen provided.
- Users strongly prefer multiple interaction options and the ability to move freely between all forms of touch, stylus, or mouse-based navigation.
- A sense of ownership and direct control over the computer was a strong motivator in adoption of the stylus.
- Personalization—the ability to select different materials, finishes, and pointing tips—was also shown to be a driving factor in adoption of the stylus.
- The stylus must be directly integrated with the body of the device and match the look and feel of that device.
- Unexpected user cultural differences, such as response to sound, will have a dramatic effect on the acceptance of stylus technology.
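The accuracy and responsiveness that made the active, pressure-sensitive stylus the preferred option ultimately have to be surfaced by application software. As a minimal sketch of what that means in practice—assuming the platform delivers a normalized pressure value between 0.0 and 1.0, and with an illustrative width range and response curve that are not taken from any particular ink API—a drawing application might map pen pressure to stroke width like this:

```python
# Illustrative mapping from normalized stylus pressure (0.0-1.0) to stroke
# width in pixels. A gamma curve below 1.0 boosts the low-pressure response
# so that light touches still produce a visible, pen-like line.
# The width range and gamma value are assumptions for illustration.

MIN_WIDTH_PX = 1.0
MAX_WIDTH_PX = 6.0
GAMMA = 0.6   # < 1.0 makes light strokes register more readily

def stroke_width(pressure: float) -> float:
    """Clamp pressure to [0, 1], then map it to a stroke width in pixels."""
    p = min(max(pressure, 0.0), 1.0)
    return MIN_WIDTH_PX + (MAX_WIDTH_PX - MIN_WIDTH_PX) * (p ** GAMMA)
```

A passive stylus on a plain capacitive screen cannot feed this function anything but a constant, which is exactly why study participants found active-stylus lines more expressive: pressure variation carries directly into the rendered stroke.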
Human Interaction with Our Computing Devices
As this article has shown, in an increasingly mobile computing environment, there is a need for better control over our devices. The rediscovery of the stylus as a pointing device provides fine-grained, direct control over a touch-enabled device. Contrary to some opinions in the industry, this study showed that not only were people accepting of a stylus-based input device, but they actively preferred it for certain applications. The attitudes that prevented earlier adoption are giving way to innovation, such as the introduction of tactile (haptic) feedback.
More and more, computer users are looking for the seamless incorporation of all forms of system interaction, from direct screen touch to stylus, keyboard, mouse, and touch pad. Each form offers advantages to the user, and as the computing world meshes more deeply with the real world, this approach allows enhanced control over both.
One of the biggest hurdles to overcome is the current lack of touch- and stylus-enabled software. Application developers are lagging behind the technological advances, with user dissatisfaction the result. Fortunately, a number of application developers have recognized this need and have begun to direct attention toward stylus support, refocusing data input on handwriting recognition. Applications such as Penultimate*, Evernote*, and Springpad* all accept handwriting directly into the application, with the ability to convert written text to digital text. Going forward, application developers will improve on the past and find novel ways to use the stylus, leading to wider adoption in the marketplace.
Design teams should be highly encouraged by the results of this study. It is clear that direct interaction with users is the best way to learn about how a technology will be used in practice. System users are not one homogeneous group. They are all individuals. They want personalized control over their technology. The technology must adapt to them, not they to the technology.
As Dr. Loi noted, “People want excitement. They want passion. They want the right thing, and that’s what we should do. The only way is for [the development community] to be exposed to the reality of everyday users.”
For More Information
- Matthew Baxter-Reynolds. (July 2012). “The Human Touch: Building Ultrabook Applications in a Post-PC Age.” Intel Research Article.
- Seamus Bellamy. (May 18, 2012). “Roundup: The Best Stylus for iPad and Android Tablets.” TabTimes. http://tabtimes.com/review/ittech-accessories/2012/05/18/roundup-best-stylus-ipad-or-android-tablets.
- Min Lin, Kathleen J. Price, Rich Goldman, & Andrew Sears. (2005). “Tapping on the Move: Fitts’ Law Under Mobile Conditions.” Managing Modern Organizations Through Information Technology, Proceedings of the 2005 Information Resources Mgmt. Assoc. Internat. Conf. Idea Group, Inc.
- Koji Yatani & Khai N. Truong. (2009). “An Evaluation of Stylus-based Text Entry Methods on Handheld Devices Studied Under Different User Mobility States.” Pervasive and Mobile Computing, 5, pp. 496–508.
- Suranjit Adhikari. (2012). “Haptic Device for Position Detection,” U.S. Patent Application, Pub. No. US2012/0293464. Submitted November 22, 2012.
- M.R. Davis & T.O. Ellis (1964). “The RAND Tablet—A Man Machine Graphical Communication Device.” Memorandum to the Advanced Research Program Agency, U.S. Department of Defense.
About the Author
Ben Lieberman holds a Ph.D. in biophysics and genetics from the University of Colorado, Health Sciences Center. Dr. Lieberman serves as principal architect for BioLogic Software Consulting, bringing more than 15 years of software architecture and IT experience in various fields, including telecommunications, airline travel, e-commerce, government, financial services, and the life sciences. Dr. Lieberman bases his consulting services on the best practices of software development, with specialization in object-oriented architectures and distributed computing—in particular, Java*-based systems and distributed website development, XML/XSLT, Perl, and C++-based client–server systems. Dr. Lieberman has provided architectural services to many corporate organizations, including Comcast, Cricket, EchoStar, Jones Cyber Solutions, Blueprint Technologies, Trip Network Inc., and Cendant Corp.; educational institutions, including Duke University and the University of Colorado; and governmental agencies, including the U.S. Department of Labor, Mine Safety and Health Administration and the U.S. Department of Defense, Military Health Service. He is also an accomplished professional writer with a book (The Art of Software Modeling, Auerbach Publications, 2007), numerous software-related articles, and a series of IBM corporate technology newsletters to his credit.
Intel, the Intel logo, and Ultrabook are trademarks of Intel Corporation in the US and/or other countries.
Copyright © 2013 Intel Corporation. All rights reserved.
*Other names and brands may be claimed as the property of others.