Android* Touch on Ice Cream Sandwich* (ICS)

Objective

This article delves into the APIs used for touch in Android* ICS. Several code snippets are provided. The tutorial starts with single-pointer (finger) touch detection, moves on to multi-touch detection, and then mentions the more advanced process of defining and detecting gestures, which is left as an exercise for the reader. The code samples use API level 15 (Android 4.0.3), which was current at the time of publication. In this paper, "pointer" and "finger" are used interchangeably since, from an Android code perspective, a "finger" is a type of "pointer device."

 

1. Introduction

Android provides several APIs for touch, depending on the usage model. The implementation can be as simple as detecting a single finger press or as fancy as defining a custom touch gesture, such as drawing a circle with a finger. These APIs give developers great flexibility: the standard touch detection APIs cover the common cases, while custom gesture recognition supports more creative usage models.

This paper covers the wide spectrum of Android touch capabilities. It starts with a very basic code example: single-touch detection. In this paper, single-touch refers to touch performed with one "pointer" (assumed to be a finger). Multi-touch refers to the use of two fingers; Android supports three or more, but they are outside the scope of the code exercises.

This paper is intended to give developers a solid head start at enabling touch in Android via the respective APIs. It also provides references on gesture creation, allowing developers to be creative and have fun! Gesture creation details, however, are outside the scope of this paper due to length and complexity.

 

2. Overview of the Touch APIs

The following links are useful for the code examples that follow in this guide.

Android Developers page for Touch class definition:

http://developer.android.com/reference/android/text/method/Touch.html

Handling input events:

http://developer.android.com/guide/topics/ui/ui-events.html

A nice blog pertaining to multi-touch:

http://android-developers.blogspot.com/2010/06/making-sense-of-multitouch.html

A nifty guide for creating a custom gesture library and detecting the gestures (left as exercise):

http://developer.android.com/training/custom-views/making-interactive.html

Some touch detection methods are now briefly discussed, and code examples follow in the next sections. Specifically, the following APIs will be discussed:

onTouchEvent. Invoked whenever a View (for example, a text box) is touched. This requires extending the View class and overriding onTouchEvent.

onTouch. Like onTouchEvent, onTouch is invoked when a View is touched. The difference is that instead of extending the View class, an onTouch event listener is registered on the View and defined for when the event occurs.

MotionEvent. This class provides some APIs that can be used for single- and multi-touch detection.

As an illustration, a sample package, ASimplePhotoViewer, was created to demonstrate the use of the various touch APIs. The app provides an on-screen slider that lets the user adjust screen brightness "on the fly," and it also supports two-finger rotation of the background photo. Note that only code snippets are provided.

 

3. Single-Touch Code Examples

3a. OnTouchEvent

The onTouchEvent API is used by extending the View class and then intercepting touch events within the custom view through method overriding. First, this paper discusses how to create a custom View class.

Assuming the package name is simply aspv.pkg with the main Activity file named ASPV.java, create a new Java* source file for the custom view, as follows:

Figure 3.a.1: Custom View File
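A minimal sketch of such a file (the file name ASPVView.java and class name ASPVView are illustrative choices, not taken from the original figure):

    // ASPVView.java -- custom view for the ASimplePhotoViewer sample
    package aspv.pkg;

    import android.content.Context;
    import android.util.AttributeSet;
    import android.view.MotionEvent;
    import android.view.View;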

The Java class in the new file should extend View since a custom View is being defined. The following code snippet accomplishes this:

Figure 3.a.2: Extending the View Class
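For instance, continuing with the hypothetical ASPVView name:

    public class ASPVView extends View {
        // Touch handling and callbacks to the parent Activity go here.
    }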

The class constructor for the custom view should have a context passed in, and in this code example, a reference to the parent Activity (ASPV) is provided:

Figure 3.a.3: Custom View Class Constructor
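A sketch of the constructor. Because this view will be inflated from an XML layout (Figures 3.a.5 and 3.a.6), the two-argument form that LayoutInflater invokes is shown, with the inflating Activity (ASPV) arriving as the context:

    public ASPVView(Context context, AttributeSet attrs) {
        super(context, attrs);
        // 'context' is the ASPV Activity that inflated this view.
    }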

Callbacks can be made to the parent class after touch events are handled in the custom view. Simply create a parent object in the custom view file and initialize it in the corresponding constructor method:

Figure 3.a.4: Parent Reference
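For example (the field name 'parent' is illustrative):

    private ASPV parent;  // reference back to the parent Activity for callbacks

    public ASPVView(Context context, AttributeSet attrs) {
        super(context, attrs);
        parent = (ASPV) context;  // assumes the ASPV Activity is the inflating context
    }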

Create an XML layout file dedicated to the custom view:

Figure 3.a.5: Custom View XML
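A sketch of such a layout (hypothetically res/layout/custom_view.xml), whose root element is the fully qualified custom view class:

    <?xml version="1.0" encoding="utf-8"?>
    <aspv.pkg.ASPVView
        xmlns:android="http://schemas.android.com/apk/res/android"
        android:layout_width="match_parent"
        android:layout_height="match_parent" />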

To use the new XML layout, the developer must "inflate" it in the ASPV activity class and set the current content to the new view, as follows:

Figure 3.a.6: Inflating a Custom View
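A sketch of the Activity side, assuming the layout file from Figure 3.a.5 is named custom_view.xml:

    // In ASPV.java, the main Activity
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // Inflate res/layout/custom_view.xml and make the custom
        // view the content of this Activity.
        setContentView(R.layout.custom_view);
    }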

The previous steps provide the basics for defining a custom view. At this point, onTouchEvent method overriding is defined in the custom view class:

Figure 3.a.7: Overriding onTouchEvent
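The override has this shape:

    @Override
    public boolean onTouchEvent(MotionEvent event) {
        // Touch handling goes here (see Figure 3.a.8).
        return true;  // true = this view consumed the event
    }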

MotionEvent provides important state information about the touch event, such as whether a finger pressed down, moved, or was lifted. The developer can handle touch using code like the following:

Figure 3.a.8: Handling onTouchEvent using MotionEvent Parameter
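A minimal sketch of single-touch handling (the parent callback shown is hypothetical):

    @Override
    public boolean onTouchEvent(MotionEvent event) {
        switch (event.getAction()) {
            case MotionEvent.ACTION_DOWN:
                // Finger pressed; getX()/getY() give its location.
                parent.onViewTouched(event.getX(), event.getY());  // hypothetical callback
                break;
            case MotionEvent.ACTION_MOVE:
                // Finger dragged across the screen.
                break;
            case MotionEvent.ACTION_UP:
                // Finger lifted.
                break;
        }
        return true;  // consume the event
    }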

IMPORTANT! Note that onTouchEvent returns a boolean, which indicates whether the custom view consumed the touch event. Code that handles the event should return true so that the view continues to receive the follow-up events in the gesture.

 

Also, note that when using event.getX() and event.getY(), the developer can apply a tolerance value to determine whether the user pressed close enough to some viewable object. For example, given the object's coordinates, compare the absolute difference between each touch coordinate and the corresponding object coordinate against the tolerance; if both differences fall within it, the touch counts as a hit.
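A minimal sketch of such a hit test (the tolerance value and method name are illustrative):

    private static final float TOLERANCE = 40f;  // pixels; tune for the UI

    // Returns true if the touch landed within TOLERANCE pixels of a
    // viewable object centered at (objX, objY).
    private boolean isCloseEnough(float touchX, float touchY,
                                  float objX, float objY) {
        return Math.abs(touchX - objX) <= TOLERANCE
                && Math.abs(touchY - objY) <= TOLERANCE;
    }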

3b. OnTouch

Using onTouch is an alternative approach to touch detection. For a viewable object, register its onTouch listener so that when the user touches the screen within the bounds of the object, its onTouch method is invoked.

In this sample, the viewable object used will be the actual View handle associated with the main Activity. This can be retrieved as follows in the ASPV Activity class:

Figure 3.b.1: Retrieving the View Associated with Main Activity
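One way to do this (android.R.id.content is the root of the content the Activity set):

    // In ASPV's onCreate(), after setContentView():
    View this_view = findViewById(android.R.id.content);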

Next, it is time to register the onTouch listener for the this_view object so that anytime the screen is touched, onTouch is invoked. Do this as follows in the Activity class's onCreate method:

Figure 3.b.2: Registering the OnTouch Listener
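A sketch using an anonymous listener (the log tag is illustrative):

    this_view.setOnTouchListener(new View.OnTouchListener() {
        @Override
        public boolean onTouch(View v, MotionEvent event) {
            // Invoked for any touch within this_view's bounds.
            Log.d("ASPV", "onTouch at " + event.getX() + ", " + event.getY());
            return true;  // consume the event
        }
    });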

Because this_view is associated with the current Activity view, any touch on the Activity causes an onTouch invocation. However, if, say, onTouch were instead registered for a textbox (a type of View), the method would be invoked only when the touch falls within the bounds of the textbox.

Tip: To ensure that the touch method entry is reached after a touch event, utilize Android debugging messages via the Log class, such as using Log.d for debug-specific messages. Logcat and the Android SDK DDMS tools are great for run-time debugging.

 

4. Multi-Touch Code Example: Rotation

The previous sections described single-touch detection. Multi-touch detection is similar, but a few details should be carefully considered:

MotionEvent.ACTION_POINTER_UP: Occurs when a non-primary pointer is released. Note: ACTION_UP is used for the primary pointer.

MotionEvent.ACTION_POINTER_DOWN: Occurs when a non-primary pointer is pressed. Note: ACTION_DOWN is used for the primary pointer.

Pointer ID: Once a finger is pressed, Android assigns a pointer ID to it, and that ID is guaranteed to remain the same until the finger is lifted off the screen. Each subsequent finger press results in a new pointer ID being assigned. Thus, the developer should query the pointer IDs on each action event to keep track of which finger performed which action.

ACTION_MASK: This one is tricky! To properly detect the presence of multi-touch (multiple fingers on the screen), the switch condition used in the previous section needs to be updated so that the action is ANDed with this mask. The AND operation preserves only the lower eight bits of the action code (e.g., pointer up, down, etc.) and masks off the upper bits that encode the pointer index.

Return true: To properly consume events, the developer should return a value of true in the touch event handler that handles the event. Consuming events is key to detecting subsequent events.

These concepts are bridged together by updating the onTouchEvent method to detect multi-touch. Some code snippets are provided for simple rotation detection. The code is kept simple for the sake of demonstration and is not exhaustive; it omits details such as computing the rotation angle, distinguishing clockwise from counter-clockwise motion, and supporting three or more fingers.

For example, if only detecting multi-touch with two fingers for rotation, the developer can create some state variables as follows:

Figure 4.1: Some Tracker Variables for Multi-touch
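For example (all names are illustrative):

    private static final int INVALID_ID = -1;

    // Stable IDs for the first and second fingers (-1 = not tracked)
    private int primaryId = INVALID_ID;
    private int secondaryId = INVALID_ID;

    // Starting and most recent coordinates for each finger
    private float primaryStartX, primaryStartY, primaryLastX, primaryLastY;
    private float secondaryStartX, secondaryStartY, secondaryLastX, secondaryLastY;

    // Accumulated rotation of the photo, in degrees (0 = upright)
    private float rotationDegrees = 0f;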

Keep in mind these items:

  • It is a good idea to track the initial coordinates of the primary and secondary fingers along with the last location of these "pointers." This is critical in determining whether a proper rotation gesture motion has been performed.
  • Pointer IDs should be tracked with each action event to ensure that this gesture is part of a single atomic action and the user has not lifted the finger off the screen and initiated another action.
  • A rotation variable can be used to track how much a picture has been rotated. In this example, zero is the initial rotation value so that the picture appears upright (portrait position) until rotated by the user.

Now, the switch statement in onTouchEvent needs to be modified to mask off the event action bits, leaving behind only those bits pertaining to the actions performed:

Figure 4.2: Using the Action Mask
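A sketch of the updated switch:

    @Override
    public boolean onTouchEvent(MotionEvent event) {
        // Mask off the pointer-index bits, leaving only the action code.
        switch (event.getAction() & MotionEvent.ACTION_MASK) {
            case MotionEvent.ACTION_DOWN:          // primary finger pressed
                break;
            case MotionEvent.ACTION_POINTER_DOWN:  // additional finger pressed
                break;
            case MotionEvent.ACTION_MOVE:          // one or more fingers moved
                break;
            case MotionEvent.ACTION_POINTER_UP:    // additional finger lifted
                break;
            case MotionEvent.ACTION_UP:            // last finger lifted
                break;
        }
        return true;  // consume the event
    }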

Whenever onTouchEvent is invoked and a condition for a specific action type is met, it is a good idea for the developer to capture pointer information in order to determine which finger performed the action. Each pointer has two pieces of information that can be easily confused: index and ID. Various APIs in MotionEvent may use one or the other, so please use caution. Index and ID can be captured as follows:

Figure 4.3: Acquiring Index and ID
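A sketch (getActionIndex(), available since API level 8, is a convenience wrapper for the shift shown here):

    // Extract the transient pointer *index* from the action's upper bits.
    final int index = (event.getAction() & MotionEvent.ACTION_POINTER_INDEX_MASK)
            >> MotionEvent.ACTION_POINTER_INDEX_SHIFT;

    // Map the index to the stable pointer *ID*.
    final int id = event.getPointerId(index);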

Just like ACTION_MASK, ACTION_POINTER_INDEX_SHIFT should be used when acquiring pointer information for multi-touch. (Its older, deprecated name is ACTION_POINTER_ID_SHIFT; despite that name, the shift yields the pointer index, not the ID.)

The ordering of pointer events is important. For example, when multiple fingers touch the screen, the primary pointer makes contact first, followed by the non-primary pointers. Recalling that ACTION_DOWN is used for the primary pointer and ACTION_POINTER_DOWN for non-primary pointers, code can be written as follows to determine the pointer IDs and record the starting x and y coordinates when fingers are pressed down on the screen:

Figure 4.4: Acquiring Pointer IDs on Press
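A sketch, using the tracker variables from Figure 4.1:

    case MotionEvent.ACTION_DOWN:
        // First finger down: record its ID and starting position.
        primaryId = event.getPointerId(0);
        primaryStartX = primaryLastX = event.getX();
        primaryStartY = primaryLastY = event.getY();
        break;

    case MotionEvent.ACTION_POINTER_DOWN: {
        // Additional finger down: record the new pointer's ID and position.
        final int index = event.getActionIndex();
        secondaryId = event.getPointerId(index);
        secondaryStartX = secondaryLastX = event.getX(index);
        secondaryStartY = secondaryLastY = event.getY(index);
        break;
    }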

Then, on subsequent touch events (such as moving a pointer), the developer can update the coordinates of all active pointers as follows:

Figure 4.5: Updating Pointer Positions on Move
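A sketch; a move event carries samples for every active pointer, so each one is looked up by its stable ID:

    case MotionEvent.ACTION_MOVE:
        for (int i = 0; i < event.getPointerCount(); i++) {
            final int id = event.getPointerId(i);
            if (id == primaryId) {
                primaryLastX = event.getX(i);
                primaryLastY = event.getY(i);
            } else if (id == secondaryId) {
                secondaryLastX = event.getX(i);
                secondaryLastY = event.getY(i);
            }
        }
        break;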

In the case of rotation, the developer now has the starting and ending positions of both pointers and can use some trigonometry to determine the degree of rotation. A detailed treatment is outside the scope of this document, but a brief sketch of the idea follows.
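One possible approach (an illustration, not taken from the original paper) is to compare the angle of the line connecting the two fingers at the start of the gesture with its current angle:

    // Angle of the line between the two fingers, at the start and now.
    float startAngle = (float) Math.toDegrees(Math.atan2(
            secondaryStartY - primaryStartY, secondaryStartX - primaryStartX));
    float endAngle = (float) Math.toDegrees(Math.atan2(
            secondaryLastY - primaryLastY, secondaryLastX - primaryLastX));

    // Positive = clockwise in screen coordinates (y grows downward).
    rotationDegrees += endAngle - startAngle;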

 

5. Gestures

As hinted at above, please use the following link as a good reference for creating and detecting custom gestures:

http://developer.android.com/training/custom-views/making-interactive.html

The link discusses the use of a gesture builder API. This is a programmatic way to define custom gesture motions beyond common multi-touch interactions such as the two-finger rotate. The developer can use this API to build a more advanced gesture recognition template. Of course, this requires more work and is left as an exercise, as the details are beyond the scope of this paper.

 

6. Summary

This paper provided the basics to get developers going with single- and multi-touch detection in Android. The paper assumed no prior Android touch experience, making the guide suitable for both beginners and those familiar with basic touch but wanting exposure to multi-touch detection. Gesture creation and detection, the most challenging of the touch features discussed, was not covered in detail; however, the preceding code snippets and links provide the framework for completing the Android Developers tutorial. The Gesture Builder tutorial, referenced above, shows that Android has much versatility when it comes to touch. Developers can either use the pre-defined APIs for handling simple touch or even define custom gestures for the usage model at hand. This makes Android touch extensible and fun.

 

About the Author

David Medawar is an Intel Software Engineer currently working in the Software and Services Group. David has worked at Intel for the past seven years on a wide variety of projects, from system BIOS (firmware) to his current work writing applications. In his spare time, David enjoys exploring the Android framework and finding ways to improve the user experience.



*Other names and brands may be claimed as the property of others.
