Implementing Gestures and Touches in iOS 7
- Touches
- Recipe: Adding a Simple Direct Manipulation Interface
- Recipe: Adding Pan Gesture Recognizers
- Recipe: Using Multiple Gesture Recognizers Simultaneously
- Recipe: Constraining Movement
- Recipe: Testing Touches
- Recipe: Testing Against a Bitmap
- Recipe: Drawing Touches Onscreen
- Recipe: Smoothing Drawings
- Recipe: Using Multi-Touch Interaction
- Recipe: Detecting Circles
- Recipe: Creating a Custom Gesture Recognizer
- Recipe: Dragging from a Scroll View
- Recipe: Live Touch Feedback
- Recipe: Adding Menus to Views
- Summary
The touch represents the heart of iOS interaction; it provides the core way that users communicate their intent to an application. Touches are not limited to button presses and keyboard interaction. You can design and build applications that work directly with users’ gestures in meaningful ways. This chapter introduces direct manipulation interfaces that go far beyond prebuilt controls. You’ll see how to create views that users can drag around the screen. You’ll also discover how to distinguish and interpret gestures, which are a high-level touch abstraction, and gesture recognizer classes, which automatically detect common interaction styles like taps, swipes, and drags. By the time you finish this chapter, you’ll have seen many different ways to implement gesture control in your own applications.
Touches
Cocoa Touch implements direct manipulation in the simplest way possible. It sends touch events to the view you’re interacting with. As an iOS developer, you tell the view how to respond. Before jumping into gestures and gesture recognizers, you should gain a solid foundation in this underlying touch technology. It provides the essential components of all touch-based interaction.
Each touch conveys information: where the touch took place (both the current and previous location), what phase of the touch was used (essentially mouse down, mouse moved, mouse up in the desktop application world, corresponding to finger or touch down, moved, and up in the direct manipulation world), a tap count (for example, single-tap/double-tap), and when the touch took place (through a time stamp).
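As a minimal sketch (the logging is purely illustrative), a view’s touches-moved handler might read this per-touch data as follows:

```objc
// Inside a UIView subclass: read the data carried by a single touch.
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
    UITouch *touch = [touches anyObject];
    CGPoint current = [touch locationInView:self];           // where the touch is now
    CGPoint previous = [touch previousLocationInView:self];  // where it was at the last event
    NSLog(@"Phase %d moved from %@ to %@ (taps: %lu, time: %f)",
          (int)touch.phase,
          NSStringFromCGPoint(previous), NSStringFromCGPoint(current),
          (unsigned long)touch.tapCount, touch.timestamp);
}
```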
iOS uses what is called a responder chain to decide which objects should process touches. As their name suggests, responders are objects that respond to events, and they act as a chain of possible managers for those events. When the user touches the screen, the application looks for an object to handle this interaction. The touch is passed along, from view to view, until some object takes charge and responds to that event.
At the most basic level, touches and their information are stored in UITouch objects, which are passed as groups in UIEvent objects. Each UIEvent object represents a single touch event, containing single or multiple touches. This depends both on how you’ve set up your application to respond (that is, whether you’ve enabled Multi-Touch interaction) and how the user touches the screen (that is, the physical number of touch points).
Your application receives touches in view or view controller classes; both implement touch handlers via inheritance from the UIResponder class. You decide where you process and respond to touches. Trying to implement low-level gesture control in non-responder classes has tripped up many new iOS developers.
Handling touches in views may seem counterintuitive. You probably expect to separate the way an interface looks (its view) from the way it responds to touches (its controller). Further, using views for direct touch interaction may seem to contradict Model–View–Controller design orthogonality, but it can be necessary and can help promote encapsulation.
Consider the case of working with multiple touch-responsive subviews such as game pieces on a board. Building interaction behavior directly into view classes allows you to send meaningful semantically rich feedback to your core application code while hiding implementation minutia. For example, you can inform your model that a pawn has moved to Queen’s Bishop 5 at the end of an interaction sequence rather than transmit a meaningless series of vector changes. By hiding the way the game pieces move in response to touches, your model code can focus on game semantics instead of view position updates.
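One way to express that encapsulation is sketched below; the GamePieceDelegate protocol and the squareNameForPoint: helper are hypothetical, not part of UIKit, and stand in for whatever semantic vocabulary your model uses:

```objc
// Hypothetical protocol: the piece view reports game semantics to a delegate
// rather than streaming raw coordinate changes to the model.
@protocol GamePieceDelegate <NSObject>
- (void)piece:(UIView *)pieceView didMoveToSquare:(NSString *)squareName;
@end

// Inside a hypothetical game-piece view that declares a weak `delegate` property:
- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event
{
    // squareNameForPoint: is an assumed app-specific mapping, e.g., returning @"QB5"
    NSString *square = [self squareNameForPoint:self.center];
    [self.delegate piece:self didMoveToSquare:square];
}
```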
Drawing presents another reason to work in the UIView class. When your application handles any kind of drawing operation in response to user touches, you need to implement touch handlers in views. Unlike views, view controllers don’t implement the all-important drawRect: method needed for providing custom presentations.
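A minimal sketch of this pattern, using a hypothetical TouchTrailView class, collects points as touches move and renders them in drawRect::

```objc
// Sketch: a UIView subclass that accumulates touch points and redraws itself.
@interface TouchTrailView : UIView
@property (nonatomic, strong) NSMutableArray *points; // boxed CGPoints
@end

@implementation TouchTrailView
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
    if (!self.points) self.points = [NSMutableArray array];
    CGPoint point = [[touches anyObject] locationInView:self];
    [self.points addObject:[NSValue valueWithCGPoint:point]];
    [self setNeedsDisplay]; // request a redraw; never call drawRect: directly
}

- (void)drawRect:(CGRect)rect
{
    [[UIColor blackColor] set];
    for (NSValue *value in self.points)
    {
        CGPoint point = value.CGPointValue;
        UIRectFill(CGRectMake(point.x - 2.0f, point.y - 2.0f, 4.0f, 4.0f));
    }
}
@end
```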
Working at the UIViewController class level also has its perks. Instead of pulling out primary handling behavior into a secondary class implementation, adding touch management directly to the view controller allows you to interpret standard gestures, such as tap-and-hold or swipes, where those gestures have meaning. This better centralizes your code and helps tie controller interactions directly to your application model.
In the following sections and recipes, you’ll discover how touches work, how you can respond to them in your apps, and how to connect what a user sees with how that user interacts with the screen.
Phases
Touches have life cycles. Each touch can pass through any of five phases that represent the progress of the touch within an interface. These phases are as follows:
- UITouchPhaseBegan—Starts when the user touches the screen.
- UITouchPhaseMoved—Means a touch has moved on the screen.
- UITouchPhaseStationary—Indicates that a touch remains on the screen surface but that there has not been any movement since the previous event.
- UITouchPhaseEnded—Gets triggered when the touch is pulled away from the screen.
- UITouchPhaseCancelled—Occurs when the iOS system stops tracking a particular touch. This usually happens due to a system interruption, such as when the application is no longer active or the view is removed from the window.
Taken as a whole, these five phases form the interaction language for a touch event. They describe all the possible ways that a touch can progress or fail to progress within an interface and provide the basis for control for that interface. It’s up to you as the developer to interpret those phases and provide reactions to them. You do that by implementing a series of responder methods.
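As a small illustration, a helper like the following (not part of UIKit) maps each phase constant to a readable name for logging:

```objc
// Illustrative helper: human-readable names for the five touch phases.
static NSString *NameForTouchPhase(UITouchPhase phase)
{
    switch (phase)
    {
        case UITouchPhaseBegan:      return @"Began";
        case UITouchPhaseMoved:      return @"Moved";
        case UITouchPhaseStationary: return @"Stationary";
        case UITouchPhaseEnded:      return @"Ended";
        case UITouchPhaseCancelled:  return @"Cancelled";
    }
    return @"Unknown";
}
```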
Touches and Responder Methods
All subclasses of the UIResponder class, including UIView and UIViewController, respond to touches. Each class decides whether and how to respond. When choosing to do so, they implement customized behavior when a user touches one or more fingers down in a view or window.
Predefined callback methods handle the start, movement, and release of touches from the screen. Corresponding to the phases you’ve already seen, the methods involved are as follows:
- touchesBegan:withEvent:—Gets called at the starting phase of the event, as the user starts touching the screen.
- touchesMoved:withEvent:—Handles the movement of the fingers over time.
- touchesEnded:withEvent:—Concludes the touch process, where the finger or fingers are released. It provides an opportune time to clean up any work that was handled during the movement sequence.
- touchesCancelled:withEvent:—Called when Cocoa Touch must respond to a system interruption of the ongoing touch event.
Each of these is a UIResponder method, often implemented in a UIView or UIViewController subclass. All views inherit basic nonfunctional versions of the methods. When you want to add touch behavior to your application, you override these methods and add a custom version that provides the responses your application needs. Notice that UITouchPhaseStationary does not generate a callback.
Your classes can implement all or just some of these methods. For real-world deployment, you will always want to implement the touches-cancelled handler to cover the case of a user dragging his or her finger offscreen or the case of an incoming phone call, both of which cancel an ongoing touch sequence. As a rule, you can generally redirect a cancelled touch to your touchesEnded:withEvent: method. This allows your code to complete the touch sequence, even if the user’s finger has not left the screen. Apple recommends overriding all four methods as a best practice when working with touches.
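Putting this together, here is a sketch of a hypothetical DraggableView subclass that overrides all four handlers and routes cancellation through the ended handler:

```objc
// Sketch: a draggable UIView subclass implementing all four touch callbacks.
@interface DraggableView : UIView
@end

@implementation DraggableView
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    // Bring the touched view to the front so it drags above its siblings
    [self.superview bringSubviewToFront:self];
}

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
    UITouch *touch = [touches anyObject];
    CGPoint current = [touch locationInView:self.superview];
    CGPoint previous = [touch previousLocationInView:self.superview];
    // Offset the view's center by the change in the touch position
    self.center = CGPointMake(self.center.x + current.x - previous.x,
                              self.center.y + current.y - previous.y);
}

- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event
{
    // Clean up any state associated with the drag here
}

- (void)touchesCancelled:(NSSet *)touches withEvent:(UIEvent *)event
{
    // Treat a system cancellation like a normal end of touch
    [self touchesEnded:touches withEvent:event];
}
@end
```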
Touching Views
When dealing with many onscreen views, iOS automatically decides which view the user touched and passes any touch events to the proper view for you. This helps you write concrete direct manipulation interfaces where users touch, drag, and interact with onscreen objects.
Just because a touch is physically on top of a view doesn’t mean that a view has to respond. Each view can use a “hit test” to choose whether to handle a touch or to let that touch fall through to views beneath it. As you’ll see in the recipes that follow, you can use clever response strategies to decide when your view should respond, particularly when you’re using irregular art with partial transparency.
With touch events, the first view that passes the hit test gets the chance to handle the touch or to decline it. If it declines, the touch passes to the view’s superview and then works its way up the responder chain until some responder handles it or until it reaches the window that owns the views. If the window does not process it, the touch moves to the UIApplication instance, where it is either processed or discarded.
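As a hedged illustration, a view displaying circular artwork might override pointInside:withEvent: so touches over its transparent corners fall through to the views beneath it; the bitmap-testing recipe later in this chapter generalizes the idea to arbitrary artwork:

```objc
// Sketch: accept touches only inside a centered circular region of the view,
// letting touches over the transparent corners fall through.
- (BOOL)pointInside:(CGPoint)point withEvent:(UIEvent *)event
{
    CGPoint center = CGPointMake(CGRectGetMidX(self.bounds), CGRectGetMidY(self.bounds));
    CGFloat radius = MIN(self.bounds.size.width, self.bounds.size.height) / 2.0f;
    CGFloat dx = point.x - center.x;
    CGFloat dy = point.y - center.y;
    return (dx * dx + dy * dy) <= (radius * radius);
}
```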
Multi-Touch
iOS supports both single- and Multi-Touch interfaces. Single-touch GUIs handle just one touch at any time. This relieves you of any responsibility to determine which touch you were tracking. The one touch you receive is the only one you need to work with. You look at its data, respond to it, and wait for the next event.
When working with Multi-Touch—that is, when you respond to multiple onscreen touches at once—you receive an entire set of touches. It is up to you to order and respond to that set. You can, however, track each touch separately and see how it changes over time, which enables you to provide a richer set of possible user interactions. Recipes for both single-touch and Multi-Touch interaction follow in this chapter.
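Here is a minimal sketch of a UIView subclass that opts in to Multi-Touch and walks the set of changed touches it receives:

```objc
// Sketch: enable Multi-Touch and respond to every touch that changed.
- (instancetype)initWithFrame:(CGRect)frame
{
    self = [super initWithFrame:frame];
    if (self)
        self.multipleTouchEnabled = YES; // Multi-Touch is off by default
    return self;
}

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
    // touches contains only the touches that changed; event.allTouches holds
    // every touch currently associated with this event
    for (UITouch *touch in touches)
    {
        CGPoint point = [touch locationInView:self];
        NSLog(@"Touch at %@ (%lu active in total)",
              NSStringFromCGPoint(point), (unsigned long)event.allTouches.count);
    }
}
```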
Gesture Recognizers
With gesture recognizers, Apple added a powerful way to detect specific gestures in your interface. Gesture recognizers simplify touch design. They encapsulate touch methods, so you don’t have to implement them yourself, and they provide a target-action feedback mechanism that hides implementation details. They also standardize how certain movements are categorized, such as drags or swipes.
With gesture recognizer classes, you can trigger callbacks when iOS determines that the user has tapped, pinched, rotated, swiped, panned, or used a long press. These detection capabilities simplify development of touch-based interfaces. You can code your own for improved reliability, but a majority of developers will find that the recognizers, as shipped, are robust enough for many application needs. You’ll find several recognizer-based recipes in this chapter. Because recognizers all basically work in the same fashion, you can easily extend these recipes to your specific gesture recognition requirements.
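A minimal sketch of the target-action pattern, assuming a view controller and an arbitrarily named handleTap: method:

```objc
// Sketch: attach a double-tap recognizer; the recognizer handles the touch
// bookkeeping and calls back through target-action.
- (void)viewDidLoad
{
    [super viewDidLoad];
    UITapGestureRecognizer *tap = [[UITapGestureRecognizer alloc]
        initWithTarget:self action:@selector(handleTap:)];
    tap.numberOfTapsRequired = 2;    // respond to double-taps
    tap.numberOfTouchesRequired = 1; // made with a single finger
    [self.view addGestureRecognizer:tap];
}

- (void)handleTap:(UITapGestureRecognizer *)recognizer
{
    CGPoint location = [recognizer locationInView:self.view];
    NSLog(@"Double-tapped at %@", NSStringFromCGPoint(location));
}
```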
Here is a rundown of the kinds of gestures built in to recent versions of the iOS SDK:
- Taps—Taps correspond to single or multiple finger taps onscreen. Users can tap with one or more fingers; you specify how many fingers you require as a gesture recognizer property and how many taps you want to detect. You can create a tap recognizer that works with single-finger taps or more nuanced recognizers that look for, say, two-fingered triple-taps.
- Swipes—Swipes are short single- or Multi-Touch gestures that move in a single cardinal direction: up, down, left, or right. They cannot move too far off course from that primary direction. You set the direction you want your recognizer to work with. The recognizer returns the detected direction as a property.
- Pinches—To pinch or unpinch, a user must move two fingers together or apart in a single movement. The recognizer returns a scale factor indicating the degree of pinching.
- Rotations—To rotate, a user moves two fingers at once in a clockwise or counterclockwise direction, producing an angular rotation as the main returned property.
- Pans—Pans occur when users drag their fingers across the screen. The recognizer determines the change in translation produced by that drag (a short pan sketch follows this list).
- Long presses—To create a long press, the user touches the screen and holds his or her finger (or fingers) there for a specified period of time. You can specify how many fingers must be used before the recognizer triggers.
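To close, here is the pan sketch mentioned in the list above; dragView is an assumed subview property, and the handler name is arbitrary:

```objc
// Sketch: drag a view with a pan recognizer by applying its translation.
- (void)viewDidLoad
{
    [super viewDidLoad];
    UIPanGestureRecognizer *pan = [[UIPanGestureRecognizer alloc]
        initWithTarget:self action:@selector(handlePan:)];
    [self.dragView addGestureRecognizer:pan]; // dragView is an assumed subview
}

- (void)handlePan:(UIPanGestureRecognizer *)recognizer
{
    UIView *view = recognizer.view;
    CGPoint translation = [recognizer translationInView:view.superview];
    view.center = CGPointMake(view.center.x + translation.x,
                              view.center.y + translation.y);
    // Reset so each callback delivers only the incremental change
    [recognizer setTranslation:CGPointZero inView:view.superview];
}
```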