The Art of Deliberate Touches: Building a Triggered Tap Gesture Recognizer
Distinguishing user intent can be a hard problem. There’s a big difference between random touches on the screen caused by device handling and a deliberate intent to interact. A while back, while chatting in the Freenode IRC #iphonedev channel, I ran across a developer who was desperate to find a solution for this challenge. He needed a custom gesture recognizer that enabled users to produce a "confirmed touch." By this, he meant a gesture that was unlikely to be produced by accidental brushes against the screen. The recognizer needed to be easy for users to learn and reliable to detect.
Enter the "triggered tap" — and yes, I made that phrase up. This gesture triggers only when the user already has one touch on the screen. For example, a user might first place a forefinger and subsequently tap with the middle finger. Each time the middle finger taps, a counter goes up by one. The gesture continues until the forefinger leaves the screen.
It’s a reasonably easy gesture to implement, and one that can be guarded by any number of safety measures to ensure that the user actually intends to interact. Here’s a video that demonstrates my implementation of this approach. In it, you can see a variety of successful and unsuccessful interactions, all governed by the recognizer’s rules.
The Recognizer Interface
The following interface represents the triggered tap recognizer’s API. It consists of two custom properties: a tap counter that accumulates the number of recognitions and a minimum delay interval for touches.
@interface TriggeredTapGestureRecognizer : UIGestureRecognizer
@property (nonatomic, readonly) int count;
@property (nonatomic) NSTimeInterval minimumDelay;
@end
This recognizer produces a UIGestureRecognizerStateRecognized update at every tap along with an increase in the recognizer’s count. Although you could skip the counter, I felt the gesture would be of greatest use if it kept track of the user’s iterations for each triggered sequence. A sequence starts when a single finger touches the screen and ends when that initial finger is removed. With the count, you might wait until the user has tapped twice, for example, before responding by unlocking a portion of your interface.
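To make that concrete, here’s a sketch of a hypothetical action method. The selector name and the “unlock” response are my own illustrations, not part of the recognizer class; the only API assumed from the class itself is the count property.

```objc
// Hypothetical target action; the recognizer fires this on each recognized tap.
- (void)handleTriggeredTap:(TriggeredTapGestureRecognizer *)recognizer
{
    // Ignore the first tap of a sequence; respond once the user
    // has tapped at least twice while holding the trigger finger down.
    if (recognizer.count >= 2)
        [self unlockHiddenPanel]; // placeholder for your own response
}
```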
I also designed in a minimum delay. It’s rare for touches to sync up entirely. Even when you mean to touch a device with two fingers, those touches may arrive as separate events. The delay property enforces a clear chronological separation between the original finger placement and any subsequent taps. Again, this enforces deliberation: both in the user’s interaction and the recognizer itself. Each touch must be purposeful and intentional.
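Setting up the recognizer works like any other gesture recognizer. A minimal sketch, assuming the interface above; the half-second delay and the action selector name are arbitrary choices of mine:

```objc
// Attach the recognizer to a view with a deliberate half-second delay
// between the trigger touch and any tap that counts.
TriggeredTapGestureRecognizer *recognizer =
    [[TriggeredTapGestureRecognizer alloc]
        initWithTarget:self action:@selector(handleTriggeredTap:)];
recognizer.minimumDelay = 0.5; // hypothetical value; tune to taste
[self.view addGestureRecognizer:recognizer];
```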
Touch Life Cycles
Apple makes certain guarantees about the life cycle of a touch, but when you work with gesture recognizers, some of your normal UIView touch assumptions must be re-addressed. Most important, recognize that a gesture recognizer touch may not participate in every touch callback.
The triggered tap recognizer will potentially set its state to UIGestureRecognizerStateRecognized many times. On each tap, the recognizer does not reset. Instead, it continues waiting for tap touches until the initial triggering touch is lifted from the screen. After you’ve set a recognizer to UIGestureRecognizerStateRecognized, touches will no longer reach the touchesEnded:withEvent: method. Instead, you’ll need to manually clean up these touches at the end of their lifetimes.
That means keeping a collection of active touches on hand and testing touch phases during interactions to weed out touches that have ended. Here’s how you can collect and test those touches at each stage of the interaction. When a touch’s phase is UITouchPhaseEnded, it no longer participates in the gesture.
// Collect touches
[activeTouches unionSet:touches];

// Clean up expired touches
for (UITouch *touch in activeTouches.copy)
{
    if (touch.phase == UITouchPhaseEnded)
        [activeTouches removeObject:touch];
}
Tracking the Trigger
To create a willful invocation, this gesture recognizer requires a primary touch, which I call t1. When established, it must be the only active touch on the screen. Testing this ensures the touch won’t trigger if several fingers reach the screen at the same time.
The recognizer stores a date as it establishes the primary touch. The touch’s trigger time enables you to test subsequent touches. You ensure enough time has passed to make the secondary touch a meaningful action.
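The snippets that follow lean on a small amount of internal state. Here’s a sketch of the instance variables I’m assuming; the names match the code below, but the exact layout is my reconstruction, not the published implementation.

```objc
// A sketch of the state the recognizer tracks internally.
@implementation TriggeredTapGestureRecognizer
{
    UITouch *t1;                 // the primary "trigger" touch
    NSDate *triggerTime;         // when the trigger (or most recent tap) landed
    NSMutableSet *activeTouches; // all touches currently on the screen
}
// ... touch-handling overrides go here ...
@end
```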
// To start: must have no trigger and a single touch
if (!t1 && (activeTouches.count == 1))
{
    // First touch establishes the trigger
    t1 = touches.anyObject;
    triggerTime = [NSDate date];
    return;
}
That delay comes into play each time a new touch is detected. If the primary trigger exists and enough time has passed, each additional touch establishes a state change. The count increases and the recognizer continues its scan.
This state change enables the recognizer to communicate with its target. When the gesture is recognized, the iOS runtime calls the target’s action method at the next run loop cycle. The recognizer state then returns to UIGestureRecognizerStatePossible. That "recognized"-to-"possible" state fallback enables this class’s repeat events.
// Recognize on two touches: a trigger plus a tap after the minimum delay
if (t1 && (activeTouches.count == 2) &&
    ([[NSDate date] timeIntervalSinceDate:triggerTime] > _minimumDelay))
{
    [activeTouches minusSet:touches];
    self.state = UIGestureRecognizerStateRecognized;
    triggerTime = [NSDate date];
    _count++;
    return;
}
The recognition cycle persists through the first touch’s lifetime. When the primary touch leaves the screen, the interaction ends and the recognizer’s count no longer updates. The recognizer’s target can use the final count to determine whether to take further action.
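Ending the sequence is a matter of noticing when the trigger touch lifts. Here’s one hedged sketch of how that cleanup might look; this is my reconstruction of where the reset could live, and (as noted earlier) ended touches may also need to be culled in the other touch callbacks rather than only here.

```objc
- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event
{
    [activeTouches minusSet:touches];
    if ([touches containsObject:t1])
    {
        // The trigger finger lifted: the sequence is over.
        // Reset so the next trigger starts a fresh count.
        t1 = nil;
        triggerTime = nil;
        _count = 0;
        [activeTouches removeAllObjects];
    }
}
```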
From start to end, the triggering touch controls the recognition. As a prerequisite, it creates a purposeful filter for interaction.
Wrap-Up
The TriggeredTapGestureRecognizer class introduced in this write-up mandates user intent. It requires a prerequisite trigger interaction before recognizing any subsequent taps, and it keeps working across repeated successful recognitions. Because of this, the class provides a good example of how you might approach any “first this, then that” touch interaction with a single unified gesture recognizer.