
iOS development notes and related stuff
by Sim Domingo
Gesture-based or gesture-enhanced UIs have recently become a trend in iOS apps. Apps like Clear have become very popular for offering gestures as their main method of interaction, while apps like Tweetbot are well liked for using gestures as handy shortcuts for various actions. I believe gesture-based UIs became popular because people find them fun to use; they are also a more natural way of interacting with touch input devices. Though gestures are not as discoverable as onscreen buttons, their use should become more prevalent as we grow accustomed to touch UIs. In this post, I will discuss some of the subtle and not-often-discussed aspects of these gestures.
On the programming side, gestures are not difficult to implement with UIKit on iOS. You can do almost everything with the really handy subclasses of UIGestureRecognizer; it's very rare that you will need to drop down to processing raw UITouch objects. But while gestures may require relatively little code to implement, devising a set of consistent and intuitive gestures requires more thought. There are also implementation issues that are not immediately apparent until you encounter them. I'll focus on a tricky one concerning double-tap gestures.
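To illustrate how little code a basic gesture takes, here is a minimal sketch of attaching a tap recognizer to a table view cell. The class and handler names (`TweetCell`, `handleTap(_:)`) are my own illustrations, not anything from a real app:

```swift
import UIKit

class TweetCell: UITableViewCell {
    // Call once, e.g. from init or awakeFromNib.
    func setUpGestures() {
        // UITapGestureRecognizer is one of the handy UIGestureRecognizer
        // subclasses; no raw UITouch processing is needed.
        let tap = UITapGestureRecognizer(target: self,
                                         action: #selector(handleTap(_:)))
        tap.numberOfTapsRequired = 1
        contentView.addGestureRecognizer(tap)
    }

    @objc func handleTap(_ recognizer: UITapGestureRecognizer) {
        // Single-tap action, e.g. reveal the action buttons for the tweet.
    }
}
```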
Double taps are commonly used to overload taps on an onscreen UI element, like a button or table view cell. A tap typically triggers one action, while a double tap triggers another, less common action. An example of this usage is in Tweetbot: a tap on a tweet displays action buttons for that tweet, while a double tap navigates to the tweet's detail view. While it sounds straightforward, naively implementing double taps causes some issues.
The main issue stems from the app's need to disambiguate a single tap from a double tap. A double tap is recognized by looking for two taps separated by a time interval below a certain threshold; it is, in effect, a composition of two single taps. A naively designed app that treats double taps and single taps as two independent events will therefore see every single tap delayed by the double-tap threshold. This can create the impression that the app is slow to respond, even when it is not.
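The timing logic can be modeled without UIKit at all. The toy classifier below (with an assumed 0.3 second double-tap window) shows why a single tap cannot be confirmed until the window has elapsed, which is exactly the delay users perceive:

```swift
import Foundation

// Toy model of tap disambiguation; the 0.3 s window is an assumption,
// not UIKit's actual value.
enum TapEvent { case single, double }

struct TapClassifier {
    let threshold: TimeInterval = 0.3
    private(set) var lastTapTime: TimeInterval?

    // Call with each tap's timestamp. Returns .double immediately when a
    // second tap lands inside the window; otherwise returns nil, because
    // the tap is still ambiguous until the window closes.
    mutating func registerTap(at time: TimeInterval) -> TapEvent? {
        if let last = lastTapTime, time - last < threshold {
            lastTapTime = nil
            return .double
        }
        lastTapTime = time
        return nil
    }

    // Once the window has closed with no second tap, the pending tap
    // finally resolves to a single tap -- a full threshold later.
    mutating func resolvePending(at time: TimeInterval) -> TapEvent? {
        if let last = lastTapTime, time - last >= threshold {
            lastTapTime = nil
            return .single
        }
        return nil
    }
}
```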
There are several ways to avoid this behavior. All of them involve being aware that double-tap events are composed of two single taps; handling the first single tap as part of handling the double tap usually solves the problem. Let's look at a concrete example of how Tweetbot does this. The app highlights the cell that contains a tweet when a finger lands on it, then unhighlights the cell when you lift your finger. If the double-tap delay passes without another tap, the gesture is recognized as a single tap and the single-tap action is triggered. However, if another tap is detected before the delay passes, the double-tap action is triggered.
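On the recognizer side, UIKit has a built-in way to express this dependency: `require(toFail:)` makes the single-tap recognizer wait until the double-tap recognizer has failed. A minimal sketch (the controller and handler names are assumptions for illustration):

```swift
import UIKit

class TweetViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()

        let doubleTap = UITapGestureRecognizer(target: self,
                                               action: #selector(showDetail))
        doubleTap.numberOfTapsRequired = 2
        view.addGestureRecognizer(doubleTap)

        let singleTap = UITapGestureRecognizer(target: self,
                                               action: #selector(showActions))
        singleTap.numberOfTapsRequired = 1
        // The single tap fires only after the double-tap recognizer fails,
        // i.e. after the double-tap window elapses with no second tap.
        singleTap.require(toFail: doubleTap)
        view.addGestureRecognizer(singleTap)
    }

    @objc func showActions() { /* single-tap action, e.g. action buttons */ }
    @objc func showDetail()  { /* double-tap action, e.g. detail view */ }
}
```

Note that `require(toFail:)` handles the recognition ordering but not the perceived delay; the immediate highlight described above is still what keeps the app feeling responsive.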
The highlight after the first tap is key to making the app appear responsive. There is still the apparent delay before the single-tap action fires, but this is unavoidable because of the need to disambiguate between single and multiple taps. Note that most of the above discussion does not apply to long presses. While one still needs to distinguish between long and short presses, the ambiguity is always resolved as soon as the finger is lifted.
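For comparison, a long press can be recognized with UILongPressGestureRecognizer, whose `.began` state fires as soon as the press duration passes the threshold, while the finger is still down. The class name and the 0.5 second duration below are illustrative assumptions:

```swift
import UIKit

class AvatarView: UIImageView {
    func setUpLongPress() {
        let longPress = UILongPressGestureRecognizer(
            target: self,
            action: #selector(handleLongPress(_:)))
        longPress.minimumPressDuration = 0.5  // assumed threshold
        isUserInteractionEnabled = true       // UIImageView defaults to false
        addGestureRecognizer(longPress)
    }

    @objc func handleLongPress(_ recognizer: UILongPressGestureRecognizer) {
        // .began fires while the finger is still on the screen, so there
        // is no post-lift waiting period as with double taps.
        if recognizer.state == .began {
            // long-press action
        }
    }
}
```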
Gestures usually have the effect of making users feel more efficient, as the actions they perform are natural and fun. Clumsily implemented, however, they make your app appear slow or confusing instead. This is why it's important for app designers to think their gesture-based or gesture-enhanced UIs through. They are not something you can just bolt onto apps that previously relied on taps on buttons and cells.