The previous article, Event Delivery and the Responder Chain, described how the system finds the "first responder" when a touch occurs on the screen, how the responder chain is determined once the first responder is found, and how events are passed along that chain. UIGestureRecognizer did not appear in that discussion. But if we want to add event handling to a UIView, it is usually much easier to use UIGestureRecognizer and its subclasses than to subclass UIView and override the touches methods. The two approaches affect the event-processing mechanism differently, and that is what this article is about: handling events through the responder chain versus gesture recognition.
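To make the contrast concrete, here is a minimal sketch of the two approaches, assuming hypothetical names (TouchLoggingView, DemoViewController, handleTap); it is only an illustration, not a full implementation:

```swift
import UIKit

// Approach 1: subclass UIView and override the touches methods directly.
class TouchLoggingView: UIView {
    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        super.touchesBegan(touches, with: event)
        print("touchesBegan")
    }
}

// Approach 2: keep a plain UIView and attach a gesture recognizer instead.
class DemoViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()
        let tap = UITapGestureRecognizer(target: self, action: #selector(handleTap(_:)))
        view.addGestureRecognizer(tap)
    }

    @objc private func handleTap(_ recognizer: UITapGestureRecognizer) {
        print("tap recognized at \(recognizer.location(in: view))")
    }
}
```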

First, let’s review the general flow of event delivery and the responder chain:

  1. Find the first responder with a hit test
  2. The responder chain is determined from the first responder
  3. The event is passed along the responder chain
  4. The event is either handled by some responder along the way or, if no responder handles it, discarded

In step 3, a responder passes the event along the responder chain by calling the touches family of methods on its next responder. As we mentioned in the previous article, classes like UIControl do not forward the touches methods to their next responder, which has the effect of blocking the responder chain, i.e. of accepting the event. In this article, we will analyze how events are processed and received once UIGestureRecognizer joins in.
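As a rough sketch of what "forwarding (or not forwarding) to the next responder" looks like in code, with two hypothetical UIView subclasses:

```swift
import UIKit

// Calling super (UIResponder's default implementation) passes the touch on
// to the next responder, so the event keeps travelling up the chain.
class PassThroughView: UIView {
    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        print("PassThroughView saw touchesBegan, passing it on")
        super.touchesBegan(touches, with: event)   // keeps the chain alive
    }
}

// Not calling super "absorbs" the event, roughly the way UIControl does.
class AbsorbingView: UIView {
    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        print("AbsorbingView handled touchesBegan, chain stops here")
        // deliberately no call to super / next responder
    }
}
```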

When gesture recognition joins the responder chain

In the previous article, we only discussed the flow of events along the responder chain, shown in the blue part of the diagram below; in fact, the gesture recognition process in the lower part of the diagram is happening at the same time.

As the figure shows, after the first responder is found through the hit test, the UITouch is delivered both to the responder’s touches methods (see the previous article) and to the gesture recognition system, so the two processing systems work simultaneously.

The first thing to notice is that the process shown in blue in the figure above is not executed only once. For example, when you slowly slide a finger across a view, a UITouch object is created and keeps updating itself as the finger moves, triggering the touches methods repeatedly. In general, we get a trigger sequence like the following:

```
touchesBegan    // finger touches the screen
touchesMoved    // finger moves on the screen
touchesMoved
...
touchesMoved
touchesEnded    // finger lifts off the screen
```

A UITouch’s gestureRecognizers property stores the gesture recognizers collected while searching for the first responder, and the gesture recognition system keeps checking whether the current UITouch matches one of those gestures as the touches methods fire.
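To see which recognizers were collected for a touch, one could log the UITouch’s gestureRecognizers property from a view’s touchesBegan; the InspectingView name below is just illustrative:

```swift
import UIKit

class InspectingView: UIView {
    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        super.touchesBegan(touches, with: event)
        if let touch = touches.first {
            // The recognizers gathered along the hit-tested view's superview chain.
            let names = touch.gestureRecognizers?.map { String(describing: type(of: $0)) } ?? []
            print("recognizers watching this touch:", names)
        }
    }
}
```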

When gesture recognition succeeds: the touched view receives touchesCancelled and gets no further touches callbacks from that UITouch, and touchesCancelled is also sent to all other responders associated with the same UITouch. This lets the recognized gesture monopolize the UITouch. Concretely, the sequence looks like this:

```
touchesBegan      // finger touches the screen
touchesMoved
...
touchesMoved
touchesCancelled  // gesture recognized; the view's touches methods are blocked
                  // the finger 💅 has not left the screen,
                  // but even if you keep swiping 🛹 the touches methods no longer fire
```
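A simple way to observe this takeover is to override touchesCancelled alongside the other touches methods in a view subclass; the CancellationAwareView name is made up for this sketch:

```swift
import UIKit

class CancellationAwareView: UIView {
    override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
        super.touchesMoved(touches, with: event)
        print("touchesMoved")
    }

    override func touchesEnded(_ touches: Set<UITouch>, with event: UIEvent?) {
        super.touchesEnded(touches, with: event)
        print("touchesEnded")          // not reached once an attached gesture wins
    }

    override func touchesCancelled(_ touches: Set<UITouch>, with event: UIEvent?) {
        super.touchesCancelled(touches, with: event)
        print("touchesCancelled")      // fired when an attached gesture is recognized
    }
}
```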

When gesture recognition does not succeed: this just means the gesture has not been recognized yet; it does not mean it never will be, and the responder chain is not blocked. Note that “not yet successful” is not the same as “failed”. Most of the time the gesture’s state is .possible, meaning the UITouch does not match it for the moment but might still be recognized later. Only a state of .failed means the recognizer has no chance left under the current touch, and on the next runloop it is removed from the UITouch’s gestureRecognizers.

An example 🌰

Here is a simple example to simulate the interplay of the responder chain and gestures: touch a view with one finger and slide a certain distance. The figure below shows the view both without any gesture and with a UIPanGestureRecognizer attached.

As the figure shows, with no gesture attached, pressing the finger down triggers the responder’s touchesBegan, moving the finger triggers touchesMoved repeatedly, and lifting the finger triggers touchesEnded. Throughout this process we keep receiving updates from the same UITouch.

With a UIPanGestureRecognizer added to the view, the lower lane in the figure represents the gesture recognition system working alongside the responder chain. You can see that the gesture recognition system starts working the moment the finger goes down, and for the first stretch it stays in the recognizing state. After the finger has dragged a very small distance (note that it has not been lifted yet), the gesture recognition system decides that the UITouch’s movement matches the UIPanGestureRecognizer: the view’s responder chain is sent touchesCancelled, cutting the UITouch off from the view’s touches methods (and from other associated responders, not shown here). After that, only the target-action method associated with the gesture is called (the dark green nodes in the figure, calling panFunction).
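A minimal setup that should reproduce this figure might look as follows; the view could be the logging view sketched earlier, and the class and method names (PanDemoViewController, panFunction) are only illustrative:

```swift
import UIKit

class PanDemoViewController: UIViewController {
    let demoView = UIView()

    override func viewDidLoad() {
        super.viewDidLoad()
        demoView.frame = CGRect(x: 40, y: 120, width: 200, height: 200)
        demoView.backgroundColor = .systemTeal
        view.addSubview(demoView)

        let pan = UIPanGestureRecognizer(target: self, action: #selector(panFunction(_:)))
        demoView.addGestureRecognizer(pan)
    }

    // Once the pan is recognized, only this keeps firing;
    // the view itself gets touchesCancelled.
    @objc func panFunction(_ recognizer: UIPanGestureRecognizer) {
        print("pan translation:", recognizer.translation(in: demoView))
    }
}
```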

Take it a step further

To keep the figure readable, I have omitted quite a few details from it. Here they are:

  1. The state of the gesture recognizer is not indicated in the figure:
    • At the orange recognizing nodes and the brown recognized nodes in the figure, the gesture is in the .possible state
    • At the green nodes, the gesture’s state changes as .began -> [.changed] -> .ended
  2. The gesture recognizer is not a responder, but it does have its own touches family of methods, which fire slightly before the touches methods of the view it is attached to (a sketch after this list shows one):
    • You can also see that each node on the gesture line sits slightly to the left
    • The orange, brown, and dark green nodes on the gesture line also trigger the gesture recognizer’s own touches methods
  3. A more detailed trigger sequence is shown in the figure below (for a UIPanGestureRecognizer added to a UIView, and only for the case of sliding a certain distance)
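For point 2, here is a rough sketch of a gesture recognizer subclass that only logs its own touches methods; note the UIGestureRecognizerSubclass import, which is needed to override them (the LoggingGestureRecognizer name is made up):

```swift
import UIKit
// Required in order to override UIGestureRecognizer's touches methods.
import UIKit.UIGestureRecognizerSubclass

// A do-nothing recognizer that only logs, to show that a recognizer's
// touches methods fire slightly before those of the view it is attached to.
class LoggingGestureRecognizer: UIGestureRecognizer {
    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent) {
        super.touchesBegan(touches, with: event)
        print("recognizer touchesBegan (before the view's touchesBegan)")
    }

    override func touchesEnded(_ touches: Set<UITouch>, with event: UIEvent) {
        super.touchesEnded(touches, with: event)
        print("recognizer touchesEnded")
        state = .failed   // this sketch never actually recognizes anything
    }
}
```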

The gesture recognizer’s touches methods share names with the responder’s: touchesBegan, touchesMoved, touchesEnded, touchesCancelled. These are easily confused with the gesture recognizer’s state property, which, depending on whether the gesture is discrete or continuous, can be .possible, .began, .changed, .ended, .cancelled, or .failed. The names look like the method names, but they are not the same thing.
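To keep the two apart, a pan handler might switch on the recognizer’s state explicitly, roughly like this (StateDemoViewController and handlePan are illustrative names):

```swift
import UIKit

class StateDemoViewController: UIViewController {
    // The state enum here is a different thing from the touchesBegan/... methods.
    @objc func handlePan(_ recognizer: UIPanGestureRecognizer) {
        switch recognizer.state {
        case .possible: break                      // still waiting for a match
        case .began:    print("pan .began")        // continuous gesture recognized
        case .changed:  print("pan .changed")
        case .ended:    print("pan .ended")
        case .cancelled, .failed: print("pan .cancelled / .failed")
        @unknown default: break
        }
    }
}
```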

More options 🦾

We can change a gesture’s behavior by configuring its properties. Here are three commonly used ones (a configuration sketch follows the list):

  1. cancelsTouchesInView: defaults to true. As the name suggests, if we set it to false, a successful recognition no longer sends touchesCancelled to the target view, so the view’s own touches methods are not interrupted; the gesture and the view’s own methods then fire side by side. Sometimes we don’t want the gesture to override the view’s own handling, and changing this property achieves that.
  2. delaysTouchesBegan: defaults to false. From the example above we know that while the gesture is still in the .possible state, the view’s touches methods fire as usual and are only cancelled once recognition succeeds. When this property is true, the view’s touches methods are held back until the gesture recognition has definitively succeeded or failed. In other words, if you set it to true and the gesture is eventually recognized, the view’s touches methods never fire at all.
  3. delaysTouchesEnded: defaults to true. Similar to the previous property: when it is true, the view’s touchesEnded is delayed by roughly 0.15 s. This is typically used for multi-tap gestures. Take a double-tap gesture: touchesEnded would normally fire the moment the finger leaves the screen, so if this property were false the view’s touchesEnded would fire immediately and the two taps could end up treated as two separate single taps. When it is true, touchesEnded is delayed so the two taps can be linked together and recognized as a double tap.
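Here is the configuration sketch mentioned above, setting all three properties on a hypothetical pan recognizer:

```swift
import UIKit

class ConfigDemoViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()
        let pan = UIPanGestureRecognizer(target: self, action: #selector(handlePan(_:)))

        // Let the view's own touches methods keep firing even after the pan is recognized.
        pan.cancelsTouchesInView = false

        // Hold the view's touches methods back until recognition has clearly succeeded or failed.
        pan.delaysTouchesBegan = true

        // Briefly delay touchesEnded so multi-tap gestures can link taps together (the default).
        pan.delaysTouchesEnded = true

        view.addGestureRecognizer(pan)
    }

    @objc func handlePan(_ recognizer: UIPanGestureRecognizer) { /* ... */ }
}
```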

UIControl and gesture recognition

Because UIControl dispatches its target-action methods by receiving and processing touches in its own touches methods, and a gesture recognizer’s touches methods fire before the view’s, it follows from the rules above that, for a custom UIControl, gesture recognition has higher priority than the events the UIControl handles itself.

For example, suppose we add a .touchUpInside action to a UIControl and also attach a UITapGestureRecognizer to it. Tapping the UIControl triggers the method associated with the gesture, and the UIControl is sent touchesCancelled, interrupting its own event handling, so the .touchUpInside action never fires.
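A sketch of that conflict, with illustrative names; per the behavior described above, tapping the control should print only the gesture’s message:

```swift
import UIKit

class ControlConflictViewController: UIViewController {
    let control = UIControl()

    override func viewDidLoad() {
        super.viewDidLoad()
        control.frame = CGRect(x: 40, y: 120, width: 120, height: 44)
        control.backgroundColor = .systemOrange
        view.addSubview(control)

        // Target-action dispatched from the control's own touches methods.
        control.addTarget(self, action: #selector(controlTapped), for: .touchUpInside)

        // A tap recognizer on the same control.
        let tap = UITapGestureRecognizer(target: self, action: #selector(gestureTapped))
        control.addGestureRecognizer(tap)
    }

    @objc func controlTapped() { print("control .touchUpInside") }  // should not fire here
    @objc func gestureTapped() { print("tap gesture recognized") }  // this one wins
}
```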

At the same time, this mechanism can cause a problem: if we add a UIControl as a subview of a view that already has a tap gesture, then no matter how we add a tap-type target-action to that UIControl, the end result is that the parent view’s gesture is triggered (because it was collected during the hit test) while UIControl’s event handling is interrupted, so the added target-action never fires.

Apple 🍎 actually has a solution for this. UIKit does something special for some of its own controls (which are also UIControl subclasses): when a gesture on an ancestor view conflicts with the control’s own functionality, the control’s own method is triggered instead of the ancestor’s gesture. The specific controls and the conflicting interactions are as follows:

Here’s another example: if we add a UIButton as a subview of a view that already has a tap gesture, and add a tap-type target-action to the button, then tapping the button triggers the button’s target-action while the gesture’s method is ignored.

If you don’t want this to happen, you can add the gesture to the control itself (because a gesture recognizes the event before the control does, as in the earlier UIControl example with .touchUpInside), and then the gesture will take effect.
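A sketch of that workaround, attaching the tap directly to the button (names are illustrative); following the article’s reasoning, the gesture’s action should fire, and the button’s own .touchUpInside may be cancelled as a result:

```swift
import UIKit

class ButtonGestureViewController: UIViewController {
    let button = UIButton(type: .system)

    override func viewDidLoad() {
        super.viewDidLoad()
        button.frame = CGRect(x: 40, y: 120, width: 120, height: 44)
        button.setTitle("Tap me", for: .normal)
        view.addSubview(button)

        button.addTarget(self, action: #selector(buttonTapped), for: .touchUpInside)

        // Attaching the tap to the button itself (instead of only to its superview)
        // lets the gesture recognize the touch before the button's own handling.
        let tap = UITapGestureRecognizer(target: self, action: #selector(tapRecognized))
        button.addGestureRecognizer(tap)
    }

    @objc func buttonTapped()  { print("button .touchUpInside") }
    @objc func tapRecognized() { print("tap gesture on the button fired") }
}
```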

Conclusion

In general, gesture recognizers handle screen touches at a higher priority than the methods of the control itself, most of the time.

So during development, be careful not to let gestures override the control’s own method implementations. Also understand that, by default, gesture recognition does not prevent the control’s own touches methods from firing at first; instead, it cancels them at some later point. In addition, UIKit has special cases in which its own controls get to skip the superview’s gesture recognition and take control of the event.

This article was written while thinking things through, so it is a little verbose 🤪, forgive me.