Preface

Event Handling Guide for iOS

First, a diagram of the inheritance hierarchy.

Some new concepts

UIKit framework

The UIKit framework provides the classes needed to build and manage the user interface of an iOS application: application objects, event handling, the drawing model, windows, views, controls designed for the touch screen, and more. (PS: think of it as an API library for building and manipulating the interface.)

Responder chain

When iOS captures an event, it passes the event to the object that looks best positioned to handle it; a touch event, for example, goes to the view the finger touched. If that object cannot handle the event, iOS passes the event on to the next object, and so on, until an object is found that can respond to it. This sequence of objects is called the responder chain: iOS hands the event, and the responsibility for handling it, along the chain from the most specific object outward. This mechanism makes event handling both coordinated and dynamic. Below is one of the most common responder chains.

The responder

In iOS, every object that responds to events is a subclass of UIResponder. When an event arrives, the system delivers it to the most appropriate responder, which becomes the first responder. Events that the first responder does not handle are passed along the responder chain via UIResponder's nextResponder property; you can override this property to change how events are forwarded. When an event arrives and the first responder does not handle the message, the event is passed up the responder chain.

UIResponder, UIEvent, and UIControl: introduction, relationship, and differences

UIResponder

The classes we are most familiar with, UIApplication, UIView, and UIViewController, all inherit directly from UIResponder. UIResponder is the class that responds to user actions by handling events (UIEvent). For touches, presses, and motion events, UIResponder provides callback methods for the begin, move, end, and cancel phases. The cancel callback is invoked only when the app is forced to quit or an incoming call interrupts it.

Let's take the touch event as an example.

We can test for ourselves when these four methods are called: touchesBegan fires when the finger touches down, touchesMoved fires while dragging across the screen, touchesEnded fires when the finger is lifted, and touchesCancelled is called only when the app is forced to quit or an incoming call interrupts the touch. Note: if two fingers touch a view at the same time, the view calls touchesBegan only once, and the touches argument contains two UITouch objects. If the two fingers touch one after the other, the view calls touchesBegan twice, and each time the touches argument contains only one UITouch object.
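A minimal sketch for trying this out (TouchLoggingView is an illustrative name, not from the original article): a UIView subclass that overrides the four callbacks and logs how many UITouch objects each call carries.

#import <UIKit/UIKit.h>

@interface TouchLoggingView : UIView
@end

@implementation TouchLoggingView

// For two-finger tests, remember to set multipleTouchEnabled = YES on the view.

- (void)touchesBegan:(NSSet<UITouch *> *)touches withEvent:(UIEvent *)event {
    // Called once even if several fingers land at the same time;
    // touches then contains one UITouch per finger.
    NSLog(@"touchesBegan, touch count: %lu", (unsigned long)touches.count);
}

- (void)touchesMoved:(NSSet<UITouch *> *)touches withEvent:(UIEvent *)event {
    NSLog(@"touchesMoved");
}

- (void)touchesEnded:(NSSet<UITouch *> *)touches withEvent:(UIEvent *)event {
    NSLog(@"touchesEnded");
}

- (void)touchesCancelled:(NSSet<UITouch *> *)touches withEvent:(UIEvent *)event {
    // Fires when the system interrupts the touch, e.g. an incoming call.
    NSLog(@"touchesCancelled");
}

@end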

UIEvent

A UIEvent is an object, captured from the hardware, that represents the user's operation of the device. There are three types of events: touch events, motion events, and remote-control events.

UIControl

If a UIResponder instance can respond to and handle arbitrary events, then a UIEvent represents a single event of exactly one type: a touch, a remote-control action, a press, or, in a corresponding subclass, something like a device shake. (To handle system events, subclasses of UIResponder can override the corresponding methods so that they handle specific UIEvent types.)

In a way, you can think of UIEvents as notifications. While UIEvent can be subclassed and sendEvent: can be called manually, that does not mean you should, at least not in the normal way. Since you cannot create custom event types, sending your own events can be a problem: unexpected responders may "handle" your event incorrectly. You can still use them, though; in addition to system events, a UIResponder can respond to any "event" expressed as a selector.

While UIResponder can detect touch events perfectly well, handling them is not easy. So how do you differentiate between different kinds of touches?

That is exactly what UIControl is good at. UIControl is, in a sense, a wrapper around UIResponder that bundles the gesture handling with the view. This is why UIButton can detect double taps, single taps and so on: it is all defined in UIControl, through the control events below.

typedef NS_OPTIONS(NSUInteger, UIControlEvents) {
    UIControlEventTouchDown                                          = 1 <<  0,
    UIControlEventTouchDownRepeat                                    = 1 <<  1,  // touch down with tap count > 1
    UIControlEventTouchDragInside                                    = 1 <<  2,
    UIControlEventTouchDragOutside                                   = 1 <<  3,
    UIControlEventTouchDragEnter                                     = 1 <<  4,
    UIControlEventTouchDragExit                                      = 1 <<  5,
    UIControlEventTouchUpInside                                      = 1 <<  6,
    UIControlEventTouchUpOutside                                     = 1 <<  7,
    UIControlEventTouchCancel                                        = 1 <<  8,
    UIControlEventValueChanged                                       = 1 << 12,
    UIControlEventPrimaryActionTriggered NS_ENUM_AVAILABLE_IOS(9_0)  = 1 << 13,  // semantic action, e.g. touch up inside for buttons
    UIControlEventEditingDidBegin                                    = 1 << 16,
    UIControlEventEditingChanged                                     = 1 << 17,
    UIControlEventEditingDidEnd                                      = 1 << 18,
    UIControlEventEditingDidEndOnExit                                = 1 << 19,
    UIControlEventAllTouchEvents                                     = 0x00000FFF,
    UIControlEventAllEditingEvents                                   = 0x000F0000,
    UIControlEventApplicationReserved                                = 0x0F000000,
    UIControlEventSystemReserved                                     = 0xF0000000,
    UIControlEventAllEvents                                          = 0xFFFFFFFF
};
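To see how UIControl turns these event masks into callbacks, here is a minimal sketch inside a view controller; the button setup and the buttonTapped: handler are illustrative, not part of the original article.

- (void)viewDidLoad {
    [super viewDidLoad];

    UIButton *button = [UIButton buttonWithType:UIButtonTypeSystem];
    button.frame = CGRectMake(100, 100, 120, 44);
    [button setTitle:@"Tap me" forState:UIControlStateNormal];

    // UIControl translates raw touches into semantic control events and
    // dispatches them via target-action; only the events named in the mask
    // reach the handler below.
    [button addTarget:self
               action:@selector(buttonTapped:)
     forControlEvents:UIControlEventTouchUpInside];
    [self.view addSubview:button];
}

- (void)buttonTapped:(UIButton *)sender {
    NSLog(@"UIControlEventTouchUpInside fired");
}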

Event generation, transmission and response process

UIApplication -> UIWindow -> recursively find the most appropriate control to handle the event -> that control calls its touches methods -> check whether the touches methods are implemented -> if not, the event is passed by default to the next responder -> look for that responder -> if none is found, the event is discarded.

Delivery process

  1. When a touch occurs, the pressure is converted into an electrical signal, and iOS generates a UIEvent object that records the time and type of the event.
  2. When a system event such as a tap on the screen is detected, UIKit internally creates a UIEvent instance and records when the event was generated and what type it is. The system then adds the event to an event queue managed by UIApplication.
  3. UIApplication takes the first event out of the event queue and dispatches it for processing, normally sending it first to the application's main window (keyWindow), as the sketch after this list shows.
  4. The main window searches the view hierarchy for the most appropriate view to handle the touch event.
  5. Once the appropriate view is found, its touches methods are called to handle the event: touchesBegan:, touchesMoved:, touchesEnded:, and so on. If the most suitable responder is found but does not implement the touches methods, the touches method of its next responder is called instead.
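One way to observe step 3 is to subclass UIWindow and override sendEvent:, which every event passes through on its way from UIApplication to the hit-test view. This is a minimal sketch, not from the original article; LoggingWindow is an illustrative name.

#import <UIKit/UIKit.h>

@interface LoggingWindow : UIWindow
@end

@implementation LoggingWindow

- (void)sendEvent:(UIEvent *)event {
    // Every event that UIApplication dispatches to this window goes through here.
    NSLog(@"window received event of type %ld", (long)event.type);
    [super sendEvent:event]; // keep normal delivery going
}

@end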

If a parent control cannot receive touch events, then its child controls cannot receive them either. A UIView can receive touch events only when all of the following hold:

  1. Its userInteractionEnabled property is YES, so the control can interact with the user.
  2. Its hidden property is NO; an invisible control naturally cannot be touched.
  3. Its alpha value is not in the range 0 to 0.01.
  4. The touch point lies within the bounds of the UIView.

Hit-test

A user touch event is first intercepted and packaged by the system. The system then recursively traverses the view hierarchy until it finds the most appropriate responder to handle the event; this process is known as hit-testing.

Hit-testing first checks whether the location of the touch lies within the bounds of the topmost view on the screen. If it does, the same check is performed on that view's subviews. The deepest view in the view tree that contains the touch point is the hit-test view we are looking for. Once iOS has identified the hit-test view, it passes the touch event to it for handling.

So let’s say the user touches E

The checks happen in this order:

  • The touch point is within view A's bounds, so its subviews B and C are checked next.
  • The touch point is not within B's bounds but is within C's, so C's subviews D and E are checked.
  • The touch point is not within D's bounds but is within E's. E is the deepest view in the view tree that contains the touch point, so E becomes the hit-test view.

The hitTest:withEvent: method process:

  • We first call the current view’s pointInside:withEvent: method to determine whether the touch point is in the current view:

If pointInside:withEvent: returns NO, the touch point is not in the current view, and hitTest:withEvent: returns nil. If pointInside:withEvent: returns YES, the touch point is in the current view, so we iterate over its subviews, calling hitTest:withEvent: on each and repeating the previous step. The subviews are traversed from top to bottom, that is, from the end of the subviews array, until some subview's hitTest:withEvent: returns a non-nil object or all subviews have been checked. The UIApplication object maintains its own stack of responders; whenever pointInside:withEvent: returns YES, that responder is pushed onto the stack. There are no controllers in the delivery chain, because controllers themselves have no concept of size; but there is a controller in the responder chain, because a controller inherits from UIResponder. The controller is therefore something of an exception: it gets onto the responder stack without needing its own pointInside: check.

  • If some subview's hitTest:withEvent: method returns a non-nil object, the current view's hitTest:withEvent: method returns that object and processing is complete.

If all the subviews' hitTest:withEvent: methods return nil, then the current view's hitTest:withEvent: method returns the current view itself.

Hit-test internal implementation

- (UIView *)hitTest:(CGPoint)point withEvent:(UIEvent *)event
{
    // A view is ruled out if it is hidden, has user interaction disabled, is (almost)
    // fully transparent, or does not contain the touch point. (The real implementation
    // also checks a private _isAnimatedUserInteractionEnabled flag.)
    if (self.hidden || !self.userInteractionEnabled || self.alpha < 0.01 || ![self pointInside:point withEvent:event]) {
        return nil;
    } else {
        // Traverse the subviews in reverse order, i.e. topmost first.
        for (UIView *subview in [self.subviews reverseObjectEnumerator]) {
            UIView *hitView = [subview hitTest:[subview convertPoint:point fromView:self] withEvent:event];
            if (hitView) {
                return hitView;
            }
        }
        return self;
    }
}

The subviews are traversed in reverse order, from lastObject to firstObject, that is, from the topmost subview down; as soon as a suitable responder view is found, the traversal stops.

Application -> window -> root view -> ... -> lowest view

The response process

The responder chain

  • The response chain is usually called the responder chain.
  • In our app, all views are organized in a tree hierarchy, and every view has its own superView, including the controller's topmost view (the controller's self.view).
  • When a view is added to a superView, its nextResponder property is pointed at that superView.
  • When the controller is initialized, the nextResponder of self.view (the topmost view) is pointed at the controller, and the controller's nextResponder is pointed at the superView of self.view.
  • In this way, the whole app is chained together by nextResponder, which is what we call the responder chain.
  • The responder chain is therefore a virtual chain: no object stores such a chain, it is linked together purely by the UIResponder property below.

@property(nonatomic, readonly, nullable) UIResponder *nextResponder;
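A small debugging sketch (the dumpResponderChain: method name is invented for illustration) that follows this property from any responder and logs every link in the chain:

// Walks the responder chain from a given responder and logs every link.
- (void)dumpResponderChain:(UIResponder *)responder {
    UIResponder *current = responder;
    while (current) {
        NSLog(@"%@", NSStringFromClass([current class]));
        current = current.nextResponder; // nil once the end of the chain is reached
    }
}

// Example usage: [self dumpResponderChain:someSubview];
// Typical output (one line per responder): the subview's class, its superviews,
// the view controller, UIWindow, UIApplication.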

The response process

If a view does not implement the touches methods, by default it passes the event up the responder chain to its next responder for processing. So how do you determine who the current responder's next responder is?

  • If the current view is the controller's view, the next responder is the controller.
  • If it is not the controller's view, the next responder is its superview.

When a View can handle a touch event, it responds to the event.

The system calls the four methods described above

- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event;
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event;
- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event;
- (void)touchesCancelled:(NSSet *)touches withEvent:(UIEvent *)event;

Thanks to the responder chain, you can call super in the touches methods so that multiple responders respond to the same event.
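For example, here is a minimal sketch (ForwardingView is an illustrative name) of a view that handles touchesBegan: itself and still lets the next responder see the event by calling super:

#import <UIKit/UIKit.h>

@interface ForwardingView : UIView
@end

@implementation ForwardingView

- (void)touchesBegan:(NSSet<UITouch *> *)touches withEvent:(UIEvent *)event {
    NSLog(@"ForwardingView handled touchesBegan");
    // Calling super forwards the event along the responder chain,
    // so the superview or controller can respond to it as well.
    [super touchesBegan:touches withEvent:event];
}

@end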

It is important to note that there are no controllers in the delivery chain, since controllers themselves have no concept of size. But there is a controller in the responder chain, because a controller inherits from UIResponder. (See the hit-test discussion above.)

Initial view -> superview -> ... -> view controller -> window -> application

Responder chain related questions

Expanding a button's tap area

Solution: add a category on (or subclass) the button and override the pointInside: or hitTest: method.

Override the pointInside method

- (BOOL)pointInside:(CGPoint)point withEvent:(UIEvent *)event {
    // Expand the hit area by 50 points on every side.
    CGRect bounds = CGRectInset(self.bounds, -50, -50);
    return CGRectContainsPoint(bounds, point);
}

We are already familiar with bounds: frame is the view's position relative to its superview, while bounds describes the view's own coordinate system. Both have a size and an origin, and the bounds origin starts at (0, 0) unless you change it with setBounds:. So what does it mean when the first two values passed to setBounds: are negative? Why does an origin of (-30, -30) shift the content toward the lower right?

This is because setBounds: forces the upper-left corner of the view (view1) to take the coordinate (-30, -30) in its own coordinate system. The point (0, 0) of view1's coordinate system, and with it anything placed there, is therefore shifted toward the lower right by (30, 30).

The same applies to CGRectInset here: with dx = -50 and dy = -50, the rect's origin is shifted by (dx, dy), its width is reduced by 2 * dx, and its height by 2 * dy.

In short: positive inset values shrink the rect, negative values enlarge it.

- (void)demoTest {
    UIView *view1 = [[UIView alloc] initWithFrame:CGRectMake(100, 100, 200, 200)];
    view1.backgroundColor = [UIColor redColor];
    [self.view addSubview:view1];
    NSLog(@"111=%@", NSStringFromCGRect(view1.frame));
    // 111={{100, 100}, {200, 200}}

    // A positive inset of 30 points shrinks the rect on every side.
    CGRect rect = CGRectInset(view1.frame, 30, 30);
    UIView *view2 = [[UIView alloc] initWithFrame:rect];
    view2.backgroundColor = [UIColor yellowColor];
    [self.view addSubview:view2];
    NSLog(@"222=%@", NSStringFromCGRect(view2.frame));
    // 222={{130, 130}, {140, 140}}
}

The last line, CGRectContainsPoint(bounds, point), is a function returning a Boolean that indicates whether the rectangle contains the given point.

Override the hitTest: method

- (UIView *)hitTest:(CGPoint)point withEvent:(UIEvent *)event {
    // Expand the hit area by 50 points on every side.
    CGRect bounds = CGRectInset(self.bounds, -50, -50);
    if (CGRectContainsPoint(bounds, point)) {
        return self;
    } else {
        return nil;
    }
}

Passing an event through to another view

Solution:

  • To make button 2 respond when the user taps inside button 1's area, you must override button 1's hitTest: method.
  • In the hitTest: method, convert the touch point from button 1's coordinate system to button 2's, so that button 2's upper-left corner becomes the origin.
  • After the conversion, check whether the touch point lies inside button 2. If it does, return button 2 directly (a more rigorous approach is to call button 2's hitTest: method); if not, call the super implementation and let the system handle it.

- (UIView *)hitTest:(CGPoint)point withEvent:(UIEvent *)event {
    // Convert the touch point into button 2's coordinate system.
    CGPoint pointTest = [self convertPoint:point toView:self.button];
    if ([self.button pointInside:pointTest withEvent:event]) {
        return self.button;
    } else {
        return [super hitTest:point withEvent:event];
    }
}