UIResponder
Abstract interface for responding to and handling events. (UIResponder is an important abstract class that inherits from NSObject and is the parent of almost every UI class in UIKit, with a few special exceptions such as UIImage, which inherits directly from NSObject. It is here that the connection between NSObject and the UI layer is established.)
UIKIT_EXTERN API_AVAILABLE(ios(2.0)) @interface UIResponder : NSObject <UIResponderStandardEditActions>
Responder objects (UIResponder instances) form the event-handling backbone of UIKit applications. Many key objects are also responders, including UIApplication objects, UIViewController objects, and all UIView objects (including UIWindow). When events occur, UIKit dispatches them to your application’s responder objects for processing.
There are several kinds of events, including touch events, motion events, remote-control events, and press events. To handle a particular type of event, a responder must override the corresponding methods. To handle touch events, for example, a responder implements the touchesBegan:withEvent:, touchesMoved:withEvent:, touchesEnded:withEvent:, and touchesCancelled:withEvent: methods. In the case of touches, the responder uses the UIKit-provided event information (UIEvent) to track changes to those touches and to update the application’s interface appropriately.
In addition to handling events, UIKit responders manage the forwarding of unhandled events to other parts of the application. If a given responder cannot handle an event, it forwards the event to the next responder in the responder chain (the documentation appears to be wrong here; it says “next event”). UIKit manages the responder chain dynamically, using predefined rules to determine which responder object receives an event next. For example, a view forwards events to its superview, and the root view of a hierarchy forwards events to its view controller.
A responder handles UIEvent objects, but it can also accept custom input through an input view. The system keyboard is the most obvious example of an input view. When the user taps a UITextField or UITextView object on screen, that view becomes the first responder and displays its input view, the system keyboard. Similarly, you can create custom input views and display them when other responders become active. To associate a custom input view with a responder, assign that view to the responder’s inputView property.
For information about Responders and Responder chains, see Using Responders and the Responder Chain to Handle Events. (We’ll learn more about this document later.)
Managing the Responder Chain
nextResponder
Returns the next responder in the responder chain or nil if there is no next responder.
@property(nonatomic, readonly, nullable) UIResponder *nextResponder;
Return Value: the next object in the responder chain, or nil if this is the last object in the chain.
The UIResponder class does not automatically store or set the next responder, so this property returns nil by default. Subclasses must override it and return an appropriate next responder. For example, UIView implements this property and returns the UIViewController object that manages it (if there is one) or its superview (if there is no view controller). UIViewController similarly implements it and returns its view’s superview (in a quick test, this returned nil). UIWindow returns the application object (or its UIWindowScene). The shared UIApplication object usually returns nil, but it returns its app delegate if that object is a subclass of UIResponder and has not already been called on to handle the event.
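As a sanity check, you can walk the chain yourself using nextResponder. Below is a minimal debugging sketch; the LogResponderChain function is our own helper, not part of UIKit:

```objc
#import <UIKit/UIKit.h>

// Hypothetical debugging helper: prints every responder from `responder`
// up to the end of the chain (usually UIApplication, then nil).
static void LogResponderChain(UIResponder *responder) {
    UIResponder *current = responder;
    while (current != nil) {
        NSLog(@"%@", NSStringFromClass([current class]));
        current = current.nextResponder; // nil when the chain ends
    }
}

// Usage, e.g. from a view controller:
//   LogResponderChain(self.view);
// You would typically see the view, its view controller, the window,
// and the application object, in that order.
```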
isFirstResponder
Returns a Boolean value indicating whether this object is the first responder.
@property(nonatomic, readonly) BOOL isFirstResponder;
Return Value: YES if the receiver is the first responder; otherwise, NO.
UIKit dispatches some types of events, such as motion events (UIEventTypeMotion), to the first responder first.
canBecomeFirstResponder
Returns a Boolean value indicating whether this object can be the first responder.
@property(nonatomic, readonly) BOOL canBecomeFirstResponder; // default is NO
Return Value: YES if the receiver can become the first responder; otherwise, NO.
By default, this method returns NO. A subclass must override it and return YES to be able to become first responder. Do not call this method on a view that is not currently in the active view hierarchy; the results are undefined.
becomeFirstResponder
Ask UIKit to make this object the first responder in its window.
- (BOOL)becomeFirstResponder;
Return Value: YES if this object is now the first responder; otherwise, NO.
Call this method when you want the current object to become the first responder. Calling it does not guarantee success: UIKit asks the current first responder to resign, which it might not do. If it does resign, UIKit then calls this object’s canBecomeFirstResponder method, which returns NO by default. If this object succeeds in becoming the first responder, subsequent events targeting the first responder are delivered to this object first, and UIKit attempts to display the object’s input view (if any).
Do not call this method on a view that is not part of an active view hierarchy. You can determine whether a view is on screen by checking its window property: if it contains a valid window, the view is part of an active view hierarchy; if it is nil, the view is not.
You can override this method in a custom responder to update the state of the object or perform some action, such as highlighting the selection. If you override this method, you must call super at some point in the implementation.
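A minimal sketch of such a custom responder, assuming a hypothetical HighlightView subclass that opts in to first-responder status and highlights itself (remember to call super):

```objc
#import <UIKit/UIKit.h>

// Hypothetical UIView subclass that becomes first responder on demand.
@interface HighlightView : UIView
@end

@implementation HighlightView

// Opt in; the default implementation returns NO.
- (BOOL)canBecomeFirstResponder {
    return YES;
}

// Update state when we actually become the first responder.
// You must call super at some point in the implementation.
- (BOOL)becomeFirstResponder {
    BOOL didBecome = [super becomeFirstResponder];
    if (didBecome) {
        self.layer.borderWidth = 2.0; // e.g. highlight the selection
    }
    return didBecome;
}

@end
```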
canResignFirstResponder
Returns a Boolean value indicating whether the receiver is willing to give up its first-responder status.
@property(nonatomic, readonly) BOOL canResignFirstResponder; // default is YES
Return Value: YES if the receiver can give up first-responder status; otherwise, NO.
By default, this method returns YES. You can override it in a custom responder and return a different value as needed. For example, a text field (UITextField) containing invalid content might return NO to ensure that the user corrects the content first.
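A sketch of that text-field idea; ValidatingTextField and isContentValid are our own names, and the validation rule here is purely illustrative:

```objc
#import <UIKit/UIKit.h>

// Hypothetical text field that keeps focus until its content is valid.
@interface ValidatingTextField : UITextField
- (BOOL)isContentValid; // hypothetical validation helper
@end

@implementation ValidatingTextField

// Illustrative rule: the field must not be empty.
- (BOOL)isContentValid {
    return self.text.length > 0;
}

// Refuse to resign (and so keep the keyboard up) while content is invalid.
- (BOOL)canResignFirstResponder {
    return [self isContentValid];
}

@end
```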
resignFirstResponder
Notifies this object that it has been asked to relinquish its identity as the first responder in its window.
- (BOOL)resignFirstResponder;
The default implementation returns YES, giving up first-responder status. You can override this method in a custom responder to update the object’s state or perform other actions, such as removing the highlight from a selection. You can also return NO, refusing to give up first-responder status. If you override this method, you must call super (the superclass implementation) at some point in your code. (Remember that UITextField calls resignFirstResponder to hide the system keyboard: giving up first-responder status dismisses its associated input view.)
Responding to Touch Events
In general, all responders that perform custom touch handling should override all four of these methods. For every touch it is handling (that is, every touch it received in touchesBegan:withEvent:), the responder eventually receives either touchesEnded:withEvent: or touchesCancelled:withEvent:. You must handle cancelled touches (touchesCancelled:withEvent:) to ensure correct behavior in your application; failure to do so is likely to result in incorrect behavior or crashes.
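A sketch of a view that overrides all four methods to track a single touch; TouchTrackingView and lastTouchPoint are our own names, and because this view handles the touches itself it deliberately does not call super:

```objc
#import <UIKit/UIKit.h>

// Hypothetical view that tracks the location of a single touch.
@interface TouchTrackingView : UIView
@property (nonatomic) CGPoint lastTouchPoint;
@end

@implementation TouchTrackingView

- (void)touchesBegan:(NSSet<UITouch *> *)touches withEvent:(UIEvent *)event {
    self.lastTouchPoint = [touches.anyObject locationInView:self];
}

- (void)touchesMoved:(NSSet<UITouch *> *)touches withEvent:(UIEvent *)event {
    self.lastTouchPoint = [touches.anyObject locationInView:self];
    [self setNeedsDisplay]; // redraw feedback at the new location
}

- (void)touchesEnded:(NSSet<UITouch *> *)touches withEvent:(UIEvent *)event {
    // Commit whatever the touch sequence was building up.
}

// Always implement cancellation: clean up the same state as touchesEnded.
- (void)touchesCancelled:(NSSet<UITouch *> *)touches withEvent:(UIEvent *)event {
    self.lastTouchPoint = CGPointZero;
    [self setNeedsDisplay];
}

@end
```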
touchesBegan:withEvent:
Tells this object that one or more new touches occurred in a view or window.
- (void)touchesBegan:(NSSet<UITouch *> *)touches withEvent:(nullable UIEvent *)event;
Touches: A set of UITouch instances that represent the touches for the starting phase of the event. For touches in a view, this set contains only one touch by default; to receive multiple touches, you must set the view’s multipleTouchEnabled property to YES. Event: The event to which the touches belong.
UIKit calls this method when it detects new touches in a view or window. Many UIKit classes override this method and use it to handle the corresponding touch events. The default implementation of this method forwards the message up the responder chain. When creating your own subclasses, call super to forward any events that you don’t handle yourself. For example,
[super touchesBegan:touches withEvent:event];
If you override this method without calling super (a common use pattern), you must also override the other methods for handling touch events, even if your implementations do nothing.
touchesMoved:withEvent:
Tells the responder when one or more touches associated with an event change.
- (void)touchesMoved:(NSSet<UITouch *> *)touches withEvent:(nullable UIEvent *)event;
Touches: A set of UITouch instances that represent the touches whose values changed. These touches belong to the specified event. For touches in a view, this set contains only one touch by default; to receive multiple touches, you must set the view’s multipleTouchEnabled property to YES. Event: The event to which the touches belong.
UIKit calls this method when the location or force of a touch changes. Many UIKit classes override this method and use it to handle the corresponding touch events. The default implementation of this method forwards the message up the responder chain. When creating your own subclasses, call super to forward any events that you don’t handle yourself. For example,
[super touchesMoved:touches withEvent:event];
If you override this method without calling super (a common use pattern), you must also override the other methods for handling touch events, even if your implementations do nothing.
touchesEnded:withEvent:
Notifies the responder when one or more fingers are lifted from the View or window.
- (void)touchesEnded:(NSSet<UITouch *> *)touches withEvent:(nullable UIEvent *)event;
Touches: A set of UITouch instances that represent the touches for the ending of the event. For touches in a view, this set contains only one touch by default; to receive multiple touches, you must set the view’s multipleTouchEnabled property to YES. Event: The event to which the touches belong.
UIKit calls this method when a finger or Apple Pencil no longer touches the screen. Many UIKit classes override this method and use it to clean up state involved in handling the corresponding touch events. The default implementation of this method forwards the message up the responder chain. When creating your own subclasses, call super to forward any events that you don’t handle yourself. For example,
[super touchesEnded:touches withEvent:event];
If you override this method without calling super (a common use pattern), you must also override the other methods for handling touch events, even if your implementations do nothing.
touchesCancelled:withEvent:
Tells the responder when a system event (such as a system alert or an incoming phone call) cancels a touch sequence.
- (void)touchesCancelled:(NSSet<UITouch *> *)touches withEvent:(nullable UIEvent *)event;
Touches: A set of UITouch instances that represent the touches for the ending of the event. For touches in a view, this set contains only one touch by default; to receive multiple touches, you must set the view’s multipleTouchEnabled property to YES. Event: The event to which the touches belong.
UIKit calls this method when it receives a system interruption requiring cancellation of the touch sequence. An interruption is anything that causes the application to become inactive or causes the view handling the touch events to be removed from its window. Your implementation of this method should clean up any state associated with handling the touch sequence. The default implementation of this method forwards the message up the responder chain. When creating your own subclasses, call super to forward any events that you don’t handle yourself. For example,
[super touchesCancelled:touches withEvent:event];
If you override this method without calling super (a common use pattern), you must also override the other methods for handling touch events, even if those implementations are empty stubs.
touchesEstimatedPropertiesUpdated:
Tells the responder that updated values were received for previously estimated properties, or that an update is no longer expected.
- (void)touchesEstimatedPropertiesUpdated:(NSSet<UITouch *> *)touches API_AVAILABLE(ios(9.1));
Touches: An array of UITouch objects containing the updated properties. In each touch object, UIKit updates the estimatedPropertiesExpectingUpdates property by removing the bit flag of each updated property.
When UIKit cannot report the actual values for a touch, it delivers estimated values and sets the appropriate bits in the UITouch object’s estimatedProperties and estimatedPropertiesExpectingUpdates properties. When it later receives updates for properties in the estimatedPropertiesExpectingUpdates set, UIKit calls this method to deliver those updates. UIKit also calls this method if one or more updates are no longer expected. Use this method to update your application’s internal data structures with the new values that UIKit provides.
When implementing this method, use the estimationUpdateIndex property of the UITouch objects in the touches parameter to locate the original data in your application. After locating the data, apply the new values from the touch objects to it. You can determine which touch properties were updated by checking the touch object’s estimatedPropertiesExpectingUpdates bit mask; updated properties are no longer included in the bit mask.
Touch-related attributes may still be estimated due to hardware considerations. For example, when the Apple Pencil is near the edge of the screen, the sensor may not be able to determine its exact height or azimuth. In these cases, the estimatedProperties property continues to store a list of properties whose values are only estimates.
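A sketch of how an app might reconcile estimated Apple Pencil data with refined values; SketchView and the strokeSamples dictionary (keyed by estimationUpdateIndex) are our own assumptions, not UIKit API:

```objc
#import <UIKit/UIKit.h>

@interface SketchView : UIView
// Hypothetical storage: estimationUpdateIndex → mutable sample data.
@property (nonatomic, strong)
    NSMutableDictionary<NSNumber *, NSMutableDictionary *> *strokeSamples;
@end

@implementation SketchView

- (void)touchesEstimatedPropertiesUpdated:(NSSet<UITouch *> *)touches {
    for (UITouch *touch in touches) {
        NSNumber *index = touch.estimationUpdateIndex;
        if (index == nil) { continue; }
        NSMutableDictionary *sample = self.strokeSamples[index];
        if (sample == nil) { continue; }
        // A property whose flag is no longer pending has been updated
        // (or will never be), so apply the touch's current value.
        if (!(touch.estimatedPropertiesExpectingUpdates & UITouchPropertyForce)) {
            sample[@"force"] = @(touch.force);
        }
        if (!(touch.estimatedPropertiesExpectingUpdates & UITouchPropertyAltitude)) {
            sample[@"altitude"] = @(touch.altitudeAngle);
        }
    }
    [self setNeedsDisplay]; // redraw the stroke with refined values
}

@end
```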
Responding to Motion Events
motionBegan:withEvent:
Tells the receiver that the motion event has started.
- (void)motionBegan:(UIEventSubtype)motion withEvent:(nullable UIEvent *)event API_AVAILABLE(ios(3.0));
Motion: An event-subtype constant indicating the type of motion. A common motion is shaking, indicated by UIEventSubtypeMotionShake. Event: The UIEvent object representing the event associated with the motion.
UIKit informs the responder only when a motion event starts and ends; it does not report intermediate shakes. Motion events are delivered initially to the first responder and are forwarded up the responder chain as needed. The default implementation of this method forwards the message up the responder chain.
motionEnded:withEvent:
Tells the receiver that the motion event has ended.
- (void)motionEnded:(UIEventSubtype)motion withEvent:(nullable UIEvent *)event API_AVAILABLE(ios(3.0));
Motion: An event-subtype constant indicating the type of motion. A common motion is shaking, indicated by UIEventSubtypeMotionShake. Event: The UIEvent object representing the event associated with the motion.
UIKit informs the responder only when a motion event starts and ends; it does not report intermediate shakes. Motion events are delivered initially to the first responder and are forwarded up the responder chain as needed. The default implementation of this method forwards the message up the responder chain.
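The classic use is shake detection. A sketch in a view controller, which must be first responder to receive motion events (ShakeViewController and the log message are our own):

```objc
#import <UIKit/UIKit.h>

@interface ShakeViewController : UIViewController
@end

@implementation ShakeViewController

// Motion events go to the first responder first.
- (BOOL)canBecomeFirstResponder {
    return YES;
}

- (void)viewDidAppear:(BOOL)animated {
    [super viewDidAppear:animated];
    [self becomeFirstResponder];
}

// Act only when the shake gesture completes.
- (void)motionEnded:(UIEventSubtype)motion withEvent:(UIEvent *)event {
    if (motion == UIEventSubtypeMotionShake) {
        NSLog(@"Shake detected"); // e.g. trigger shake-to-undo
    } else {
        [super motionEnded:motion withEvent:event]; // forward what we don't handle
    }
}

@end
```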
motionCancelled:withEvent:
Tells the receiver that the motion event has been canceled.
- (void)motionCancelled:(UIEventSubtype)motion withEvent:(nullable UIEvent *)event API_AVAILABLE(ios(3.0));
Motion: An event-subtype constant indicating the type of motion. A common motion is shaking, indicated by UIEventSubtypeMotionShake. Event: The UIEvent object representing the event associated with the motion.
UIKit calls this method when it receives an interruption requiring cancellation of the motion event. An interruption is anything that causes the application to become inactive or causes the view handling the motion event to be removed from its window. UIKit may also call this method if the shaking goes on too long. All responders that handle motion events should implement this method and, in the implementation, clean up all state information related to handling the motion event. (Do the necessary cleanup when a motion event is interrupted.)
The default implementation of this method forwards the message to the responder chain.
Responding to Press Events
In general, all responders that perform custom press handling should override all four of these methods.
For every press it is handling (that is, every press it received in pressesBegan:withEvent:), the responder eventually receives either pressesEnded:withEvent: or pressesCancelled:withEvent:. pressesChanged:withEvent: is called for presses that provide an analog value (such as thumbsticks or analog push buttons).
You must handle cancelled presses to ensure correct behavior in your application; failure to do so is likely to result in incorrect behavior or crashes.
pressesBegan:withEvent:
Tells this object when a physical button is first pressed.
- (void)pressesBegan:(NSSet<UIPress *> *)presses withEvent:(nullable UIPressesEvent *)event API_AVAILABLE(ios(9.0));
Presses: A set of UIPress instances representing the new presses that occurred. The phase of each press is set to UIPressPhaseBegan. Event: The event to which the presses belong.
UIKit calls this method when the user presses a new button. Use this method to determine which button was pressed and what action to take.
The default implementation of this method forwards the message to the responder chain. When creating your own subclass, call super to forward all events that you cannot handle on your own.
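On tvOS, for example, you might inspect the press type; RemoteAwareViewController and the logged actions below are illustrative assumptions:

```objc
#import <UIKit/UIKit.h>

@interface RemoteAwareViewController : UIViewController
@end

@implementation RemoteAwareViewController

- (void)pressesBegan:(NSSet<UIPress *> *)presses
           withEvent:(UIPressesEvent *)event {
    BOOL handled = NO;
    for (UIPress *press in presses) {
        if (press.type == UIPressTypeSelect) {
            NSLog(@"Select pressed");    // e.g. activate the focused item
            handled = YES;
        } else if (press.type == UIPressTypePlayPause) {
            NSLog(@"Play/Pause pressed");
            handled = YES;
        }
    }
    if (!handled) {
        // Forward anything we did not handle ourselves.
        [super pressesBegan:presses withEvent:event];
    }
}

@end
```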
pressesChanged:withEvent:
Tells this object when the value associated with a press changes.
- (void)pressesChanged:(NSSet<UIPress *> *)presses withEvent:(nullable UIPressesEvent *)event API_AVAILABLE(ios(9.0));
Presses: A set of UIPress instances containing the changed values. Event: The event to which the presses belong.
UIKit calls this method when the analog value associated with a button or fingertip changes. For example, it calls this method when a button’s simulated force value changes. Use this method to take whatever steps are needed in response to the change.
The default implementation of this method forwards the message to the responder chain. When creating your own subclass, call super to forward all events that you cannot handle on your own.
pressesEnded:withEvent:
Tells this object when a button is released.
- (void)pressesEnded:(NSSet<UIPress *> *)presses withEvent:(nullable UIPressesEvent *)event API_AVAILABLE(ios(9.0));
Presses: A set of UIPress instances representing the buttons the user is no longer pressing. The phase of each press is set to UIPressPhaseEnded. Event: The event to which the presses belong.
UIKit calls this method when the user stops pressing one or more buttons. Use this method to take whatever action is needed in response to the end of the press. The default implementation of this method forwards the message up the responder chain. When creating your own subclasses, call super to forward any events that you don’t handle yourself.
pressesCancelled:withEvent:
Tell this object when a system event cancels a press event (such as a low memory warning).
- (void)pressesCancelled:(NSSet<UIPress *> *)presses withEvent:(nullable UIPressesEvent *)event API_AVAILABLE(ios(9.0));
Presses: A set of UIPress instances representing the presses associated with the event. The phase of each press is set to UIPressPhaseCancelled. Event: The event to which the presses belong.
UIKit calls this method when it receives a system interruption requiring cancellation of the press sequence. An interruption is anything that causes the application to become inactive or causes the view handling the press events to be removed from its window. Your implementation of this method should clean up any state associated with handling the press sequence. Failure to handle cancellations can result in incorrect behavior or crashes.
The default implementation of this method forwards the message to the responder chain. When creating your own subclass, call super to forward all events that you cannot handle on your own.
Responding to Remote-Control Events
remoteControlReceivedWithEvent:
Tells this object when it receives a remote-control event.
- (void)remoteControlReceivedWithEvent:(nullable UIEvent *)event API_AVAILABLE(ios(4.0));
Event: An event object that encapsulates a remote control command. Remote control events are of the UIEventTypeRemoteControl type.
Remote-control events originate as commands from external accessories, including headsets. An application responds to these commands by controlling the audio or video media being presented to the user. A responder object that receives a remote-control event should examine the event’s subtype to determine the intended command, for example play (UIEventSubtypeRemoteControlPlay), and then proceed accordingly.
To allow the delivery of remote-control events, you must call UIApplication’s beginReceivingRemoteControlEvents method. To turn off delivery of remote-control events, call the endReceivingRemoteControlEvents method.
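Putting those pieces together, a sketch of a media view controller that receives remote-control events; PlayerViewController and the playback comments are our own assumptions:

```objc
#import <UIKit/UIKit.h>

@interface PlayerViewController : UIViewController
@end

@implementation PlayerViewController

- (BOOL)canBecomeFirstResponder {
    return YES;
}

- (void)viewDidAppear:(BOOL)animated {
    [super viewDidAppear:animated];
    // Opt in to remote-control event delivery, then become first responder.
    [[UIApplication sharedApplication] beginReceivingRemoteControlEvents];
    [self becomeFirstResponder];
}

- (void)viewWillDisappear:(BOOL)animated {
    [[UIApplication sharedApplication] endReceivingRemoteControlEvents];
    [self resignFirstResponder];
    [super viewWillDisappear:animated];
}

- (void)remoteControlReceivedWithEvent:(UIEvent *)event {
    if (event.type != UIEventTypeRemoteControl) { return; }
    switch (event.subtype) {
        case UIEventSubtypeRemoteControlPlay:  /* start playback */ break;
        case UIEventSubtypeRemoteControlPause: /* pause playback */ break;
        default: break;
    }
}

@end
```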
Managing Input Views
inputView
Custom input view that is displayed when the receiver becomes the first responder.
@property (nullable, nonatomic, readonly, strong) __kindof UIView *inputView API_AVAILABLE(ios(3.2));
This property is typically used to provide a view to replace the system-supplied keyboard displayed with UITextField and UITextView objects.
The value of this read-only property is nil. A responder object that requires a custom view to gather input from the user should redeclare this property as read-write and use it to manage its custom input view. When the receiver becomes the first responder, the responder infrastructure presents the specified input view automatically. Similarly, when the receiver resigns its first-responder status, the responder infrastructure automatically dismisses the specified input view.
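For example, a sketch that redeclares inputView (by overriding its getter) so a UIDatePicker replaces the keyboard; DateField and dateChanged: are our own names, and the text formatting is illustrative only:

```objc
#import <UIKit/UIKit.h>

// Hypothetical text field whose "keyboard" is a date picker.
@interface DateField : UITextField
@end

@implementation DateField {
    UIDatePicker *_picker; // lazily created custom input view
}

// Override the read-only property's getter to supply the custom view.
- (UIView *)inputView {
    if (_picker == nil) {
        _picker = [[UIDatePicker alloc] init];
        _picker.datePickerMode = UIDatePickerModeDate;
        [_picker addTarget:self
                    action:@selector(dateChanged:)
          forControlEvents:UIControlEventValueChanged];
    }
    return _picker; // shown automatically when this field becomes first responder
}

- (void)dateChanged:(UIDatePicker *)sender {
    self.text = sender.date.description; // illustrative formatting only
}

@end
```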
inputViewController
The Custom Input View Controller to use when the receiver becomes the first responder.
@property (nullable, nonatomic, readonly, strong) UIInputViewController *inputViewController API_AVAILABLE(ios(8.0));
This property is typically used to provide a view controller in place of the system-provided keyboard displayed for UITextField and UITextView objects.
The value of this read-only property is nil. If you want to provide a custom input view controller to replace the system keyboard in your application, redeclare this property as read and write in the UIResponder subclass. You can then use this property to manage custom input view controllers. When the receiver becomes the first responder, the responder infrastructure automatically renders the specified input view controller. Similarly, when a receiver abandons its first responder state, the responder infrastructure automatically cancels the specified input view controller.
inputAccessoryView
The custom Input Accessory view that is displayed when the receiver becomes the first responder.
@property (nullable, nonatomic, readonly, strong) __kindof UIView *inputAccessoryView API_AVAILABLE(ios(3.2));
This property is typically used to attach an accessory view to the system-supplied keyboard that is presented for UITextField and UITextView objects. (It appears at the top of the keyboard.)
The value of this read-only property is nil. If you want to attach custom controls to a system-supplied input view (such as the system keyboard) or to a custom input view (one you provide in the inputView property), redeclare this property as read-write in a UIResponder subclass. You can then use the property to manage your custom accessory view. When the receiver becomes the first responder, the responder infrastructure attaches the accessory view to the appropriate input view before displaying it.
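Similarly, a sketch that attaches a Done toolbar above the keyboard; NoteField and dismissKeyboard are our own names:

```objc
#import <UIKit/UIKit.h>

@interface NoteField : UITextField
@end

@implementation NoteField {
    UIToolbar *_accessory; // lazily created accessory bar
}

// Appears at the top of whatever input view is shown for this responder.
- (UIView *)inputAccessoryView {
    if (_accessory == nil) {
        _accessory = [[UIToolbar alloc] initWithFrame:CGRectMake(0, 0, 0, 44)];
        UIBarButtonItem *done = [[UIBarButtonItem alloc]
            initWithBarButtonSystemItem:UIBarButtonSystemItemDone
                                 target:self
                                 action:@selector(dismissKeyboard)];
        _accessory.items = @[done];
        [_accessory sizeToFit];
    }
    return _accessory;
}

- (void)dismissKeyboard {
    [self resignFirstResponder]; // hides the keyboard and the accessory view
}

@end
```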
inputAccessoryViewController
The custom input accessory view controller to display when the receiver becomes the first responder.
@property (nullable, nonatomic, readonly, strong) UIInputViewController *inputAccessoryViewController API_AVAILABLE(ios(8.0));
reloadInputViews
Updates the custom input and accessory views when the object is the first responder. (If called while the object is the first responder, this reloads the inputView, inputAccessoryView, and textInputMode; otherwise it is ignored.)
- (void)reloadInputViews API_AVAILABLE(ios(3.2));
Use this method to refresh the custom input view or input accessory view associated with the current object when it is the first responder. The views are replaced immediately, without animation. This method has no effect if the current object is not the first responder.
Getting the Undo Manager
undoManager
Returns the most recent shared Undo manager in the responder chain.
@property(nullable, nonatomic,readonly) NSUndoManager *undoManager API_AVAILABLE(ios(3.0));
By default, each window of an application has an undo manager: a shared object that manages undo and redo operations. However, any object in the responder chain can have its own custom undo manager. (For example, instances of UITextField have their own undo manager, which is cleared when the text field resigns first-responder status.) When you request an undo manager, the request goes up the responder chain, and the UIWindow object returns a usable instance.
You can add an undo manager to your view controller to perform undo and redo operations local to the managed view.
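A sketch of registering an undo action through the chain-provided manager; BoardViewController, itemPosition, and moveItemToPoint: are our own illustrative names:

```objc
#import <UIKit/UIKit.h>

@interface BoardViewController : UIViewController
@property (nonatomic) CGPoint itemPosition; // hypothetical model state
@end

@implementation BoardViewController

- (void)moveItemToPoint:(CGPoint)point {
    CGPoint oldPoint = self.itemPosition;
    // self.undoManager walks the responder chain; the window supplies a
    // shared manager if no closer responder provides its own.
    [self.undoManager registerUndoWithTarget:self
                                     handler:^(BoardViewController *target) {
        // Undoing calls back into this method, which re-registers the
        // inverse action, so redo works as well.
        [target moveItemToPoint:oldPoint];
    }];
    self.itemPosition = point;
}

@end
```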
That’s the UIResponder documentation. The most important things to remember: when we need to handle events ourselves, we override the touches… methods (responding to touch events), the presses… methods (responding to press events), and the motion… methods (responding to motion events); and the nextResponder property is what links the responder chain together.
Let’s take a look at the Responder Object document, which is marked out of date but still useful for reference.
Responder object
Responders are objects that can respond to events and handle them. All responder objects are instances of classes that ultimately inherit from UIResponder (iOS) or NSResponder (OS X). These classes declare a programmatic interface for event handling and define default responder behavior. An application’s visible objects are almost always responders (for example, windows, views, and controls), and the application object (and, in practice, the app delegate) is a responder as well. In iOS, view controllers (UIViewController objects) are also responder objects.
To receive an event, the responder must implement the appropriate event handling method, in some cases telling the application that it can be the first responder.
The First Responder Receives Some Events First
In an application, the responder object that receives many kinds of events first is called the first responder. It receives key events, motion events, action messages, and more. (Mouse events and multi-touch events first go to the view under the mouse pointer or finger; that view may or may not be the first responder.) The first responder is usually the view in the window that the application deems best suited to handle the event. To receive events, the responder must also declare its willingness to become first responder, which it does differently on each platform:
// OS X
- (BOOL)acceptsFirstResponder {
return YES;
}
// iOS
- (BOOL)canBecomeFirstResponder {
return YES;
}
In addition to receiving event messages, a responder can receive action messages that have no specified target. (Controls, such as buttons and sliders, send action messages when the user manipulates them.)
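In UIKit you can reproduce this nil-targeted behavior yourself: sending an action to a nil target asks the framework to walk the responder chain, starting at the first responder, until some responder implements the selector. A sketch (saveDocument: is a hypothetical action method, not a UIKit selector):

```objc
#import <UIKit/UIKit.h>

// Dispatch a nil-targeted action; UIKit searches the responder chain
// for an object that implements `saveDocument:` and invokes it there.
static void SendSaveAction(id sender) {
    [[UIApplication sharedApplication] sendAction:@selector(saveDocument:)
                                               to:nil    // nil = use the chain
                                             from:sender
                                         forEvent:nil];
}
```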
The Responder Chain Enables Cooperative Event Handling
If the first responder is unable to process the event or action message, it forwards it to the “next responder” in a chain of links called the responder chain. Responder chains allow responder objects to transfer responsibility for handling events or action messages to other objects in the application. If an object in the responder chain cannot handle an event or action, it passes the message to the next responder in the chain. Messages propagate up the chain to higher-level objects until they are processed. If it is not processed, the application will discard it.
The responder chain for iOS (left) and OS X (right)
The path of an event. The general path of the event on the responder chain starts from the first responder’s view or the view under the mouse pointer or finger. From there, it goes up into the view hierarchy, into the Window object, and then into the global application object. However, the responder chain for events in iOS adds a variation to this path: if the view is managed by a View Controller and the view cannot handle the event, the View Controller becomes the next responder.
The path of an action message. For action messages, both OS X and iOS extend the responder chain to other objects. In OS X, the chain of responder for action messages is different for document architecture-based applications, applications that use window controllers (NSWindowController), and applications that do not fit into either category. In addition, if an application on OS X has both a Key Window and a Main Window, the chain of responders through which the action message passes may involve a view hierarchy of both Windows.
Next is the Handling Touches in Your View document.
Handling Touches in Your View
If touch handling is intricately linked to the view’s content, use touch events directly on a UIView subclass.
If you don’t plan to use a gesture recognizer for custom views, you can handle touch events directly from the view itself. Because Views are responders, they can handle multi-touch events and many other types of events. When UIKit determines that a touch event occurred in a view, it will call the View’s touchesBegan:withEvent:, touchesMoved:withEvent:, or touchesEnded:withEvent: methods. You can override these methods in custom views and use them to provide responses to touch events. (from UIResponder)
The methods you override to handle touches in a view (or any responder) correspond to different phases of the touch-handling process. For example, Figure 1 illustrates the phases of a touch event. When a finger (or Apple Pencil) touches the screen, UIKit creates a UITouch object, sets its touch location to the appropriate point, and sets its phase property to UITouchPhaseBegan. When the same finger moves across the screen, UIKit updates the touch location and changes the touch object’s phase property to UITouchPhaseMoved. When the user lifts the finger off the screen, UIKit changes the phase property to UITouchPhaseEnded, and the touch sequence ends.
Figure 1 The Phases of a touch Event
Similarly, the system can cancel an in-progress touch sequence at any time, for example, when an incoming phone call interrupts the application. When that happens, UIKit notifies your view by calling its touchesCancelled:withEvent: method. Use that method to perform any needed cleanup of your view’s data structures.
UIKit creates a new UITouch object for each new finger on the touch screen. The touch itself is passed through the current UIEvent object. UIKit distinguishes between touches from fingers and Apple Pencil, and you can do different things with them.
Important: In its default configuration, a view receives only the first UITouch object associated with an event, even if more than one finger is touching the view. To receive the additional touches, you must set the view’s multipleTouchEnabled property to YES. You can also configure this property in Interface Builder using the Attributes inspector.
Next is the Using Responders and the Responder Chain to Handle Events document.
Use Responders and the Responder Chain to Handle Events
Learn how to handle events propagated through your application.
Applications use responder objects to receive and process events. A responder object is any instance of the UIResponder class. Common subclasses include UIView, UIViewController, and UIApplication. The responder receives raw event data and must either process the event or forward it to another responder object. When an application receives an event, UIKit automatically directs the event to the most appropriate responder object (called the first responder).
Unprocessed events pass from one responder to another in the active responder chain, which is a dynamic configuration of your application’s responder objects. Figure 1 shows the responders in an application whose interface consists of a label, a text field, a button, and two background views. The figure also shows how events move from one responder to the next along the responder chain.
Figure 1 Responder Chains in an APP
If the text field doesn’t handle the event, UIKit sends the event to the text field’s parent view, followed by the root view of the window. From the root view, the responder chain diverts to the owning view controller before directing the event to the window. If the window cannot handle the event, UIKit passes the event to the UIApplication object, and possibly to the app delegate, if that delegate is an instance of UIResponder and not already part of the responder chain.
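A small diagnostic helper can make this chain visible. This is my own sketch, not part of the documentation; it simply walks nextResponder links from any starting responder:

```swift
import UIKit

// Prints each responder from `responder` up the chain:
// view → superview → view controller → window → application.
func dumpResponderChain(from responder: UIResponder) {
    var current: UIResponder? = responder
    while let r = current {
        print(type(of: r))
        current = r.next   // `nextResponder` in Objective-C
    }
}
```

Calling it from a deeply nested view shows the exact path an unhandled event would travel.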
1, Determining the First Responder of an Event
UIKit designates an object as the first responder of an event based on the type of event. Event types include:
| Event type | First responder |
| --- | --- |
| Touch events | The view in which the touch occurred. |
| Press events | The object that has focus. |
| Shake-motion events | The object that you (or UIKit) designate. |
| Remote-control events | The object that you (or UIKit) designate. |
| Editing menu messages | The object that you (or UIKit) designate. |
Note: Motion events associated with accelerometers, gyroscopes, and magnetometers do not follow the responder chain. Instead, Core Motion delivers these events directly to the designated object. For more information, see the Core Motion framework documentation.
Controls communicate directly with their associated target object using action messages. When the user interacts with a control, the control sends an action message to its target object. Action messages are not events, but they can still take advantage of the responder chain. When a control’s target object is nil, UIKit searches the responder chain, starting from the first responder, until it finds an object that implements the appropriate action method. For example, UIKit’s editing menu uses this behavior to search for responder objects that implement methods such as cut:, copy:, or paste:.
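A sketch of this nil-target behavior (SaveHandler and handleSave(_:) are hypothetical names): inside a running app, sending an action with a nil target makes UIKit search the responder chain for an implementer:

```swift
import UIKit

class SaveHandler: UIResponder {
    // Any responder in the chain implementing this selector can
    // receive the action when the target is nil.
    @objc func handleSave(_ sender: Any?) {
        print("Handled save")
    }
}

// With a nil target, UIKit walks the responder chain, starting at the
// first responder, until an object implements handleSave(_:).
let delivered = UIApplication.shared.sendAction(
    #selector(SaveHandler.handleSave(_:)), to: nil, from: nil, for: nil)
```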
Gesture recognizers receive touch and press events before the view does. If a view’s gesture recognizers fail to recognize a sequence of touches, UIKit sends the touches to the view. If the view does not handle them, UIKit passes them up the responder chain. For more information on handling events with gesture recognizers, see Handling UIKit Gestures (translated below).
Determining Which Responder contains a Touch Event
UIKit uses view-based hit-testing to determine where touch events occur. Specifically, UIKit compares the touch position to the bounds of the View object in the View hierarchy. UIView’s hitTest:withEvent: method traverses the view hierarchy to find the deepest subview containing the specified touch, which will be the first responder to the touch event. (UIEvent will be directly handed to it for processing)
Note: If the touch position is outside a view’s bounds, the hitTest:withEvent: method ignores that view and all of its subviews. As a result, when a view’s clipsToBounds property is NO, subviews that extend beyond the view’s bounds are not returned even if they happen to contain the touch. For more information about hit-testing behavior, see UIView’s hitTest:withEvent: method (which we’ve examined in detail above).
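One common reason to override hitTest:withEvent: is exactly this clipping behavior: letting a subview that sticks out of its parent’s bounds still receive touches. A sketch (the class name is mine), assuming clipsToBounds is false:

```swift
import UIKit

class PassthroughView: UIView {
    override func hitTest(_ point: CGPoint, with event: UIEvent?) -> UIView? {
        // Test subviews ourselves instead of relying on the default
        // bounds check, so out-of-bounds subviews can still be hit.
        for subview in subviews.reversed() {   // front-most subview first
            let converted = subview.convert(point, from: self)
            if let hit = subview.hitTest(converted, with: event) {
                return hit
            }
        }
        return super.hitTest(point, with: event)
    }
}
```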
When a touch occurs, UIKit creates a UITouch object and associates it with the view. As the touch position or other parameters change, UIKit updates the same UITouch object with the new information. The only property that does not change is the view: even when the touch position moves outside the original view, the value of the touch’s view property stays the same. UIKit releases the UITouch object when the touch ends.
Altering the Responder Chain
You can change the responder chain by overriding the nextResponder property of the responder object. When you do this, the next responder is the object you return.
Many UIKit classes have overridden this property and returned specific objects, including:
- UIView object. If the view is the root view of the View Controller, the next responder is the View Controller; Otherwise, the next responder is the Superview of the View.
- UIViewController object.
- If the View Controller’s View is the root view of the window, its next responder is the Window object.
- If the view controller is presented by another view controller, its next responder is the presenting view controller.
- UIWindow object. The next responder to the window is the UIApplication object.
- UIApplication object. The next responder is an App delegate, but only if the app delegate is an instance of UIResponder and not the View, view Controller, or app object itself.
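A sketch of such an override, assuming you want a view to report a custom object as its next responder (RedirectingView and customNext are my own names):

```swift
import UIKit

class RedirectingView: UIView {
    // When set, this object is spliced into the responder chain
    // in place of the default next responder (the superview).
    weak var customNext: UIResponder?

    override var next: UIResponder? {   // `nextResponder` in Objective-C
        return customNext ?? super.next
    }
}
```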
That wraps up event handling, responders, and the responder chain. Next we cover gesture recognizers (which inherit from NSObject and also use target-action mechanics) and target-action itself, both of which are important. (When a view implements the touches… methods and also has a gesture recognizer attached, touching the view first invokes touchesBegan:withEvent:; once the gesture recognizer recognizes the gesture, it interrupts the view’s touches by triggering touchesCancelled:withEvent: and then executes the gesture’s action method.)
Handling UIKit Gestures
Use Gesture recognizers to simplify touch handling and create a consistent user experience.
Gesture recognizers are the easiest way to handle touch (UIEventTypeTouches, touches on the screen) or press (UIEventTypePresses, physical buttons) events in your views. You can attach one or more gesture recognizers to any view. A gesture recognizer encapsulates all the logic needed to process and interpret incoming events for the view, matching them to a known pattern (the gesture type). When a match is detected, the gesture recognizer notifies its designated target object, which can be a view controller, the view itself, or any other object in the application.
Gesture recognizers deliver their notifications using the target-action design pattern. When a UITapGestureRecognizer object detects a single-finger tap in a view, it calls an action method of the view controller, which you can use to provide a response.
Figure 1 Gesture recognizer notifying its target
Gesture recognizers come in two types: discrete and continuous. A discrete gesture recognizer calls your action method exactly once, after the gesture is recognized. Once its initial recognition criteria are met, a continuous gesture recognizer calls the action method many times, notifying you whenever the information in the gesture event changes. For example, a UIPanGestureRecognizer object calls the action method every time the touch position changes.
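A sketch contrasting the two kinds (class and method names are mine): a tap recognizer’s action fires once per recognized tap, while a pan recognizer’s action fires repeatedly as the finger moves:

```swift
import UIKit

class DemoViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()
        // Discrete: the action is called exactly once, on recognition.
        let tap = UITapGestureRecognizer(target: self,
                                         action: #selector(handleTap(_:)))
        view.addGestureRecognizer(tap)
        // Continuous: the action is called every time the touch moves.
        let pan = UIPanGestureRecognizer(target: self,
                                         action: #selector(handlePan(_:)))
        view.addGestureRecognizer(pan)
    }

    @objc func handleTap(_ sender: UITapGestureRecognizer) {
        print("Tapped at \(sender.location(in: view))")
    }

    @objc func handlePan(_ sender: UIPanGestureRecognizer) {
        print("Pan translation \(sender.translation(in: view))")
    }
}
```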
Interface Builder includes objects for each of the standard UIKit gesture recognizers. It also includes a custom gesture recognizer object that you can use to represent your own UIGestureRecognizer subclasses.
Configuring a Gesture recognizer
To configure a gesture recognizer:
- In the storyboard, drag the gesture recognizer into the view.
- Implement the action method to be called when recognizing gestures; See Listing 1.
- Connect your action method to a gesture recognizer.
You can create this connection in Interface Builder by right-clicking the gesture recognizer and connecting its Sent Action selector to the corresponding object in the interface. You can also configure the action method programmatically using the gesture recognizer’s addTarget(_:action:) method.
Listing 1 shows the common format of the gesture recognizer’s action method. If you wish, you can change the parameter type to match a particular gesture recognizer subclass.
Listing 1 Gesture recognizer action methods
```swift
// Swift
@IBAction func myActionMethod(_ sender: UIGestureRecognizer) { ... }
```

```objc
// Objective-C
- (IBAction)myActionMethod:(UITapGestureRecognizer *)sender { ... }
```
Responding to Gestures
The action method associated with a gesture recognizer provides the application’s response to the gesture. For discrete gestures, your action method is similar to a button’s action method: once the action method is called, you perform whatever task is appropriate for the gesture. For continuous gestures, the action method can respond to the recognition of the gesture, but it can also track events leading up to recognition. Tracking events lets you create a more interactive experience. For example, you can use updates from a UIPanGestureRecognizer object to reposition content in your application (for example, have an image view follow your finger).
The gesture recognizer’s state property conveys the object’s current recognition state. For continuous gestures, the gesture recognizer updates this property from UIGestureRecognizer.State.began through UIGestureRecognizer.State.changed, and finally to UIGestureRecognizer.State.ended or UIGestureRecognizer.State.cancelled. Action methods use this property to determine the appropriate course of action. For example, you can use the began and changed states to make temporary changes to your content, the ended state to make those changes permanent, and the cancelled state to discard them. Always check the value of the gesture recognizer’s state property before acting.
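The began/changed/ended/cancelled pattern described above might be sketched like this for a pan-driven drag (the class and property names are hypothetical):

```swift
import UIKit

class DraggableController: UIViewController {
    private var startCenter: CGPoint = .zero

    @objc func handlePan(_ sender: UIPanGestureRecognizer) {
        guard let target = sender.view else { return }
        switch sender.state {
        case .began:
            startCenter = target.center            // remember the original position
        case .changed:
            let t = sender.translation(in: view)   // temporary change
            target.center = CGPoint(x: startCenter.x + t.x,
                                    y: startCenter.y + t.y)
        case .ended:
            break                                  // keep the change (now permanent)
        case .cancelled, .failed:
            target.center = startCenter            // discard the change
        default:
            break
        }
    }
}
```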
See the gesture documentation for more: it covers handling the various gesture types, implementing custom gestures, discrete and continuous gesture implementations, the gesture state machine, and so on; consult it as needed.
Target-Action
Although delegation, bindings, and notifications are useful for handling certain forms of communication between objects in a program, they are not particularly suited to the most visible sort of communication. A typical application’s user interface consists of many graphical objects, and perhaps the most common of these are controls. A control is a graphical analog of a real-world or logical device (a button, slider, checkbox, and so on); as with a real-world control such as a radio tuner, you use it to convey your intent to some system of which it is a part, in this case the application.
A control’s role in a user interface is simple: it interprets the user’s intent and instructs some other object to carry out the request. When the user acts on the control by, say, clicking it or pressing the Return key, the hardware device generates a raw event. The control accepts the event (appropriately packaged for Cocoa) and translates it into an instruction that is specific to the application. However, events by themselves don’t give much information about the user’s intent; they merely tell you that the user clicked a mouse button or pressed a key. So some mechanism must provide the translation between event and instruction. This mechanism is called target-action.
Cocoa uses the target-action mechanism for communication between a control and another object, the target. This mechanism allows the control and, in OS X, its cell or cells, to encapsulate the information necessary to send an application-specific instruction to the appropriate object. The receiving object, typically an instance of a custom class, is called the target. The action is the message the control sends to the target. The object interested in the user event, the target, is the one that gives it significance, and this significance is usually reflected in the name it gives to the action.
The Target
Target is the recipient of the Action message. A control (or more commonly a cell) stores the target of its action message as an outlet (see Outlets). Target is usually an instance of one of your custom classes, although it can be any Cocoa object whose class implements the appropriate Action methods.
You can also set the target outlet of a cell or control to nil and let the target object be determined at run time. When the target is nil, the application object (NSApplication or UIApplication) searches for an appropriate receiver in a prescribed order:
- It begins with the first responder in the key window and follows nextResponder links up the responder chain to the window object’s (NSWindow or UIWindow) content view.
Note: The key window in OS X responds to an application’s key presses and is the receiver of messages from menus and dialogs. An application’s main window is the principal focus of user actions and often has key status as well.
- It tries the Window object first, and then tries the Window object’s delegate.
- If the Main Window is different from the Key Window, it will start with the first responder in the Main Window and follow the main Window’s responder chain up to the Window object and its delegate.
- Next, the application object tries to respond. If it can’t respond, it tries its application delegate. The application object and its delegate are the final receivers.
A control does not (and should not) retain its target. However, the client of the control sending the action message (usually an application) is responsible for ensuring that the target is available to receive the action message. To do this, it may have to retain the target in memory-managed environments. The same precaution applies to delegates and data sources.
The Action
An action is the message a control sends to the target or, from the target’s perspective, the method the target implements to respond to the action message. A control, or (more frequently in AppKit) a control’s cell, stores an action as an instance variable of type SEL. SEL is an Objective-C data type used to specify the signature of a message. An action message must have a simple, distinct signature. The method it invokes returns nothing and usually has a single parameter of type id. By convention, this parameter is named sender. Here is an example from the NSResponder class, which defines a number of action methods:
```objc
- (void)capitalizeWord:(id)sender;
```
Some Cocoa classes declare action methods that can also have an equivalent signature:
```objc
- (IBAction)deleteRecord:(id)sender;
```
In this case, IBAction does not designate a data type for a return value; no value is returned. IBAction is a type qualifier that Interface Builder notices during application development so it can synchronize actions added programmatically with its internal list of action methods defined for the project.
The sender parameter usually identifies the control sending the action message (although it can be another object substituted by the actual sender). The idea behind this is similar to a return address on a postcard. The target can query the sender for more information if it needs to. If the actual sending object substitutes another object as sender, you should treat that object in the same way. For example, say you have a text field, and when the user enters text, the action method nameEntered: is invoked in the target:
```objc
- (void)nameEntered:(id)sender {
    NSString *name = [sender stringValue];
    if (![name isEqualToString:@""]) {
        NSMutableArray *names = [self nameList];
        [names addObject:name];
        [sender setStringValue:@""];
    }
}
```
Here, the responding method extracts the contents of the text field, adds the string to an array cached as an instance variable, and then clears the field. Other possible queries to the sender include asking an NSMatrix object for its selected row ([sender selectedRow]), asking an NSButton object for its state ([sender state]), and asking the cell associated with a control for its tag ([[sender cell] tag]), a numeric identifier.
The original document covers target-action both in the AppKit framework and in UIKit; here I’ll look only at target-action in UIKit.
Target-Action in UIKit
The UIKit framework also declares and implements a collection of control classes; the control classes in this framework inherit from the UIControl class, which defines most of the target-action mechanism for iOS. However, there are some fundamental differences in how the AppKit and UIKit frameworks implement target-action. One of these differences is that UIKit does not have any true cell classes: controls in UIKit do not rely on cells for their target and action information.
A bigger difference in how the two frameworks implement target-action lies in the nature of the event model. In the AppKit framework, users typically use a mouse and keyboard to register events for the system to process. These events, such as clicking a button, are finite and discrete. Thus, the Control object in AppKit typically recognizes a single physical event as a trigger for the action it sends to Target. (In the case of buttons, this is a mouse-up event.) In iOS, the user’s finger is the initiator of events, not mouse clicks, mouse drags, or physical buttons. Multiple fingers can touch an object on the screen at once, and the touches can even go in different directions.
To account for this multi-touch event model, UIKit declares a set of control-event constants in UIControl.h that specify the various physical gestures users can make on controls, such as lifting a finger from a control, dragging a finger into a control, and touching down in a text field. You can configure a control object so that it responds to one or more of these touch events by sending an action message to a target. Many of the control classes in UIKit are implemented to generate certain control events; for example, instances of the UISlider class generate the UIControlEventValueChanged control event, which you can use to send an action message to the target object.
You set up a control to send an action message to a target object by associating both the target and the action with one or more control events. To do this, send addTarget:action:forControlEvents: to the control for each target-action pair you want to specify. When the user touches the control in the designated way, the control forwards the action message to the global UIApplication object in a sendAction:to:from:forEvent: message. As in AppKit, the global application object is the centralized dispatch point for action messages. If the control specifies a nil target for an action message, the application queries objects in the responder chain until it finds one that is willing to handle the action message, that is, one implementing a method corresponding to the action selector.
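In Swift, this pairing of a target-action pair with a control event looks like the following sketch (VolumeController and sliderChanged(_:) are my own names); .valueChanged corresponds to UIControlEventValueChanged:

```swift
import UIKit

class VolumeController: UIViewController {
    let slider = UISlider()

    override func viewDidLoad() {
        super.viewDidLoad()
        // Associate a target-action pair with the .valueChanged control event.
        slider.addTarget(self,
                         action: #selector(sliderChanged(_:)),
                         for: .valueChanged)
        view.addSubview(slider)
    }

    @objc func sliderChanged(_ sender: UISlider) {
        print("Slider value: \(sender.value)")
    }
}
```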
In contrast to the AppKit framework, where an action method may have only one or two valid signatures, the UIKit framework allows three different forms of action Selector:
```objc
- (void)action
- (void)action:(id)sender
- (void)action:(id)sender forEvent:(UIEvent *)event
```
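For reference, the Swift spellings of these three forms might look like this (the class and method names are throwaway examples; each method’s Objective-C selector is noted in a comment):

```swift
import UIKit

class ActionForms: NSObject {
    @objc func doIt() { }                                        // selector: doIt
    @objc func doIt(_ sender: Any) { }                           // selector: doIt:
    @objc func doIt(_ sender: Any, forEvent event: UIEvent) { }  // selector: doIt:forEvent:
}
```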
To learn more about target-Action Mechanism in UIKit, read the UIControl Class Reference.
Reference links 🔗
- Using Responders and the Responder Chain to Handle Events
- Handling Touches in Your View
- Responder object
- Events (iOS)
- Target-Action
- Count the flow of iOS touch events
- iOS responder chain and event handling
- iOS development series: touch events, gesture recognition, shake events, headset remote control