What happens between the moment a finger touches the screen and the moment a MotionEvent reaches an Activity or View? Where do touch events come from in Android, and where do they originate? This article walks through the whole flow at an intuitive level; the goal is a working mental picture rather than a line-by-line source analysis.
The Android touch event model
Before a touch event can be delivered to a window, three problems have to be solved. First, the event must be captured: a thread has to sit listening to the touch hardware so that events are picked up as soon as they occur. Second, there must be a way to find the target window: several windows from several apps may be visible at once, so the system has to decide which window the event belongss to. Finally, there is the question of how the target window actually consumes the event.
InputManagerService is the service Android abstracts to handle user input. It is a Binder service entity instantiated in the SystemServer process at startup and registered with the ServiceManager. In practice, though, it mainly provides input device information; its role as a Binder service is relatively minor:
private void startOtherServices() {
...
inputManager = new InputManagerService(context);
wm = WindowManagerService.main(context, inputManager,
mFactoryTestMode != FactoryTest.FACTORY_TEST_LOW_LEVEL,
!mFirstBoot, mOnlyCore);
ServiceManager.addService(Context.WINDOW_SERVICE, wm);
ServiceManager.addService(Context.INPUT_SERVICE, inputManager);
...
}
InputManagerService and WindowManagerService are registered back to back, which hints at how closely the two are related, and the handling of touch events does involve both services. The clearest evidence is that WindowManagerService holds a direct reference to InputManagerService. Broadly, InputManagerService collects touch events, while WindowManagerService is responsible for finding the target window. First, let's look at how InputManagerService collects touch events.
How touch events are captured
InputManagerService starts a separate native thread dedicated to reading touch events:
NativeInputManager::NativeInputManager(jobject contextObj,
jobject serviceObj, const sp<Looper>& looper) :
mLooper(looper), mInteractive(true) {
...
sp<EventHub> eventHub = new EventHub();
mInputManager = new InputManager(eventHub, this, this);
}
Here’s EventHub, which uses Linux’s inotify and epoll mechanisms to listen for device events. It can be regarded as a hub for all input devices: device hot-plugging, touch events, key events, and so on. It watches the device nodes under /dev/input, where, for example, /dev/input/event0 corresponds to one input device, and EventHub::getEvents() is what waits for and reads the raw events.
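To make this concrete, here is a minimal standalone sketch of the same idea: reading raw input_event structs from a device node with epoll. This is illustrative only, not the actual EventHub code; it omits the inotify hot-plug handling, and reading /dev/input nodes requires root.

#include <fcntl.h>
#include <unistd.h>
#include <sys/epoll.h>
#include <linux/input.h>
#include <cstdio>

int main() {
    // Open one input device node (EventHub opens every node under /dev/input).
    int fd = open("/dev/input/event0", O_RDONLY | O_NONBLOCK);
    if (fd < 0) { perror("open"); return 1; }

    int epollFd = epoll_create1(0);
    epoll_event item = {};
    item.events = EPOLLIN;
    item.data.fd = fd;
    epoll_ctl(epollFd, EPOLL_CTL_ADD, fd, &item);

    for (;;) {
        epoll_event events[8];
        int n = epoll_wait(epollFd, events, 8, -1);   // block until input arrives
        for (int i = 0; i < n; i++) {
            input_event ev;                           // the kernel's raw event struct
            while (read(events[i].data.fd, &ev, sizeof(ev)) == (ssize_t) sizeof(ev)) {
                printf("type=%d code=%d value=%d\n", ev.type, ev.code, ev.value);
            }
        }
    }
}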
When the InputManager is created, an InputReader object is constructed along with an InputReaderThread looper thread, and that thread continuously pulls input events via EventHub's getEvents:
InputManager::InputManager(
        const sp<EventHubInterface>& eventHub,
        const sp<InputReaderPolicyInterface>& readerPolicy,
        const sp<InputDispatcherPolicyInterface>& dispatcherPolicy) {
    // create the dispatcher
    mDispatcher = new InputDispatcher(dispatcherPolicy);
    // create the reader
    mReader = new InputReader(eventHub, readerPolicy, mDispatcher);
    initialize();
}

void InputManager::initialize() {
    mReaderThread = new InputReaderThread(mReader);
    mDispatcherThread = new InputDispatcherThread(mDispatcher);
}

bool InputReaderThread::threadLoop() {
    mReader->loopOnce();
    return true;
}

void InputReader::loopOnce() {
    int32_t oldGeneration;
    int32_t timeoutMillis;
    bool inputDevicesChanged = false;
    Vector<InputDeviceInfo> inputDevices;
    ...
    // wait for and read input events
    size_t count = mEventHub->getEvents(timeoutMillis, mEventBuffer, EVENT_BUFFER_SIZE);
    ...
    // process the events
    processEventsLocked(mEventBuffer, count);
    ...
    // notify the dispatcher that events are queued
    mQueuedListener->flush();
}
Through the preceding flow, input events are read and preliminarily wrapped as RawEvent structures by processEventsLocked, after which a notification is flushed to request dispatch; the RawEvent shape is sketched below. That settles how events are read, so the next question is how they are dispatched.
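For reference, the RawEvent produced at this stage is little more than a timestamped copy of the kernel's event. Roughly, simplified from EventHub.h (fields vary slightly across versions):

#include <cstdint>
typedef int64_t nsecs_t;  // nanoseconds

// The raw event as read from the driver, before InputReader
// cooks it into a MotionEvent.
struct RawEvent {
    nsecs_t when;       // event timestamp
    int32_t deviceId;   // which input device produced it
    int32_t type;       // e.g. EV_ABS, EV_KEY, EV_SYN
    int32_t code;       // e.g. ABS_MT_POSITION_X
    int32_t value;      // the actual reading
};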
Distribution of events
When the InputManager is created, it starts not only the event-reading thread but also an event-dispatch thread. Events could be dispatched directly on the read thread, but dispatching takes time and would delay the reading of subsequent events. So once an event has been read, the read thread simply notifies the dispatch thread and goes straight back to reading. This keeps the reader responsive and prevents events from being lost. The InputManager model, then, is a reader thread producing events and a dispatcher thread consuming them.
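A minimal sketch of this two-thread handoff, using a plain queue and condition variable (my own illustration, not AOSP code; the real wakeup goes through mQueuedListener->flush() and the dispatcher's Looper):

#include <condition_variable>
#include <mutex>
#include <queue>
#include <thread>
#include <cstdio>

struct Event { int code; };            // stand-in for a decoded input event

std::queue<Event> gQueue;
std::mutex gLock;
std::condition_variable gWake;

void readerLoop() {                    // plays the role of InputReaderThread
    for (int i = 0; i < 5; i++) {
        Event ev{i};                   // pretend this came from EventHub::getEvents()
        {
            std::lock_guard<std::mutex> guard(gLock);
            gQueue.push(ev);
        }
        gWake.notify_one();            // wake the dispatcher, then go back to reading
    }
}

void dispatcherLoop() {                // plays the role of InputDispatcherThread
    for (int handled = 0; handled < 5; ) {
        std::unique_lock<std::mutex> guard(gLock);
        gWake.wait(guard, [] { return !gQueue.empty(); });  // sleep until woken
        Event ev = gQueue.front();
        gQueue.pop();
        guard.unlock();
        printf("dispatching event %d\n", ev.code);          // find window, send, etc.
        handled++;
    }
}

int main() {
    std::thread dispatcher(dispatcherLoop);
    std::thread reader(readerLoop);
    reader.join();
    dispatcher.join();
}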
The mQueuedListener->flush() call at the end of InputReader::loopOnce() is what notifies InputDispatcher that events have been read. InputDispatcherThread is a typical looper thread: on top of the native Looper it implements a Handler-style message-processing model. When an input event arrives, the thread is woken to process it; afterwards it goes back to sleep and waits.
bool InputDispatcherThread::threadLoop() {
    mDispatcher->dispatchOnce();
    return true;
}

void InputDispatcher::dispatchOnce() {
    nsecs_t nextWakeupTime = LONG_LONG_MAX;
    {
        // when awake, process the pending input messages
        if (!haveCommandsLocked()) {
            dispatchOnceInnerLocked(&nextWakeupTime);
        }
        ...
    }
    nsecs_t currentTime = now();
    int timeoutMillis = toMillisecondTimeoutDelay(currentTime, nextWakeupTime);
    // sleep on the Looper until woken or timed out
    mLooper->pollOnce(timeoutMillis);
}
So much for the dispatch thread model. dispatchOnceInnerLocked contains the dispatch logic; here is one branch of it, motion (touch) events:
void InputDispatcher::dispatchOnceInnerLocked(nsecs_t* nextWakeupTime) {
    ...
    case EventEntry::TYPE_MOTION: {
        MotionEntry* typedEntry = static_cast<MotionEntry*>(mPendingEvent);
        ...
        done = dispatchMotionLocked(currentTime, typedEntry, &dropReason, nextWakeupTime);
        break;
    }
    ...
}

bool InputDispatcher::dispatchMotionLocked(nsecs_t currentTime, MotionEntry* entry,
        DropReason* dropReason, nsecs_t* nextWakeupTime) {
    ...
    Vector<InputTarget> inputTargets;
    bool conflictingPointerActions = false;
    int32_t injectionResult;
    if (isPointerEvent) {
        // key point 1: find the target window for a pointer (touch) event
        injectionResult = findTouchedWindowTargetsLocked(currentTime, entry,
                inputTargets, nextWakeupTime, &conflictingPointerActions);
    } else {
        injectionResult = findFocusedWindowTargetsLocked(currentTime, entry,
                inputTargets, nextWakeupTime);
    }
    ...
    // key point 2: send the event to the target window
    dispatchEventLocked(currentTime, entry, inputTargets);
    return true;
}
As the code above shows, for a touch event the dispatcher first finds the target window via findTouchedWindowTargetsLocked, then sends the event to it via dispatchEventLocked. Next, let's look at how the target window is found and how the window list is maintained.
How the target window of a touch event is found
Android supports multiple screens at the same time. Each screen is abstracted as a DisplayContent object, which internally maintains a WindowList recording every window on that screen: status bar, navigation bar, application windows, sub-windows, and so on. You can inspect the current window/layer list with adb shell dumpsys SurfaceFlinger.
So how is the window corresponding to a touch event found, whether it is the status bar, the navigation bar, or an application window? This is where DisplayContent's WindowList comes in. Since DisplayContent holds information about every window, the system can determine which window should receive the event from the touch position and the windows' properties. The details are of course far more complex than one sentence, involving window state, transparency, split screen and so on; the brief look below is meant to give an intuitive sense of the process.
int32_t InputDispatcher::findTouchedWindowTargetsLocked(nsecs_t currentTime,
        const MotionEntry* entry, Vector<InputTarget>& inputTargets,
        nsecs_t* nextWakeupTime, bool* outConflictingPointerActions) {
    ...
    sp<InputWindowHandle> newTouchedWindowHandle;
    bool isTouchModal = false;
    // iterate over all windows, from top to bottom in z-order
    size_t numWindows = mWindowHandles.size();
    for (size_t i = 0; i < numWindows; i++) {
        sp<InputWindowHandle> windowHandle = mWindowHandles.itemAt(i);
        const InputWindowInfo* windowInfo = windowHandle->getInfo();
        if (windowInfo->displayId != displayId) {
            continue; // wrong display
        }
        int32_t flags = windowInfo->layoutParamsFlags;
        if (windowInfo->visible) {
            if (!(flags & InputWindowInfo::FLAG_NOT_TOUCHABLE)) {
                isTouchModal = (flags & (InputWindowInfo::FLAG_NOT_FOCUSABLE
                        | InputWindowInfo::FLAG_NOT_TOUCH_MODAL)) == 0;
                // the target window: touch-modal, or the touch point lies inside it
                if (isTouchModal || windowInfo->touchableRegionContainsPoint(x, y)) {
                    newTouchedWindowHandle = windowHandle;
                    break; // found touched window, exit window loop
                }
            }
        }
        ...
mWindowHandles holds all the windows, and findTouchedWindowTargetsLocked searches it for the target. The full rules are complex; in essence, the target is determined from the touch position together with the windows' properties and z-order. You can analyze the details yourself if interested. The next question is how mWindowHandles is kept up to date as windows are added and removed. This is where the interaction with WindowManagerService comes in: mWindowHandles is set in InputDispatcher::setInputWindows,
void InputDispatcher::setInputWindows(const Vector<sp<InputWindowHandle>>& inputWindowHandles) {
    ...
    mWindowHandles = inputWindowHandles;
    ...
}
Who calls this function? The real entry point is WindowManagerService's InputMonitor, which calls through to InputDispatcher::setInputWindows whenever the window list changes logically, for example when a window is added via addWindow or removed.
In this respect WindowManagerService and InputManagerService complement each other. With that, finding the target window is solved; what remains is how events are sent to it.
How events are sent to the target window
The target window has been found and the event encapsulated; what remains is notifying the window. The obvious problem is that all the logic so far lives in the system_server process, while the target window lives in an APP's process, so how is it notified? Binder is the most common IPC mechanism in Android, yet input events do not go through Binder: newer versions use sockets, while very old versions used pipes.
void InputDispatcher::dispatchEventLocked(nsecs_t currentTime,
EventEntry* eventEntry, const Vector<InputTarget>& inputTargets) {
pokeUserActivityLocked(eventEntry);
for (size_t i = 0; i < inputTargets.size(); i++) {
const InputTarget& inputTarget = inputTargets.itemAt(i);
ssize_t connectionIndex = getConnectionIndexLocked(inputTarget.inputChannel);
if (connectionIndex >= 0) {
sp<Connection> connection = mConnectionsByFd.valueAt(connectionIndex);
prepareDispatchCycleLocked(currentTime, connection, eventEntry, &inputTarget);
} else {
}
}
}
If you follow the code down layer by layer, you'll find that InputChannel's sendMessage is eventually called, and the event is sent to the APP through a socket.
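As a minimal sketch of this transport style (my own demo, not the InputChannel code): a SOCK_SEQPACKET socketpair preserves message boundaries, so a fixed-size struct can be written on one end and read back as a single unit on the other.

#include <sys/socket.h>
#include <unistd.h>
#include <cstdio>

// Stand-in for the message struct that InputChannel actually sends.
struct Msg { int32_t seq; float x, y; };

int main() {
    int fds[2];
    // Same call WMS uses: a connected, message-boundary-preserving pair.
    if (socketpair(AF_UNIX, SOCK_SEQPACKET, 0, fds) != 0) { perror("socketpair"); return 1; }

    Msg out = {1, 42.0f, 24.0f};
    send(fds[0], &out, sizeof(out), 0);          // "server" side writes the event

    Msg in = {};
    recv(fds[1], &in, sizeof(in), 0);            // "client" side reads one whole message
    printf("seq=%d x=%.1f y=%.1f\n", in.seq, in.x, in.y);

    close(fds[0]);
    close(fds[1]);
}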
Where does this socket come from, and how do the two ends communicate over a pair of sockets? This involves WindowManagerService (WMS): the channel is set up when the APP adds its window.
ViewRootImpl
public void setView(View view, WindowManager.LayoutParams attrs, View panelParentView) {
    ...
    requestLayout();
    if ((mWindowAttributes.inputFeatures
            & WindowManager.LayoutParams.INPUT_FEATURE_NO_INPUT_CHANNEL) == 0) {
        // create the (empty) InputChannel, to be filled in by WMS
        mInputChannel = new InputChannel();
    }
    try {
        mOrigWindowType = mWindowAttributes.type;
        mAttachInfo.mRecomputeGlobalAttributes = true;
        collectViewAttributes();
        // add the window, and ask WMS to open the socket input channel
        res = mWindowSession.addToDisplay(mWindow, mSeq, mWindowAttributes,
                getHostVisibility(), mDisplay.getDisplayId(), mAttachInfo.mContentInsets,
                mAttachInfo.mStableInsets, mAttachInfo.mOutsets, mInputChannel);
    }
    ...
    // listen for input events on the returned channel
    if (mInputChannel != null) {
        if (mInputQueueCallback != null) {
            mInputQueue = new InputQueue();
            mInputQueueCallback.onInputQueueCreated(mInputQueue);
        }
        mInputEventReceiver = new WindowInputEventReceiver(mInputChannel,
                Looper.myLooper());
    }
In IWindowSession.aidl, the InputChannel parameter is declared as out, which means it is to be filled in by the server side (WMS):
public int addWindow(Session session, IWindow client, int seq,
        WindowManager.LayoutParams attrs, int viewVisibility, int displayId,
        Rect outContentInsets, Rect outStableInsets, Rect outOutsets,
        InputChannel outInputChannel) {
    ...
    if (outInputChannel != null && (attrs.inputFeatures
            & WindowManager.LayoutParams.INPUT_FEATURE_NO_INPUT_CHANNEL) == 0) {
        String name = win.makeInputChannelName();
        // key point 1: create the communication channel (a socketpair)
        InputChannel[] inputChannels = InputChannel.openInputChannelPair(name);
        // keep the server end in the window
        win.setInputChannel(inputChannels[0]);
        // copy the client end into the out parameter, to be sent back to the APP
        inputChannels[1].transferTo(outInputChannel);
        // register the server channel against the window
        mInputManager.registerInputChannel(win.mInputChannel, win.mInputWindowHandle);
    }
WMS creates a socketpair as a full-duplex channel and fills its two ends into the server- and client-side InputChannels. It then has the InputManager register the server channel against the current window ID, so the dispatcher knows which channel reaches which window; finally, the client end, outInputChannel, is sent back to the APP over Binder. The socketpair creation code:
status_t InputChannel::openInputChannelPair(const String8& name,
        sp<InputChannel>& outServerChannel, sp<InputChannel>& outClientChannel) {
    int sockets[2];
    if (socketpair(AF_UNIX, SOCK_SEQPACKET, 0, sockets)) {
        status_t result = -errno;
        ...
        return result;
    }

    int bufferSize = SOCKET_BUFFER_SIZE;
    setsockopt(sockets[0], SOL_SOCKET, SO_SNDBUF, &bufferSize, sizeof(bufferSize));
    setsockopt(sockets[0], SOL_SOCKET, SO_RCVBUF, &bufferSize, sizeof(bufferSize));
    setsockopt(sockets[1], SOL_SOCKET, SO_SNDBUF, &bufferSize, sizeof(bufferSize));
    setsockopt(sockets[1], SOL_SOCKET, SO_RCVBUF, &bufferSize, sizeof(bufferSize));

    // fill in the server-side InputChannel
    String8 serverChannelName = name;
    serverChannelName.append(" (server)");
    outServerChannel = new InputChannel(serverChannelName, sockets[0]);

    // fill in the client-side InputChannel
    String8 clientChannelName = name;
    clientChannelName.append(" (client)");
    outClientChannel = new InputChannel(clientChannelName, sockets[1]);
    return OK;
}
A socketpair is created and accessed through file descriptors, so WMS must send a file descriptor back to the APP over Binder; translating an fd between two processes happens at the kernel level (see Binder's fd-passing mechanism for details). Once the socketpair has been created and delivered to the APP, the channel is still not fully established, because someone has to actively listen on it; after all, the window must be notified when a message arrives. Let's look at the channel model.
On the APP side, listening means adding the socket to the epoll set of the Looper thread: when a message arrives, the Looper thread wakes up and reads the event content. In the code, this hookup is completed when the WindowInputEventReceiver is created.
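The NDK exposes the same pattern, which makes it easy to sketch; here socketFd is a hypothetical channel fd, not the real InputChannel plumbing:

#include <android/looper.h>
#include <unistd.h>

// Called on the Looper thread whenever the fd becomes readable;
// this is the same role NativeInputEventReceiver::handleEvent plays below.
static int onFdReadable(int fd, int events, void* data) {
    char buf[128];
    read(fd, buf, sizeof(buf));  // drain and handle the message
    return 1;                    // 1 = keep the callback registered
}

void listenOnChannel(int socketFd /* hypothetical channel fd */) {
    // Must be called on a thread that has a Looper.
    ALooper* looper = ALooper_forThread();
    if (looper == nullptr) return;
    // Attach the fd to this thread's Looper; its epoll will now wake the
    // thread and invoke the callback when input arrives.
    ALooper_addFd(looper, socketFd, ALOOPER_POLL_CALLBACK,
                  ALOOPER_EVENT_INPUT, onFdReadable, /*data=*/nullptr);
}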
When a message arrives, the Looper finds the listener registered for that fd, a NativeInputEventReceiver, and calls its handleEvent to process the event:
int NativeInputEventReceiver::handleEvent(int receiveFd, int events, void* data) {
    ...
    if (events & ALOOPER_EVENT_INPUT) {
        JNIEnv* env = AndroidRuntime::getJNIEnv();
        status_t status = consumeEvents(env, false /*consumeBatches*/, -1, NULL);
        mMessageQueue->raiseAndClearException(env, "handleReceiveCallback");
        return status == OK || status == NO_MEMORY ? 1 : 0;
    }
    ...
The event is then read further, encapsulated as a Java layer object, and passed to the Java layer for appropriate callback processing:
status_t NativeInputEventReceiver::consumeEvents(JNIEnv* env, bool consumeBatches,
        nsecs_t frameTime, bool* outConsumedBatch) {
    ...
    for (;;) {
        uint32_t seq;
        InputEvent* inputEvent;
        // read one event from the channel
        status_t status = mInputConsumer.consume(&mInputEventFactory,
                consumeBatches, frameTime, &seq, &inputEvent);
        ...
        // wrap motion events as Java-layer MotionEvent objects
        case AINPUT_EVENT_TYPE_MOTION: {
            MotionEvent* motionEvent = static_cast<MotionEvent*>(inputEvent);
            if ((motionEvent->getAction() & AMOTION_EVENT_ACTION_MOVE) && outConsumedBatch) {
                *outConsumedBatch = true;
            }
            inputEventObj = android_view_MotionEvent_obtainAsCopy(env, motionEvent);
            break;
        }
        // call up into the Java layer
        if (inputEventObj) {
            env->CallVoidMethod(receiverObj.get(),
                    gInputEventReceiverClassInfo.dispatchInputEvent, seq, inputEventObj);
            env->DeleteLocalRef(inputEventObj);
        }
So the touch event is finally wrapped as an InputEvent and handled by InputEventReceiver's dispatchInputEvent (here the WindowInputEventReceiver), and we're back in the familiar Java world.
Event handling in the target window
How does an Activity or Dialog get the touch event, and how is it handled? In short, ViewRootImpl hands the event it receives to its root view and lets the view hierarchy consume it; the details depend on the implementation. For an Activity or Dialog the root view is a DecorView, which overrides View's dispatchTouchEvent and forwards the event to its Window.Callback (the Activity or Dialog itself); from there, consumption by Views and ViewGroups follows the standard View dispatch logic.
Conclusion
Putting the whole flow together across the modules, it runs roughly as follows:
- The screen is touched
- InputManagerService's read thread captures the event, preprocesses it, and notifies the dispatch thread
- The dispatch thread finds the target window
- The event is sent to the target window through the socket
- The APP side is woken up and reads the event
- The event is handed to the target window, where the view hierarchy handles it
For reference only; corrections welcome.