This article walks through the Handler's blocking mechanism and its message barrier; treat it as a refresher and read the parts you need.

Handler is the heart of an Android app, driving the execution of every event. Let's explore Handler blocking and message barriers.

Blocking mechanism

So what is blocking?

For example, when we order takeout, we don't keep asking the rider whether it has been delivered; we go do other things, and the rider calls us when they arrive. That kind of waiting is blocking.
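In Java terms, the takeaway analogy looks like a thread parked on a BlockingQueue. This is a hypothetical sketch, not Handler code: take() suspends the thread without burning CPU until another thread "delivers".

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class BlockingDemo {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> doorbell = new LinkedBlockingQueue<>();
        Thread rider = new Thread(() -> {
            try {
                Thread.sleep(100);            // riding to our door
                doorbell.put("takeaway");     // the "call" that wakes us
            } catch (InterruptedException ignored) { }
        });
        rider.start();
        // take() blocks (releasing the CPU) until the rider "calls".
        String delivery = doorbell.take();
        System.out.println("received: " + delivery);
    }
}
```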

After a message is retrieved from MessageQueue, the blocking timeout is computed by comparing the message's due time with the current time. If there is no message at all, the timeout is set to -1 (block indefinitely):

if (msg != null) {
    if (now < msg.when) {
        // Next message is not ready. Set a timeout to wake up when it is ready.
        nextPollTimeoutMillis = (int) Math.min(msg.when - now, Integer.MAX_VALUE);
    } else {
        ......
    }
} else {
    // No more messages.
    nextPollTimeoutMillis = -1;
}


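The timeout logic above can be isolated into a tiny sketch. The helper nextPollTimeout and its nullable Long parameter are my own simplification, not the real MessageQueue code:

```java
public class TimeoutCalc {
    // Mirrors the next() logic: what epoll timeout to use for the head message.
    static int nextPollTimeout(Long msgWhen, long now) {
        if (msgWhen == null) {
            return -1;                 // no messages: block indefinitely
        }
        if (now < msgWhen) {
            // not due yet: block until it is due (capped at Integer.MAX_VALUE)
            return (int) Math.min(msgWhen - now, Integer.MAX_VALUE);
        }
        return 0;                      // message is due: don't block
    }

    public static void main(String[] args) {
        System.out.println(nextPollTimeout(null, 1000L));   // -1
        System.out.println(nextPollTimeout(1500L, 1000L));  // 500
        System.out.println(nextPollTimeout(900L, 1000L));   // 0
    }
}
```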

The thread then blocks in nativePollOnce:

nativePollOnce(ptr, nextPollTimeoutMillis);

This nativePollOnce is a native method, and we continue to analyze it:

Instead of pasting every piece of code along the way, here is the call flow directly:

nativePollOnce  
-> android_os_MessageQueue_nativePollOnce  (android_os_MessageQueue.cpp)
-> NativeMessageQueue::pollOnce
-> Looper::pollOnce  (Looper.cpp)
-> Looper::pollInner -> epoll_wait

Along this call chain, you can see the blocking timeout being passed through to the epoll_wait function:

Looper.cpp
int eventCount = epoll_wait(mEpollFd.get(), eventItems, EPOLL_MAX_EVENTS, timeoutMillis);

That means the blocking is ultimately done by epoll_wait. So what is epoll? Let's take a small detour.

Here’s another concept for you:

Non-blocking busy polling:

Suppose we order takeout again, but this time we are starving: we get the rider's phone number and call them every minute to ask where they are. That is non-blocking busy polling.
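A minimal sketch of busy polling (the delivered flag and call counter are illustrative, not any real API): the loop keeps checking, and burning CPU, until the flag flips.

```java
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicInteger;

public class BusyPollDemo {
    public static void main(String[] args) {
        AtomicBoolean delivered = new AtomicBoolean(false);
        AtomicInteger calls = new AtomicInteger();
        Thread rider = new Thread(() -> {
            try { Thread.sleep(50); } catch (InterruptedException ignored) { }
            delivered.set(true);       // the food finally arrives
        });
        rider.start();
        // Busy polling: keep "calling the rider" until the food arrives.
        while (!delivered.get()) {
            calls.incrementAndGet();   // every check burns CPU for nothing
        }
        System.out.println("arrived after " + calls.get() + " checks");
    }
}
```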

Querying the network, reading from a database — anything that processes data involves I/O. Now suppose we have multiple I/O events: how do we handle multiple streams?

Handling each stream on its own thread is inefficient because of thread-switching overhead, so all of the approaches below use a single thread to handle every stream.

while (true) {
    for (i in streams) {
        if (i has data) {
            read data until unavailable
        }
    }
}
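The pseudocode above might look like this in plain Java, with in-memory queues standing in for I/O streams (purely illustrative):

```java
import java.util.ArrayDeque;
import java.util.List;
import java.util.Queue;

public class PollStreams {
    public static void main(String[] args) {
        Queue<Integer> a = new ArrayDeque<>(List.of(1, 2));
        Queue<Integer> b = new ArrayDeque<>();       // empty: a wasted check
        Queue<Integer> c = new ArrayDeque<>(List.of(3));
        List<Queue<Integer>> streams = List.of(a, b, c);
        int total = 0;
        // One pass of the busy-poll loop: check every stream, ready or not.
        for (Queue<Integer> s : streams) {
            while (!s.isEmpty()) {        // "read data until unavailable"
                total += s.poll();
            }
        }
        System.out.println("read sum = " + total);
    }
}
```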

This non-blocking busy polling lets one thread handle multiple I/O streams: walk through every stream, process whatever data is there, then loop again. But when none of the streams has data, the CPU spins idly and wastes resources. The CPU should be released instead — hence the select mechanism.

while (true) {
    select(streams)
    for (i in streams) {
        if (i has data) {
            read data until unavailable
        }
    }
}

With select, the thread blocks inside select() until some I/O event occurs, and only then polls the streams to process the data. If there is no data, it stays blocked at select.

select knows that an I/O event occurred, but not on which streams, so it can only poll every stream indiscriminately. This undifferentiated polling is obviously a waste of resources.

After Linux 2.6, the epoll mechanism appeared.

First look at the definition of epoll

epoll is a scalable I/O event notification mechanism in the Linux kernel.

while (true) {
    active_streams[] = epoll_wait()
    for (i in active_streams) {
        read data until unavailable
    }
}

epoll tells us exactly which streams have events, so every stream we touch is guaranteed to have data — cutting the cost of finding ready streams from O(n) to O(1).
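Java's NIO Selector is backed by epoll on Linux, so the O(1) behavior can be sketched with it: selectedKeys() hands back only the channels that are actually ready, instead of making us scan everything. A Pipe stands in for an I/O stream here; this is an analogy, not framework code.

```java
import java.nio.ByteBuffer;
import java.nio.channels.Pipe;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;

public class EpollStyleDemo {
    public static void main(String[] args) throws Exception {
        Selector selector = Selector.open();                    // ~ epoll_create
        Pipe pipe = Pipe.open();
        pipe.source().configureBlocking(false);
        pipe.source().register(selector, SelectionKey.OP_READ); // ~ epoll_ctl
        pipe.sink().write(ByteBuffer.wrap(new byte[]{1}));      // make it ready
        int ready = selector.select(1000);                      // ~ epoll_wait
        // Only the channels that are actually ready come back:
        System.out.println("ready channels: " + ready);
        System.out.println("to scan: " + selector.selectedKeys().size());
    }
}
```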

epoll exposes only three system calls:

int epoll_create(int size);

Creates an epoll handle; size hints to the kernel how many file descriptors will be watched.

int epoll_ctl(int epfd, int op, int fd, struct epoll_event *event);

This is epoll's event registration function.

int epoll_wait(int epfd, struct epoll_event *events, int maxevents, int timeout);

The events parameter receives the ready events from the kernel, and maxevents tells the kernel how large that array is; maxevents must not exceed the size passed to epoll_create(). The timeout argument is a timeout in milliseconds (0 returns immediately, -1 blocks forever).
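As a rough cross-check using Java's Selector (which wraps epoll/poll underneath), note that the timeout mapping differs from raw epoll: in Java, selectNow() is the "return immediately" case and select() with no argument is the "block forever" case, while select(n) blocks at most n milliseconds. A small sketch:

```java
import java.nio.channels.Pipe;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;

public class TimeoutSemantics {
    public static void main(String[] args) throws Exception {
        Selector selector = Selector.open();                     // ~ epoll_create
        Pipe pipe = Pipe.open();
        pipe.source().configureBlocking(false);
        pipe.source().register(selector, SelectionKey.OP_READ);  // ~ epoll_ctl
        // selectNow(): the "timeout = 0, return immediately" case.
        System.out.println("now: " + selector.selectNow());      // nothing ready
        // select(50): block at most 50 ms (~ epoll_wait with a timeout).
        long t0 = System.currentTimeMillis();
        int n = selector.select(50);
        long waited = System.currentTimeMillis() - t0;
        System.out.println("ready: " + n + ", waited >= 40 ms: " + (waited >= 40));
    }
}
```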

In the native Looper, epoll is created and the wake fd registered in a single function:

void Looper::rebuildEpollLocked() {
    ....
    mEpollFd.reset(epoll_create1(EPOLL_CLOEXEC));
    LOG_ALWAYS_FATAL_IF(mEpollFd < 0, "Could not create epoll instance: %s", strerror(errno));
    epoll_event wakeEvent = createEpollEvent(EPOLLIN, WAKE_EVENT_FD_SEQ);
    int result = epoll_ctl(mEpollFd.get(), EPOLL_CTL_ADD, mWakeEventFd.get(), &wakeEvent);
    ....
}

This function is called in the native Looper constructor.

Where there is blocking, there must also be waking.

MessageQueue.java
boolean enqueueMessage(Message msg, long when) {
    ......
    // We can assume mPtr != 0 because mQuitting is false.
    if (needWake) {
        nativeWake(mPtr);
    }
    ......
    return true;
}

When a message is enqueued, you can see a wake operation, which eventually reaches the wake() method in Looper.cpp:

void Looper::wake() {
    uint64_t inc = 1;
    ssize_t nWrite = TEMP_FAILURE_RETRY(write(mWakeEventFd.get(), &inc, sizeof(uint64_t)));
    if (nWrite != sizeof(uint64_t)) {
        if (errno != EAGAIN) {
            LOG_ALWAYS_FATAL("Could not write wake signal to fd %d (returned %zd): %s",
                             mWakeEventFd.get(), nWrite, strerror(errno));
        }
    }
}

The wake-up simply writes data to the file descriptor that epoll is listening on (mWakeEventFd — historically a pipe, an eventfd in recent versions), which makes epoll_wait return.
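The same pattern — one thread blocked in a multiplexer, another writing a token to a watched fd — can be reproduced with Java NIO, where a Pipe plays the role of mWakeEventFd. This is purely an analogy, not the framework code:

```java
import java.nio.ByteBuffer;
import java.nio.channels.Pipe;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;

public class WakeDemo {
    public static void main(String[] args) throws Exception {
        Selector selector = Selector.open();
        Pipe pipe = Pipe.open();
        pipe.source().configureBlocking(false);
        pipe.source().register(selector, SelectionKey.OP_READ);
        Thread producer = new Thread(() -> {
            try {
                Thread.sleep(100);
                // Like Looper::wake(): write one token to the watched fd.
                pipe.sink().write(ByteBuffer.wrap(new byte[]{1}));
            } catch (Exception ignored) { }
        });
        producer.start();
        long start = System.currentTimeMillis();
        selector.select();   // blocks, consuming no CPU, until the write lands
        long blocked = System.currentTimeMillis() - start;
        System.out.println("woken after ~" + blocked + " ms");
    }
}
```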

To sum up: when there is no work to do, the Looper blocks in epoll_wait and releases the CPU. When a message is enqueued, nativeWake writes to the watched file descriptor, epoll_wait returns, and message processing continues.

Message barrier

Having talked about blocking mechanisms, let’s talk about message barriers.

A message barrier stalls the unimportant messages so that the important ones get processed first.

What counts as important? UI drawing, input events, and so on.

The handler deals with three kinds of messages:

  • Ordinary (synchronous) messages
  • Barrier messages
  • Asynchronous messages

When retrieving a message from MessageQueue, you see something like this:

if (msg != null && msg.target == null) {
    // Stalled by a barrier. Find the next asynchronous message in the queue.
    do {
        prevMsg = msg;
        msg = msg.next;
    } while (msg != null && !msg.isAsynchronous());
}

We all know that a Message's target is the Handler that will dispatch it, so how can target be null? Because target == null is exactly what marks a message as a barrier.

Once a barrier message is in the queue, the handler only processes asynchronous messages; ordinary messages are deferred until the barrier is removed.
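That skip-to-async scan can be modeled with a toy linked list. The Msg class and its boolean barrier flag are my simplification of the real target == null check:

```java
public class BarrierDemo {
    static class Msg {
        String name; boolean async; boolean barrier; Msg next;
        Msg(String name, boolean async, boolean barrier) {
            this.name = name; this.async = async; this.barrier = barrier;
        }
    }

    // Mirrors MessageQueue.next(): behind a barrier, skip to the async message.
    static Msg next(Msg head) {
        Msg msg = head;
        if (msg != null && msg.barrier) {          // target == null in real code
            do { msg = msg.next; } while (msg != null && !msg.async);
        }
        return msg;
    }

    public static void main(String[] args) {
        Msg barrier = new Msg("barrier", false, true);
        Msg sync1 = new Msg("sync", false, false);
        Msg async1 = new Msg("traversal", true, false);
        barrier.next = sync1; sync1.next = async1;
        // The sync message is skipped; the async "traversal" comes out first.
        System.out.println(next(barrier).name);
    }
}
```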

MessageQueue.java
private int postSyncBarrier(long when) {
    // Enqueue a new sync barrier token.
    // We don't need to wake the queue because the purpose of a barrier is to stall it.
    synchronized (this) {
        final int token = mNextBarrierToken++;
        final Message msg = Message.obtain();
        msg.markInUse();
        msg.when = when;
        msg.arg1 = token;

        Message prev = null;
        Message p = mMessages;
        if (when != 0) {
            while (p != null && p.when <= when) {
                prev = p;
                p = p.next;
            }
        }
        if (prev != null) { // invariant: p == prev.next
            msg.next = p;
            prev.next = msg;
        } else {
            msg.next = p;
            mMessages = msg;
        }
        return token;
    }
}

This enqueues a barrier message, sorted by time like any other message; the only difference is that its target is never set.
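The time-ordered insertion itself is just a sorted insert. Here is a simplified sketch using a LinkedList of timestamps instead of Message nodes (illustrative only, not the prev/p pointer walk above):

```java
import java.util.LinkedList;
import java.util.List;

public class SortedInsert {
    // Insert `when` keeping the queue sorted by due time,
    // like postSyncBarrier does with its prev/p walk.
    static void insert(LinkedList<Long> queue, long when) {
        int i = 0;
        while (i < queue.size() && queue.get(i) <= when) {
            i++;                       // walk past everything due earlier
        }
        queue.add(i, when);
    }

    public static void main(String[] args) {
        LinkedList<Long> queue = new LinkedList<>(List.of(10L, 30L));
        insert(queue, 20L);            // lands between 10 and 30
        System.out.println(queue);
    }
}
```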

This method is called in ViewRootImpl:

void scheduleTraversals() {
    if (!mTraversalScheduled) {
        mTraversalScheduled = true;
        // Send a barrier message
        mTraversalBarrier = mHandler.getLooper().getQueue().postSyncBarrier();
        // Post the UI traversal task
        mChoreographer.postCallback(
                Choreographer.CALLBACK_TRAVERSAL, mTraversalRunnable, null);
        notifyRendererOfFramePending();
        pokeDrawLockIfNeeded();
    }
}

Then it is the Choreographer task's turn to execute.

It ends up here:

Choreographer.java
private void postCallbackDelayedInternal(int callbackType,
        Object action, Object token, long delayMillis) {
    synchronized (mLock) {
        .......
        if (dueTime <= now) {
            scheduleFrameLocked(now);
        } else {
            Message msg = mHandler.obtainMessage(MSG_DO_SCHEDULE_CALLBACK, action);
            msg.arg1 = callbackType;
            // Mark the message as asynchronous
            msg.setAsynchronous(true);
            mHandler.sendMessageAtTime(msg, dueTime);
        }
    }
}

So Choreographer sends an asynchronous message. Now look at how it is processed:

MessageQueue.java -> next()
if (msg != null && msg.target == null) {
    // Stalled by a barrier. Find the next asynchronous message in the queue.
    do {
        prevMsg = msg;
        msg = msg.next;
    } while (msg != null && !msg.isAsynchronous());
}
if (msg != null) {
    if (now < msg.when) {
        // Next message is not ready. Set a timeout to wake up when it is ready.
        nextPollTimeoutMillis = (int) Math.min(msg.when - now, Integer.MAX_VALUE);
    } else {
        ......
        return msg;
    }
}

So when target == null — that is, while a barrier is up — only asynchronous messages are fetched and returned for dispatch.

A barrier that is opened must also be closed:

private void removeCallbacksInternal(int callbackType, Object action, Object token) {
    synchronized (mLock) {
        mCallbackQueues[callbackType].removeCallbacksLocked(action, token);
        if (action != null && token == null) {
            mHandler.removeMessages(MSG_DO_SCHEDULE_CALLBACK, action);
        }
    }
}

Once the barrier message is removed, ordinary messages are processed normally again.

To recap: before a UI traversal, ViewRootImpl posts a barrier message that tells the handler to favor asynchronous messages; Choreographer then sends its work as asynchronous messages (msg.setAsynchronous(true)); and once the traversal has run, the barrier is removed.
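The whole lifecycle can be put together as a toy simulation (the Msg class and barrier flag are hypothetical stand-ins for the real queue): a barrier is posted, the asynchronous "traversal" jumps the queue, then ordinary messages resume in their original order once the barrier is removed.

```java
import java.util.ArrayList;
import java.util.List;

public class BarrierLifecycle {
    static class Msg {
        final String name; final boolean async;
        Msg(String name, boolean async) { this.name = name; this.async = async; }
    }

    public static void main(String[] args) {
        List<Msg> queue = new ArrayList<>(List.of(
                new Msg("sync-A", false),
                new Msg("traversal", true),
                new Msg("sync-B", false)));
        boolean barrier = true;                    // ViewRootImpl posted a barrier
        List<String> processed = new ArrayList<>();
        while (!queue.isEmpty()) {
            Msg next = null;
            if (barrier) {
                // Behind the barrier, only async messages are eligible.
                for (Msg m : queue) {
                    if (m.async) { next = m; break; }
                }
                if (next == null) break;           // nothing async: queue stalls
            } else {
                next = queue.get(0);
            }
            queue.remove(next);
            processed.add(next.name);
            if (next.async) {
                barrier = false;                   // traversal done: barrier removed
            }
        }
        System.out.println(processed);
    }
}
```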

This is how handlers keep our UI refreshed.

Conclusion

This article covered the Handler's epoll-based blocking mechanism and its message barrier. The goal is a deeper understanding that goes beyond the surface-level implementation.