Handler is a topic that comes up in almost every Android development interview, and its importance is self-evident. To study the Android Framework source code, and to understand AMS and WMS, you first need to understand how Handler works.
Why is Handler so important?
When it comes to Handler, many Android developers simply think of it as the SDK's built-in framework for inter-thread communication. But Handler is much more than that.
As an Android developer, you have certainly learned Java. A plain Java program starts from a main method, so why does the Java code we write in an Activity have no main method at all?
Because the Activity lifecycle is hosted by AMS.
AMS is a system service process, and the communication channel between AMS and App processes is ActivityThread. The ActivityThread#main function is executed when the App process initializes.
public static void main(String[] args)
{
    ...
    Looper.prepareMainLooper();
    ...
    ActivityThread thread = new ActivityThread();
    thread.attach(false, startSeq);
    if (sMainThreadHandler == null)
    {
        sMainThreadHandler = thread.getHandler();
    }
    ...
    Looper.loop();
    ...
}
As you can see, the main method initializes the main thread's Looper, creates the ActivityThread (whose getHandler() provides the main thread's Handler), and finally calls the Looper#loop function.
Let's continue with Looper#loop.
public static void loop()
{
    ...
    for (;;)
    {
        ...
    }
    ...
}
We can see that in the Looper#loop function, an infinite loop of code is executed.
An infinite loop? Then how does the rest of our code ever get to run? Perhaps you can already guess……
Yes, all of the lifecycle code of Android's four major components actually runs through Handler, which is why interviews almost always ask about it.
Handler is not just a tool for inter-thread communication; it is the message management mechanism that drives Android's four major components at runtime. So as an Android developer, Handler is something you have to figure out.
How does Handler work?
When we use a Handler to communicate between threads, the code is almost always written like this:
// Main thread
Handler handler = new Handler(Looper.getMainLooper())
{
    @Override
    public void handleMessage(@NonNull Message msg)
    {
        // Handle the message
    }
};

// Child thread
Thread thread = new Thread(new Runnable()
{
    @Override
    public void run()
    {
        // Send a message
        Message msg = Message.obtain();
        handler.sendMessage(msg);
    }
});
thread.start();
To understand how this Handler works, you first need to know how the Handler sends messages.
public final boolean sendMessage(@NonNull Message msg)
{
    return sendMessageDelayed(msg, 0);
}

public final boolean sendMessageDelayed(@NonNull Message msg, long delayMillis)
{
    ...
    return sendMessageAtTime(msg, SystemClock.uptimeMillis() + delayMillis);
}

public boolean sendMessageAtTime(@NonNull Message msg, long uptimeMillis)
{
    ...
    return enqueueMessage(queue, msg, uptimeMillis);
}

private boolean enqueueMessage(@NonNull MessageQueue queue, @NonNull Message msg,
        long uptimeMillis)
{
    ...
    return queue.enqueueMessage(msg, uptimeMillis);
}
As you can see, the Handler#sendMessage function ends up calling the MessageQueue#enqueueMessage function. In fact, all sendXXX functions in Handler will end up calling MessageQueue#enqueueMessage.
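For example, the following calls all look different at the call site, but each of them ends up in MessageQueue#enqueueMessage. This is a sketch reusing the handler from the example above; that post() internally wraps the Runnable into a Message is an implementation detail stated here from memory:

// Sketch: different ways of submitting work to the same Handler; all of them
// eventually go through MessageQueue#enqueueMessage.
handler.sendEmptyMessage(1);                                    // a Message carrying only "what"
handler.sendMessageDelayed(Message.obtain(handler, 2), 1000);   // delivered about 1s later
handler.post(() -> Log.d("HandlerDemo", "posted runnable"));    // Runnable wrapped into a Message
handler.postDelayed(() -> Log.d("HandlerDemo", "delayed"), 1000);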
When the MessageQueue#enqueueMessage function is called, the message is added to the MessageQueue.
public static void loop()
{
    ...
    for (;;)
    {
        Message msg = queue.next();
        ...
        msg.target.dispatchMessage(msg);
    }
    ...
}

public void dispatchMessage(@NonNull Message msg)
{
    ...
    handleMessage(msg);
}
In the infinite loop inside the Looper#loop function, the MessageQueue#next function is called to retrieve the next message from the MessageQueue; Handler#dispatchMessage is then called on it, which eventually calls Handler#handleMessage to run our message-handling code.
So the overall flow of Handler at runtime is: Handler#sendMessage → MessageQueue#enqueueMessage → message stored in the MessageQueue → Looper#loop → MessageQueue#next → Handler#dispatchMessage → Handler#handleMessage.
What is the relationship between Handler, Looper, and MessageQueue?
public Handler(@NonNull Looper looper, @Nullable Callback callback, boolean async)
{
    mLooper = looper;
    mQueue = looper.mQueue;
    ...
}
You can see that the Handler constructor keeps references to the Looper and to its MessageQueue.
static final ThreadLocal<Looper> sThreadLocal = new ThreadLocal<Looper>();

private Looper(boolean quitAllowed)
{
    mQueue = new MessageQueue(quitAllowed);
    mThread = Thread.currentThread();
}

public static void prepare()
{
    prepare(true);
}

private static void prepare(boolean quitAllowed)
{
    if (sThreadLocal.get() != null)
    {
        throw new RuntimeException("Only one Looper may be created per thread");
    }
    sThreadLocal.set(new Looper(quitAllowed));
}

public static @Nullable Looper myLooper()
{
    return sThreadLocal.get();
}
You can see that the Looper constructor creates a MessageQueue instance and records the current thread, which shows that a Looper is bound to a thread. Because the constructor is private, a Looper can only be created through Looper#prepare.
In Looper#prepare, the newly created Looper object is stored via sThreadLocal. Note that sThreadLocal is declared static final, so all threads share the same ThreadLocal instance, while each thread stores its own Looper inside it.
Think of it this way: every thread has its own safe that stores the Looper bound to that thread, and the key that opens all these safes is one and the same. This allows unified management and avoids leaks.
Only one Looper object can be bound per thread. In Looper#prepare, if sThreadLocal already has a Looper object, an exception will be thrown.
Each thread is bound to a Looper object, and each Looper object holds a MessageQueue object.
The relationship between Handler, Looper, and MessageQueue resembles a producer-consumer mechanism: Handler is the producer; Looper is the consumer; MessageQueue is the conveyor belt that stores the messages produced by the producer and delivers them to the consumer.
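As a concrete illustration, here is a minimal sketch of how the three pieces come together on a custom thread (the WorkerThread class name is made up for this example):

class WorkerThread extends Thread
{
    volatile Handler handler;

    @Override
    public void run()
    {
        // Create the Looper + MessageQueue bound to this thread
        Looper.prepare();
        // Producer entry point bound to this thread's Looper
        handler = new Handler(Looper.myLooper())
        {
            @Override
            public void handleMessage(@NonNull Message msg)
            {
                // Messages are consumed here, on the worker thread
            }
        };
        // Consumer: poll the MessageQueue until quit() is called
        Looper.loop();
    }
}

Any other thread that gets hold of the handler field can then send messages, and they will all be processed on the worker thread. Android's HandlerThread wraps essentially this pattern.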
What is a MessageQueue? What is its data structure?
I did not go into the MessageQueue#enqueueMessage code just now, because to understand it you first need to know what data structure MessageQueue uses.
MessageQueue is a priority queue based on a linked list implementation.
MessageQueue has a Message-type field, mMessages, which is the head node of the list; Message itself has a Message-type field, next. From these two fields it is clear that MessageQueue stores messages in a linked-list structure.
boolean enqueueMessage(Message msg, long when)
{
    ...
    msg.when = when;
    Message p = mMessages;
    ...
    // If MessageQueue has no messages, or the inserted message has when == 0,
    // or the inserted message's when is smaller than the head message's when
    if (p == null || when == 0 || when < p.when)
    {
        // Insert at the head
        msg.next = p;
        mMessages = msg;
        ...
    }
    else
    {
        ...
        Message prev;
        // Traverse the MessageQueue to find the insertion position for this message
        for (;;)
        {
            prev = p;
            p = p.next;
            // If this message's when is greater than or equal to that of every message
            // already in the MessageQueue, it is inserted at the tail (p == null)
            if (p == null || when < p.when)
            {
                break;
            }
            ...
        }
        msg.next = p;
        prev.next = msg;
    }
    ...
}
As you can see, the messages stored in MessageQueue are ordered by message.when from smallest to largest; the whole MessageQueue is kept sorted at insertion time.
Message.when is the target processing time of the message, i.e. the moment at which the message is due to be handled.
Message next()
{
    ...
    for (;;)
    {
        ...
        Message prevMsg = null;
        Message msg = mMessages;
        ...
        if (prevMsg != null)
        {
            prevMsg.next = msg.next;
        }
        else
        {
            mMessages = msg.next;
        }
        msg.next = null;
        msg.markInUse();
        return msg;
        ...
    }
}
When the MessageQueue#next function is called to retrieve the message, it retrieves the first message in the MessageQueue linked list.
Therefore, MessageQueue is a priority queue that is arranged in ascending order according to the target processing time of the message, and the message in front of the queue is first out of the queue.
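To make the data structure concrete, here is a simplified, standalone sketch of a linked list kept sorted by a when timestamp, in the spirit of MessageQueue (the TimedQueue/Node names and fields are invented for this illustration; the real classes carry far more state):

class TimedQueue
{
    static class Node
    {
        long when;
        Node next;

        Node(long when)
        {
            this.when = when;
        }
    }

    private Node head;

    void enqueue(Node msg)
    {
        if (head == null || msg.when < head.when)
        {
            // New head: this message has the earliest target time
            msg.next = head;
            head = msg;
            return;
        }
        Node prev = head;
        // Walk forward until the next node is due later than this message (or we hit the tail)
        while (prev.next != null && prev.next.when <= msg.when)
        {
            prev = prev.next;
        }
        msg.next = prev.next;
        prev.next = msg;
    }

    Node next()
    {
        // The earliest message is always at the head
        Node msg = head;
        if (msg != null)
        {
            head = msg.next;
            msg.next = null;
        }
        return msg;
    }
}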
MessageQueue’s message sleep and message wake mechanism
MessageQueue's message sleep and wake mechanism is essentially a blocking-queue implementation, but MessageQueue is only semi-blocking.
Why semi-blocking? Because MessageQueue never blocks on insert operations.
Whenever MessageQueue#enqueueMessage is called, the message is inserted into the MessageQueue immediately.
An ordinary blocking queue may also block the insert side, but MessageQueue#enqueueMessage does not. Why?
Because the main thread's MessageQueue is not only used by application code; the system also dispatches its own messages through it. If inserting into the main thread's MessageQueue could block, some messages might be lost, and if a lost message is important it could cause a system error.
Of course, with no limit on inserts there is in theory a risk of MessageQueue exhausting memory; in practice, however, it is very hard to reach that point.
/**
 * Puts the thread to sleep.
 * @param ptr identifies the thread (roughly a thread handle); this value is
 *            generated by the nativeInit function when the MessageQueue is created.
 * @param timeoutMillis how long the thread may sleep at most;
 *                      0 means do not sleep and -1 means sleep indefinitely.
 */
private native void nativePollOnce(long ptr, int timeoutMillis);

/**
 * Wakes the thread up.
 * @param ptr identifies the thread (roughly a thread handle); this value is
 *            generated by the nativeInit function when the MessageQueue is created.
 */
private native static void nativeWake(long ptr);
MessageQueue's message sleep and wake mechanism is mainly implemented through these two native functions.
Message next()
{
    ...
    int nextPollTimeoutMillis = 0;
    for (;;)
    {
        ...
        nativePollOnce(ptr, nextPollTimeoutMillis);
        ...
        final long now = SystemClock.uptimeMillis();
        Message msg = mMessages;
        ...
        if (msg != null)
        {
            // If the message's target processing time has not been reached
            if (now < msg.when)
            {
                // Calculate the remaining time
                nextPollTimeoutMillis = (int) Math.min(msg.when - now, Integer.MAX_VALUE);
            }
            ...
        }
        else
        {
            nextPollTimeoutMillis = -1;
        }
        ...
    }
}
MessageQueue puts the thread to sleep on the fetch side in two cases:
- The target processing time of the head message has not arrived; the thread sleeps and wakes up automatically when that time is reached.
- The MessageQueue is empty; the thread sleeps indefinitely.
As you can see, when the queue is empty nextPollTimeoutMillis is set to -1, so on the next iteration the thread sleeps indefinitely. When the head message's target processing time has not yet arrived, nextPollTimeoutMillis is computed as the remaining time, and the thread wakes up automatically once that time has elapsed.
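The actual sleeping happens in native code, but as a rough pure-Java analogy (an assumption for illustration only; the real MessageQueue does not use Object.wait), the two sleep modes behave like this:

class SleepAnalogy
{
    private final Object lock = new Object();

    void pollOnce(int nextPollTimeoutMillis) throws InterruptedException
    {
        synchronized (lock)
        {
            if (nextPollTimeoutMillis == -1)
            {
                // Sleep indefinitely until another thread "wakes" us
                lock.wait();
            }
            else if (nextPollTimeoutMillis > 0)
            {
                // Timed sleep: returns automatically after the timeout
                lock.wait(nextPollTimeoutMillis);
            }
            // 0 means: do not sleep at all
        }
    }

    void wake()
    {
        synchronized (lock)
        {
            // Plays the role of nativeWake in this analogy
            lock.notify();
        }
    }
}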
The nativeWake function is what wakes a sleeping thread up. In the MessageQueue source code it is called mainly from MessageQueue#quit and MessageQueue#enqueueMessage.
boolean enqueueMessage(Message msg, long when)
{
    ...
    boolean needWake;
    // If MessageQueue has no messages, or the inserted message has when == 0,
    // or the inserted message's when is smaller than the head message's when
    if (p == null || when == 0 || when < p.when)
    {
        ...
        // mBlocked is true when the MessageQueue is already asleep
        needWake = mBlocked;
    }
    else
    {
        // If the synchronization barrier is enabled and the message is asynchronous,
        // the thread may need to be woken up
        needWake = mBlocked && p.target == null && msg.isAsynchronous();
        ...
        // Traverse the MessageQueue to find the insertion position
        for (;;)
        {
            ...
            // If there is another asynchronous message before the insertion position,
            // the current sleep is caused by that earlier asynchronous message's wait
            // time, so the thread does not need to be woken up
            if (needWake && p.isAsynchronous())
            {
                needWake = false;
            }
            ...
        }
    }
    if (needWake)
    {
        nativeWake(mPtr);
    }
}

void quit(boolean safe)
{
    if (!mQuitAllowed)
    {
        throw new IllegalStateException("Main thread not allowed to quit.");
    }
    ...
    if (mQuitting)
    {
        return;
    }
    mQuitting = true;
    ...
    nativeWake(mPtr);
}
MessageQueue wakes the sleeping thread in the following cases:
- When MessageQueue needs to quit, it wakes the thread so that the unfinished loop in MessageQueue#next can see the mQuitting = true flag; MessageQueue#next then returns null, and the infinite loop in Looper#loop exits.
- When MessageQueue#enqueueMessage inserts a message at the head of the MessageQueue, the thread is woken up so that the newly inserted message can be fetched first. If the message's target processing time has not yet been reached after it is fetched, the thread simply goes back to sleep until that time.
- When MessageQueue#enqueueMessage inserts a message somewhere in the middle of the MessageQueue, the thread is not woken up by default.
- When MessageQueue#enqueueMessage inserts a message while the synchronization barrier is enabled and the message is asynchronous, the thread is woken up so that the asynchronous message can be executed as soon as possible.
- When the synchronization barrier is enabled and an asynchronous message is inserted, but another asynchronous message already sits before the insertion position, the current sleep was caused by that earlier asynchronous message's wait time, so the thread is not woken explicitly; it will wake up automatically when that time arrives.
How does Handler keep threads safe?
Handler manages messages throughout the system runtime, so the management of these messages must be thread-safe.
boolean enqueueMessage(Message msg, long when)
{
    ...
    synchronized (this)
    {
        ...
    }
    return true;
}

Message next()
{
    ...
    synchronized (this)
    {
        ...
    }
    ...
}

void quit(boolean safe)
{
    ...
    synchronized (this)
    {
        ...
    }
}
Only three functions of MessageQueue are listed here, but in fact many MessageQueue functions use a synchronized block, and the lock is always the MessageQueue.this object itself.
We know that a thread is bound to at most one Looper object, and that each Looper object holds exactly one MessageQueue instance.
Therefore, at any given time, the insert, fetch, and quit operations of the MessageQueue belonging to a thread are mutually exclusive.
That is, while one MessageQueue operation is executing, the others cannot run. This guarantees the thread safety of MessageQueue.
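For example, in the sketch below several worker threads post to the same main-thread Handler concurrently; because every insert goes through the synchronized block in MessageQueue#enqueueMessage, the messages are enqueued safely (the thread count and message values are made up for this sketch):

Handler mainHandler = new Handler(Looper.getMainLooper())
{
    @Override
    public void handleMessage(@NonNull Message msg)
    {
        // Always runs on the main thread, regardless of which thread sent the message
        Log.d("HandlerDemo", "handled what=" + msg.what);
    }
};

for (int i = 0; i < 5; i++)
{
    final int id = i;
    // Five producer threads enqueue into the same MessageQueue concurrently
    new Thread(() -> mainHandler.sendMessage(Message.obtain(mainHandler, id)), "producer-" + id).start();
}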
How does Handler switch between child thread and main thread?
As we know from the JVM's runtime data areas, the heap is shared: all threads in the same process see the same memory. Objects themselves do not belong to any particular thread; only the execution of methods has to be assigned to a thread.
As shown in the figure, which methods run on the child thread and which run on the main thread are marked there.
Each Message object can be thought of as a small block of memory; each MessageQueue object can be thought of as a larger block of memory that gathers many Message objects.
So Handler is not only a message management mechanism, but also a memory management mechanism.
Therefore, the essence of Handler is to wrap each piece of data into a Message object and pass a reference to that Message object from the child thread to the main thread through the MessageQueue.
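A small sketch of that idea: the object attached to the Message on the child thread is the very same heap object the main thread later reads; only the executing thread changes (the log tag and payload are made up for this sketch):

Handler uiHandler = new Handler(Looper.getMainLooper())
{
    @Override
    public void handleMessage(@NonNull Message msg)
    {
        // Runs on the main thread, but msg.obj is the exact object created on the child thread
        Log.d("HandlerDemo", "read on " + Thread.currentThread().getName()
                + ", identity=" + System.identityHashCode(msg.obj));
    }
};

new Thread(() ->
{
    Object payload = new Object();   // allocated on the heap, which all threads share
    Log.d("HandlerDemo", "created on " + Thread.currentThread().getName()
            + ", identity=" + System.identityHashCode(payload));
    Message msg = Message.obtain(uiHandler);
    msg.obj = payload;
    msg.sendToTarget();              // only the reference crosses the thread boundary
}).start();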
How do I create a Message? How is the flyweight design pattern implemented in Handler?
Create a Message? Isn't that just new Message()? That is indeed one way to create a Message, but it can lead to the memory problems mentioned earlier.
Because that approach bypasses the Message recycling mechanism, which is precisely the flyweight design pattern inside Handler.
Creating Messages directly with new makes it easy to run into OOM. It is not just that the MessageQueue itself can grow large; more importantly, creating many short-lived Message objects causes memory churn.
Because Handler is the message management mechanism of Android's four major components, messages are being created almost all the time.
If every message were created with new, the app could hit OOM soon after running for a while: the frequent creation and collection of Message objects produces a large amount of memory fragmentation, and once fragmentation is severe enough a new object may not find a large enough block of contiguous memory, resulting in OOM.
The Message#obtain function is therefore the recommended way to create Message objects, because it takes advantage of the Message recycling mechanism.
public static void loop()
{
    ...
    for (;;)
    {
        Message msg = queue.next();
        ...
        msg.recycleUnchecked();
    }
}
You’ll see the Message#recycleUnchecked function called at the end of each loop in the Looper#loop function, which is used to recycle the Message object.
public static final Object sPoolSync = new Object();

void recycleUnchecked()
{
    // Clear all data in the message
    flags = FLAG_IN_USE;
    what = 0;
    arg1 = 0;
    arg2 = 0;
    obj = null;
    replyTo = null;
    sendingUid = UID_NONE;
    workSourceUid = UID_NONE;
    when = 0;
    target = null;
    callback = null;
    data = null;

    // Add the message object to the cache pool
    synchronized (sPoolSync)
    {
        if (sPoolSize < MAX_POOL_SIZE)
        {
            next = sPool;
            sPool = this;
            sPoolSize++;
        }
    }
}
As you can see, the cache pool used by the Message recycling mechanism is a linked list (with sPool as its head), and a synchronized block on sPoolSync makes recycling atomic across all Message objects, keeping the cache pool thread-safe.
public static Message obtain()
{
    synchronized (sPoolSync)
    {
        if (sPool != null)
        {
            Message m = sPool;
            sPool = m.next;
            m.next = null;
            m.flags = 0; // clear in-use flag
            sPoolSize--;
            return m;
        }
    }
    return new Message();
}
As you can see, when the Message#obtain function is called to create a Message, it first tries to reuse an instance from the cache pool; a new Message object is created only if the pool is empty.
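In practice, obtaining and sending a message typically looks like the sketch below (the handler variable and the what/arg values are placeholders); the message is recycled back into the pool automatically inside Looper#loop:

// Obtain a Message from the pool instead of calling new Message()
Message msg = Message.obtain(handler);  // reuses a pooled instance when one is available
msg.what = 1;
msg.arg1 = 42;
msg.obj = "payload";
msg.sendToTarget();                     // equivalent to handler.sendMessage(msg)

// Or set the target and "what" in a single call:
handler.sendMessage(Message.obtain(handler, 1));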
What are Handler's asynchronous messages? What is Handler's synchronization barrier?
For many Android developers, Handler's asynchronous messages are almost unheard of.
In Handler, there are three types of messages:
- Normal messages (i.e. synchronous messages), for which Message#isAsynchronous returns false
- Asynchronous messages, for which Message#isAsynchronous returns true
- Synchronization barrier messages, whose target field is null
The messages we normally insert through Handler’s sendXXX function are synchronous messages by default.
private boolean enqueueMessage(@NonNull MessageQueue queue, @NonNull Message msg,
        long uptimeMillis)
{
    msg.target = this;
    ...
    if (mAsynchronous)
    {
        msg.setAsynchronous(true);
    }
    return queue.enqueueMessage(msg, uptimeMillis);
}
Handler's mAsynchronous field defaults to false and can only be set to true through Handler's constructor.
So there are two ways to create an asynchronous message (both are shown in the sketch below):
- Call Message#setAsynchronous(true) on the message after creating it
- Set mAsynchronous to true when creating the Handler
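A minimal sketch of both approaches, reusing a handler variable as a placeholder; note that Message#setAsynchronous is public since API 22 and Handler.createAsync(Looper) since API 28 (these API levels are assumptions to check against your minSdk), while the three-argument Handler constructor shown above is hidden API:

// Option 1: mark an individual Message as asynchronous
Message asyncMsg = Message.obtain(handler, 1);
asyncMsg.setAsynchronous(true);
handler.sendMessage(asyncMsg);

// Option 2: create a Handler whose messages are all asynchronous
Handler asyncHandler = Handler.createAsync(Looper.getMainLooper());
asyncHandler.sendEmptyMessage(2);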
But if MessageQueue has no synchronization barrier turned on, then there is no difference between asynchronous and synchronous messages, so what is a synchronization barrier?
As the name suggests, a synchronization barrier blocks synchronous messages and only lets asynchronous messages through.
/**
 * Posts a synchronization barrier message, enabling the synchronization barrier.
 */
public int postSyncBarrier()
{
    return postSyncBarrier(SystemClock.uptimeMillis());
}

private int postSyncBarrier(long when)
{
    synchronized (this)
    {
        final int token = mNextBarrierToken++;
        final Message msg = Message.obtain();
        msg.target = null;  // a barrier message is identified by target == null
        msg.markInUse();
        msg.when = when;
        msg.arg1 = token;

        Message prev = null;
        Message p = mMessages;
        // Insert the barrier message at the position in the MessageQueue matching its "when"
        if (when != 0)
        {
            while (p != null && p.when <= when)
            {
                prev = p;
                p = p.next;
            }
        }
        if (prev != null)
        {
            msg.next = p;
            prev.next = msg;
        }
        else
        {
            msg.next = p;
            mMessages = msg;
        }
        return token;
    }
}

/**
 * Removes the synchronization barrier message, disabling the synchronization barrier.
 */
public void removeSyncBarrier(int token)
{
    synchronized (this)
    {
        Message prev = null;
        Message p = mMessages;
        // Traverse the MessageQueue to find the synchronization barrier message
        while (p != null && (p.target != null || p.arg1 != token))
        {
            prev = p;
            p = p.next;
        }
        // If p == null, there is no synchronization barrier message in the MessageQueue
        if (p == null)
        {
            throw new IllegalStateException("The specified message queue synchronization "
                    + " barrier token has not been posted or has already been removed.");
        }
        final boolean needWake;
        // If the barrier message is in the middle or at the end of the MessageQueue,
        // the synchronization barrier has not taken effect yet
        if (prev != null)
        {
            // Remove the synchronization barrier message
            prev.next = p.next;
            // No need to wake the thread: the barrier was not active, we only removed its message
            needWake = false;
        }
        else // The barrier message is at the head of the MessageQueue, so the barrier is active
        {
            // Remove the synchronization barrier message
            mMessages = p.next;
            // Two possibilities:
            // 1. There is no asynchronous message in the MessageQueue, so the thread must be woken up
            // 2. The thread is waiting for an asynchronous message; check whether the MessageQueue
            //    is still behind a synchronization barrier
            //    2.1 If not, wake the thread
            //    2.2 If it is, do not wake it
            needWake = mMessages == null || mMessages.target != null;
        }
        // Recycle the barrier message
        p.recycleUnchecked();
        // Do not wake the thread if the MessageQueue is quitting
        if (needWake && !mQuitting)
        {
            nativeWake(mPtr);
        }
    }
}

Message next()
{
    ...
    for (;;)
    {
        ...
        nativePollOnce(ptr, nextPollTimeoutMillis);
        ...
        Message msg = mMessages;
        ...
        // If the MessageQueue starts with a synchronization barrier message,
        // the synchronization barrier is enabled
        if (msg != null && msg.target == null)
        {
            // The do-while loop can only end in two ways:
            // 1. The whole MessageQueue was traversed and no asynchronous message was found
            // 2. An asynchronous message was found
            do
            {
                prevMsg = msg;
                msg = msg.next;
            } while (msg != null && !msg.isAsynchronous());
        }
        // With the synchronization barrier enabled, msg here must be an asynchronous message
        if (msg != null)
        {
            // If its target processing time has not been reached, the thread sleeps
            ...
        }
        else // There is no asynchronous message in the MessageQueue
        {
            // Sleep the thread indefinitely
            nextPollTimeoutMillis = -1;
        }
        ...
    }
}
As you can see, the synchronization barrier is implemented with a message whose target is null. When MessageQueue#next fetches messages, a head message with target == null acts as the flag telling it to prefer asynchronous messages.
Note, however, that the MessageQueue#postSyncBarrier function merely inserts the target == null message into the MessageQueue; the barrier only takes effect once that message reaches the head of the queue and is encountered by MessageQueue#next.
That is why, in the MessageQueue#removeSyncBarrier function, there is no need to wake the thread when the barrier message sits in the middle or at the end of the MessageQueue.
If the MessageQueue is sleeping at that point, the sleep was caused by an ordinary synchronous message's wait time, so there is no need to wake the thread explicitly.
In MessageQueue#removeSyncBarrier, if the barrier message is at the head of the MessageQueue, the barrier is active, and whether to wake the thread has to be considered case by case.
From MessageQueue#next we can see that, with the synchronization barrier enabled, the thread can fall asleep for two reasons:
- It is waiting for an asynchronous message's target time, in which case it wakes up automatically
- The MessageQueue has no asynchronous messages, in which case it sleeps indefinitely
If the thread is sleeping because there is no asynchronous message in the MessageQueue, then all asynchronous messages have already been processed. In this case the thread must be woken so that, with the barrier removed, the MessageQueue can start polling synchronous messages again.
If the thread is sleeping because of an asynchronous message's wait time, then after removing the barrier message we check whether the MessageQueue is still behind another synchronization barrier.
If it is not, the thread is woken up, and the MessageQueue starts polling the synchronous messages that precede the asynchronous one. If no message precedes the asynchronous message, the thread keeps sleeping, because that message's target execution time has still not arrived.
If another synchronization barrier is still in place, the thread is not woken and continues to sleep.
Synchronization barriers are rarely used in daily development, but they appear frequently in the system source code wherever certain messages must be handled first. For example, the ViewRootImpl#scheduleTraversals function posts a synchronization barrier so that the frame traversal message gets priority when the screen is refreshed.
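For completeness, the sketch below shows how a barrier could be toggled from app code; postSyncBarrier/removeSyncBarrier are hidden APIs (accessed via reflection here) and fall under non-SDK interface restrictions, so treat this purely as an aid to understanding, not something to ship:

try
{
    // Looper#getQueue is public since API 23; Method comes from java.lang.reflect
    MessageQueue queue = Looper.getMainLooper().getQueue();

    Method post = MessageQueue.class.getMethod("postSyncBarrier");
    int token = (int) post.invoke(queue);    // synchronous messages are now held back

    // ... only asynchronous messages are dispatched while the barrier is active ...

    Method remove = MessageQueue.class.getMethod("removeSyncBarrier", int.class);
    remove.invoke(queue, token);             // synchronous messages flow again
}
catch (ReflectiveOperationException e)
{
    // Hidden-API restrictions or signature changes can land here on some OS versions
}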
The interview questions
How many handlers are there in a thread?
You can create as many Handlers as you like. A Handler simply packages the messages the user sends; it plays the role of a producer.
How many loopers are there in a thread? How to guarantee?
Only one Looper object can be bound to a thread.
Each thread keeps a reference to its own bound Looper object in its ThreadLocalMap, and the key is Looper.sThreadLocal.
This field is static final, so each thread saves only one Looper object reference.
When Looper#prepare is called to create a Looper object, it determines whether the current thread has already saved a reference to the Looper object. If it has been saved, it is a secondary creation and an exception is thrown.
What causes Handler memory leaks? Why don't other inner classes have the same problem?
Handler memory leaks come from two things together: a Handler can post delayed messages, and each Message holds a reference to its Handler.
A Handler object normally lives through the lifecycle of one of the four major components, but because a Handler can post delayed messages, some messages may still be unprocessed when the component's lifecycle ends.
Since the Message holds the Handler object, and a Handler written as a non-static inner or anonymous class holds the enclosing component object, the GC cannot reclaim the component, which causes a memory leak.
Other inner classes we use also hold their outer class objects, but their instances usually do not outlive the component; when the component's lifecycle ends, the component object and the inner-class objects are collected together, so there is no leak.
The most effective way to solve this is to define the Handler as a static inner class that holds a weak reference to the component.
private static class SafeHandler extends Handler
{
    private WeakReference<Activity> thiz;

    public SafeHandler(Activity activity)
    {
        thiz = new WeakReference<>(activity);
    }

    @Override
    public void handleMessage(final Message msg)
    {
        Activity activity = thiz.get();
        if (activity != null)
        {
            // handleMessage here is assumed to be a method you define in your own Activity
            activity.handleMessage(msg);
        }
    }
}
Why can we create a Handler directly on the main thread? What preparation does a Handler need on a child thread?
Creating a Handler requires an initialized Looper: call Looper#prepare to create the Looper object for the thread, and then call Looper#loop to start polling the MessageQueue.
The main thread's Looper is already initialized in ActivityThread#main, so we do not need to initialize it ourselves when creating a Handler on the main thread. If we create a Handler on a child thread, however, we must initialize a Looper for that thread first, as shown in the sketch below.
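A convenient way to do this is Android's HandlerThread, which performs the prepare()/loop() steps internally; a minimal sketch:

// HandlerThread does Looper.prepare() and Looper.loop() for you
HandlerThread workerThread = new HandlerThread("worker");
workerThread.start();  // must be started before asking for its Looper

Handler workerHandler = new Handler(workerThread.getLooper())
{
    @Override
    public void handleMessage(@NonNull Message msg)
    {
        // Handled on the "worker" thread
    }
};

workerHandler.sendEmptyMessage(1);

// When the worker is no longer needed:
workerThread.quitSafely();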
What happens to a child thread's Looper when its MessageQueue has no messages? What is the point of that?
When there are no messages in the MessageQueue, the thread bound to the Looper is put to sleep, which frees the CPU and improves CPU utilization.
The specific message sleep and message wake mechanisms are described earlier.
Since multiple Handlers can add messages to a single MessageQueue (each Handler may send from a different thread), how is thread safety guaranteed internally? What about fetching messages?
MessageQueue wraps its message access in synchronized blocks, locking on the MessageQueue.this object, so the access operations on the same MessageQueue object are mutually exclusive. Fetching messages is protected in the same way, which guarantees thread safety.
How should we create Message when we use it?
You can create one directly with new Message() or with the Message#obtain function, but Message#obtain is recommended.
Because Message#obtain uses the Message recycling mechanism, whereas creating Messages directly with new can eventually lead to memory jitter and OOM.
Why doesn't Looper's infinite loop freeze the application (ANR)?
First, let's be clear about what causes an ANR:
- KeyDispatchTimeout: Input events (touch events, etc.) are not processed within 5s
- BroadcastTimeout:
- Foreground Broadcast: The onReceive function does not finish executing within 10 seconds
- Background Broadcast: The onReceive function does not finish executing within 60 seconds
- ServiceTimeout:
- Foreground Service: Lifecycle functions such as onCreate, onStart, and onBind do not finish executing within 20 seconds
- Background Service: Lifecycle functions such as onCreate, onStart, and onBind do not finish executing within 200 seconds
- ContentProviderTimeout: The ContentProvider does not complete the current transaction within 10 seconds
Android's four major components are hosted by AMS. The communication bridge between the AMS process and the App process is ActivityThread, and ActivityThread runs on the main thread's Looper, which endlessly polls the MessageQueue.
The Looper loop is the engine that keeps the four major components running.
An ANR is caused by a component's lifecycle or event handling timing out; the Looper loop itself has nothing to do with ANR.
Why does MessageQueue sleep not cause ANR?
MessageQueue sleeps in two ways:
- There are no messages in the queue, and the thread sleeps permanently
- The target processing time of the message is not reached, and the sleeping thread waits for the time and can wake up automatically
In either case, the conditions for ANR generation are not met.
Are you familiar with the Handler synchronization barrier mechanism? How is that done?
A synchronization barrier blocks synchronous messages and only lets asynchronous messages through. Most of the messages we send in our daily development are synchronous messages.
Handler's synchronization barrier is implemented with messages whose target field is null.
When MessageQueue#next finds a message with a null target at the head of the queue, the barrier takes effect and only asynchronous messages in the MessageQueue are fetched.
The specific synchronization barrier mechanism is explained earlier.
Conclusion
The knowledge around Handler is not difficult in itself; the hard part is understanding why Handler is designed this way and applying those ideas to our own development.
That requires reading the source code not just to follow the code flow, but also to think about why the code is designed the way it is and what each piece is for.