1. Introduction to the Handler message mechanism
Java level: Handler.java, Looper.java, ThreadLocal.java, MessageQueue.java
C++ level: android_os_MessageQueue.cpp (NativeMessageQueue), Looper.cpp
1.1 Handler.java
The official documentation's description of Handler
Android Developers Docs | Handler
A Handler allows you to send and process Message and Runnable objects associated with a thread’s MessageQueue. Each Handler instance is associated with a single thread and that thread’s message queue. When you create a new Handler it is bound to a Looper. It will deliver messages and runnables to that Looper’s message queue and execute them on that Looper’s thread.
There are two main uses for a Handler: (1) to schedule messages and runnables to be executed at some point in the future; and (2) to enqueue an action to be performed on a different thread than your own.
Scheduling messages is accomplished with the post(Runnable), postAtTime(java.lang.Runnable, long), postDelayed(Runnable, Object, long), sendEmptyMessage(int), sendMessage(Message), sendMessageAtTime(Message, long), and sendMessageDelayed(Message, long) methods. The post versions allow you to enqueue Runnable objects to be called by the message queue when they are received; the sendMessage versions allow you to enqueue a Message object containing a bundle of data that will be processed by the Handler’s handleMessage(Message) method (requiring that you implement a subclass of Handler).
When posting or sending to a Handler, you can either allow the item to be processed as soon as the message queue is ready to do so, or specify a delay before it gets processed or absolute time for it to be processed. The latter two allow you to implement timeouts, ticks, and other timing-based behavior.
When a process is created for your application, its main thread is dedicated to running a message queue that takes care of managing the top-level application objects (activities, broadcast receivers, etc) and any windows they create. You can create your own threads, and communicate back with the main application thread through a Handler. This is done by calling the same post or sendMessage methods as before, but from your new thread. The given Runnable or Message will then be scheduled in the Handler’s message queue and processed when appropriate.
It means:
- Handler is a utility class that sends and processes Message and Runnable objects on a thread-associated MessageQueue
- When a new Handler is created, it is bound to a Looper. The Handler delivers Messages and Runnables to that Looper's MessageQueue and ultimately executes them on the Looper's thread.
- Handler implements inter-thread communication through shared memory. A Looper is created inside a thread and isolated from other threads' Loopers by ThreadLocal; in other words, as long as you can obtain a thread's Looper, you can send messages to that thread. The MessageQueue is created when the Looper is constructed, and a Handler that binds a Looper holds both mLooper and mLooper.mQueue. When another thread sends a message through the Handler, the message is enqueued on mLooper.mQueue, the Looper thread's message queue, completing one round of inter-thread communication.
- Handler has two main uses:
  - Deferred execution: schedule Messages and Runnables to run at some point in the future
  - Thread switching: add an operation from another thread onto the current thread's MessageQueue, where the current thread receives and processes it
- The Handler post family of methods enqueues Runnable objects so that they are called back when their messages are dequeued
- The Handler send family of methods delivers a Message to the Handler's handleMessage(Message) method
- When posting or sending to a Handler, the item can be processed as soon as the MessageQueue is ready, or after a specified delay, or at an absolute time
- The main thread's Looper is started by default when the program runs, and it already has a corresponding MessageQueue, which manages the top-level application objects (Activities, Broadcast Receivers, etc.) and any windows they create. During development we can create our own worker threads and communicate with the main thread through a Handler. For example, a worker thread downloads an image and sends the image data through a Handler bound to the main-thread Looper, and the main thread then displays it in an ImageView.
1.2 Looper.java
The official documentation's explanation of Looper
Android Developers Docs | Looper
Class used to run a message loop for a thread. Threads by default do not have a message loop associated with them; to create one, call prepare() in the thread that is to run the loop, and then loop() to have it process messages until the loop is stopped. Most interaction with a message loop is through the Handler class.
It means:
- Looper is a message looper designed to run in a thread. By default a thread has no message looper associated with it, so one must be created manually. To run a message loop in a thread, call Looper.prepare() to create a Looper; it is cached in a ThreadLocal to isolate it from other threads' Loopers, and each thread may create only one Looper. Then call Looper.loop() to start the loop that fetches and processes messages from the MessageQueue until the loop is stopped by calling Looper.quitSafely() or Looper.quit(), which stop the message looper and recycle the pending messages.
- Most interaction with the message looper is through the Handler class.
- What makes the main thread's Looper special: in Android, the main thread's message loop is started by default when an application runs. prepareMainLooper(), in addition to calling prepare() to create the instance, also caches the main thread's message Looper in a static variable, private static Looper sMainLooper, which is used to obtain the main thread's Looper from non-main threads.
1.3 ThreadLocal.java
Official documentation for ThreadLocal
Android Developers Docs | ThreadLocal
This class provides thread-local variables. These variables differ from their normal counterparts in that each thread that accesses one (via its get or set method) has its own, independently initialized copy of the variable. ThreadLocal instances are typically private static fields in classes that wish to associate state with a thread (e.g., a user ID or Transaction ID).
For example, the class below generates unique identifiers local to each thread. A thread's id is assigned the first time it invokes ThreadId.get() and remains unchanged on subsequent calls.
Each thread holds an implicit reference to its copy of a thread-local variable as long as the thread is alive and the ThreadLocal instance is accessible; after a thread goes away, all of its copies of thread-local instances are subject to garbage collection (unless other references to these copies exist).
It means:
- ThreadLocal creates thread-local variables, which can be accessed only by the current thread and cannot be read or modified by other threads.
- Since API level 24, each Thread object holds a reference to a ThreadLocalMap whose keys are ThreadLocal objects and whose values are the cached values. Fetching a local variable from a ThreadLocal effectively does Thread.currentThread().threadLocals.getEntry(this).value (the lookup is indexed using the ThreadLocal's threadLocalHashCode), so you can only obtain the value stored for the current ThreadLocal in the current thread.
- As long as a thread is alive and the ThreadLocal instance is accessible, the thread holds an implicit reference to its copy of the thread-local variable; after the thread dies, all of its copies of thread-local instances become eligible for garbage collection (unless other references to those copies exist).
- The Looper class takes advantage of ThreadLocal to ensure that at most one Looper object exists per thread.
- Before API level 24, Android's ThreadLocal class stored data in a different container from the JDK's: a Values object backed by an Object[] table. From API level 24 onward, the storage container became the JDK-style ThreadLocalMap backed by an Entry[] table.
1.4 MessageQueue.java
Official documentation for MessageQueue
Android Developers Docs | MessageQueue
Low-level class holding the list of messages to be dispatched by a Looper. Messages are not added directly to a MessageQueue, but rather through Handler objects associated with the Looper. You can retrieve the MessageQueue for the current thread with Looper.myQueue().
It means:
- MessageQueue is the list of messages that a Looper dispatches. Messages are not added directly to the MessageQueue but via Handler objects associated with the Looper.
- Looper.myQueue() returns the MessageQueue object associated with the current thread. The method must be called from a thread on which a Looper is running; otherwise a null pointer exception occurs.
- Although MessageQueue is called a queue, it is actually an ordinary public final class whose entire queue operation is implemented as a singly linked list encapsulated behind the head-node field Message mMessages.
1.5 Message.java
The official documentation explains Message
Android Developers Docs | Message
Defines a message containing a description and arbitrary data object that can be sent to a Handler. This object contains two extra int fields and an extra object field that allow you to not do allocations in many cases.
While the constructor of Message is public, the best way to get one of these is to call Message.obtain() or one of the Handler.obtainMessage() methods, which will pull them from a pool of recycled objects.
It means:
- The Message class defines a message, containing a description and an arbitrary data object, that can be sent to a Handler. The object also carries two extra int fields (arg1, arg2) and an extra object field (obj), which in many cases let you deliver data to the target thread without extra allocations.
- Although Message's constructor is public, the best way to obtain one is to call Message.obtain() or Handler.obtainMessage(), which reuse recycled Message instances from the message pool. This avoids the memory problems that frequent creation and destruction of Message instances would cause when message traffic is heavy.
1.6 Some prerequisite knowledge before reading the C++ code
Analysis of the Linux epoll mechanism
select, poll, and epoll are all mechanisms for I/O multiplexing: a process monitors multiple file descriptors and, once a descriptor is ready (usually for reading or writing), is notified so the program can perform the corresponding read/write operation. select, poll, and epoll are nonetheless synchronous I/O, because the process itself performs the read or write after the readiness event, meaning the read/write step still blocks.
- select is a system call that works together with four macros: FD_ZERO, FD_SET, FD_CLR, FD_ISSET
- poll is Unix's reimplementation of select; its main difference is that it has no fixed limit on the number of file descriptors
- epoll is a module implemented by the kernel's file system, consisting of three system calls: epoll_create, epoll_ctl, epoll_wait
- epoll brings two benefits that significantly improve performance:
  - File descriptors are registered once with epoll_ctl(). Rather than having the kernel scan every monitored descriptor each time the process polls, the kernel uses a callback mechanism to mark a descriptor ready as soon as its event occurs, and epoll_wait() is notified
  - A call to epoll_wait() does not return the whole descriptor set, but a value representing the number of ready file descriptors, which are then read from an events array filled in by epoll_wait(). This avoids the overhead of copying large numbers of file descriptors on every call
**epoll** is a scalable I/O event notification mechanism in the Linux kernel. Debuting in Linux 2.5.44, it was designed to replace the existing POSIX select and poll functions and achieve better performance in programs that manipulate large numbers of file descriptors (for example, the old functions run in O(n) time while epoll runs in O(log n)). Like poll, epoll listens for events on multiple file descriptors.
Under the hood, epoll is built from configurable kernel objects that are exposed to user space as file descriptors. epoll keeps the monitored file descriptors in a red-black tree (RB-tree).
When an event is registered on an epoll instance, epoll adds it to the instance's red-black tree and registers a callback function that adds the event to a ready list when it occurs.
epoll_create(int size) creates an epoll handle. The function returns a new epoll file descriptor through which all subsequent operations are performed; when finished, close it with close().
File descriptor
A file descriptor is formally a non-negative integer. In practice it is an index into the table of open files that the kernel maintains for each process. When a program opens an existing file or creates a new one, the kernel returns a file descriptor to the process.
NDK analysis
To develop applications on Android, Google provides two development packages: the SDK and the NDK.
The Android NDK (Native Development Kit) is a set of tools that let you implement parts of an application in native languages such as C and C++. This helps you reuse code bases written in those languages when developing certain kinds of applications.
Typically, C/C++ code is compiled into a .so file with the NDK toolchain and then called from Java.
The NDK is generally not recommended because it adds development complexity, but it can be very valuable for specific needs, such as:
1. Porting applications between platforms
2. Reusing existing libraries, or providing your own libraries for reuse
3. Improving performance in some cases, especially in computationally intensive applications such as games
4. Using third-party libraries, many of which are written in C/C++, such as FFmpeg
5. Avoiding dependence on the design of the Dalvik virtual machine
6. Code protection: an APK's Java-layer code is easy to decompile, whereas C/C++ libraries are much harder to decompile
JNI (Java Native Interface) is the Java feature for calling native languages. JNI lets Java interact with C/C++ modules: you can call C/C++ code from Java, and Java code from C/C++. Since JNI is part of the JVM specification, JNI programs run on any Java virtual machine that implements the JNI specification.
Calling C/C++ from Java is native to the Java language, not something Android created; an ordinary Java program using the JNI standard may differ somewhat from Android's usage, and Android's JNI is simpler.
JNIEnv analysis
The execution environment of the Java language is the Java Virtual Machine (JVM), which is actually a process in the host environment. Each JVM has a JavaVM structure in the native environment, returned when the JVM is created.
JavaVM is the representative of the Java virtual machine at the JNI layer; there is only one JavaVM structure globally in JNI. It encapsulates a set of function pointers (a function table) that provide the interface for operating on the JVM itself.
JNIEnv is the execution environment of the current Java thread. Each JVM has one JavaVM structure, but a JVM may create many Java threads, and each thread has its own JNIEnv structure, stored in thread-local storage (TLS). The JNIEnv of different threads therefore differs and cannot be shared between them. The JNIEnv structure is also a function table, used in native code to manipulate Java data or call Java methods; in other words, as long as you hold a JNIEnv in your native code, you can call Java code from it.
JNIEnv and JavaVM
- JavaVM: the representative of the Java virtual machine at the JNI layer; JNI has only one globally
- JNIEnv: JavaVM's representative within a thread, one per thread; JNI may have many JNIEnvs
JNIEnv creation and release are implemented differently in C and C++
Common functions associated with JNIEnv
- jobject NewObject(JNIEnv *env, jclass clazz, jmethodID methodID, ...): the clazz argument names the class whose object you want to create, and methodID identifies the constructor to use to create it
- jstring NewString(JNIEnv *env, const jchar *unicodeChars, jsize len): env is the JNI interface pointer; unicodeChars points to a Unicode string; len is its length. The return value is a Java string object, or NULL if the string cannot be constructed
- ArrayType New<Type>Array(JNIEnv *env, jsize length): given a length, returns an array of the corresponding Java primitive type
JNI references
A reference is generated when an object created from a Java virtual machine is passed to C/C++ code. According to Java’s garbage collection mechanism, the presence of a reference does not trigger garbage collection of the Java object to which the reference refers.
There are three types of references defined in the JNI specification: Local Reference, Global Reference, and Weak Global Reference. The differences are as follows:
Local references are typically created and used within a function; while they exist they prevent the GC from reclaiming the referenced object. NewObject, for example, returns a local reference to the created instance; it is released with the DeleteLocalRef function.
Global references can be used across methods and threads until the developer explicitly releases them. Like a local reference, a global reference guarantees that the referenced object is not collected by the GC before the reference is freed. Unlike local references, the only function that can create a global reference is NewGlobalRef, and releasing one requires the DeleteGlobalRef function.
Weak global references, like global references, must be created and deleted by the programmer and can be used across methods and threads. Unlike global references, however, a weak global reference does not prevent the GC from collecting the object it refers to, so that object may no longer exist or may already have been reclaimed.
C++ type conversion
In C, the general form of a cast is (type-specifier) expression, which converts the value of the expression to the type named by the type specifier. C++ is compatible with C, so C-style casts also work in C++. In addition, C++ provides four cast operators of its own:
- static_cast<type>(expression)
- dynamic_cast<type>(expression)
- const_cast<type>(expression)
- reinterpret_cast<type>(expression)
The reinterpret_cast operator does not change the value of its operand; it merely reinterprets the object's bit pattern as another type.
The function of the double colon :: in C++
The first type, class scope, is used to identify the variables and functions of a class
Human::setName(char* name);
The second, namespace scope, is used to specify which namespace the class or function belongs to
std::cout << "Hello World" << std::endl;
C++ extern keyword resolution
extern can be placed before a variable or function to indicate that its definition lives in another file, prompting the compiler to look for the definition in another module when it encounters the name. extern is also used for linkage specification (for example, extern "C").
C++ header and source file parsing
Header file (.h):
Contains class declarations (including declarations of the class's members and methods), function prototypes, #define constants, and so on, but generally no concrete implementations.
When writing a header file, add include-guard preprocessor directives at the beginning and end:
#ifndef CIRCLE_H
#define CIRCLE_H
// Your code goes here
#endif
Source file (.cpp) :
The source file contains the concrete code implementing the functions declared in the header. Note that it must #include its own header and any other headers it uses.
RefBase class parsing in Android JNI
In the Android source code you often see type definitions such as sp and wp, which are Android's smart pointers. A smart pointer is a C++ concept that solves the problem of automatically releasing objects through reference counting.
Android's smart pointer source code lives in two files: RefBase.h and RefBase.cpp.
Android defines two smart pointer types: the strong pointer sp (strong pointer) and the weak pointer wp (weak pointer).
Strong pointers are smart pointers in the usual sense: they use reference counting to record how many users are using an object, and when all users give up their references, the object is destroyed automatically.
Weak pointers also point to an object, but they only record the object's address and cannot be used to access it, i.e. they cannot call the object's member functions or touch its member variables. To access the object a weak pointer refers to, you must first promote the weak pointer to a strong pointer (via the promote() method provided by the wp class). If the object has already been destroyed, promote() returns a null pointer rather than the address of an object destroyed elsewhere, thus avoiding an invalid memory access.
For a class's objects to be referenced through smart pointers, the class must satisfy two conditions:
- The class is a direct or indirect subclass of the base class RefBase;
- The class must define a virtual destructor, i.e. its destructor is declared as: virtual ~MyClass();
An ordinary pointer is declared as MyClass* p_obj; a smart pointer as sp<MyClass> p_obj;
wp<MyClass> wp_obj = new MyClass(); sp<MyClass> p_obj = wp_obj.promote();
C++ namespace keyword parsing
A namespace is a named scope. The programmer can define namespaces as needed to separate certain global entities from others and thereby resolve name conflicts, e.g. namespace ns1 { ... }.
namespace is a C++ keyword used to declare the namespace of a block of code. The native code at the bottom of AOSP declares the namespace "android", treating all of the android code as one project. The advantage is that it distinguishes AOSP's own code from third-party open-source code and avoids symbol-name collisions.
C++ virtual keyword parsing
The C++ keyword virtual has two main uses: virtual functions and virtual base classes. Both are introduced below.
1. Virtual functions
Declaring an overridable member function virtual in the base class means that when the function is called through a base-class pointer or reference, the function invoked depends on the dynamic type of the object pointed to, not on the static type of the pointer. For example, given two pointers of the base-class type, one pointing to a base-class instance and one to a derived-class instance, calling the overridden method through the second pointer executes the derived class's version. This is the counterpart of Java's built-in polymorphism.
2. Virtual base classes
In C++ a derived class can inherit from more than one base class. This raises a question: if several of those bases themselves inherit from the same base class, must the derived class contain that common base multiple times? Virtual base classes solve this problem: a class derived from multiple classes that virtually inherit a common base inherits only a single instance of that base.
When a class inherits two or more same-named functions along different paths and the base is not virtual, calling one without qualifying the class name is ambiguous. With virtual base classes this does not necessarily produce ambiguity: the compiler calls the member function on the "shortest" path in the inheritance hierarchy. This rule does not override member access control; you cannot reach a public lower-priority function of the same name when the higher-priority one is private.
C++ pure virtual functions
A pure virtual function is a special kind of virtual function, with the general form: virtual <type> <function name>(<parameter list>) = 0;
In many cases a virtual function cannot be meaningfully implemented in the base class, so it is declared as a pure virtual function and its implementation is left to the base class's derived classes. Pure virtual functions let a class carry an operation's name without its body, leaving the details for derived classes to define when they inherit. A class containing pure virtual functions is called an abstract class: it cannot declare objects and serves only as a base class for derivation. Unless a derived class fully implements all the pure virtual functions of its base, it too is abstract and cannot instantiate objects.
Some articles introducing the Native-layer implementation of the Handler mechanism
Android Messaging 2-Handler(Native Layer)
Source code interpretation of the epoll kernel mechanism
MessageQueue – C++ world support for Message
Android: NativeMessageQueue and Looper.cpp
Select /poll/epoll Comparison analysis
Tencent technology | ten questions to understand Linux epoll works
Linux IO mode and select, poll, epoll details
1.7 android_os_MessageQueue.cpp
For now, the Native layer of MessageQueue can only be understood through the C++ source code
android_os_MessageQueue.cpp
**android_os_MessageQueue.h** includes the Looper.h header to define the Looper type, and declares the MessageQueue class, which inherits Android's smart-pointer base class: class MessageQueue : public virtual RefBase, holding the looper as protected: sp<Looper> mLooper;. The function android_os_MessageQueue_getMessageQueue() is called from android_view_InputEventReceiver.cpp (no caller can be found on the Handler side); its purpose is to obtain the native MessageQueue behind a Java-layer MessageQueue instance:
extern sp<MessageQueue> android_os_MessageQueue_getMessageQueue(JNIEnv* env, jobject messageQueueObj);
The android_os_MessageQueue.cpp source file includes the Looper.h and android_os_MessageQueue.h headers along with the other headers it needs.
class NativeMessageQueue : public MessageQueue, public LooperCallback — LooperCallback is declared in Looper.h as a class of pure virtual functions, similar to a Java abstract class, and is inherited to implement polymorphism.
NativeMessageQueue initializes the Native-layer Looper object when it is instantiated and processes messages through that Looper, e.g. pollOnce, wake, setFileDescriptorEvents, and so on. Its handleEvent method is the LooperCallback callback: it collects Looper events and then calls the Java-layer MessageQueue's dispatchEvents to look up the file-descriptor record and any state that may have changed; finally the event state is synchronized back to the Native layer.
1.8 Looper.cpp
The Native-layer Looper, as covered by the NDK docs and the C++ source code
Android Developers NDK Docs | Looper.cpp
Looper.cpp
The Looper.h header includes threads.h, android/looper.h, sys/epoll.h and other headers.
struct ALooper {};: ALooper is the concrete type of the NDK's looper, i.e. Looper inherits from ALooper
struct Message {};: the Message structure encapsulates the body of a message sent to the Looper
class LooperCallback : public virtual RefBase {}: the implementation of this interface is SimpleLooperCallback
class MessageHandler : public virtual RefBase {};: its implementation class is WeakMessageHandler, which is only responsible for processing a Message and no longer for inserting it into a queue; enqueueing is handled directly by Looper
class Looper : public ALooper, public RefBase {}: a polling loop that monitors file-descriptor events, optionally dispatching to callbacks. The implementation uses epoll() internally. Its declared methods include pollOnce, wake, addFd, sendMessage, prepare, setForThread, and getForThread
The Looper.cpp source file contains the implementations of WeakMessageHandler, SimpleLooperCallback, and the Looper class
The Native-layer Looper is essentially a wrapper around epoll and has nothing to do with the Java-layer Looper
The Native-layer Looper is also tied to a thread
When the Native-layer Looper is created, epoll_create() is called to create the epoll instance; when pollOnce is invoked from the Java layer, epoll_wait() is called to wait for messages/events to be triggered
Looper also has sendMessage and sendMessageDelayed member functions, but both are ultimately handled by sendMessageAtTime
Just as in the Java layer, messages in C++ are sorted by trigger time. The difference is that in the C++ world each Message is additionally wrapped in a MessageEnvelope object
When the Java layer executes Looper.quit(), it finally executes dispose() to release the Native-layer resources: nativeMessageQueue->decStrong(env); once the NativeMessageQueue's strong reference count drops to 0, the object is destroyed
2. Details about key points related to the Handler message mechanism
2.1 Handler: details about key points
The Handler class is the entry and exit point of the whole message mechanism, in keeping with the Law of Demeter: objects that appear as a class's member variables or as method parameters and return values are "friends" of the class, while classes that appear only inside method bodies are not. The Handler class shields us from direct access to MessageQueue and Looper; we only care about what goes in and what comes out.
Handler's seven post methods and eight send methods are its public message-sending API, and each of them is called somewhere in the Android SDK source, which is why Handler is such an important member of the Android framework.
public class Handler {
@UnsupportedAppUsage
final Looper mLooper; // Message loop
final MessageQueue mQueue; // Message queue
@UnsupportedAppUsage
final Callback mCallback; // Message callback
final boolean mAsynchronous; // Async control variables
// constructor
public Handler(@Nullable Callback callback, boolean async) {
mLooper = Looper.myLooper();
mQueue = mLooper.mQueue;
mCallback = callback;
mAsynchronous = async;
}
// Message entry
private boolean enqueueMessage(@NonNull MessageQueue queue, @NonNull Message msg, long uptimeMillis) {
msg.target = this; // The handle to the current Handler is passed to Message
msg.workSourceUid = ThreadLocalWorkSource.getUid(); // Returns the UID of the code currently executing on this thread
if (mAsynchronous) {
msg.setAsynchronous(true); // Asynchronous messages
}
return queue.enqueueMessage(msg, uptimeMillis); // Messages are queued. See MessageQueue for details
}
// Message exit
public void dispatchMessage(@NonNull Message msg) {
    if (msg.callback != null) {
        handleCallback(msg);
    } else {
        if (mCallback != null) {
            if (mCallback.handleMessage(msg)) {
                return;
            }
        }
        handleMessage(msg);
    }
}
}
Constructor family methods
@RequiresApi(Build.VERSION_CODES.P)
fun createHandler(type: Int): Handler {
    return when (type) {
        0x01 -> Handler()                                          // Deprecated from API level 30
        0x02 -> Handler(CallBack())                                // Deprecated from API level 30
        0x03 -> Handler(Looper.myLooper())
        0x04 -> Handler(Looper.myLooper(), CallBack())
        0x05 -> Handler(false)                                     // UnsupportedAppUsage
        0x06 -> Handler(CallBack(), false)                         // TODO: cannot be called; crashes
        0x07 -> Handler(Looper.myLooper(), CallBack(), false)      // UnsupportedAppUsage
        0x08 -> Handler.createAsync(Looper.myLooper())             // Call requires API level 28
        0x09 -> HandlerCompat.createAsync(Looper.myLooper())
        0x0a -> Handler.createAsync(Looper.myLooper(), CallBack()) // Call requires API level 28
        0x0b -> HandlerCompat.createAsync(Looper.myLooper(), CallBack())
        0x0c -> Handler.getMain()                                  // Lazy singleton
        0x0d -> Handler.mainIfNull(Handler(Looper.myLooper()))     // TODO: cannot be called; crashes
        else -> Handler.getMain()
    }
}
- If no Looper is explicitly specified, the Handler is constructed with `mLooper = Looper.myLooper()`, associating it with the current thread's Looper.
- If the thread has not created a Looper, construction throws: RuntimeException: Can't create handler inside thread Thread[Thread-3,5,main] that has not called Looper.prepare()
- Creation modes 01 and 02 are deprecated because implicitly choosing a Looper during Handler construction can cause bugs, for example:
  - If the thread has not created a Looper, construction raises a RuntimeException.
  - The line `mLooper = Looper.myLooper()` is not thread-safe.
- Alternatives to implicitly picking a Looper (for example, when creating a main-thread Handler):
  - ContextCompat.getMainExecutor(context) — use an Executor
  - View.getHandler() — reuse a Handler created by the system
  - Handler.getMain() / Handler(Looper.getMainLooper()) — specify the Looper explicitly
- CallBack is the Handler.Callback interface, whose handleMessage method is invoked as a callback. Only the send methods reach this callback, and its return value also determines whether the Handler's own handleMessage member method runs.
- By default, all messages handled by a Handler are synchronous. Constructors 05, 06, 07, 08, 09 and 0a create Handlers that send asynchronous messages. Asynchronous messages do not need to be globally ordered the way synchronous messages are: they are not blocked by the synchronization barriers inserted via MessageQueue.postSyncBarrier().
- The HandlerCompat factory methods are version-compatible.
- @UnsupportedAppUsage: this annotation marks members that can be accessed but that Android neither encourages nor supports accessing; such fields and methods may be restricted, changed, or removed in a future version of Android.
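The "has not called Looper.prepare()" failure above comes from the ThreadLocal pattern behind Looper.myLooper(). Below is a minimal sketch of that pattern — MiniLooper is an illustrative stand-in, not the Android class: each thread must call prepare() first, otherwise a Handler-style constructor has no Looper to bind to and must throw.

```java
public class MiniLooper {
    private static final ThreadLocal<MiniLooper> sThreadLocal = new ThreadLocal<>();

    private MiniLooper() {}

    // Mirrors Looper.prepare(): one looper per thread, created lazily.
    public static void prepare() {
        if (sThreadLocal.get() != null) {
            throw new RuntimeException("Only one Looper may be created per thread");
        }
        sThreadLocal.set(new MiniLooper());
    }

    // Mirrors Looper.myLooper(): returns this thread's looper or null.
    public static MiniLooper myLooper() {
        return sThreadLocal.get();
    }

    // Mirrors the implicit-Looper Handler constructor's failure mode.
    public static MiniLooper bindCurrentThread() {
        MiniLooper looper = myLooper();
        if (looper == null) {
            throw new RuntimeException(
                    "Can't create handler inside thread that has not called Looper.prepare()");
        }
        return looper;
    }
}
```

Because the registry is a ThreadLocal, the same code run on two threads sees two independent loopers — which is also why `mLooper = Looper.myLooper()` picks up whichever thread happens to run the constructor.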
Post series methods
@RequiresApi(Build.VERSION_CODES.P)
fun postMessage(type: Int): Boolean {
    val handler = Handler(Looper.myLooper())
    return when (type) {
        0x01 -> handler.post {}
        0x02 -> handler.postDelayed({}, 1024)
        0x03 -> handler.postDelayed({}, 0, 1024)
        0x04 -> handler.postDelayed({}, Any(), 1024)
        0x05 -> HandlerCompat.postDelayed(handler, {}, Any(), 1024)
        0x06 -> handler.postAtTime({}, 1024)
        0x07 -> handler.postAtTime({}, Any(), 1024)
        0x08 -> handler.postAtFrontOfQueue {}
        0x09 -> handler.postDelayed(200L) {}.equals("")
        0x0a -> handler.postAtTime(200L) {}.equals("")
        else -> handler.post {}
    }
}

// The post methods all eventually reach this private method
private boolean enqueueMessage(MessageQueue queue, @NonNull Message msg, long uptimeMillis)
- The post-series methods wrap the Runnable in a Message and add it to handler.mLooper.mQueue; the Runnable then executes on the thread the mLooper belongs to.
- They return true if the Runnable was successfully placed in the mQueue, and false on failure — usually because the mLooper processing the mQueue is exiting. A true result does not guarantee the Runnable will be processed: if the mLooper exits before the delay elapses, the message is discarded.
- "Run after delayMillis milliseconds" is measured against SystemClock.uptimeMillis(); time spent in deep sleep adds extra delay to execution.
- The token variants wrap the Runnable in a Message and store the token object in the Message's obj field. The token can later be used to find the message in the queue: calling handler.removeCallbacksAndMessages(token) cancels the pending Runnable.
- SystemClock.uptimeMillis(): the number of milliseconds since boot, not counting time spent in deep sleep.
- System.currentTimeMillis(): the number of milliseconds since 1970-01-01 00:00:00. This value tracks the system wall clock and changes when the system time is changed, so it is unreliable for scheduling.
- The uptimeMillis variants run at a specific absolute point in time; the delayMillis variants run after the given number of milliseconds.
- postAtFrontOfQueue wraps the Runnable in a Message placed at the front of the mQueue, which is processed on the next iteration of the mLooper.
- postDelayed in the HandlerCompat class is the version-compatible variant.
- 09 and 0a are extension functions on Handler declared in Handler.kt, which reorder the parameters so a trailing lambda expression can be used.
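The uptimeMillis/currentTimeMillis distinction above is the key to correct delays: scheduling must use a monotonic clock. A small sketch, using System.nanoTime() as the JVM's monotonic stand-in for SystemClock.uptimeMillis() (names here are illustrative, not Android APIs):

```java
public class ClockBase {
    // Monotonic milliseconds, playing the role of SystemClock.uptimeMillis().
    public static long uptimeMillis() {
        return System.nanoTime() / 1_000_000;
    }

    // Compute an absolute "run at" time the way sendMessageDelayed does:
    // monotonic base + delay, immune to the user changing the wall clock.
    public static long runAt(long delayMillis) {
        return uptimeMillis() + delayMillis;
    }

    public static void main(String[] args) throws InterruptedException {
        long target = runAt(50);
        Thread.sleep(60);
        // The monotonic clock has passed the target; the message would now be due.
        System.out.println(uptimeMillis() >= target);
    }
}
```

If the wall clock (System.currentTimeMillis()) were used instead, setting the device time backwards would silently push every pending delayed message into the future.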
Send series methods
fun sendMessage(type: Int): Boolean {
    val handler = Handler(Looper.myLooper())
    return when (type) {
        0x01 -> handler.sendEmptyMessage(1)
        0x02 -> handler.sendEmptyMessageDelayed(2, 1024)
        0x03 -> handler.sendEmptyMessageAtTime(3, 1024)
        0x04 -> handler.sendMessage(Message.obtain())
        0x05 -> handler.sendMessageDelayed(Message.obtain(), 1024)
        0x06 -> handler.sendMessageAtTime(Message.obtain(), 1024)
        0x07 -> handler.sendMessageAtFrontOfQueue(Message.obtain())
        0x08 -> handler.executeOrSendMessage(Message.obtain())
        else -> handler.sendEmptyMessage(1)
    }
}

// The send methods all eventually reach this private method
private boolean enqueueMessage(MessageQueue queue, @NonNull Message msg, long uptimeMillis)
- The send-series methods deliver a Message to the mQueue; handleMessage executes on the thread the mLooper belongs to.
- executeOrSendMessage: compares the mLooper captured at construction with the Looper cached in the current thread. If they belong to the same thread, the message is dispatched directly (handleMessage runs synchronously); otherwise sendMessage is called.
dispatchMessage
public void dispatchMessage(@NonNull Message msg) {
    if (msg.callback != null) {
        handleCallback(msg);
    } else {
        if (mCallback != null) {
            if (mCallback.handleMessage(msg)) {
                return;
            }
        }
        handleMessage(msg);
    }
}

// Fetch the Runnable object from the Message and call its run method
private static void handleCallback(Message message) {
    message.callback.run();
}

/**
 * You can supply this callback interface when instantiating a Handler to avoid
 * having to implement a Handler subclass.
 */
public interface Callback {
    /** Return true to indicate no further processing is required, i.e. the handleMessage member method is not executed */
    boolean handleMessage(@NonNull Message msg);
}

/** Subclasses must implement this to receive messages. */
public void handleMessage(@NonNull Message msg) {}
- Posted Runnables and sent Messages take different paths: the post route ends in the Runnable's run() method, while the send route ends in handleMessage. Within handleMessage handling, the Callback supplied at construction is consulted before the Handler subclass's own handleMessage.
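The three-level precedence in dispatchMessage can be sketched in plain Java. DispatchOrder below is illustrative (not the Android classes): a posted Runnable wins, then the constructor Callback, and only if the Callback returns false (or is absent) does the subclass's handleMessage run.

```java
public class DispatchOrder {
    interface Callback { boolean handleMessage(String msg); }

    // Returns which level handled the message, mirroring Handler.dispatchMessage.
    static String dispatch(Runnable posted, Callback ctorCallback, String msg) {
        if (posted != null) {           // post(...) path: run the Runnable
            posted.run();
            return "runnable";
        }
        if (ctorCallback != null && ctorCallback.handleMessage(msg)) {
            return "callback";          // Callback returned true: stop here
        }
        return "handleMessage";         // fall through to the subclass method
    }
}
```

Note the asymmetry: a Runnable short-circuits unconditionally, but the Callback only short-circuits when it returns true.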
Remove series methods
fun removeMessage(type: Int): Unit {
val handler = Handler(Looper.myLooper())
handler.sendEmptyMessage(5)
val runnable = {}
val token = Any()
HandlerCompat.postDelayed(handler, runnable, token, 1024)
when (type) {
0x01 -> handler.removeMessages(5)
0x02 -> handler.removeMessages(5, token)
0x03 -> handler.removeEqualMessages(5, token)
0x04 -> handler.removeCallbacksAndMessages(token)
0x05 -> handler.removeCallbacksAndEqualMessages(token)
0x06 -> handler.removeCallbacks(runnable, token)
0x08 -> handler.removeCallbacks(runnable)
else -> handler.removeCallbacks(runnable)
}
}
- The remove-series methods are all wrappers around mQueue methods; their implementation is covered in the MessageQueue section.
2.2 Message Key Points
public final class Message implements Parcelable {
    public int what;
    public int arg1;
    public int arg2;
    public Object obj;
    public Messenger replyTo;

    int flags; // The message flags
    static final int FLAG_IN_USE = 1 << 0;       // In-use flag
    static final int FLAG_ASYNCHRONOUS = 1 << 1; // Asynchronous message flag
    static final int FLAGS_TO_CLEAR_ON_COPY_FROM = FLAG_IN_USE; // Flags cleared in copyFrom()

    public long when;
    Bundle data;
    Handler target;
    Runnable callback;
    Message next;

    public static final Object sPoolSync = new Object();
    private static Message sPool;
    private static int sPoolSize = 0;
    private static final int MAX_POOL_SIZE = 50;
    private static boolean gCheckRecycle = true;
}
int what
Each Handler has its own message-code namespace (two Handlers that send each other the same what will not receive each other's messages — delivery is routed by target), so there is no need to worry about conflicts with other Handlers.
int arg1, arg2
If the message only needs to carry a few integer values, arg1 and arg2 suffice; they are a lighter-weight alternative to setData(Bundle).
Object obj
Used when the message needs to carry an arbitrary object, for example when sending messages across processes.
long when
The time at which this message should run, based on SystemClock.uptimeMillis().
Bundle data
If the message needs to carry several values of different types, wrap them in a Bundle.
Handler target
The Handler that sends and will process the message.
Runnable callback
When a Runnable is posted, this field carries it inside the Message.
Message next
MessageQueue manages the whole queue as a linked list; next links to the following node.
Static Message sPool
Fields associated with the message pool
public static final Object sPoolSync = new Object(); // Class-level lock guarding next, sPool and sPoolSize
private static Message sPool; // Head of the message-pool linked list
private static int sPoolSize = 0; // Current message-pool size
Message next; // Reference to the next node in the message-pool linked list
private static final int MAX_POOL_SIZE = 50; // Maximum number of Message objects stored in the pool
- The message pool is implemented as a singly linked list with sPool as its head. sPool is a static variable whose lifetime matches the class, so the Message objects reachable from it are never reclaimed by the GC; the list therefore serves as an in-memory cache.
- The pool caches at most 50 Message objects (MAX_POOL_SIZE), i.e. the list never grows beyond 50 elements. Once 50 objects are cached, messages that can no longer be cached are left for the GC to reclaim.
- Message objects sitting in the pool have FLAG_IN_USE set (recycleUnchecked sets it); obtain() clears the flag, marking the object available again.
- Both caching a Message and taking one out happen at the head node — the classic head-insertion technique. Inserting and removing at the head is the most efficient operation on a singly linked list: O(1) time.
- When an object is recycled, all of the Message's instance fields are reset to their zero (initial) values.
recycle/recycleUnchecked
Flyweight-style sharing appears throughout Android: the message pool, Activity stack management, and so on.
recycle: resets the Message instance and returns it to the global message pool. After calling this function you must not touch the message again — it has effectively been freed. It is an error to recycle a message that is currently enqueued or being delivered to a Handler.
recycleUnchecked: recycles a Message that may still be marked in use. This method is not public; it is used internally by MessageQueue and Looper.
Recycling process
- Reset every field held by the Message object to its zero value, so the current Message is in the same state as a freshly constructed one.
- Insert the zeroed Message at the head of the pool: if the number of cached objects is below the cap (50), the recycled message's next field is pointed at the current head (sPool) so the rest of the list is not lost; sPool is then pointed at the recycled message; and finally the pool size is incremented by 1. This completes caching of the current Message object.
obtain
The method for taking a Message object out of the message pool held by the Message class (from the head of the list, via sPool) for our use. It is Google's recommended way to create a Message: reusing objects saves memory and the cost of allocating new ones, and reduces GC work.
public static Message obtain() {
    synchronized (sPoolSync) {
        if (sPool != null) {
            Message m = sPool;
            sPool = m.next;
            m.next = null;
            m.flags = 0; // clear in-use flag
            sPoolSize--;
            return m;
        }
    }
    return new Message();
}
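The obtain()/recycle() pair can be reduced to a bounded, head-only free list. MiniPool below is a minimal sketch of that flyweight technique (illustrative names, not Android's Message class): pop from the head on obtain, zero the fields and push onto the head on recycle, and let the GC take objects once the pool is full.

```java
public class MiniPool {
    static final int MAX_POOL_SIZE = 50;
    static final Object sPoolSync = new Object();
    static MiniPool sPool;   // head of the free list
    static int sPoolSize = 0;

    MiniPool next;
    int what;                // example payload field

    // O(1): pop from the head, or allocate if the pool is empty.
    public static MiniPool obtain() {
        synchronized (sPoolSync) {
            if (sPool != null) {
                MiniPool m = sPool;
                sPool = m.next;
                m.next = null;
                sPoolSize--;
                return m;
            }
        }
        return new MiniPool();
    }

    // O(1): zero the fields and push onto the head; once the pool is full,
    // the object is simply dropped and left to the GC.
    public void recycle() {
        what = 0;
        synchronized (sPoolSync) {
            if (sPoolSize < MAX_POOL_SIZE) {
                next = sPool;
                sPool = this;
                sPoolSize++;
            }
        }
    }
}
```

Because both operations touch only the head node, no traversal ever happens — exactly the O(1) head-insertion property the text describes.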
2.3 MessageQueue Key points
Priority Queue
A normal queue is a first-in, first-out data structure: elements are appended at the tail and removed from the head. In a priority queue, each element carries a priority, and on access the element with the highest priority is removed first — "highest priority out first" rather than "first in, first out". Priority queues are usually implemented with a heap.
mMessages in MessageQueue is the head node of a linked list: the physical structure of MessageQueue is a linked list, while its logical structure behaves like a priority queue.
The when field serves as the priority: synchronous messages are inserted and removed in when order. However, Handler messages also include synchronization barriers and idle handlers, and these different element types follow different priorities, so the logical structure of MessageQueue is best described as a custom priority-queue-like structure.
public final class MessageQueue {
    @UnsupportedAppUsage
    private final boolean mQuitAllowed; // false for the main thread: its message queue is not allowed to quit
    @UnsupportedAppUsage
    private long mPtr; // Holds a NativeMessageQueue* so functions can be called on the native object
    @UnsupportedAppUsage
    Message mMessages; // The queue is implemented as a linked list; mMessages is its head node
    private final ArrayList<IdleHandler> mIdleHandlers = new ArrayList<IdleHandler>(); // Registered idle handlers
    private SparseArray<FileDescriptorRecord> mFileDescriptorRecords; // File descriptor records
    private IdleHandler[] mPendingIdleHandlers; // Idle handlers pending execution
    private boolean mQuitting; // Whether the message loop is quitting; remaining messages are no longer processed
    private boolean mBlocked; // Whether next() is blocked in pollOnce() with a non-zero timeout
    @UnsupportedAppUsage
    private int mNextBarrierToken; // Next synchronization-barrier token; incremented per barrier and stored in the message's arg1 field to identify the barrier

    // Native methods involved
    private native static long nativeInit(); // Initializes the native-layer NativeMessageQueue, Looper, epoll, etc.
    private native static void nativeDestroy(long ptr); // Destroys the NativeMessageQueue
    private native void nativePollOnce(long ptr, int timeoutMillis); // Passes timeoutMillis to the epoll_wait() system call; the returned events are traversed, and if an event's fd is the wake-event fd mWakeEventFd, awoken() is called to reset it
    private native static void nativeWake(long ptr); // Wakes the message-polling thread
    private native static boolean nativeIsPolling(long ptr); // Whether the native-layer Looper is idle
    private native static void nativeSetFileDescriptorEvents(long ptr, int fd, int events); // fd event callbacks

    // Constructor
    MessageQueue(boolean quitAllowed) {
        mQuitAllowed = quitAllowed;
        mPtr = nativeInit();
    }
}
Store: enqueueMessage
Sending usually starts from Handler's sendMessage() family. When we call sendMessage() or sendEmptyMessage(), the Handler calls MessageQueue's enqueueMessage() method to add the message to the queue. MessageQueue is not truly a queue structure but a linked list. enqueueMessage() first checks whether the new message's delivery time is later than that of every node currently in the list; if so, the message becomes the last node. It then decides whether the message-polling thread needs to be woken up; if so, NativeMessageQueue's wake() is invoked through the nativeWake() JNI method, which in turn calls native Looper's wake(). That writes a wake token to the wake-event file descriptor via the write() system call, at which point the polling thread listening on that descriptor wakes up.
boolean enqueueMessage(Message msg, long when) {
    synchronized (this) {
        if (msg.isInUse()) {
            throw new IllegalStateException(msg + " This message is already in use.");
        }
        if (mQuitting) {
            msg.recycle();
            return false;
        }

        msg.markInUse(); // flags |= FLAG_IN_USE;
        msg.when = when; // New message time
        Message p = mMessages; // Current head node of the message queue
        boolean needWake; // Whether the polling thread needs to be woken up

        // p == null: queue empty; when == 0: no delay set; when < p.when: earlier than the head
        if (p == null || when == 0 || when < p.when) {
            // The new message becomes the head; wake up the event queue if it is blocked.
            msg.next = p;
            mMessages = msg;
            needWake = mBlocked;
        } else {
            // Insert into the middle of the queue. Usually we do not need to wake the
            // event queue unless there is a synchronization barrier at the head and the
            // message is the earliest asynchronous message in the queue.
            // p.target == null marks a synchronization-barrier message
            needWake = mBlocked && p.target == null && msg.isAsynchronous();
            Message prev;
            // This branch handles delayed messages
            for (;;) {
                prev = p; // Remember the previous node
                p = p.next; // Advance to the next one
                if (p == null || when < p.when) { // Found the insertion point
                    break;
                }
                if (needWake && p.isAsynchronous()) { // An earlier async message exists; no need to wake
                    needWake = false;
                }
            }
            // Insert the new node between prev and p at the found point
            msg.next = p; // invariant: p == prev.next
            prev.next = msg;
        }

        // Wake up via native code if needed
        if (needWake) {
            nativeWake(mPtr); // Wake the thread
        }
    }
    return true;
}
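Stripped of the barrier and wake-up details, the ordering rule above is just a sorted insert into a singly linked list keyed by the absolute when timestamp. TimedQueue is a minimal sketch of that rule (illustrative, not the Android class); it also reports the "became the new head" condition, which is the case where the real code must wake the polling thread because its current wait deadline is now wrong.

```java
public class TimedQueue {
    static class Node {
        final long when;
        Node next;
        Node(long when) { this.when = when; }
    }

    Node head;

    // Returns true when the new message became the head (a wake-up would be needed).
    public boolean enqueue(long when) {
        Node msg = new Node(when);
        Node p = head;
        if (p == null || when == 0 || when < p.when) {
            msg.next = p;      // new head of the queue
            head = msg;
            return true;
        }
        Node prev;
        for (;;) {
            prev = p;
            p = p.next;
            if (p == null || when < p.when) {
                break;         // insertion point found
            }
        }
        msg.next = p;          // splice between prev and p
        prev.next = msg;
        return false;
    }

    public long[] toArray() {
        int n = 0;
        for (Node p = head; p != null; p = p.next) n++;
        long[] out = new long[n];
        int i = 0;
        for (Node p = head; p != null; p = p.next) out[i++] = p.when;
        return out;
    }
}
```

The list stays sorted by when at all times, which is what lets next() compute its poll timeout by looking only at the head.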
Wake up: nativeWake
Looper::wake() writes a 64-bit value of 1 to the wake-event file descriptor (an eventfd; older versions wrote a 'W' character to a pipe), which wakes the blocked message-loop thread.
static void android_os_MessageQueue_nativeWake(JNIEnv* env, jclass clazz, jlong ptr) {
    NativeMessageQueue* nativeMessageQueue = reinterpret_cast<NativeMessageQueue*>(ptr);
    nativeMessageQueue->wake();
}

void NativeMessageQueue::wake() {
    mLooper->wake();
}

void Looper::wake() {
#if DEBUG_POLL_AND_WAKE
    ALOGD("%p ~ wake", this);
#endif
    uint64_t inc = 1;
    // Write the wake token to the eventfd
    ssize_t nWrite = TEMP_FAILURE_RETRY(write(mWakeEventFd.get(), &inc, sizeof(uint64_t)));
}
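The wake mechanism — one thread blocked polling a file descriptor, another thread waking it by writing a token to that descriptor — can be reproduced on the JVM with java.nio instead of eventfd/epoll. WakeDemo below is a sketch under that substitution: a Selector plays the role of epoll, and a Pipe write plays the role of writing to mWakeEventFd.

```java
import java.nio.ByteBuffer;
import java.nio.channels.Pipe;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;

public class WakeDemo {
    // Blocks in Selector.select() until the waker thread writes, or a 5 s
    // safety timeout fires. Returns true only if the write woke the poller.
    public static boolean blockUntilWoken() {
        try {
            Selector selector = Selector.open();
            Pipe pipe = Pipe.open();
            pipe.source().configureBlocking(false);
            pipe.source().register(selector, SelectionKey.OP_READ);

            // The "wake" side: another thread writes one token byte after 100 ms,
            // analogous to Looper::wake() writing to the wake-event fd.
            Thread waker = new Thread(() -> {
                try {
                    Thread.sleep(100);
                    pipe.sink().write(ByteBuffer.wrap(new byte[]{'W'}));
                } catch (Exception ignored) {}
            });
            waker.start();

            int readyCount = selector.select(5000); // returns as soon as the write lands
            waker.join();
            selector.close();
            pipe.sink().close();
            pipe.source().close();
            return readyCount > 0;
        } catch (Exception e) {
            return false;
        }
    }
}
```

The design point is the same as in the native code: the poller never busy-waits, and the writer never needs to know how long the poller intended to sleep — one write invalidates any pending timeout.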
Take: next
public static void loop() {
    final Looper me = myLooper();
    final MessageQueue queue = me.mQueue;
    for (;;) {
        // Looper.loop() spins forever, pulling messages via queue.next()
        Message msg = queue.next(); // might block
    }
}

Message next() {
    // If the message loop has already quit and been disposed, return null.
    final long ptr = mPtr;
    if (ptr == 0) {
        return null;
    }

    int pendingIdleHandlerCount = -1; // -1 only during the first iteration; count of IdleHandlers to run
    int nextPollTimeoutMillis = 0;    // Timeout for the next poll, in millis
    for (;;) {
        // Flush any Binder commands pending in the current thread to the kernel driver.
        // Useful before an operation that may block for a long time, to ensure pending
        // object references are released so the process does not hold objects longer than needed.
        if (nextPollTimeoutMillis != 0) {
            Binder.flushPendingCommands();
        }

        // Block until the next poll deadline:
        //   nextPollTimeoutMillis == -1 : queue empty, no idle handlers; wait indefinitely
        //   nextPollTimeoutMillis == 0  : return immediately (e.g. after running idle handlers)
        //   nextPollTimeoutMillis  > 0  : Math.min(msg.when - now, Integer.MAX_VALUE); wake when the delay ends
        nativePollOnce(ptr, nextPollTimeoutMillis); // Native layer calls epoll_wait

        synchronized (this) {
            // Try to retrieve the next message. Return it if found.
            final long now = SystemClock.uptimeMillis();
            Message prevMsg = null;
            Message msg = mMessages; // Head node
            if (msg != null && msg.target == null) { // Head is a synchronization barrier
                // Blocked by a barrier: find the next asynchronous message in the queue.
                do {
                    prevMsg = msg;
                    msg = msg.next;
                } while (msg != null && !msg.isAsynchronous()); // Skip synchronous messages until an async one is found
            }
            if (msg != null) {
                if (now < msg.when) { // The next message's delay has not elapsed yet.
                    // Set a timeout to wake up when it is ready.
                    nextPollTimeoutMillis = (int) Math.min(msg.when - now, Integer.MAX_VALUE);
                } else {
                    // now >= msg.when: got a message, no longer blocked.
                    mBlocked = false;
                    if (prevMsg != null) { // We skipped past a barrier
                        prevMsg.next = msg.next; // Unlink the msg node
                    } else {
                        mMessages = msg.next;    // Unlink the msg node
                    }
                    msg.next = null; // Detach the msg node
                    if (DEBUG) Log.v(TAG, "Returning message: " + msg);
                    msg.markInUse(); // Mark in-use
                    return msg;
                }
            } else {
                // Queue empty: wait indefinitely.
                nextPollTimeoutMillis = -1;
            }

            // No messages left, or only future ones. If the Looper is quitting,
            // release native resources here (nativeDestroy) and return null.
            if (mQuitting) {
                dispose(); // Release the native side
                return null;
            }

            // On the first idle pass, count the idle handlers to run. Idle handlers only
            // run when the queue is empty, or when the first message in the queue
            // (possibly a barrier) is scheduled for the future.
            if (pendingIdleHandlerCount < 0 && (mMessages == null || now < mMessages.when)) {
                pendingIdleHandlerCount = mIdleHandlers.size();
            }
            if (pendingIdleHandlerCount <= 0) {
                // No idle handlers to run; loop and block.
                mBlocked = true;
                continue;
            }

            if (mPendingIdleHandlers == null) { // An array of fixed size saves space; no growth needed
                mPendingIdleHandlers = new IdleHandler[Math.max(pendingIdleHandlerCount, 4)];
            }
            mPendingIdleHandlers = mIdleHandlers.toArray(mPendingIdleHandlers); // ArrayList to array
        }

        // Run the idle handlers. This block is only reached during the first iteration.
        for (int i = 0; i < pendingIdleHandlerCount; i++) {
            final IdleHandler idler = mPendingIdleHandlers[i];
            mPendingIdleHandlers[i] = null; // Release the reference to the handler

            boolean keep = false;
            try {
                keep = idler.queueIdle(); // Run the idle task; true means keep it registered
            } catch (Throwable t) {
                Log.wtf(TAG, "IdleHandler threw exception", t);
            }

            if (!keep) {
                synchronized (this) {
                    mIdleHandlers.remove(idler); // Removed based on queueIdle's return value
                }
            }
        }

        // Reset the idle handler count to 0 so we do not run them again.
        pendingIdleHandlerCount = 0;

        // While calling an idle handler, a new message could have been delivered,
        // so go back and look again for a pending message without waiting.
        nextPollTimeoutMillis = 0;
    }
}
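The barrier branch at the top of next() can be isolated into a small sketch. BarrierSkip below is illustrative (in the real code a barrier is a Message with target == null): when a barrier sits at the head, synchronous messages are skipped and only the first asynchronous message is eligible for delivery.

```java
public class BarrierSkip {
    static class Msg {
        final String name;
        final boolean barrier;
        final boolean async;
        Msg next;
        Msg(String name, boolean barrier, boolean async) {
            this.name = name; this.barrier = barrier; this.async = async;
        }
    }

    // Returns the name of the message next() would hand out, or null if the
    // barrier blocks everything (no asynchronous message in the queue).
    static String pick(Msg head) {
        Msg msg = head;
        if (msg != null && msg.barrier) {
            do {
                msg = msg.next;
            } while (msg != null && !msg.async); // skip synchronous messages
        }
        return msg == null ? null : msg.name;
    }

    // Helper: chain the given messages into a list and return the head.
    static Msg link(Msg... msgs) {
        for (int i = 0; i + 1 < msgs.length; i++) msgs[i].next = msgs[i + 1];
        return msgs.length == 0 ? null : msgs[0];
    }
}
```

This also shows why a forgotten barrier is dangerous: with no asynchronous messages in flight, pick() returns null forever and every synchronous message behind the barrier starves.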
Wait: nativePollOnce
private native void nativePollOnce(long ptr, int timeoutMillis); /*non-static for callbacks*/
static void android_os_MessageQueue_nativePollOnce(JNIEnv* env, jobject obj, jlong ptr, jint timeoutMillis) {
    NativeMessageQueue* nativeMessageQueue = reinterpret_cast<NativeMessageQueue*>(ptr);
    nativeMessageQueue->pollOnce(env, obj, timeoutMillis);
}

void NativeMessageQueue::pollOnce(JNIEnv* env, jobject pollObj, int timeoutMillis) {
    mPollEnv = env;
    mPollObj = pollObj;
    mLooper->pollOnce(timeoutMillis);
    mPollObj = NULL;
    mPollEnv = NULL;
    if (mExceptionObj) {
        env->Throw(mExceptionObj);
        env->DeleteLocalRef(mExceptionObj);
        mExceptionObj = NULL;
    }
}

int Looper::pollOnce(int timeoutMillis, int* outFd, int* outEvents, void** outData) {
    int result = 0;
    for (;;) {
        while (mResponseIndex < mResponses.size()) {
            const Response& response = mResponses.itemAt(mResponseIndex++);
            int ident = response.request.ident;
            if (ident >= 0) {
                int fd = response.request.fd;
                int events = response.events;
                void* data = response.request.data;
#if DEBUG_POLL_AND_WAKE
                ALOGD("%p ~ pollOnce - returning signalled identifier %d: "
                        "fd=%d, events=0x%x, data=%p", this, ident, fd, events, data);
#endif
                if (outFd != nullptr) *outFd = fd;
                if (outEvents != nullptr) *outEvents = events;
                if (outData != nullptr) *outData = data;
                return ident;
            }
        }

        if (result != 0) {
#if DEBUG_POLL_AND_WAKE
            ALOGD("%p ~ pollOnce - returning result %d", this, result);
#endif
            if (outFd != nullptr) *outFd = 0;
            if (outEvents != nullptr) *outEvents = 0;
            if (outData != nullptr) *outData = nullptr;
            return result;
        }

        result = pollInner(timeoutMillis);
    }
}

int Looper::pollInner(int timeoutMillis) {
#if DEBUG_POLL_AND_WAKE
    ALOGD("%p ~ pollOnce - waiting: timeoutMillis=%d", this, timeoutMillis);
#endif
    // Adjust the timeout based on when the next message is due.
    if (timeoutMillis != 0 && mNextMessageUptime != LLONG_MAX) {
        nsecs_t now = systemTime(SYSTEM_TIME_MONOTONIC);
        int messageTimeoutMillis = toMillisecondTimeoutDelay(now, mNextMessageUptime);
        if (messageTimeoutMillis >= 0
                && (timeoutMillis < 0 || messageTimeoutMillis < timeoutMillis)) {
            timeoutMillis = messageTimeoutMillis;
        }
#if DEBUG_POLL_AND_WAKE
        ALOGD("%p ~ pollOnce - next message in %" PRId64 "ns, adjusted timeout: timeoutMillis=%d",
                this, mNextMessageUptime - now, timeoutMillis);
#endif
    }

    // Poll.
    int result = POLL_WAKE;
    mResponses.clear();
    mResponseIndex = 0;

    // We are about to idle.
    mPolling = true;

    struct epoll_event eventItems[EPOLL_MAX_EVENTS];
    // Block in epoll_wait until an fd event fires or the timeout elapses
    int eventCount = epoll_wait(mEpollFd.get(), eventItems, EPOLL_MAX_EVENTS, timeoutMillis);
}
Wait for an event to become available, with an optional timeout in milliseconds. Callbacks are invoked for all file descriptors for which the event occurred, internally via epoll_wait.
If the timeout is zero, it returns immediately without blocking.
If the timeout is negative, wait indefinitely until the event occurs.
POLL_WAKE is returned if wake() is used to wake the poll before the timeout expires and no callback is called and no other file descriptor is ready.
POLL_CALLBACK is returned if one or more callbacks are invoked.
If there is no data before the given timeout expires, POLL_TIMEOUT is returned.
If an error occurs, POLL_ERROR is returned.
If a file descriptor has data and has no callback function (meaning the caller must process it), a value >= 0 containing its identifier is returned.
In this (and only this) case, outFd, outEvents, and outData will contain polling events and data associated with fd, otherwise they will be set to NULL. This method does not return until it has finished invoking the appropriate callbacks for all signaled file descriptors.
int epoll_wait(int epfd, struct epoll_event * events, int maxevents, int timeout);
- The first argument, epfd, is the epoll instance's file descriptor.
- The second argument, events, is a pre-allocated array of epoll_event structures. The kernel copies the events that occurred into this array (events must not be a null pointer), which keeps the kernel side efficient.
- The third argument, maxevents, is the maximum number of events that can be returned by this call; it is usually equal to the size of the pre-allocated events array.
- The fourth argument, timeout: with 0, epoll_wait returns immediately even if the ready list (rdllist) is empty; with -1 it blocks indefinitely until an event occurs; with a positive value it blocks for at most that many milliseconds before returning.
Stop: quit
void quit(boolean safe) {
    if (!mQuitAllowed) {
        throw new IllegalStateException("Main thread not allowed to quit.");
    }
    synchronized (this) {
        if (mQuitting) { // Quit already in progress
            return;
        }
        mQuitting = true; // Mark as quitting

        if (safe) {
            removeAllFutureMessagesLocked();
        } else {
            removeAllMessagesLocked();
        }

        // Messages removed; wake the thread so it can finish up.
        nativeWake(mPtr);
    }
}

private void removeAllFutureMessagesLocked() {
    final long now = SystemClock.uptimeMillis();
    Message p = mMessages;
    if (p != null) {
        if (p.when > now) {
            // The head itself is a delayed message, so all messages are in the
            // future; remove the head and everything after it.
            removeAllMessagesLocked();
        } else {
            // Let messages that are already due finish first.
            Message n;
            for (;;) {
                n = p.next;
                if (n == null) {
                    return;
                }
                if (n.when > now) {
                    break;
                }
                p = n;
            }
            p.next = null;
            do {
                p = n;
                n = p.next;
                p.recycleUnchecked();
            } while (n != null);
        }
    }
}

private void removeAllMessagesLocked() {
    Message p = mMessages;
    while (p != null) {
        Message n = p.next;
        p.recycleUnchecked(); // Recycle each message back into the message pool
        p = n;
    }
    mMessages = null; // Clear the GC root so the list can be collected
}
Delete: remove
void removeCallbacksAndEqualMessages(Handler h, Object object) {
    if (h == null) {
        return;
    }
    synchronized (this) {
        Message p = mMessages;

        // Remove all matching messages at the front of the queue.
        while (p != null && p.target == h
                && (object == null || object.equals(p.obj))) {
            Message n = p.next;
            mMessages = n;
            p.recycleUnchecked();
            p = n;
        }

        // Remove all matching messages after the front of the queue.
        while (p != null) {
            Message n = p.next;
            if (n != null) {
                if (n.target == h && (object == null || object.equals(n.obj))) {
                    Message nn = n.next;
                    n.recycleUnchecked();
                    p.next = nn;
                    continue;
                }
            }
            p = n;
        }
    }
}
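The two-phase scan above — first strip matching nodes off the head, then unlink matches after the head — is a standard singly-linked-list removal pattern. RemoveMatching is a minimal sketch of it (illustrative; matching here is by token only, whereas the real code also matches the Handler):

```java
public class RemoveMatching {
    static class Msg {
        final Object token;
        Msg next;
        Msg(Object token) { this.token = token; }
    }

    // Returns the new head with every message carrying `token` removed
    // (token == null removes everything, as in the real API).
    static Msg remove(Msg head, Object token) {
        Msg p = head;
        // Phase 1: matching nodes at the front of the queue.
        while (p != null && (token == null || token.equals(p.token))) {
            p = p.next;
        }
        Msg newHead = p;
        // Phase 2: matching nodes after the new front.
        while (p != null) {
            Msg n = p.next;
            if (n != null && (token == null || token.equals(n.token))) {
                p.next = n.next; // unlink the match, keep p in place
                continue;
            }
            p = n;
        }
        return newHead;
    }

    // Helpers for building and printing test lists.
    static Msg fromTokens(Object... tokens) {
        Msg head = null, tail = null;
        for (Object t : tokens) {
            Msg m = new Msg(t);
            if (head == null) head = m; else tail.next = m;
            tail = m;
        }
        return head;
    }

    static String toString(Msg head) {
        StringBuilder sb = new StringBuilder();
        for (Msg p = head; p != null; p = p.next) {
            if (sb.length() > 0) sb.append(',');
            sb.append(p.token);
        }
        return sb.toString();
    }
}
```

Two phases are needed because removing the head changes the list's entry point, while removing an interior node only rewires the previous node's next pointer.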
Synchronization barrier: postSyncBarrier
ViewRootImpl#scheduleTraversals()
// Insert the synchronization barrier
mTraversalBarrier = mHandler.getLooper().getQueue().postSyncBarrier();

// Send an asynchronous message
Message msg = mHandler.obtainMessage(MSG_DO_SCHEDULE_VSYNC);
msg.setAsynchronous(true);
mHandler.sendMessageAtFrontOfQueue(msg);

// Send an asynchronous message
Message msg = mHandler.obtainMessage(MSG_DO_SCHEDULE_CALLBACK, action);
msg.arg1 = callbackType;
msg.setAsynchronous(true);
mHandler.sendMessageAtTime(msg, dueTime);

// Remove the synchronization barrier
mHandler.getLooper().getQueue().removeSyncBarrier(mTraversalBarrier);
The barrier-handling logic lives in MessageQueue's next() method.
Idle messages: IdleHandler
/** Callback interface for discovering when a thread is going to block waiting for more messages. */
public static interface IdleHandler {
    /**
     * Called when the message queue has run out of messages and will now wait for more.
     * Return true to keep the idle handler active, false to remove it.
     * This may be called when messages are still pending in the queue, as long as
     * they are all scheduled to be dispatched after the current time.
     */
    boolean queueIdle();
}

// Add a new MessageQueue.IdleHandler to this message queue. It is removed automatically
// when IdleHandler.queueIdle() returns false at call time, or explicitly via removeIdleHandler().
// This method is safe to call from any thread.
public void addIdleHandler(@NonNull IdleHandler handler) {
    if (handler == null) {
        throw new NullPointerException("Can't add a null IdleHandler");
    }
    synchronized (this) {
        mIdleHandlers.add(handler);
    }
}

// Remove a MessageQueue.IdleHandler previously added with addIdleHandler().
// Does nothing if the given object is not currently in the idle list.
// This method is safe to call from any thread.
public void removeIdleHandler(@NonNull IdleHandler handler) {
    synchronized (this) {
        mIdleHandlers.remove(handler);
    }
}
IdleHandler can be used to do work while the main thread is idle; add one to the MessageQueue with Looper.myQueue().addIdleHandler().
If queueIdle() returns false, MessageQueue removes the IdleHandler from its list after that single execution; if it returns true, the handler stays registered and runs again on every subsequent idle pass.
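The keep/remove contract can be modelled in a few lines of plain Java (this is an illustration, not the AOSP code; IdleDispatchModel and onIdle() are invented names):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.BooleanSupplier;

// Plain-Java model of the IdleHandler contract: when the queue goes idle, each
// handler's queueIdle() runs once; returning false removes it, returning true
// keeps it registered for the next idle pass. All names are invented.
class IdleDispatchModel {
    final List<BooleanSupplier> idleHandlers = new ArrayList<>();

    // Simulates one "queue is idle" pass, as MessageQueue.next() performs.
    void onIdle() {
        List<BooleanSupplier> snapshot = new ArrayList<>(idleHandlers);
        for (BooleanSupplier h : snapshot) {
            boolean keep = h.getAsBoolean(); // stands in for queueIdle()
            if (!keep) {
                idleHandlers.remove(h); // one-shot handler: removed after first run
            }
        }
    }

    public static void main(String[] args) {
        IdleDispatchModel m = new IdleDispatchModel();
        int[] runs = new int[2];
        m.idleHandlers.add(() -> { runs[0]++; return false; }); // one-shot
        m.idleHandlers.add(() -> { runs[1]++; return true;  }); // recurring
        m.onIdle();
        m.onIdle();
        System.out.println(runs[0] + " " + runs[1]); // prints "1 2"
    }
}
```

The snapshot copy mirrors how the real next() copies mIdleHandlers into an array before invoking them, so handlers can safely remove themselves during dispatch.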
2.4 Key Points of Looper
public final class Looper {
    static final ThreadLocal<Looper> sThreadLocal = new ThreadLocal<Looper>();
    private static Looper sMainLooper; // guarded by Looper.class
    private static Observer sObserver;
    final MessageQueue mQueue;
    final Thread mThread;
    private boolean mInLoop;

    private Looper(boolean quitAllowed) {
        mQueue = new MessageQueue(quitAllowed);
        mThread = Thread.currentThread();
    }
}
This class contains the code required to set up and manage an event loop based on MessageQueue. APIs that affect the state of the queue should be defined on MessageQueue or Handler rather than on Looper itself. For example, idle handlers and sync barriers are defined on the queue, while preparing the thread, looping, and quitting are defined on the Looper.
Construction: prepare
// Create the Looper and cache it in a ThreadLocal, guaranteeing one Looper per thread
private static void prepare(boolean quitAllowed) {
    if (sThreadLocal.get() != null) {
        throw new RuntimeException("Only one Looper may be created per thread");
    }
    sThreadLocal.set(new Looper(quitAllowed));
}

// Get the current thread's Looper from the ThreadLocal
public static @Nullable Looper myLooper() {
    return sThreadLocal.get();
}

// Prepare the main thread's Looper (called by the system; deprecated for app code)
@Deprecated
public static void prepareMainLooper() {
    prepare(false); // Build a Looper that does not allow quitting
    synchronized (Looper.class) {
        if (sMainLooper != null) {
            throw new IllegalStateException("The main Looper has already been prepared.");
        }
        sMainLooper = myLooper(); // Cache it in the static field sMainLooper
    }
}
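The per-thread singleton that prepare() enforces relies entirely on ThreadLocal, which is easy to reproduce outside Android. A minimal sketch (MiniLooper is an invented name, not the framework class):

```java
// Minimal model of Looper.prepare()'s per-thread singleton guarantee, built on
// java.lang.ThreadLocal. The real Looper stores itself in sThreadLocal the same way.
class MiniLooper {
    private static final ThreadLocal<MiniLooper> sThreadLocal = new ThreadLocal<>();

    static void prepare() {
        if (sThreadLocal.get() != null) {
            // Same rule as the framework: at most one Looper per thread
            throw new RuntimeException("Only one Looper may be created per thread");
        }
        sThreadLocal.set(new MiniLooper());
    }

    static MiniLooper myLooper() {
        return sThreadLocal.get(); // null if prepare() was never called on this thread
    }

    public static void main(String[] args) {
        prepare();
        System.out.println(myLooper() != null); // true
        try {
            prepare(); // a second prepare() on the same thread is rejected
        } catch (RuntimeException expected) {
            System.out.println("rejected");
        }
    }
}
```

Another thread calling prepare() would succeed independently, because each thread sees its own slot in the ThreadLocal.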
Cycle: loop
public static void loop() {
    final Looper me = myLooper(); // Get this thread's Looper from the ThreadLocal
    if (me == null) {
        throw new RuntimeException("No Looper; Looper.prepare() wasn't called on this thread.");
    }
    if (me.mInLoop) { // Calling loop() again would execute queued messages before this loop completes
        Slog.w(TAG, "Loop again would have the queued messages be executed"
                + " before this one completed.");
    }
    me.mInLoop = true; // Mark that the loop has started
    final MessageQueue queue = me.mQueue; // The associated message queue

    // Make sure the identity of this thread is that of the local process,
    // and keep track of what that identity token actually is.
    Binder.clearCallingIdentity();
    final long ident = Binder.clearCallingIdentity();

    // Allow overriding a threshold with a system prop. e.g.
    // adb shell 'setprop log.looper.1000.main.slow 1 && stop && start'
    final int thresholdOverride =
            SystemProperties.getInt("log.looper."
                    + Process.myUid() + "."
                    + Thread.currentThread().getName()
                    + ".slow", 0);

    boolean slowDeliveryDetected = false;

    for (;;) {
        Message msg = queue.next(); // May block, e.g. no messages or only delayed messages
        if (msg == null) {
            // No message indicates that the message queue is quitting.
            return;
        }

        // This must be in a local variable, in case a UI event sets the logger
        final Printer logging = me.mLogging;
        if (logging != null) {
            logging.println(">>>>> Dispatching to " + msg.target + " "
                    + msg.callback + ": " + msg.what);
        }
        // Make sure the observer won't change while processing a transaction.
        final Observer observer = sObserver;

        final long traceTag = me.mTraceTag;
        long slowDispatchThresholdMs = me.mSlowDispatchThresholdMs;
        long slowDeliveryThresholdMs = me.mSlowDeliveryThresholdMs;
        if (thresholdOverride > 0) {
            slowDispatchThresholdMs = thresholdOverride;
            slowDeliveryThresholdMs = thresholdOverride;
        }
        final boolean logSlowDelivery = (slowDeliveryThresholdMs > 0) && (msg.when > 0);
        final boolean logSlowDispatch = (slowDispatchThresholdMs > 0);

        final boolean needStartTime = logSlowDelivery || logSlowDispatch;
        final boolean needEndTime = logSlowDispatch;

        if (traceTag != 0 && Trace.isTagEnabled(traceTag)) {
            Trace.traceBegin(traceTag, msg.target.getTraceName(msg));
        }

        final long dispatchStart = needStartTime ? SystemClock.uptimeMillis() : 0;
        final long dispatchEnd;
        Object token = null;
        if (observer != null) {
            token = observer.messageDispatchStarting(); // Observe the dispatch-start event
        }
        // Start dispatching the message
        long origWorkSource = ThreadLocalWorkSource.setUid(msg.workSourceUid);
        try {
            msg.target.dispatchMessage(msg); // Hand the message to its target Handler
            if (observer != null) {
                observer.messageDispatched(token, msg); // Observe the dispatch-complete event
            }
            dispatchEnd = needEndTime ? SystemClock.uptimeMillis() : 0;
        } catch (Exception exception) {
            if (observer != null) {
                observer.dispatchingThrewException(token, msg, exception); // Observe the dispatch-exception event
            }
            throw exception;
        } finally {
            ThreadLocalWorkSource.restore(origWorkSource);
            if (traceTag != 0) {
                Trace.traceEnd(traceTag);
            }
        }
        if (logSlowDelivery) {
            if (slowDeliveryDetected) {
                if ((dispatchStart - msg.when) <= 10) {
                    Slog.w(TAG, "Drained");
                    slowDeliveryDetected = false;
                }
            } else {
                if (showSlowLog(slowDeliveryThresholdMs, msg.when, dispatchStart, "delivery",
                        msg)) {
                    // Once we write a slow delivery log, suppress until the queue drains.
                    slowDeliveryDetected = true;
                }
            }
        }
        if (logSlowDispatch) {
            showSlowLog(slowDispatchThresholdMs, dispatchStart, dispatchEnd, "dispatch", msg);
        }

        if (logging != null) {
            logging.println("<<<<< Finished to " + msg.target + " " + msg.callback);
        }

        // Make sure that during the course of dispatching the
        // identity of the thread wasn't corrupted.
        final long newIdent = Binder.clearCallingIdentity();
        if (ident != newIdent) {
            Log.wtf(TAG, "Thread identity changed from 0x"
                    + Long.toHexString(ident) + " to 0x"
                    + Long.toHexString(newIdent) + " while dispatching to "
                    + msg.target.getClass().getName() + " "
                    + msg.callback + " what=" + msg.what);
        }

        msg.recycleUnchecked(); // Return the message to the pool
    }
}
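Stripped of tracing, observers, and slow-dispatch logging, loop() reduces to an endless take-and-dispatch cycle. A minimal plain-Java analogue (all names here are invented; a null target plays the role of queue.next() returning null on quit):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Minimal analogue of Looper.loop(): block on the queue, dispatch each message
// to its target, exit when the queue signals quit (here: a null-target message).
class MiniLoop {
    interface Target { void dispatch(String what); }

    static class Msg {
        final Target target; final String what;
        Msg(Target target, String what) { this.target = target; this.what = what; }
    }

    static int loop(BlockingQueue<Msg> queue) throws InterruptedException {
        int dispatched = 0;
        for (;;) {
            Msg msg = queue.take();        // may block, like MessageQueue.next()
            if (msg.target == null) {
                return dispatched;         // quit signal, like next() returning null
            }
            msg.target.dispatch(msg.what); // like msg.target.dispatchMessage(msg)
            dispatched++;
        }
    }

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<Msg> queue = new LinkedBlockingQueue<>();
        StringBuilder log = new StringBuilder();
        Target handler = what -> log.append(what);
        queue.put(new Msg(handler, "a"));
        queue.put(new Msg(handler, "b"));
        queue.put(new Msg(null, null));    // quit
        System.out.println(loop(queue) + " " + log); // prints "2 ab"
    }
}
```

The real next() is more involved (it also handles delayed messages, barriers, and idle handlers, and blocks in native code via epoll), but the control flow is the same shape.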
2.5 Key Points of ThreadLocal
ThreadLocal is suitable for scenarios where variables are isolated between threads but shared between methods or classes
- Each thread needs to have its own separate instance
- Instances need to be shared among multiple methods, but do not want to be shared by multiple threads
The relationship of the three elements
- Thread1 ---- ThreadLocalMap&lt;ThreadLocal, Value&gt;
- Thread2 ---- ThreadLocalMap&lt;ThreadLocal, Value&gt;
- ThreadLocalMap is a field of Thread, and the ThreadLocal class creates and maintains that map. Each thread thus carries its own map from ThreadLocal instances to values, so a variable managed by a ThreadLocal can be shared across methods and classes while remaining private to its thread.
Differences between ThreadLocal and synchronized
- Both ThreadLocal and synchronized deal with multi-threaded access to variables, but they solve opposite problems.
- synchronized is for sharing data between threads; ThreadLocal is for isolating data between threads.
- synchronized uses a lock so that a variable or code block can be accessed by only one thread at a time. ThreadLocal instead gives each thread its own copy of the variable, so threads never access the same object concurrently and no locking is needed; synchronized, by contrast, is used when threads must communicate through shared data.
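The isolation can be demonstrated with plain java.lang.ThreadLocal, no Android classes required (a small self-contained sketch; the names are invented):

```java
// Demonstrates ThreadLocal isolation: each thread sees only the value it set
// itself, with no locking required.
class ThreadLocalIsolationDemo {
    static final ThreadLocal<String> NAME = ThreadLocal.withInitial(() -> "unset");

    // Sets and reads NAME on a freshly started thread, returning what it saw.
    static String runInThread(String value) throws InterruptedException {
        final String[] seen = new String[1];
        Thread t = new Thread(() -> {
            NAME.set(value);       // this write is visible only to this thread
            seen[0] = NAME.get();
        });
        t.start();
        t.join();
        return seen[0];
    }

    public static void main(String[] args) throws InterruptedException {
        NAME.set("main");
        System.out.println(runInThread("worker")); // the worker sees its own value: "worker"
        System.out.println(NAME.get());            // main's value is untouched: "main"
    }
}
```

Had NAME been an ordinary static field, the worker's write would have overwritten main's value and a lock would be needed to coordinate access.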
public class ThreadLocal<T> {
    // A custom hash code (useful only in ThreadLocalMaps) that eliminates collisions
    // in the common case of consecutively constructed ThreadLocals used by the same
    // threads, while remaining well-behaved in less common cases.
    private final int threadLocalHashCode = nextHashCode();

    // The next hash code to hand out. Updated atomically. Starts at zero.
    private static AtomicInteger nextHashCode = new AtomicInteger();

    // The difference between successively generated hash codes
    private static final int HASH_INCREMENT = 0x61c88647;

    private static int nextHashCode() {
        return nextHashCode.getAndAdd(HASH_INCREMENT);
    }
}
Storage: set
public class Thread implements Runnable {
ThreadLocal.ThreadLocalMap threadLocals = null; // The ThreadLocal value associated with this thread. This mapping is maintained by the ThreadLocal class.
}
public void set(T value) {
    // Get the current thread
    Thread t = Thread.currentThread(); // native method
    // If the thread's ThreadLocalMap already exists, store the value directly;
    // otherwise create the ThreadLocalMap first and then store it
    ThreadLocalMap map = getMap(t);
    if (map != null)
        map.set(this, value);
    else
        // Initialize the ThreadLocalMap and store the first value
        createMap(t, value);
}

ThreadLocalMap getMap(Thread t) {
    return t.threadLocals;
}

void createMap(Thread t, T firstValue) {
    // Create the ThreadLocalMap and attach it to the Thread
    t.threadLocals = new ThreadLocalMap(this, firstValue);
}
Take: get
public T get() {
    // Get the current thread
    Thread t = Thread.currentThread();
    // Get the current thread's ThreadLocalMap
    ThreadLocalMap map = getMap(t);
    if (map != null) {
        // Look up the entry stored for this ThreadLocal in the map
        ThreadLocalMap.Entry e = map.getEntry(this);
        if (e != null) {
            @SuppressWarnings("unchecked")
            T result = (T) e.value; // Get the value
            return result;
        }
    }
    // The map (or the entry) does not exist yet: initialize it with the initial value
    return setInitialValue();
}
Delete: remove
public void remove() {
    ThreadLocalMap m = getMap(Thread.currentThread());
    if (m != null)
        m.remove(this);
}
The remove method deletes the value for this ThreadLocal from the current thread's ThreadLocalMap. Why delete at all? Because of memory leaks. The keys used in ThreadLocalMap are weak references to the ThreadLocal objects, and an object that is only weakly reachable is reclaimed at the next garbage collection.
So if a ThreadLocal is no longer strongly referenced anywhere, it is reclaimed during garbage collection, and the corresponding key in the ThreadLocalMap becomes null. The value, however, is held through a strong reference and is not reclaimed, leaving an entry with a null key whose value can leak until the map expunges its stale entries.
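A common defensive pattern, especially on pooled or long-lived threads, is to pair every set() with a remove() in a finally block. A sketch (the CONTEXT holder and handleRequest() are invented for illustration):

```java
// Safe ThreadLocal usage: always remove() in finally so the value cannot
// outlive its use on a reused thread (e.g. in a thread pool).
class ThreadLocalCleanupDemo {
    static final ThreadLocal<String> CONTEXT = new ThreadLocal<>();

    static String handleRequest(String user) {
        CONTEXT.set(user);
        try {
            // Any code running on this thread during the request can read CONTEXT
            return "handled:" + CONTEXT.get();
        } finally {
            CONTEXT.remove(); // prevents a stale value (and a leak) on a reused thread
        }
    }

    public static void main(String[] args) {
        System.out.println(handleRequest("alice")); // prints "handled:alice"
        System.out.println(CONTEXT.get());          // null again after remove()
    }
}
```

Without the remove(), a pooled thread that served "alice" could later serve another request while still holding her value, and the entry would linger in the map.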
ThreadLocalMap
ThreadLocalMap is a custom hash map suitable only for maintaining thread-local values. The class is package-private so that the field can be declared in class Thread. To help cope with very large and long-lived usages, the hash table entries use WeakReferences for keys. However, since reference queues are not used, stale entries are only guaranteed to be removed when the table starts running out of space.
Like a regular HashMap it stores entries in an array, but ThreadLocalMap resolves hash collisions with open addressing (linear probing), whereas HashMap uses separate chaining.
Open addressing has lower space utilization, so the load factor is kept relatively small; on a collision it simply probes for the next slot that can hold the entry.
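The power-of-two table length is what makes the `key.threadLocalHashCode & (len - 1)` indexing in set() below work as a cheap modulo. A quick self-contained check (MaskIndexDemo is invented for illustration):

```java
// Why table lengths are powers of 2: with len = 2^n, (len - 1) is all ones in
// binary, so hash & (len - 1) equals Math.floorMod(hash, len) and can reach
// every slot. With a non-power-of-two length, some slots would be unreachable.
class MaskIndexDemo {
    static int slot(int hash, int len) {
        return hash & (len - 1); // only valid when len is a power of 2
    }

    public static void main(String[] args) {
        int len = 16; // power of 2, like ThreadLocalMap's INITIAL_CAPACITY
        for (int hash : new int[]{0, 7, 15, 16, 31, -1, 0x61c88647}) {
            // Matches floorMod for every input, including negative hashes
            System.out.println(slot(hash, len) == Math.floorMod(hash, len));
        }
    }
}
```

The HASH_INCREMENT constant 0x61c88647 (derived from the golden ratio) spreads successive hash codes evenly across such power-of-two tables.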
static class ThreadLocalMap {
    // The entries in this hash map extend WeakReference, using the referent field
    // as the key (which is always a ThreadLocal object). Note that a null key
    // (entry.get() == null) means the ThreadLocal is no longer referenced, so the
    // entry can be expunged from the table. Such entries are called "stale entries"
    // in the code below.
    static class Entry extends WeakReference<ThreadLocal<?>> {
        /** The value associated with this ThreadLocal. */
        Object value;

        Entry(ThreadLocal<?> k, Object v) {
            super(k);
            value = v;
        }
    }

    private static final int INITIAL_CAPACITY = 16; // Initial capacity - must be a power of 2

    private Entry[] table; // The table of slots, resized as necessary; table.length must always be a power of 2

    private int size = 0;
}
Storage: set
private void set(ThreadLocal<?> key, Object value) {
    // We don't use a fast path as with get(), because it is at least as common
    // to use set() to create new entries as to replace existing ones, in which
    // case a fast path would fail more often than not.
    Entry[] tab = table;
    int len = tab.length;
    // len is a power of 2, so (len - 1) is all ones in binary and
    // key.threadLocalHashCode & (len - 1) can map the hash onto every index;
    // otherwise half of the array's slots could never be reached
    int i = key.threadLocalHashCode & (len - 1);

    // Linear probing: thanks to the load factor there is always an empty slot
    for (Entry e = tab[i]; e != null; e = tab[i = nextIndex(i, len)]) {
        ThreadLocal<?> k = e.get();
        // Same key: just replace the value
        if (k == key) {
            e.value = value;
            return;
        }
        // Stale entry: the key was reclaimed by GC, so reuse the slot
        if (k == null) {
            replaceStaleEntry(key, value, i);
            return;
        }
    }

    // Found an empty slot: store a new entry
    tab[i] = new Entry(key, value);
    int sz = ++size;
    // If no stale slots could be cleaned and size exceeds the threshold, rehash
    if (!cleanSomeSlots(i, sz) && sz >= threshold)
        rehash();
}
Take: getEntry
private Entry getEntry(ThreadLocal<?> key) {
    int i = key.threadLocalHashCode & (table.length - 1); // Compute the slot index from the key
    Entry e = table[i]; // The slot
    if (e != null && e.get() == key)
        return e;
    else
        // Miss: probe the following slots (expunging stale entries along the way)
        return getEntryAfterMiss(key, i, e);
}
2.6 Operation process of the Handler message mechanism
A sequence diagram helps to understand the main-thread message mechanism flow (Java level only).
References
- Android Handler Native layer source code
- Looper.cpp
- Looper.h
- android_os_MessageQueue.cpp
- android_os_MessageQueue.h
- Android Handler Java layer source code
- aospxref | Handler Android 11
- aospxref | Handler Android 10
- GoogleSource | ThreadLocal.java Android 23
- GoogleSource | ThreadLocal.java Android 30
- Android Handler Java layer Api documentation
- Android Developers NDK Docs | Looper.cpp
- Android Developers Docs | Handler
- Android Developers Docs | Looper
- Android Developers Docs | MessageQueue
- Android Developers Docs | Message
- Android Developers Docs | ThreadLocal
- Other
- Juejin | Exploring the Android message mechanism
- Jianshu | A study guide to the Android Handler asynchronous communication mechanism
- Stack Overflow | When and how should I use a ThreadLocal variable?
- Twenty-seven questions about Handler