The Framework and Binder run deep. This article approaches them from an application-layer developer's perspective, building a baseline understanding so you have a direction to think in when problems come up. (It covers the key issues and core flow, not everything.)

Outline:

  • Background
    • Why multiple processes
    • Why Binder
    • Binder architecture in brief
  • A simple example
  • Source code analysis
    • The client interacts with the driver
    • The server interacts with the driver
  • Conclusion
  • Additional details
    • Why Binder is efficient
    • Why doesn't Binder use SHM
  • Questions
  • References

This article is about 4,000 words, roughly a 17-minute read.

Source code analysis is based on Android 8.0.

Background

Why multiple processes

Binder is a cross-process communication (IPC) mechanism for Android.

In Android, each process is allocated a limited amount of memory; using multiple processes buys more memory, isolates crash risks, and so on.

Common multi-process scenarios in Android include running WebView, push, keep-alive, and system services in independent processes. Wherever there are multiple processes, cross-process communication is required.

Why Binder

Linux ships with several ways to communicate across processes:

  • Pipe: half-duplex and unidirectional; data flows in only one direction. Reading and writing require two descriptors, and Linux's pipe(fds) returns such a pair, one for reading and one for writing. Anonymous pipes can only be used between related processes such as parent and child; named pipes (FIFOs) have no such restriction.

  • Socket: full duplex, readable and writable on both ends. For example, the Zygote process waits on a socket for requests from the AMS system service to create application processes.

  • Shared memory (SHM): a memory segment mapped into multiple processes so all of them can access it. SHM is the most efficient IPC method, but it usually has to be combined with another mechanism, such as semaphores, to synchronize access. Android builds anonymous shared memory (Ashmem) on top of SHM; being efficient and well suited to large data, it is how, for example, an application process reads the view data composited by the SurfaceFlinger process for display.

  • Memory mapping (mmap): Linux initializes a region of virtual memory by associating it with a file on disk; reads and writes through a pointer into that region are synchronized with the underlying file. Binder uses mmap.

  • Signal: unidirectional and fire-and-forget; no result comes back, and only a signal number can be sent, not parameters. For example, when a child process is killed the system sends SIGCHLD, and the parent clears the child's entry from the process table to prevent zombie processes.

There are also file sharing, message queues, and other cross-process communication methods…

Each of these mechanisms has its pros and cons, and Android ultimately chose to build its own Binder, which is easy to use, efficient, and secure.

  • Easy to use: C/S architecture (with AIDL you only write the business logic)
  • Efficient: mmap memory mapping requires only one copy
  • Secure: the kernel manages identity tags; every app has a UID for permission checks, and both real-name (system services) and anonymous (self-created services) Binders are supported

Binder architecture in brief

Linux memory is divided into user space and kernel space; user space must go through system calls to reach kernel space.

(Image source: “Binder Principles for Android Application Engineers”)

Binder is based on a C/S architecture. The Binder driver runs in kernel space and is exposed to user space as the device file /dev/binder; processes establish their communication channels through it.

Binder startup process:

  1. Open the binder driver (open)
  2. mmap the driver file descriptor (mDriverFD) to allocate buffers
  3. The server spins up a binder thread, registers it with the binder driver, and loops waiting for instructions from clients (both ends talk to the driver via ioctl)

A simple example

AIDL (Android Interface Definition Language) generates the Binder plumbing for Java classes to cut down on repetitive work; the example below skips AIDL and uses Binder directly to show what is underneath.

The sample invocation flow is as follows:

There is not much code, mostly logging; just focus on the comments.

Client Activity:

//NoAidlActivity.java

protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    Intent intent = new Intent(this, MyService.class);

    bindService(intent, new ServiceConnection() {
        @Override
        public void onServiceConnected(ComponentName name, IBinder service) {
            //1. Retrieve reusable objects from the object pool
            Parcel data = Parcel.obtain();
            Parcel reply = Parcel.obtain();

            Log.e("halliday", "--- NoAidlActivity, pid = "
                  + Process.myPid() + ", thread = "
                  + Thread.currentThread().getName());

            String str = "666";
            Log.e("halliday", "Client sends to server: " + str);
            //2. Write the request parameter into data
            data.writeString(str);

            try {
                //3. Use the server's IBinder handle to invoke transact.
                // The behavior code is 1; we need the server's return value,
                // so flags = 0 means a synchronous call
                service.transact(1, data, reply, 0);
            } catch (RemoteException e) {
                e.printStackTrace();
            }

            Log.e("halliday", "--- NoAidlActivity, pid = "
                  + Process.myPid() + ", thread = "
                  + Thread.currentThread().getName());

            //4. Read the return value from reply
            Log.e("halliday", "Client receives from server: " + reply.readString());
        }

        @Override
        public void onServiceDisconnected(ComponentName name) { }
    }, Context.BIND_AUTO_CREATE);
}

Passing flags = 0 to transact makes a synchronous invocation: the caller blocks waiting for the server's return value. If the server performs a time-consuming operation, blocked user interaction on the UI thread can trigger an ANR.

The other flags value is 1, i.e. one way, for asynchronous calls that do not wait for the server's result.

Now look at the Service running on the server side:

class MyService extends Service {

    @Override
    public IBinder onBind(Intent intent) {
        // Return the server-side IBinder handle
        return new MyBinder();
    }
}

Register the Service and let it run in the :remote process so the call is cross-process:

<service
         android:name=".binder.no_aidl.MyService"
         android:process=":remote" />

The Binder object running on the server side:

class MyBinder extends Binder {

    @Override
    protected boolean onTransact(int code, Parcel data, Parcel reply, int flags)
            throws RemoteException {
        if (code == 1) { // behavior code 1
            Log.e("halliday", "--- MyBinder, pid = "
                  + Process.myPid() + ", thread = "
                  + Thread.currentThread().getName());
            //1. Read the client's parameter from data
            Log.e("halliday", "Server received: " + data.readString());

            String str = "777";
            Log.e("halliday", "Server returns: " + str);
            //2. Write the return value into reply for the client
            reply.writeString(str);

            //3. Processing complete
            return true;
        }
        return super.onTransact(code, data, reply, flags);
    }
}

Run it and you get 7 lines of log:

Since flags is 0 (a synchronous call), try sleeping a few seconds in the server's onTransact: the client then takes those few seconds to print the return value. So if the server may perform time-consuming operations, the client should make the binder call on a worker thread.

Extension: as the article "Android get process name function, how to optimize to the extreme" points out, when using a system API, prefer any better alternative and keep the cross-process getSystemService path only as a last resort, because a binder call carries its own cost. Besides, application-layer developers rarely look into the remote process's internals — what if the other side hides a potentially time-consuming operation?

In this example, Binder uses Parcel to serialize data: the client calls transact on the main thread to send the request (parameters carried in the data Parcel), and the server responds by having onTransact called on a binder thread (the result returned in the reply Parcel).

Source code analysis

The overall Binder call flow is roughly as follows; the "Bp" in the native-layer class BpBinder stands for binder proxy.

As can be seen, the following calls are required to complete a communication:

  1. Request: client Java layer -> client native layer -> Binder driver layer -> server native layer -> server Java layer
  2. Response: server Java layer -> server native layer -> Binder driver layer -> client native layer -> client Java layer

The Binder driver layer acts as a staging post, a bit like the network layering model.

The client interacts with the driver

Let's start with the client-driver interaction. Because the call is cross-process (:remote is specified), the service object delivered to onServiceConnected is a BinderProxy instance. We use the call service.transact(1, data, reply, 0) as the entry point.

The BinderProxy class is defined in Binder.java:

//BinderProxy.java

public boolean transact(int code, Parcel data, Parcel reply, int flags) {
    // Call the native method
    return transactNative(code, data, reply, flags);
}

The native method is registered in android_util_Binder.cpp:

//android_util_Binder.cpp

// Register the JNI method
static const JNINativeMethod gBinderProxyMethods[] = {
    { "transactNative", "(ILandroid/os/Parcel;Landroid/os/Parcel;I)Z",
      (void*)android_os_BinderProxy_transact },
};

// The native method's implementation
static jboolean android_os_BinderProxy_transact(JNIEnv* env, jobject obj,
        jint code, jobject dataObj, jobject replyObj, jint flags) {
    // Convert to native-layer Parcels
    Parcel* data = parcelForJavaObject(env, dataObj);
    Parcel* reply = parcelForJavaObject(env, replyObj);
    // Get the native-layer handle, a BpBinder
    IBinder* target = (IBinder*)
        env->GetLongField(obj, gBinderProxyOffsets.mObject);
    // Call BpBinder's transact
    status_t err = target->transact(code, *data, reply, flags);
}

Continue into BpBinder.cpp:

//BpBinder.cpp

status_t BpBinder::transact(...)
{
    // The driver finds the corresponding binder node from the mHandle value
    status_t status = IPCThreadState::self()->transact(
        mHandle, code, data, reply, flags);
}

IPCThreadState is a thread-local singleton responsible for exchanging concrete commands with the binder driver. Follow into IPCThreadState.cpp:

//IPCThreadState.cpp

status_t IPCThreadState::transact(...)
{
    // Write the data into mOut, see 1.1
    err = writeTransactionData(BC_TRANSACTION, flags, handle, code, data, NULL);

    //... omit the one-way asynchronous branch; look only at the synchronous call
    // Talk to the binder driver; the reply passed in receives the returned data, see 1.2
    err = waitForResponse(reply);
}

//1.1 Write the data into mOut
status_t IPCThreadState::writeTransactionData(...)
{
    binder_transaction_data tr;
    //... pack the data (data size, buffer, offsets)
    tr.sender_euid = 0;
    // Write the BC_TRANSACTION command into mOut
    mOut.writeInt32(cmd);
    // Write the packed binder_transaction_data into mOut
    mOut.write(&tr, sizeof(tr));
}

//1.2 Talk to the binder driver; reply receives the returned data
status_t IPCThreadState::waitForResponse(...)
{
    // The key loop: the client sleeps here waiting for the server's result
    while (1) {
        // Write mOut to the driver and read mIn from it, see 1.3
        talkWithDriver();
        // Read the driver's answer
        cmd = (uint32_t)mIn.readInt32();
        switch (cmd) {
        case BR_TRANSACTION_COMPLETE:
            // The driver has received the client's transact request
            // A one-way asynchronous call can end here
            if (!reply && !acquireResult) goto finish;
            break;
        case BR_REPLY:
            // The client has received the server's result
            binder_transaction_data tr;
            // Read the server's data and pack it into tr
            err = mIn.read(&tr, sizeof(tr));
            // Hand the data in tr over to reply
            reply->ipcSetDataReference(...);
            // Done
            goto finish;
        }
    }
}

//1.3 Write mOut to the driver and read mIn from it
status_t IPCThreadState::talkWithDriver(bool doReceive)
{
    binder_write_read bwr;
    // Specify the write data size and write buffer
    bwr.write_size = outAvail;
    bwr.write_buffer = (uintptr_t)mOut.data();

    // Specify the read data size and read buffer
    if (doReceive && needRead) {
        bwr.read_size = mIn.dataCapacity();
        bwr.read_buffer = (uintptr_t)mIn.data();
    } else {
        bwr.read_size = 0;
        bwr.read_buffer = 0;
    }

    // The ioctl lands in binder_ioctl in the binder driver layer
    ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr);

    if (bwr.write_consumed > 0) {
        // Data consumed by the driver is removed from mOut
        if (bwr.write_consumed < mOut.dataSize())
            mOut.remove(0, bwr.write_consumed);
        else
            mOut.setDataSize(0);
    }
    if (bwr.read_consumed > 0) {
        // Data read back from the driver goes into mIn
        mIn.setDataSize(bwr.read_consumed);
        mIn.setDataPosition(0);
    }
}

The ioctl call lands in binder_ioctl in the Binder driver layer; we will not go into the driver code here.

The server interacts with the driver

Now the server side. The server's binder thread is started in ProcessState.cpp:

//ProcessState.cpp

virtual bool threadLoop()
{
    // Register this binder thread into the binder driver's thread pool
    IPCThreadState::self()->joinThreadPool(mIsMain);
    return false;
}

Follow into IPCThreadState.cpp:

//IPCThreadState.cpp

void IPCThreadState::joinThreadPool(bool isMain)
{
    // Tell the binder driver that the current thread wants to register
    mOut.writeInt32(isMain ? BC_ENTER_LOOPER : BC_REGISTER_LOOPER);
    status_t result;
    do {
        // Loop forever, waiting for commands to arrive, see 1.1
        result = getAndExecuteCommand();
    } while (result != -ECONNREFUSED && result != -EBADF);
    // Tell the binder driver this thread is leaving the loop
    mOut.writeInt32(BC_EXIT_LOOPER);
}

//1.1 Wait for commands
status_t IPCThreadState::getAndExecuteCommand()
{
    // The driver writes commands into mIn
    talkWithDriver();
    // Read the command from mIn
    cmd = mIn.readInt32();
    // Execute the command, see 1.2
    result = executeCommand(cmd);
    return result;
}

//1.2 Execute the command
status_t IPCThreadState::executeCommand(int32_t cmd)
{
    // The client's request reached the driver, which forwards it to the server
    switch ((uint32_t)cmd) {
    case BR_TRANSACTION: {
        // The server receives the BR_TRANSACTION command
        binder_transaction_data tr;
        // Read the parameters the client sent
        result = mIn.read(&tr, sizeof(tr));

        // Prepare Parcels and hand the call up to the Java layer
        Parcel buffer;
        Parcel reply;
        buffer.ipcSetDataReference(...);
        // tr.cookie holds the binder entity, i.e. the server's native BBinder object
        reinterpret_cast<BBinder*>(tr.cookie)->transact(tr.code, buffer,
                                                        &reply, tr.flags);
        // The server writes the return value to the driver,
        // which forwards it to the client, see 1.3
        sendReply(reply, 0);
    }
    }
}

//1.3 Write the return value to the driver for forwarding to the client
status_t IPCThreadState::sendReply(const Parcel& reply, uint32_t flags)
{
    err = writeTransactionData(BC_REPLY, flags, -1, 0, reply, &statusBuffer);
    // The server only returns a result; it does not wait on the client, so pass NULL
    return waitForResponse(NULL, NULL);
}

Now see how BBinder's transact hands the call up to the Java layer, in Binder.cpp:

//Binder.cpp

status_t BBinder::transact(uint32_t code, const Parcel& data,
                           Parcel* reply, uint32_t flags)
{
    switch (code) {
        // The ping command checks connectivity, i.e. whether the binder handle is still alive
        case PING_TRANSACTION:
            reply->writeInt32(pingBinder());
            break;
        default:
            // Here it is: a JNI call up to the Java layer's execTransact, see 1.1
            err = onTransact(code, data, reply, flags);
            break;
    }
    return err;
}

//android_util_Binder.cpp

//1.1 Call the Java layer's execTransact through JNI
virtual status_t onTransact(...)
{
    JNIEnv* env = javavm_to_jnienv(mVM);
    jboolean res = env->CallBooleanMethod(mObject, gBinderOffsets.mExecTransact, ...);
}

Back at the Java layer, execTransact looks like this:

//android.os.Binder.java

private boolean execTransact(...) {
    res = onTransact(code, data, reply, flags);
}

At this point the onTransact of the server-side MyBinder from the sample code is called back; there we read the request parameters from data and write the return value into reply. Finally, the native layer's sendReply(reply, 0) writes the return value to the driver, which forwards it to the client.

Combine the calling code with the flowchart:

Then the command interaction diagram (non-one-way mode):

A synchronous binder call actually ends with the BR_REPLY command reaching the client; the server meanwhile continues its loop, waiting for the next request.

Conclusion

This article focuses on Binder’s background and call process, leaving three questions for further discussion.

  1. How binder handles are transferred and managed (the Binder driver and the ServiceManager process)
  2. How a binder handle is converted from remote to local
  3. One-way asynchronous mode and its serialized calls (async_todo), versus parallel calls in synchronous mode

Series of articles:

  • Illustrated | Android startup
  • Illustrated | One diagram to understand Android system services
  • Illustrated | One diagram to understand Android application process startup

Additional details

Why Binder is efficient

Linux user space cannot read or write the disk directly; all system resource management (reading and writing disk files, allocating and reclaiming memory, reading and writing network data) happens in kernel space.

Traditional IPC transfers data in two copies: the sending process copies it from user space to kernel space (copy_from_user), and the receiving process copies it from kernel space to user space (copy_to_user).

Binder transfers data in one copy: mmap maps the Binder kernel-space virtual memory and the receiving process's user-space virtual memory to the same physical memory. copy_from_user then copies the data from the sending process's user space straight into the receiving process's kernel buffer (the one copy), and the receiving process reads it from user space directly through the mapping.

(Image source: “Binder Principles for Android Application Engineers”)

Why doesn't Binder use SHM

SHM usually has to be combined with another mechanism, such as semaphores, to synchronize access, which makes it less convenient to use than mmap.

Questions

  • Why is SurfaceFlinger not created by Zygote’s fork, but by init?

References

  • Book: Scenario Analysis of Android System Source Code
  • Blog: Wang Xiaoer's Android site
  • Blog: Binder Principles for Android Application Engineers
  • Blog: Binder Transport Mechanisms
  • Blog: Shared memory vs. file memory mapping

For more articles, follow the author's official account: Halliday EI.