Preface

Building on a brief introduction to the Binder system, this article starts from AIDL and walks through one complete Binder communication, end to end.

I didn't originally intend to publish this article, because Gityuan's series already explains the details of Binder clearly enough. But analyzing source code is not easy, and reading articles alone is not enough to really grasp the details. This article is organized according to my own way of thinking, keeping the analysis as linear as possible. The real code is not linear, though: one line of reading constantly leads you to another, so some branching topics have been set aside here for me to dig into later.

Analysis of AIDL-generated code

Using AIDL

Start by writing an IHelloInterface.aidl file as follows

interface IHelloInterface {
    void hello(String msg);
}

After a build, the IHelloInterface.java file is generated. Next, create a remote service

class RemoteService : Service() {
    private val serviceBinder = object : IHelloInterface.Stub() {
        override fun hello(msg: String) {
            Log.e("remote", "hello from client: $msg")
        }
    }

    override fun onBind(intent: Intent): IBinder = serviceBinder
}

Bind to the remote service and invoke its method

class MainActivity : AppCompatActivity() {
    private val conn = object : ServiceConnection {
        override fun onServiceConnected(name: ComponentName, service: IBinder) {
            // service is a BinderProxy
            // asInterface returns an IHelloInterface.Stub.Proxy instance
            val proxy = IHelloInterface.Stub.asInterface(service)
            proxy.hello("client msg")
        }

        override fun onServiceDisconnected(name: ComponentName?) {}
    }

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_main)
        bindService(Intent("com.lyj.RemoteService"), conn, Service.BIND_AUTO_CREATE)
    }
}

The remote service returns its server-side Binder instance from onBind. After the client binds the service through bindService, AMS creates a BinderProxy instance corresponding to that server-side Binder and delivers it in the ServiceConnection.onServiceConnected callback. The client can then pass the BinderProxy to IHelloInterface.Stub.asInterface to obtain an IHelloInterface.Stub.Proxy instance, and calling its methods performs the IPC.

IHelloInterface analysis

The content of this file is divided into three parts:

  • The IHelloInterface interface is a functional abstraction of the remote service
    public interface IHelloInterface extends android.os.IInterface {
        public void hello(java.lang.String msg) throws android.os.RemoteException;
    }
  • The IHelloInterface.Stub class represents the server-side implementation. It is an abstract class that extends Binder; we override its hello method when we use it (the object returned from the remote service's onBind overrides hello).
    public static abstract class Stub extends android.os.Binder implements com.lyj.bindertest.IHelloInterface {
        // Type identifier
        private static final java.lang.String DESCRIPTOR = "com.lyj.bindertest.IHelloInterface";
        // Client and server identify the hello method by this code
        static final int TRANSACTION_hello = (android.os.IBinder.FIRST_CALL_TRANSACTION + 0);

        public Stub() {
            this.attachInterface(this, DESCRIPTOR);
        }

        // This method is usually called in the ServiceConnection.onServiceConnected callback
        public static com.lyj.bindertest.IHelloInterface asInterface(android.os.IBinder obj) {
            if ((obj == null)) {
                return null;
            }
            android.os.IInterface iin = obj.queryLocalInterface(DESCRIPTOR);
            if ((iin != null) && (iin instanceof com.lyj.bindertest.IHelloInterface)) {
                return ((com.lyj.bindertest.IHelloInterface) iin);
            }
            return new com.lyj.bindertest.IHelloInterface.Stub.Proxy(obj);
        }

        @Override
        public android.os.IBinder asBinder() {
            return this;
        }

        // Handle the client's call
        @Override
        public boolean onTransact(int code, android.os.Parcel data, android.os.Parcel reply, int flags) throws android.os.RemoteException {
            java.lang.String descriptor = DESCRIPTOR;
            switch (code) {
                case INTERFACE_TRANSACTION: {
                    reply.writeString(descriptor);
                    return true;
                }
                case TRANSACTION_hello: {
                    // Call hello when code is TRANSACTION_hello
                    data.enforceInterface(descriptor);
                    java.lang.String _arg0;
                    // Read the argument from the parcel
                    _arg0 = data.readString();
                    this.hello(_arg0);
                    reply.writeNoException();
                    return true;
                }
                default: {
                    return super.onTransact(code, data, reply, flags);
                }
            }
        }
    }
  • The IHelloInterface.Stub.Proxy class is the client-side proxy for the remote service; its mRemote field is the BinderProxy corresponding to the remote service's Binder.
    private static class Proxy implements com.lyj.bindertest.IHelloInterface {
        // BinderProxy corresponding to the remote service's Binder
        private android.os.IBinder mRemote;

        Proxy(android.os.IBinder remote) {
            mRemote = remote;
        }

        @Override
        public android.os.IBinder asBinder() {
            return mRemote;
        }

        @Override
        public void hello(java.lang.String msg) throws android.os.RemoteException {
            android.os.Parcel _data = android.os.Parcel.obtain();
            android.os.Parcel _reply = android.os.Parcel.obtain();
            try {
                // Write the type identifier
                _data.writeInterfaceToken(DESCRIPTOR);
                // Write the argument
                _data.writeString(msg);
                // Call BinderProxy.transact to initiate the communication
                boolean _status = mRemote.transact(Stub.TRANSACTION_hello, _data, _reply, 0);
                if (!_status && getDefaultImpl() != null) {
                    getDefaultImpl().hello(msg);
                    return;
                }
                _reply.readException();
            } finally {
                _reply.recycle();
                _data.recycle();
            }
        }
    }

To summarize the relationship between IHelloInterface.Stub and IHelloInterface.Stub.Proxy:

Stub is a Binder. It receives client calls in onTransact and, when code == TRANSACTION_hello, invokes IHelloInterface.hello.

IHelloInterface.Stub.Proxy holds a BinderProxy named mRemote (which internally wraps the handle of the server's Binder). Proxy.hello calls BinderProxy.transact, passing code = TRANSACTION_hello, which is exactly the code the server receives.

So BinderProxy.transact initiates the communication and Binder.onTransact receives it on the server side; AIDL is just a convenience wrapper around this pair.
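The same division of labor can be reproduced by hand against libbinder's native API, which shows how thin the AIDL layer is. A minimal sketch (the class and helper names here are mine, not generated code):

#include <binder/Binder.h>
#include <binder/Parcel.h>
#include <utils/String16.h>

using namespace android;

static const uint32_t TRANSACTION_hello = IBinder::FIRST_CALL_TRANSACTION + 0;

// Server side: a BBinder subclass plays the role of IHelloInterface.Stub
class HelloService : public BBinder {
protected:
    status_t onTransact(uint32_t code, const Parcel& data,
                        Parcel* reply, uint32_t flags) override {
        switch (code) {
        case TRANSACTION_hello: {
            // Same checks the generated Stub performs
            if (!data.enforceInterface(String16("com.lyj.bindertest.IHelloInterface")))
                return BAD_TYPE;
            String16 msg = data.readString16();   // unpack the argument
            // ... the real hello(msg) implementation would run here ...
            reply->writeNoException();
            return NO_ERROR;
        }
        default:
            return BBinder::onTransact(code, data, reply, flags);
        }
    }
};

// Client side: what IHelloInterface.Stub.Proxy.hello boils down to
status_t callHello(const sp<IBinder>& remote, const String16& msg) {
    Parcel data, reply;
    data.writeInterfaceToken(String16("com.lyj.bindertest.IHelloInterface"));
    data.writeString16(msg);                      // pack the argument
    // remote is a BinderProxy/BpBinder; this call is the entry into the whole IPC path
    return remote->transact(TRANSACTION_hello, data, &reply, 0);
}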

Transition from Java layer to Native layer

Starting from BinderProxy.transact, the JNI method transactNative takes us into the native layer. The native function corresponding to BinderProxy.transactNative is android_os_BinderProxy_transact in android_util_Binder.cpp.

final class BinderProxy implements IBinder {
    public boolean transact(int code, Parcel data, Parcel reply, int flags) throws RemoteException {
        // Check that the parcel data is not unreasonably large (over 800KB)
        Binder.checkParcel(this, code, data, "Unreasonably large binder buffer");
        // Call into the native layer
        return transactNative(code, data, reply, flags);
    }
}

android_os_BinderProxy_transact obtains the BpBinder pointer from the BinderProxy object passed in and calls BpBinder::transact.

frameworks\base\core\jni\android_util_Binder.cpp

static jboolean android_os_BinderProxy_transact(JNIEnv* env, jobject obj, jint code, jobject dataObj, jobject replyObj, jint flags) // throws RemoteException
{
    // Convert the Java Parcels to native Parcels
    Parcel* data = parcelForJavaObject(env, dataObj);
    Parcel* reply = parcelForJavaObject(env, replyObj);
    // Get the BpBinder pointer from the Java BinderProxy object
    IBinder* target = (IBinder*) env->GetLongField(obj, gBinderProxyOffsets.mObject);
    // Call BpBinder::transact
    status_t err = target->transact(code, *data, reply, flags);
    return JNI_FALSE;
}

BpBinder calls IPCThreadState::transact. IPCThreadState::self returns the current thread's IPCThreadState singleton, creating it if it does not yet exist. As mentioned in the previous article, every thread that communicates over Binder has a corresponding IPCThreadState object on the native side.

frameworks\native\libs\binder\BpBinder.cpp

status_t BpBinder::transact(
    uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
    if (mAlive) {
        // mHandle is the handle referring to the receiver's BBinder
        status_t status = IPCThreadState::self()->transact(
            mHandle, code, data, reply, flags);
        if (status == DEAD_OBJECT) mAlive = 0;
        return status;
    }
    return DEAD_OBJECT;
}

Create IPCThreadState

What IPCThreadState::self does is simple: fetch the current thread's IPCThreadState singleton (pthread_getspecific is the native analogue of Java's ThreadLocal) and, if it does not exist, create one with the no-argument constructor.

frameworks\native\libs\binder\IPCThreadState.cpp

IPCThreadState* IPCThreadState::self()
{
    // Has the TLS key been created?
    if (gHaveTLS) {
restart:
        const pthread_key_t k = gTLS;
        // Look up the thread-private storage
        IPCThreadState* st = (IPCThreadState*)pthread_getspecific(k);
        if (st) return st;
        return new IPCThreadState;
    }
    if (gShutdown) {
        return NULL;
    }
    // Thread synchronization lock
    pthread_mutex_lock(&gTLSMutex);
    if (!gHaveTLS) {
        int key_create_value = pthread_key_create(&gTLS, threadDestructor);
        if (key_create_value != 0) {
            pthread_mutex_unlock(&gTLSMutex);
            return NULL;
        }
        gHaveTLS = true;
    }
    pthread_mutex_unlock(&gTLSMutex);
    goto restart;
}

IPCThreadState::IPCThreadState()
    // Assign the ProcessState singleton
    : mProcess(ProcessState::self()),
      mStrictModePolicy(0),
      mLastTransactionBinderFlags(0)
{
    // Store the current object in the current thread's private storage
    pthread_setspecific(gTLS, this);
    clearCaller();
    // Two Parcel objects, mIn and mOut, are used to read from and write to the binder driver
    mIn.setDataCapacity(256);
    mOut.setDataCapacity(256);
}

In the IPCThreadState constructor, note the assignment of the ProcessState instance. It is a process-wide singleton that ProcessState::self fetches, creating it if absent. In fact, by the time of this call the ProcessState singleton already exists: as mentioned in the previous article, each app process, after being forked from Zygote, runs onZygoteInit in app_main.cpp, which creates the ProcessState and starts the binder thread pool. Let's look at this process-level Binder initialization first.

Binder initialization in process

frameworks\base\cmds\app_process\app_main.cpp

virtual void onZygoteInit()
{
    sp<ProcessState> proc = ProcessState::self();
    // Start the binder thread pool
    proc->startThreadPool();
}

Create ProcessState

frameworks\native\libs\binder\ProcessState.cpp

sp<ProcessState> ProcessState::self()
{
    // Process-wide mutex
    Mutex::Autolock _l(gProcessMutex);
    if (gProcess != NULL) {
        return gProcess;
    }
    gProcess = new ProcessState("/dev/binder");
    return gProcess;
}

The ProcessState constructor calls open_driver to open the Binder driver, and then the mmap system call invokes binder_mmap in the Binder driver to set up the memory mapping for receiving data.

ProcessState::ProcessState(const char *driver)
    // Open the binder driver; mDriverFD stores the file descriptor
    : mDriverFD(open_driver(driver))
    ......
    // Maximum number of binder threads, default 15
    , mMaxThreads(DEFAULT_MAX_BINDER_THREADS)
{
    if (mDriverFD >= 0) {
        // mmap calls binder_mmap to create a (1MB - 8KB) memory mapping
        mVMStart = mmap(0, BINDER_VM_SIZE, PROT_READ, MAP_PRIVATE | MAP_NORESERVE, mDriverFD, 0);
        if (mVMStart == MAP_FAILED) {
            close(mDriverFD);
            mDriverFD = -1;
            mDriverName.clear();
        }
    }
}

In open_driver, calls into the kernel-level Binder perform initialization:

  1. The open system call corresponds to binder_open in the driver layer; the returned fd is the file descriptor that must be passed in all subsequent operations
  2. ioctl(fd, BINDER_VERSION, ...) reaches the BINDER_VERSION case of binder_ioctl in the driver layer and fetches the kernel binder version
  3. ioctl(fd, BINDER_SET_MAX_THREADS, ...) reaches the BINDER_SET_MAX_THREADS case of binder_ioctl, which sets the thread-count limit for this process in the driver
static int open_driver(const char *driver)
{
    int fd = open(driver, O_RDWR | O_CLOEXEC);
    if (fd >= 0) {
        int vers = 0;
        // Get the kernel binder version
        status_t result = ioctl(fd, BINDER_VERSION, &vers);
        if (result == -1) {
            close(fd);
            fd = -1;
        }
        // Compare the kernel binder version with the framework binder version
        if (result != 0 || vers != BINDER_CURRENT_PROTOCOL_VERSION) {
            close(fd);
            fd = -1;
        }
        // Set binder_proc.max_threads = DEFAULT_MAX_BINDER_THREADS in the driver
        size_t maxThreads = DEFAULT_MAX_BINDER_THREADS;
        result = ioctl(fd, BINDER_SET_MAX_THREADS, &maxThreads);
        if (result == -1) {
            ......
        }
    }
    return fd;
}

Binder initialization in the driver layer

The driver-layer binder_open creates a binder_proc from the current process's information and inserts it into a global linked list. The binder_proc pointer is then stashed in the file structure corresponding to the user-space fd, ready for the next call into the driver.

drivers/android/binder.c

static int binder_open(struct inode *nodp, struct file *filp)
{
    // binder_proc is the per-process object in the driver, the counterpart of ProcessState
    struct binder_proc *proc;
    // Allocate kernel memory for the binder_proc
    proc = kzalloc(sizeof(*proc), GFP_KERNEL);
    if (proc == NULL)
        return -ENOMEM;
    // Get the process descriptor for the current process
    get_task_struct(current);
    proc->tsk = current;
    // Initialize the process task queue
    INIT_LIST_HEAD(&proc->todo);
    // Wait for the queue
    init_waitqueue_head(&proc->wait);
    proc->default_priority = task_nice(current);

    binder_lock(__func__);

    binder_stats_created(BINDER_STAT_PROC);
    // The binder_proc node inserts a global linked list
    hlist_add_head(&proc->proc_node, &binder_procs);
    proc->pid = current->group_leader->pid;
    INIT_LIST_HEAD(&proc->delivered_death);
    // The binder_proc pointer is stored in the file's private_data so it can be retrieved the next time user space calls the driver with this fd
    filp->private_data = proc;
    binder_unlock(__func__);
    return 0;
}

Memory mapping is enabled for the current process in binder_mmap

  • The vm_area_struct structure describes a range of virtual addresses in user space; the vm_struct structure describes a range of virtual addresses in kernel space
  • When user space calls mmap, the kernel allocates a user address range of the requested size and records it in the vm_area_struct. Inside binder_mmap, the kernel then allocates a kernel virtual address range of the same size, stored in the area pointer
  • A binder_buffer is created that records the start address and size of the user/kernel mapping ranges, to be used later for storing communication data
  • At this point neither virtual range is backed by memory. binder_update_page_range allocates one page (4KB) of physical memory and points both virtual ranges at that physical page, completing the mapping

Only one physical page is allocated here; more memory is allocated on demand when binder_transaction actually carries a communication.
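Before reading binder_mmap itself, it helps to keep one relationship in mind: the user range and the kernel range describe the same physical pages and differ only by a constant offset. A minimal illustration of the arithmetic (not kernel code; names are mine):

#include <cstdint>

struct MappedRegion {
    uintptr_t kernel_base;   // proc->buffer (kernel virtual start address)
    uintptr_t user_base;     // vma->vm_start (user virtual start address)

    // proc->user_buffer_offset in the driver
    uintptr_t userOffset() const { return user_base - kernel_base; }

    // What the driver does later when handing a kernel buffer address to user space
    // (see tr.data.ptr.buffer in the receiver's binder_thread_read below)
    uintptr_t toUserAddress(uintptr_t kernel_addr) const {
        return kernel_addr + userOffset();
    }
};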

static int binder_mmap(struct file *filp, struct vm_area_struct *vma)
{
    int ret;
    // Kernel virtual address range
    struct vm_struct *area;
    struct binder_proc *proc = filp->private_data;
    const char *failure_string;
    // Each Binder data transfer gets a binder_buffer carved out of this mapping to hold the transferred data
    struct binder_buffer *buffer;

    if (proc->tsk != current)
        return -EINVAL;
    // Ensure the memory mapping does not exceed 4MB
    if ((vma->vm_end - vma->vm_start) > SZ_4M)
        vma->vm_end = vma->vm_start + SZ_4M;
    ......
    // Use IOREMAP to allocate a contiguous kernel virtual range the same size as the user range
    // vma is the virtual address range structure passed in from user space
    area = get_vm_area(vma->vm_end - vma->vm_start, VM_IOREMAP);
    if (area == NULL) {
            ret = -ENOMEM;
            failure_string = "get_vm_area";
            goto err_get_vm_area_failed;
    }
    // Start address of the kernel virtual range
    proc->buffer = area->addr;
    // User virtual start address - kernel virtual start address
    proc->user_buffer_offset = vma->vm_start - (uintptr_t)proc->buffer;
    ......
    // Allocate an array of pointers to physical pages; the array size is the number of pages covered by the vma
    proc->pages = kzalloc(sizeof(proc->pages[0]) * ((vma->vm_end - vma->vm_start) / PAGE_SIZE), GFP_KERNEL);
    if (proc->pages == NULL) {
            ret = -ENOMEM;
            failure_string = "alloc page array";
            goto err_alloc_pages_failed;
    }
    proc->buffer_size = vma->vm_end - vma->vm_start;

    vma->vm_ops = &binder_vm_ops;
    vma->vm_private_data = proc;
    // Allocate one physical page and map it into both kernel space and process space
    if (binder_update_page_range(proc, 1, proc->buffer, proc->buffer + PAGE_SIZE, vma)) {
            ret = -ENOMEM;
            failure_string = "alloc small buf";
            goto err_alloc_small_buf_failed;
    }
    buffer = proc->buffer;
    // Initialize the buffers list and insert this buffer into it
    INIT_LIST_HEAD(&proc->buffers);
    list_add(&buffer->entry, &proc->buffers);
    buffer->free = 1;
    binder_insert_free_buffer(proc, buffer);
    // Oneway (asynchronous) transactions may use at most half of the total space
    proc->free_async_space = proc->buffer_size / 2;
    barrier();
    proc->files = get_files_struct(current);
    proc->vma = vma;
    proc->vma_vm_mm = vma->vm_mm;
    return 0;
}

The binder_update_page_range function backs the mapped range with physical pages: it allocates a physical page (4KB) and then maps it into both the user-space address and the kernel-space address.

static int binder_update_page_range(struct binder_proc *proc, int allocate,
				    void *start, void *end,
				    struct vm_area_struct *vma)
{
    // Address within the kernel mapping range
    void *page_addr;
    // Address within the user mapping range
    unsigned long user_page_addr;
    struct page **page;
    // Memory descriptor
    struct mm_struct *mm;

    if (end <= start)
        return 0;
    ......
    // Allocate all the physical pages in a loop, mapping each into user space and kernel space
    for (page_addr = start; page_addr < end; page_addr += PAGE_SIZE) {
        int ret;
        page = &proc->pages[(page_addr - proc->buffer) / PAGE_SIZE];

        BUG_ON(*page);
        // Allocate one page of physical memory
        *page = alloc_page(GFP_KERNEL | __GFP_HIGHMEM | __GFP_ZERO);
        if (*page == NULL) {
                pr_err("%d: binder_alloc_buf failed for page at %p\n",
                        proc->pid, page_addr);
                goto err_alloc_page_failed;
        }
        // Map the physical page into the kernel virtual range
        ret = map_kernel_range_noflush((unsigned long)page_addr,
                                PAGE_SIZE, PAGE_KERNEL, page);
        flush_cache_vmap((unsigned long)page_addr,
                         (unsigned long)page_addr + PAGE_SIZE);
        // User-space address = kernel address + offset
        user_page_addr =
                (uintptr_t)page_addr + proc->user_buffer_offset;
        // Map the physical page into the user virtual range
        ret = vm_insert_page(vma, user_page_addr, page[0]);
    }
}

Starting the Binder thread pool

Since every complete Binder communication blocks the current thread in a read/write loop on the driver, using multiple threads to handle requests concurrently is the natural choice. Each time the binder driver hands the current thread a communication request from another process, the thread processes it; if the number of binder threads has not reached the limit, another thread may be created to stand ready for subsequent requests, improving responsiveness. Let's verify this in the code.

frameworks\native\libs\binder\ProcessState.cpp

void ProcessState::startThreadPool()
{
    AutoMutex _l(mLock);
    if (!mThreadPoolStarted) {
        mThreadPoolStarted = true;
        // Start the binder main thread
        spawnPooledThread(true);
    }
}

void ProcessState::spawnPooledThread(bool isMain)
{
    if (mThreadPoolStarted) {
        // Thread name "Binder:pid_N"; the sequence number N starts at 1
        String8 name = makeBinderThreadName();
        // isMain marks the binder main thread
        sp<Thread> t = new PoolThread(isMain);
        // Thread::run eventually calls PoolThread::threadLoop
        t->run(name.string());
    }
}

After Thread::run, a chain of calls ends up in PoolThread::threadLoop

virtual bool threadLoop()
{
    // In the new thread, join the thread pool
    // (registering as the binder main thread when mIsMain is true)
    IPCThreadState::self()->joinThreadPool(mIsMain);
    return false;
}

IPCThreadState::joinThreadPool calls getAndExecuteCommand in an infinite loop to read from and write to the binder driver: it writes BC_ENTER_LOOPER first, then reads indefinitely, sleeping when there is no work.

void IPCThreadState::joinThreadPool(bool isMain)
{
    // The main thread writes BC_ENTER_LOOPER; other threads write BC_REGISTER_LOOPER
    mOut.writeInt32(isMain ? BC_ENTER_LOOPER : BC_REGISTER_LOOPER);
    status_t result;
    do {
        // Release pending Binder strong/weak references
        processPendingDerefs();
        // Read the next instruction from the driver via talkWithDriver and process it
        result = getAndExecuteCommand();
        if (result < NO_ERROR && result != TIMED_OUT && result != -ECONNREFUSED && result != -EBADF) {
            abort();
        }
        // A non-main thread that timed out exits the loop
        if (result == TIMED_OUT && !isMain) {
            break;
        }
    } while (result != -ECONNREFUSED && result != -EBADF);
    // The driver must be notified when a thread exits
    mOut.writeInt32(BC_EXIT_LOOPER);
    talkWithDriver(false);
}

On the first call to talkWithDriver, the BC_ENTER_LOOPER that joinThreadPool wrote into the parcel is sent to the driver, telling it that this thread has entered the loop; the thread then starts reading from the driver.

status_t IPCThreadState::getAndExecuteCommand()
{
    status_t result;
    int32_t cmd;
    // Talk to the driver: first write BC_ENTER_LOOPER to it,
    // then read instructions from it; the thread sleeps here when there is no work
    result = talkWithDriver();
    if (result >= NO_ERROR) {
        size_t IN = mIn.dataAvail();
        if (IN < sizeof(int32_t)) return result;
        cmd = mIn.readInt32();
        ...... // thread-count bookkeeping
        // Parse and process the command
        result = executeCommand(cmd);
        ...... // thread-count bookkeeping
    }
    return result;
}

For now it is enough to know that ioctl leads into the driver's binder_ioctl_write_read, and then into the BC_ENTER_LOOPER branch of binder_thread_write. The details are left for the actual communication later.

status_t IPCThreadState::talkWithDriver(bool doReceive)
{
    if (mProcess->mDriverFD <= 0) {
        return -EBADF;
    }
    ......
    do {
        // ioctl into the driver's binder_ioctl_write_read:
        // first the BC_ENTER_LOOPER case of binder_thread_write runs,
        // then binder_thread_read is called
        if (ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr) >= 0)
            err = NO_ERROR;
        else
            err = -errno;
        if (mProcess->mDriverFD <= 0) { err = -EBADF; }
    } while (err == -EINTR);
    ......
}

Driver layer thread management

When the driver reads BC_REGISTER_LOOPER or BC_ENTER_LOOPER, it sets the binder_thread.looper flag of the current thread to indicate that it has entered the loop. The difference is that threads registered via BC_REGISTER_LOOPER (isMain = false) are accounted for through requested_threads and subject to the maximum thread count.

drivers/android/binder.c

static int binder_thread_write(struct binder_proc *proc,
			struct binder_thread *thread,
			binder_uintptr_t binder_buffer, size_t size,
			binder_size_t *consumed)
{
    ......
    while (ptr < end && thread->return_error == BR_OK) {
        ......
        switch (cmd) {
        // Not the main thread
        case BC_REGISTER_LOOPER:
            if (thread->looper & BINDER_LOOPER_STATE_ENTERED) {
                    // Already registered as a binder main thread; cannot register again
                    thread->looper |= BINDER_LOOPER_STATE_INVALID;
            } else if (proc->requested_threads == 0) {
                    // No new thread should have been created without a request
                    thread->looper |= BINDER_LOOPER_STATE_INVALID;
            } else {
                    proc->requested_threads--;
                    proc->requested_threads_started++;
            }
            thread->looper |= BINDER_LOOPER_STATE_REGISTERED;
            break;
        // The main thread
        case BC_ENTER_LOOPER:
            if (thread->looper & BINDER_LOOPER_STATE_REGISTERED) {
                    thread->looper |= BINDER_LOOPER_STATE_INVALID;
            }
            // Set the binder_thread.looper flag of the calling thread
            thread->looper |= BINDER_LOOPER_STATE_ENTERED;
            break;
        }
    }
    ......
}

Next, go to the binder_thread_read function

  1. binder_proc.todo is a queue storing requests made by other processes to this process (BINDER_WORK_TRANSACTION); binder_thread.todo holds the current thread's own work items (BINDER_WORK_TRANSACTION_COMPLETE)
  2. When the current binder_thread.todo is empty, the thread goes to sleep until binder_proc.todo is no longer empty
  3. The thread is woken up when a client's communication request arrives, takes the BINDER_WORK_TRANSACTION case, and finally, after a series of checks, writes BR_SPAWN_LOOPER back to the framework layer to start a new thread if the conditions are met
static int binder_thread_read(struct binder_proc *proc,
			      struct binder_thread *thread,
			      binder_uintptr_t binder_buffer, size_t size,
			      binder_size_t *consumed, int non_block)
{
    ......
    // Whether this thread should sleep waiting for process-level work
    int wait_for_proc_work;
    ......
retry:
    // True when the current thread's todo queue and transaction stack are
    // both empty, i.e. the thread is idle
    wait_for_proc_work = thread->transaction_stack == NULL && list_empty(&thread->todo);
    ......
    // Update the looper status flag
    thread->looper |= BINDER_LOOPER_STATE_WAITING;
    if (wait_for_proc_work)
        // One more idle/ready thread
        proc->ready_threads++;

    binder_unlock(__func__);

    if (wait_for_proc_work) {
        // We take this branch
        if (!(thread->looper & (BINDER_LOOPER_STATE_REGISTERED | BINDER_LOOPER_STATE_ENTERED))) {
            wait_event_interruptible(binder_user_error_wait, binder_stop_on_user_error < 2);
        }
        binder_set_nice(proc->default_priority);
        if (non_block) {
            // Non-blocking: just check whether proc->todo has binder_work
            if (!binder_has_proc_work(proc, thread))
                ret = -EAGAIN;
        } else
            // Blocking: the thread sleeps until binder_proc.todo is not empty
            // proc->wait is the wait queue the thread parks on
            ret = wait_event_freezable_exclusive(proc->wait, binder_has_proc_work(proc, thread));
    } else {
        ......
    }
    binder_lock(__func__);
    // ===== The thread is woken up here and continues =====

    // One less waiting thread; reset the flag bit
    if (wait_for_proc_work)
        proc->ready_threads--;
    thread->looper &= ~BINDER_LOOPER_STATE_WAITING;

    if (ret)
        return ret;

    while (1) {
        uint32_t cmd;
        struct binder_transaction_data tr;
        struct binder_work *w;
        struct binder_transaction *t = NULL;

        if (!list_empty(&thread->todo)) {
            w = list_first_entry(&thread->todo, struct binder_work, entry);
        } else if (!list_empty(&proc->todo) && wait_for_proc_work) {
            w = list_first_entry(&proc->todo, struct binder_work, entry);
        } else {
            ......
        }
        ......
        switch (w->type) {
        case BINDER_WORK_TRANSACTION: {
            // Recover the binder_transaction from the embedded binder_work
            t = container_of(w, struct binder_transaction, work);
        } break;
        }
        // A BINDER_WORK_TRANSACTION was received, so there is a transaction
        // to process; execution continues from here
        if (!t)
            continue;
        ......
    }
done:
    // Ask user space to spawn a new thread if all of the following hold:
    //  - no thread-creation request is pending (requested_threads == 0)
    //  - the process has no idle binder threads (ready_threads == 0)
    //  - the number of started threads is below the limit (15 by default)
    //  - the current thread has entered the loop
    if (proc->requested_threads + proc->ready_threads == 0 &&
        proc->requested_threads_started < proc->max_threads &&
        (thread->looper & (BINDER_LOOPER_STATE_REGISTERED |
         BINDER_LOOPER_STATE_ENTERED)) /* the user-space code fails to */
         /* spawn a new thread if we leave this out */) {
            proc->requested_threads++;
            // Write the BR_SPAWN_LOOPER command into read_buffer
            if (put_user(BR_SPAWN_LOOPER, (uint32_t __user *)buffer))
                    return -EFAULT;
            binder_stat_br(proc, thread, BR_SPAWN_LOOPER);
    }
}

Back in IPCThreadState::talkWithDriver: once data has been read from the driver, IPCThreadState::executeCommand processes it. The BR_SPAWN_LOOPER case starts another thread, which registers itself again, this time with isMain = false.

status_t IPCThreadState::executeCommand(int32_t cmd)
{
    switch ((uint32_t)cmd) {
    case BR_SPAWN_LOOPER:
        // Start another thread that loops listening on the driver;
        // isMain is false, so it registers with BC_REGISTER_LOOPER as an ordinary binder thread
        mProcess->spawnPooledThread(false);
        break;
    }
}

The sender initiates a communication

With Binder initialization covered, we move on to IPCThreadState::transact:

  1. writeTransactionData packages the data to be sent
  2. waitForResponse sends the data to the receiver and waits for the reply (the receiver sends BR_REPLY back to the sender after handling the data); in oneway (asynchronous) mode there is no need to wait for the receiver's reply
status_t IPCThreadState::transact(int32_t handle,
                                  uint32_t code, const Parcel& data,
                                  Parcel* reply, uint32_t flags)
{
    ......
    if (err == NO_ERROR) {
        // Package up the data
        err = writeTransactionData(BC_TRANSACTION, flags, handle, code, data, NULL);
    }

    if (err != NO_ERROR) {
        if (reply) reply->setError(err);
        return (mLastError = err);
    }
    // Non-oneway: must wait for the receiver's reply
    if ((flags & TF_ONE_WAY) == 0) {
        if (reply) {
            // Send the data to the receiver and wait for the return
            err = waitForResponse(reply);
        } else {
            Parcel fakeReply;
            err = waitForResponse(&fakeReply);
        }
    } else {
        // Oneway (asynchronous): no need to wait for a reply
        err = waitForResponse(NULL, NULL);
    }
    return err;
}
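From the caller's side, the oneway branch is selected by passing IBinder::FLAG_ONEWAY, which corresponds to the TF_ONE_WAY bit checked above. A minimal illustration (function names are mine):

#include <binder/IBinder.h>
#include <binder/Parcel.h>

using namespace android;

// Synchronous call: transact blocks in waitForResponse until BR_REPLY arrives
status_t callSync(const sp<IBinder>& remote, uint32_t code,
                  const Parcel& data, Parcel* reply) {
    return remote->transact(code, data, reply, 0);
}

// Oneway call: TF_ONE_WAY is set, so waitForResponse only consumes
// BR_TRANSACTION_COMPLETE and returns without waiting for the receiver
status_t callOneway(const sp<IBinder>& remote, uint32_t code, const Parcel& data) {
    return remote->transact(code, data, nullptr, IBinder::FLAG_ONEWAY);
}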

IPCThreadState::writeTransactionData packs the Parcel data, the handle, and so on into a binder_transaction_data structure, then writes BC_TRANSACTION plus this structure into the mOut Parcel; mOut is what gets written to the kernel. The fields to note here are tr.target.handle, tr.code, tr.data.ptr.buffer, tr.data.ptr.offsets, and cmd.

Note that if a Binder entity needs to be sent to the server, its location is recorded via tr.data.ptr.offsets. This is common in two-way AIDL communication: when a client registers a callback with the server, the callback also has a corresponding BBinder, and that BBinder is sent to the server at registration time.

status_t IPCThreadState::writeTransactionData(int32_t cmd, uint32_t binderFlags,
    int32_t handle, uint32_t code, const Parcel& data, status_t* statusBuffer)
{
    binder_transaction_data tr;

    tr.target.ptr = 0;
    // Handle of the target BBinder
    tr.target.handle = handle;
    // code identifies the method; TRANSACTION_hello for our hello function
    tr.code = code;
    tr.flags = binderFlags;
    tr.cookie = 0;
    tr.sender_pid = 0;
    tr.sender_euid = 0;

    const status_t err = data.errorCheck();
    if (err == NO_ERROR) {
        tr.data_size = data.ipcDataSize();
        // data.ipcData() returns a pointer to the raw communication data
        tr.data.ptr.buffer = data.ipcData();
        tr.offsets_size = data.ipcObjectsCount() * sizeof(binder_size_t);
        // ipcObjects() points at the Binder entities that must be passed to the
        // server, e.g. the callback scenario mentioned above
        tr.data.ptr.offsets = data.ipcObjects();
    } else if (statusBuffer) {
        ......
    } else {
        ......
    }
    // Here cmd == BC_TRANSACTION
    mOut.writeInt32(cmd);
    mOut.write(&tr, sizeof(tr));

    return NO_ERROR;
}
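For reference, here is the binder_transaction_data that mOut now carries, abridged from the UAPI header linux/android/binder.h (the __u32/binder_size_t/binder_uintptr_t typedefs come from that header), annotated with the fields the analysis keeps referring to:

struct binder_transaction_data {
    union {
        __u32            handle;   // tr.target.handle: receiver's handle (sender side)
        binder_uintptr_t ptr;      // tr.target.ptr: BBinder weak ref (receiver side)
    } target;
    binder_uintptr_t cookie;       // BBinder pointer, filled in on the receiving side
    __u32 code;                    // e.g. TRANSACTION_hello
    __u32 flags;                   // e.g. TF_ONE_WAY
    pid_t sender_pid;
    uid_t sender_euid;
    binder_size_t data_size;       // length of the raw payload
    binder_size_t offsets_size;    // length of the binder-object offset array
    union {
        struct {
            binder_uintptr_t buffer;   // tr.data.ptr.buffer: the serialized Parcel
            binder_uintptr_t offsets;  // tr.data.ptr.offsets: offsets of Binder objects
        } ptr;
        __u8 buf[8];
    } data;
};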

waitForResponse loops calling talkWithDriver and reads the data that arrives in the mIn Parcel

status_t IPCThreadState::waitForResponse(Parcel *reply, status_t *acquireResult)
{
    uint32_t cmd;
    int32_t err;

    while (1) {
        // Construct a binder_write_read and exchange it with the driver
        if ((err = talkWithDriver()) < NO_ERROR) break;
        ...... // processing of the data read back from the driver is analyzed later
    }
    return err;
}

The talkWithDriver function is where communication through the driver actually happens

  1. Package the data held in mIn and mOut into a binder_write_read structure
  2. ioctl then calls into the driver layer's binder_ioctl_write_read; in this case mOut has data and mIn does not, so the BC_TRANSACTION case of binder_thread_write runs first (write before read)
  3. Execution then enters binder_thread_read, which handles the BINDER_WORK_TRANSACTION_COMPLETE case
status_t IPCThreadState::talkWithDriver(bool doReceive)
{
    if (mProcess->mDriverFD <= 0) {
        return -EBADF;
    }
    // doReceive defaults to true and means the caller expects to receive
    // the command protocol the binder driver sends back
    binder_write_read bwr;
    // mIn has no unread data here, so needRead is true
    const bool needRead = mIn.dataPosition() >= mIn.dataSize();
    // Only write when no reply is expected, or when mIn has nothing left to read
    const size_t outAvail = (!doReceive || needRead) ? mOut.dataSize() : 0;
    bwr.write_size = outAvail;
    bwr.write_buffer = (uintptr_t)mOut.data();
    if (doReceive && needRead) {
        // A reply is expected and mIn is empty, so data can be read into mIn
        // read_size is 256 here, set when IPCThreadState was initialized
        bwr.read_size = mIn.dataCapacity();
        bwr.read_buffer = (uintptr_t)mIn.data();
    } else {
        bwr.read_size = 0;
        bwr.read_buffer = 0;
    }

    if ((bwr.write_size == 0) && (bwr.read_size == 0)) return NO_ERROR;
    bwr.write_consumed = 0;
    bwr.read_consumed = 0;
    status_t err;
    do {
        // ioctl into the driver's binder_ioctl_write_read:
        // first the BC_TRANSACTION case of binder_thread_write runs,
        // then binder_thread_read, which handles BINDER_WORK_TRANSACTION_COMPLETE
        if (ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr) >= 0)
            err = NO_ERROR;
        else
            err = -errno;
        if (mProcess->mDriverFD <= 0) { err = -EBADF; }
    } while (err == -EINTR);

    if (err >= NO_ERROR) {
        if (bwr.write_consumed > 0) {
            // Discard the data segment that has been consumed
            if (bwr.write_consumed < mOut.dataSize())
                mOut.remove(0, bwr.write_consumed);
            else
                mOut.setDataSize(0);
        }
        if (bwr.read_consumed > 0) {
            // Data was read back from the driver:
            // reset the read buffer size and pointer position
            mIn.setDataSize(bwr.read_consumed);
            mIn.setDataPosition(0);
        }
        ......
        return NO_ERROR;
    }
}

The data finally written to the driver layer is a binder_write_read structure (the original post showed a figure here; the structure itself is shown below).
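Abridged from the UAPI header linux/android/binder.h, with annotations tying the fields back to talkWithDriver:

struct binder_write_read {
    binder_size_t    write_size;     // bytes available in write_buffer (from mOut)
    binder_size_t    write_consumed; // bytes the driver actually consumed
    binder_uintptr_t write_buffer;   // -> mOut.data(): [BC_TRANSACTION][binder_transaction_data]
    binder_size_t    read_size;      // capacity of read_buffer (mIn)
    binder_size_t    read_consumed;  // bytes the driver wrote back
    binder_uintptr_t read_buffer;    // -> mIn.data()
};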

The driver layer processes BC_TRANSACTION

drivers/android/binder.c

In binder_ioctl_write_read, the binder_write_read argument is copied in from user space; its write_buffer and read_buffer are handed to binder_thread_write and binder_thread_read respectively, and the structure is copied back to user space at the end.

static int binder_ioctl_write_read(struct file *filp,
				unsigned int cmd, unsigned long arg,
				struct binder_thread *thread)
{
    int ret = 0;
    // The binder_proc pointer stored in private_data by binder_open
    struct binder_proc *proc = filp->private_data;
    unsigned int size = _IOC_SIZE(cmd);
    void __user *ubuf = (void __user *)arg;
    struct binder_write_read bwr;

    if (size != sizeof(struct binder_write_read)) {
        ret = -EINVAL;
        goto out;
    }
    // Copy the bwr structure in from user space
    if (copy_from_user(&bwr, ubuf, sizeof(bwr))) {
        ret = -EFAULT;
        goto out;
    }
    // write_size > 0 means there is data to write
    if (bwr.write_size > 0) {
        ret = binder_thread_write(proc, thread,
                                  bwr.write_buffer,
                                  bwr.write_size,
                                  &bwr.write_consumed);
        if (ret < 0) {
            bwr.read_consumed = 0;
            if (copy_to_user(ubuf, &bwr, sizeof(bwr)))
                    ret = -EFAULT;
            goto out;
        }
    }
    if (bwr.read_size > 0) {
        ret = binder_thread_read(proc, thread, bwr.read_buffer,
                                 bwr.read_size,
                                 &bwr.read_consumed,
                                 filp->f_flags & O_NONBLOCK);
        // Wake up waiting threads
        if (!list_empty(&proc->todo))
            wake_up_interruptible(&proc->wait);
        if (ret < 0) {
            if (copy_to_user(ubuf, &bwr, sizeof(bwr)))
                    ret = -EFAULT;
            goto out;
        }
    }
    // Copy the bwr structure back to user space
    if (copy_to_user(ubuf, &bwr, sizeof(bwr))) {
        ret = -EFAULT;
        goto out;
    }
out:
    return ret;
}

binder_thread_write reads cmd, enters the BC_TRANSACTION case, copies the binder_transaction_data in from user space, and calls binder_transaction to start a Binder transaction.

static int binder_thread_write(struct binder_proc *proc,
			struct binder_thread *thread,
			binder_uintptr_t binder_buffer, size_t size,
			binder_size_t *consumed)
{
    uint32_t cmd;
    // bwr->write_buffer
    void __user *buffer = (void __user *)(uintptr_t)binder_buffer;
    // Skip data already consumed
    void __user *ptr = buffer + *consumed;
    void __user *end = buffer + size;
    ......
    while (ptr < end && thread->return_error == BR_OK) {
        // Read the cmd that was written into mOut
        if (get_user(cmd, (uint32_t __user *)ptr))
            return -EFAULT;
        ......
        switch (cmd) {
        case BC_TRANSACTION:
        case BC_REPLY: {
            struct binder_transaction_data tr;
            // Copy the data in from user space, i.e. the tr that was written into mOut
            if (copy_from_user(&tr, ptr, sizeof(tr)))
                return -EFAULT;
            ptr += sizeof(tr);
            // Start a binder_transaction
            binder_transaction(proc, thread, &tr, cmd == BC_REPLY);
            break;
        }
        }
    }
}

The binder_transaction function is central and does a lot, so let's summarize it first:

  1. binder_ref corresponds to the native-layer BpBinder, and binder_node corresponds to the native-layer BBinder
  2. The receiver's binder_ref is looked up from the native-layer BpBinder.mHandle, giving the corresponding binder_node and binder_proc
  3. A binder_transaction is created. binder_alloc_buf creates a binder_buffer for the receiving process: it allocates physical memory, maps it into the receiving process and kernel space, inserts it into the receiver's buffer list, and assigns it to binder_transaction.buffer. This buffer is what the receiver uses to receive the sender's data. As said earlier, binder_mmap allocates only one physical page up front; the rest is allocated at communication time, which is here.
  4. Fields such as binder_transaction_data.code are loaded into the binder_transaction, and binder_transaction_data.data.ptr.buffer is copied from user space into the binder_buffer's data. Since the physical memory behind buffer.data is shared between the receiving process and the kernel, this single copy moves the data from the sender's process to the receiver's.
  5. binder_transaction.from is set to the sending thread's binder_thread, and the binder_transaction is pushed onto the sender thread's binder_thread.transaction_stack
  6. binder_transaction.work.type is set to BINDER_WORK_TRANSACTION and inserted into the receiving process's binder_proc.todo queue, and the target process is woken up to handle the work
  7. A binder_work of type BINDER_WORK_TRANSACTION_COMPLETE, used to tell the sender that the send has completed, is inserted into the sender thread's binder_thread.todo queue
static void binder_transaction(struct binder_proc *proc,
			       struct binder_thread *thread,
			       struct binder_transaction_data *tr, int reply)
{
    struct binder_transaction *t;
    struct binder_work *tcomplete;
    // Target process
    struct binder_proc *target_proc;
    // Target thread
    struct binder_thread *target_thread = NULL;
    // Target binder entity
    struct binder_node *target_node = NULL;
    // Target todo queue
    struct list_head *target_list;
    // Wait queue of the target process
    wait_queue_head_t *target_wait;
    if (reply) {
        ......
    } else {
        if (tr->target.handle) {
            struct binder_ref *ref;
            // The handle locates the corresponding binder_ref (Binder reference),
            // which gives the corresponding binder_node (Binder entity)
            ref = binder_get_ref(proc, tr->target.handle);
            target_node = ref->node;
        } else {
            ......
        }
        // Get the binder_proc from the binder_node
        target_proc = target_node->proc;
    }
    ......
    if (target_thread) {
        ......
    } else {
        // target_thread is NULL: any thread of the receiving process may handle
        // this, so use the todo queue of the target proc
        target_list = &target_proc->todo;
        target_wait = &target_proc->wait;
    }
    ......
    // Create the binder_transaction
    t = kzalloc(sizeof(*t), GFP_KERNEL);
    if (t == NULL) {
        return_error = BR_FAILED_REPLY;
        goto err_alloc_t_failed;
    }
    binder_stats_created(BINDER_STAT_TRANSACTION);
    // Create the binder_work
    tcomplete = kzalloc(sizeof(*tcomplete), GFP_KERNEL);
    if (tcomplete == NULL) {
        return_error = BR_FAILED_REPLY;
        goto err_alloc_tcomplete_failed;
    }
    binder_stats_created(BINDER_STAT_TRANSACTION_COMPLETE);
    ......
    if (!reply && !(tr->flags & TF_ONE_WAY))
        // Record the current (sending) thread as from
        t->from = thread;
    else
        t->from = NULL;
    // Load the binder_transaction_data fields into the binder_transaction
    t->code = tr->code;
    t->flags = tr->flags;
    ......
    // Create the binder_buffer for this communication, i.e. carve a block
    // out of the mapped physical memory
    t->buffer = binder_alloc_buf(target_proc, tr->data_size,
        tr->offsets_size, !reply && (t->flags & TF_ONE_WAY));
    if (t->buffer == NULL) {
        return_error = BR_FAILED_REPLY;
        goto err_binder_alloc_buf_failed;
    }
    t->buffer->allow_user_free = 0;
    t->buffer->debug_id = t->debug_id;
    t->buffer->transaction = t;
    // Record the target binder_node
    t->buffer->target_node = target_node;
    ......
    // offp is the address used to hold binder_transaction_data.data.ptr.offsets
    offp = (binder_size_t *)(t->buffer->data + ALIGN(tr->data_size, sizeof(void *)));
    // Copy data.ptr.buffer of the user-space binder_transaction_data into the
    // kernel, i.e. into binder_transaction.buffer->data
    if (copy_from_user(t->buffer->data, (const void __user *)(uintptr_t)
                       tr->data.ptr.buffer, tr->data_size)) {
            return_error = BR_FAILED_REPLY;
            goto err_copy_data_failed;
    }
    // Copy data.ptr.offsets of the user-space binder_transaction_data to offp
    if (copy_from_user(offp, (const void __user *)(uintptr_t)
                       tr->data.ptr.offsets, tr->offsets_size)) {
            return_error = BR_FAILED_REPLY;
            goto err_copy_data_failed;
    }
    // The offp..off_end range holds the Binder objects the client sends to the
    // server; when no callback is being passed, this loop can be skipped
    off_end = (void *)offp + tr->offsets_size;
    off_min = 0;
    for (; offp < off_end; offp++) {
        ......
    }
    if (reply) {
        ......
    } else if (!(t->flags & TF_ONE_WAY)) {
        // A non-oneway BC_TRANSACTION: record it on the sending thread's transaction stack
        t->need_reply = 1;
        t->from_parent = thread->transaction_stack;
        thread->transaction_stack = t;
    } else {
        ......
    }
    t->work.type = BINDER_WORK_TRANSACTION;
    // Add the BINDER_WORK_TRANSACTION to the target queue; for this
    // communication that is the todo queue of the server-side proc
    list_add_tail(&t->work.entry, target_list);
    tcomplete->type = BINDER_WORK_TRANSACTION_COMPLETE;
    // Add the BINDER_WORK_TRANSACTION_COMPLETE to the current thread's todo queue
    list_add_tail(&tcomplete->entry, &thread->todo);
    // Wake up the target process waiting on this BINDER_WORK_TRANSACTION
    if (target_wait)
            wake_up_interruptible(target_wait);
    return;
}
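For orientation, here is roughly what the two structures created above look like, abridged from drivers/android/binder.c of this kernel generation (many fields omitted; the list_head/kuid_t kernel types are assumed from the surrounding listings):

struct binder_work {
    struct list_head entry;                  // links into a todo list
    enum {
        BINDER_WORK_TRANSACTION = 1,
        BINDER_WORK_TRANSACTION_COMPLETE,
        // ...
    } type;
};

struct binder_transaction {
    struct binder_work work;                 // embedded; container_of recovers the transaction
    struct binder_thread *from;              // sender thread (NULL for oneway and replies)
    struct binder_transaction *from_parent;  // link in the sender's transaction stack
    struct binder_proc *to_proc;             // receiving process
    struct binder_thread *to_thread;         // receiving thread, once one picks it up
    struct binder_transaction *to_parent;
    unsigned need_reply:1;
    struct binder_buffer *buffer;            // the mapped buffer holding the payload
    unsigned int code;                       // e.g. TRANSACTION_hello
    unsigned int flags;                      // e.g. TF_ONE_WAY
    kuid_t sender_euid;
};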

The sender handles BINDER_WORK_TRANSACTION_COMPLETE

Back in binder_thread_read, the sender thread picks the BINDER_WORK_TRANSACTION_COMPLETE work out of its thread->todo queue and writes a cmd = BR_TRANSACTION_COMPLETE message back to user space, indicating that the send has completed.

static int binder_thread_read(struct binder_proc *proc,
			      struct binder_thread *thread,
			      binder_uintptr_t binder_buffer, size_t size,
			      binder_size_t *consumed, int non_block)
{
    while (1) {
        uint32_t cmd;
        struct binder_transaction_data tr;
        struct binder_work *w;
        struct binder_transaction *t = NULL;
        if (!list_empty(&thread->todo)) {
            // BINDER_WORK_TRANSACTION_COMPLETE was added to the sending thread's
            // todo queue, so this branch is taken and the binder_work is fetched
            w = list_first_entry(&thread->todo, struct binder_work, entry);
        } else if (!list_empty(&proc->todo) && wait_for_proc_work) {
            ......
        } else {
            ......
        }
        switch (w->type) {
        case BINDER_WORK_TRANSACTION_COMPLETE: {
            // Translate the work type into a cmd
            cmd = BR_TRANSACTION_COMPLETE;
            // Write the cmd back to the sender's user space
            if (put_user(cmd, (uint32_t __user *)ptr))
                    return -EFAULT;
            ptr += sizeof(uint32_t);
            binder_stat_br(proc, thread, cmd);
            // Remove and free the binder_work
            list_del(&w->entry);
            kfree(w);
            binder_stats_deleted(BINDER_STAT_TRANSACTION_COMPLETE);
        } break;
        }
    }
}

The driver-layer handling returns, the ioctl inside IPCThreadState::talkWithDriver returns as well, and control comes back to IPCThreadState::waitForResponse, which reads the BR_TRANSACTION_COMPLETE the driver wrote into mIn (bwr.read_buffer).

After BR_TRANSACTION_COMPLETE is processed, the sender still has to wait for the receiver's reply, so the loop calls talkWithDriver again and the thread goes to sleep in the driver's binder_thread_read, waiting for the reply.

status_t IPCThreadState::waitForResponse(Parcel *reply, status_t *acquireResult)
{
    uint32_t cmd;
    int32_t err;

    while (1) {
        if ((err = talkWithDriver()) < NO_ERROR) break;
        err = mIn.errorCheck();
        if (err < NO_ERROR) break;
        if (mIn.dataAvail() == 0) continue;
        cmd = (uint32_t)mIn.readInt32();
        switch (cmd) {
        case BR_TRANSACTION_COMPLETE:
            // When reply is non-null we still expect a subsequent BR_xxx response
            // from the server, so the loop continues calling talkWithDriver
            if (!reply && !acquireResult) goto finish;
            break;
        }
    }
}

The receiver processes the request

As mentioned in the Binder initialization section, the receiving process has started its Binder thread pool and loops reading the driver, sleeping in binder_thread_read when there is no work and waking up when binder_proc.todo becomes non-empty. The sender inserted a BINDER_WORK_TRANSACTION into the receiver's binder_proc.todo, so a receiver thread is woken up to handle it: it recovers the binder_transaction from the binder_work, loads a binder_transaction_data from it, sets cmd to BR_TRANSACTION, and copies both back to user space. target_node->ptr is a weak-reference pointer to the BBinder, and target_node->cookie is the BBinder pointer itself.

static int binder_thread_read(struct binder_proc *proc,
			      struct binder_thread *thread,
			      binder_uintptr_t binder_buffer, size_t size,
			      binder_size_t *consumed, int non_block)
{
    ......
    if (wait_for_proc_work)
        // Woken from sleep; one less idle thread
        proc->ready_threads--;
    // Reset the flag bit
    thread->looper &= ~BINDER_LOOPER_STATE_WAITING;
    while (1) {
        uint32_t cmd;
        struct binder_transaction_data tr;
        struct binder_work *w;
        struct binder_transaction *t = NULL;
        if (!list_empty(&thread->todo)) {
            ......
        } else if (!list_empty(&proc->todo) && wait_for_proc_work) {
            // Fetch the BINDER_WORK_TRANSACTION binder_work queued by the sender
            w = list_first_entry(&proc->todo, struct binder_work, entry);
        } else {
            ......
        }
        switch (w->type) {
        case BINDER_WORK_TRANSACTION: {
            // Recover the binder_transaction from the binder_work
            t = container_of(w, struct binder_transaction, work);
        } break;
        }
        ......
        if (t->buffer->target_node) {
            // This branch is taken; the receiver is a binder_node
            struct binder_node *target_node = t->buffer->target_node;

            tr.target.ptr = target_node->ptr;
            // The BBinder pointer
            tr.cookie = target_node->cookie;
            ......
            // Change the cmd
            cmd = BR_TRANSACTION;
        } else {
            ......
        }
        // Load the binder_transaction fields into tr
        tr.code = t->code;
        tr.flags = t->flags;
        tr.sender_euid = from_kuid(current_user_ns(), t->sender_euid);
        ......
        tr.data_size = t->buffer->data_size;
        tr.offsets_size = t->buffer->offsets_size;
        // t->buffer->data is the kernel address of the sender's data;
        // adding user_buffer_offset turns it into the receiver's user-space address
        tr.data.ptr.buffer = (binder_uintptr_t)(
                                (uintptr_t)t->buffer->data +
                                proc->user_buffer_offset);
        tr.data.ptr.offsets = tr.data.ptr.buffer +
                                ALIGN(t->buffer->data_size,
                                    sizeof(void *));
        // Copy the cmd and the binder_transaction_data back to user space
        if (put_user(cmd, (uint32_t __user *)ptr))
                return -EFAULT;
        ptr += sizeof(uint32_t);
        if (copy_to_user(ptr, &tr, sizeof(tr)))
                return -EFAULT;
        ptr += sizeof(tr);
        ......
    }
}

The receiver's driver-layer handling returns to IPCThreadState::getAndExecuteCommand: talkWithDriver() has finished reading the driver's data into mIn (bwr.read_buffer), and IPCThreadState::executeCommand is called to process it.

status_t IPCThreadState::getAndExecuteCommand()
{
    status_t result;
    int32_t cmd;
    // Talk to the driver: first write BC_ENTER_LOOPER to it,
    // then read instructions from it; the thread sleeps here when there is no work
    result = talkWithDriver();
    if (result >= NO_ERROR) {
        size_t IN = mIn.dataAvail();
        if (IN < sizeof(int32_t)) return result;
        cmd = mIn.readInt32();
        ...... // thread-count bookkeeping
        // Parse and process the command
        result = executeCommand(cmd);
        ...... // thread-count bookkeeping
    }
    return result;
}

The cookie is cast back to a BBinder pointer, BBinder::transact is called to deliver the data up to the Java layer, and finally sendReply sends the reply back to the client.

status_t IPCThreadState::executeCommand(int32_t cmd)
{
    BBinder* obj;
    RefBase::weakref_type* refs;
    status_t result = NO_ERROR;
    switch ((uint32_t)cmd) {
    case BR_TRANSACTION:
        {
            binder_transaction_data tr;
            // Read the structure out of mIn
            result = mIn.read(&tr, sizeof(tr));
            if (result != NO_ERROR) break;

            Parcel buffer;
            // Load the binder_transaction_data payload into a Parcel
            buffer.ipcSetDataReference(
                reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                tr.data_size,
                reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
                tr.offsets_size/sizeof(binder_size_t), freeBuffer, this);
            ......
            Parcel reply;
            status_t error;
            if (tr.target.ptr) {
                // Use the BBinder weak reference to check whether a strong
                // reference can still be taken
                if (reinterpret_cast<RefBase::weakref_type*>(
                        tr.target.ptr)->attemptIncStrong(this)) {
                    // Call BBinder::transact
                    error = reinterpret_cast<BBinder*>(tr.cookie)->transact(tr.code, buffer,
                            &reply, tr.flags);
                    reinterpret_cast<BBinder*>(tr.cookie)->decStrong(this);
                } else {
                    error = UNKNOWN_TRANSACTION;
                }
            } else {
                error = the_context_object->transact(tr.code, buffer, &reply, tr.flags);
            }

            if ((tr.flags & TF_ONE_WAY) == 0) {
                // Non-oneway: send a reply back to the client
                if (error < NO_ERROR) reply.setError(error);
                sendReply(reply, 0);
            } else {
                ......
            }
            ......
        }
        break;
    }
}

Delivering the data back to the Java layer

Since the BBinder pointer here is actually a JavaBBinder, BBinder::transact ends up in JavaBBinder::onTransact. As for why it is a JavaBBinder: trace how a Java-layer Binder object is initialized, and you will find it eventually creates the corresponding JavaBBinder in the native layer through JNI.

frameworks\native\libs\binder\Binder.cpp

status_t BBinder::transact(
    uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
    data.setDataPosition(0);
    status_t err = NO_ERROR;
    switch (code) {
        case PING_TRANSACTION:
            reply->writeInt32(pingBinder());
            break;
        default:
            err = onTransact(code, data, reply, flags);
            break;
    }

    if (reply != NULL) {
        reply->setDataPosition(0);
    }
    return err;
}

JavaBBinder::onTransact calls back up, via JNI, into the execTransact method of Binder.java.

frameworks\base\core\jni\android_util_Binder.cpp

// Java Binder class name
const char* const kBinderPathName = "android/os/Binder";

static int int_register_android_os_Binder(JNIEnv* env)
{
    jclass clazz = FindClassOrDie(env, kBinderPathName);
    // Cache the Binder.execTransact method ID
    gBinderOffsets.mClass = MakeGlobalRefOrDie(env, clazz);
    gBinderOffsets.mExecTransact = GetMethodIDOrDie(env, clazz, "execTransact", "(IJJI)Z");
    gBinderOffsets.mObject = GetFieldIDOrDie(env, clazz, "mObject", "J");
    // Register the JNI methods
    return RegisterMethodsOrDie(
        env, kBinderPathName,
        gBinderMethods, NELEM(gBinderMethods));
}

class JavaBBinder : public BBinder
{
protected:
    virtual status_t onTransact(
        uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags = 0)
    {
        JNIEnv* env = javavm_to_jnienv(mVM);
        // gBinderOffsets.mExecTransact is the Binder.execTransact method ID;
        // call up into Binder.java through JNI
        jboolean res = env->CallBooleanMethod(mObject, gBinderOffsets.mExecTransact,
            code, reinterpret_cast<jlong>(&data), reinterpret_cast<jlong>(reply), flags);

        if (env->ExceptionCheck()) {
            jthrowable excep = env->ExceptionOccurred();
            env->DeleteLocalRef(excep);
        }
        ......
        return res != JNI_FALSE ? NO_ERROR : UNKNOWN_TRANSACTION;
    }
};

Back in the Java layer, Binder.execTransact calls Binder.onTransact. The Binder here is an IHelloInterface.Stub, so we finally arrive at IHelloInterface.Stub.onTransact, which invokes the Stub's hello method.

frameworks\base\core\java\android\os\Binder.java

private boolean execTransact(int code, long dataObj, long replyObj,
        int flags) {
    Parcel data = Parcel.obtain(dataObj);
    Parcel reply = Parcel.obtain(replyObj);
    final boolean tracingEnabled = Binder.isTracingEnabled();
    try {
        res = onTransact(code, data, reply, flags);
    }
    ......
}

The server sends reply to the client

One last step: the client is still waiting for the response. Back in IPCThreadState::sendReply, the familiar recipe repeats: write BC_REPLY to the driver, which eventually reaches the BC_REPLY case of binder_thread_write in the driver layer.

status_t IPCThreadState::sendReply(const Parcel& reply, uint32_t flags)
{
    status_t err;
    status_t statusBuffer;
    err = writeTransactionData(BC_REPLY, flags, -1, 0, reply, &statusBuffer);
    if (err < NO_ERROR) return err;

    return waitForResponse(NULL, NULL);
}

binder_transaction is called again in binder_thread_write, this time with the reply parameter set to true.

static int binder_thread_write(struct binder_proc *proc,
			struct binder_thread *thread,
			binder_uintptr_t binder_buffer, size_t size,
			binder_size_t *consumed)
{
    ......
    // Loop reading the instructions in the buffer
    while (ptr < end && thread->return_error == BR_OK) {
        ......
        switch (cmd) {
        case BC_TRANSACTION:
        case BC_REPLY: {
            struct binder_transaction_data tr;
            if (copy_from_user(&tr, ptr, sizeof(tr)))
                    return -EFAULT;
            ptr += sizeof(tr);
            // Initiate the communication back to the client
            binder_transaction(proc, thread, &tr, cmd == BC_REPLY);
            break;
        }
        }
    }
}

The reply process from the server to the client is similar to the process of the client initiating a binder_transaction, but takes a different branch

  1. A BINDER_WORK_TRANSACTION work item is ultimately added to the client thread's todo queue
  2. A BINDER_WORK_TRANSACTION_COMPLETE is added to the server's own thread todo queue
static void binder_transaction(struct binder_proc *proc,
			       struct binder_thread *thread,
			       struct binder_transaction_data *tr, int reply)
{
    struct binder_transaction *t;
    struct binder_work *tcomplete;
    // Target todo queue
    struct list_head *target_list;
    // Target wait queue
    wait_queue_head_t *target_wait;
    if (reply) {
            // The receiver's (server's) transaction stack
            in_reply_to = thread->transaction_stack;
            if (in_reply_to == NULL) {
                    return_error = BR_FAILED_REPLY;
                    goto err_empty_call_stack;
            }
            binder_set_nice(in_reply_to->saved_priority);
            if (in_reply_to->to_thread != thread) {
                    return_error = BR_FAILED_REPLY;
                    in_reply_to = NULL;
                    goto err_bad_call_stack;
            }
            thread->transaction_stack = in_reply_to->to_parent;
            // The original sender's binder_thread
            target_thread = in_reply_to->from;
            if (target_thread == NULL) {
                    return_error = BR_DEAD_REPLY;
                    goto err_dead_binder;
            }
            // The binder_transaction on top of both stacks should be the same one
            if (target_thread->transaction_stack != in_reply_to) {
                    return_error = BR_FAILED_REPLY;
                    in_reply_to = NULL;
                    target_thread = NULL;
                    goto err_dead_binder;
            }
            target_proc = target_thread->proc;
    } else {
            ......
    }
    if (target_thread) {
        e->to_thread = target_thread->pid;
        // Target the thread that originally initiated the call
        target_list = &target_thread->todo;
        target_wait = &target_thread->wait;
    } else {
        ......
    }
    // Create the binder_transaction
    t = kzalloc(sizeof(*t), GFP_KERNEL);
    // Create the binder_work
    tcomplete = kzalloc(sizeof(*tcomplete), GFP_KERNEL);
    ......
    if (!reply && !(tr->flags & TF_ONE_WAY))
        ......
    else
        // A BC_REPLY needs no further reply of its own
        t->from = NULL;
    // Create the binder_buffer for this communication, i.e. carve a block
    // out of the mapped physical memory
    t->buffer = binder_alloc_buf(target_proc, tr->data_size,
        tr->offsets_size, !reply && (t->flags & TF_ONE_WAY));
    ......
    if (reply) {
        // Pop the original binder_transaction off the client thread's stack
        binder_pop_transaction(target_thread, in_reply_to);
    } else if (!(t->flags & TF_ONE_WAY)) {
        ......
    } else {
        ......
    }
    t->work.type = BINDER_WORK_TRANSACTION;
    // Add the BINDER_WORK_TRANSACTION to the original sender thread's todo queue
    list_add_tail(&t->work.entry, target_list);
    tcomplete->type = BINDER_WORK_TRANSACTION_COMPLETE;
    // Add the BINDER_WORK_TRANSACTION_COMPLETE to the current (server) thread's todo queue
    list_add_tail(&tcomplete->entry, &thread->todo);
    // Wake up the sender's wait queue
    if (target_wait)
            wake_up_interruptible(target_wait);
    return;
}

Next, the server thread's binder_thread_read processes the BINDER_WORK_TRANSACTION_COMPLETE, converts it to cmd BR_TRANSACTION_COMPLETE, and returns to IPCThreadState::waitForResponse, ending the server-side logic.

The client thread's binder_thread_read processes the BINDER_WORK_TRANSACTION, converts it to cmd BR_REPLY, and likewise returns to IPCThreadState::waitForResponse, ending the client-side logic. With that, the whole communication is complete.

Finally, here is a diagram from "Understand the Android Binder communication architecture thoroughly". That author analyzes an app process calling a system service, while this article analyzes communication between two application processes, but the overall flow is basically the same; when reading the diagram, just treat system_server as another app process.

Afterword

In the end, so-called source-code analysis still comes down to reading the code yourself, patiently and in depth; this article can only serve as a guide. One thing not covered here: in the framework layer, servicemanager does not use the IPCThreadState wrapper, though the rest is similar. Interested readers can study the source themselves.