The second step in ServiceManager initialization registers the process with the Binder driver as the context manager.

    if (binder_become_context_manager(bs)) {
        ALOGE("cannot become context manager (%s)\n", strerror(errno));
        return -1;
    }
int binder_become_context_manager(struct binder_state *bs)
{
    return ioctl(bs->fd, BINDER_SET_CONTEXT_MGR, 0);
}

Here again is the familiar ioctl system call. This is the first time the bs object is used after initialization; binder_state is the structure initialized in binder_open, and it now holds the driver's file descriptor and the mmap'd shared address of the Binder buffer.

Let's look directly at the corresponding switch branch in binder_ioctl:

    case BINDER_SET_CONTEXT_MGR:
        ret = binder_ioctl_set_ctx_mgr(filp);
        if (ret)
            goto err;
        break;
static int binder_ioctl_set_ctx_mgr(struct file *filp)
{
    int ret = 0;
    struct binder_proc *proc = filp->private_data;
    ...
    binder_context_mgr_node = binder_new_node(proc, 0, 0);
    if (binder_context_mgr_node == NULL) {
        ret = -ENOMEM;
        goto out;
    }
    binder_context_mgr_node->local_weak_refs++;
    binder_context_mgr_node->local_strong_refs++;
    binder_context_mgr_node->has_strong_ref = 1;
    binder_context_mgr_node->has_weak_ref = 1;
out:
    return ret;
}

In fact, there is only so much we need to focus on. Again, we get the private data from the file: the binder_proc object corresponding to the current process. The process has just started, so its node tree in the Binder driver is still empty. Therefore, a new binder_node structure is created and added to the binder_proc's red-black tree. A binder_node represents a Binder entity inside the driver and collects its key data, such as the owning process, work items, and reference lists.

After the new binder_node object is created, it is assigned to binder_context_mgr_node. This is a global variable that lets the driver find the service_manager's node quickly, which matters because Android references this object everywhere.

Let’s look at the binder_new_node method.

static struct binder_node *binder_new_node(struct binder_proc *proc,
                       binder_uintptr_t ptr,
                       binder_uintptr_t cookie)
{
    struct rb_node **p = &proc->nodes.rb_node;
    struct rb_node *parent = NULL;
    struct binder_node *node;

    while (*p) {
        parent = *p;
        node = rb_entry(parent, struct binder_node, rb_node);

        if (ptr < node->ptr)
            p = &(*p)->rb_left;
        else if (ptr > node->ptr)
            p = &(*p)->rb_right;
        else
            return NULL;
    }

    node = kzalloc(sizeof(*node), GFP_KERNEL);
    if (node == NULL)
        return NULL;
    binder_stats_created(BINDER_STAT_NODE);
    rb_link_node(&node->rb_node, parent, p);
    rb_insert_color(&node->rb_node, &proc->nodes);
    node->debug_id = ++binder_last_id;
    node->proc = proc;
    node->ptr = ptr;
    node->cookie = cookie;
    node->work.type = BINDER_WORK_NODE;
    INIT_LIST_HEAD(&node->work.entry);
    INIT_LIST_HEAD(&node->async_todo);
    binder_debug(BINDER_DEBUG_INTERNAL_REFS,
             "%d:%d node %d u%016llx c%016llx created\n",
             proc->pid, current->pid, node->debug_id,
             (u64)node->ptr, (u64)node->cookie);
    return node;
}

At this point the driver has created a new Binder entity; the insertion logic is easy to follow if you have read my red-black tree article.

The driver first searches the red-black tree using the Binder object's weak-reference address (ptr) as the key. Since it is not found, a new node is allocated with kzalloc and inserted into proc->nodes for management. The native Binder object's address is stored in cookie, and the node's work type is set to BINDER_WORK_NODE.

This completes the creation of the first Binder entity representing the Service Manager in the Binder driver.

Note that this creation does not involve JavaBBinder, BpBinder, IPCThreadState, or the other core Binder classes at the framework layer; service_manager is a very special service. That is why many places refer to it not as a Binder service but as the Binder driver's daemon. At the end of the day, however, it is just a Binder object registered with the Binder driver.

The third step of ServiceManager's initialization: service_manager starts the message wait loop.

binder_loop(bs, svcmgr_handler);

This essentially starts the message-waiting loop at the heart of the Android service architecture.

void binder_loop(struct binder_state *bs, binder_handler func)
{
    int res;
    struct binder_write_read bwr;
    uint32_t readbuf[32];

    bwr.write_size = 0;
    bwr.write_consumed = 0;
    bwr.write_buffer = 0;

    readbuf[0] = BC_ENTER_LOOPER;
    binder_write(bs, readbuf, sizeof(uint32_t));

    for (;;) {
        bwr.read_size = sizeof(readbuf);
        bwr.read_consumed = 0;
        bwr.read_buffer = (uintptr_t) readbuf;

        res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr);
        if (res < 0) {
            ...
            break;
        }

        res = binder_parse(bs, 0, (uintptr_t) readbuf, bwr.read_consumed, func);
        if (res == 0) {
            ...
            break;
        }
        if (res < 0) {
            ...
            break;
        }
    }
}

Based on what we've learned, we can roughly say that the looper does the following things.

1. service_manager writes BC_ENTER_LOOPER into binder_write_read, telling the Binder driver that it is entering the service loop.

2. service_manager blocks, waiting for the Binder driver to write data into binder_write_read.

3. The data returned by the Binder driver is parsed.

1. Binder Looper sends the BC_ENTER_LOOPER command

There is a key structure, binder_write_read.

struct binder_write_read {
    binder_size_t write_size;       /* size of the data to write */
    binder_size_t write_consumed;   /* how much of the write buffer has been consumed */
    binder_uintptr_t write_buffer;  /* buffer holding the data to write */
    binder_size_t read_size;        /* size of the data to read */
    binder_size_t read_consumed;    /* how much of the read buffer has been consumed */
    binder_uintptr_t read_buffer;   /* buffer to receive the data read */
};

The binder_write_read structure splits into two halves: the upper half describes the data to be written, and the lower half describes the data to be read. It is the carrier typically used to hold framework-layer data and pass it down to the Binder driver.
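As a minimal illustration, a write-only pass fills only the top half and zeroes the read half. The struct below is a simplified stand-in for the kernel definition using plain stdint types, not the real uapi layout:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Simplified stand-in for the kernel's struct binder_write_read. */
struct bwr {
    uint64_t write_size;
    uint64_t write_consumed;
    uint64_t write_buffer;
    uint64_t read_size;
    uint64_t read_consumed;
    uint64_t read_buffer;
};

/* Prepare a write-only pass: the driver will run binder_thread_write
 * (write_size > 0) and skip binder_thread_read (read_size == 0). */
static void setup_write_only(struct bwr *b, const void *data, size_t len)
{
    memset(b, 0, sizeof(*b));
    b->write_size = len;
    b->write_consumed = 0;
    b->write_buffer = (uint64_t)(uintptr_t)data;
}
```

This mirrors exactly what binder_write does below: point write_buffer at the payload, set write_size, and leave every read field at zero so the driver skips the read path.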

    bwr.write_size = 0;
    bwr.write_consumed = 0;
    bwr.write_buffer = 0;

    readbuf[0] = BC_ENTER_LOOPER;
    binder_write(bs, readbuf, sizeof(uint32_t));

binder_loop first zeroes the write half of bwr. BC_ENTER_LOOPER is then placed at the start of readbuf, which despite its name serves as the write payload here, and it is sent to the Binder driver via binder_write.

int binder_write(struct binder_state *bs, void *data, size_t len)
{
    struct binder_write_read bwr;
    int res;

    bwr.write_size = len;
    bwr.write_consumed = 0;
    bwr.write_buffer = (uintptr_t) data;
    bwr.read_size = 0;
    bwr.read_consumed = 0;
    bwr.read_buffer = 0;
    res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr);
    if (res < 0) {
        fprintf(stderr,"binder_write: ioctl failed (%s)\n",
                strerror(errno));
    }
    return res;
}

Here we see that binder_write sets all the read fields to 0, points write_buffer at the data, sets write_size to the data length, and sets write_consumed to 0. This tells the Binder driver where the data is and from which offset to start consuming it.

static long binder_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
{
    int ret;
    struct binder_proc *proc = filp->private_data;
    struct binder_thread *thread;
    unsigned int size = _IOC_SIZE(cmd);
    void __user *ubuf = (void __user *)arg;
    ...
    switch (cmd) {
    case BINDER_WRITE_READ:
        ret = binder_ioctl_write_read(filp, cmd, arg, thread);
        if (ret)
            goto err;
        break;

Given the command passed down above, execution takes the binder_ioctl_write_read branch. This is one of the core paths in the driver: the method through which data is read and written between processes.

static int binder_ioctl_write_read(struct file *filp,
                unsigned int cmd, unsigned long arg,
                struct binder_thread *thread)
{
    int ret = 0;
    struct binder_proc *proc = filp->private_data;
    unsigned int size = _IOC_SIZE(cmd);
    void __user *ubuf = (void __user *)arg;
    struct binder_write_read bwr;

    if (size != sizeof(struct binder_write_read)) {
        ret = -EINVAL;
        goto out;
    }
    if (copy_from_user(&bwr, ubuf, sizeof(bwr))) {
        ret = -EFAULT;
        goto out;
    }
    ...
    if (bwr.write_size > 0) {
        ret = binder_thread_write(proc, thread,
                      bwr.write_buffer, bwr.write_size,
                      &bwr.write_consumed);
        trace_binder_write_done(ret);
        if (ret < 0) {
            bwr.read_consumed = 0;
            if (copy_to_user(ubuf, &bwr, sizeof(bwr)))
                ret = -EFAULT;
            goto out;
        }
    }
    if (bwr.read_size > 0) {
        ret = binder_thread_read(proc, thread, bwr.read_buffer,
                     bwr.read_size, &bwr.read_consumed,
                     filp->f_flags & O_NONBLOCK);
        ...
        if (!list_empty(&proc->todo))
            wake_up_interruptible(&proc->wait);
        if (ret < 0) {
            if (copy_to_user(ubuf, &bwr, sizeof(bwr)))
                ret = -EFAULT;
            goto out;
        }
    }
    ...
    if (copy_to_user(ubuf, &bwr, sizeof(bwr))) {
        ret = -EFAULT;
        goto out;
    }
out:
    return ret;
}

The driver parses data arriving via ioctl in three steps.

1. Copy the user-space data into a kernel binder_write_read structure with copy_from_user.

2. If the value of write_size in binder_write_read is greater than 0, data is being written. Run binder_thread_write.

3. If the value of read_size in binder_write_read is greater than 0, data needs to be read. Run binder_thread_read.

When finished, the kernel-side binder_write_read is copied back over the user-side binder_write_read. Because the ubuf passed in is the user-space address of the structure, the data can be copied straight from kernel space to user space via copy_to_user.
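The three-step dispatch can be sketched in user-space C. This is a simplified model, not the kernel code: the handlers here just record which half ran:

```c
#include <assert.h>
#include <stdint.h>

/* Simplified stand-in for struct binder_write_read. */
struct bwr {
    uint64_t write_size, write_consumed, write_buffer;
    uint64_t read_size, read_consumed, read_buffer;
};

/* Mimic the binder_ioctl_write_read dispatch: write half first,
 * then read half, each only when its size is non-zero. */
static int dispatch(struct bwr *b, int *did_write, int *did_read)
{
    *did_write = *did_read = 0;
    if (b->write_size > 0) {
        *did_write = 1;            /* would call binder_thread_write(...) */
        b->write_consumed = b->write_size;
    }
    if (b->read_size > 0) {
        *did_read = 1;             /* would call binder_thread_read(...) */
    }
    return 0;                      /* then copy_to_user(ubuf, b, ...) */
}
```

Note how a single ioctl can do both a write and a read in one pass, which is exactly how a client sends a request and waits for the reply in one system call.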

For each BINDER_WRITE_READ call, the driver processes the written data first and then the read data. To see why, take a look at the following two methods.

Binder handles write data that is passed down from the framework

static int binder_thread_write(struct binder_proc *proc,
            struct binder_thread *thread,
            binder_uintptr_t binder_buffer, size_t size,
            binder_size_t *consumed)
{
    uint32_t cmd;
    void __user *buffer = (void __user *)(uintptr_t)binder_buffer;
    void __user *ptr = buffer + *consumed;
    void __user *end = buffer + size;

    while (ptr < end && thread->return_error == BR_OK) {
        if (get_user(cmd, (uint32_t __user *)ptr))
            return -EFAULT;
        ptr += sizeof(uint32_t);
        trace_binder_command(cmd);
        if (_IOC_NR(cmd) < ARRAY_SIZE(binder_stats.bc)) {
            binder_stats.bc[_IOC_NR(cmd)]++;
            proc->stats.bc[_IOC_NR(cmd)]++;
            thread->stats.bc[_IOC_NR(cmd)]++;
        }
        switch (cmd) {
        ...
        case BC_ENTER_LOOPER:
            binder_debug(BINDER_DEBUG_THREADS,
                     "%d:%d BC_ENTER_LOOPER\n",
                     proc->pid, thread->pid);
            if (thread->looper & BINDER_LOOPER_STATE_REGISTERED) {
                thread->looper |= BINDER_LOOPER_STATE_INVALID;
                binder_user_error("%d:%d ERROR: BC_ENTER_LOOPER called after BC_REGISTER_LOOPER\n",
                    proc->pid, thread->pid);
            }
            thread->looper |= BINDER_LOOPER_STATE_ENTERED;
            break;
        case BC_EXIT_LOOPER:
            binder_debug(BINDER_DEBUG_THREADS,
                     "%d:%d BC_EXIT_LOOPER\n",
                     proc->pid, thread->pid);
            thread->looper |= BINDER_LOOPER_STATE_EXITED;
            break;
        ...
        default:
            pr_err("%d:%d unknown command %d\n",
                   proc->pid, thread->pid, cmd);
            return -EINVAL;
        }
        *consumed = ptr - buffer;
    }
    return 0;
}

Because the driver cannot discover the boundary of each data structure through sizeof alone, it uses a consumed/size cursor pair, similar to a Parcel, to control how the buffer is read and written.

   uint32_t cmd;
   void __user *buffer = (void __user *)(uintptr_t)binder_buffer;
   void __user *ptr = buffer + *consumed;
   void __user *end = buffer + size;

1. cmd is read from the write_buffer passed down from the framework; each 32-bit command word determines how the driver parses the data that follows it.

2. buffer corresponds to the user-space write_buffer, which holds the data to be processed.

3. ptr marks how much of the data the driver has processed so far.

4. end marks the boundary of the buffer.
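The same cursor arithmetic can be sketched in plain C. The command values here are hypothetical stand-ins; the real BC_* constants come from the kernel's ioctl macros:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical command codes standing in for BC_* values. */
enum { CMD_ENTER_LOOPER = 1, CMD_EXIT_LOOPER = 2 };

/* Walk a buffer of 32-bit command words the way binder_thread_write
 * walks write_buffer: read a cmd, advance ptr, update *consumed. */
static int parse_commands(const uint8_t *buffer, size_t size,
                          size_t *consumed, int *entered)
{
    const uint8_t *ptr = buffer + *consumed;
    const uint8_t *end = buffer + size;

    while (ptr < end) {
        uint32_t cmd;
        memcpy(&cmd, ptr, sizeof(cmd));   /* stands in for get_user() */
        ptr += sizeof(uint32_t);
        switch (cmd) {
        case CMD_ENTER_LOOPER:
            *entered = 1;
            break;
        case CMD_EXIT_LOOPER:
            *entered = 0;
            break;
        default:
            return -1;                    /* unknown command */
        }
        *consumed = (size_t)(ptr - buffer);
    }
    return 0;
}
```

Updating *consumed after every command means that even if parsing stops partway, the caller knows exactly where to resume, which is why the real driver keeps write_consumed in the shared binder_write_read structure.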

Data parsing loop

   while (ptr < end && thread->return_error == BR_OK) {
       if (get_user(cmd, (uint32_t __user *)ptr))
           return -EFAULT;
       ptr += sizeof(uint32_t);

The loop runs while ptr has not yet reached end and the thread's return_error is still BR_OK.

get_user copies the first 32-bit word of each iteration from user space; this word must be a valid cmd matching one of the branch commands below. The cursor then advances by one int, past which sits the payload to be processed.

The command we pass down from user space is BC_ENTER_LOOPER.

  case BC_ENTER_LOOPER:
           binder_debug(BINDER_DEBUG_THREADS,
                    "%d:%d BC_ENTER_LOOPER\n",
                    proc->pid, thread->pid);
           if (thread->looper & BINDER_LOOPER_STATE_REGISTERED) {
               thread->looper |= BINDER_LOOPER_STATE_INVALID;
               binder_user_error("%d:%d ERROR: BC_ENTER_LOOPER called after BC_REGISTER_LOOPER\n",
                   proc->pid, thread->pid);
           }
           thread->looper |= BINDER_LOOPER_STATE_ENTERED;
           break;

In this case, the command simply needs to change the state of the binder_thread corresponding to the current binder_proc.
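That state handling is plain bit-flag bookkeeping, which can be modeled directly. The flag values below are illustrative, not the kernel's actual BINDER_LOOPER_STATE_* constants:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative looper-state flags modeled on BINDER_LOOPER_STATE_*. */
enum {
    LOOPER_REGISTERED = 1u << 0,
    LOOPER_ENTERED    = 1u << 1,
    LOOPER_INVALID    = 1u << 3,
};

/* Mirror the BC_ENTER_LOOPER branch: entering after the thread was
 * already registered by the driver is flagged as invalid; either way
 * the ENTERED bit is set. */
static uint32_t enter_looper(uint32_t looper)
{
    if (looper & LOOPER_REGISTERED)
        looper |= LOOPER_INVALID;
    looper |= LOOPER_ENTERED;
    return looper;
}
```

Packing the thread's lifecycle into one bitmask lets the driver test several conditions (registered, entered, waiting, exited) with a single AND, which it does throughout binder_thread_read.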

This completes service_manager's write from user space. Remember the binder_write_read handling above? Because read_size was set to 0, binder_thread_read is skipped, and binder_write_read is simply copied back to user space.

service_manager now officially enters the Binder looper loop, waiting for messages.

  for (;;) {
        bwr.read_size = sizeof(readbuf);
        bwr.read_consumed = 0;
        bwr.read_buffer = (uintptr_t) readbuf;

        res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr);

        if (res < 0) {
...
            break;
        }

As you can see, this is an infinite loop waiting for the Binder driver to return information. But would Google's developers really let the loop spin forever? Those of you who read my zygote chapter know that a busy infinite loop constantly consumes CPU, so the loop must avoid that overhead by blocking inside each iteration.

Look at the first part of the loop. It sets the read length to the size of readbuf, an int array of length 32, then calls into the Binder driver via ioctl.

If write_size is 0 and read_size is not 0, the binder_ioctl_write_read code will be used to read data:

    if (bwr.read_size > 0) {
        ret = binder_thread_read(proc, thread, bwr.read_buffer,
                     bwr.read_size, &bwr.read_consumed,
                     filp->f_flags & O_NONBLOCK);
        ...
        if (!list_empty(&proc->todo))
            wake_up_interruptible(&proc->wait);
        if (ret < 0) {
            if (copy_to_user(ubuf, &bwr, sizeof(bwr)))
                ret = -EFAULT;
            goto out;
        }
    }
    ...
    if (copy_to_user(ubuf, &bwr, sizeof(bwr))) {
        ret = -EFAULT;
        goto out;
    }
out:
    return ret;

Let’s look at the internal logic of binder_thread_read.

static int binder_thread_read(struct binder_proc *proc,
                  struct binder_thread *thread,
                  binder_uintptr_t binder_buffer, size_t size,
                  binder_size_t *consumed, int non_block)
{
    void __user *buffer = (void __user *)(uintptr_t)binder_buffer;
    void __user *ptr = buffer + *consumed;
    void __user *end = buffer + size;

    int ret = 0;
    int wait_for_proc_work;

    if (*consumed == 0) {
        if (put_user(BR_NOOP, (uint32_t __user *)ptr))
            return -EFAULT;
        ptr += sizeof(uint32_t);
    }

retry:
    wait_for_proc_work = thread->transaction_stack == NULL &&
                list_empty(&thread->todo);

    if (thread->return_error != BR_OK && ptr < end) {
        if (thread->return_error2 != BR_OK) {
            if (put_user(thread->return_error2, (uint32_t __user *)ptr))
                return -EFAULT;
            ptr += sizeof(uint32_t);
            binder_stat_br(proc, thread, thread->return_error2);
            if (ptr == end)
                goto done;
            thread->return_error2 = BR_OK;
        }
        if (put_user(thread->return_error, (uint32_t __user *)ptr))
            return -EFAULT;
        ptr += sizeof(uint32_t);
        binder_stat_br(proc, thread, thread->return_error);
        thread->return_error = BR_OK;
        goto done;
    }

    thread->looper |= BINDER_LOOPER_STATE_WAITING;
    if (wait_for_proc_work)
        proc->ready_threads++;

    binder_unlock(__func__);

    trace_binder_wait_for_work(wait_for_proc_work,
                   !!thread->transaction_stack,
                   !list_empty(&thread->todo));

    if (wait_for_proc_work) {
        if (!(thread->looper & (BINDER_LOOPER_STATE_REGISTERED |
                    BINDER_LOOPER_STATE_ENTERED))) {
            binder_user_error("%d:%d ERROR: Thread waiting for process work before calling BC_REGISTER_LOOPER or BC_ENTER_LOOPER (state %x)\n",
                proc->pid, thread->pid, thread->looper);
            wait_event_interruptible(binder_user_error_wait,
                         binder_stop_on_user_error < 2);
        }
        binder_set_nice(proc->default_priority);
        if (non_block) {
            if (!binder_has_proc_work(proc, thread))
                ret = -EAGAIN;
        } else
            ret = wait_event_freezable_exclusive(proc->wait,
                    binder_has_proc_work(proc, thread));
    } else {
        if (non_block) {
            if (!binder_has_thread_work(thread))
                ret = -EAGAIN;
        } else
            ret = wait_event_freezable(thread->wait,
                    binder_has_thread_work(thread));
    }

    binder_lock(__func__);

    if (wait_for_proc_work)
        proc->ready_threads--;
    thread->looper &= ~BINDER_LOOPER_STATE_WAITING;

    if (ret)
        return ret;

    while (1) {
        ...
    }

done:
    *consumed = ptr - buffer;
    ...
    return 0;
}

The principle is similar to binder_thread_write. binder_thread_read does several things.

1. If nothing has been consumed yet (*consumed == 0), the driver writes BR_NOOP as the first word of the data returned to user space.

2. wait_for_proc_work checks whether the thread should wait for process-level work: true when the binder_thread's transaction stack is empty and its todo list contains no pending items.

3. Set the binder_thread->looper state to BINDER_LOOPER_STATE_WAITING

4. If waiting is required, the driver checks whether the binder was opened in blocking or non-blocking mode. If blocking, the thread is added to the wait queue and the process goes to sleep; remember what I wrote earlier about wait queues — through process scheduling, the sleeping process gives up its CPU time. If non-blocking, the driver checks whether the binder_thread has pending work and returns immediately if it does not.

5. When the thread is woken from the wait queue, the BINDER_LOOPER_STATE_WAITING bit of thread->looper is cleared.

6. Enter the while loop to parse data.

In this scenario there is no pending work on any queue, so service_manager blocks in wait_event_freezable.
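The wait_event pattern — sleep until a predicate becomes true rather than spinning — can be approximated in user space with a condition variable. This is a rough analogue only; the kernel's freezable wait queues involve scheduler and freezer machinery that pthreads do not model:

```c
#include <assert.h>
#include <pthread.h>

/* A tiny user-space analogue of wait_event/wake_up: the consumer
 * sleeps until has_work is set, instead of burning CPU in a loop. */
struct work_queue {
    pthread_mutex_t lock;
    pthread_cond_t cond;
    int has_work;
};

static void wq_wait(struct work_queue *wq)
{
    pthread_mutex_lock(&wq->lock);
    while (!wq->has_work)                 /* predicate, re-checked on wakeup */
        pthread_cond_wait(&wq->cond, &wq->lock);
    wq->has_work = 0;                     /* consume the work item */
    pthread_mutex_unlock(&wq->lock);
}

static void wq_wake(struct work_queue *wq)
{
    pthread_mutex_lock(&wq->lock);
    wq->has_work = 1;                     /* like queuing onto proc->todo */
    pthread_cond_signal(&wq->cond);       /* like wake_up_interruptible */
    pthread_mutex_unlock(&wq->lock);
}
```

The while loop around the condition wait matters: just as the driver re-evaluates binder_has_proc_work after every wakeup, the waiter must re-check its predicate because wakeups can be spurious.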

service_manager's binder_parse retrieves the messages returned by the Binder driver.

int binder_parse(struct binder_state *bs, struct binder_io *bio,
                 uintptr_t ptr, size_t size, binder_handler func)
{
    int r = 1;
    uintptr_t end = ptr + (uintptr_t) size;

    while (ptr < end) {
        uint32_t cmd = *(uint32_t *) ptr;
        ptr += sizeof(uint32_t);
#if TRACE
        fprintf(stderr,"%s:\n", cmd_name(cmd));
#endif
        switch(cmd) {
        case BR_NOOP:
            break;
        ...
        default:
            ALOGE("parse: OOPS %d\n", cmd);
            return -1;
        }
    }
    return r;
}

In this scenario, when a thread wakes service_manager, ptr points at the readbuf buffer. Whatever the payload, the first returned word is always BR_NOOP, a starting marker telling service_manager to begin reading. The parser then walks the pointer forward, reading and handling each message in turn.

As a result, we can see how the Binder driver encapsulates data during communication, much like TCP packets.

In particular, when reading communication messages, the packet takes the form of a stream of BR_* command words, each followed by its payload.
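A toy version of this read-side parsing looks like the following. The RET_* values are hypothetical stand-ins; the real BR_* constants come from the kernel headers:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical return-command values standing in for BR_*. */
enum { RET_NOOP = 0, RET_TRANSACTION = 1 };

/* Walk a read buffer the way binder_parse does: each 32-bit word is
 * a command; RET_NOOP is skipped, RET_TRANSACTION is counted here in
 * place of dispatching to a handler. */
static int parse_reply(const uint32_t *buf, size_t nwords, int *transactions)
{
    size_t i;
    *transactions = 0;
    for (i = 0; i < nwords; i++) {
        switch (buf[i]) {
        case RET_NOOP:
            break;                    /* padding marker, nothing to do */
        case RET_TRANSACTION:
            (*transactions)++;        /* would invoke the binder_handler */
            break;
        default:
            return -1;                /* unknown command */
        }
    }
    return 0;
}
```

Leading every reply with a no-op marker gives the parser a known first word, and unknown commands abort the whole parse, which matches the defensive default branch in the real binder_parse.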

Conclusion

From the figure above, we can roughly conclude that Binder setup during system initialization divides into the following steps:

1. binder_open opens the Binder driver file, checks the version number, and registers the process and its related information with the kernel.

2. After the Binder driver has been opened successfully, mmap maps the current process's buffer into the kernel.

3. service_manager registers itself with the Binder driver as a Binder entity — the first Binder service.

4. Enter binder_loop. The Binder driver is first notified via ioctl that service_manager has entered loop mode; the read path is then invoked and blocks. When service_manager is woken up, it begins parsing the data returned by the driver.

So far, I have described the initialization of the highlighted part in the Android service system in the following figure.

Note that no Binder services have been added at this point. But the basic DNS (service_manager) and route dispatcher (the Binder driver) are already in place, so next let's talk about client and server initialization.