Before diving into Binder, think of it as a channel for interprocess communication. There are several types of interprocess communication that we use often.

In Linux, we have the following types of interprocess communication:

1. Pipe

2. FIFO (named pipe)

3. Signal

4. Message queue

5. Socket

6. Shared memory

User space (user mode) and kernel space (kernel mode)

Some operating systems allow all programs to interact with the hardware directly. However, Unix-like operating systems hide the low-level details of the computer's physical organization from user applications. When a program wants to use a hardware resource, it must make a request to the operating system; the kernel evaluates the request and, if it grants it, interacts with the hardware on the application's behalf. To implement this mechanism, modern operating systems rely on special hardware features that prohibit user programs from dealing directly with the underlying layers or accessing arbitrary physical addresses. The hardware provides at least two execution modes for the CPU: a non-privileged mode for user programs and a privileged mode for the kernel, commonly called user mode and kernel mode.

This definition comes from Understanding the Linux Kernel. As a simple example, when we need to perform a file operation, we use a method like open: the user program switches from user mode to kernel mode, and when open returns, it switches back to user mode.

Why are Linux systems designed this way? The biggest reason is to keep the kernel isolated from user programs: if a user program misbehaves, the kernel is not affected.

Let's think about how two separate processes can communicate with each other. A very common idea is to have one process store the information that needs to be shared in a file, and have the other process read the data from that file.

Pipe

This idea is widely used in Linux systems. For example, pipe actually creates two file descriptors (backed by a kernel buffer), one for reading and one for writing. A pipe is a half-duplex channel, which means data flows in only one direction at a time. This reduces the possibility of transmission errors caused by multiple processes competing for the file contents.

To write, pipe must first copy the data into kernel space by calling copy_from_user. alloc_file and kmalloc (through the slab allocator) create the two file descriptors in kernel space.

The schematic diagram is as follows:

Remember that fd[0] is the read end and fd[1] is the write end.
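The mechanism above can be sketched in a few lines of user-space C. The helper name pipe_roundtrip is illustrative, not part of any API; the data travels through the kernel buffer behind the two descriptors.

```c
#include <assert.h>
#include <string.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Illustrative helper: the child writes into fd[1], the parent reads from
 * fd[0]. The bytes pass through a kernel buffer, one direction at a time. */
int pipe_roundtrip(const char *msg, char *out, size_t outlen) {
    int fd[2];
    if (pipe(fd) == -1) return -1;   /* fd[0] = read end, fd[1] = write end */
    pid_t pid = fork();
    if (pid == 0) {                  /* child: keep only the write end */
        close(fd[0]);
        write(fd[1], msg, strlen(msg) + 1);
        close(fd[1]);
        _exit(0);
    }
    close(fd[1]);                    /* parent: keep only the read end */
    ssize_t n = read(fd[0], out, outlen);
    close(fd[0]);
    waitpid(pid, NULL, 0);
    return (int)n;
}
```

Note that the child inherits the descriptors through fork; an unrelated process has no way to find this anonymous pipe, which is exactly the limitation the named pipe below removes.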

FIFO named pipe

A named pipe builds on the original pipe and relies on the Linux file system. As the name suggests, the pipe is first in, first out, so data is delivered sequentially. The bigger point of a named pipe, however, is the name itself. The original pipe was anonymous, so only related processes (a parent and the children it forks) could communicate through it. With a named pipe, instead of having to hand the descriptor to the second process, that process can find the pipe file by name and establish the channel.
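A minimal sketch of the by-name rendezvous described above. The helper name and the FIFO path passed to it are illustrative choices, not anything mandated by the API; each open call blocks until the other side also opens the pipe.

```c
#include <assert.h>
#include <fcntl.h>
#include <string.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Illustrative sketch: the two sides meet purely through the pipe's
 * file-system name, so no descriptor inheritance is required. */
int fifo_roundtrip(const char *path, const char *msg, char *out, size_t outlen) {
    unlink(path);                          /* clean up any stale pipe */
    if (mkfifo(path, 0600) == -1) return -1;
    pid_t pid = fork();
    if (pid == 0) {
        int wfd = open(path, O_WRONLY);    /* blocks until a reader opens */
        write(wfd, msg, strlen(msg) + 1);
        close(wfd);
        _exit(0);
    }
    int rfd = open(path, O_RDONLY);        /* found by name alone */
    ssize_t n = read(rfd, out, outlen);
    close(rfd);
    waitpid(pid, NULL, 0);
    unlink(path);
    return (int)n;
}
```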

Signal

Signals are something we have heard about for a long time; for example, we often describe interrupts as a kind of signal. The Linux kernel has a set of built-in notification events; each time one of these events is raised, the kernel tells the receiving process to handle it.

Kernel implementation:

1. To ensure that each signal is delivered to the correct process, the kernel needs to remember which signals are blocked by the current process.

2. When switching from kernel mode back to user mode, the kernel checks whether the process has pending signals. This check happens on each clock tick, typically every few milliseconds.

3. The kernel also detects which signals are ignored. A signal is ignored if all of the following conditions are met:

  • The process is not being traced: the PT_PTRACED flag in task_struct (the structure that describes a process) is 0.

  • The process is not blocking the signal.

  • The process explicitly ignores the signal.

4. Signal processing

At this point we need to note that the signal is not handled in kernel mode; it is usually delivered to user space (via copy_to_user) and handled there.

The message queue

This sounds a bit like the message queue in Android, and they are indeed similar in design. To use a message queue, generate a key with ftok, create the message queue (a file) with msgget using that key, and then send or receive messages with msgsnd or msgrcv.

At this point you can actually see on the kernel side that a file has been created, and messages are sent to the queue. Since ftok generates the key, the reader and the writer can both find the corresponding message queue in kernel space and complete delivery through it. To move the data back and forth between user mode and kernel mode, copy_from_user and copy_to_user are used. The underlying data structure is a linked list.
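A sketch of the msgget/msgsnd/msgrcv flow just described. Two real processes would derive a shared key with ftok() on an agreed path; IPC_PRIVATE is used here only to keep the example self-contained in one process, and the helper name is illustrative.

```c
#include <assert.h>
#include <string.h>
#include <sys/ipc.h>
#include <sys/msg.h>
#include <sys/types.h>

struct demo_msg { long mtype; char mtext[64]; };

int msgq_roundtrip(const char *msg, char *out, size_t outlen) {
    int qid = msgget(IPC_PRIVATE, IPC_CREAT | 0600);
    if (qid == -1) return -1;
    struct demo_msg m;
    memset(&m, 0, sizeof(m));
    m.mtype = 1;
    strncpy(m.mtext, msg, sizeof(m.mtext) - 1);
    if (msgsnd(qid, &m, sizeof(m.mtext), 0) == -1)   /* copy_from_user side */
        return -1;
    struct demo_msg r;
    ssize_t n = msgrcv(qid, &r, sizeof(r.mtext), 1, 0); /* copy_to_user side */
    msgctl(qid, IPC_RMID, NULL);                     /* remove the queue */
    if (n < 0) return -1;
    strncpy(out, r.mtext, outlen - 1);
    out[outlen - 1] = '\0';
    return 0;
}
```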

Socket

This is something we are all very familiar with; we cannot do network programming without it. In principle it is a special file, and we constantly listen on the socket's state in order to respond. Since it is a file operation, it must go through the user-mode-to-kernel-mode and kernel-mode-to-user-mode transitions. This native listening approach is used, for example, during Zygote's incubation (fork) process.
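The socket-as-special-file idea can be sketched with socketpair(), which returns a connected, full-duplex pair of socket descriptors in one call; each write crosses into kernel mode and each read copies back out, just like any other file descriptor. The helper name is illustrative.

```c
#include <assert.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

int socket_roundtrip(const char *msg, char *out, size_t outlen) {
    int sv[2];
    if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) == -1) return -1;
    write(sv[0], msg, strlen(msg) + 1);   /* user mode -> kernel mode copy */
    ssize_t n = read(sv[1], out, outlen); /* kernel mode -> user mode copy */
    close(sv[0]);
    close(sv[1]);
    return (int)n;
}
```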

Shared memory

The shared memory design is the closest to Binder's. At its core it also uses mmap memory mapping. It is also similar in design to message queues: ftok generates a key, shmget requests a memory segment through that key, and you can then operate on that address range directly.
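A sketch of the shmget/shmat flow: parent and child attach the same kernel-backed segment, so the child's write is visible to the parent with no file relay in between. As with the message queue sketch, IPC_PRIVATE replaces an ftok() key to stay self-contained, and the helper name is illustrative.

```c
#include <assert.h>
#include <string.h>
#include <sys/ipc.h>
#include <sys/shm.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int shm_roundtrip(const char *msg, char *out, size_t outlen) {
    int shmid = shmget(IPC_PRIVATE, 4096, IPC_CREAT | 0600);
    if (shmid == -1) return -1;
    char *addr = (char *)shmat(shmid, NULL, 0);   /* map segment into our space */
    if (addr == (char *)-1) return -1;
    pid_t pid = fork();
    if (pid == 0) {
        strcpy(addr, msg);   /* child writes straight into the shared mapping */
        _exit(0);
    }
    waitpid(pid, NULL, 0);
    strncpy(out, addr, outlen - 1);   /* parent reads it back, no extra copy */
    out[outlen - 1] = '\0';
    shmdt(addr);
    shmctl(shmid, IPC_RMID, NULL);    /* remove the segment */
    return 0;
}
```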

Semaphore

In fact, the semaphore's main function is to synchronize processes: if a process tries to acquire a resource that is already in use, it goes to sleep.
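A minimal sketch of that blocking behaviour, using an unnamed POSIX semaphore within one process: once the single permit is taken, a second non-blocking acquire fails (a blocking sem_wait would put the caller to sleep instead). The helper name is illustrative.

```c
#include <assert.h>
#include <errno.h>
#include <semaphore.h>

int sem_demo(void) {
    sem_t s;
    if (sem_init(&s, 0, 1) == -1) return -1;   /* one resource available */
    sem_wait(&s);                              /* take the resource */
    int busy = (sem_trywait(&s) == -1 && errno == EAGAIN);
    sem_post(&s);                              /* release it */
    sem_destroy(&s);
    return busy;                               /* 1 means the resource was held */
}
```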

Overview of Binder

When we introduced the basic IPC (inter-process communication) mechanisms in Linux, we found an interesting pattern: most of them relay data through a file or file-like kernel object. This inevitably causes switches between user mode and kernel mode, and the data must be copied twice (once into the kernel and once back out). So is there a way to optimize this? This is where Binder was born.

So if we were to design Binder, how would we do it? First, to make the whole thing transparent and reliable, we can borrow ideas from TCP/IP to guarantee reliable delivery. Second, to reduce the back-and-forth copying between user space and the kernel, we can mimic shared memory.

We can focus on four roles in Binder:

Binder driver

ServiceManager

Binder Client

Binder Service

From here we understand that in kernel space there is a Binder driver, and this driver acts as the relay station for the whole IPC communication. It does not care what the message is; it only needs to find the recipient and hand the message over.

At this point, the Service Manager acts as the daemon of the Binder driver, similar to the role DNS plays in TCP communications. We register the relevant Binder services in it, and later find the remote side of a Binder through the Service Manager. In fact, the Service Manager is the first Android service registered with Binder.

Binder Client: equivalent to the client in a C/S architecture.

Binder Service: the Binder server, equivalent to the server in a C/S architecture.

The Binder driver itself acts as a routing table, a distributor. Every time a client wants to reach a service, the request goes through the Binder driver. The driver does not care about the content; it just distributes it to the service.

Once again, the concepts of server and client are only for ease of understanding. In fact, from the Binder driver's point of view there is no such thing as service and client, only remote (proxy) and local. In an IPC exchange, the requesting side talks through a proxy (the remote reference), and the local side, which holds the real object, responds to the request.

Once you have a rough idea of these roles, you can see what the figure above means. Simply put, when the Android system boots, it starts a Service Manager process, which opens the Binder driver in the kernel. At this point the "DNS" and the "routing" are ready; it only remains for servers to register and for clients to connect and interact.

Here is a schematic based on Binder’s design.

Why do I say Binder is very similar to TCP? First of all, we never notice Binder in everyday development, let alone the fact that our Android code is communicating across processes all the time. This shows how good Binder's design is: it is almost transparent to the upper layers.

So let’s skip the Service Manager and binder drivers and look at the relationship between services and clients.

Binder actually offers two usage patterns for IPC: one is AIDL, and the other is registering with the Service Manager.

Let's look directly at the Service Manager hosting pattern, and walk through the source code a bit.

Let's go through Binder's startup process in chronological order. First, the driver has to be loaded into the kernel; thanks to Linux's modular driver design, you can write a driver file and have it loaded once the kernel is up. Let's look at the source code of binder.c in the kernel.

Binder driver initialization

Here is a brief introduction to driver programming. Driver programming is actually similar to our Android and iOS development: it is also programming against defined interfaces.

For simplicity's sake, we can think of a driver as a special file that is loaded after the kernel. And if it is a file, it must support open, close, write, and so on.

So the Binder driver has this structure:

static const struct file_operations binder_fops = {
   .owner = THIS_MODULE,
   .poll = binder_poll,
   .unlocked_ioctl = binder_ioctl,
   .compat_ioctl = binder_ioctl,
   .mmap = binder_mmap,
   .open = binder_open,
   .flush = binder_flush,
   .release = binder_release,
};

The file_operations structure holds method pointers: poll, ioctl, mmap, open, flush, release, and so on. As a simple example, when user space opens the driver file through its descriptor (calls the open method), kernel space dispatches to binder_open.
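The dispatch pattern above can be sketched as a user-space analogue: the "driver" fills a struct of function pointers, and generic code calls through it without knowing which implementation it reaches, just as the VFS invokes f->f_op->open. All names here are illustrative, not kernel APIs.

```c
#include <assert.h>
#include <stddef.h>

/* Analogue of struct file_operations: a table of function pointers. */
struct demo_fops {
    int (*open)(void);
    int (*release)(void);
};

static int demo_open(void)    { return 42; }
static int demo_release(void) { return 0; }

/* Analogue of binder_fops: the driver picks its implementations. */
static const struct demo_fops binder_like_fops = {
    .open = demo_open,
    .release = demo_release,
};

/* Generic caller, like the VFS invoking f->f_op->open. */
int dispatch_open(const struct demo_fops *fops) {
    if (fops && fops->open)
        return fops->open();
    return -1;
}
```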

Of course, there is also an initialization function registered for module loading.

device_initcall(binder_init);

When the driver module is loaded, this function is invoked, and binder_init, passed to it here, is our driver initialization method.

Let’s see what’s in there.

static int __init binder_init(void)
{
    int ret;

    binder_deferred_workqueue = create_singlethread_workqueue("binder");
    if (!binder_deferred_workqueue)
        return -ENOMEM;

    binder_debugfs_dir_entry_root = debugfs_create_dir("binder", NULL);
    if (binder_debugfs_dir_entry_root)
        binder_debugfs_dir_entry_proc = debugfs_create_dir("proc",
                        binder_debugfs_dir_entry_root);
    ret = misc_register(&binder_miscdev);
    if (binder_debugfs_dir_entry_root) {
        debugfs_create_file("state", S_IRUGO,
                    binder_debugfs_dir_entry_root,
                    NULL, &binder_state_fops);
        debugfs_create_file("stats", S_IRUGO,
                    binder_debugfs_dir_entry_root,
                    NULL, &binder_stats_fops);
        debugfs_create_file("transactions", S_IRUGO,
                    binder_debugfs_dir_entry_root,
                    NULL, &binder_transactions_fops);
        debugfs_create_file("transaction_log", S_IRUGO,
                    binder_debugfs_dir_entry_root,
                    &binder_transaction_log,
                    &binder_transaction_log_fops);
        debugfs_create_file("failed_transaction_log", S_IRUGO,
                    binder_debugfs_dir_entry_root,
                    &binder_transaction_log_failed,
                    &binder_transaction_log_fops);
    }
    return ret;
}

1. A single-threaded deferred work queue named "binder" is created for the binder; steps 2 and 3 follow.

2. The binder device is registered via misc_register into the system's misc device list.

3. A binder directory with a proc subdirectory is created for debugging, and five files are created under binder: state, stats, transactions, transaction_log, and failed_transaction_log.

This completes loading the binder driver on the Android device.

Now that the "routing" (the binder driver) is ready, let's look at the "DNS": the Service Manager.

Service Manager

First look at this passage in init.rc:

start servicemanager
start hwservicemanager
start vndservicemanager

Let’s look at servicemanager.rc again

service servicemanager /system/bin/servicemanager
    class core animation
    user system
    group system readproc
    critical
    onrestart restart healthd
    onrestart restart zygote
    onrestart restart audioserver
    onrestart restart media
    onrestart restart surfaceflinger
    onrestart restart inputflinger
    onrestart restart drm
    onrestart restart cameraserver
    onrestart restart keystore
    onrestart restart gatekeeperd
    writepid /dev/cpuset/system-background/tasks
    shutdown critical

Let’s look directly at the main method after startup.

int main(int argc, char** argv)
{
    struct binder_state *bs;
    union selinux_callback cb;
    char *driver;

    if (argc > 1) {
        driver = argv[1];
    } else {
        driver = "/dev/binder";
    }

    // Open the binder driver and map 128 KB
    bs = binder_open(driver, 128*1024);
    ...
    // Become the context manager, i.e. the Binder "DNS"
    if (binder_become_context_manager(bs)) {
        ALOGE("cannot become context manager (%s)\n", strerror(errno));
        return -1;
    }
    ...
    // Enter the loop and wait for events
    binder_loop(bs, svcmgr_handler);

    return 0;
}

In general, there are three steps involved.

1. Open the binder driver.

2. Register service_manager as the context manager, the first service in Binder, often called the Binder daemon.

3. Enter the binder loop and wait for commands.

The first step of the ServiceManager initialization is to open the Binder driver:

struct binder_state *binder_open(const char* driver, size_t mapsize)
{
    struct binder_state *bs;
    struct binder_version vers;

    bs = malloc(sizeof(*bs));
    if (!bs) {
        errno = ENOMEM;
        return NULL;
    }

    bs->fd = open(driver, O_RDWR | O_CLOEXEC);
    if (bs->fd < 0) {
        fprintf(stderr, "binder: cannot open %s (%s)\n",
                driver, strerror(errno));
        goto fail_open;
    }

    if ((ioctl(bs->fd, BINDER_VERSION, &vers) == -1) ||
        (vers.protocol_version != BINDER_CURRENT_PROTOCOL_VERSION)) {
        fprintf(stderr,
                "binder: kernel driver version (%d) differs from user space version (%d)\n",
                vers.protocol_version, BINDER_CURRENT_PROTOCOL_VERSION);
        goto fail_open;
    }

    bs->mapsize = mapsize;
    bs->mapped = mmap(NULL, mapsize, PROT_READ, MAP_PRIVATE, bs->fd, 0);
    if (bs->mapped == MAP_FAILED) {
        fprintf(stderr, "binder: cannot map device (%s)\n", strerror(errno));
        goto fail_map;
    }

    return bs;

fail_map:
    close(bs->fd);
fail_open:
    free(bs);
    return NULL;
}

A binder_state structure is allocated in user space; the binder driver file is opened and the resulting descriptor is stored in the fd field of binder_state.

The core is here

bs->fd = open(driver, O_RDWR | O_CLOEXEC);

How syscall works

How does user mode get into kernel mode? The open call above actually enters kernel mode, and the bridge that makes this possible is the system call (syscall) mechanism.

In general, the system finds the corresponding kernel method by number, and this is where syscall comes in. Those familiar with C/C++ programming will know that we need the header file <unistd.h> whenever we want to make system calls, and this header is the key.

#ifndef _UAPI_ASM_ARM_UNISTD_COMMON_H
#define _UAPI_ASM_ARM_UNISTD_COMMON_H 1
#define __NR_restart_syscall (__NR_SYSCALL_BASE + 0)
#define __NR_exit (__NR_SYSCALL_BASE + 1)
#define __NR_fork (__NR_SYSCALL_BASE + 2)
#define __NR_read (__NR_SYSCALL_BASE + 3)
#define __NR_write (__NR_SYSCALL_BASE + 4)
#define __NR_open (__NR_SYSCALL_BASE + 5)
#define __NR_close (__NR_SYSCALL_BASE + 6)
#define __NR_creat (__NR_SYSCALL_BASE + 8)
#define __NR_link (__NR_SYSCALL_BASE + 9)
...
#endif

Each kernel method has a number declared in user space. Each call raises a software interrupt: user space enters a small assembly stub, and the number from this header is used to locate the corresponding method in the kernel.

Therefore, we can draw a syscall flow chart. Let's take kill, the example most commonly analyzed on the web.

From the picture above, the whole flow is as follows:

1. Call the kill() method.

2. Call the kill.S assembly method.

3. Formally enter kernel mode through the assembly stub

4. Look up sys_kill in sys_call_table

5. Execute the real kernel implementation and return via ret_fast_syscall

6. Go back to the user space kill() code.

From this we can see that a single unistd.h header, shared one-to-one between user space and kernel space, maps each user-space call to its kernel counterpart. The core mechanism of the user-mode-to-kernel-mode transition is the SWI software interrupt.

So here is the pattern: every user-space call xxx that ends in the kernel has a corresponding xxx.S assembly file. The assembly stub loads the syscall number (typically __NR_xxx), which the kernel uses to dispatch to the actual kernel method (sys_xxx).
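We can observe the same number-based dispatch from user space: the syscall() wrapper traps into the kernel with an explicit __NR_xxx number, which is essentially what libc's getpid() does under the hood. The helper name is illustrative.

```c
#include <assert.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Sketch: invoke a system call by its number directly. The kernel uses the
 * number to index sys_call_table and run the real sys_getpid. */
long direct_getpid(void) {
    return syscall(SYS_getpid);
}
```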

User space handling of open

#ifndef _FCNTL_H
#define _FCNTL_H

...
int openat(int __dir_fd, const char* __path, int __flags, ...);
int openat64(int __dir_fd, const char* __path, int __flags, ...) __INTRODUCED_IN(21);
int open(const char* __path, int __flags, ...);
...

#endif

At this point we find this method implemented in open.cpp

#include <fcntl.h>
#include <stdarg.h>
#include <stdlib.h>
#include <unistd.h>

#include "private/bionic_fortify.h"

extern "C" int __openat(int, const char*, int, int);

int open(const char* pathname, int flags, ...) {
    mode_t mode = 0;

    if (needs_mode(flags)) {
        va_list args;
        va_start(args, flags);
        mode = static_cast<mode_t>(va_arg(args, int));
        va_end(args);
    }

    return __openat(AT_FDCWD, pathname, force_O_LARGEFILE(flags), mode);
}

This version routes open through the __openat syscall number rather than a separate __open one, which reduces the number of syscall constants. Let's look at the assembly in __openat.S below.

#include <private/bionic_asm.h>

ENTRY(__openat)
    mov     ip, r7
    .cfi_register r7, ip
    ldr     r7, =__NR_openat    // load the syscall number
    swi     #0                  // enter the kernel
    mov     r7, ip              // restore the previous r7 from ip
    .cfi_restore r7
    cmn     r0, #(MAX_ERRNO + 1)
    bxls    lr
    neg     r0, r0
    b       __set_errno_internal
END(__openat)

First, the old value of r7 is saved to ip and the syscall number __NR_openat is loaded into r7; then the SWI instruction traps into kernel mode, and when processing completes, r7 is restored and execution returns to user mode.

Processing of open in kernel space

So let's go straight to sys_openat. In fact, there is no function literally named sys_openat in the source, but we can find the following:

SYSCALL_DEFINE4(openat, int, dfd, const char __user *, filename, int, flags,
        umode_t, mode)
{
    if (force_o_largefile())
        flags |= O_LARGEFILE;

    return do_sys_open(dfd, filename, flags, mode);
}

And SYSCALL_DEFINE4 is a macro that expands its first argument into the function name sys_openat.

Finally, the open core processing method do_sys_open.

do_sys_open

Next comes a bit of VFS, the Linux virtual file system.

long do_sys_open(int dfd, const char __user *filename, int flags, umode_t mode)
{
    struct open_flags op;
    int fd = build_open_flags(flags, mode, &op);
    struct filename *tmp;

    if (fd)
        return fd;

    tmp = getname(filename);
    if (IS_ERR(tmp))
        return PTR_ERR(tmp);

    // current is the task_struct of the calling process
    fd = get_unused_fd_flags(flags);
    if (fd >= 0) {
        struct file *f = do_filp_open(dfd, tmp, &op);
        if (IS_ERR(f)) {
            put_unused_fd(fd);
            fd = PTR_ERR(f);
        } else {
            fsnotify_open(f);
            fd_install(fd, f);
        }
    }
    putname(tmp);
    return fd;
}

1. getname obtains the pathname from the process address space.

2. get_unused_fd_flags obtains a free file descriptor from the current process's fd table, expanding the table when needed (by doubling, with a minimum of PAGE_SIZE * 8 entries).

3. If fd >= 0, a free slot was found, and do_filp_open is used to obtain the file structure.

4. fd_install binds the fd to the file structure.

Let’s focus on the do_filp_open method.

path_openat

The core logic for do_filp_open is in path_openat

static struct file *path_openat(int dfd, struct filename *pathname,
        struct nameidata *nd, const struct open_flags *op, int flags)
{
    struct file *base = NULL;
    struct file *file;
    struct path path;
    int opened = 0;
    int error;

    file = get_empty_filp();
    if (IS_ERR(file))
        return file;

    file->f_flags = op->open_flag;

    if (unlikely(file->f_flags & __O_TMPFILE)) {
        error = do_tmpfile(dfd, pathname, nd, flags, op, file, &opened);
        goto out;
    }

    error = path_init(dfd, pathname->name, flags | LOOKUP_PARENT, nd, &base);
    if (unlikely(error))
        goto out;

    current->total_link_count = 0;
    error = link_path_walk(pathname->name, nd);
    if (unlikely(error))
        goto out;

    error = do_last(nd, &path, file, op, &opened, pathname);
    while (unlikely(error > 0)) { /* trailing symlink */
        struct path link = path;
        void *cookie;
        if (!(nd->flags & LOOKUP_FOLLOW)) {
            path_put_conditional(&path, nd);
            path_put(&nd->path);
            error = -ELOOP;
            break;
        }
        error = may_follow_link(&link, nd);
        if (unlikely(error))
            break;
        nd->flags |= LOOKUP_PARENT;
        nd->flags &= ~(LOOKUP_OPEN|LOOKUP_CREATE|LOOKUP_EXCL);
        error = follow_link(&link, nd, &cookie);
        if (unlikely(error))
            break;
        error = do_last(nd, &path, file, op, &opened, pathname);
        put_link(nd, &link, cookie);
    }
out:
    if (nd->root.mnt && !(nd->flags & LOOKUP_ROOT))
        path_put(&nd->root);
    if (base)
        fput(base);
    if (!(opened & FILE_OPENED)) {
        BUG_ON(!error);
        put_filp(file);
    }
    if (unlikely(error)) {
        if (error == -EOPENSTALE) {
            if (flags & LOOKUP_RCU)
                error = -ECHILD;
            else
                error = -ESTALE;
        }
        file = ERR_PTR(error);
    }
    return file;
}

We don't need to get into the details; the point is to understand how a binder call travels from Linux user space into kernel space.

Here is the core, roughly divided into four steps:

1. get_empty_filp obtains an empty file structure from the filp cache.

2. path_init sets up nameidata, an important structure that represents a file's path inside the kernel; it is commonly used when parsing and looking up pathnames.

3. link_path_walk parses the path component by component, initializing the dentry structure (the directory entry of the virtual file system) and setting the inode structure. The inode, which is central to the virtual file system, holds the information the file system needs to process; it is unique and exists as long as the file exists.

4. Finally, do_last actually opens the file through the virtual file system.

Let's focus on what do_last does, concentrating on the core logic.

static int do_last(struct nameidata *nd, struct path *path,
        struct file *file, const struct open_flags *op,
        int *opened, struct filename *name)
{
    ...
    if (!(open_flag & O_CREAT)) {
        if (nd->last.name[nd->last.len])
            nd->flags |= LOOKUP_FOLLOW | LOOKUP_DIRECTORY;
        if (open_flag & O_PATH && !(nd->flags & LOOKUP_FOLLOW))
            symlink_ok = true;
        /* we _can_ be in RCU mode here */
        error = lookup_fast(nd, path, &inode);
        if (likely(!error))
            goto finish_lookup;
        if (error < 0)
            goto out;
        BUG_ON(nd->inode != dir->d_inode);
    } else {
        ...
    }

retry_lookup:
    ...
finish_lookup:
    /* we _can_ be in RCU mode here */
    error = -ENOENT;
    if (!inode || d_is_negative(path->dentry)) {
        path_to_nameidata(path, nd);
        goto out;
    }

    if (should_follow_link(path->dentry, !symlink_ok)) {
        if (nd->flags & LOOKUP_RCU) {
            if (unlikely(unlazy_walk(nd, path->dentry))) {
                error = -ECHILD;
                goto out;
            }
        }
        BUG_ON(inode != path->dentry->d_inode);
        return 1;
    }

    if ((nd->flags & LOOKUP_RCU) || nd->path.mnt != path->mnt) {
        path_to_nameidata(path, nd);
    } else {
        save_parent.dentry = nd->path.dentry;
        save_parent.mnt = mntget(path->mnt);
        nd->path.dentry = path->dentry;
    }
    nd->inode = inode;
    /* Why this, you ask? _Now_ we might have grown LOOKUP_JUMPED... */
finish_open:
    ...
finish_open_created:
    error = may_open(&nd->path, acc_mode, open_flag);
    if (error)
        goto out;
    BUG_ON(*opened & FILE_OPENED); /* once it's opened, it's opened */
    error = vfs_open(&nd->path, file, current_cred());
    if (!error) {
        *opened |= FILE_OPENED;
    } else {
        if (error == -EOPENSTALE)
            goto stale_open;
        goto out;
    }
opened:
    ...
out:
    if (got_write)
        mnt_drop_write(nd->path.mnt);
    path_put(&save_parent);
    terminate_walk(nd);
    return error;
exit_dput:
    ...
exit_fput:
    ...
stale_open:
    ...
}

Simplified, if we want to open an existing file, the flow goes like this. do_last calls lookup_fast, which under RCU (a synchronization mechanism that allows many concurrent readers alongside a single writer) checks whether the corresponding dentry is a mount point; if it is, the walk follows the mount rather than continuing the lookup. Once the target is found, control reaches the finish_lookup label, which determines whether the entry found is a symlink; if not, the result is stored in the nameidata. Then may_open checks permissions, and finally vfs_open is called to actually invoke the virtual file system's open method.

vfs_open

vfs_open calls do_dentry_open:

static int do_dentry_open(struct file *f,
        int (*open)(struct inode *, struct file *),
        const struct cred *cred)
{
    static const struct file_operations empty_fops = {};
    struct inode *inode;
    int error;

    f->f_mode = OPEN_FMODE(f->f_flags) | FMODE_LSEEK |
            FMODE_PREAD | FMODE_PWRITE;

    path_get(&f->f_path);
    inode = f->f_inode = f->f_path.dentry->d_inode;
    f->f_mapping = inode->i_mapping;
    ...
    f->f_op = fops_get(inode->i_fop);
    ...
    if (!open)
        open = f->f_op->open;
    if (open) {
        error = open(inode, f);
        if (error)
            goto cleanup_all;
    }
    ...

Here the file structure picks up its operation table from the inode: f->f_op = fops_get(inode->i_fop). If no explicit open callback was passed in, the open method from that table is used, and it is invoked once it is confirmed to be non-NULL.

Back in the Binder

After this long journey from user space into kernel space, recall the binder driver's file_operations table.

static const struct file_operations binder_fops = {
...
   .open = binder_open,
...
};

Here, f->f_op->open corresponds to the binder_open method pointer in the binder driver.

binder_open

static int binder_open(struct inode *nodp, struct file *filp)
{
    struct binder_proc *proc;

    binder_debug(BINDER_DEBUG_OPEN_CLOSE, "binder_open: %d:%d\n",
             current->group_leader->pid, current->pid);

    proc = kzalloc(sizeof(*proc), GFP_KERNEL);
    if (proc == NULL)
        return -ENOMEM;
    get_task_struct(current);
    proc->tsk = current;
    INIT_LIST_HEAD(&proc->todo);
    init_waitqueue_head(&proc->wait);
    proc->default_priority = task_nice(current);

    binder_lock(__func__);

    binder_stats_created(BINDER_STAT_PROC);
    hlist_add_head(&proc->proc_node, &binder_procs);
    proc->pid = current->group_leader->pid;
    INIT_LIST_HEAD(&proc->delivered_death);
    filp->private_data = proc;

    binder_unlock(__func__);

    if (binder_debugfs_dir_entry_proc) {
        char strbuf[11];

        snprintf(strbuf, sizeof(strbuf), "%u", proc->pid);
        proc->debugfs_entry = debugfs_create_file(strbuf, S_IRUGO,
            binder_debugfs_dir_entry_proc, proc, &binder_proc_fops);
    }

    return 0;
}

Let's pick out some details here. First we encounter the first important Binder structure, binder_proc, which represents the process currently using the Binder driver.

struct binder_proc *proc;

binder_debug(BINDER_DEBUG_OPEN_CLOSE, "binder_open: %d:%d\n",
         current->group_leader->pid, current->pid);

proc = kzalloc(sizeof(*proc), GFP_KERNEL);
if (proc == NULL)
    return -ENOMEM;
get_task_struct(current);
proc->tsk = current;

kzalloc allocates and zero-initializes a piece of memory in kernel space, so binder_proc lives in the kernel. If you are familiar with Linux, current is the task_struct, the process descriptor, of the calling process. So binder_proc first records which process is currently using the binder.

 INIT_LIST_HEAD(&proc->todo);
    init_waitqueue_head(&proc->wait);
    proc->default_priority = task_nice(current);

    binder_lock(__func__);

    binder_stats_created(BINDER_STAT_PROC);
    hlist_add_head(&proc->proc_node, &binder_procs);
    proc->pid = current->group_leader->pid;
    INIT_LIST_HEAD(&proc->delivered_death);

Next we will initialize several queues required by the binder:

1. proc->todo, the binder's todo queue.

2. proc->wait, the wait queue.

3. proc->proc_node is added to the head of the global binder_procs list.

4. proc->delivered_death, the death-notification list, used when binder sends death notifications.

filp->private_data = proc;

Remember that at this point we store the binder_proc object in the file's private_data, ready for use in the steps that follow.