Preface

This analysis is based on the AOSP android-11.0.0_r25 branch, with the kernel branch android-msm-wahoo-4.4-android11.

Today we will begin to analyze the Binder mechanism, starting from the Binder driver code.

Note

The Binder driver code is not part of the AOSP project itself, so we need to clone the kernel source separately.

My development device is a Pixel 2, whose Linux kernel version is 4.4.223; the corresponding branch is android-msm-wahoo-4.4-android11, so today's analysis is based on that branch.

I cloned the code from the Tsinghua University mirror; for a Qualcomm device the address is: aosp.tuna.tsinghua.edu.cn/android/ker…

Initialization

The Binder driver source code is located in the drivers/android directory; we start with the binder.c file.

Linux initcall mechanism

At the bottom of binder.c we see this line of code

device_initcall(binder_init);

In the Linux kernel, drivers are usually registered with xxx_initcall(fn), which is actually a macro defined in the platform's corresponding init.h file:

#define early_initcall(fn) __define_initcall(fn, early)
#define pure_initcall(fn) __define_initcall(fn, 0) 
#define core_initcall(fn) __define_initcall(fn, 1) 
#define core_initcall_sync(fn) __define_initcall(fn, 1s) 
#define postcore_initcall(fn) __define_initcall(fn, 2) 
#define postcore_initcall_sync(fn) __define_initcall(fn, 2s) 
#define arch_initcall(fn) __define_initcall(fn, 3) 
#define arch_initcall_sync(fn) __define_initcall(fn, 3s) 
#define subsys_initcall(fn) __define_initcall(fn, 4)
#define subsys_initcall_sync(fn) __define_initcall(fn, 4s) 
#define fs_initcall(fn) __define_initcall(fn, 5) 
#define fs_initcall_sync(fn) __define_initcall(fn, 5s) 
#define rootfs_initcall(fn) __define_initcall(fn, rootfs) 
#define device_initcall(fn) __define_initcall(fn, 6) 
#define device_initcall_sync(fn) __define_initcall(fn, 6s) 
#define late_initcall(fn) __define_initcall(fn, 7) 
#define late_initcall_sync(fn) __define_initcall(fn, 7s)

As you can see, each of these macros expands to __define_initcall(), whose second argument indicates the priority: the smaller the number, the higher the priority, and a level with an s suffix is lower priority than the same level without it.

During Linux kernel startup, many initialization functions need to be called. The underlying implementation defines a dedicated section in the kernel image that stores the addresses of these initialization functions; at boot, the kernel simply walks the function pointers in this section and executes them one by one. __define_initcall() adds a custom init function to that section.

binder_init

With the above macro definitions in mind, we can look back at device_initcall(binder_init) and see that the binder_init function is called when the Linux kernel starts.

static int __init binder_init(void)
{
    int ret;
    char *device_name, *device_names, *device_tmp;
    struct binder_device *device;
    struct hlist_node *tmp;

    // Initialize memory reclamation for binder
    ret = binder_alloc_shrinker_init();
    if (ret)
        return ret;

    // Create a single-threaded work queue for processing asynchronous tasks
    binder_deferred_workqueue = create_singlethread_workqueue("binder");
    if (!binder_deferred_workqueue)
        return -ENOMEM;
    
    // Create the binder/proc directory
    binder_debugfs_dir_entry_root = debugfs_create_dir("binder", NULL);
    if (binder_debugfs_dir_entry_root)
        binder_debugfs_dir_entry_proc = debugfs_create_dir("proc",
                         binder_debugfs_dir_entry_root);
    // Create 5 files under binder
    if (binder_debugfs_dir_entry_root) {
        debugfs_create_file("state", 0444,
                    binder_debugfs_dir_entry_root,
                    NULL,
                    &binder_state_fops);
        debugfs_create_file("stats", 0444,
                    binder_debugfs_dir_entry_root,
                    NULL,
                    &binder_stats_fops);
        debugfs_create_file("transactions", 0444,
                    binder_debugfs_dir_entry_root,
                    NULL,
                    &binder_transactions_fops);
        debugfs_create_file("transaction_log", 0444,
                    binder_debugfs_dir_entry_root,
                    &binder_transaction_log,
                    &binder_transaction_log_fops);
        debugfs_create_file("failed_transaction_log", 0444,
                    binder_debugfs_dir_entry_root,
                    &binder_transaction_log_failed,
                    &binder_transaction_log_fops);
    }

    //"binder,hwbinder,vndbinder"
    device_names = kzalloc(strlen(binder_devices_param) + 1, GFP_KERNEL);
    if (!device_names) {
        ret = -ENOMEM;
        goto err_alloc_device_names_failed;
    }
    strcpy(device_names, binder_devices_param);

    device_tmp = device_names;
    // Call init_binder_device for binder, hwbinder, and vndbinder respectively
    while ((device_name = strsep(&device_tmp, ","))) {
        ret = init_binder_device(device_name);
        if (ret)
            goto err_init_binder_device_failed;
    }

    return ret;

err_init_binder_device_failed:
    ...

err_alloc_device_names_failed:
    ...
}

We’ll focus on the init_binder_device function

init_binder_device

static int __init init_binder_device(const char *name)
{
    int ret;
    struct binder_device *binder_device;

    binder_device = kzalloc(sizeof(*binder_device), GFP_KERNEL);
    if (!binder_device)
        return -ENOMEM;

    // Register file_operations for the binder virtual character device
    binder_device->miscdev.fops = &binder_fops;
    // Dynamically allocate the device number
    binder_device->miscdev.minor = MISC_DYNAMIC_MINOR;
    binder_device->miscdev.name = name;

    binder_device->context.binder_context_mgr_uid = INVALID_UID;
    binder_device->context.name = name;
    // Initialize the mutex
    mutex_init(&binder_device->context.context_mgr_node_lock);
    // Register the misc device
    ret = misc_register(&binder_device->miscdev);
    if (ret < 0) {
        kfree(binder_device);
        return ret;
    }
    // Add the binder device to the linked list (head insertion)
    hlist_add_head(&binder_device->hlist, &binder_devices);

    return ret;
}

A binder_device structure is constructed to hold the binder parameters, then misc_register registers binder as a virtual character (misc) device.

Registering a misc device

Let's first learn how to register a misc device in Linux.

Misc (miscellaneous) devices are Linux devices that do not fit into the standard categories. The misc framework provided by the Linux kernel is very inclusive: any device that cannot be classified as a standard character device can be registered as a misc device, such as NVRAM, watchdog, real-time clock, character LCD, etc.

In the Linux kernel, all misc devices are organized into a subsystem for unified management. All miscdevice devices in this subsystem share a single major device number, MISC_MAJOR (10), but have different minor device numbers.

A misc device is represented in the kernel by the miscdevice structure, defined in include/linux/miscdevice.h:

struct miscdevice  {
    int minor;
    const char *name;
    const struct file_operations *fops;
    struct list_head list;
    struct device *parent;
    struct device *this_device;
    const struct attribute_group **groups;
    const char *nodename;
    umode_t mode;
};

When registering a misc device ourselves, we only need to fill in the first three fields:

  • minor: the minor device number; if set to MISC_DYNAMIC_MINOR, the kernel assigns a minor number dynamically
  • name: the device name
  • fops: a file_operations structure defining the device's own file-operation functions; if left empty, the default misc_fops is used

The file_operations structure is defined in include/linux/fs.h:

struct file_operations {
    struct module *owner;
    loff_t (*llseek) (struct file *, loff_t, int);
    ssize_t (*read) (struct file *, char __user *, size_t, loff_t *);
    ssize_t (*write) (struct file *, const char __user *, size_t, loff_t *);
    ssize_t (*read_iter) (struct kiocb *, struct iov_iter *);
    ssize_t (*write_iter) (struct kiocb *, struct iov_iter *);
    int (*iterate) (struct file *, struct dir_context *);
    unsigned int (*poll) (struct file *, struct poll_table_struct *);
    long (*unlocked_ioctl) (struct file *, unsigned int, unsigned long);
    long (*compat_ioctl) (struct file *, unsigned int, unsigned long);
    int (*mmap) (struct file *, struct vm_area_struct *);
    int (*open) (struct inode *, struct file *);
    int (*flush) (struct file *, fl_owner_t id);
    int (*release) (struct inode *, struct file *);
    int (*fsync) (struct file *, loff_t, loff_t, int datasync);
    int (*aio_fsync) (struct kiocb *, int datasync);
    int (*fasync) (int, struct file *, int);
    int (*lock) (struct file *, int, struct file_lock *);
    ssize_t (*sendpage) (struct file *, struct page *, int, size_t, loff_t *, int);
    unsigned long (*get_unmapped_area)(struct file *, unsigned long, unsigned long, unsigned long, unsigned long);
    int (*check_flags)(int);
    int (*flock) (struct file *, int, struct file_lock *);
    ssize_t (*splice_write)(struct pipe_inode_info *, struct file *, loff_t *, size_t, unsigned int);
    ssize_t (*splice_read)(struct file *, loff_t *, struct pipe_inode_info *, size_t, unsigned int);
    int (*setlease)(struct file *, long, struct file_lock **, void **);
    long (*fallocate)(struct file *file, int mode, loff_t offset,
              loff_t len);
    void (*show_fdinfo)(struct seq_file *m, struct file *f);
#ifndef CONFIG_MMU
    unsigned (*mmap_capabilities)(struct file *);
#endif
};

file_operations is the key structure that associates system calls with the driver. Each member corresponds to a system call; when a Linux system call is made, the kernel reads the corresponding function pointer from file_operations and transfers control to that function, completing the work of the Linux device driver.

Finally, the misc_register function is called to register the misc device. Its prototype is as follows:

// Register a misc device
extern int misc_register(struct miscdevice *misc);
// Unregister a misc device
extern void misc_deregister(struct miscdevice *misc);

Registering the binder device

The binder driver wraps the misc device in its own binder_device structure:

struct binder_device {
    struct hlist_node hlist;
    struct miscdevice miscdev;
    struct binder_context context;
};

Here hlist_node is a node in a linked list, miscdevice is the structure required for misc registration as described above, and binder_context holds information about the binder context manager.

Back in the code: the miscdevice is filled in with its file_operations and minor number, the binder_context is simply initialized, and misc_register is called to register the misc device. Finally, the binder device is added to a global linked list using head insertion.

Let's look at the file_operations it specifies:

static const struct file_operations binder_fops = {
    .owner = THIS_MODULE,
    .poll = binder_poll,
    .unlocked_ioctl = binder_ioctl,
    .compat_ioctl = binder_ioctl,
    .mmap = binder_mmap,
    .open = binder_open,
    .flush = binder_flush,
    .release = binder_release,
};

As you can see, the binder driver supports these seven system calls, which we'll examine one by one.

binder_proc

binder_proc is a structure that describes process context information and manages IPC; it is a private structure defined in drivers/android/binder.c:

struct binder_proc {
    // A node in the hash list
    struct hlist_node proc_node;
    // A red-black tree of threads processing user requests
    struct rb_root threads;
    // A red-black tree of binder entities
    struct rb_root nodes;
    // A red-black tree of binder references, sorted by handles
    struct rb_root refs_by_desc;
    // A red-black tree of binder references, sorted by the addresses of their corresponding binder entities
    struct rb_root refs_by_node;
    struct list_head waiting_threads;
    // Process id
    int pid;
    // Process descriptor
    struct task_struct *tsk;
    // All file data opened by the process
    struct files_struct *files;
    struct mutex files_lock;
    struct hlist_node deferred_work_node;
    int deferred_work;
    bool is_dead;
    // Queue of pending events
    struct list_head todo;
    struct binder_stats stats;
    struct list_head delivered_death;
    int max_threads;
    int requested_threads;
    int requested_threads_started;
    atomic_t tmp_ref;
    struct binder_priority default_priority;
    struct dentry *debugfs_entry;
    // It is used to record the user virtual address space and kernel virtual address space allocated by Mmap
    struct binder_alloc alloc;
    struct binder_context *context;
    spinlock_t inner_lock;
    spinlock_t outer_lock;
};

binder_open

We start by opening the Binder driver device

static int binder_open(struct inode *nodp, struct file *filp)
{
    // A structure that manages IPC and holds process information
    struct binder_proc *proc;
    struct binder_device *binder_dev;
    ...
    proc = kzalloc(sizeof(*proc), GFP_KERNEL);
    if (proc == NULL)
        return -ENOMEM;
        
    // Initializes the kernel synchronization spin lock
    spin_lock_init(&proc->inner_lock);
    spin_lock_init(&proc->outer_lock);
    // Atomic operation assignment
    atomic_set(&proc->tmp_ref, 0);
    // Increase task_struct.usage of the process executing the current system call by 1
    get_task_struct(current->group_leader);
    // Make tsk in binder_proc point to the process executing the current system call
    proc->tsk = current->group_leader;
    // Initialize the file lock
    mutex_init(&proc->files_lock);
    // Initialize the todo list
    INIT_LIST_HEAD(&proc->todo);
    // Set the priority
    if (binder_supported_policy(current->policy)) {
        proc->default_priority.sched_policy = current->policy;
        proc->default_priority.prio = current->normal_prio;
    } else {
        proc->default_priority.sched_policy = SCHED_NORMAL;
        proc->default_priority.prio = NICE_TO_PRIO(0);
    }
    // Find the first address of the binder_device structure
    binder_dev = container_of(filp->private_data, struct binder_device,
                  miscdev);
    // makes the binder_proc context point to the binder_device context
    proc->context = &binder_dev->context;
    // Initialize the binder buffer
    binder_alloc_init(&proc->alloc);
    // The number of objects of type BINDER_STAT_PROC created in the global binder_stats structure increases by 1
    binder_stats_created(BINDER_STAT_PROC);
    // Set the current process ID
    proc->pid = current->group_leader->pid;
    // Initializes the distributed death notification list
    INIT_LIST_HEAD(&proc->delivered_death);
    // Initialize the list of waiting threads
    INIT_LIST_HEAD(&proc->waiting_threads);
    // Save the binder_proc data
    filp->private_data = proc;

    // Because binder supports multithreading, locking is required
    mutex_lock(&binder_procs_lock);
    // Add binder_proc to the binder_procs global list
    hlist_add_head(&proc->proc_node, &binder_procs);
    // Release the lock
    mutex_unlock(&binder_procs_lock);

    // Create a file in binder/proc named with the id of the process executing the current system call
    if (binder_debugfs_dir_entry_proc) {
        char strbuf[11];
        snprintf(strbuf, sizeof(strbuf), "%u", proc->pid);
        proc->debugfs_entry = debugfs_create_file(strbuf, 0444,
            binder_debugfs_dir_entry_proc,
            (void *)(unsigned long)proc->pid,
            &binder_proc_fops);
    }

    return 0;
}

The binder_open function creates a binder_proc structure, initializes it with information about the current process, and stores it in the private_data field of the file pointer filp. It then adds the binder_proc to the global linked list binder_procs.

There are a few things about Linux that need explaining

spinlock

spinlock is a spin-lock mechanism provided by the kernel. In the Linux kernel, data is often shared between interrupt context and process context. If only process context were involved, a mutex or semaphore would do: a process that fails to get the lock is put to sleep while it waits. But an interrupt context is not a process; it has no task_struct, cannot be scheduled, and therefore cannot sleep. In that situation, the busy-waiting mechanism of spinlock achieves a similar effect to sleeping on a lock.

current

The Linux kernel defines a macro called current in asm/current.h:

static inline struct task_struct *get_current(void)
{
	return(current_thread_info()->task);
}

#define	current	get_current()

It returns a task_struct pointer to the process executing the current kernel code

container_of

container_of is another macro defined in Linux; it obtains a pointer to a structure from a pointer to one of that structure's member fields:

#define offsetof(TYPE, MEMBER)	((size_t)&((TYPE *)0)->MEMBER)

#define container_of(ptr, type, member) ({              \
    const typeof( ((type *)0)->member ) *__mptr = (ptr);    \
    (type *)( (char *)__mptr - offsetof(type,member) ); })

fd & filp

filp->private_data holds the binder_proc structure. When a process calls the open system call, the kernel returns a file descriptor fd that corresponds to the file pointer filp. That fd is passed in subsequent calls to mmap, ioctl and the other functions that interact with the binder driver; the kernel then invokes binder_mmap, binder_ioctl, etc. with the file pointer filp, so these functions can retrieve the binder_proc structure via filp->private_data.

binder_mmap

vm_area_struct

Before we look at mmap, we need to take a look at the vm_area_struct structure, which is defined in include/linux/mm_types.h:

struct vm_area_struct {
    // The first address of the current VMA
    unsigned long vm_start;
    // Address of the first byte after the last address of the current VMA
    unsigned long vm_end;
    
    // Linked list of VMAs
    struct vm_area_struct *vm_next, *vm_prev;
    // Corresponding node in red-black tree
    struct rb_node vm_rb;

    // How much free space is left in front of the current VMA
    unsigned long rb_subtree_gap;

    // Specifies the memory address space to which the current VMA belongs
    struct mm_struct *vm_mm;
    // Access permission
    pgprot_t vm_page_prot;
    // VMA flag set, defined in include/linux/mm.h
    unsigned long vm_flags;

    union {
        struct {
            struct rb_node rb;
            unsigned long rb_subtree_last;
        } shared;
        const char __user *anon_name;
    };

    struct list_head anon_vma_chain;
    struct anon_vma *anon_vma;

    // The current VMA operates on the function set pointer
    const struct vm_operations_struct *vm_ops;

    // the file offset of the current VMA start address in vm_file, in unit of physical page PAGE_SIZE
    unsigned long vm_pgoff;
    // The file to be mapped (if using file mapping)
    struct file * vm_file;
    void * vm_private_data;

#ifndef CONFIG_MMU
    struct vm_region *vm_region;	/* NOMMU mapping region */
#endif
#ifdef CONFIG_NUMA
    struct mempolicy *vm_policy;	/* NUMA policy for the VMA */
#endif
    struct vm_userfaultfd_ctx vm_userfaultfd_ctx;
};

The vm_area_struct structure describes a segment of virtual memory space. The virtual memory space used by a process is generally not contiguous, and different parts may have different access attributes, so a process needs multiple vm_area_struct structures to describe its virtual memory space (hereinafter referred to as VMA).

Each process's task_struct has an mm_struct that describes the process's memory space; the mm_struct in turn has two member fields pointing to the head of the VMA linked list and the root of the VMA red-black tree.

The range of virtual memory described by a VMA is given by vm_start and vm_end: vm_start is the first address of the VMA, and vm_end is the address of the first byte after its last address. That is, the range is [vm_start, vm_end).

vm_operations_struct, similar to file_operations above, defines the set of operation functions for a virtual memory area.


Having introduced the VMA, let’s take a look at the binder_mmap function

static int binder_mmap(struct file *filp, struct vm_area_struct *vma)
{
    int ret;
    struct binder_proc *proc = filp->private_data;
    const char *failure_string;

    // Verify process information
    if (proc->tsk != current->group_leader)
        return -EINVAL;

    // Limit the virtual memory size to at most 4M
    if ((vma->vm_end - vma->vm_start) > SZ_4M)
        vma->vm_end = vma->vm_start + SZ_4M;
    ...
    // Check whether the user space is writable (FORBIDDEN_MMAP_FLAGS == VM_WRITE)
    if (vma->vm_flags & FORBIDDEN_MMAP_FLAGS) {
        ret = -EPERM;
        failure_string = "bad vm_flags";
        goto err_bad_arg;
    }
    //VM_DONTCOPY Indicates that the VMA cannot be copied by fork
    vma->vm_flags |= VM_DONTCOPY | VM_MIXEDMAP;
    // The VM_WRITE flag of the VMA cannot be set in user space
    vma->vm_flags &= ~VM_MAYWRITE;
    // Set this vMA operation function set
    vma->vm_ops = &binder_vm_ops;
    // Point to binder_proc
    vma->vm_private_data = proc;

    // Handle the mapping between process virtual memory space and kernel virtual address space
    ret = binder_alloc_mmap_handler(&proc->alloc, vma);
    if (ret)
        return ret;
    mutex_lock(&proc->files_lock);
    // Get the open file information structure files_struct of the process and increment the reference count by one
    proc->files = get_files_struct(current);
    mutex_unlock(&proc->files_lock);
    return 0;

err_bad_arg:
    pr_err("%s: %d %lx-%lx %s failed %d\n", __func__,
           proc->pid, vma->vm_start, vma->vm_end, failure_string, ret);
    return ret;
}
  1. First, obtain the corresponding binder_proc information from filp
  2. Compare its task_struct with the task_struct of the process executing the current kernel code, as a check
  3. Limit the user-space virtual memory size to 4M
  4. Check whether the user space is writable (the buffer the binder driver allocates for the process in user space is read-only)
  5. Set vm_flags so that the vma can be neither written nor copied
  6. Set the vma's set of operation functions
  7. Point the vm_private_data member of vm_area_struct at binder_proc, so the vma's operation functions can retrieve binder_proc
  8. Handle the mapping between the process's virtual memory space and the kernel's virtual address space
  9. Get the process's open-file information structure files_struct, point binder_proc's files at it, and increment its reference count by one

binder_alloc_mmap_handler

binder_alloc_mmap_handler maps the process's virtual memory space to the kernel's virtual address space; it is implemented in drivers/android/binder_alloc.c.

We have already seen that vm_area_struct represents a virtual address space in a user process; correspondingly, vm_struct represents a virtual address space in the kernel.

int binder_alloc_mmap_handler(struct binder_alloc *alloc, struct vm_area_struct *vma)
{
	int ret;
	struct vm_struct *area;
	const char *failure_string;
	struct binder_buffer *buffer;

	mutex_lock(&binder_alloc_mmap_lock);
        // Check whether the kernel buffer has been allocated
	if (alloc->buffer) {
		ret = -EBUSY;
		failure_string = "already mapped";
		goto err_already_mapped;
	}
        // Get a kernel virtual space
	area = get_vm_area(vma->vm_end - vma->vm_start, VM_ALLOC);
	if (area == NULL) {
		ret = -ENOMEM;
		failure_string = "get_vm_area";
		goto err_get_vm_area_failed;
	}
        //alloc->buffer points to the kernel virtual memory space address
	alloc->buffer = area->addr;
        // Calculate the offset from the linear address of the user virtual space to the linear address of the kernel virtual space
	alloc->user_buffer_offset =
		vma->vm_start - (uintptr_t)alloc->buffer;
	mutex_unlock(&binder_alloc_mmap_lock);
	...
        // Request memory for the pages array
	alloc->pages = kzalloc(sizeof(alloc->pages[0]) *
				   ((vma->vm_end - vma->vm_start) / PAGE_SIZE),
			       GFP_KERNEL);
	if (alloc->pages == NULL) {
		ret = -ENOMEM;
		failure_string = "alloc page array";
		goto err_alloc_pages_failed;
	}
        // The buffer size is equal to the VMA size
	alloc->buffer_size = vma->vm_end - vma->vm_start;

	buffer = kzalloc(sizeof(*buffer), GFP_KERNEL);
	if (!buffer) {
		ret = -ENOMEM;
		failure_string = "alloc buffer struct";
		goto err_alloc_buf_struct_failed;
	}
        // point to the kernel virtual space address
	buffer->data = alloc->buffer;
        // Add buffer to the linked list
	list_add(&buffer->entry, &alloc->buffers);
	buffer->free = 1;
        // Add this kernel buffer to binder_alloc's free buffer red-black tree
	binder_insert_free_buffer(alloc, buffer);
        // Set the maximum size of the asynchronous transaction buffer available to the process (to prevent asynchronous transactions from consuming too much of the kernel buffer)
	alloc->free_async_space = alloc->buffer_size / 2;
        // Memory barrier to ensure that instructions are executed sequentially
	barrier();
        // Set the remaining binder_alloc fields
	alloc->vma = vma;
	alloc->vma_vm_mm = vma->vm_mm;
	// Reference count +1
	atomic_inc(&alloc->vma_vm_mm->mm_count);

	return 0;

	... // Error handling
}
Copy the code
  1. Check whether the kernel buffer has already been allocated
  2. Find an available block of virtual address space in the kernel
  3. Save this kernel virtual address in binder_alloc
  4. Calculate the offset from the user-space linear address to the kernel-space linear address (this makes it very easy to convert between user and kernel virtual addresses)
  5. Allocate memory for the alloc->pages array, whose size equals the number of page frames the vma can hold
  6. Set the buffer size equal to the size of the vma
  7. Allocate a binder_buffer, fill in its fields to point at the kernel virtual address, and add it to the linked list and the red-black tree
  8. Set the remaining binder_alloc fields

Note that although we calculated the offset between the user-space linear address and the kernel-space linear address, no actual mapping has been established yet. In earlier kernel versions, binder_mmap called the binder_update_page_range function here to map kernel virtual memory and process virtual memory to physical memory; in 4.4.223 this step is deferred, so we will come back to it after binder_ioctl.

After the physical memory mapping is complete (taking a 32-bit system with a 4M buffer as an example), the user-space range starting at vm_start and the kernel-space range starting at alloc->buffer are backed by the same physical pages.

Conclusion

We've seen how the binder driver is registered, analyzed the binder_open and binder_mmap operation functions, learned some important structures, and seen how mmap connects user space and kernel space. In the next chapter we examine binder_ioctl, the most important part of the binder driver.

References

  • Misc devices in Linux
  • Memory mapping and VMA
  • Binder Driver Initializer Mapping for Android
  • Initial exploration of Binder Driver series 1
  • Linux 4.16 Binder Drivers learning Notes ——– Interface brief