Preface
When it comes to the advantages of Binder over traditional inter-process communication, it is often said that Binder needs only “one copy” as opposed to “two copies”. This is indeed an advantage of Binder, but two questions arise on closer reflection:
- Where exactly does this so-called “one copy” take place?
- What exactly is being copied?
Many articles about Binder cite the “one copy” advantage, but they either skip over these two questions or answer them only partially correctly, which causes confusion.
This article sets out to answer both questions properly, so you will need a general understanding of the Binder driver and its source code, or at least of my previous article, “Talking About Learning Binder.”
So let’s explore the source code with these two questions in mind.
The source code
Before diving into the source code, a quick Binder primer:
- Binder’s mmap happens in the constructor of ProcessState, i.e., there is a single memory mapping per process, roughly 1 MB in size.
- Kernel space reads and writes user-space data through two functions: copy_from_user(), which copies data from user space into kernel space, and copy_to_user(), which copies data from kernel space out to user space. A sketch of their signatures follows below.
- The Binder driver source contains numerous calls to both functions. For each call, we need to figure out what is being copied and where it is being copied to.
- To stay focused on the “one copy” question, the source excerpts below concentrate on the code related to memory operations and skip the rest.
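For reference, a minimal sketch of the two kernel helpers’ signatures (simplified from <linux/uaccess.h>); note that both return the number of bytes that could *not* be copied, so 0 means success:

```cpp
#define __user  // kernel annotation; expands to nothing outside the sparse checker

// Copy n bytes from a user-space address into a kernel buffer.
unsigned long copy_from_user(void *to, const void __user *from, unsigned long n);
// Copy n bytes from a kernel buffer out to a user-space address.
unsigned long copy_to_user(void __user *to, const void *from, unsigned long n);
```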
Let’s start with memory mapping
Binder memory mapping
ProcessState constructor
```cpp
ProcessState::ProcessState(const char *driver)
{
    ...
    if (mDriverFD >= 0) {
        // mmap the binder, providing a chunk of virtual address space
        // to receive transactions.
        mVMStart = mmap(nullptr, BINDER_VM_SIZE, PROT_READ,
                        MAP_PRIVATE | MAP_NORESERVE, mDriverFD, 0);
    }
    ...
}
```
Note the comment above the mmap() call: it states clearly that the memory mapping is only used to receive transactions. In other words, writing data to the driver does not involve the memory mapping at all. Keep that in mind.
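For scale, BINDER_VM_SIZE is defined in frameworks/native/libs/binder/ProcessState.cpp as just under 1 MB (recent versions use the page size; older releases hard-code 4096):

```cpp
#include <unistd.h>

// 1 MB minus two pages: the receive-only address space each process maps.
#define BINDER_VM_SIZE ((1 * 1024 * 1024) - sysconf(_SC_PAGE_SIZE) * 2)
```

Now let’s look at the corresponding call in kernel space: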
binder_mmap
```c
static int binder_mmap(struct file *filp, struct vm_area_struct *vma)
{
    ...
    ret = binder_alloc_mmap_handler(&proc->alloc, vma);
    ...
}
```
The actual mapping is done by binder_alloc_mmap_handler(). For now we only need to remember its first argument, &proc->alloc: through this structure we can later find the mapped memory block. A simplified sketch of it follows below.
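A simplified, compilable sketch of struct binder_alloc (field subset, names following the ~4.14-era drivers/android/binder_alloc.h; kernel-internal types omitted). One instance lives in each process’s binder_proc:

```cpp
#include <cstddef>

// Everything needed to manage one process's mapped buffer area.
struct binder_alloc {
    void      *buffer;             // kernel virtual address of the mapped area
    ptrdiff_t  user_buffer_offset; // user-space address minus kernel address
    size_t     buffer_size;        // size of the mapping (about 1 MB)
    size_t     free_async_space;   // budget reserved for async transactions
    // ... plus a mutex, the vma, and rb-trees of free/allocated
    // binder_buffer chunks carved out of the area ...
};
```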
Now that the memory mapping is in place, let’s look at where a Binder transfer actually uses this memory.
Binder transfer process
We only care about memory and data during the transfer.
Initiator user space
In fact, the initiator’s user-space code essentially wraps the data in successive layers of packaging; pay attention to what goes into each layer.
IPCThreadState::writeTransactionData
```cpp
// The data we want to transfer is in the `data` parameter
status_t IPCThreadState::writeTransactionData(..., const Parcel& data, ...)
{
    binder_transaction_data tr;
    ...
    // tr.data.ptr.buffer holds a pointer to the data, not the data itself
    tr.data_size = data.ipcDataSize();
    tr.data.ptr.buffer = data.ipcData();
    tr.offsets_size = data.ipcObjectsCount() * sizeof(binder_size_t);
    tr.data.ptr.offsets = data.ipcObjects();
    ...
    // write tr into mOut
    mOut.writeInt32(cmd);
    mOut.write(&tr, sizeof(tr));

    return NO_ERROR;
}
```
Note that tr holds only pointers to the data. tr itself is then written into the mOut Parcel. A simplified sketch of binder_transaction_data follows below.
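For reference, a simplified sketch of the structure (following the UAPI header linux/android/binder.h, 64-bit layout); the key point is that the payload is not embedded, only its address and size:

```cpp
#include <cstdint>
#include <sys/types.h>

using binder_size_t    = uint64_t;
using binder_uintptr_t = uint64_t;

struct binder_transaction_data {
    union {
        uint32_t         handle;  // target: reference to a remote binder
        binder_uintptr_t ptr;     // target: a binder local to this process
    } target;
    binder_uintptr_t cookie;      // target object cookie (BBinder*)
    uint32_t         code;        // transaction command code
    uint32_t         flags;       // e.g. TF_ONE_WAY
    pid_t            sender_pid;
    uid_t            sender_euid;
    binder_size_t    data_size;    // size of the payload
    binder_size_t    offsets_size; // size of the object-offset array
    union {
        struct {
            binder_uintptr_t buffer;  // -> the payload (data.ipcData())
            binder_uintptr_t offsets; // -> offsets of embedded binder objects
        } ptr;
        uint8_t buf[8];
    } data;
};
```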
IPCThreadState::talkWithDriver
```cpp
status_t IPCThreadState::talkWithDriver(bool doReceive)
{
    ...
    binder_write_read bwr;
    ...
    bwr.write_size = outAvail;
    bwr.write_buffer = (uintptr_t)mOut.data();
    ...
    bwr.write_consumed = 0;
    ...
    ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr) >= 0
    ...
}
```
So there is one more layer, bwr, whose write_buffer field holds a pointer to mOut.data(), which in this case points to tr. Therefore, on the initiator side:
- bwr contains a pointer to tr;
- tr contains pointers to the data.
Both the bwr structure and the way it is handed to the driver are sketched below.
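A simplified sketch of binder_write_read (following the UAPI header linux/android/binder.h), with each field mapped to what we just saw:

```cpp
#include <cstdint>

using binder_size_t    = uint64_t;  // 64-bit build
using binder_uintptr_t = uint64_t;

struct binder_write_read {
    binder_size_t    write_size;     // bytes available at write_buffer
    binder_size_t    write_consumed; // filled in by the driver
    binder_uintptr_t write_buffer;   // -> mOut.data(), i.e. cmd + tr
    binder_size_t    read_size;      // bytes available at read_buffer
    binder_size_t    read_consumed;  // filled in by the driver
    binder_uintptr_t read_buffer;    // -> mIn.data()
};
```

And a hypothetical, minimal sketch of what libbinder effectively does with it (error handling omitted; the structs and commands come from the real UAPI header):

```cpp
#include <cstdint>
#include <fcntl.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <linux/android/binder.h>  // binder_write_read, binder_transaction_data, BC_*

int main() {
    int fd = open("/dev/binder", O_RDWR | O_CLOEXEC);

    // The mapping that will only ever *receive* transactions.
    mmap(nullptr, 1024 * 1024 - 2 * 4096, PROT_READ,
         MAP_PRIVATE | MAP_NORESERVE, fd, 0);

    // Layer 1: cmd + tr, what writeTransactionData() puts in mOut.
    struct __attribute__((packed)) {
        uint32_t cmd;
        binder_transaction_data tr;  // holds pointers to the payload
    } writebuf = {};
    writebuf.cmd = BC_TRANSACTION;
    // ... fill writebuf.tr: target handle, code, data.ptr.buffer, sizes ...

    // Layer 2: bwr, what talkWithDriver() hands to ioctl().
    binder_write_read bwr = {};
    bwr.write_size   = sizeof(writebuf);
    bwr.write_buffer = reinterpret_cast<uintptr_t>(&writebuf);  // bwr -> tr
    ioctl(fd, BINDER_WRITE_READ, &bwr);

    close(fd);
    return 0;
}
```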
With those two pointer layers in mind, let’s look at how kernel space unwraps the package:
Initiator kernel space
binder_ioctl_write_read
```c
static int binder_ioctl_write_read(struct file *filp,
                                   unsigned int cmd, unsigned long arg,
                                   struct binder_thread **threadp)
{
    ...
    void __user *ubuf = (void __user *)arg;
    struct binder_write_read bwr;
    ...
    if (copy_from_user(&bwr, ubuf, sizeof(bwr))) {
        ret = -EFAULT;
        goto out;
    }
    ...
    ret = binder_thread_write(proc, *threadp,
                              bwr.write_buffer, bwr.write_size,
                              &bwr.write_consumed);
    ...
}
```
Here we have our first copy_from_user() call, which copies the user-space bwr into kernel space. Note, however, that the first argument of copy_from_user() is the destination of the copy, given here as &bwr, a local structure inside the function. Clearly this has nothing to do with the memory mapping. Next we enter binder_thread_write(), and its argument is bwr.write_buffer. Look back at the user-space code: doesn’t it point to tr?
binder_thread_write
```c
static int binder_thread_write(struct binder_proc *proc,
                               struct binder_thread *thread,
                               binder_uintptr_t binder_buffer, size_t size,
                               binder_size_t *consumed)
{
    void __user *buffer = (void __user *)(uintptr_t)binder_buffer;
    void __user *ptr = buffer + *consumed;
    ...
    /* inside a loop that reads each cmd with get_user() and switches on it */
    case BC_TRANSACTION:
    case BC_REPLY: {
        struct binder_transaction_data tr;

        if (copy_from_user(&tr, ptr, sizeof(tr)))
            return -EFAULT;
        ptr += sizeof(tr);
        binder_transaction(proc, thread, &tr,
                           cmd == BC_REPLY, 0);
        break;
    }
    ...
}
```
Here we come across the second copy_from_user(), which copies the user-space tr (the one sitting in IPCThreadState.mOut) into the kernel. Again, the destination is a local structure, so this copy has nothing to do with the memory mapping either. This brings us to the key function, binder_transaction().
binder_transaction
```c
static void binder_transaction(struct binder_proc *proc,
                               struct binder_thread *thread,
                               struct binder_transaction_data *tr, int reply,
                               binder_size_t extra_buffers_size)
{
    ...
    struct binder_transaction *t;
    ...
    t->buffer = binder_alloc_new_buf(&target_proc->alloc, tr->data_size,
                                     tr->offsets_size, extra_buffers_size,
                                     !reply && (t->flags & TF_ONE_WAY));
    ...
    copy_from_user(t->buffer->data, (const void __user *)(uintptr_t)
                   tr->data.ptr.buffer, tr->data_size);
    ...
    off_start = (binder_size_t *)(t->buffer->data +
                                  ALIGN(tr->data_size, sizeof(void *)));
    offp = off_start;
    ...
    copy_from_user(offp, (const void __user *)(uintptr_t)
                   tr->data.ptr.offsets, tr->offsets_size);
    ...
}
```
First look at t->buffer, which is assigned the return value of binder_alloc_new_buf(). This is the first time we have seen this function, and as its name suggests, it allocates memory. Look at its first argument, &target_proc->alloc. Recall from binder_mmap() that the memory-mapping information is stored in proc->alloc, and note that here the structure belongs to target_proc, the receiving process. So we can confirm that a chunk of memory has been allocated inside the recipient process’s memory mapping, and t->buffer points into that mapped memory.
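A simplified sketch of what t->buffer points at (struct binder_buffer, field subset only; names vary slightly across kernel versions):

```cpp
#include <cstddef>

// One chunk carved out of the receiver's mapped area.
struct binder_buffer {
    size_t data_size;    // size of the copied payload
    size_t offsets_size; // size of the copied offsets array
    void  *data;         // kernel address inside the receiver's mapping
    // ... plus list/rb-tree linkage, free flags, owning transaction, etc.
};
```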
This brings us to the third copy_from_user() call. Recall that in user space, tr.data.ptr.buffer points to the data we want to transfer. So this copy_from_user() copies the initiator’s user-space data directly into the receiver’s kernel memory mapping. This is the key to “one copy”.
This is followed by one more copy_from_user(), which copies the offsets array describing where the binder objects (the objects that cross the process boundary) sit inside the data. Like the bwr and tr copies, it is tiny compared with the payload, so “one copy” refers to the data copy above.
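To put rough numbers on it (computed from the 64-bit UAPI struct layouts, so treat them as illustrative): struct binder_write_read is 48 bytes (six 8-byte fields) and struct binder_transaction_data is 64 bytes, both fixed-size headers, while the payload can be anything up to roughly 1 MB. Copying those headers a few extra times costs almost nothing; copying the payload is the real cost, and it happens exactly once.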
At this point we have a preliminary answer to the “one copy” question, but to close the loop, let’s follow the second half of the Binder transfer.
Recipient kernel space
binder_thread_read
```c
static int binder_thread_read(struct binder_proc *proc,
                              struct binder_thread **threadp,
                              binder_uintptr_t binder_buffer, size_t size,
                              binder_size_t *consumed, int non_block)
{
    ...
    void __user *buffer = (void __user *)(uintptr_t)binder_buffer;
    void __user *ptr = buffer + *consumed;
    ...
    tr.data_size = t->buffer->data_size;
    tr.offsets_size = t->buffer->offsets_size;
    tr.data.ptr.buffer = (binder_uintptr_t)
        ((uintptr_t)t->buffer->data +
         binder_alloc_get_user_buffer_offset(&proc->alloc));
    tr.data.ptr.offsets = tr.data.ptr.buffer +
        ALIGN(t->buffer->data_size, sizeof(void *));
    ...
    copy_to_user(ptr, &tr, sizeof(tr));
    ...
}
```
Here we have our first copy_to_user() call, which copies tr into the receiver’s user space, namely into IPCThreadState.mIn. Note that tr.data.ptr.buffer is converted from the kernel-space address of the mapped data into the corresponding user-space address, as sketched below.
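On these older kernels, the same physical pages are mapped at a kernel address and a user address a fixed distance apart, so the conversion is a simple addition; the helper essentially just returns that distance (a sketch matching the ~4.14-era binder_alloc.h):

```cpp
#include <cstddef>

struct binder_alloc { ptrdiff_t user_buffer_offset; /* ... */ };

// user_buffer_offset = user-space start of the mapping minus kernel start,
// so kernel_address + offset yields the receiver's user-space address.
static inline ptrdiff_t
binder_alloc_get_user_buffer_offset(struct binder_alloc *alloc)
{
    return alloc->user_buffer_offset;
}
```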
binder_ioctl_write_read
```c
static int binder_ioctl_write_read(struct file *filp,
                                   unsigned int cmd, unsigned long arg,
                                   struct binder_thread **threadp)
{
    ...
    copy_to_user(ubuf, &bwr, sizeof(bwr));
    ...
}
```
Finally, we come across the second copy_to_user(), which copies bwr back into user space. Note that bwr contains a pointer to tr: bwr.read_buffer points at this tr, i.e., at IPCThreadState.mIn.
Recipient user space
Now back to the recipient’s user space:
IPCThreadState::executeCommand
```cpp
status_t IPCThreadState::executeCommand(int32_t cmd)
{
    ...
    case BR_TRANSACTION:
    {
        binder_transaction_data tr;
        result = mIn.read(&tr, sizeof(tr));
        ...
        Parcel buffer;
        buffer.ipcSetDataReference(
            reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
            tr.data_size,
            reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
            tr.offsets_size / sizeof(binder_size_t),
            freeBuffer, this);
        ...
        error = reinterpret_cast<BBinder*>(tr.cookie)->transact(
            tr.code, buffer, &reply, tr.flags);
    }
    ...
}
```
The receiver first reads tr out of mIn. Then tr.data.ptr.buffer, the address where the “one copy” landed, is handed to a Parcel directly, without any further copying. The target BBinder can then have its transact() called to process the data sent by the initiator. The memory mapping really is used only to receive the data a Binder transaction sends.
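Conceptually, ipcSetDataReference() just points the Parcel at an existing buffer instead of copying it. A simplified sketch of the idea (ParcelSketch is hypothetical; the real Parcel::ipcSetDataReference() also stores the object offsets, and the freeBuffer callback it receives eventually sends BC_FREE_BUFFER back to the driver so the mapped chunk can be reused):

```cpp
#include <cstddef>
#include <cstdint>

// A Parcel-like object that *references* externally owned data.
class ParcelSketch {
public:
    using release_func = void (*)(const uint8_t* data, size_t size);

    void ipcSetDataReference(const uint8_t* data, size_t size,
                             release_func relFunc) {
        mData = data;      // no copy: just adopt the caller's pointer
        mDataSize = size;
        mOwner = relFunc;  // how to give the buffer back when done
    }

    ~ParcelSketch() {
        if (mOwner) mOwner(mData, mDataSize);  // e.g. freeBuffer()
    }

private:
    const uint8_t* mData = nullptr;
    size_t mDataSize = 0;
    release_func mOwner = nullptr;
};
```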
Conclusion
I believe you now have a preliminary answer to the two “one copy” questions (where the copy happens, and what is copied). Here is a diagram summarizing the process:
(Figure: the copy_from_user and copy_to_user calls discussed in this article; the diagonal green arrow marks where the “one copy” takes place, and the two green blocks on the receiver’s side represent the memory mapping.)
Without careful study, the sheer number of copy_from_user() and copy_to_user() calls in the Binder driver source can easily blur one’s understanding of “one copy” and of the role memory mapping plays in Binder communication. Once traced through, though, the mechanism is not complicated. I hope this article helps.