Introduction to interprocess communication

The concept of interprocess communication

Interprocess communication (IPC) is the transfer or exchange of information between different processes.

The purpose of interprocess communication

  • Data transfer: A process needs to send its data to another process.
  • Resource sharing: Multiple processes share the same resource.
  • Event notification: a process needs to send a message to another process or group of processes to notify it that some event has occurred, for example a child process needs to notify its parent when it terminates.
  • Process control: some processes want complete control over the execution of another process (for example, a debugger). In this case, the controlling process wants to be able to intercept all traps and exceptions of the other process and to learn about its state changes in time.

The nature of interprocess communication

The essence of interprocess communication is to let different processes see the same resource.

Communication between running processes is difficult because processes are independent of one another. This independence shows up mainly at the data level; at the code level, code can be private or shared (as with parent and child processes).

To communicate with each other, processes must rely on a third-party resource. These processes can write or read data to this third-party resource to achieve communication between processes. This third-party resource is actually a memory area provided by the operating system.

Therefore, the essence of interprocess communication is to let different processes see the same resource (memory, file kernel buffers, etc.). Because this resource can be provided by different modules in the operating system, different modes of interprocess communication arise.

Classification of interprocess communication

The pipe

  • Anonymous pipes
  • Named pipes

System V IPC

  • System V message queue
  • System V Shared memory
  • System V semaphore

POSIX IPC

  • Message queues
  • Shared memory
  • Semaphores
  • Mutexes
  • Condition variables
  • Read-write locks

The pipe

What is a pipe

Pipes are the oldest form of interprocess communication in Unix, and we call the flow of data from one process to another a “pipe.”

For example, suppose we want to count the number of users currently logged in on the cloud server:

who | wc -l

Here, the who command and wc are two separate programs. When they run, they become two processes: the who process writes its data into the "pipe" through standard output, and the wc process reads that data from the "pipe" through standard input. The data is thus transferred from one process to the other and then processed further.

Note: The who command is used to view the users currently logged in to the cloud server (one user per line), and wc -l counts the number of lines.

Anonymous pipe

The principle of anonymous pipes

Anonymous pipes are used for interprocess communication, but they are limited to communication between related processes on the same machine, typically a parent process and its child.

The essence of interprocess communication is to let different processes see the same resource. The principle of using an anonymous pipe to communicate between a parent and a child process is to let the two processes see the same open file resource; the parent and child can then write to or read from this file, thereby achieving communication between them.

Note:

  • The same file resource seen by the parent and child processes is maintained by the operating system. Therefore, when the parent and child processes write to the file, the data in the file buffer will not be copied on write.
  • Although the pipe uses a file-based scheme, the operating system does not flush the communication data to disk, because that would involve inefficient and unnecessary I/O. In other words, disk files and memory files do not necessarily correspond one to one; some files exist only in memory and not on disk.

Pipe function

The pipe function is used to create anonymous pipes. The prototype of the pipe function is as follows:

int pipe(int pipefd[2]);

The pipe function takes an output parameter: the array pipefd is used to return two file descriptors that refer to the two ends of the pipe, with pipefd[0] referring to the read end and pipefd[1] to the write end.

The pipe function returns 0 on success and -1 on failure.
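A minimal sketch that only creates a pipe and prints the two descriptors (before any fork):

#include <stdio.h>
#include <unistd.h>

int main()
{
	int pipefd[2] = { 0 };
	if (pipe(pipefd) < 0){
		perror("pipe");
		return 1;
	}
	// pipefd[0] is the read end, pipefd[1] is the write end
	printf("read end fd: %d, write end fd: %d\n", pipefd[0], pipefd[1]);
	close(pipefd[0]);
	close(pipefd[1]);
	return 0;
}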

Anonymous pipe usage steps

The pipe and fork functions are used together to create an anonymous pipe for parent-child communication, as follows:

1. The parent process calls the pipe function to create the pipe.

2. Parent process creates child process.

3. The parent process closes the write end and the child process closes the read end.

Note:

  1. A pipe can communicate only in one direction, so when the parent process creates the child process, the parent process needs to confirm who is reading and who is writing, and then close the corresponding read-write end.
  2. Data written from the write side of the pipe is buffered by the kernel until it is read from the read side of the pipe.

We can look at these three steps from the point of view of the file descriptor:

1. The parent process calls the pipe function to create the pipe.

2. Parent process creates child process.

3. The parent process closes the write end and the child process closes the read end.

For example, in the following code, the child writes 10 lines of data to an anonymous pipe, and the parent reads the data from the anonymous pipe.

#include <stdio.h>
#include <unistd.h>
#include <string.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>

int main()
{
	int fd[2] = { 0 };
	if (pipe(fd) < 0){ // use pipe to create an anonymous pipe
		perror("pipe");
		return 1;
	}
	pid_t id = fork(); // create the child process
	if (id == 0){
		//child
		close(fd[0]); // the child closes the read end
		// the child writes data into the pipe
		const char* msg = "hello father, I am child...";
		int count = 10;
		while (count--){
			write(fd[1], msg, strlen(msg));
			sleep(1);
		}
		close(fd[1]); // the child closes the write end when it is done writing
		exit(0);
	}
	//father
	close(fd[1]); // the parent closes the write end
	// the parent reads data from the pipe
	char buff[64];
	while (1){
		ssize_t s = read(fd[0], buff, sizeof(buff) - 1);
		if (s > 0){
			buff[s] = '\0';
			printf("child send to father:%s\n", buff);
		}
		else if (s == 0){
			printf("read file end\n");
			break;
		}
		else{
			printf("read error\n");
			break;
		}
	}
	close(fd[0]); // the parent closes the read end when it is done reading
	waitpid(id, NULL, 0);
	return 0;
}

The running results are as follows:

Pipe read and write rules

The pipe2 function, similar to the PIPE function, is used to create an anonymous pipe.

int pipe2(int pipefd[2], int flags);

The second argument to the pipe2 function sets the options.

1. When no data can be read:

  • O_NONBLOCK disable: The read call blocks, that is, the process pauses until data is available.
  • O_NONBLOCK enable: Read The call returns -1 with the errno value EAGAIN.

2. When the pipe is full:

  • O_NONBLOCK disable: The write call is blocked until some process reads data.
  • O_NONBLOCK enable: The write call returns -1 and the errno value is EAGAIN.

3. If the file descriptors corresponding to all write ends of the pipe are closed, read returns 0.

4. If the file descriptors corresponding to all read ends of the pipe are closed, the write operation generates the signal SIGPIPE, which may cause the writing process to exit.

5. When the amount of data to be written is not greater than PIPE_BUF, Linux guarantees that the write is atomic.

6. When the amount of data to be written is greater than PIPE_BUF, Linux no longer guarantees that the write is atomic.
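As a minimal sketch of the non-blocking behaviour described above (assuming a Linux system where the pipe2 extension is available; the message text is just for illustration):

#define _GNU_SOURCE // pipe2 is a Linux-specific extension
#include <stdio.h>
#include <unistd.h>
#include <fcntl.h>
#include <errno.h>

int main()
{
	int fd[2] = { 0 };
	if (pipe2(fd, O_NONBLOCK) < 0){ // create an anonymous pipe with O_NONBLOCK enabled
		perror("pipe2");
		return 1;
	}
	char buff[64];
	ssize_t s = read(fd[0], buff, sizeof(buff)); // nothing has been written yet
	if (s < 0 && errno == EAGAIN){
		printf("no data in the pipe: read returned -1 with EAGAIN instead of blocking\n");
	}
	close(fd[0]);
	close(fd[1]);
	return 0;
}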

Characteristics of pipeline

1. The pipeline has its own synchronization and mutual exclusion mechanism.

Resources that can be used by only one process at a time are called critical resources. A pipe is a critical resource: it allows only one process to write to it or read from it at any given moment.

Critical resources need to be protected. If pipe critical resources had no protection mechanism at all, multiple processes might operate on the same pipe at the same time, leading to problems such as simultaneous reads and writes, interleaved reads and writes, and inconsistent data being read.

To avoid these problems, the kernel applies synchronization and mutual exclusion to pipe operations:

  • Synchronization: The execution of two or more processes in a coordinated manner, in a predetermined order. For example, task A depends on the data generated by task B.
  • Mutually exclusive: A common resource can be used by only one process at a time. Multiple processes cannot use the common resource at the same time.

In fact, synchronization is a more complex form of mutual exclusion, and mutual exclusion is a special kind of synchronization. In the pipe scenario, mutual exclusion means that two processes cannot operate on the pipe at the same time: they exclude each other, and one must wait for the other to finish before operating. Synchronization also means that the two cannot operate on the pipe at the same time, but in addition the two processes must operate on the pipe in a certain order.

In other words, mutual exclusion is unique and exclusive, but it does not limit the running order of tasks, while there is a clear sequential relationship between synchronous tasks.

2. The life cycle of the pipeline follows the process.

Pipes essentially communicate through files, which means that pipes depend on the file system, and that file will be released when all processes that opened the file exit, so the life of the pipe follows the process.

3. Pipes provide streaming services.

If process A writes data into the pipe, process B can read as much or as little of that data at a time as it wants. This is called a streaming service; the corresponding notion is a datagram service:

  • Streaming service: Data is not divided into specific packet segments.
  • Datagram service: data is clearly segmented, and data is taken by packet segment.

4. Pipes are half-duplex.

In data communication, data can be transmitted in the following three ways:

  1. Simplex Communication: Data transmission in Simplex mode is one-way. In communication, one party is fixed as the sender and the other party is fixed as the receiver.
  2. Half-duplex communication: data can be transmitted in both directions on a signal carrier, but not at the same time.
  3. Full Duplex: Full Duplex allows data to be transmitted simultaneously in two directions and is equivalent to a combination of two simplex modes of communication. Full duplex allows simultaneous (instantaneous) bidirectional signal transmission.

A pipe is half-duplex, so data can only flow in one direction at a time. When the two sides need to communicate in both directions, two pipes need to be created, as shown in the sketch below.
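For instance, a minimal sketch of bidirectional parent-child communication using two anonymous pipes (the "ping"/"pong" messages are just for illustration):

#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

int main()
{
	int ptoc[2] = { 0 }; // parent -> child
	int ctop[2] = { 0 }; // child -> parent
	if (pipe(ptoc) < 0 || pipe(ctop) < 0){
		perror("pipe");
		return 1;
	}
	pid_t id = fork();
	if (id == 0){
		//child: read from ptoc, write to ctop
		close(ptoc[1]);
		close(ctop[0]);
		char buff[64];
		ssize_t s = read(ptoc[0], buff, sizeof(buff) - 1);
		if (s > 0){
			buff[s] = '\0';
			printf("child got: %s\n", buff);
		}
		write(ctop[1], "pong", 4);
		return 0;
	}
	//father: write to ptoc, read from ctop
	close(ptoc[0]);
	close(ctop[1]);
	write(ptoc[1], "ping", 4);
	char buff[64];
	ssize_t s = read(ctop[0], buff, sizeof(buff) - 1);
	if (s > 0){
		buff[s] = '\0';
		printf("parent got: %s\n", buff);
	}
	waitpid(id, NULL, 0);
	return 0;
}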

Four special cases of pipelines

When using pipes, the following four special situations may occur:

  1. If the writing process does not write, the reading process keeps reading. Therefore, the corresponding reading process will be suspended because there is no data in the pipe. The reading process will wake up when there is data in the pipe.
  2. If the reading process does not read, the writing process keeps writing. When the pipe is full, the corresponding writing process will be suspended until the reading process reads the data in the pipe.
  3. The writing process closes the write end after it finishes writing data. The reading process then reads the remaining data in the pipe and continues to execute its subsequent code logic without being suspended.
  4. If the reader process shuts down the reader and the writer process continues writing to the pipe, the operating system kills the writer process.

In the first two cases, the pipe's own synchronization and mutual exclusion mechanism coordinates the reading and writing processes: it is not the case that the reader keeps reading when the pipe has run out of data, or that the writer keeps writing when the pipe is full. The reading process reads when there is data in the pipe, and the writing process writes when there is room in the pipe; if the condition is not met, the corresponding process is suspended and is not woken up until the condition is met again.

The third case is also easy to understand. The reader process has read all the data in the pipe, and there are no more writers to write to, so the reader process can execute the rest of the process logic without being suspended.

The fourth case is also easy to understand: since no process reads from the pipe any more, there is no point in the writing process continuing to write, so the operating system simply kills the writing process. The child process's code is terminated before it finishes running, which is an abnormal exit, so the child process must have received some signal.

We can use the following code to see what signal is received when the child process exits in case 4.

#include <stdio.h>
#include <unistd.h>
#include <string.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>

int main()
{
	int fd[2] = { 0 };
	if (pipe(fd) < 0){ // use pipe to create an anonymous pipe
		perror("pipe");
		return 1;
	}
	pid_t id = fork(); // create the child process
	if (id == 0){
		//child
		close(fd[0]); // the child closes the read end
		// the child keeps writing data into the pipe
		const char* msg = "hello father, I am child...";
		int count = 10;
		while (count--){
			write(fd[1], msg, strlen(msg));
			sleep(1);
		}
		close(fd[1]);
		exit(0);
	}
	//father
	close(fd[1]); // the parent closes the write end
	close(fd[0]); // the parent also closes the read end, so nobody reads the pipe any more
	int status = 0;
	waitpid(id, &status, 0);
	printf("child get signal:%d\n", status & 0x7F); // print the signal that terminated the child
	return 0;
}

The result shows that the child process received signal no. 13 when exiting.

Run the kill -l command to view signals corresponding to 13.

[cl@VM-0-15-centos nonamepipe]$ kill -l

In case 4, the operating system sends the SIGPIPE signal to terminate the child process.

Pipe size

The capacity of the pipe is limited. If the pipe is full, the write side will block or fail. What is the maximum capacity of the pipe?

Method 1: Use the MAN manual

According to the MAN manual, in versions of Linux prior to 2.6.11, the maximum size of a pipe was the same as the size of a system page, and after Linux 2.6.11, the maximum size of a pipe was 65536 bytes.

We can then use the uname -r command to check the version of Linux we are using.

According to the MAN manual, I’m using Linux after 2.6.11, so the maximum capacity of the pipe is 65536 bytes.

Method 2: Run the ulimit command

Second, we can use the ulimit -a command to view the current resource limit Settings.

According to the output, the pipe size shown here is 512 × 8 = 4096 bytes.

Method 3: Test it yourself

Here we see that the pipe capacity given by the man manual differs from the pipe capacity suggested by the ulimit command, so we can simply test it ourselves.

As mentioned earlier, if the reader process does not read data in the pipe, the writer process keeps writing data to the pipe. When the pipe is full, the writer process is suspended. From this, we can write the following code to test the maximum capacity of the pipe.

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

int main()
{
	int fd[2] = { 0 };
	if (pipe(fd) < 0){ // use pipe to create an anonymous pipe
		perror("pipe");
		return 1;
	}
	pid_t id = fork(); // create the child process
	if (id == 0){
		//child
		close(fd[0]); // the child closes the read end
		// the child writes into the pipe one byte at a time
		char c = 'a';
		int count = 0;
		while (1){
			write(fd[1], &c, 1);
			count++;
			printf("%d\n", count); // print the number of bytes written so far
		}
		close(fd[1]);
		exit(0);
	}
	//father
	close(fd[1]); // the parent closes the write end
	// the parent process never reads from the pipe
	waitpid(id, NULL, 0);
	close(fd[0]);
	return 0;
}

As you can see, when the reading process does not read, the writing process can write at most 65536 bytes of data before it is suspended by the operating system. That is to say, on my current Linux version the maximum capacity of the pipe is 65536 bytes.

A named pipe

The principle of named pipes

Anonymous pipes can only be used for communication between processes that have a common ancestor (related processes). Typically, a pipe is created by a process which then calls fork, after which the pipe can be used between the parent and child processes. If you want to implement communication between two unrelated processes, you can use a named pipe. A named pipe is a special type of file: two processes open the same pipe file by the named pipe's filename, thereby seeing the same resource, and can then communicate with each other.

Note:

  1. It is difficult to communicate through ordinary files, and even if you manage to, some safety problems cannot be solved.
  2. Named pipes, like anonymous pipes, are memory files, but a named pipe has a simple image on disk whose size is always zero, because neither named pipes nor anonymous pipes flush communication data to disk.

Use the command to create a named pipe

We can create a named pipe using the mkfifo command.

[cl@VM-0-15-centos fifo]$ mkfifo fifo

As you can see, the created file is of type P, indicating that it is a named pipe file.

Using this named pipe file, two processes can communicate. In one process (process A) we use a shell script to write a string into the named pipe once per second, and in another process (process B) we use the cat command to read from the named pipe.

The phenomenon is that when process A starts, process B reads one string per second from the named pipe to print to the display. This proves that these two unrelated processes can transfer data, that is, communicate, through named pipes.

As mentioned earlier, once the reading process of a pipe exits, it is no longer useful for the writing process to write data to the pipe, and the writing process will be killed by the operating system. Since the looping script on the write side is executed by the command-line interpreter bash, when we terminate the read-side process, bash is killed by the operating system and our cloud server session exits.

Create a named pipe

To create a named pipe inside a program, we use the mkfifo function. The prototype of the mkfifo function is as follows:

int mkfifo(const char *pathname, mode_t mode);

The first argument to the mkfifo function, pathname, represents the named pipe file to be created.

  • If pathname is given as a path, the named pipe file is created under that path.
  • If pathname is given as just a filename, the named pipe file is created in the current path by default. (Note the meaning of the current path.)

The second argument to the mkfifo function, mode, represents the default permissions of the named pipe file to be created.

For example, if mode is set to 0666, the named pipe file is created with the following permissions:

However, the permissions of created files are affected by the umask (default file mask). The actual permissions of created files are: mode&(~umask). The default value of umask is 0002. When we set mode to 0666, the actual permissions of the created file are 0664.

If you want to create named pipe files that are not affected by umask, you need to use umask to set the default mask of the file to 0 before creating the file.

umask(0); // Set the default file mask to 0

Return value of the mkFIFo function.

  • The named pipe was created successfully, return 0.
  • Failed to create named pipe, return -1.

Example of creating a named pipe:

Use the following code to create a named pipe named myfifo in the current path.

#include <stdio.h>
#include <sys/types.h>
#include <sys/stat.h>

#define FILE_NAME "myfifo"

int main()
{
	umask(0); // set the file mask to 0
	if (mkfifo(FILE_NAME, 0666) < 0){ // create the named pipe file
		perror("mkfifo");
		return 1;
	}
	//create success...
	return 0;
}

After running the code, the named pipe myfifo is created under the current path.

Open rules for named pipes

1. If the current open operation is to open the FIFO for reading:

  • O_NONBLOCK disable: blocks until a corresponding process opens the FIFO for writing.
  • O_NONBLOCK enable: returns success immediately.

2. If the current open operation is to open the FIFO for writing:

  • O_NONBLOCK disable: blocks until a corresponding process opens the FIFO for reading.
  • O_NONBLOCK enable: returns failure immediately, with the error code ENXIO.
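A minimal sketch illustrating these open rules under O_NONBLOCK (the FIFO name testfifo is just an assumption for this example):

#include <stdio.h>
#include <errno.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/stat.h>

int main()
{
	mkfifo("testfifo", 0666); // for this sketch, ignore the error if it already exists
	// opening for writing with O_NONBLOCK fails with ENXIO when no reader has the FIFO open
	int wfd = open("testfifo", O_WRONLY | O_NONBLOCK);
	if (wfd < 0 && errno == ENXIO){
		printf("no reader yet, the write-side open failed with ENXIO\n");
	}
	// opening for reading with O_NONBLOCK returns immediately even without a writer
	int rfd = open("testfifo", O_RDONLY | O_NONBLOCK);
	if (rfd >= 0){
		printf("the read-side open returned immediately\n");
		close(rfd);
	}
	unlink("testfifo"); // remove the named pipe file created for the sketch
	return 0;
}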

Server & Client communication with named pipes

Before the server and the client can communicate, we need to get the server running: after starting, the server creates the named pipe file and then opens it in read mode, after which the server can read the information sent by the client from the named pipe.

The code for the server side is as follows:

#include "comm.h"

int main()
{
	umask(0); // set the file mask to 0
	if (mkfifo(FILE_NAME, 0666) < 0){ // create the named pipe file
		perror("mkfifo");
		return 1;
	}
	int fd = open(FILE_NAME, O_RDONLY); // open the named pipe file for reading
	if (fd < 0){
		perror("open");
		return 2;
	}
	char msg[128];
	while (1){
		msg[0] = '\0'; // clear the buffer
		ssize_t s = read(fd, msg, sizeof(msg)-1); // read data from the named pipe
		if (s > 0){
			msg[s] = '\0'; // manually append '\0'
			printf("client# %s\n", msg); // print the message sent by the client
		}
		else if (s == 0){
			printf("client quit!\n");
			break;
		}
		else{
			printf("read error!\n");
			break;
		}
	}
	close(fd); // close the named pipe file
	return 0;
}

As for the client, the named pipe file already exists once the server is running, so the client only needs to open the named pipe file in write mode; it can then write its messages into the named pipe and thus communicate with the server.

The client code is as follows:

#include "comm.h"

int main()
{
	int fd = open(FILE_NAME, O_WRONLY); // open the named pipe file for writing
	if (fd < 0){
		perror("open");
		return 1;
	}
	char msg[128];
	while (1){
		msg[0] = '\0'; // clear the buffer
		printf("Please Enter# "); // prompt the client for input
		fflush(stdout);
		ssize_t s = read(0, msg, sizeof(msg)-1); // read a line from standard input
		if (s > 0){
			msg[s - 1] = '\0'; // replace the trailing '\n' with '\0'
			write(fd, msg, strlen(msg)); // write the message into the named pipe
		}
	}
	close(fd); // close the named pipe file
	return 0;
}

To make the client and the server use the same named pipe file, we can have both of them include the same header file, which provides the filename of the shared named pipe. In this way, the client and the server can open the same named pipe file through that filename and then communicate.

The code for the shared header is as follows:

#pragma once

#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <string.h>
#include <fcntl.h>

#define FILE_NAME "myfifo" // let the client and the server use the same named pipe

After the code is written, run the server process first; we can then see the named pipe file created by the server.

Then run the client. Now the information we enter on the client side is written into the named pipe, and the server reads that information from the named pipe and prints it on the display. This shows that the server can obtain the information sent by the client through the named pipe; in other words, the two processes can now communicate.

When the client and the server are both running, we can also look at the two processes with the ps command. We can see that they really are two unrelated processes, since neither one's PID appears as the other's PPID. This proves that a named pipe can be used for communication between two unrelated processes.

Exit relationship between server and client

When the client exits, the server finishes reading the remaining data in the pipe, can no longer read anything, and then goes on to execute the rest of its code (in the current example, it simply exits).

When the server exits, the next time the client writes to the pipe it receives the SIGPIPE signal (signal 13) from the operating system, and the client is forcibly killed by the operating system.

Communication takes place in memory

Would the size of the pipe file change if we only had the client write to the pipe and the server didn’t read from the pipe?

#include "comm.h"

int main()
{
	umask(0); // set the file mask to 0
	if (mkfifo(FILE_NAME, 0666) < 0){ // create the named pipe file
		perror("mkfifo");
		return 1;
	}
	int fd = open(FILE_NAME, O_RDONLY); // open the named pipe file for reading
	if (fd < 0){
		perror("open");
		return 2;
	}
	while (1){
		// the server deliberately never reads from the pipe
	}
	close(fd); // close the named pipe file
	return 0;
}

As you can see, although the server does not read the data in the pipe, the data in the pipe is not flushed to disk. Using ll, you can see that the size of the named pipe file is still 0, which means that the communication between the two processes is still in memory, just like anonymous pipe communication.

Distribute computing tasks using named pipes

Note that the communication between the two processes need not be limited to plain strings; the server can also do some processing on the information sent by the client.

For example, the client can send a calculation task to the server: the client sends a two-operand calculation request to the server through the pipe, and the server computes the corresponding result after receiving the request.

Here we don’t need to change the code on the client side, just the logic that processes the communication on the server side.

#include "comm.h"
#include <stdlib.h> // for atoi

int main()
{
	umask(0); // set the file mask to 0
	if (mkfifo(FILE_NAME, 0666) < 0){ // create the named pipe file
		perror("mkfifo");
		return 1;
	}
	int fd = open(FILE_NAME, O_RDONLY); // open the named pipe file for reading
	if (fd < 0){
		perror("open");
		return 2;
	}
	char msg[128];
	while (1){
		msg[0] = '\0'; // clear the buffer
		ssize_t s = read(fd, msg, sizeof(msg)-1); // read the request from the named pipe
		if (s > 0){
			msg[s] = '\0'; // manually append '\0'
			printf("client# %s\n", msg); // print the request sent by the client
			// find out which operator the request contains
			char* lable = "+-*/%";
			char* p = msg;
			int flag = 0;
			while (*p){
				switch (*p){
				case '+': flag = 0; break;
				case '-': flag = 1; break;
				case '*': flag = 2; break;
				case '/': flag = 3; break;
				case '%': flag = 4; break;
				}
				p++;
			}
			// split out the two operands
			char* data1 = strtok(msg, "+-*/%");
			char* data2 = strtok(NULL, "+-*/%");
			int num1 = atoi(data1);
			int num2 = atoi(data2);
			int ret = 0;
			switch (flag){
			case 0: ret = num1 + num2; break;
			case 1: ret = num1 - num2; break;
			case 2: ret = num1 * num2; break;
			case 3: ret = num1 / num2; break;
			case 4: ret = num1 % num2; break;
			}
			printf("%d %c %d = %d\n", num1, lable[flag], num2, ret); // print the result
		}
		else if (s == 0){
			printf("client quit!\n");
			break;
		}
		else{
			printf("read error!\n");
			break;
		}
	}
	close(fd); // close the named pipe file
	return 0;
}

When the server receives the information from the client, the processing action is not to print it to the display, but to further process the information, so as to get the corresponding results.

Process remote control with named pipes

Interestingly, we can use one process to control the behavior of another, for example, we can input commands from the client into the pipe and then have the server read the commands from the pipe and execute them.

Below we only implement the server to execute the command without the option, if we want the server to execute the command with the option, we can parse the command obtained in the pipe. The implementation is as simple as having the server read the command from the pipe, create a child process, and then replace the process.

There is also no need to change the code on the client side, just the logic on the server side that processes the communication.

#include "comm.h"
#include <stdlib.h>   // for exit
#include <sys/wait.h> // for waitpid

int main()
{
	umask(0); // set the file mask to 0
	if (mkfifo(FILE_NAME, 0666) < 0){ // create the named pipe file
		perror("mkfifo");
		return 1;
	}
	int fd = open(FILE_NAME, O_RDONLY); // open the named pipe file for reading
	if (fd < 0){
		perror("open");
		return 2;
	}
	char msg[128];
	while (1){
		msg[0] = '\0'; // clear the buffer
		ssize_t s = read(fd, msg, sizeof(msg)-1); // read the command from the named pipe
		if (s > 0){
			msg[s] = '\0'; // manually append '\0'
			printf("client# %s\n", msg); // print the command sent by the client
			if (fork() == 0){
				//child
				execlp(msg, msg, NULL); // process replacement: execute the command
				exit(1);
			}
			waitpid(-1, NULL, 0); // wait for the child process
		}
		else if (s == 0){
			printf("client quit!\n");
			break;
		}
		else{
			printf("read error!\n");
			break;
		}
	}
	close(fd); // close the named pipe file
	return 0;
}

After receiving the information from the client, the server replaces the process and executes the command sent by the client.

Copy files with named pipes

Here we use the named pipe to copy the file.

The file to be copied is file.txt. The contents of the file are as follows:

The client reads the data in file.txt and sends it to the server through the pipe; the server creates a file named file-bat.txt and writes the data obtained from the pipe into file-bat.txt, thereby completing the copy of file.txt.

The server needs to create the named pipe and open it in read mode, create the file file-bat.txt, and then write the data it reads from the pipe into file-bat.txt.

The code for the server side is as follows:

#include "comm.h"

int main()
{
	umask(0); // set the file mask to 0
	if (mkfifo(FILE_NAME, 0666) < 0){ // create the named pipe file
		perror("mkfifo");
		return 1;
	}
	int fd = open(FILE_NAME, O_RDONLY); // open the named pipe file for reading
	if (fd < 0){
		perror("open");
		return 2;
	}
	// create the file file-bat.txt and open it for writing
	int fdout = open("file-bat.txt", O_CREAT | O_WRONLY, 0666);
	if (fdout < 0){
		perror("open");
		return 3;
	}
	char msg[128];
	while (1){
		msg[0] = '\0'; // clear the buffer
		ssize_t s = read(fd, msg, sizeof(msg)-1); // read data from the named pipe
		if (s > 0){
			write(fdout, msg, s); // write the data into file-bat.txt
		}
		else if (s == 0){
			printf("client quit!\n");
			break;
		}
		else{
			printf("read error!\n");
			break;
		}
	}
	close(fd); // close the named pipe file
	close(fdout); // close the file-bat.txt file
	return 0;
}

What the client needs to do is open the existing named pipe file in write mode and open file.txt in read mode; it then reads the data out of file.txt and writes it into the pipe.

The client code is as follows:

#include "comm.h"

int main()
{
	int fd = open(FILE_NAME, O_WRONLY); // open the named pipe file for writing
	if (fd < 0){
		perror("open");
		return 1;
	}
	int fdin = open("file.txt", O_RDONLY); // open file.txt for reading
	if (fdin < 0){
		perror("open");
		return 2;
	}
	char msg[128];
	while (1){
		ssize_t s = read(fdin, msg, sizeof(msg)); // read data from file.txt
		if (s > 0){
			write(fd, msg, s); // write the data into the named pipe
		}
		else if (s == 0){
			printf("read end of file!\n");
			break;
		}
		else{
			printf("read error!\n");
			break;
		}
	}
	close(fd); // close the named pipe file
	close(fdin); // close the file.txt file
	return 0;
}

The code for sharing header files is the same as before, as follows:

#pragma once

#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <string.h>
#include <fcntl.h>

#define FILE_NAME "myfifo" // let the client and the server use the same named pipe

After writing the code, run the server first, then the client, and in a split second the two processes are finished.

At this point, you can see that the file file.txt has been copied.

Run the cat command to print the contents of file-bat.txt; the contents of file-bat.txt are the same as those of file.txt.

What is the point of using pipes to copy files?

Since this is a local file copy done through a pipe, it does not seem to make much sense. But if we think of the pipe as the "network", the client as "Xshell on Windows", and the server as the "CentOS server", then we have implemented file upload; if the direction is reversed, we have implemented file download.

The difference between named pipes and anonymous pipes

  • An anonymous pipe is created and opened by the pipe function.
  • A named pipe is created by the mkfifo function and opened by the open function.
  • The only difference between a FIFO (named pipe) and a pipe (anonymous pipe) lies in how they are created and opened; once that work is done, they have the same semantics.

Pipes in the command line

The existing data.txt file contains the following contents:

We can use the pipe ("|") to combine the cat command with the grep command and thereby filter the text.

[cl@VM-0-15-centos pipe]$ cat data.txt | grep dragon

So is the pipe ("|") on the command line an anonymous pipe or a named pipe?

Since anonymous pipes can only be used for communication between related processes, while named pipes can be used between two unrelated processes, we can first check whether the processes connected by the pipe ("|") on the command line are related.

Below, three processes are connected by pipes ("|"). Looking at them with the ps command, we can see that the three processes have the same PPID; that is to say, they are child processes created by the same parent.

Their parent process is actually the command line interpreter, in this case bash.

That is to say, the processes linked by the pipe ("|") are all related; they are sibling processes.

Furthermore, if a named pipe were being used between two processes, there would have to be a corresponding named pipe file on disk, but no such named pipe file appears when we use these commands, so the pipe on the command line is in fact an anonymous pipe.

System V Interprocess communication

Pipe communication is essentially based on files, meaning the operating system did not have to do much extra design work for it, whereas System V IPC is a communication scheme specifically designed by the operating system. Either way, the essence is the same: let different processes see the same resource provided by the operating system.

The System V IPC provides the following communication modes:

  1. System V Shared memory
  2. System V message queue
  3. System V semaphore

Among them, System V shared memory and System V message queues exist to transfer data, while the System V semaphore is designed to ensure synchronization and mutual exclusion between processes. Although the System V semaphore does not seem directly related to communication, it still belongs to the category of interprocess communication.

To clarify: System V shared memory and System V message queues are similar to mobile phones for communicating messages; A System V semaphore is similar to a chess clock used in a chess game to ensure synchronization and mutual exclusion between two players.

System V Shared memory

The fundamentals of shared memory

The way shared memory lets different processes see the same resource is as follows: a block of memory is allocated in physical memory, and this memory is mapped into each process's page table; a region is also opened up in each process's virtual address space, and the corresponding virtual addresses are filled into the matching page-table entries, establishing the correspondence between virtual addresses and physical addresses. These processes then see the same piece of physical memory, which is called shared memory.

Note: The operations mentioned here, such as allocating physical space and establishing mappings, are all done by calling system interfaces; that is to say, these actions are completed by the operating system.

Shared memory data structures

There may be a large number of communicating processes in the system, and therefore a large number of shared memory segments, so the operating system must manage them. Besides actually allocating space in memory, the system must also maintain a kernel data structure describing each shared memory segment.

The data structure of shared memory is as follows:

struct shmid_ds {
	struct ipc_perm     shm_perm;   /* operation perms */
	int         shm_segsz;  /* size of segment (bytes) */
	__kernel_time_t     shm_atime;  /* last attach time */
	__kernel_time_t     shm_dtime;  /* last detach time */
	__kernel_time_t     shm_ctime;  /* last change time */
	__kernel_ipc_pid_t  shm_cpid;   /* pid of creator */
	__kernel_ipc_pid_t  shm_lpid;   /* pid of last operator */
	unsigned short      shm_nattch; /* no. of current attaches */
	unsigned short      shm_unused; /* compatibility */
	void            *shm_unused2;   /* ditto - used by DIPC */
	void            *shm_unused3;   /* unused */
};

When a shared memory segment is created, it is assigned a key value; this key identifies the shared memory uniquely in the system, so that the communicating processes can find and see the same shared memory. The first member of the shared memory data structure, shm_perm, is a structure variable of type ipc_perm, and the key value of each shared memory segment is stored inside this shm_perm member. The ipc_perm structure is defined as follows:

struct ipc_perm {
	__kernel_key_t  key;
	__kernel_uid_t  uid;
	__kernel_gid_t  gid;
	__kernel_uid_t  cuid;
	__kernel_gid_t  cgid;
	__kernel_mode_t mode;
	unsigned short  seq;
};

For the record, the shared memory data structures shmid_ds and ipc_perm are defined in /usr/include/linux/shm.h and /usr/include/linux/ipc.h, respectively.

Creating and releasing shared memory

The establishment of shared memory generally includes the following two processes:

  1. Apply for shared memory space in physical memory.
  2. Connect the obtained shared memory to the address space to establish a mapping relationship.

The release of shared memory generally includes the following two processes:

  1. To disassociate the shared memory from the address space, that is, cancel the mapping relationship.
  2. To release the shared memory space, return the physical memory to the system.

To create shared memory, we use the shmget function. The prototype of the shmget function is as follows:

int shmget(key_t key, size_t size, int shmflg);

shmget function parameter description:

  • The first parameter key is the unique identifier of the share to be created in the system.
  • The second parameter size indicates the size of the shared memory to be created.
  • The third parameter, SHMFLG, indicates how shared memory is created.

The shmget function returns the following values:

  • The shmget call succeeds, returning a valid shared memory identifier (user-level identifier).
  • The shmget call failed, returning -1.

Note: Something that can identify a resource is called a handle, and the return value of the shmget function is in fact the handle to the shared memory; it identifies the shared memory at the user level. After the shared memory is created, every subsequent shared-memory interface we use needs this handle to specify which shared memory segment to operate on.

The first parameter key passed to the shmget function needs to be obtained with the ftok function.

The prototype of the ftok function is as follows:

key_t ftok(const char *pathname, int proj_id);

The ftok function converts an existing pathname and an integer identifier proj_id into a key value, called an IPC key, which is filled into the data structure that maintains the shared memory when the shmget function is used to obtain it. Note that the file specified by pathname must exist and be accessible.

Note:

  1. Using the ftok function to generate key values may cause conflicts. In this case, you can modify the parameters passed to the ftok function.
  2. All processes that need to communicate with each other need to use the same pathname and integer identifier to obtain the key value using the ftok function, so as to generate the same key value and find the same shared resource.

There are two common ways of combining the third parameter shmflg passed to the shmget function: IPC_CREAT alone, and IPC_CREAT | IPC_EXCL.

In other words:

  • With the combination IPC_CREAT, you are guaranteed to get a handle to the shared memory, but there is no way to confirm whether the shared memory is newly created.
  • Using the combination IPC_CREAT | IPC_EXCL, a handle to the shared memory is obtained only if the shmget call succeeds, and the shared memory obtained is guaranteed to be newly created.

Now we can use the ftok and shmget functions to create a block of shared memory. After creating a block of shared memory, we can print the key value and handle to the shared memory.

#include <stdio.h>
#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/shm.h>
#include <unistd.h>

#define PATHNAME "/home/cl/Linuxcode/IPC/shm/server.c" // pathname
#define PROJ_ID 0x6666 // integer identifier
#define SIZE 4096 // size of the shared memory

int main()
{
	key_t key = ftok(PATHNAME, PROJ_ID); // obtain the key value
	if (key < 0){
		perror("ftok");
		return 1;
	}
	int shm = shmget(key, SIZE, IPC_CREAT | IPC_EXCL); // create a brand-new shared memory segment
	if (shm < 0){
		perror("shmget");
		return 2;
	}
	printf("key: %x\n", key); // print the key value
	printf("shm: %d\n", shm); // print the handle
	return 0;
}

After the code is written and run, we can see the output key and handle values:

In Linux, you can use the ipcs command to view information about interprocess communication facilities.

When using ipcs alone, information about message queues, shared memory, and semaphore is listed by default. If you want to view information about one of them, you can choose to carry the following options:

  • -q: lists information about message queues.
  • -m: lists information about the shared memory.
  • -s: Displays semaphore information.

For example, the -m option is used to view information about the shared memory.

At this point, according to the ipcs command view result and our output, we can confirm that the shared memory has been created successfully.

The meanings of each column in the ipcs command output are as follows:

Note: key guarantees the uniqueness of the shared memory at the kernel level, while shmid guarantees its uniqueness at the user level; the relationship between key and shmid is similar to that between fd and FILE*.

The shared memory is released

From the experiment above we can see that after our process has finished running, the shared memory it applied for still exists and has not been released by the operating system. Pipes depend on the process, but shared memory depends on the kernel; in other words, shared memory that has been created is not released when the process exits.

This indicates that if the process does not actively delete the shared memory created, the shared memory will remain until shutdown and restart (as is the case with System V IPC), and also indicates that IPC resources are provided and maintained by the kernel.

At this point, if we want to release the created shared memory, there are two methods: one is to use the command to release the shared memory, the other is to call the function to release the shared memory after the process communication is finished.

Use commands to release shared memory resources

You can run the ipcrm -m shmid command to release the specified shared memory resource.

[cl@VM-0-15-centos shm]$ ipcrm -m 8

Note: the deletion must specify the user-level identifier of the shared memory, that is, the shmid shown in the listing.

Use programs to release shared memory resources

To control shared memory, we need to use the shmctl function. The prototype of the shmctl function is as follows:

int shmctl(int shmid, int cmd, struct shmid_ds *buf);

shmctl function parameter description:

  1. The first parameter shmid represents the user-level identifier of the shared memory to be controlled.
  2. The second parameter cmd represents the specific control action.
  3. The third parameter buf is used to get or set the data structure of the controlled shared memory.

The shmctl function returns the following values:

  1. The shmctl call succeeds, returning 0.
  2. The shmctl call fails, returning -1.

Three options are commonly passed as the second argument cmd of the shmctl function: IPC_STAT (get the current shmid_ds attributes of the shared memory), IPC_SET (set those attributes, given sufficient permission), and IPC_RMID (mark the shared memory for removal).

For example, in the following code, shared memory is created, and after two seconds the program automatically removes the shared memory, and after two seconds the program automatically exits.

#include <stdio.h>
#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/shm.h>
#include <unistd.h>

#define PATHNAME "/home/cl/Linuxcode/IPC/shm/server.c" // pathname
#define PROJ_ID 0x6666 // integer identifier
#define SIZE 4096 // size of the shared memory

int main()
{
	key_t key = ftok(PATHNAME, PROJ_ID); // obtain the key value
	if (key < 0){
		perror("ftok");
		return 1;
	}
	int shm = shmget(key, SIZE, IPC_CREAT | IPC_EXCL); // create a brand-new shared memory segment
	if (shm < 0){
		perror("shmget");
		return 2;
	}
	printf("key: %x\n", key); // print the key value
	printf("shm: %d\n", shm); // print the handle
	sleep(2);
	shmctl(shm, IPC_RMID, NULL); // release the shared memory
	sleep(2);
	return 0;
}

We can use the following monitoring script to keep an eye on shared memory allocation while the program is running:

[cl@VM-0-15-centos shm]$ while :; do ipcs -m; echo "###################################"; sleep 1; done

Through this monitoring script we can verify that the shared memory is created and then released successfully.

Association of shared memory

To connect shared memory to the process address space, we need to use the shmat function, which has the following prototype:

void *shmat(int shmid, const void *shmaddr, int shmflg);

Shmat function parameter description:

  • The first parameter, shmid, represents the user-level identifier of the shared memory to be associated.
  • The second parameter, shmaddr, specifies that shared memory is mapped to an address in the process’s address space. This parameter is usually set to NULL, indicating that the kernel is left to decide on an appropriate address location.
  • The third parameter, SHMFLG, represents some properties that are set when associated with shared memory.

The shmat function returns the following values:

  • Shmat successfully returns the starting address of the shared memory mapped to the process address space.
  • Shmat call failed, return (void*)-1.

There are three common options passed in as the third argument to the shmat function:

At this point we can try to associate shared memory using the shmat function.

#include <stdio.h>
#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/shm.h>
#include <unistd.h>

#define PATHNAME "/home/cl/Linuxcode/IPC/shm/server.c" // pathname
#define PROJ_ID 0x6666 // integer identifier
#define SIZE 4096 // size of the shared memory

int main()
{
	key_t key = ftok(PATHNAME, PROJ_ID); // obtain the key value
	if (key < 0){
		perror("ftok");
		return 1;
	}
	int shm = shmget(key, SIZE, IPC_CREAT | IPC_EXCL); // create a brand-new shared memory segment
	if (shm < 0){
		perror("shmget");
		return 2;
	}
	printf("key: %x\n", key); // print the key value
	printf("shm: %d\n", shm); // print the handle

	printf("attach begin!\n");
	sleep(2);
	char* mem = shmat(shm, NULL, 0); // associate the shared memory with the process
	if (mem == (void*)-1){
		perror("shmat");
		return 1;
	}
	printf("attach end!\n");
	sleep(2);

	shmctl(shm, IPC_RMID, NULL); // release the shared memory
	return 0;
}

When the shmget function was used to create the shared memory, no permissions were set for it, so the permissions of the created shared memory default to 0; as a result, the process has no permission to associate itself with (attach to) the shared memory, and the shmat call fails.

When creating shared memory using the shmget function, we should set the shared memory creation permissions in the third parameter. The permissions are set according to the same rules as file permissions.

int shm = shmget(key, SIZE, IPC_CREAT | IPC_EXCL | 0666); // create the shared memory with permissions 0666

After running the program again, we can see that the number of processes attached to the shared memory changes from 0 to 1, and the permissions of the shared memory are now 666 instead of 0.

The shared memory was disassociated

To remove the association between the shared memory and the process's address space, we need to use the shmdt function. The prototype of the shmdt function is as follows:

int shmdt(const void *shmaddr);

shmdt function parameter description:

  • shmaddr: the starting address of the shared memory to be disassociated, that is, the starting address returned when shmat was called.

The shmdt function returns the following values:

  • The shmdt call succeeds, returning 0.
  • The shmdt call fails, returning -1.

Now we can unassociate the shared memory from the process.

#include <stdio.h>
#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/shm.h>
#include <unistd.h>

#define PATHNAME "/home/cl/Linuxcode/IPC/shm/server.c" // pathname
#define PROJ_ID 0x6666 // integer identifier
#define SIZE 4096 // size of the shared memory

int main()
{
	key_t key = ftok(PATHNAME, PROJ_ID); // obtain the key value
	if (key < 0){
		perror("ftok");
		return 1;
	}
	int shm = shmget(key, SIZE, IPC_CREAT | IPC_EXCL | 0666); // create a brand-new shared memory segment
	if (shm < 0){
		perror("shmget");
		return 2;
	}
	printf("key: %x\n", key); // print the key value
	printf("shm: %d\n", shm); // print the handle

	printf("attach begin!\n");
	sleep(2);
	char* mem = shmat(shm, NULL, 0); // associate the shared memory with the process
	if (mem == (void*)-1){
		perror("shmat");
		return 1;
	}
	printf("attach end!\n");
	sleep(2);

	printf("detach begin!\n");
	sleep(2);
	shmdt(mem); // disassociate the shared memory from the process
	printf("detach end!\n");
	sleep(2);

	shmctl(shm, IPC_RMID, NULL); // release the shared memory
	return 0;
}

After running the program, the monitoring script shows that the number of processes attached to the shared memory changes from 1 back to 0; that is, the association between the shared memory and the process has been removed.

Note: Disconnecting a shared memory segment from the current process does not remove the shared memory, but disconnects the current process from the shared memory.

Now that you know how shared memory is created, associated, de-associated, and freed, you can try to have two processes communicate over shared memory. Before allowing two processes to communicate, we can test whether they can successfully connect to the same shared memory.

The server is responsible for creating shared memory. After creating shared memory, associate the shared memory with the server, and then enter an infinite loop to check whether the server is successfully connected.

The server code is as follows:

#include "comm.h"

int main()
{
	key_t key = ftok(PATHNAME, PROJ_ID); // obtain the key value
	if (key < 0){
		perror("ftok");
		return 1;
	}
	int shm = shmget(key, SIZE, IPC_CREAT | IPC_EXCL | 0666); // create a brand-new shared memory segment
	if (shm < 0){
		perror("shmget");
		return 2;
	}
	printf("key: %x\n", key); // print the key value
	printf("shm: %d\n", shm); // print the handle
	char* mem = shmat(shm, NULL, 0); // associate the shared memory with the server process
	while (1){
		// do nothing for now
	}
	shmdt(mem); // disassociate the shared memory
	shmctl(shm, IPC_RMID, NULL); // release the shared memory
	return 0;
}

The client only needs to associate with the shared memory created by the server. Then, the client enters an infinite loop to check whether the client is successfully connected.

The client code is as follows:

#include "comm.h"

int main()
{
	key_t key = ftok(PATHNAME, PROJ_ID); // obtain the key value
	if (key < 0){
		perror("ftok");
		return 1;
	}
	int shm = shmget(key, SIZE, IPC_CREAT); // get the shared memory created by the server
	if (shm < 0){
		perror("shmget");
		return 2;
	}
	printf("key: %x\n", key); // print the key value
	printf("shm: %d\n", shm); // print the handle
	char* mem = shmat(shm, NULL, 0); // associate the shared memory with the client process
	while (1){
		// do nothing for now
	}
	shmdt(mem); // disassociate the shared memory
	return 0;
}

To ensure that the server and client can obtain the same key value when using the ftok function, the pathnames and integer identifiers of the ftok function passed by the server and client must be the same. In this way, the same key value can be generated and the same shared resource can be found and connected. Here we can put the shared information into a header file, which is shared by the server and client.

The code for the shared header is as follows:

#include <stdio.h>
#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/shm.h>
#include <unistd.h>

#define PATHNAME "/home/cl/Linuxcode/IPC/shm/server.c" // pathname
#define PROJ_ID 0x6666 // integer identifier
#define SIZE 4096 // size of the shared memory

After running the server and then the client, the monitoring script shows that the server and the client are associated with the same shared memory, and that the number of processes attached to the shared memory is 2; in other words, both the server and the client have successfully attached to the shared memory.

At this point we can have the server and client communicate, using a simple send string as an example.

The client keeps writing data to the shared memory:

int i = 0;
while (1){
	mem[i] = 'A' + i;
	i++;
	mem[i] = '\0';
	sleep(1);
}


The server constantly reads data from shared memory and prints:

while (1){
	printf("client# %s\n", mem);
	sleep(1);
}


At this point, run the server first to create shared memory. When we run the client, the server starts to output data continuously, indicating that the server and the client can communicate normally.

Shared memory is compared to pipes

Once shared memory has been created, communication no longer requires calling system interfaces, whereas a pipe, once created, still needs system interfaces such as read and write for communication. In fact, shared memory is the fastest of all forms of interprocess communication.

Let’s start with pipe communication:

As you can see from this diagram, transferring a file from one process to another requires four copies using piped communication:

  • The server copies information from the input file into a temporary buffer on the server.
  • Copy the server side temporary buffer information into the pipe.
  • The client copies the information from the pipe into the client buffer.
  • Copy the client temporary buffer information into the output file.

Let’s look at shared memory communication:

As you can see from this graph, using shared memory for communication, transferring a file from one process to another requires only two copy operations:

  1. From input files to shared memory.
  2. From shared memory to output files.

So shared memory is the fastest way of communicating between processes because it requires the least number of copies.

But shared memory also has its drawbacks. We know that pipes come with synchronization and mutual exclusion mechanisms, but shared memory does not provide any protection, including synchronization and mutual exclusion.

System V message queue

The fundamentals of message queuing

A message queue is essentially a queue created inside the system. Each member of the queue is a data block, and each data block consists of two parts: a type and the data itself. The two communicating processes see the same message queue in some agreed way; when one of them sends data to the other, it appends a data block to the message queue, and both processes take data blocks from the head of the message queue.

Which process a particular data block in the message queue is destined for depends on the type of the data block.

To sum up:

Message queues provide a way to send blocks of data from one process to another. Each block is considered to have a type, and the block received by the receiver process can have different type values. As with shared memory, resources in message queues must be deleted themselves or they will not be automatically cleared, because the life cycle of System V IPC resources is kernel-dependent.

Message queue data structure

Of course, there may be a large number of message queues in the system, and the system must maintain kernel data structures for message queues.

The data structure of the message queue is as follows:

struct msqid_ds {
	struct ipc_perm msg_perm;
	struct msg *msg_first;      /* first message on queue,unused  */
	struct msg *msg_last;       /* last message in queue,unused */
	__kernel_time_t msg_stime;  /* last msgsnd time */
	__kernel_time_t msg_rtime;  /* last msgrcv time */
	__kernel_time_t msg_ctime;  /* last change time */
	unsigned long  msg_lcbytes; /* Reuse junk fields for 32 bit */
	unsigned long  msg_lqbytes; /* ditto */
	unsigned short msg_cbytes;  /* current number of bytes on queue */
	unsigned short msg_qnum;    /* number of messages in queue */
	unsigned short msg_qbytes;  /* max number of bytes on queue */
	__kernel_ipc_pid_t msg_lspid;   /* pid of last msgsnd */
	__kernel_ipc_pid_t msg_lrpid;   /* last receive pid */
};

The first member of the message queue data structure is msg_perm, which is a structure variable of the same ipc_perm type as shm_perm. The ipc_perm structure is defined as follows:

struct ipc_perm {
	__kernel_key_t  key;
	__kernel_uid_t  uid;
	__kernel_gid_t  gid;
	__kernel_uid_t  cuid;
	__kernel_gid_t  cgid;
	__kernel_mode_t mode;
	unsigned short  seq;
};

For the record, the message queue data structures msqid_ds and ipc_perm are defined in /usr/include/linux/msg.h and /usr/include/linux/ipc.h, respectively.

Creation of message queues

To create a message queue, we need to use the msgget function. The msgget function prototype is as follows:

int msgget(key_t key, int msgflg);

To clarify:

  1. Creating a message queue also requires using the ftok function to generate a key value, which is passed as the first argument to the msgget function.
  2. The second argument msgflg of the msgget function is used in the same way as the third argument of the shmget function used when creating shared memory.
  3. When the message queue is created successfully, the msgget function returns a valid message queue identifier (the user-level identifier).

To release a message queue, we need to use the msgctl function. The prototype of the msgctl function is as follows:

int msgctl(int msqid, int cmd, struct msqid_ds *buf);

The msgctl function takes the same three arguments as the shmctl function used to release shared memory, except that its third argument is the data structure of the message queue.

Sending data to the message queue

To send data to the message queue, we need to use the msgsnd function. The prototype of the msgsnd function is as follows:

int msgsnd(int msqid, const void *msgp, size_t msgsz, int msgflg);

msgsnd function parameter description:

  • The first parameter msqid represents the user-level identifier of the message queue.
  • The second parameter msgp points to the data block to be sent.
  • The third parameter msgsz represents the size of the data block to be sent.
  • The fourth parameter msgflg represents the way the data block is sent; it is usually set to 0.

The msgsnd function returns the following values:

  • The msgsnd call succeeds, returning 0.
  • The msgsnd call fails, returning -1.

The data block passed as the second parameter of the msgsnd function must have the following structure:

struct msgbuf {
	long mtype;       /* message type, must be > 0 */
	char mtext[1];    /* message data */
};

Note: The second member of the structure, mtext, is the message to be sent, and the size of the mtext can be specified when we define the structure.

To get data from a message queue, we need to use the msgrcv function. The prototype of the msgrcv function is as follows:

ssize_t msgrcv(int msqid, void *msgp, size_t msgsz, long msgtyp, int msgflg);

msgrcv function parameter description:

  • The first parameter msqid represents the user-level identifier of the message queue.
  • The second parameter msgp points to the buffer that receives the data block; it is an output parameter.
  • The third parameter msgsz represents the maximum size of the data block to receive.
  • The fourth parameter msgtyp represents the type of data block to receive.

The return value of the msgrcv function is as follows:

  • The msgrcv call succeeds, returning the number of bytes actually copied into the mtext array.
  • The msgrcv call fails, returning -1.
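Putting these interfaces together, a minimal single-process sketch that sends one data block and then receives it (the structure name, message text, pathname, and proj_id are assumptions for illustration) might look like this:

#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/msg.h>

#define PATHNAME "."    // assumed pathname for ftok
#define PROJ_ID  0x6666 // assumed integer identifier

struct mymsg{               // plays the role of the msgbuf template described above
	long mtype;             // message type, must be > 0
	char mtext[64];         // message data
};

int main()
{
	key_t key = ftok(PATHNAME, PROJ_ID); // obtain the key value
	if (key < 0){
		perror("ftok");
		return 1;
	}
	int msqid = msgget(key, IPC_CREAT | 0666); // create (or open) the message queue
	if (msqid < 0){
		perror("msgget");
		return 2;
	}
	struct mymsg out = { 1, "hello message queue" };
	msgsnd(msqid, &out, strlen(out.mtext) + 1, 0); // send a data block of type 1

	struct mymsg in;
	ssize_t s = msgrcv(msqid, &in, sizeof(in.mtext), 1, 0); // receive a data block of type 1
	if (s > 0){
		printf("received: %s\n", in.mtext);
	}
	msgctl(msqid, IPC_RMID, NULL); // release the message queue
	return 0;
}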

System V semaphore

Semaphore related concepts

  • Because processes need to share resources, and some resources must be used exclusively, processes compete for these resources; this relationship between processes is called mutual exclusion between processes.
  • Some resources in the system that can only be used by one process at a time are called critical resources or mutually exclusive resources.
  • The segment of a program that involves a critical resource in a process is called a critical section.
  • IPC resources must be deleted or they will not be automatically deleted because the life cycle of System V IPC follows the kernel.

Semaphore data structure

Kernel data structures are also maintained for semaphores in the system.

The semaphore data structure is as follows:

struct semid_ds {
	struct ipc_perm sem_perm;       /* permissions .. see ipc.h */
	__kernel_time_t sem_otime;      /* last semop time */
	__kernel_time_t sem_ctime;      /* last change time */
	struct sem  *sem_base;      /* ptr to first semaphore in array */
	struct sem_queue *sem_pending;      /* pending operations to be processed */
	struct sem_queue **sem_pending_last;    /* last pending operation */
	struct sem_undo *undo;          /* undo requests on this array */
	unsigned short  sem_nsems;      /* no. of semaphores in array */
};

The first member of the semaphore data structure is likewise a structure variable of type ipc_perm. The ipc_perm structure is defined as follows:

struct ipc_perm {
	__kernel_key_t  key;
	__kernel_uid_t  uid;
	__kernel_gid_t  gid;
	__kernel_uid_t  cuid;
	__kernel_gid_t  cgid;
	__kernel_mode_t mode;
	unsigned short  seq;
};

For the record, the semaphore data structures semid_ds and ipc_perm are defined in /usr/include/linux/sem.h and /usr/include/linux/ipc.h, respectively.

Semaphore correlation function

Creation of a semaphore set

To create a semaphore set, we need to use the semget function. The prototype of the semget function is as follows:

int semget(key_t key, int nsems, int semflg);

To clarify:

  1. Creating a semaphore set also requires using the ftok function to generate a key value, which is passed as the first argument to the semget function.
  2. The second argument nsems of the semget function represents the number of semaphores to create.
  3. The third argument semflg of the semget function is used in the same way as the third argument of the shmget function used when creating shared memory.
  4. When the semaphore set is created successfully, the semget function returns a valid semaphore set identifier (the user-level identifier).

Deletion of semaphore sets

To delete a semaphore set, we need to use the semctl function. The prototype of the semctl function is as follows:

int semctl(int semid, int semnum, int cmd, ...);

Operation of a semaphore set

To operate on a semaphore set, we need the semop function. The prototype of the semop function is as follows:

int semop(int semid, struct sembuf *sops, size_t nsops);

Process mutex

Interprocess communication is realized by sharing resources. This solves the problem of communication, but it also introduces a new problem: the resources shared by the communicating processes are critical resources. If critical resources are not protected, the data that each process obtains from them may be inconsistent.

The essence of protecting a critical resource is to protect the critical section: the code in a process that accesses the critical resource is called the critical section. Semaphores are used to protect critical sections, and they come in two kinds: binary semaphores and counting (multi-valued) semaphores.

For example, for a resource of 100 bytes, if it is divided into four parts of 25 bytes each, this resource can be described by a semaphore whose count is four.

With a binary semaphore, the count is 1 (equivalent to treating the critical resource as a single whole). A binary semaphore essentially solves the mutual exclusion problem for critical resources, as explained by the code below.
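The original article illustrates this with pseudo-code; a minimal sketch of the same idea written against the System V semaphore interface (the P operation corresponds to sem_op = -1 and the V operation to sem_op = +1; the path, proj_id, and the helper names P and V are assumptions for illustration) might look like this:

#include <stdio.h>
#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/sem.h>

// glibc requires the caller to define union semun itself
union semun{
	int val;
	struct semid_ds *buf;
	unsigned short *array;
};

static void P(int semid) // apply for the semaphore: sem--
{
	struct sembuf op = { 0, -1, 0 }; // semaphore 0, subtract 1, block if the value would go below 0
	semop(semid, &op, 1);
}

static void V(int semid) // release the semaphore: sem++
{
	struct sembuf op = { 0, +1, 0 };
	semop(semid, &op, 1);
}

int main()
{
	key_t key = ftok(".", 0x6666);                 // assumed pathname and proj_id
	int semid = semget(key, 1, IPC_CREAT | 0666);  // a set with one binary semaphore
	union semun un;
	un.val = 1;
	semctl(semid, 0, SETVAL, un);                  // initialise the semaphore to 1

	P(semid);
	printf("in the critical section\n");           // only one process can be here at a time
	V(semid);

	semctl(semid, 0, IPC_RMID);                    // delete the semaphore set
	return 0;
}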

According to the code above, when process A applies to access the shared memory resource, if sem is 1 (sem represents the current value of the semaphore), process A acquires the resource and sem is decremented; process A can then perform a series of operations on the shared memory. If process B applies to access the shared memory while process A is still accessing it, sem is 0 at that point, so process B is suspended. When process A finishes accessing the shared memory it increments sem, process B is then woken up, and process B in turn accesses the shared memory.

In this way, only one process accesses the shared memory at any given time, which solves the mutual exclusion problem for the critical resource.

In fact, the decrement of the counter sem in the code is called the P operation, and the increment of the counter is called the V operation: the P operation applies for the semaphore, and the V operation releases it.

What the System V IPC mechanisms have in common

From studying the System V family of interprocess communication, we can see that although shared memory, message queues, and semaphores have very different internal attributes, the first member of the data structure that maintains each of them is the same: a member variable of type ipc_perm.

The advantage of this design is that the operating system can maintain an array of struct ipc_perm elements, and each time we apply for an IPC resource, such a structure is created and recorded in that array.

In other words, in the kernel it is enough to organize the ipc_perm members of all IPC resources into an array; by casting the pointer type back to the concrete resource type, the starting address of the whole IPC resource can be obtained from its ipc_perm member, and every member of that IPC resource can then be accessed.
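As an illustration of this "first member" trick (a user-level sketch only, not actual kernel code; the variable names are assumptions), a pointer to any of these structures can also be viewed as a pointer to its leading ipc_perm member:

#include <stdio.h>
#include <sys/ipc.h>
#include <sys/shm.h>
#include <sys/msg.h>

int main()
{
	struct shmid_ds shm_ds;
	struct msqid_ds msg_ds;

	// Because ipc_perm is the first member, a pointer to the whole structure
	// also points at that member, so the casts below are well defined in C.
	struct ipc_perm *p1 = (struct ipc_perm *)&shm_ds;
	struct ipc_perm *p2 = (struct ipc_perm *)&msg_ds;

	// p1 and p2 point to shm_ds.shm_perm and msg_ds.msg_perm respectively,
	// which is why an array of ipc_perm entries can index every IPC resource.
	printf("%p %p\n", (void *)p1, (void *)p2);
	return 0;
}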

— — — — — — — —

Copyright notice: This article is originally published BY CSDN blogger “2021Dragon”. It is subject to CC 4.0 BY-SA copyright agreement. Please attach the original source link and this statement. Original link: blog.csdn.net/chenlong_cx…