Article source: www.cnblogs.com/binchen-chi…
As mentioned in the thread/process concurrent server article, improving server performance at the IO layer comes down to two areas: file descriptor handling and thread scheduling.
What is IO multiplexing? IO means Input/Output; in network programming, IO operations are performed on file descriptors, and multiplexing means monitoring many descriptors from a single process or thread.
Why use IO multiplexing?
1. Many calls in network programming block, such as connect. With IO multiplexing, such code can run in a non-blocking fashion.
2. As mentioned earlier, listen maintains two queues. The queue of connections that have completed the handshake may hold several ready descriptors, and IO multiplexing can process such descriptors in a batch.
3. Sometimes you need to handle TCP and UDP at the same time, listen on multiple ports at once, and deal with reads/writes and new connections simultaneously (see the sketch after this list).
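To make item 3 concrete, here is a minimal sketch (not from the original article) that uses select to watch a TCP listening socket and a UDP socket in one loop; socket setup and error handling are omitted, and the descriptor names are illustrative:

```cpp
#include <sys/select.h>
#include <sys/socket.h>
#include <unistd.h>

void serve(int tcp_fd, int udp_fd)
{
    for (;;)
    {
        fd_set rset;
        FD_ZERO(&rset);
        FD_SET(tcp_fd, &rset);
        FD_SET(udp_fd, &rset);
        int maxfd = (tcp_fd > udp_fd ? tcp_fd : udp_fd) + 1;

        // Blocks until at least one of the two descriptors is readable.
        if (select(maxfd, &rset, NULL, NULL, NULL) <= 0)
            continue;

        if (FD_ISSET(tcp_fd, &rset))   // a completed handshake is waiting
        {
            int conn = accept(tcp_fd, NULL, NULL);
            // ...serve the new TCP connection...
            close(conn);
        }
        if (FD_ISSET(udp_fd, &rset))   // a datagram is waiting
        {
            char msg[512];
            recvfrom(udp_fd, msg, sizeof(msg), 0, NULL, NULL);
            // ...handle the datagram in msg...
        }
    }
}
```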
Why is epoll more efficient than select?
1. With a large number of connections, select must traverse every descriptor, whereas epoll has the kernel maintain the event table and only hands back the descriptors that actually have activity (see the sketch after this list).
2. select is limited in the number of file descriptors it can monitor; the default limit is 1024.
3. The efficiency gap is not absolute. When connections are short-lived and connect and disconnect at a high rate, select is not necessarily worse than epoll, so it depends on the workload.
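The difference in item 1 is easiest to see side by side. A minimal sketch, assuming handle() stands in for application logic and descriptor setup happens elsewhere:

```cpp
#include <sys/select.h>
#include <sys/epoll.h>

static void handle(int fd) { /* application logic */ }

// select: after it returns, every tracked descriptor must be re-tested,
// so the cost grows with the highest descriptor number.
void drain_select(fd_set* rset, int maxfd)
{
    for (int fd = 0; fd <= maxfd; ++fd)
        if (FD_ISSET(fd, rset))
            handle(fd);
}

// epoll: epoll_wait fills `events` with only the ready descriptors,
// so the cost grows with the number of active ones.
void drain_epoll(int epfd)
{
    struct epoll_event events[20];
    int nfds = epoll_wait(epfd, events, 20, -1);
    for (int i = 0; i < nfds; ++i)
        handle(events[i].data.fd);
}
```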
epoll has two trigger modes: level-triggered and edge-triggered.
1. Level triggering is less efficient than edge triggering. In level-triggered mode, if an event returned by epoll_wait is not fully handled and data remains in the kernel buffer, epoll keeps notifying until the data has been consumed. This is epoll's default mode.
2. Edge triggering is more efficient: an event on the kernel buffer is reported only once, so the application must drain the buffer itself when notified (see the sketch below).
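Because the notification arrives only once, edge-triggered code must read until the kernel buffer is empty. A minimal sketch of an ET read loop, assuming the descriptor has already been made non-blocking (names here are illustrative):

```cpp
#include <unistd.h>
#include <errno.h>

void drain_et(int fd)
{
    char buf[4096];
    for (;;)
    {
        ssize_t n = read(fd, buf, sizeof(buf));
        if (n > 0)
            continue;                    // ...process buf[0..n) here, keep reading...
        if (n == 0)                      // peer closed the connection
        {
            close(fd);
            return;
        }
        if (errno == EAGAIN || errno == EWOULDBLOCK)
            return;                      // buffer drained; wait for the next event
        close(fd);                       // real error
        return;
    }
}
```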
An epoll implementation demo
```cpp
#include <iostream>
#include <sys/socket.h>
#include <sys/epoll.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <fcntl.h>
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <errno.h>

using namespace std;

#define MAXLINE 5
#define OPEN_MAX 100
#define LISTENQ 20
#define SERV_PORT 5000
#define INFTIM 1000

int main(int argc, char* argv[])
{
    int listen_fd, connfd_fd, socket_fd, epfd, nfds;
    ssize_t n = 0;
    char line[MAXLINE];
    socklen_t clilen;
    struct epoll_event ev, events[20];

    // Create the epoll instance (the size argument is only a hint on modern kernels).
    epfd = epoll_create(5);

    struct sockaddr_in clientaddr;
    struct sockaddr_in serveraddr;
    listen_fd = socket(AF_INET, SOCK_STREAM, 0);

    // Register the listening descriptor for read events in edge-triggered mode.
    ev.data.fd = listen_fd;
    ev.events = EPOLLIN | EPOLLET;
    epoll_ctl(epfd, EPOLL_CTL_ADD, listen_fd, &ev);

    memset(&serveraddr, 0, sizeof(serveraddr));
    serveraddr.sin_family = AF_INET;
    serveraddr.sin_addr.s_addr = htonl(INADDR_ANY);
    serveraddr.sin_port = htons(SERV_PORT);

    if (bind(listen_fd, (struct sockaddr*)&serveraddr, sizeof(serveraddr)) == -1)
    {
        printf("bind socket error: %s(errno: %d)\n", strerror(errno), errno);
        exit(0);
    }

    if (listen(listen_fd, LISTENQ) == -1)
    {
        exit(0);
    }

    for ( ; ; )
    {
        // Wait up to 500 ms for events on the registered descriptors.
        nfds = epoll_wait(epfd, events, 20, 500);
        for (int i = 0; i < nfds; ++i)
        {
            if (events[i].data.fd == listen_fd)
            {
                // A new client connected to the bound port: accept the connection.
                clilen = sizeof(clientaddr);
                connfd_fd = accept(listen_fd, (sockaddr*)&clientaddr, &clilen);
                if (connfd_fd < 0)
                {
                    perror("connfd_fd < 0");
                    exit(1);
                }
                char* str = inet_ntoa(clientaddr.sin_addr);
                cout << "accept a connection from " << str << endl;
                // Register the connected descriptor for read events.
                ev.data.fd = connfd_fd;
                ev.events = EPOLLIN | EPOLLET;
                epoll_ctl(epfd, EPOLL_CTL_ADD, connfd_fd, &ev);
            }
            else if (events[i].events & EPOLLIN)
            {
                // The descriptor is readable.
                memset(&line, '\0', sizeof(line));
                if ((socket_fd = events[i].data.fd) < 0)
                    continue;
                if ((n = read(socket_fd, line, MAXLINE)) < 0)
                {
                    if (errno == ECONNRESET)
                    {
                        close(socket_fd);
                        events[i].data.fd = -1;
                    }
                    else
                        std::cout << "readline error" << std::endl;
                }
                else if (n == 0)
                {
                    close(socket_fd);
                    events[i].data.fd = -1;
                }
                cout << line << endl;
                // To echo the data back, switch this descriptor to write events
                // (the MOD call is left disabled here, so the EPOLLOUT branch stays idle).
                ev.data.fd = socket_fd;
                ev.events = EPOLLOUT | EPOLLET;
                //epoll_ctl(epfd, EPOLL_CTL_MOD, socket_fd, &ev);
            }
            else if (events[i].events & EPOLLOUT)
            {
                // The descriptor is writable: echo back the last line read.
                socket_fd = events[i].data.fd;
                write(socket_fd, line, n);
                // Switch the descriptor back to read events.
                ev.data.fd = socket_fd;
                ev.events = EPOLLIN | EPOLLET;
                epoll_ctl(epfd, EPOLL_CTL_MOD, socket_fd, &ev);
            }
        }
    }
    return 0;
}
```
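Note that since the demo registers descriptors with EPOLLET, a production version should also set them non-blocking (for example with fcntl and O_NONBLOCK) and drain each read as in the earlier sketch; for small test messages the demo works as written.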
Running the demo gives the following result:
When first learning epoll, it is easy to mistakenly believe that epoll itself achieves concurrency. In fact, epoll only provides IO multiplexing; high-performance concurrent servers are built on top of that multiplexing.
Why can two clients be connected at the same time? Both connections are actually handled by the same process. As mentioned earlier, descriptors do not affect one another, so one process serves multiple descriptors by polling them in turn.
Reactor mode:
The Reactor pattern is simple to implement and uses the synchronous IO model: the business thread must wait for, or actively poll for, the data it processes. Its main feature is to use epoll to watch the listening descriptor, put client connection information into a queue as soon as it is ready, and let worker processes/threads take over each descriptor and perform the subsequent operations on it, including connection handling and data reads and writes. The main program only dispatches read/write ready events, as in the sketch below.
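Here is a minimal Reactor-style sketch, assuming a single worker thread and a mutex-guarded queue; accepting new connections and error handling are omitted:

```cpp
#include <sys/epoll.h>
#include <unistd.h>
#include <queue>
#include <mutex>
#include <thread>
#include <condition_variable>

std::queue<int> ready_fds;          // descriptors with a read-ready event
std::mutex mtx;
std::condition_variable cv;

// Worker: performs the actual (synchronous) read on descriptors handed
// over by the main thread -- the "business" side of the Reactor.
void worker()
{
    for (;;)
    {
        std::unique_lock<std::mutex> lock(mtx);
        cv.wait(lock, [] { return !ready_fds.empty(); });
        int fd = ready_fds.front();
        ready_fds.pop();
        lock.unlock();

        char buf[4096];
        ssize_t n = read(fd, buf, sizeof(buf));   // worker does the IO itself
        if (n <= 0)
            close(fd);
        // ...process buf[0..n)...
    }
}

// Main thread: only watches for readiness and enqueues; it never reads.
void reactor_loop(int epfd)
{
    std::thread(worker).detach();
    struct epoll_event events[64];
    for (;;)
    {
        int nfds = epoll_wait(epfd, events, 64, -1);
        for (int i = 0; i < nfds; ++i)
        {
            std::lock_guard<std::mutex> lock(mtx);
            ready_fds.push(events[i].data.fd);
            cv.notify_one();
        }
    }
}
```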
The general flow chart is as follows:
Proactor mode:
The Proactor pattern completely separates IO handling from business logic and uses the asynchronous IO model: the kernel notifies the application only after it has already finished moving the data. The main process/thread not only performs the listen task but also maps the kernel data buffer and hands the filled buffer directly to a business thread; business threads then only need to run business logic, as in the sketch below.
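For comparison, here is a minimal Proactor-style sketch using POSIX AIO (one possible asynchronous IO mechanism on Linux; io_uring is another). The completion handler runs only after the kernel/library has already filled the buffer, so the business code never performs the read itself:

```cpp
// Compile with -lrt on Linux.
#include <aio.h>
#include <cstring>
#include <iostream>

static char buf[4096];
static struct aiocb cb;

// Completion handler: invoked on a library-managed thread only after the
// data is already in buf, so this is pure business logic.
static void on_complete(sigval sv)
{
    (void)sv;
    ssize_t n = aio_return(&cb);
    if (n > 0)
        std::cout.write(buf, n);
}

void start_async_read(int fd)
{
    std::memset(&cb, 0, sizeof(cb));
    cb.aio_fildes = fd;
    cb.aio_buf = buf;
    cb.aio_nbytes = sizeof(buf);
    cb.aio_sigevent.sigev_notify = SIGEV_THREAD;
    cb.aio_sigevent.sigev_notify_function = on_complete;
    aio_read(&cb);   // returns immediately; completion arrives via on_complete()
}
```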
The general process is as follows: