Preface:
My modified, fully commented version of the project code (with added functionality):
Github.com/white0dew/W…
What is it? A lightweight Linux C++ web server that helps beginners quickly practice network programming and build their own server.
- Concurrency model using a thread pool + non-blocking sockets + epoll (both ET and LT) + two event-handling patterns (Reactor and simulated Proactor)
- State machines are used to parse HTTP request messages, supporting both GET and POST requests
- Accesses a server-side database to register and log in web users, and serves image and video files
- Implements a synchronous/asynchronous logging system to record the server's running status
- Handles tens of thousands of concurrent connections in the Webbench stress test
Original project code: github.com/qinguoyi/Ti…
Really impressive! This article is the set of notes I took while studying the project.
I. Basic knowledge
To begin this project, you need to have some knowledge of Linux programming and network programming, and the books Unix Network Programming and Linux High Performance Server Programming are recommended.
What is a Web Server?
A web server generally means a website server: a program residing on some computer on the Internet that handles requests from web clients such as browsers and returns the corresponding responses. It can host web pages for the world to view and data files for the world to download. The three most popular web servers are Apache, Nginx, and IIS. The relationship between server and client is as follows:
In this project, web requests mainly use the HTTP protocol; see the HTTP section below for an introduction. HTTP runs on top of TCP/IP.
What is a socket?
How does the client communicate with the server? Through sockets.
Sockets originated in Unix, and one of the basic philosophies of Unix/Linux is that "everything is a file", operated in an "open -> read/write -> close" fashion. A socket is a special kind of file, and the socket functions are operations on it (read/write I/O, open, close). Let's use the following example to understand how a socket is used:
Server-side code
```c
#include "unp.h"    /* UNP helper header: MAXLINE, SA, LISTENQ and the capitalized wrapper functions */
#include <time.h>

int main(int argc, char **argv)
{
    int                listenfd, connfd;
    struct sockaddr_in servaddr;
    char               buff[MAXLINE];
    time_t             ticks;

    /* create the listening socket (capitalized functions are UNP's error-checking wrappers) */
    listenfd = Socket(AF_INET, SOCK_STREAM, 0);

    bzero(&servaddr, sizeof(servaddr));
    servaddr.sin_family      = AF_INET;
    servaddr.sin_addr.s_addr = htonl(INADDR_ANY);
    servaddr.sin_port        = htons(13);   /* daytime server port */

    /* bind the address to the listening socket */
    Bind(listenfd, (SA *) &servaddr, sizeof(servaddr));

    /* the server starts listening on this port (creating the listen queue) */
    Listen(listenfd, LISTENQ);

    /* server processing loop */
    for ( ; ; ) {
        connfd = Accept(listenfd, (SA *) NULL, NULL);

        ticks = time(NULL);
        snprintf(buff, sizeof(buff), "%.24s\r\n", ctime(&ticks));
        Write(connfd, buff, strlen(buff));

        Close(connfd);
    }
}
```
Client program
#include "unp.h" int main(int argc, char **argv) {int sockfd, n; char recvline[MAXLINE + 1]; struct sockaddr_in servaddr; if (argc ! = 2) err_quit("usage: a.out <IPaddress>"); If ((sockfd = socket(AF_INET, SOCK_STREAM, 0)) < 0) err_sys("socket error"); bzero(&servaddr, sizeof(servaddr)); servaddr.sin_family = AF_INET; servaddr.sin_port = htons(13); /* daytime server */ if (inet_pton(AF_INET, argv[1], &servaddr.sin_addr) <= 0) err_quit("inet_pton error for %s", argv[1]); If (connect(sockfd, (SA *) & servADDR, sizeof(servADDR)) < 0) err_sys("connect error"); While ((n = read(sockfd, recvline, MAXLINE)) > 0) {recvline[n] = 0; /* null terminate */ if (fputs(recvline, stdout) == EOF) err_sys("fputs error"); } if (n < 0) err_sys("read error"); exit(0); }Copy the code
The workflow of the TCP server and TCP client is as follows:
For more information about sockets, see the reference links.
Imagine if there are multiple clients that want to connect to the server. How does the server handle these clients? This brings us to IO multiplexing.
What is IO multiplexing?
IO multiplexing means managing multiple I/O streams in a single process by recording and tracking the state of each socket (I/O stream). It was invented to increase server throughput as much as possible; see the reference links.
As mentioned above, when multiple clients are connected to a server, it is a question of how to serve each client “simultaneously.” The basic framework of the server is as follows:
The logical unit in the figure corresponds to the "write the server time" function in the previous example. To handle multiple client connections, there must first be a queue to order the connection requests, and the server then needs to respond to the connected clients concurrently, for example with multiple threads.
This project uses epoll I/O multiplexing to monitor the listening socket (listenfd) and the connection sockets (the sockets created after clients connect) at the same time. Note that although I/O multiplexing can listen on multiple file descriptors simultaneously, it is itself blocking, so for efficiency this part achieves concurrency through a thread pool, assigning a logical unit (a thread) to each ready file descriptor.
Unix has five basic IO models:
- Blocking IO (wait idly until the data is ready)
- Non-blocking IO (poll repeatedly if the data is not ready)
- IO multiplexing (select, poll, etc.: the process blocks on the select/poll call instead of on the real IO call such as recvfrom, and only issues the IO call once select reports the descriptor readable; the advantage is that it can wait on multiple descriptors at once)
- Signal-driven IO (SIGIO: a signal handler informs the process that the data is ready, without blocking)
- Asynchronous IO (the POSIX aio_ family of functions; the difference from signal-driven IO is that with signal-driven IO the kernel tells us when we can start an I/O operation, while with asynchronous IO the kernel tells us when the I/O operation has completed)
For incoming IO events (or other signaling/timing events), there are two additional event handling modes:
- Reactor pattern: the main thread (I/O processing unit) is only responsible for listening for events (readable/writable) on file descriptors; when one occurs, it immediately puts the readable/writable socket event into the request queue and notifies the worker threads. The worker threads read and write the data, accept new connections, and handle client requests. (Read and write events must be distinguished.)
- Proactor pattern: the main thread and the kernel handle the I/O operations (reading/writing data, accepting new connections), while the worker threads only handle the business logic (producing the corresponding response), i.e. processing client requests.
Generally, a synchronous I/O model (such as epoll_wait) is used to implement the Reactor pattern, and an asynchronous I/O model (such as aio_read and aio_write) is used to implement Proactor. However, asynchronous I/O on Linux is not mature, so this project uses synchronous I/O to simulate the Proactor pattern. See section IV, Thread pools, for more on this.
PS: What is synchronous I/O and what is asynchronous I/O?
- Synchronous (blocking) I/O: the caller waits until the I/O operation completes. This is called synchronous IO.
- Asynchronous (non-blocking) I/O: the code issues the I/O request, does not wait for the result, and goes on executing other code. Some time later, when the I/O result is ready (the kernel has copied the data), the caller is notified to process it. (The subtext of an asynchronous operation: "you start working on it, I'll go do something else, call me when you're done.")
IO multiplexing is done with select/poll/epoll. For why epoll is used in this project, refer to the question "Why is epoll faster than select?":
- With select and poll, all file descriptors are added to the descriptor set in user space, and the entire set must be copied into the kernel on every call. epoll keeps the descriptor set in the kernel, but requires a system call each time a descriptor is added. System calls are expensive, so when there are many short-lived active connections, epoll may actually be slower than select and poll because of this system-call overhead.
- select describes the file descriptor set with a linear array, and the number of descriptors has an upper limit; poll uses a linked list; epoll uses a red-black tree underneath and maintains a ready list to which ready events from the event table are added, so an epoll_wait call only has to look at this ready list.
- The biggest overhead of select and poll is how the kernel decides whether a descriptor is ready: on every select or poll call, the kernel traverses the entire descriptor set and checks each one. epoll does not need such a scan: when activity occurs, a callback is triggered automatically, and the kernel puts the ready descriptor onto the ready list mentioned above, to be handled after epoll_wait returns.
- select and poll only work in the relatively inefficient LT mode, while epoll supports both LT and ET.
- In summary, select and poll are fine when the number of monitored fds is small and most of them are active; when a large number of fds are monitored and only a few are active at any moment, epoll can significantly improve performance.
What do we mean by LT and ET?
- LT (level-triggered): when an I/O event is ready, the kernel keeps notifying about it until it has been handled.
- ET (edge-triggered): when an I/O event becomes ready, the kernel notifies only once; if it is not handled in time, the notification is lost.
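As a minimal illustration of the difference (this is not project code, and error handling is omitted), the choice is just a flag when registering the descriptor; in ET mode the fd should also be non-blocking so that it can be drained in one pass:

```cpp
// Minimal sketch: registering a socket in LT vs ET mode. epfd and sockfd are assumed
// to be valid descriptors created elsewhere.
#include <sys/epoll.h>
#include <fcntl.h>

void register_lt(int epfd, int sockfd) {
    struct epoll_event ev{};
    ev.events = EPOLLIN;              // level-triggered is the default
    ev.data.fd = sockfd;
    epoll_ctl(epfd, EPOLL_CTL_ADD, sockfd, &ev);
}

void register_et(int epfd, int sockfd) {
    // ET requires a non-blocking fd, otherwise the read loop may block forever.
    fcntl(sockfd, F_SETFL, fcntl(sockfd, F_GETFL) | O_NONBLOCK);
    struct epoll_event ev{};
    ev.events = EPOLLIN | EPOLLET;    // edge-triggered: notified only on state changes
    ev.data.fd = sockfd;
    epoll_ctl(epfd, EPOLL_CTL_ADD, sockfd, &ev);
}
```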
What is multithreading?
As mentioned above, concurrency requires multithreading. A program runs on the computer as a process, and a process is further divided into threads, i.e. a process can contain multiple independent code execution paths. Compared with a process, a thread does not need the operating system to allocate resources for it, because its resources belong to its process, and creating and destroying a thread costs much less than a process, so multithreaded programs are more efficient.
However, in a server, frequently creating and destroying threads is undesirable, so the thread pool technique is introduced: a number of threads are created in advance; when a task needs to run, a thread is taken from the pool to execute it, and when the task finishes the thread is returned to the pool to wait for subsequent tasks.
For details on this section, see: Multithreading and Concurrency.
II. Project learning
With the basic knowledge covered, it is time to study the project code. This raises a question: what exactly counts as understanding an open source project? Should we go through all of the code line by line?
Going through everything gives too little return for the effort. If the open source project is needed for work, or is going to be modified, then an overview of the whole codebase is essential. But if the goal is just to learn the project's architecture and ideas, it is enough to follow one feature end to end and then focus on the pieces of code you are interested in.
For the server project in this article, my main goal is to learn web-server-related knowledge; I do not need to know everything, but most of the code should be sorted out, so I adopted the following way of learning:
- Code architecture: which modules each directory is responsible for (combine this with the project's documentation to speed up understanding);
- Compile and run it to see what it does;
- Pick one feature and study its implementation; I will start with the "user login/registration" feature and then consider other features;
- Add functionality: how do I add features to the existing framework, e.g. file upload, a blog, or a message board?
- To be continued…
Ok, the learning route has been planned, now we begin the code learning journey!
The code architecture
Open the project with VsCode. The code structure of the project is as follows:
Based on the project documentation, the code framework is as follows:
Compile and run
Install MySQL, create the database, modify the corresponding settings in the code, compile, and run:
```bash
sh ./server
# open localhost:9006 in the browser
```
The following information is displayed:
Click "New User", register an account, and then log in; there are three functions:
They display a picture, a video, and a WeChat official account page on the webpage, respectively.
By reading the code framework and running logic, a server runtime flow chart is given as follows:
Of all the features, I am most interested in the login/registration function, so let's see how it is implemented.
The login/registration function
For the login function, the page-jump logic is shown in the picture below (the original picture comes from the Two Ape Society):
The logic in the figure is clear: depending on whether the HTTP request method is GET or POST, the server either returns the registration/login page, or verifies the user name and password (or registers a new user) and jumps to the login-success page. For an introduction to HTTP, see section III.
More specifically, the server first needs to load all user names and passwords from the database and store them in some data structure (such as a hash table). (PS: for how user passwords are transferred in real, large projects, see articles on login best practices.)
When a browser request arrives, the server returns the corresponding HTML page or an error message according to the request.
This uses a finite state machine. What is a finite state machine?
A finite state machine describes a system whose state transitions from one state to another, repeatedly "selecting" a branch and "updating the state". For more information, see the reference on finite state machines.
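As a toy illustration of the idea (this is not the project's state machine, which is shown in section III), a finite state machine is just a current-state variable plus a transition rule applied to each input:

```cpp
// Toy finite state machine: recognizes whether a line has the form "key=value".
// Purely illustrative; the project's master/slave state machines come later.
#include <cctype>
#include <iostream>
#include <string>

enum class State { KEY, VALUE, ERROR };

bool is_key_value(const std::string &line) {
    State state = State::KEY;
    for (char c : line) {
        switch (state) {
        case State::KEY:
            if (c == '=') state = State::VALUE;                          // transition on '='
            else if (!isalnum(static_cast<unsigned char>(c))) state = State::ERROR;
            break;
        case State::VALUE:
            if (!isalnum(static_cast<unsigned char>(c))) state = State::ERROR;
            break;
        case State::ERROR:
            return false;
        }
    }
    return state == State::VALUE;   // accepted only if we ended up in the VALUE state
}

int main() {
    std::cout << is_key_value("user=123") << "\n";   // prints 1
    std::cout << is_key_value("user123") << "\n";    // prints 0
}
```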
Because there are many details inside this feature, they are covered in section III, HTTP.
III. Pulling up the carrot brings out the mud: HTTP
This section explains the login/registration function from section II in detail. We first introduce the use of epoll, then HTTP, and then the details of the user login/registration flow.
Epoll
This section introduces the function call framework of epoll, starting with the functions commonly used by epoll.
Commonly used functions
epoll_create
```c
// Creates a file descriptor referring to the epoll kernel event table.
// This descriptor is used as the first argument to the other epoll system calls.
// The size argument no longer has any effect on modern kernels.
int epoll_create(int size);
```
epoll_ctl
```c
int epoll_ctl(int epfd, int op, int fd, struct epoll_event *event);
```
- epfd: the descriptor returned by epoll_create;
- op: the operation, expressed by one of three macros:
  - EPOLL_CTL_ADD (register a new fd with epfd),
  - EPOLL_CTL_MOD (modify the events monitored for a registered fd),
  - EPOLL_CTL_DEL (remove an fd from epfd);
- event: tells the kernel which events to listen for.
The event structure is defined as follows:
```c
struct epoll_event {
    __uint32_t   events;  /* epoll events */
    epoll_data_t data;    /* user data variable */
};
```
The events field describes the event type and can be a combination of the following epoll event types:
- EPOLLIN: the corresponding file descriptor is readable (including the peer socket shutting down normally)
- EPOLLOUT: the corresponding file descriptor is writable
- EPOLLPRI: the corresponding file descriptor has urgent data to read
- EPOLLERR: an error occurred on the corresponding file descriptor
- EPOLLHUP: the corresponding file descriptor was hung up
- EPOLLET: puts epoll into edge-triggered (ET) mode, as opposed to the default level-triggered (LT) mode
- EPOLLONESHOT: the event is reported only once; to keep monitoring the socket after the event has been handled, the socket must be re-registered in the epoll event table (see the sketch after this list)
- EPOLLRDHUP: the peer closed the connection or shut down its writing half; not all kernel versions support this flag
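EPOLLONESHOT matters in this project: it guarantees that a socket is handled by at most one thread at a time, but the event must be re-armed after each round of processing. The project's modfd() helper plays this role; the version below is a simplified sketch with an illustrative signature:

```cpp
// Sketch: re-arming a one-shot descriptor after a worker thread has finished with it.
// Without this call the kernel would never report another event on fd.
#include <sys/epoll.h>

void modfd(int epollfd, int fd, int ev, int trig_mode) {
    struct epoll_event event{};
    event.data.fd = fd;
    event.events = ev | EPOLLONESHOT | EPOLLRDHUP;
    if (trig_mode == 1)
        event.events |= EPOLLET;                    // keep the ET flag if ET mode is in use
    epoll_ctl(epollfd, EPOLL_CTL_MOD, fd, &event);  // MOD, not ADD: fd is already registered
}
```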
epoll_wait
```c
int epoll_wait(int epfd, struct epoll_event *events, int maxevents, int timeout);
```
Where:
- events: the array in which the kernel stores the ready events;
- maxevents: tells the kernel how large the events array is; it must not exceed the size passed to epoll_create();
- timeout: the timeout period in milliseconds;
- Return value: the number of ready file descriptors on success, 0 on timeout, -1 on error.
Example
How does epoll work in practice? Source code link.
```c
// A TCP concurrent server based on epoll
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <errno.h>
#include <pthread.h>
#include <ctype.h>
#include <sys/socket.h>
#include <arpa/inet.h>
#include <sys/epoll.h>

#define MAX_LINK_NUM 128
#define SERV_PORT    8888
#define BUFF_LENGTH  320
#define MAX_EVENTS   5

int count = 0;

int tcp_epoll_server_init()
{
    int sockfd = socket(AF_INET, SOCK_STREAM, 0);
    if (sockfd == -1) {
        printf("socket error!\n");
        return -1;
    }

    struct sockaddr_in serv_addr;
    struct sockaddr_in clit_addr;
    socklen_t clit_len;
    serv_addr.sin_family = AF_INET;
    serv_addr.sin_port = htons(SERV_PORT);
    serv_addr.sin_addr.s_addr = htonl(INADDR_ANY);

    int ret = bind(sockfd, (struct sockaddr *)&serv_addr, sizeof(serv_addr));
    if (ret == -1) {
        printf("bind error!\n");
        return -2;
    }

    listen(sockfd, MAX_LINK_NUM);

    /* create the epoll kernel event table */
    int epoll_fd = epoll_create(MAX_EVENTS);
    if (epoll_fd == -1) {
        printf("epoll_create error!\n");
        return -3;
    }

    struct epoll_event ev;                  /* event to register */
    struct epoll_event events[MAX_EVENTS];  /* array that receives the ready events */
    ev.events = EPOLLIN;
    ev.data.fd = sockfd;
    int ret2 = epoll_ctl(epoll_fd, EPOLL_CTL_ADD, sockfd, &ev);
    if (ret2 == -1) {
        printf("epoll_ctl error!\n");
        return -4;
    }

    int connfd = 0;
    while (1) {   /* event loop */
        int nfds = epoll_wait(epoll_fd, events, MAX_EVENTS, -1);
        if (nfds == -1) {
            printf("epoll_wait error!\n");
            return -5;
        }
        printf("nfds: %d\n", nfds);

        /* check which descriptors are ready */
        for (int i = 0; i < nfds; i++) {
            if (events[i].data.fd == sockfd) {
                /* a new connection on the listening socket */
                clit_len = sizeof(clit_addr);   /* must be set before accept() */
                connfd = accept(sockfd, (struct sockaddr *)&clit_addr, &clit_len);
                if (connfd == -1) {
                    printf("accept error!\n");
                    return -6;
                }
                ev.events = EPOLLIN;
                ev.data.fd = connfd;
                if (epoll_ctl(epoll_fd, EPOLL_CTL_ADD, connfd, &ev) == -1) {
                    printf("epoll_ctl add error!\n");
                    return -7;
                }
                printf("accept client: %s\n", inet_ntoa(clit_addr.sin_addr));
                printf("client %d\n", ++count);
            } else {
                /* data readable on a connected socket: read from the descriptor that is ready */
                int clientfd = events[i].data.fd;
                char buff[BUFF_LENGTH];
                int ret1 = read(clientfd, buff, sizeof(buff) - 1);
                if (ret1 > 0) {
                    buff[ret1] = '\0';
                    printf("%s", buff);
                }
            }
        }
    }
    close(connfd);
    return 0;
}

int main()
{
    tcp_epoll_server_init();
}
```
HTTP
Introduction to HTTP
HTTP messages
HTTP messages are divided into request messages (sent by the browser to the server) and response messages (returned by the server to the browser). Each message must follow a specific format so that the other side can parse it.
- A request message consists of four parts: request line, request headers, a blank line, and the request data.
The request line specifies the request type (method), the resource to access, and the HTTP version to use.
The request headers, which immediately follow the request line (the first line), specify additional information for the server.
The blank line after the request headers is required even if the request data in the fourth part is empty.
The request data, also called the body, may carry any additional data.
- A response message likewise consists of four parts: status line + response headers + blank line + response body.
The status line consists of the HTTP protocol version, the status code, and the status message.
The response headers specify additional information for the client.
The blank line after the headers is required.
The response body is the content the server returns to the client.
HTTP status codes and request methods
There are five types of HTTP status codes:
- 1xx: Informational – the request has been received and processing continues.
- 2xx: Success – the request was processed successfully.
200 OK: the client request was handled normally.
206 Partial Content: the client made a partial-content (range) request.
- 3xx: Redirection – further action must be taken to complete the request.
301 Moved Permanently: the resource has been permanently moved to a new location; future requests for it should use one of the URIs returned in this response.
302 Found: temporary redirection; the requested resource was temporarily obtained from a different URI.
- 4xx: Client error – the request contains a syntax error or the server cannot process it.
400 Bad Request: the request message contains a syntax error.
403 Forbidden: the request was refused by the server.
404 Not Found: the requested resource was not found on the server.
- 5xx: Server error – the server failed to process a valid request.
500 Internal Server Error: the server failed while executing the request.
HTTP/1.1 defines eight request methods, as follows:
Since this project mainly involves GET and POST, what are the differences and connections between these two methods?
Simply put, GET is used to fetch web pages, while POST is used to send user form data (such as user name, password, or a message) to the server.
Furthermore, GET carries its parameters in the URL, while POST passes parameters in the request body.
In fact, GET and POST are just two request identifiers defined by HTTP; the amount of data they can carry is limited by the TCP/IP layer, and a POST is generally sent in two steps (the headers first, then the body).
Here are two examples of GET and POST:
GET
```
GET <request-URL> HTTP/1.1
Host:img.mukewang.com
User-Agent:Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.106 Safari/537.36
Accept:image/webp,image/*,*/*;q=0.8
Referer:http://www.imooc.com/
Accept-Encoding:gzip, deflate, sdch
Accept-Language:zh-CN,zh;q=0.8
(blank line)
(the request data is empty)
```
POST
POST/HTTP1.1 Host:www.wrox.com user-agent :Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; The.net CLR 2.0.50727; The.net CLR 3.0.04506.648; .NET CLR 3.5.21022) Content-Type: Application/X-www-form-urlencoded Content-Length:40 Connection: Keep Alive - empty line name = Professional % 20 ajax & publisher = WileyCopy the code
HTTP Processing Flow
The HTTP processing flow is divided into the following three steps:
- **Connection processing:** the browser sends an HTTP connection request; the main thread creates an http object to receive the request, reads all the data into the corresponding buffer, inserts the object into the task queue, and waits for a worker thread to take the task out of the queue and process it.
- **Processing the request message:** after a worker thread takes out the task, it calls the process() function, which parses the request message with the master and slave state machines.
- **Returning the response message:** after parsing, a response message is generated and returned to the browser.
The following three steps are introduced in turn:
Connection processing
In the connection phase, the most important parts are the TCP connection setup and reading the HTTP request message (reading the request message really just means reading the data the client sent). The TCP connection setup involves creating the epoll kernel event table; see the epoll section for details.
How does the server read HTTP messages? First, the server creates an http_conn object for each established HTTP connection. The code looks like this (the server runs an event loop all the time, because it is event-driven):
```cpp
void WebServer::eventLoop() {
    ......
    while (!stop_server) {
        // wait for events on the monitored file descriptors
        int number = epoll_wait(m_epollfd, events, MAX_EVENT_NUMBER, -1);
        if (number < 0 && errno != EINTR) {
            LOG_ERROR("%s", "epoll failure");
            break;
        }
        // handle the ready events one by one
        for (int i = 0; i < number; i++) {
            int sockfd = events[i].data.fd;

            // a new client connection
            if (sockfd == m_listenfd) {
                bool flag = dealclientdata();
                if (false == flag)
                    continue;
            }
            // exception event: the server closes the connection and removes the corresponding timer
            else if (events[i].events & (EPOLLRDHUP | EPOLLHUP | EPOLLERR)) {
                util_timer *timer = users_timer[sockfd].timer;
                deal_timer(timer, sockfd);
            }
            // a signal arrived on the pipe (unified event source)
            else if ((sockfd == m_pipefd[0]) && (events[i].events & EPOLLIN)) {
                bool flag = dealwithsignal(timeout, stop_server);
                if (false == flag)
                    LOG_ERROR("%s", "dealclientdata failure");
            }
            // data readable on a connection socket
            else if (events[i].events & EPOLLIN) {
                dealwithread(sockfd);
            }
            // the connection socket is writable: send the response
            else if (events[i].events & EPOLLOUT) {
                dealwithwrite(sockfd);
            }
        }
        ......
    }
}
```
The dealclientdata() function called in the listening-socket branch above calls timer() to create the new client connection and add a timed event for it (see the timer section below).
After these steps the server maintains a set of client connections. When one of the clients clicks a button on the web page, a request message is generated and sent to the server, and dealwithread() is called in the event loop above.
This function appends the event to the task request queue and waits for a thread in the thread pool to execute the task. Depending on the Reactor/Proactor mode, the HTTP request data is read by the read_once() function; see http_conn.cpp.
The read_once() function reads the data sent by the browser (client) into the read buffer for the worker threads to process later.
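In ET mode the data must be read in one go, because the kernel will not report the event again. The following is a simplified sketch of such a read-until-EAGAIN loop in the spirit of read_once(); the buffer and index names correspond to assumed member names, not the project's exact code:

```cpp
// Simplified sketch of an ET-mode "read everything now" loop.
// sockfd/buf/read_idx/buf_size stand in for the connection's socket, read buffer,
// current read offset, and buffer capacity (assumed names).
#include <sys/types.h>
#include <sys/socket.h>
#include <cerrno>

bool read_until_eagain(int sockfd, char *buf, int &read_idx, int buf_size) {
    while (true) {
        ssize_t n = recv(sockfd, buf + read_idx, buf_size - read_idx, 0);
        if (n == -1) {
            // In ET mode the socket must be drained: stop only when the kernel says "no more".
            if (errno == EAGAIN || errno == EWOULDBLOCK)
                break;
            return false;                 // a real error occurred
        } else if (n == 0) {
            return false;                 // the peer closed the connection
        }
        read_idx += n;
    }
    return true;
}
```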
Request message processing
When there is an idle thread in the web server's thread pool, the thread calls process() to parse the request message and carry out the corresponding task. See http_conn::process() for details:
```cpp
void http_conn::process() {
    // NO_REQUEST means the request is incomplete and more data must be received
    HTTP_CODE read_ret = process_read();
    if (read_ret == NO_REQUEST) {
        // re-register and keep listening for read events
        modfd(m_epollfd, m_sockfd, EPOLLIN, m_TRIGMode);
        return;
    }
    // call process_write to build the response message
    bool write_ret = process_write(read_ret);
    if (!write_ret) {
        close_conn();
    }
    // register and listen for the write event
    modfd(m_epollfd, m_sockfd, EPOLLOUT, m_TRIGMode);
}
```
I’ll start with the processing of the request message, the process_read() function.
This function wraps the master and slave state machines in a while loop and processes the message line by line. The master state machine here is the process_read() function itself, and the slave state machine is the parse_line() function.
The slave state machine is responsible for reading a line of the message (and changing \r\n to \0\0), and the master state machine is responsible for parsing the data in the line. The master state machine calls the slave state machine internally, and the slave state machine drives the master state machine. The relationship between them is shown below:
The process_read() function is so important for understanding how the HTTP connection is handled that you should read its source code:
```cpp
// finite state machine that parses the request message
http_conn::HTTP_CODE http_conn::process_read() {
    // initialize the slave state machine status and the HTTP parse result
    LINE_STATUS line_status = LINE_OK;
    HTTP_CODE ret = NO_REQUEST;
    char *text = 0;

    while ((m_check_state == CHECK_STATE_CONTENT && line_status == LINE_OK) ||
           ((line_status = parse_line()) == LINE_OK)) {
        text = get_line();
        m_start_line = m_checked_idx;
        LOG_INFO("%s", text);

        switch (m_check_state) {
        case CHECK_STATE_REQUESTLINE: {
            ret = parse_request_line(text);
            if (ret == BAD_REQUEST)
                return BAD_REQUEST;
            break;
        }
        case CHECK_STATE_HEADER: {
            ret = parse_headers(text);
            if (ret == BAD_REQUEST)
                return BAD_REQUEST;
            else if (ret == GET_REQUEST) {
                return do_request();
            }
            break;
        }
        case CHECK_STATE_CONTENT: {
            ret = parse_content(text);
            if (ret == GET_REQUEST)
                return do_request();
            line_status = LINE_OPEN;
            break;
        }
        default:
            return INTERNAL_ERROR;
        }
    }
    return NO_REQUEST;
}
```
The code above uses switch…case to express the master state machine's choices. The master state machine has three states, CHECK_STATE_REQUESTLINE / CHECK_STATE_HEADER / CHECK_STATE_CONTENT, which mean: parsing the request line, parsing the request headers, and parsing the message body. Some notes on the loop condition and the loop body:
- Loop condition
- The master state machine is in CHECK_STATE_CONTENT: this branch covers parsing the message body
- The slave state machine returns LINE_OK: this branch covers parsing the request line and the request headers
- If the condition holds, the loop continues; otherwise it exits
- Loop body
- The slave state machine reads one line of data
- get_line() is called to hand that line to text (indirectly, via m_start_line)
- The master state machine parses text
PS: this part must be read together with the source code! It involves a lot of pointer arithmetic on character arrays, so work through it carefully.
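Since parse_line() itself is not shown above, here is a condensed sketch of the slave state machine: it scans the unchecked part of the read buffer for "\r\n" and overwrites it with "\0\0" so that the line can be handled as a C string. The names below follow the project's naming style but should be treated as assumptions:

```cpp
// Condensed sketch of the slave state machine: find one complete "\r\n"-terminated line.
// buf/read_idx/checked_idx stand in for the read buffer, the amount of data read so far,
// and the position already checked (assumed names).
enum LINE_STATUS { LINE_OK = 0, LINE_BAD, LINE_OPEN };

LINE_STATUS parse_line(char *buf, int read_idx, int &checked_idx) {
    for (; checked_idx < read_idx; ++checked_idx) {
        char temp = buf[checked_idx];
        if (temp == '\r') {
            if (checked_idx + 1 == read_idx)
                return LINE_OPEN;                    // line not complete yet, keep reading
            if (buf[checked_idx + 1] == '\n') {
                buf[checked_idx++] = '\0';           // overwrite "\r\n" with "\0\0"
                buf[checked_idx++] = '\0';
                return LINE_OK;
            }
            return LINE_BAD;                         // a stray '\r' means a malformed line
        } else if (temp == '\n') {
            // a '\n' whose '\r' arrived in the previous read
            if (checked_idx > 1 && buf[checked_idx - 1] == '\r') {
                buf[checked_idx - 1] = '\0';
                buf[checked_idx++] = '\0';
                return LINE_OK;
            }
            return LINE_BAD;
        }
    }
    return LINE_OPEN;                                // no line terminator found yet
}
```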
The master state machine starts in CHECK_STATE_REQUESTLINE, and parse_request_line() is called to parse the request line and obtain the HTTP request method, target URL, and HTTP version number; the state then changes to CHECK_STATE_HEADER.
On the next pass through the loop, parse_headers() is called to parse the request headers. It first determines whether the line is a blank line or a header line; a blank line further distinguishes POST from GET, while a header line updates the connection state (keep-alive or not), the Host field, and so on.
Note: One of the differences between GET and POST request messages is whether there is a message body.
For a POST request, the state machine also needs to enter CHECK_STATE_CONTENT to extract the information (user name and password) from the message body.
Reference links:
Mp.weixin.qq.com/s/wAQHU-QZi…
Returning the response message
Once the request message has been parsed, the server knows whether the user wants to log in or register; it then needs to jump to the corresponding page, add the user name or authenticate the user, and so on, write the corresponding data into the response message, and return it to the browser. The flow chart is as follows:
After process_read() has parsed the request message, the state machine calls do_request(), which implements the functional logic. This function concatenates the website root directory with the requested URL and then checks the file's attributes with stat(). The URL can be abstracted as ip:port/xxx, where xxx is set through the action attribute of the HTML form in the page. m_url is the resource path parsed from the request message; it starts with /, i.e. it is the /xxx part. The project distinguishes eight cases of m_url, handled in do_request(); part of the code is shown below:
```cpp
// functional logic unit
http_conn::HTTP_CODE http_conn::do_request() {
    strcpy(m_real_file, doc_root);
    int len = strlen(doc_root);
    //printf("m_url:%s\n", m_url);
    const char *p = strrchr(m_url, '/');

    // handle the CGI case (login / register POST)
    if (cgi == 1 && (*(p + 1) == '2' || *(p + 1) == '3')) {
        // the flag tells us whether this is a login check or a registration
        char flag = m_url[1];

        char *m_url_real = (char *)malloc(sizeof(char) * 200);
        strcpy(m_url_real, "/");
        strcat(m_url_real, m_url + 2);
        strncpy(m_real_file + len, m_url_real, FILENAME_LEN - len - 1);
        free(m_url_real);

        // extract the user name and password from the body: user=123&passwd=123
        char name[100], password[100];
        int i;
        for (i = 5; m_string[i] != '&'; ++i)
            name[i - 5] = m_string[i];
        name[i - 5] = '\0';

        int j = 0;
        for (i = i + 10; m_string[i] != '\0'; ++i, ++j)
            password[j] = m_string[i];
        password[j] = '\0';

        if (*(p + 1) == '3') {
            // registration: first check whether the name already exists in the database;
            // if not, insert the new record
            ...
            if (users.find(name) == users.end()) {
                m_lock.lock();
                int res = mysql_query(mysql, sql_insert);
                users.insert(pair<string, string>(name, password));
                m_lock.unlock();

                if (!res)
                    strcpy(m_url, "/log.html");
                else
                    strcpy(m_url, "/registerError.html");
            }
            else
                strcpy(m_url, "/registerError.html");
        }
        ...
    }
    ...
}
```
The stat() function obtains a file's type and size; mmap() maps a file into memory to speed up access (see articles on how mmap works); struct iovec describes one buffer of an I/O vector and is usually used as an array of several elements; writev() performs a gathered (vectored) write, see the linked reference.
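The following is a minimal sketch (illustrative names, no partial-write or error handling) of the idea behind these calls: map the requested file with mmap() and hand both the header buffer and the mapped file to writev() in one gathered write:

```cpp
// Sketch: send "headers + file" with a single gathered write, in the spirit of the
// project's two-iovec response. A real server must also handle partial writes and errors.
#include <sys/mman.h>
#include <sys/stat.h>
#include <sys/uio.h>
#include <fcntl.h>
#include <unistd.h>
#include <cstring>

void send_file(int connfd, const char *path, const char *headers) {
    struct stat st{};
    int filefd = open(path, O_RDONLY);
    fstat(filefd, &st);                    // obtain the file size

    // map the file into memory instead of read()ing it into a buffer
    char *file_addr = (char *)mmap(nullptr, st.st_size, PROT_READ, MAP_PRIVATE, filefd, 0);
    close(filefd);

    struct iovec iv[2];
    iv[0].iov_base = (void *)headers;      // status line + headers (what m_write_buf holds)
    iv[0].iov_len  = strlen(headers);
    iv[1].iov_base = file_addr;            // the mmap'ed file body (what m_file_address holds)
    iv[1].iov_len  = st.st_size;
    writev(connfd, iv, 2);                 // gathered write: both pieces in one call

    munmap(file_addr, st.st_size);
}
```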
After do_request() finishes, the worker thread calls process_write() to generate the response message (add_status_line(), add_headers(), and so on). During response generation, add_response() is called to update m_write_idx and m_write_buf.
Note that there are two kinds of response. If the requested file exists, the I/O vector mechanism (iovec) is used: two iovec entries are declared, the first pointing to m_write_buf and the second to the mmap'ed address m_file_address. Otherwise only one iovec is needed, pointing to m_write_buf.
In essence, the response message carries the data of an HTML file on the server, which the browser parses, renders, and displays on the page.
In addition, the verification logic for user login/registration lives in do_request(), which verifies or adds the user by querying or inserting into the MySQL database.
That concludes the detailed walkthrough of the registration/login module. Next, the thread pool, logging, timer, and other parts of the project are explored module by module.
IV. Thread pools
This section focuses on the thread pool implementation of the project. The overall framework is as follows:
Definition
The thread pool is defined as follows:
```cpp
template <typename T>
class threadpool {
public:
    /* thread_number is the number of threads in the pool,
       max_request is the maximum number of requests allowed to wait in the queue */
    threadpool(int actor_model, connection_pool *connPool,
               int thread_number = 8, int max_request = 10000);
    ~threadpool();
    bool append(T *request, int state);
    bool append_p(T *request);

private:
    /* the function run by the worker threads: it keeps taking tasks
       from the work queue and executing them */
    static void *worker(void *arg);
    void run();

private:
    int m_thread_number;          // number of threads in the pool
    int m_max_requests;           // maximum number of requests allowed in the queue
    pthread_t *m_threads;         // array of thread ids, of size m_thread_number
    std::list<T *> m_workqueue;   // the request queue
    locker m_queuelocker;         // mutex protecting the request queue
    sem m_queuestat;              // semaphore: is there a task to handle?
    connection_pool *m_connPool;  // database connection pool
    int m_actor_model;            // model switch (Reactor / Proactor)
};
```
Note that the thread pool uses template programming to make it more extensible: a variety of task types are supported.
Thread pooling requires a certain number of threads to be created up front. The most important API is:
```c
int pthread_create(pthread_t *thread_tid,            // returns the id of the newly created thread
                   const pthread_attr_t *attr,       // pointer to thread attributes, usually NULL
                   void *(*start_routine)(void *),   // address of the thread function
                   void *arg);                       // argument passed to start_routine()
```
The third argument is a function pointer to the thread function. If this function is a member function of a class, it must be made a static member function, because the required signature void *(*)(void *) does not match a non-static member function (which carries an implicit this pointer). See the reference for details.
Thread pool creation
Thread pool creation in project:
```cpp
template <typename T>
threadpool<T>::threadpool(int actor_model, connection_pool *connPool,
                          int thread_number, int max_requests)
    : m_actor_model(actor_model), m_thread_number(thread_number),
      m_max_requests(max_requests), m_threads(NULL), m_connPool(connPool) {
    if (thread_number <= 0 || max_requests <= 0)
        throw std::exception();

    // pthread_t is essentially an unsigned long integer
    m_threads = new pthread_t[m_thread_number];
    if (!m_threads)
        throw std::exception();

    for (int i = 0; i < thread_number; ++i) {
        // create thread_number threads, each running the static worker() function
        if (pthread_create(m_threads + i, NULL, worker, this) != 0) {
            delete[] m_threads;
            throw std::exception();
        }
        // detach the thread so its resources are released automatically when it exits
        if (pthread_detach(m_threads[i])) {
            delete[] m_threads;
            throw std::exception();
        }
    }
}
```
pthread_detach() is called as each thread is created because Linux threads have two states: joinable and unjoinable (detached).
If a thread is joinable, its stack and thread descriptor are not released when the thread function returns; they are only reclaimed when another thread calls pthread_join(), which blocks until the child thread finishes.
The detached (unjoinable) attribute can be specified in pthread_create()'s attr argument, or set after creation with pthread_detach(); for example, calling pthread_detach(pthread_self()) at the start of the thread function switches the thread to the detached state, so that its resources are released automatically when the thread exits. This saves the trouble of cleaning up after the thread.
Adding to the request queue
When epoll detects an active event on a descriptor, the event is put into the request queue (note the mutual exclusion) to wait for a worker thread to process it:
```cpp
template <typename T>
bool threadpool<T>::append_p(T *request) {
    m_queuelocker.lock();
    if (m_workqueue.size() >= m_max_requests) {
        m_queuelocker.unlock();
        return false;
    }
    m_workqueue.push_back(request);
    m_queuelocker.unlock();
    m_queuestat.post();   // wake up one worker thread
    return true;
}
```
The code above appends a task in Proactor mode. If you are unfamiliar with the Reactor and Proactor patterns, go back to section I, IO multiplexing. The project implements a concurrent structure based on the half-sync/half-reactor pattern. Taking Proactor mode as an example, the workflow is as follows:
- The main thread acts as an asynchronous thread and listens for events on all sockets
- If a new request comes in, the main thread receives it to get a new connection socket and registers the read/write event on that socket in the epoll kernel event table
- If a read/write event occurs on the connected socket, the main thread receives data from the socket and inserts the data into the request queue as a request object
- All worker threads sleep on the request queue; when a task arrives, they compete for it (for example via a mutex) and the winner takes it over
The diagram below shows how it works (image source: see the references):
Thread execution
When the thread pool is created, pthread_create() is pointed at the static member function worker(), and run() is called inside worker().
```cpp
template <typename T>
void *threadpool<T>::worker(void *arg) {
    // arg is the `this` pointer passed to pthread_create
    threadpool *pool = (threadpool *)arg;
    // every thread in the pool calls run() after it is created
    pool->run();
    return pool;
}
```
The run() function can be seen as an event loop: it waits on m_queuestat until a post() signals that a new task has entered the request queue, then takes a task off the queue and processes it:
```cpp
template <typename T>
void threadpool<T>::run() {
    while (true) {
        m_queuestat.wait();        // block until a task is available
        m_queuelocker.lock();
        if (m_workqueue.empty()) {
            m_queuelocker.unlock();
            continue;
        }
        T *request = m_workqueue.front();
        m_workqueue.pop_front();
        m_queuelocker.unlock();
        if (!request)
            continue;
        // ... the thread now processes the task
    }
}
```
**Note:** run() is entered once for every pthread_create() call; each thread is independent and sleeps on the work queue, waking up to compete for a task only when the semaphore is posted.
V. Timer
How it works
If a client stays connected to the server for a long time without exchanging any data, the connection is meaningless and merely occupies server resources. The server therefore needs a way to detect such meaningless connections and deal with them.
In addition to handling inactive connections, the server has timed events such as closing file descriptors.
To do this, the server assigns a timer to each event.
This project uses the SIGALRM signal to implement the timers. Every timed event lives in an ascending-order linked list; the alarm() function triggers SIGALRM periodically, the signal handler notifies the main loop through a pipe, and after receiving the notification the main loop processes the timers on the list: if a connection has exchanged no data within the allowed period, it is closed.
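A minimal sketch of this "unified event source" trick is shown below: the signal handler only writes the signal number into a pipe, and the main epoll loop treats the read end of the pipe like any other readable descriptor. The descriptor names and the TIMESLOT constant mentioned in the comment are assumptions for illustration:

```cpp
// Sketch of the self-pipe trick used for SIGALRM: the handler does almost nothing;
// the real timer work happens in the main loop when pipefd[0] becomes readable.
#include <sys/socket.h>
#include <signal.h>
#include <cstring>
#include <cerrno>

static int pipefd[2];   // pipefd[1]: written by the handler, pipefd[0]: watched by epoll

void sig_handler(int sig) {
    int save_errno = errno;              // keep the handler from disturbing errno
    int msg = sig;
    send(pipefd[1], (char *)&msg, 1, 0); // just forward the signal number to the main loop
    errno = save_errno;
}

void add_sig(int sig) {
    struct sigaction sa;
    memset(&sa, '\0', sizeof(sa));
    sa.sa_handler = sig_handler;
    sa.sa_flags |= SA_RESTART;           // restart interrupted system calls
    sigfillset(&sa.sa_mask);
    sigaction(sig, &sa, nullptr);
}

// In main() (sketch): socketpair(PF_UNIX, SOCK_STREAM, 0, pipefd); register pipefd[0]
// with epoll; add_sig(SIGALRM); alarm(TIMESLOT). Then every TIMESLOT seconds the main
// loop sees EPOLLIN on pipefd[0], sets a timeout flag, and runs the timer handler.
```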
For low-level API parsing in this section, I recommend reading the source code comments I added or referring to the president’s article.
Code and block diagrams
Because the call paths of the timer code are complicated, this block diagram helps to understand it:
In words:
The server first creates the timer container (a linked list), and then uses a unified event source to handle exception events, read/write events, and signal events in the same way; the timers are used according to the logic of each kind of event.
Specifically: when a browser connects to the server, a timer is created for that connection and added to the timer container list;
when an exception event is handled, the timed event is executed, i.e. the server closes the connection and removes the corresponding timer from the list;
when the timing signal is handled, the timeout flag is set to true so that the timer-handling function will be executed;
when read/write events are handled, if data arrives on a connection or the connection sends data to the browser, the corresponding timer is pushed further back; otherwise the timed event is executed.
VI. Log system
To record the server's running status, error information, access data, and so on, a logging system is needed. This project uses the singleton pattern to create the logging system. The block diagram of this part is as follows (original from the president's article):
According to the figure above, the system has two writing modes: synchronous and asynchronous.
In asynchronous mode, the producer-consumer model is encapsulated as a blocking queue and a dedicated writer thread is created: worker threads push the content to be written into the queue, and the writer thread takes it out of the queue and writes it to the log file. In synchronous mode, the content is formatted and written to the log file directly.
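A condensed sketch of the producer-consumer idea behind the asynchronous mode is shown below. It is a simplified stand-in for the project's blocking queue, written with C++11 primitives instead of the project's pthread wrappers:

```cpp
// Simplified blocking queue for asynchronous logging (illustrative, not the project's class).
#include <condition_variable>
#include <mutex>
#include <queue>
#include <string>

class BlockQueue {
public:
    void push(const std::string &item) {            // called by worker threads (producers)
        std::lock_guard<std::mutex> lk(m_mutex);
        m_queue.push(item);
        m_cond.notify_one();
    }
    std::string pop() {                             // called by the single writer thread (consumer)
        std::unique_lock<std::mutex> lk(m_mutex);
        m_cond.wait(lk, [this] { return !m_queue.empty(); });
        std::string item = m_queue.front();
        m_queue.pop();
        return item;
    }
private:
    std::queue<std::string> m_queue;
    std::mutex m_mutex;
    std::condition_variable m_cond;
};

// Writer thread body (sketch): for (;;) fputs(queue.pop().c_str(), log_file);
```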
Log files can be split by day and by number of lines.
For this part I recommend reading the source code directly, starting from log.h: first look at the synchronous write mode, then at asynchronous writing and the blocking queue.
Or refer to the president's article on the log system.
VII. Other
Database connection pool
For user connections, this project works as follows: each HTTP connection obtains a database connection, fetches the stored account and password from it for comparison (this is resource-hungry and would certainly not be done this way in a real system), and then releases the database connection.
So why create a database connection pool?
The normal database access flow is: to access the database, the system creates a database connection, performs the database operations, and then closes the connection. If the system accesses the database frequently, it has to create and tear down connections frequently; creating a database connection is a time-consuming operation and may also pose security risks to the database.
If instead a number of database connections are created centrally when the program initializes and are managed centrally for the program to use, database reads and writes become faster, and the system is more secure and reliable.
In fact, database connection pools and thread pools are basically the same idea.
The project not only implements a database connection pool, but also wraps the acquisition and release of database connections with the RAII mechanism, avoiding manual release.
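A minimal sketch of what such a RAII wrapper can look like: the constructor takes a connection from the pool and the destructor returns it, so the connection is released even if the calling code returns early. The class and method names below are illustrative assumptions, and the pool is a trivial stand-in rather than the project's connection pool:

```cpp
// Sketch of the RAII idea: acquire a connection in the constructor, release it in the destructor.
#include <mysql/mysql.h>

struct connection_pool {                             // trivial stand-in, not the project's pool
    MYSQL *GetConnection() { return mysql_init(nullptr); }      // placeholder acquisition
    void ReleaseConnection(MYSQL *conn) { mysql_close(conn); }  // placeholder release
};

class connectionRAII {
public:
    connectionRAII(MYSQL **conn, connection_pool *pool) : m_pool(pool) {
        *conn = pool->GetConnection();               // hand the raw connection back to the caller
        m_conn = *conn;
    }
    ~connectionRAII() {
        m_pool->ReleaseConnection(m_conn);           // always returned, even on early return
    }
private:
    MYSQL *m_conn;
    connection_pool *m_pool;
};

// Usage sketch inside an HTTP handler:
//   MYSQL *mysql = nullptr;
//   connectionRAII guard(&mysql, pool);             // released automatically at end of scope
//   mysql_query(mysql, "SELECT username, passwd FROM user");
```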
This part is easier to understand, and it is recommended to read the source code directly.
Encapsulating synchronization classes
To make RAII-style use of synchronization primitives convenient, the project wraps the pthread library and implements mutex and condition-variable classes similar to those in C++11.
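A condensed sketch of these wrappers is shown below: a locker class owning a pthread_mutex_t and a sem class wrapping a POSIX semaphore (the project's lock folder also contains a condition-variable wrapper; the versions here are simplified):

```cpp
// Condensed sketch of RAII-style pthread wrappers, simplified relative to the lock/ classes.
#include <pthread.h>
#include <semaphore.h>
#include <exception>

class locker {                                  // mutex wrapper
public:
    locker()  { if (pthread_mutex_init(&m_mutex, nullptr) != 0) throw std::exception(); }
    ~locker() { pthread_mutex_destroy(&m_mutex); }
    bool lock()   { return pthread_mutex_lock(&m_mutex) == 0; }
    bool unlock() { return pthread_mutex_unlock(&m_mutex) == 0; }
private:
    pthread_mutex_t m_mutex;
};

class sem {                                     // counting semaphore wrapper
public:
    sem(int value = 0) { if (sem_init(&m_sem, 0, value) != 0) throw std::exception(); }
    ~sem() { sem_destroy(&m_sem); }
    bool wait() { return sem_wait(&m_sem) == 0; }   // blocks until the count is positive
    bool post() { return sem_post(&m_sem) == 0; }   // wakes one waiting thread
private:
    sem_t m_sem;
};
```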
You can learn this part by reading the source code in the lock folder.
References
(Main reference) The president's own articles:
Github.com/qinguoyi/Ti…
TinyWebServer
Huixxi. Making. IO / 2020/06/02 /…
Book.douban.com/subject/247…
Baike.baidu.com/item/WEB%E6…
Comparison of mainstream servers:
www.cnblogs.com/sammyliu/ar…
Blog.csdn.net/u010066903/…
Project Address:
Github.com/qinguoyi/Ti…