Implementing a C++ network library touches on many aspects of network programming, such as non-blocking IO and IO multiplexing, the idea behind the Reactor model, the role of application-layer Buffers, and the one-loop-per-thread model built on a thread pool. Muduo is a multithreaded TCP network library developed by Chen Shuo. This article records my thinking after studying the muduo source code and summarizes some of the logic involved in designing a C++ network library, for reference.
Network library fundamentals
Non-blocking IO and IO multiplexing
The difference between blocking IO and non-blocking IO is whether the program blocks and waits while the IO is not ready. IO multiplexing refers to monitoring multiple descriptors through some mechanism (usually epoll), so that once a descriptor becomes ready the program can be notified to perform the corresponding reads and writes.
Epoll principle
Internally, epoll maintains the monitored descriptors in a red-black tree and the ready descriptors in a linked list, and epoll_wait only inspects the ready list. The key to epoll's efficiency is that it registers interrupt callbacks in the kernel, which asynchronously place descriptors on the ready list as they become ready.
Reactor model
The IO multiplexing mechanism relies on an event dispatcher. The dispatcher object is responsible for distributing requested events to the corresponding event handlers, which must register their callback functions in advance. The dispatcher captures IO ready events and dispatches each ready event to its event handler, which performs the actual IO operation.
Process Oriented Implementation (single thread)
The server creates a listenfd, sets it to non-blocking, and starts listening. It then creates an epollfd, registers listenfd with the epollfd, and calls epoll_wait to block and wait. When listenfd has a readable event, epoll_wait wakes up; the program calls accept to obtain a connfd, sets connfd to non-blocking, registers connfd and its events of interest with the epollfd, and loops back into epoll_wait to block and wait again. When a read or write event occurs on connfd, epoll_wait wakes up and the program invokes the corresponding read/write handling logic.
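A minimal sketch of this single-threaded flow, written as an echo server (error handling mostly omitted; the echo reply stands in for real business logic):

```cpp
#include <sys/epoll.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <fcntl.h>
#include <unistd.h>

static void setNonBlocking(int fd) {
    fcntl(fd, F_SETFL, fcntl(fd, F_GETFL) | O_NONBLOCK);
}

int main() {
    int listenfd = socket(AF_INET, SOCK_STREAM, 0);
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(8888);
    bind(listenfd, (sockaddr*)&addr, sizeof(addr));
    setNonBlocking(listenfd);
    listen(listenfd, SOMAXCONN);

    int epollfd = epoll_create1(0);
    epoll_event ev{};
    ev.events = EPOLLIN;
    ev.data.fd = listenfd;
    epoll_ctl(epollfd, EPOLL_CTL_ADD, listenfd, &ev);      // register listenfd

    epoll_event ready[64];
    while (true) {
        int n = epoll_wait(epollfd, ready, 64, -1);        // block until some fd is ready
        for (int i = 0; i < n; ++i) {
            int fd = ready[i].data.fd;
            if (fd == listenfd) {                          // readable listenfd: new connection
                int connfd = accept(listenfd, nullptr, nullptr);
                setNonBlocking(connfd);
                epoll_event cev{};
                cev.events = EPOLLIN;
                cev.data.fd = connfd;
                epoll_ctl(epollfd, EPOLL_CTL_ADD, connfd, &cev);
            } else {                                       // readable connfd: echo the data back
                char buf[4096];
                ssize_t len = read(fd, buf, sizeof(buf));  // LT mode: leftover data fires again
                if (len > 0) {
                    write(fd, buf, len);
                } else if (len == 0) {                     // peer closed the connection
                    epoll_ctl(epollfd, EPOLL_CTL_DEL, fd, nullptr);
                    close(fd);
                }
            }
        }
    }
}
```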
Object-oriented Abstract Design (multithreading)
Basic concepts
Network libraries generally listen for three types of events: network IO events, timer events, and cross-thread wakeup events. Timer events are used to handle control logic inside the network library, such as disconnecting idle connections on timeout. The wakeup event is a necessary notification mechanism: it lets another thread wake up a loop that is blocked in IO multiplexing.
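muduo implements this wakeup with an eventfd that the loop watches like any other descriptor; a minimal sketch:

```cpp
#include <sys/eventfd.h>
#include <unistd.h>
#include <cstdint>

// Each loop creates an eventfd and registers it with its poller
// alongside the network fds.
int createWakeupFd() {
    return ::eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC);
}

// Another thread wakes the loop by writing 8 bytes: the eventfd becomes
// readable, so the loop's epoll_wait returns.
void wakeup(int wakeupfd) {
    uint64_t one = 1;
    ::write(wakeupfd, &one, sizeof(one));
}

// The loop thread drains the counter in the eventfd's read handler.
void handleWakeupRead(int wakeupfd) {
    uint64_t count = 0;
    ::read(wakeupfd, &count, sizeof(count));
}
```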
Network library threads generally fall into several categories: IO threads handle connections and their requests; computation threads perform the heavy operations a request requires; other threads include asynchronous logging threads and various business threads.
The core design
The server side of a multithreaded C++ TCP network library based on the Reactor model mainly consists of the following core components:
- EventLoop
- Channel
- Poller
- TcpServer
- Acceptor
- TcpConnection
- EventLoopThreadPool
EventLoop is an abstraction of the IO-multiplexing wait-and-dispatch cycle. Each thread binds to one EventLoop instance and runs its loop function, which implements the cycle of waiting in IO multiplexing, collecting ready events, and handling them through callback functions. EventLoop instances come in two kinds in the network library: the main-loop, which listens on listenfd and accepts connections, and the io-loops, which handle the data requests of the connections assigned to their threads.
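A skeleton of the loop function, assuming Channel and Poller interfaces like the ones sketched in the next sections (simplified; muduo's version adds more bookkeeping and thread checks):

```cpp
#include <vector>

// Minimal declarations of the collaborators sketched in the following sections.
class Channel { public: void handleEvent(); };
class Poller  { public: void poll(int timeoutMs, std::vector<Channel*>* activeChannels); };

class EventLoop {
public:
    void loop() {
        quit_ = false;
        while (!quit_) {
            activeChannels_.clear();
            // Block in IO multiplexing until some fd is ready or the timeout
            // expires; the Poller fills activeChannels_ with the ready Channels.
            poller_->poll(kPollTimeMs, &activeChannels_);
            for (Channel* channel : activeChannels_) {
                channel->handleEvent();       // dispatch to the registered callbacks
            }
            doPendingFunctors();              // run tasks queued by other threads
        }
    }

private:
    void doPendingFunctors();
    static constexpr int kPollTimeMs = 10000;
    bool quit_ = false;
    Poller* poller_ = nullptr;
    std::vector<Channel*> activeChannels_;
};
```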
A Channel object manages the IO events of one fd. A Channel is associated with a single fd, which can be a listenfd, eventfd, timerfd, and so on, and is bound to one EventLoop instance. A Channel carries the IO event handlers, which are executed on the thread that runs the bound EventLoop instance.
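A sketch of the Channel interface (names follow muduo's style, but this is a simplification, not its exact code):

```cpp
#include <functional>
#include <sys/epoll.h>

class EventLoop;

class Channel {
public:
    using Callback = std::function<void()>;

    Channel(EventLoop* loop, int fd) : loop_(loop), fd_(fd) {}

    int fd() const { return fd_; }
    void setReadCallback(Callback cb)  { readCallback_  = std::move(cb); }
    void setWriteCallback(Callback cb) { writeCallback_ = std::move(cb); }

    void enableReading() { events_ |= EPOLLIN;  update(); }   // register interest with the Poller
    void enableWriting() { events_ |= EPOLLOUT; update(); }
    void setRevents(int revents) { revents_ = revents; }      // filled in by the Poller

    // Runs on the thread of the bound EventLoop.
    void handleEvent() {
        if ((revents_ & EPOLLIN)  && readCallback_)  readCallback_();
        if ((revents_ & EPOLLOUT) && writeCallback_) writeCallback_();
    }

private:
    void update();            // forwards this Channel's events to the loop's Poller

    EventLoop* loop_;
    const int fd_;
    int events_  = 0;         // events of interest
    int revents_ = 0;         // events that actually occurred
    Callback readCallback_, writeCallback_;
};
```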
Poller is the IO-multiplexing abstraction; it provides interfaces for updating the events of interest and for waiting for ready events. EpollPoller inherits from Poller, encapsulates the epoll-related system calls, and implements the Poller interface.
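The interface might be sketched as follows (simplified from muduo's design):

```cpp
#include <vector>

class Channel;

class Poller {
public:
    virtual ~Poller() = default;
    // Block waiting for events; fill activeChannels with the ready Channels.
    virtual void poll(int timeoutMs, std::vector<Channel*>* activeChannels) = 0;
    // Start/adjust/stop watching the fd and events a Channel is interested in
    // (implemented with epoll_ctl ADD/MOD/DEL in EpollPoller).
    virtual void updateChannel(Channel* channel) = 0;
    virtual void removeChannel(Channel* channel) = 0;
};
```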
TcpServer is the abstraction of a TCP service. It contains the Acceptor and the EventLoop thread pool, and it manages all of its connections.
The Acceptor creates the listenfd and listens on it. It contains a Channel object that is associated with listenfd and bound to the main-loop.
TcpConnection is the abstraction of one connection between a client and the TCP server.
EventLoopThreadPool combines EventLoop with a thread pool. It is initialized when the TcpServer instance is created and holds the IO threads. When a TcpConnection object is created, it is bound to one IO thread, which handles all requests on that connection.
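Binding a connection to a thread comes down to picking a loop; a sketch of round-robin selection (simplified; muduo's EventLoopThreadPool::getNextLoop works along these lines):

```cpp
#include <vector>
#include <cstddef>

class EventLoop;

class EventLoopThreadPool {
public:
    EventLoop* getNextLoop() {
        EventLoop* loop = baseLoop_;                // no IO threads: fall back to the main-loop
        if (!loops_.empty()) {
            loop = loops_[next_];
            next_ = (next_ + 1) % loops_.size();    // round-robin over the IO threads
        }
        return loop;
    }

private:
    EventLoop* baseLoop_ = nullptr;                 // the main-loop
    std::vector<EventLoop*> loops_;                 // one loop per IO thread
    std::size_t next_ = 0;
};
```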
Walking through the logic
How TcpServer is associated with EventLoop
The demo creates an EventLoop instance and passes it as a parameter to a TcpServer instance. The TcpServer instance calls its start function, and the EventLoop instance calls its loop function.
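The wiring looks roughly like this (an illustrative sketch using the components described above, not muduo's exact API; InetAddress is a hypothetical address wrapper, and onConnection/onMessage are the business callbacks shown later):

```cpp
int main() {
    EventLoop loop;                                // main-loop, bound to this thread
    TcpServer server(&loop, InetAddress(8888));    // hypothetical address wrapper
    server.setConnectionCallback(onConnection);
    server.setMessageCallback(onMessage);
    server.start();                                // Acceptor begins listening
    loop.loop();                                   // block in the event loop
}
```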
During the initialization of the Acceptor object, listenfd and its events of interest are registered with epoll. The TcpServer instance calls its start function to begin listening. After a client initiates a connection, listenfd has a readable event on the main-loop, and the read handler of listenfd's Channel is called. That read handler wraps the Acceptor's accept logic. After the new connfd is obtained, a TcpConnection object is created. The TcpConnection object contains a Channel object, which is associated with connfd and bound to an IO thread; the IO thread is selected from the EventLoopThreadPool using a round-robin algorithm. Finally, connfd and its events of interest are registered with the Poller of that IO thread.
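A sketch of this new-connection path, as a TcpServer member function running on the main-loop thread (the member names are modeled on muduo's, but this is a simplification):

```cpp
void TcpServer::newConnection(int connfd, const InetAddress& peerAddr) {
    EventLoop* ioLoop = threadPool_->getNextLoop();        // round-robin pick
    auto conn = std::make_shared<TcpConnection>(ioLoop, connfd, peerAddr);
    connections_[connfd] = conn;                           // TcpServer tracks all connections
    conn->setConnectionCallback(connectionCallback_);      // hand down the business callbacks
    conn->setMessageCallback(messageCallback_);
    // Register connfd and its events with the IO thread's Poller, on that thread.
    ioLoop->runInLoop([conn] { conn->connectEstablished(); });
}
```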
How data is read and written
When connfd has a ready IO event, the corresponding io-loop returns from its wait, collects all Channels on that thread that have events to handle, and then invokes each Channel's corresponding callbacks to do the reading and writing.
Reading and writing data requires application-layer Buffers. A read may not be able to drain the kernel buffer in one call, so the data read should be saved into an application-layer receive Buffer; this also makes it possible to deal with TCP's byte-stream framing (the sticky-packet problem). Likewise, a write may not be able to put all the data into the kernel buffer at once, so there should be an application-layer send Buffer: when data cannot be fully written to the kernel, the remainder is appended to the send Buffer and the POLLOUT event is watched in epoll's LT mode. Each POLLOUT event takes data from the send Buffer and writes it into the kernel buffer, until the send Buffer is fully drained, at which point the POLLOUT event is unregistered.
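A sketch of the send path with an application-layer buffer (a simplified member function; the Buffer is assumed to offer append/peek/retrieve, as muduo's does):

```cpp
void TcpConnection::send(const char* data, size_t len) {
    ssize_t written = 0;
    // If nothing is queued yet, try writing to the kernel buffer directly.
    if (!channel_->isWriting() && outputBuffer_.readableBytes() == 0) {
        written = ::write(channel_->fd(), data, len);
        if (written < 0) written = 0;                 // EAGAIN etc. glossed over in this sketch
    }
    if (static_cast<size_t>(written) < len) {
        // Stash whatever the kernel did not accept and watch POLLOUT (LT mode).
        outputBuffer_.append(data + written, len - written);
        channel_->enableWriting();
    }
}
```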
Events that the business layer needs to care about
- Connection establishment: OnConnection
- Connection teardown: OnConnection
- Message arrival: OnMessage
- Message sent (the "half" event): OnWriteComplete; low-traffic services usually need not care about it
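For example, the business layer of an echo service might implement the first three like this (signatures simplified; muduo's message callback also passes a receive timestamp):

```cpp
// Illustrative business-layer callbacks for an echo service.
void onConnection(const TcpConnectionPtr& conn) {
    if (conn->connected()) { /* connection established */ }
    else                   { /* connection torn down  */ }
}

void onMessage(const TcpConnectionPtr& conn, Buffer* buf) {
    std::string msg = buf->retrieveAllAsString();   // take everything that has arrived
    conn->send(msg);                                // echo it back
}
```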
For message arrival events:
When data arrives on connfd, it is first received by the kernel and stored in the kernel buffer. A readable event then fires in the network library's event loop, the data is read from the kernel buffer into the application-layer Buffer, and the network library calls back the OnMessage function, which handles the message-arrival event at the business level.
If the received data forms a complete application-level packet, it is taken out and processed directly. If the packet is incomplete, OnMessage returns immediately. The next time the kernel receives data, the readable event in the network library's event loop fires again, and this continues until OnMessage determines that the packet is complete.
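A sketch of an OnMessage that copes with incomplete packets, assuming a length-prefixed protocol with a 4-byte header giving the body length (the Buffer methods match muduo's; handleRequest is a hypothetical business handler):

```cpp
void onMessage(const TcpConnectionPtr& conn, Buffer* buf) {
    while (buf->readableBytes() >= 4) {                        // at least a full header
        int32_t len = buf->peekInt32();                        // read length without consuming
        if (buf->readableBytes() < 4 + static_cast<size_t>(len))
            break;                                             // incomplete packet: wait for more data
        buf->retrieve(4);                                      // consume the header
        std::string body = buf->retrieveAsString(len);         // consume the body
        handleRequest(conn, body);                             // complete packet: process it
    }
}
```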
For message-sent (write-complete) events:
When the application layer sends data, if the kernel send buffer is large enough, all the data to be sent is written into the kernel buffer at once and the send-completion event fires: the network library calls back the OnWriteComplete function to indicate that the message has been sent.
If the kernel send buffer is not large enough, part of the data is written into the kernel buffer and the rest is appended to the application-layer send Buffer. After the kernel sends out its buffered data, a writable event fires; its handler continues moving data from the application-layer send Buffer into the kernel send buffer (again possibly only partially, if the kernel buffer is still too small) until the application-layer send Buffer is fully drained, at which point the send-completion event fires in the network library's event loop and OnWriteComplete is called back to indicate that the message has been sent.
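A sketch of that writable-event handler (a simplified member function; the Buffer and Channel methods match muduo's general shape):

```cpp
void TcpConnection::handleWrite() {
    // Move as much of the application-layer send Buffer as the kernel will take.
    ssize_t n = ::write(channel_->fd(),
                        outputBuffer_.peek(),
                        outputBuffer_.readableBytes());
    if (n > 0) {
        outputBuffer_.retrieve(n);                 // drop what the kernel accepted
        if (outputBuffer_.readableBytes() == 0) {  // fully drained
            channel_->disableWriting();            // stop watching POLLOUT
            if (writeCompleteCallback_)            // the send-completion event
                writeCompleteCallback_(shared_from_this());
        }
    }
}
```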
How the business layer's processing logic is passed to the network library
The three-and-a-half event callback functions are implemented at the business layer and passed to the TcpServer instance (which may provide defaults), which in turn passes them to each TcpConnection object when it is created. The OnConnection callback is embedded in the TCP connection establishment and teardown handlers, and the OnMessage callback is embedded in the TCP connection's read handler; each is invoked in the appropriate situation.
The TcpConnection object passes its connection-teardown, read/write, and error handling functions to the Channel it owns, so that when the io-loop finds an event ready on that Channel, the Channel can invoke the corresponding callback.
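A sketch of that wiring in the TcpConnection constructor (names modeled on muduo's, simplified; muduo's read callback additionally receives a timestamp):

```cpp
TcpConnection::TcpConnection(EventLoop* loop, int connfd)
    : loop_(loop), channel_(new Channel(loop, connfd)) {
    channel_->setReadCallback([this]  { handleRead();  });   // fd readable
    channel_->setWriteCallback([this] { handleWrite(); });   // fd writable
    channel_->setCloseCallback([this] { handleClose(); });   // peer closed
    channel_->setErrorCallback([this] { handleError(); });   // error on fd
}
```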
Because of abstraction and layering, callback functions are needed as intermediaries, and good callback design relies on a high degree of abstraction.