The caching principle of the Nginx server is an important topic to master. Understanding it thoroughly will improve your skills well beyond Nginx itself: once the underlying theory is clear, it also helps a great deal when learning other technologies.
The main idea of Web caching
The basic idea of Web caching is to exploit the temporal locality of client access: the Nginx server keeps a local copy of content a client has already requested, so that later requests for the same data within a certain period can be answered without forwarding them through the Nginx server to the backend server. This reduces network traffic between the Nginx server and the backend server, eases network congestion, lowers data-transfer latency, and speeds up user access. At the same time, the cached copies on the Nginx server can still answer the relevant requests when the backend server is down, which improves the robustness of the overall service.
Nginx cache implementation principle
Proxy Store-based caching mechanism
01 404-error-driven caching
When the Nginx server finds that the requested data does not exist locally, a 404 error is raised. The server can catch this error and forward the request to the backend server; the backend's response is then returned to the client and, at the same time, stored in the server's local cache.
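As a rough illustration, a configuration for this approach could look like the following minimal sketch. The backend address 127.0.0.1:8080 and the cache directory /var/www/cache are assumptions, and the snippet simply follows the common error_page plus proxy_store pattern rather than reproducing the original article's configuration.

```nginx
location / {
    root /var/www/cache;                 # local cache directory (assumed path)
    error_page 404 = @fetch;             # a cache miss raises 404, which jumps to @fetch
}

location @fetch {
    proxy_pass http://127.0.0.1:8080;    # hypothetical backend server
    proxy_store on;                      # store the proxied response under root
    proxy_store_access user:rw group:rw all:r;
    proxy_temp_path /var/www/cache/tmp;  # temporary file location while downloading
    root /var/www/cache;
}
```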
02 Caching driven by a resource-existence check
In principle, this method is basically equivalent to the 404-error-driven one. The difference is that it uses an if condition inside the location block to test whether the resource exists, and drives the communication between the Nginx server and the backend server (and the Web caching) directly, without generating a 404 error when the resource is missing.
Configuration file snippet:
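A minimal sketch of such a configuration, assuming the same hypothetical backend address and cache directory as above; the existence test uses the standard !-e check on $request_filename:

```nginx
location / {
    root /var/www/cache;                       # local cache directory (assumed path)
    proxy_store on;                            # store fetched responses under root
    proxy_store_access user:rw group:rw all:r;
    proxy_temp_path /var/www/cache/tmp;

    # if the requested file does not exist locally, fetch it from the backend
    if (!-e $request_filename) {
        proxy_pass http://127.0.0.1:8080;      # hypothetical backend server
    }
}
```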
Both of these caching mechanisms can only cache responses with a 200 status and do not support requests with dynamic parameters: for example, getsource?id=1 and getsource?id=2 would be treated as the same resource and both return the same cached copy. For this reason, caching is in practice often implemented with an Nginx plus Squid server architecture instead.
Memcached-based caching mechanism
Memcached allocates a region of memory and maintains a Hash table in it; the cached data is managed in that table as key/value pairs. Memcached consists of two core components: the server and the client. The client first computes a hash of the key to determine which server holds the key/value pair; once that server is determined, the client sends a query request to it, and the server looks up and returns the requested data.
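Nginx can read such cached objects directly through its memcached module. The following is a minimal sketch of that idea; the key layout, the memcached address 127.0.0.1:11211, and the backend address 127.0.0.1:8080 are assumptions, and the backend application is expected to populate the cache itself:

```nginx
location / {
    set $memcached_key "$uri?$args";        # key layout is an assumption
    memcached_pass 127.0.0.1:11211;         # hypothetical memcached server
    default_type text/html;                 # memcached stores no content type
    error_page 404 502 504 = @backend;      # cache miss or memcached failure
}

location @backend {
    proxy_pass http://127.0.0.1:8080;       # hypothetical backend server
}
```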
That concludes the discussion of Nginx server caching; I hope you have found something useful in this article. If you have a better idea, let us know in the comments.