— All mind maps are my own work; please do not reuse them without permission —

A review of Nginx fundamentals

Definition

Nginx is a high-performance HTTP and reverse-proxy web server.

Functions

  1. HTTP server
  2. Reverse proxy server
  3. Load-balancing server
  4. Dynamic/static resource separation

Characteristics

  • Cross-platform (runs on most Unix-like systems)
  • Easy to use, with simple configuration
  • High stability and a low probability of downtime
  • High performance; withstands heavy load (supports 50,000+ concurrent connections)
  • Very low memory and CPU usage

Forward proxy

A client can use a forward proxy server to access websites it cannot reach directly.

Reverse proxy

Browser client == sends request to ==> reverse proxy server (such as Nginx)

The reverse proxy server selects an origin server to handle the request, and the resulting response is ultimately returned to the client browser.
  • The browser does not need any special configuration
  • The origin server that actually handles the request is not visible to the client browser
  • Nginx + Tomcat act as a whole (the server side)
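The flow above can be sketched with a minimal reverse-proxy configuration (a fragment; the events block and other boilerplate are omitted, and the upstream address 127.0.0.1:8080 for a local Tomcat is an assumption for illustration):

```nginx
http {
    server {
        listen 80;
        server_name localhost;

        location / {
            # Forward every request to the origin server (assumed to be
            # a Tomcat instance on 127.0.0.1:8080); the client browser
            # only ever talks to Nginx.
            proxy_pass http://127.0.0.1:8080;
        }
    }
}
```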

Load-balancing server

With a reverse proxy in front of multiple target servers, the process of deciding which machine should handle a given request is called load balancing.

The core is to solve the problem of high load
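As a sketch, load balancing in Nginx is configured through an upstream group; the backend addresses below are placeholders:

```nginx
http {
    # A named pool of target servers; Nginx picks one for each request
    # (round-robin by default).
    upstream backend {
        server 127.0.0.1:8080;
        server 127.0.0.1:8081;
    }

    server {
        listen 80;
        location / {
            proxy_pass http://backend;
        }
    }
}
```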

Dynamic and static separation

Separate the handling of dynamic and static resources on the target server.

For example, since Tomcat is not good at serving static resources, the handling of static resources can be delegated to a server such as Nginx.

Nginx installation

It is recommended to install Nginx in a Unix-like environment; for example, a cloud server purchased on UCloud is sufficient for a learning environment.

Use yum to install the dependencies

yum -y install gcc zlib zlib-devel pcre-devel openssl openssl-devel

Upload the downloaded Nginx package to your server.

Go to the directory containing the package, extract it, and run the following commands:

  1. tar -xvf nginx-1.18.0.tar.gz
  2. ./configure
  3. make
  4. make install

After successful execution, the installed files appear under the installation directory (by default /usr/local/nginx).

Go to nginx/sbin/, where you can see the nginx binary, and start it.

Access the address of the machine where Nginx is installed (the default port is 80).

Common commands

1. Stop nginx

./nginx -s stop

2. Reload nginx

After changing the configuration file, reload it with ./nginx -s reload

Then access it again (note: readers need to configure their firewall to allow the relevant ports).

Nginx core configuration file interpretation

The Nginx core configuration file consists of three parts: the global block, the Events block, and the HTTP block

Part 1 Global block

From the start of the configuration file to the events block; configuration here affects the overall running of the Nginx server, such as the number of worker processes and the location of the error log.

Part 2 Events block

The events block mainly affects the network connections between the Nginx server and its users; for example, worker_connections 1024 indicates that the maximum number of connections supported by each worker process is 1024.

Part 3 HTTP block

The HTTP block is the most frequently configured part: virtual host configuration, listening ports, request forwarding, reverse proxying, load balancing, and so on.
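A minimal sketch of this three-part layout (the values shown are the common defaults, used here for illustration):

```nginx
# ---- Global block: affects the server as a whole ----
worker_processes  1;               # number of worker processes
error_log  logs/error.log;         # location of the error log

# ---- Events block: networking between Nginx and users ----
events {
    worker_connections  1024;      # max connections per worker process
}

# ---- HTTP block: virtual hosts, ports, proxying, load balancing ----
http {
    server {
        listen       80;
        server_name  localhost;
    }
}
```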

Nginx reverse proxy

Part 1 Requirement Description

(Photo by Hook Education)

Part 2 Completing requirement 1

Then execute ./nginx -s reload to reload nginx.
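The configuration itself appears only as a screenshot in the original; a sketch of what such a server block typically looks like, with the Tomcat address 127.0.0.1:8080 assumed for illustration:

```nginx
server {
    listen       80;
    server_name  localhost;

    location / {
        # Requirement 1: forward every request to the target Tomcat
        # instance (address assumed for illustration).
        proxy_pass http://127.0.0.1:8080;
    }
}
```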

Part 3 Completing requirement 2

  • location syntax:

    location [ = | ~ | ~* | ^~ ] /uri/ { ... }

Then execute ./nginx -s reload to reload nginx.

server/location in Nginx play a role similar to Host/Context in Tomcat.
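As a sketch of those modifiers (= exact match, ~ case-sensitive regex, ~* case-insensitive regex, ^~ prefix match that skips regex checks); the paths and backend address are illustrative:

```nginx
server {
    listen 80;

    # = : matches the URI "/" exactly, nothing else
    location = / {
        proxy_pass http://127.0.0.1:8080;
    }

    # ^~ : prefix match; if it matches, regex locations are not checked
    location ^~ /static/ {
        root /usr/local/nginx/html;
    }

    # ~* : case-insensitive regex match (matches .jpg, .JPG, etc.)
    location ~* \.(jpg|png|gif)$ {
        root /usr/local/nginx/html;
    }
}
```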

Nginx load balancing

Part 1 Requirement Description

When too many requests hit a single target server, Nginx can be used as a load balancer to spread the load across different targets.

(Photo by Hook Education)

Part 2 Completing the requirement

  1. Unpack the Tomcat archive on your server
  2. Make two more copies of it
  3. Change their port numbers so the three instances listen on:
  • 8080
  • 8081
  • 8082
  4. Start all three instances
  5. Modify the Nginx load-balancing configuration
  6. Reload Nginx
  7. Access the /abc service through Nginx

Part 3 Nginx load-balancing policies

  • Round-robin policy (default)
By default, requests are distributed to the servers one by one in order of arrival; if a server goes down, it is automatically removed from the rotation.

    upstream lagouServer {
        server 111.229.248.243:8080;
        server 111.229.248.243:8082;
    }
    location /abc {
        proxy_pass http://lagouServer/;
    }
  • Weight policy
weight represents the weight; each load-balanced server defaults to 1. The higher the weight, the more requests the server is allocated (for scenarios where server performance is uneven).

    upstream lagouServer {
        server 111.229.248.243:8080 weight=1;
        server 111.229.248.243:8082 weight=2;
    }
  • ip_hash
Each request is allocated according to a hash of the client IP address, so every request from a given client is pinned to the same target server; this can solve the session-sharing problem.

    upstream lagouServer {
        ip_hash;
        server 111.229.248.243:8080;
        server 111.229.248.243:8082;
    }

Nginx dynamic/static separation

The classic Nginx + Tomcat combination (Nginx handles static resource requests, Tomcat handles dynamic resource requests) reverse-proxies the target Tomcat server. We can see the index.jsp of the target server's ROOT project; that is Tomcat handling a dynamic resource request.

—-> So all that remains is to configure static access.

Part 1 Requirement Description

(Photo by Hook Education)

Part 2 Completing the requirement

Modify the nginx.conf file.

Reload Nginx and access the configured static URL.
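The modified nginx.conf appears only as a screenshot in the original; a sketch of a typical dynamic/static split, with the paths and Tomcat address assumed for illustration:

```nginx
server {
    listen 80;

    # Static resources are served directly by Nginx
    # (directory assumed for illustration).
    location /static/ {
        root /usr/local/nginx;   # serves files from /usr/local/nginx/static/
    }

    # Everything else (dynamic requests) is proxied to Tomcat.
    location / {
        proxy_pass http://127.0.0.1:8080;
    }
}
```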

Nginx mechanism analysis

Nginx starts as a daemon running in the background, consisting of one Master process and multiple Worker processes.

The Master process

Manages the worker processes:

1. Receives external signals and sends signals to each worker process (e.g. ./nginx -s reload)

2. Monitors the running state of the worker processes; when a worker process exits abnormally, the Master process automatically starts a new one

Worker processes

Handle network requests.

Multiple worker processes are peers: they compete equally for client requests and are independent of one another.

A given request can only be handled within a single worker process; one worker cannot process another worker's requests.

(Photo by Hook Education)

Nginx multi-process benefits

  • Each worker process is independent, so no locking is needed, which saves locking overhead
  • Each worker process is independent and they do not affect one another; if one process exits abnormally, the others can still provide service
  • The multi-process model underpins the reload hot-deployment mechanism