Nginx fundamentals

  • What exactly is Nginx?

    Nginx is a high-performance HTTP server and reverse proxy server. Its core strengths are low memory usage and high concurrency.

  • What can Nginx do (Application Scenario)

    • HTTP server (web server): very high performance and efficiency, able to withstand heavy load. Supports up to 50,000 concurrent connections with very low CPU and memory usage; 10,000 inactive connections occupy only about 2.5 MB of memory.
    • Reverse proxy server: the browser sends its request to Nginx, which forwards it to the origin server and returns the result to the client browser (e.g. Nginx + Tomcat).
    • Forward proxy: e.g. accessing Google from inside mainland China.
    • Load balancing server: Nginx reverse-proxies multiple Tomcat instances and distributes requests among them; this process is called load balancing.
    • Static/dynamic separation: static resources are served by Nginx directly, while dynamic Servlet requests are proxied to Tomcat.

  • Nginx characteristics
    • cross-platform
    • Configuration is simple
    • High concurrency, good performance, stability, low downtime
  • Nginx main commands
    • ./nginx (run from the Nginx installation directory): start Nginx
    • nginx -s stop: stop Nginx
    • nginx -s reload: re-read the nginx.conf configuration file; the master process is not restarted

Nginx core configuration file interpretation

The Nginx core configuration file conf/nginx.conf contains three blocks: global block, Events block, and HTTP block.

  • Global block

    From the start of the configuration file to the events block. Directives here affect the overall running of the Nginx server, such as the number of worker processes and the location of the error log.

  • Events block

    The events block mainly affects the network connection between the Nginx server and its users. For example, worker_connections 1024; means each worker process supports at most 1024 connections.

  • HTTP block

    The HTTP block is the most frequently configured part: virtual host configuration, listening ports, request forwarding, reverse proxy, load balancing, and so on. ⚠️ A single http block can contain multiple server blocks.
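    As a reference, here is a minimal nginx.conf sketch showing the three blocks (the directive values are illustrative defaults, not taken from this course):

        # global block: affects the Nginx server as a whole
        worker_processes  1;
        error_log  logs/error.log;

        # events block: connection handling per worker process
        events {
            worker_connections  1024;
        }

        # http block: virtual hosts, reverse proxy, load balancing, ...
        http {
            server {
                listen       80;
                server_name  localhost;
                location / {
                    root   html;
                    index  index.html;
                }
            }
            # more server blocks can be added here
        }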

Reverse proxy in Nginx application scenarios

  • Deploy Tomcat on port 8080
  • Modify the Nginx configuration
  • nginx -s reload (reload so the new configuration takes effect)
  • Nginx server/location == Tomcat Host/Context

        location [ = | ~ | ~* | ^~ ] /uri/ { ... }

    In Nginx configuration files, location takes one of the following forms:

    1. Exact match: location = /lagou {}
    2. Path-prefix match that skips regex checks: location ^~ /lagou {}
    3. Case-insensitive regular-expression match: location ~* /lagou {}
    4. Case-sensitive regular-expression match: location ~ /lagou {}
    5. Plain path-prefix match: location /lagou {}

    Priority: from highest (1) to lowest (5) in the order listed above.
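    Putting the steps above together, a minimal reverse-proxy sketch (the Tomcat address 127.0.0.1:8080 and the /lagou path are illustrative assumptions, not from the course):

        server {
            listen       80;
            server_name  localhost;

            # requests whose path starts with /lagou are forwarded to Tomcat
            location /lagou {
                proxy_pass http://127.0.0.1:8080;
            }
        }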

Load balancing in Nginx application scenarios

1. Nginx load balancing policy

  • Round robin (default policy)

        upstream lagouServer {
            server 111.229.248.243:8080;
            server 111.229.248.243:8082;
        }
        location /abc {
            proxy_pass http://lagouServer/;
        }
  • Weighted (weight)

        upstream lagouServer {
            server 111.229.248.243:8080 weight=1;
            server 111.229.248.243:8082 weight=2;
        }
  • ip_hash: routes by client IP address, so each client always reaches the same backend server
    upstream lagouServer{
        ip_hash;
        server 111.229.248.243:8080;
        server 111.229.248.243:8082;
    }

Static/dynamic separation in Nginx application scenarios
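This section gives no details in the source; as a hedged sketch, static/dynamic separation is typically configured along these lines (the paths and the Tomcat address are illustrative assumptions):

        server {
            listen 80;

            # static resources: served by Nginx directly
            location /static/ {
                root /usr/share/nginx/html;
            }

            # dynamic Servlet requests: proxied to Tomcat
            location / {
                proxy_pass http://127.0.0.1:8080;
            }
        }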

Nginx process mechanism analysis

1. Nginx startup

Nginx runs in the background as a daemon, with one Master process and multiple Worker processes.

  • The Master process
    • Receives external signals (such as nginx -s reload) and sends signals to the Workers
    • Monitors the Workers; when a Worker exits abnormally, the Master automatically starts a new Worker
  • Worker processes
    • Handle network requests. Each Worker is independent, and a given request is handled by exactly one Worker. The number of worker processes is configurable and is usually set to the number of CPU cores on the machine.

2. Schematic diagram of Nginx process model

  • Use./ nginx-s reload to illustrate the nginx signal processing part

    1. The Master checks the syntax of the configuration file
    2. It tries to apply the new configuration (for example, if the listening port changed, it binds the new port)
    3. If this succeeds, it starts new worker processes with the new configuration
    4. Once the new workers are up, the old workers are told to shut down
    5. The old workers keep running until their in-flight requests are finished, then exit
  • The worker process processes the request

    • When a request arrives, every Worker has the potential to process it
    • After starting, the Master creates the listening socket and then forks out multiple Workers, so the listening descriptor (listenFD) of every Worker process becomes readable when a new connection arrives
    • Nginx uses a mutex to ensure that only one Worker handles each connection: the process holding the mutex registers a read event on listenFD, calls accept to accept the connection, then parses and processes the request and returns the response to the client
  • Nginx multi-process model benefits

    • Each worker process is independent, so no locking of shared state is needed, which saves overhead
    • Each worker process is independent and does not affect each other. If one process ends abnormally, the others can still provide services
    • The multi-process model supports the Reload hot deployment mechanism
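The process-model behavior described above maps onto a couple of directives; a hedged sketch (the values are illustrative, not from the course):

        # one worker per CPU core, matching the advice above
        worker_processes  auto;

        events {
            # serialize accept() so only one worker handles each new connection
            accept_mutex  on;
            worker_connections  1024;
        }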

Lagouedu nginx summary