Here is a basic introduction to common nginx usage; refer to the nginx documentation for anything not covered here.

Overview

Nginx is a server that can act as

  • An HTTP server
  • An HTTP reverse proxy server
  • A mail proxy server
  • A TCP/UDP proxy server

Basic usage

This assumes that nginx is already installed. Nginx has one master process and several worker processes. The master process reads and evaluates the configuration and maintains the worker processes, while the worker processes handle the actual requests.
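As a sketch, this process model is configured at the top of nginx.conf; the values below are illustrative, not recommendations.

```nginx
# worker_processes controls how many worker processes the master spawns;
# "auto" lets nginx match the number of CPU cores.
worker_processes auto;

events {
    worker_connections 1024;  # max simultaneous connections per worker
}
```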

Start, stop, and restart

Use the following syntax for these operations

nginx -s signal

where signal can be one of

  • stop — fast shutdown, stops immediately
  • quit — graceful shutdown, waits for the current requests to finish processing
  • reload — reloads the configuration file
  • reopen — reopens the log files, which can be used to start a new log file after the old one has been rotated away

Configuration file structure

Nginx consists of modules that are controlled by directives in the configuration file. A simple directive consists of a name and parameters separated by spaces and ends with a semicolon. A block directive has the same structure, but instead of a semicolon it ends with a set of braces. If a block directive can contain other directives inside its braces, it is called a context, such as events, http, server, and location. Directives placed outside of any context are considered to be in the main context: the events and http directives reside in the main context, server in http, and location in server.

Everything after a hash sign (#) is considered a comment.
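The structure described above can be sketched as a skeleton configuration; the paths and values are illustrative.

```nginx
worker_processes 1;            # a simple directive in the main context

events {                       # a block directive opening the events context
    worker_connections 1024;
}

http {                         # the http context lives in the main context
    server {                   # the server context lives inside http
        location / {           # the location context lives inside server
            root /data/www;    # a simple directive inside location
        }
    }
    # everything after a hash sign is a comment
}
```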

Static Resource service

A web server can be used to serve static resources, mapping different request paths to different local directories, for example by adding the following configuration in the http context

server {
    location / {
        root /data/www;
    }

    location /images/ {
        root /data;
    }
}

If a request matches multiple locations, the longest matching prefix wins. For example, a request for / matches the first location and the resource is looked up under /data/www/, while a request starting with /images/ matches the second and the resource is looked up under /data/images/.

Proxy server

A proxy server is a server that, when it receives a request, forwards the request to the proxied server and then passes the response it obtains back to the client.

Let’s build a simple proxy server that receives requests on port 80: requests for images are served from a local directory, and all other requests are sent to port 8080.

Use two server block directives in the configuration file. The first one listens on port 80. For images, we use a regular expression to match the file extensions; regular expression locations are prefixed with ~.

server {
    location / {
        proxy_pass http://localhost:8080;
    }

    location ~ \.(gif|jpg|png)$ {
        root /data/images;
    }
}

The other listens on port 8080. Here the server context contains the root directive, which is used by default when a location context does not specify its own root.

server {
    listen 8080;
    root /data/up1;

    location / {
    }
}

How does Nginx handle a request

Name-based virtual server

We can configure three virtual servers that listen on port 80

server {
    listen 80;
    server_name example.org www.example.org;
    ...
}

server {
    listen 80;
    server_name example.net www.example.net;
    ...
}

server {
    listen 80;
    server_name example.com www.example.com;
    ...
}

With this configuration, nginx routes a request by matching the Host field in the request header against the server names. If no match is found, the request is routed to the default server, which is the first one listed; the default can also be set explicitly with the default_server parameter on the listen directive

server {
    listen 80 default_server;
    server_name example.net www.example.net;
    ...
}

If requests without a Host header field should not be allowed, the connection can be closed using the following configuration, where an empty server name matches such requests

server {
    listen      80;
    server_name "";
    return      444;
}

The server_name here can be represented in the following format

  • An exact name
  • A wildcard starting with an asterisk, for example *.example.org
  • A wildcard ending with an asterisk, for example mail.*
  • A regular expression
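The four forms can be sketched in a single server block; all of the names here are illustrative.

```nginx
server {
    listen 80;
    server_name example.org              # exact name
                *.example.org            # wildcard starting with an asterisk
                mail.*                   # wildcard ending with an asterisk
                ~^www\d+\.example\.org$; # regular expression, prefixed with ~
}
```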

Use HTTP load balancing

Load balancing across multiple application instances is typically used to optimize resource utilization, maximize throughput, reduce latency, and tolerate faults. We can use Nginx as an effective load balancer to distribute traffic to multiple application servers to improve performance, stability, and reliability.

Nginx supports three load-balancing methods

  • round-robin — requests are distributed to the servers in turn
  • least-connected — the next request is assigned to the server with the fewest active connections
  • ip-hash — a hash of the client’s IP address selects the server, so requests from the same client are always processed by the same server

The simplest configuration looks like this

http {
    upstream myapp1 {
        server srv1.example.com;
        server srv2.example.com;
        server srv3.example.com;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://myapp1;
        }
    }
}

Round-robin is the default; to use least-connected or ip-hash instead, add the corresponding directive to the server group

upstream myapp1 {
    least_conn;  # or ip_hash
    server srv1.example.com;
    server srv2.example.com;
    server srv3.example.com;
}

Configure HTTPS

You need to add the ssl parameter to the listen directive (the http2 parameter enables HTTP/2) and specify the associated certificate and private key

server {
    listen 443 ssl http2;
    server_name www.example.com;
    ssl_certificate www.example.com.crt;
    ssl_certificate_key www.example.com.key;
    ...
}

To redirect HTTP requests to HTTPS, traffic arriving on port 80 is rewritten to the HTTPS URL

server {
    listen 80;
    rewrite ^(.*)$ https://$host$1;
}

Add the header

You can add response headers in the http, server, and location contexts using add_header. The syntax is

add_header name value [always];

For example

location / {
    add_header Cache-Control no-cache;
}

The end