• Overview
• Basic concepts
• Installation
  • Installing from source
  • Installing with apt
• Configuration parameters
• Sources

Overview

Nginx is rich in features: it can be used as an HTTP server, a reverse proxy server, or a mail proxy server. It supports FastCGI, SSL, virtual hosts, URL rewriting, gzip compression, and other functions, and can be extended with many third-party modules.

Nginx is a lightweight web server, reverse proxy server, and email (IMAP/POP3) proxy server.

Nginx is characterized by low memory consumption and strong concurrency; in fact, its concurrent performance is better than that of comparable web servers.

Basic concepts

Proxy

A proxy is, in essence, an intermediary, a channel between two parties.

It involves two roles: the proxy and the target. The process in which a client reaches the target through the proxy to complete some task is the proxying process. In everyday terms, when a customer goes to a store to buy a pair of Adidas shoes, the store is the proxy, the Adidas manufacturer is the target, and the customer is the user.

Forward proxy

In today's network environment, if we need to visit a foreign website for technical reasons, we may find there is no way to reach it directly from the browser. In that case we go through a proxy: we find a proxy server that can access the foreign site, send our request to that proxy server, and the proxy server visits the foreign site and passes the data back to us.

This proxy mode is called forward proxy. Its biggest characteristic is that the client knows exactly which server it wants to access; the server only knows which proxy server the request came from, not which specific client sent it. Forward proxy mode therefore masks or hides the real client's information.
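Nginx itself can act as a simple forward proxy for plain HTTP traffic. The sketch below is only an illustration and is not from the original article: the listening port and the DNS resolver address are arbitrary choices, and proxying HTTPS traffic this way would require a third-party module such as ngx_http_proxy_connect_module.

server {
    listen 8888;                              # port that clients configure as their proxy
    resolver 8.8.8.8;                         # the proxy resolves the target host itself
    location / {
        proxy_pass http://$host$request_uri;  # forward to whichever host the client requested
        proxy_set_header Host $host;
    }
}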

Reverse proxy

Requests sent by multiple clients are received by the Nginx server and then distributed, according to certain rules, to back-end servers for processing. Here the source of the request (the client) is clear, but it is not clear which server actually handles the request; Nginx plays the role of a reverse proxy. A reverse proxy is mainly used to hide server information in a distributed or clustered deployment.

In the computer world, a single server can only handle so many client requests. When user requests surge and a single server cannot keep up, multiple servers providing the same service can share the load of tens of thousands of user requests, and users notice no difference at all. The reverse proxy receives the client's request, distributes it to a specific server for processing, and then feeds that server's response back to the client.
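For illustration, here is a minimal reverse proxy sketch rather than the article's own configuration: the upstream addresses and example.com are placeholders.

upstream backend {
    server 192.168.0.11:8080;   # back-end server 1 (placeholder address)
    server 192.168.0.12:8080;   # back-end server 2 (placeholder address)
}

server {
    listen 80;
    server_name example.com;    # the only address the client ever sees

    location / {
        proxy_pass http://backend;               # hand the request to one of the back ends
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr; # pass the real client IP to the back end
    }
}

To the client everything appears to come from example.com; which back-end server actually processed the request stays hidden.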

Load balancing

In practice there are two kinds of load balancing: hardware load balancing and software load balancing. Hardware load balancing, also called hard load (for example, F5 load balancers), is relatively expensive but offers very good stability and data security; companies such as China Mobile and China Unicom choose hard load. For cost reasons, more companies opt for software load balancing, which implements a request-distribution mechanism using existing technology combined with the hosts' hardware.

Strategies:

  1. Round robin (default): each request is assigned to a different back-end server in turn, in chronological order; if a back-end server goes down, it is automatically removed from rotation.

  2. weight: weighted round robin; the weight specifies the polling probability and is proportional to the access ratio. Used when the performance of the back-end servers is uneven.

  3. ip_hash: each request is assigned according to the hash of the client IP, so each visitor always reaches the same back-end server, which can solve session problems.

  4. fair (third-party): allocates requests based on the response time of the back-end servers, with priority given to servers with short response times.

  5. url_hash (third-party): allocates requests based on the hash of the URL so that the same URL always goes to the same back-end server; more effective when the back-end servers act as caches.

  6. least_conn: the minimum-connections policy sends requests to the less stressed servers first, balancing queue lengths and avoiding piling more requests onto already busy servers.
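A few of these strategies written as upstream blocks, as a rough sketch (the server addresses are placeholders; fair and url_hash require third-party modules and are omitted here):

# Weighted round robin: the first back end receives roughly twice as many requests
upstream weighted_pool {
    server 127.0.0.1:8081 weight=2;
    server 127.0.0.1:8082 weight=1;
}

# ip_hash: requests from the same client IP always go to the same back end
upstream sticky_pool {
    ip_hash;
    server 127.0.0.1:8081;
    server 127.0.0.1:8082;
}

# least_conn: new requests go to the back end with the fewest active connections
upstream least_conn_pool {
    least_conn;
    server 127.0.0.1:8081;
    server 127.0.0.1:8082;
}

Each pool is then referenced from a proxy_pass directive, e.g. proxy_pass http://sticky_pool;.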

Installation

Installing from source

This example uses Ubuntu 18.04.

Before installing Nginx, you need to install the following dependencies: gcc, pcre, zlib, and openssl.

1 Check whether GCC is installed

$dpkg -l | grep gcc

If you see libgcc-5-dev in the output, GCC is already installed and you can skip this step.

If no, perform the following steps to install it

$apt-get install build-essential

2 Check whether PCRE is installed

$dpkg -l | grep pcre

If no, perform the following steps to install it

$apt-get install libpcre3 libpcre3-dev

3 Check whether OpenSSL is installed

$dpkg -l | grep openssl

If no, perform the following steps to install it

$apt-get install openssl libssl-dev

4 Check whether zlib is installed

$dpkg -l | grep zlib

If no, perform the following steps to install it

$apt-get install zlib1g-dev

5 Download the Nginx installation package

Go to this website to find the latest installation package, nginx.org/download/

$cd /usr/local # The install location can be anywhere you like; this is just a personal habit
$wget http://nginx.org/download/nginx-1.15.10.tar.gz

6 Install Nginx

$tar -zxvf nginx-1.15.10.tar.gz
$cd nginx-1.15.10
$./configure
$make
$make install

7 Check whether the Nginx configuration file is valid

$cd /usr/local/nginx/sbin
$./nginx -t
#Alternatively, add the -t flag to the startup command in step 8 to test the configuration

8 Start Nginx

$/usr/local/nginx/sbin/nginx -c /usr/local/nginx/conf/nginx.conf
#Path to the nginx binary in the installation directory; -c specifies the configuration file path
#or
$/usr/local/nginx/sbin/nginx -t -c /usr/local/nginx/conf/nginx.conf

9 Check whether the server is started

$ps -ef|grep nginx

10 Stop Nginx

$kill 13387 # 13387 is the master process PID found in step 9

11 Restart Nginx

$cd /usr/local/nginx/sbin
$./nginx -s reload
#or
$kill -HUP 13387

12 Check the default port and HTML. The listening port is set in /usr/local/nginx/conf/nginx.conf (80 by default). The default HTML files are in /usr/local/nginx/html.
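A quick sanity check, assuming curl is installed and the port is still the default 80:

$curl -I http://localhost
#Should return an HTTP 200 response with a Server: nginx header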

Installing with apt

1 Install using the apt command

$apt-get install nginx

2 Directories where Nginx is installed

#Executable file
$cd /usr/sbin

#The configuration file
$cd /etc/nginx

#The log file
$cd /var/log/nginx

3 Start Nginx (the apt installation registers it as a systemd service automatically)

$systemctl start nginx.service

Configuration parameters

There are many configuration items. Start with the basics, then gradually go into the details, and configure the relevant parameters according to your scenario.

The default configuration

#user nobody;
worker_processes  1;

#error_log logs/error.log;
#error_log logs/error.log notice;
#error_log logs/error.log info;

#pid        logs/nginx.pid;

events {
    worker_connections  1024;
}

http {
    include       mime.types;
    default_type  application/octet-stream;

    #log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
    #                  '$status $body_bytes_sent "$http_referer" '
    #                  '"$http_user_agent" "$http_x_forwarded_for"';

    #access_log  logs/access.log  main;

    sendfile        on;
    #tcp_nopush     on;

    #keepalive_timeout  0;
    keepalive_timeout  65;

    #gzip  on;

    server {
        listen       80;
        server_name  localhost;

        #charset koi8-r;

        #access_log  logs/host.access.log  main;

        location / {
            root   html;
            index  index.html index.htm;
        }

        #error_page  404              /404.html;

        # redirect server error pages to the static page /50x.html
        #
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }

        # proxy the PHP scripts to Apache listening on 127.0.0.1:80
        #
        #location ~ \.php$ {
        #    proxy_pass   http://127.0.0.1;
        #}

        # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
        #
        #location ~ \.php$ {
        #    root           html;
        #    fastcgi_pass   127.0.0.1:9000;
        #    fastcgi_index  index.php;
        #    fastcgi_param  SCRIPT_FILENAME  /scripts$fastcgi_script_name;
        #    include        fastcgi_params;
        #}

        # deny access to .htaccess files, if Apache's document root
        # concurs with nginx's one
        #
        #location ~ /\.ht {
        #    deny  all;
        #}
    }

    # another virtual host using mix of IP-, name-, and port-based configuration
    #
    #server {
    #    listen       8000;
    #    listen       somename:8080;
    #    server_name  somename  alias  another.alias;

    #    location / {
    #        root   html;
    #        index  index.html index.htm;
    #    }
    #}

    # HTTPS server
    #
    #server {
    #    listen       443 ssl;
    #    server_name  localhost;

    #    ssl_certificate      cert.pem;
    #    ssl_certificate_key  cert.key;

    #    ssl_session_cache    shared:SSL:1m;
    #    ssl_session_timeout  5m;

    #    ssl_ciphers  HIGH:!aNULL:!MD5;
    #    ssl_prefer_server_ciphers  on;

    #    location / {
    #        root   html;
    #        index  index.html index.htm;
    #    }
    #}
}
The overall structure of the file is:

...                            # global block
events {                       # events block
    ...
}
http {                         # http block
    ...                        # http global block
    server {                   # server block
        ...                    # server global block
        location [PATTERN] {   # location block
            ...
        }
        location [PATTERN] {
            ...
        }
    }
    server {
        ...
    }
    ...                        # http global block
}

1. Global block: configures directives that affect nginx globally, such as the user and group that run the nginx server, the storage path of the nginx process PID, the log storage path, configuration file imports, and the allowed number of worker processes.

2. events block: configures settings that affect the connections between the nginx server and its users, such as the maximum number of connections per worker process, which event-driven model to use for handling connection requests, whether to accept multiple network connections at the same time, and whether to serialize the acceptance of multiple network connections.

3. http block: can nest multiple server blocks and configures proxying, caching, log definitions, most other features, and third-party modules, for example file imports, MIME-type definitions, log customization, whether to transfer files with sendfile, connection timeouts, and the number of requests per connection.

4. server block: configures the parameters of a virtual host; one http block can contain multiple server blocks.

5. location block: configures request routing and the handling of various pages.

6. upstream block: configures the back-end server addresses and is an integral part of load balancing configuration.

The location configuration in the kvp-mps project's nginx.conf is used as an example.

Map the root path '/' to the index.html page; the front end's root path goes straight to the login page.

Map the path '/data/' to '/data/kvp-mps/'; the front end has file upload and download features, so a download directory needs to be configured.

location / {
    root /home/kst/kvp-mps/web;
    index index.html index.htm;
    try_files $uri $uri/ /index.html;
    if ($request_filename ~* ^.*?\.(txt|doc|pdf|rar|gz|zip|docx|exe|xlsx|ppt|tar|pptx)$) {
        add_header Content-Disposition 'attachment';
    }
}

location /data/ {
    alias /data/kvp-mps/;
}

Some Common Configurations

Here are some of the built-in global variables commonly used in nginx configuration. You can use them anywhere in the configuration.

Variable and what it means:

$host: the Host from the request header; if the request carries no Host header, it equals the configured server name
$request_method: the client request type, e.g. GET, POST
$remote_addr: the client's IP address
$args: the parameters in the request
$content_length: the Content-Length field in the request header
$http_user_agent: the client's agent (User-Agent) information
$http_cookie: the client's cookie information
$remote_port: the client's port
$server_protocol: the protocol used by the request, e.g. HTTP/1.0, HTTP/1.1
$server_addr: the server's address
$server_name: the server's name
$server_port: the server's port number
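As one possible use, several of these variables can be combined into a custom access-log format. The snippet below is only an illustration (the format name and fields are made up, not part of the original configuration); log_format must sit in the http block, while access_log may also appear in a server or location block.

# A hypothetical access-log format built from the variables above
log_format simple '$remote_addr "$request_method $server_protocol" '
                  'host=$host args=$args agent="$http_user_agent"';

# Then, inside a server block:
# access_log logs/simple.access.log simple;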

Solving cross-domain issues

Definition of cross-domain

The same-origin policy restricts how a document or script loaded from one origin can interact with resources from another origin. It is an important security mechanism for isolating potentially malicious files. Read operations between different origins are generally not allowed.

Definition of same origin

If two pages have the same protocol, port (if specified), and domain name, then the two pages have the same origin.

URL                                          Result     Reason
http://<domain>/dir2/other.html              success
http://<domain>/dir/inner/another.html       success
https://<domain>/dir2/other.html             failure    different protocol (https vs http)
http://<domain>:81/dir2/other.html           failure    different port (81 vs 80)
http://<another domain>/dir2/other.html      failure    different domain name

How Nginx solves cross-domain issues

server {
    listen 80;
    server_name <front-end domain name or IP>;
    location /kst/ {
        proxy_pass <back-end domain name or IP>/;
    }
}

Nginx forwards requests made to the front-end domain on to the back-end domain. From the browser's point of view the request is a same-origin request to the front-end domain; the forwarding done by nginx is invisible to the front end.

Note the trailing / after the back-end domain name in proxy_pass. With the trailing slash, the proxy concatenates the back-end domain name directly with the remaining request path, i.e. the request becomes back-end domain name + request path. Without it, the request becomes back-end domain name + front-end location prefix + request path.
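A small sketch of the difference, using a hypothetical back-end address (the two location blocks below are alternatives, not meant to coexist in one server block):

# With a trailing slash: a request for /kst/api/user is proxied to
# http://backend.example.com/api/user (the /kst/ prefix is stripped)
location /kst/ {
    proxy_pass http://backend.example.com/;
}

# Without a trailing slash: the same request is proxied to
# http://backend.example.com/kst/api/user (the prefix is kept)
location /kst/ {
    proxy_pass http://backend.example.com;
}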

Request filtering

Status code filtering

error_page 500 501 502 503 504 506 /50x.html;
location = /50x.html {
    # Change the following path to the directory where the HTML file is located
    root /root/static/html;
}

URL filtering

Filters URLs by exact name matching; all URLs that do not match are redirected to the home page.

location / {
    rewrite  ^.*$ /index.html  redirect;
}

Request type filtering

if ($request_method !~ ^(GET|POST|HEAD)$) {
    return 403;
}

Sources

www.jianshu.com/p/ab44b933a…

www.cnblogs.com/knowledgese…

www.cnblogs.com/wyd168/p/66…

Mp.weixin.qq.com/s/wecUdGnuH…