nginx
What is Nginx?
Nginx is a free, open-source, high-performance HTTP server and reverse proxy server. It can also act as an IMAP, POP3, and SMTP proxy server. Nginx can be used as an HTTP server for publishing websites and as a reverse proxy for load balancing.
Nginx functions:
1. Web server: as a web server, Nginx handles concurrent connections very efficiently and can serve up to about 50,000 concurrent connections.
2. Reverse proxy and load balancing: when traffic to a website grows beyond what a single server can handle, multiple servers can be clustered behind a reverse proxy, which spreads the load evenly across them so that no single server is driven down by overload while others sit idle.
3. HTTP server with dynamic/static separation, i.e. serving static resources separately from dynamic resources.
4. Request rate limiting: Nginx and Nginx Plus can limit the number of requests per client to a range that matches real user behavior. For example, if you think a normal user requests the login page at most once every two seconds, you can configure Nginx to accept one request per client IP every two seconds (about 30 requests per minute); a sketch of such a configuration follows this list.
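As a hedged sketch of the rate-limiting idea above (the zone name login_zone and the /login/ path are assumptions for illustration, not values from the original text), the limit_req_zone and limit_req directives can be combined like this:

http {
    # track clients by IP and allow roughly one request every two seconds (30 per minute)
    limit_req_zone $binary_remote_addr zone=login_zone:10m rate=30r/m;

    server {
        location /login/ {
            # apply the limit to the login page; a small burst absorbs legitimate double clicks
            limit_req zone=login_zone burst=5;
        }
    }
}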
Nginx configuration file
The Nginx core configuration file is located by default at **/usr/local/nginx/conf/nginx.conf**.
# global block: settings that affect the entire Nginx server
worker_processes  1;

# events block: settings for the network connections between Nginx and its users
events {
    worker_connections  1024;
}

# http block: an important part of the Nginx server configuration; proxying, caching,
# logging and third-party modules are configured here
http {
    include       mime.types;
    default_type  application/octet-stream;
    sendfile      on;
    keepalive_timeout  65;

    # server block: virtual-host related configuration
    server {
        listen       80;
        server_name  localhost;

        location / {
            root   html;
            index  index.html index.htm;
        }

        error_page  500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }
    }
}
The Nginx configuration has three main blocks: the global block, the events block, and the http block.
Global block directives:
- user [user] [group]: configures the user (and group) that the Nginx worker processes run as; default is nobody.
- worker_processes [number]: number of worker processes to spawn; default is 1.
- daemon [on | off]: whether Nginx runs as a daemon; default is on.
- pid [path]: path of the file that stores the process ID of the current master process; default is /usr/local/nginx/logs/nginx.pid.
- error_log [path] [log level]: location and level of the error log, e.g. error_log logs/error.log error;
- include [file]: pulls in other configuration files, making the Nginx configuration more flexible.
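For illustration, a global block using the directives above might look like the following (the values are examples, not the defaults):

user              nobody;                 # user the worker processes run as
worker_processes  2;                      # number of worker processes
daemon            on;                     # run Nginx as a daemon
pid               logs/nginx.pid;         # where the master process ID is stored
error_log         logs/error.log error;   # error log path and level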
Events block: mainly configures the network connections between the Nginx server and its users; this part has a significant impact on Nginx's performance. Main settings:
- accept_mutex [on | off]: serializes how worker processes accept new network connections (avoiding the thundering-herd problem); default is on.
- multi_accept [on | off]: whether a worker process may accept multiple new connections at once; default is off.
- worker_connections [number]: maximum number of connections each worker process may hold open.
- use [method]: which event-driven model the Nginx server uses to process network messages; the choice depends on the operating system.
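A sample events block combining these directives (the values are illustrative; epoll is the usual choice on Linux):

events {
    accept_mutex       on;     # serialize how workers accept new connections
    multi_accept       off;    # each worker accepts one connection at a time
    worker_connections 1024;   # maximum connections per worker process
    use                epoll;  # event model; depends on the operating system
}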
The http block is an important part of the Nginx server configuration; proxying, caching, logging, and third-party module configuration all live here. It also contains the server blocks and their location blocks. Main settings:
- default_type [MIME type]: default MIME type Nginx uses when responding to requests; default is text/plain.
- access_log: configures the access log, which records every user request.
- log_format: defines the output format of the logs.
- sendfile [on | off]: whether the server uses the sendfile() system call to transfer files; this can greatly improve performance when serving static resources; default is off.
- keepalive_timeout [time]: timeout for keep-alive connections; default is 75s.
- keepalive_requests [number]: maximum number of requests allowed over one keep-alive connection; default is 100.
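A sample http block using these directives (the log format shown is a trimmed-down example, not the stock one):

http {
    include       mime.types;
    default_type  text/plain;            # default MIME type for responses

    # define a log format named "main" and use it for the access log
    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" $status';
    access_log  logs/access.log  main;

    sendfile            on;      # use sendfile() when serving static files
    keepalive_timeout   75;      # keep-alive timeout in seconds
    keepalive_requests  100;     # max requests per keep-alive connection
}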
Server block directives (an example combining these with the location directives appears after the list below):
- listen: the listening port; default is 80.
- server_name: the virtual host name, i.e. an IP address or domain name; default is "".
Location block. Syntax: location [ = | ~ | ~* | ^~ | @ ] uri { ... }
- root [path]: sets the root directory used to resolve requests; default is html.
- alias [path]: defines a replacement for the matched location path when resolving requests.
- error_page code ... [=[response]] uri; : sets the error pages for the site.
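A sketch that ties the server and location directives together (the host name www.example.com and the /data paths are made up for illustration). Note how root appends the full request URI to the path, while alias replaces the matched prefix:

server {
    listen      80;
    server_name www.example.com;

    # root: /static/logo.png is served from /data/www/static/logo.png
    location /static/ {
        root  /data/www;
    }

    # alias: /img/logo.png is served from /data/images/logo.png
    location /img/ {
        alias /data/images/;
    }

    # custom error page for 404 responses
    error_page 404 /404.html;
    location = /404.html {
        root /data/www/errors;
    }
}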
Forward proxy
What is a forward proxy? A forward proxy is a server that sits between the client and the origin server. To get content from the origin server, the client sends a request to the proxy and names the destination (the origin server); the proxy then forwards the request to the origin server and returns the content to the client. Its purpose is to protect the client. For example, if a client on an intranet cannot normally reach websites on the Internet, it needs a proxy server: the client sends its requests to the proxy, the proxy forwards them to the target server, and the results are returned to the client.
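As an illustrative sketch only (stock Nginx can forward plain HTTP this way, but HTTPS tunneling via CONNECT needs a third-party module; the port and resolver address are assumptions):

server {
    listen 8080;                 # port the intranet clients point their proxy setting at
    resolver 8.8.8.8;            # the proxy must resolve the target host names itself

    location / {
        # forward the request to whatever host the client asked for
        proxy_pass http://$host$request_uri;
    }
}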
Reverse proxy
What is a reverse proxy? A reverse proxy also sits between the client and the server, but its purpose is to protect the server side. To avoid exposing the real IP address of the application server when external clients access our service, we have clients access the proxy server instead; the proxy server fetches the response from the actual application server and returns it to the client. In other words, the client accesses the IP address of the reverse proxy server, which forwards the request to the target server according to specific matching rules, and the client never learns the address of the target server.
Main directives:
- server_name: the host name; set this to the address clients use to reach the reverse proxy server.
- proxy_pass: the address of the backend server that Nginx proxies requests to.
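A minimal reverse-proxy sketch using these two directives (the domain name and the backend address 192.168.0.20:8080 are placeholders):

server {
    listen      80;
    server_name www.example.com;               # address the clients actually visit

    location / {
        proxy_pass http://192.168.0.20:8080;       # real application server, hidden from clients
        proxy_set_header Host      $host;          # pass the original host header to the backend
        proxy_set_header X-Real-IP $remote_addr;   # pass the real client IP to the backend
    }
}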
Load balancing
What is load balancing? The more traffic a server receives per unit of time, the greater the load on it; once the load exceeds what it can bear, the server crashes. To avoid crashes and give users a better experience, load balancing is used to share the load: we set up many servers as a cluster, and when a user visits the website the request first goes to an intermediate server, which picks a less loaded server in the cluster and routes the request to it. By default Nginx uses a round-robin algorithm for load balancing.
Main load-balancing configurations in the Nginx configuration file: 1. Round robin. In the http block set:
upstream myserver {
    server IP_address_1;
    server IP_address_2;
}
In the Location block set:
proxy_pass http://myserver;
Round robin is the default algorithm Nginx uses to implement load balancing. 2. Weight: specifies the probability of a server being chosen in the round robin; the weight is proportional to the share of requests the server receives, which is useful when backend servers have uneven performance. In the http block set:
upstream myserver {
    server 192.168.0.14 weight=8;
    server 192.168.0.15 weight=10;
}
3. IP binding (ip_hash): each request is assigned according to the hash of the client's IP address, so a given visitor always reaches the same backend server, which solves session problems. That is, a hash value is computed from the client IP and the number of backend machines, and requests from that client are pinned to the server the hash selects. In the http block set:
upstream myserver {
ip_hash;
server 192.168.0.14:88;
server 192.168.0.15:80;
}
4. fair (third-party module): allocates requests according to the response time of the backend servers, giving priority to servers with shorter response times. In the http block set:
upstream myserver {
server server1;
server server2;
fair;
}
Dynamic and static separation
What is dynamic/static separation? Static resources are deployed on Nginx (or on a separate server). When a request comes in, if it asks for a static resource, Nginx serves it directly; if it asks for a dynamic resource, Nginx uses the reverse-proxy mechanism to forward the request to the backend application, thus separating dynamic from static content. Main configuration change (here the static resources live under /data/image); in the server block set:
location /image {
    root /data;
    index index.html;
}
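Putting the two halves together, a hedged sketch of a complete dynamic/static-separation server block might look like this (the backend address is a placeholder for the Tomcat or application server):

server {
    listen 80;

    # static resources are served directly from disk
    location /image {
        root  /data;
        index index.html;
    }

    # everything else is treated as dynamic and proxied to the backend application
    location / {
        proxy_pass http://192.168.0.20:8080;
    }
}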
High availability
What is high availability? High availability means that only one device provides the service at a time; when that device fails, a backup server takes over. What is Keepalived? Keepalived is a high-performance solution for server high availability and hot standby. Its main purpose is to prevent single points of failure, and working together with Nginx it can make the web service highly available. We use Keepalived to manage the Nginx instances on two devices and expose a virtual IP. With Nginx installed on two devices, 192.168.. and 192.168.., we can create a virtual IP such as 192.168.xx.xx; external requests go to the virtual IP instead of a real Nginx address, the virtual IP routes to the Nginx that is currently serving, and that Nginx then forwards requests to Tomcat.
Install Keepalived: blog.csdn.net/xyang81/art…
On the master Nginx server, modify the Keepalived configuration file /etc/keepalived/keepalived.conf:
vrrp_script chk_http_port {
    script "/usr/local/nginx/nginx_check.sh"
    interval 2                  # run the check script every 2 seconds
    weight 2
}

vrrp_instance VI_1 {            # VI_1 is just the instance name
    state MASTER                # this node is the master
    interface eth0              # NIC the instance is bound to
    virtual_router_id 51        # must be the same on master and backup
    priority 100                # the master gets the higher priority
    advert_int 1                # advertisement interval in seconds
    authentication {
        auth_type PASS          # master/backup authentication
        auth_pass 8888
    }
    track_script {
        chk_http_port           # run the health-check script defined above
    }
    virtual_ipaddress {
        192.168.101.100         # the virtual IP (VIP)
    }
}
On the standby Nginx server:
vrrp_script chk_http_port {
    script "/usr/local/nginx/nginx_check.sh"
    interval 2                  # run the check script every 2 seconds
    weight 2
}

vrrp_instance VI_1 {
    state BACKUP                # this node is the standby
    interface eth0              # NIC the instance is bound to
    virtual_router_id 51        # must be the same as on the master
    priority 99                 # lower than the master's priority
    advert_int 1
    authentication {
        auth_type PASS          # master/backup authentication
        auth_pass 8888
    }
    track_script {
        chk_http_port
    }
    virtual_ipaddress {
        192.168.101.100         # the same virtual IP (VIP)
    }
}
The check script that is executed (it determines whether the Nginx server process is alive):
#!/bin/bash
# count running nginx processes
A=`ps -C nginx --no-header | wc -l`
if [ $A -eq 0 ]; then
    # nginx is down: try to restart it
    /usr/local/nginx/sbin/nginx
    sleep 2
    # if it is still down, kill keepalived so the VIP fails over to the standby
    if [ `ps -C nginx --no-header | wc -l` -eq 0 ]; then
        killall keepalived
    fi
fi
Note the following when configuring the standby Nginx: state must be set to BACKUP, priority must be lower than the master's, and virtual_router_id must be the same as the master's. To find the NIC to bind the instance to, run the ifconfig command and use the name of the local machine's network interface.