Overview: what is Nginx?

Nginx ("engine x") is a lightweight web server, reverse proxy server, and email (IMAP/POP3) proxy server.

What is a reverse proxy?

In reverse proxy mode, a proxy server receives connection requests from the Internet, forwards them to a server on the internal network, and returns the server's response to the Internet client. In this setup, the proxy server is acting as a reverse proxy.

Installing and using Nginx

Installing Nginx is relatively simple: it takes just a few commands.

The common commands are as follows:

```shell
nginx -s stop       # fast shutdown: terminates the web service immediately, possibly without saving information
nginx -s quit       # graceful shutdown: saves information and ends the web service in an orderly way
nginx -s reload     # reload the configuration after nginx.conf has been changed
nginx -s reopen     # reopen the log files
nginx -c filename   # use the specified configuration file instead of the default
nginx -t            # do not run; only test the configuration file (checks syntax and tries to open the files it references)
nginx -v            # display the nginx version
nginx -V            # display the nginx version, compiler version, and configure parameters
```

On Windows, you can put these commands in a .bat file in the nginx installation directory and double-click the batch file to run it. For example:

```bat
@echo off
rem If nginx is already running and has recorded a pid file, stop it first
nginx.exe -s stop
rem Test the configuration file
nginx.exe -t -c conf/nginx.conf
rem Display the version information
nginx.exe -v
rem Start nginx with the specified configuration
nginx.exe -c conf/nginx.conf
```

If you're running Linux, write a shell script along much the same lines.

Nginx configuration in practice

I have always believed that the configuration of development tools is easiest to understand when explained through practical examples.

HTTP reverse proxy configuration

Let's start with a small goal: set up a simple HTTP reverse proxy, leaving the more complex configuration aside for now.

Note: conf/nginx.conf is nginx's default configuration file. You can also specify your own configuration file with nginx -c.

```nginx
#Run as this user
#user somebody;

#Number of worker processes, usually set equal to the number of CPUs
worker_processes  1;

#Global error logs
error_log  D:/Tools/nginx-1.10.1/logs/error.log;
error_log  D:/Tools/nginx-1.10.1/logs/notice.log  notice;
error_log  D:/Tools/nginx-1.10.1/logs/info.log  info;

#PID file, records the process ID of the running nginx
pid        D:/Tools/nginx-1.10.1/logs/nginx.pid;

#Working mode and connection limit
events {
    worker_connections 1024;    #maximum number of concurrent connections per worker process
}

#Parameters for the HTTP service
http {
    #MIME types, defined by the mime.types file
    include       D:/Tools/nginx-1.10.1/conf/mime.types;
    default_type  application/octet-stream;

    #Log format and locations
    log_format  main  '[$remote_addr] - [$remote_user] [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';
    access_log  D:/Tools/nginx-1.10.1/logs/access.log  main;
    rewrite_log on;

    #The sendfile directive specifies whether nginx calls sendfile (zero copy) to output files.
    #For ordinary applications it must be on. For disk-I/O-heavy uses such as downloads it can
    #be set to off, to balance disk and network I/O processing speed.
    sendfile        on;
    #tcp_nopush     on;

    #Connection timeout
    keepalive_timeout  120;
    tcp_nodelay        on;

    #gzip switch
    #gzip  on;

    #The actual list of backend servers
    upstream zp_server1 {
        server 127.0.0.1:8089;
    }

    #HTTP server
    server {
        #Listen on port 80, the well-known HTTP port
        listen       80;

        #Accessed as www.helloworld.com
        server_name  www.helloworld.com;

        #Points to the webapp directory
        root D:\01_Workspace\Project\github\zp\SpringNotes\spring-security\spring-shiro\src\main\webapp;

        #Character encoding
        charset utf-8;

        #Proxy parameters
        proxy_connect_timeout 180;
        proxy_send_timeout 180;
        proxy_read_timeout 180;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarder-For $remote_addr;

        #Reverse-proxy path (bound to the upstream); location sets the mapped path
        location / {
            proxy_pass http://zp_server1;
        }

        #Static files, served by nginx itself
        location ~ ^/(images|javascript|js|css|flash|media|static)/ {
            root D:\01_Workspace\Project\github\zp\SpringNotes\spring-security\spring-shiro\src\main\webapp\views;
            #Expire after 30 days. Static files rarely change, so the expiry can be set high;
            #if they change frequently, set it lower.
            expires 30d;
        }

        #Address for viewing nginx status
        location /NginxStatus {
            stub_status           on;
            access_log            on;
            auth_basic            "NginxStatus";
            auth_basic_user_file  conf/htpasswd;
        }

        #Forbid access to .htxxx files
        location ~ /\.ht {
            deny all;
        }

        #Error pages (optional)
        #error_page   404              /404.html;
        #error_page   500 502 503 504  /50x.html;
        #location = /50x.html {
        #    root   html;
        #}
    }
}
```
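Stripped of logging, timeouts, and static-file handling, the essential reverse-proxy skeleton of the configuration above reduces to just a few lines. A minimal sketch, reusing the same upstream name and port:

```nginx
http {
    # the backend the proxy forwards to (same address as in the full config above)
    upstream zp_server1 {
        server 127.0.0.1:8089;
    }

    server {
        listen      80;
        server_name www.helloworld.com;

        # every request is handed to the upstream
        location / {
            proxy_pass http://zp_server1;
        }
    }
}
```

Everything else in the full configuration is tuning and housekeeping around this core mapping.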

Ok, let’s try it:

Start your webapp on the port that the upstream block in nginx.conf points to (8089 in this example).

Edit the hosts file: add a DNS record to the hosts file in the C:\Windows\System32\drivers\etc directory:

127.0.0.1 www.helloworld.com

Then start nginx.

Open www.helloworld.com in your browser and, if all went well, the site is already reachable.

Load Balancing Configuration

In the previous example, the proxy pointed to a single server.

In practice, however, most sites run the same app on several servers and need load balancing to spread the traffic.

Nginx can also implement simple load balancing functions.

In this scenario, the application is deployed on three Linux servers: 192.168.1.11:80, 192.168.1.12:80, and 192.168.1.13:80. The website domain name is www.helloworld.com, and the public IP address is 192.168.1.11. Nginx is deployed on the server where the public IP address resides to implement load balancing for all requests.

Nginx.conf is configured as follows:

```nginx
http {
    #MIME types, defined by the mime.types file
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    #Log location
    access_log    /var/log/nginx/access.log;

    #List of servers to balance across
    upstream load_balance_server {
        #weight is the weighting: the higher the value, the more likely a request lands there
        server 192.168.1.11:80   weight=5;
        server 192.168.1.12:80   weight=1;
        server 192.168.1.13:80   weight=6;
    }

    #HTTP server
    server {
        #Listen on port 80
        listen       80;

        #Accessed as www.helloworld.com
        server_name  www.helloworld.com;

        #Load-balance every request
        location / {
            root        /root;                 #default site root on this server
            index       index.html index.htm;  #names of the index files
            proxy_pass  http://load_balance_server;  #forward to the servers defined by load_balance_server

            #Some reverse-proxy settings (optional)
            #proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            #The backend web server can obtain the user's real IP via X-Forwarded-For
            proxy_set_header X-Forwarded-For $remote_addr;
            proxy_connect_timeout 90;          #timeout for nginx connecting to the backend
            proxy_send_timeout 90;             #timeout for the backend to return data
            proxy_read_timeout 90;             #backend response time after the connection succeeds
            proxy_buffer_size 4k;              #buffer size for the user headers kept by the proxy (nginx)
            proxy_buffers 4 32k;               #proxy buffers; this setting suits pages averaging under 32k
            proxy_busy_buffers_size 64k;       #buffer size under high load (proxy_buffers * 2)
            proxy_temp_file_write_size 64k;    #temp-file write size; above this, data streams from the upstream

            client_max_body_size 10m;          #maximum single-file size the client may upload
            client_body_buffer_size 128k;      #maximum bytes buffered for client request bodies
        }
    }
}
```
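Weighted round-robin is only one of nginx's built-in balancing strategies. Two others, not used in the example above, are least_conn and ip_hash; a hedged sketch reusing the server addresses from the scenario (the upstream names here are illustrative):

```nginx
# send each request to the backend with the fewest active connections
upstream load_balance_server {
    least_conn;
    server 192.168.1.12:80;
    server 192.168.1.13:80;
}

# or: hash the client IP so the same client always reaches the same backend
# (useful when sessions are stored locally on each server)
upstream sticky_server {
    ip_hash;
    server 192.168.1.12:80;
    server 192.168.1.13:80;
}
```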

Configuring a site with multiple webapps

As a website grows richer in features, it often becomes necessary to split relatively independent modules out and maintain them separately. In that case, there will usually be multiple webapps.

For example, suppose the www.helloworld.com site has three webapps: finance, product, and admin. They are accessed at different context paths:

www.helloworld.com/finance/

www.helloworld.com/product/

www.helloworld.com/admin/

As we know, the default HTTP port is 80. The three webapps clearly cannot all be started on port 80 on the same server at the same time, so each of them must bind a different port number.

The problem, then, is that when users actually visit www.helloworld.com and reach the different webapps, they will not be appending port numbers to the URL. So once again, a reverse proxy is needed to handle it.

The configuration is not difficult either; let's see how it's done:

```nginx
upstream product_server {
    server www.helloworld.com:8081;
}

upstream admin_server {
    server www.helloworld.com:8082;
}

upstream finance_server {
    server www.helloworld.com:8083;
}

server {
    #Some basic configuration omitted here

    #Defaults to the product server
    location / {
        proxy_pass http://product_server;
    }

    location /product/ {
        proxy_pass http://product_server;
    }

    location /admin/ {
        proxy_pass http://admin_server;
    }

    location /finance/ {
        proxy_pass http://finance_server;
    }
}
```

HTTPS reverse proxy configuration

Some sites with higher security requirements may use HTTPS, the secure HTTP protocol built on the SSL communication standard.

The HTTP protocol and the SSL standard are not covered here. However, there are a few things to know when configuring HTTPS with nginx:

The default HTTPS port is 443, unlike HTTP's port 80

The SSL standard requires a security certificate, so nginx.conf must specify the certificate file and its corresponding key

Everything else is basically the same as the HTTP reverse proxy; only the server block differs.

```nginx
#HTTP server
server {
    #Listen on port 443. 443 is the well-known port used mainly for HTTPS
    listen       443 ssl;

    #Accessed as www.helloworld.com
    server_name  www.helloworld.com;

    #Location of the SSL certificate file
    ssl_certificate      cert.pem;
    #Location of the SSL certificate key
    ssl_certificate_key  cert.key;

    #SSL session parameters (optional)
    ssl_session_cache    shared:SSL:1m;
    ssl_session_timeout  5m;

    #Allowed cipher suites
    ssl_ciphers  HIGH:!aNULL:!MD5;
    ssl_prefer_server_ciphers  on;

    location / {
        root   /root;
        index  index.html index.htm;
    }
}
```
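A common companion to this server block (an optional addition, not part of the original example) is a plain-HTTP server that redirects everything to the HTTPS site, so visitors typing the bare domain still end up on the secure version:

```nginx
server {
    listen      80;
    server_name www.helloworld.com;

    # permanently redirect all plain-HTTP requests to HTTPS
    return 301 https://$host$request_uri;
}
```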

Static site configuration

Sometimes we need to configure static sites (that is, HTML files and a bunch of static resources).

For example, if all static resources are in the /app/dist directory, we just specify the home page and host for the site in nginx.conf.

The configuration is as follows:

```nginx
worker_processes  1;

events {
    worker_connections  1024;
}

http {
    include       mime.types;
    default_type  application/octet-stream;
    sendfile      on;
    keepalive_timeout  65;

    gzip on;
    gzip_types text/plain application/x-javascript text/css application/xml text/javascript application/javascript image/jpeg image/gif image/png;
    gzip_vary on;

    server {
        listen       80;
        server_name  static.zp.cn;

        location / {
            root   /app/dist;
            index  index.html;
            #forward any request to index.html
        }
    }
}
```
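If the static site is a single-page app with client-side routing, refreshing the browser on a deep link would otherwise return a 404. A common refinement (an assumption about your app, not part of the config above) is to let unknown paths fall back to index.html with try_files:

```nginx
location / {
    root  /app/dist;
    index index.html;
    # serve the file if it exists, otherwise the directory,
    # otherwise fall back to index.html for client-side routes
    try_files $uri $uri/ /index.html;
}
```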

Then, add a hosts entry:

127.0.0.1 static.zp.cn

At this point, you can access the static site by visiting static.zp.cn in your local browser.

Setting up a file server

Sometimes a team needs to archive data or documents, and then a file server is essential. Nginx makes it very quick and easy to build a simple file service.

Configuration essentials in Nginx:

- Enable autoindex to display directory listings; it is off by default.
- Enable autoindex_exact_size to show exact file sizes.
- Enable autoindex_localtime to show file modification times in local time.
- Use root to set the root path exposed by the file service.
- Set charset to charset utf-8,gbk; to avoid garbled Chinese file names (on Windows servers the names are still garbled; I have not found a solution).

The simplest configuration is as follows:

```nginx
autoindex on;              #show directory listings
autoindex_exact_size on;   #show exact file sizes
autoindex_localtime on;    #show file modification times

server {
    charset      utf-8,gbk; #still garbled on Windows servers; no solution found yet
    listen       9050 default_server;
    listen       [::]:9050 default_server;
    server_name  _;
    root         /share/fs;
}
```
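If the shared directory should not be open to everyone, the same auth_basic mechanism used earlier for /NginxStatus can protect it. A sketch (the htpasswd file path is an assumption; create the credentials file with a tool such as htpasswd):

```nginx
server {
    listen       9050 default_server;
    server_name  _;
    root         /share/fs;
    charset      utf-8,gbk;

    location / {
        # prompt for a username/password before showing the listing
        auth_basic           "File Server";
        auth_basic_user_file conf/htpasswd;  # assumed path to the credentials file
    }
}
```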

Cross-domain solutions

In web development, the front-end/back-end separation pattern is often used. In this mode, the front end and the back end are independent web applications; for example, the back end is a Java application and the front end is a React or Vue application.

When independent web apps access each other, cross-domain problems are bound to arise. There are two general ways to solve them:

1. CORS

Set HTTP response headers on the backend server, adding the domains that need access to Access-Control-Allow-Origin.

2. JSONP

The back end constructs JSON data according to the request and returns it; the front end uses JSONP to work across domains.

These two approaches are not discussed in detail in this article.

It should be noted that Nginx also provides a cross-domain solution based on the first approach.

For example, suppose the www.helloworld.com site is composed of a front-end app and a back-end app. The front end runs on port 9000 and the back end on port 8080.

If the front end and back end interacted over HTTP directly, requests would be rejected because of cross-domain restrictions. Here's how nginx handles it:

First, set up CORS in the enable-cors.conf file:

```nginx
# allow origin list
set $ACAO '*';

# set single origin
if ($http_origin ~* (www.helloworld.com)$) {
  set $ACAO $http_origin;
}

if ($cors = "trueget") {
    add_header 'Access-Control-Allow-Origin' "$http_origin";
    add_header 'Access-Control-Allow-Credentials' 'true';
    add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS';
    add_header 'Access-Control-Allow-Headers' 'DNT,X-Mx-ReqToken,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type';
}

if ($request_method = 'OPTIONS') {
  set $cors "${cors}options";
}

if ($request_method = 'GET') {
  set $cors "${cors}get";
}

if ($request_method = 'POST') {
  set $cors "${cors}post";
}
```
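Note that this fragment only adds headers to actual GET responses; it never answers the browser's OPTIONS preflight by itself. A common companion block, following the same $cors naming convention (a hedged addition, not part of the original file), answers the preflight directly:

```nginx
# answer the CORS preflight with an empty 204 response
if ($cors = "trueoptions") {
    add_header 'Access-Control-Allow-Origin' "$http_origin";
    add_header 'Access-Control-Allow-Credentials' 'true';
    add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS';
    add_header 'Access-Control-Allow-Headers' 'DNT,X-Mx-ReqToken,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type';
    add_header 'Access-Control-Max-Age' 1728000;  # let the browser cache the preflight for 20 days
    add_header 'Content-Type' 'text/plain charset=UTF-8';
    add_header 'Content-Length' 0;
    return 204;
}
```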

Next, include enable-cors.conf in your server block to bring in the cross-domain configuration:

```nginx
# ----------------------------------------------------
# nginx configuration fragment for this project
# This file can be included directly in the nginx config (recommended),
# or copied into an existing nginx config and adapted.
# The www.helloworld.com domain needs a matching DNS/hosts entry.
# The api location enables CORS and relies on the other
# configuration file in this directory.
# ----------------------------------------------------

upstream front_server {
    server www.helloworld.com:9000;
}
upstream api_server {
    server www.helloworld.com:8080;
}

server {
    listen       80;
    server_name  www.helloworld.com;

    location ~ ^/api/ {
        include enable-cors.conf;
        proxy_pass http://api_server;
        rewrite "^/api/(.*)$" /$1 break;
    }

    location ~ ^/ {
        proxy_pass http://front_server;
    }
}
```

At this point, we’re done.