This is the 18th day of my participation in the August Genwen Challenge.

WangScaler: A writer with heart.

Disclaimer: my knowledge is limited, so if there are any mistakes, please kindly point them out.

Nginx plays an important role in project development. For example, load balancing, separation of static and dynamic content, and reverse proxying can all be implemented with Nginx, so today let's take a look at how Nginx is used.

Installation

There are many ways to install Nginx; we will choose the simplest one, yum.

yum install nginx

Startup

systemctl restart nginx.service

You can run the following command to check the service status

systemctl status nginx.service

You can also view the default Nginx page by visiting http://IP directly in your browser; the default port is 80.
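If you prefer the command line, here is a minimal check (assuming curl is installed; the address is a placeholder for your own server's IP):

curl -I http://<your-server-ip>/    # fetch only the response headers; 200 OK means Nginx is serving the page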

If you want Nginx to start automatically at boot, you can use the following command.

systemctl enable nginx.service

Dynamic and static separation

The static pages we develop do not need any backend processing; routing them through the backend wastes resources. With a little Nginx configuration, static pages can be returned by Nginx directly.

server {
    listen       80 default_server;
    listen       [::]:80 default_server;
    server_name  _;
    root         /usr/share/nginx/html/wangscaler;

    # Load configuration files for the default server block.
    include /etc/nginx/default.d/*.conf;

    error_page 404 /index.html;
    location = /40x.html {
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
    }
}

With a yum installation, the default static page directory is /usr/share/nginx/html. Place your static pages in this folder, such as my project wangscaler, and point the root option at them in the Nginx configuration file /etc/nginx/nginx.conf.
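After editing the configuration file, it is a good habit to test it before applying it. A minimal sketch using the standard commands:

nginx -t                          # check the configuration for syntax errors
systemctl reload nginx.service    # apply the new configuration without dropping existing connections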

Load balancing

For example, Tornado is a single-threaded framework that mainly handles tasks through epoll. To improve an application's ability to handle high concurrency, load balancing a Tornado project is a very good choice, and the official Tornado documentation itself recommends Nginx for load balancing.

Suppose our project is deployed on three machines and the backend interface prefix is /wangscaler/v1.

server {
    listen       80 default_server;
    listen       [::]:80 default_server;
    server_name  _;
    root         /usr/share/nginx/html/wangscaler;

    # Load configuration files for the default server block.
    include /etc/nginx/default.d/*.conf;

    location /wangscaler/v1 {
        proxy_pass http://WangScalerApi;
    }

    error_page 404 /index.html;
    location = /40x.html {
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
    }
}

# The upstream block lives in the http context, alongside the server block.
upstream WangScalerApi {
    least_conn;
    server 192.168.1.100:8888 weight=3;
    server 192.168.1.200:8888 weight=3;
    server 192.168.1.300:8888 weight=3;
}

Your servers' specifications may differ; for example, a server with a lower specification can be given a smaller weight so it receives less traffic. Of course, Nginx supports many other load-balancing policies besides weights; if you are interested or have other needs, you can look them up.
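For example, here is a minimal sketch of the same upstream using ip_hash instead of weights (the addresses are the same example placeholders as above):

upstream WangScalerApi {
    ip_hash;                          # requests from the same client IP always go to the same backend
    server 192.168.1.100:8888;
    server 192.168.1.200:8888;
    server 192.168.1.300:8888 down;   # mark a backend as temporarily unavailable
}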

TCP load balancing

Our TCP and UDP services, EMQX services, and so on can also be load balanced with Nginx. Note that the stream block sits at the top level of nginx.conf, alongside the http block, not inside it.

stream {
    upstream WangScalerServer {
        # hash $remote_addr consistent;
        server 192.168.1.100:3333 max_fails=2 fail_timeout=30s weight=8;
        server 192.168.1.200:3333 max_fails=2 fail_timeout=30s weight=10;
        server 192.168.1.300:3333 max_fails=2 fail_timeout=30s weight=7;
    }

    server {
        listen 9999;
        proxy_pass WangScalerServer;
    }
}

Uncommenting hash $remote_addr consistent; makes clients with the same IP address always connect to the same backend server.

Password protection

When we deploy a static site such as a blog but do not want others to access it freely, we can also add a password.

mkdir /usr/local/wangscaler

yum install -y httpd-tools

htpasswd -bc /usr/local/wangscaler/htpasswd.users wangscaler password
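If you need more accounts later, run htpasswd again without -c so the existing file is appended to instead of recreated; the username and password below are just placeholders:

htpasswd -b /usr/local/wangscaler/htpasswd.users anotheruser anotherpassword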

The htpasswd.users file stores the username and password. Then reference it in the Nginx configuration file.

location /wangscaler {
    auth_basic "wangscaler";
    auth_basic_user_file /usr/local/wangscaler/htpasswd.users;
    proxy_pass http://WangScalerApi;
}
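As a quick check (the address is a placeholder), an unauthenticated request should be rejected and an authenticated one should be forwarded to the backend:

curl -I http://<your-server-ip>/wangscaler                          # 401 Unauthorized
curl -I -u wangscaler:password http://<your-server-ip>/wangscaler   # authenticated, forwarded to the backend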

Passing through the original IP address

After traffic passes through the Nginx load balancer, whether for a web service or a TCP service, the IP address the backend sees is the IP address of the Nginx machine. However, our business often needs the original client IP, for example for logging. So how do we pass the original IP through?

Web services

location /wangscaler {
    proxy_pass http://WangScalerApi;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}

This adds the original client address to the request headers as X-Real-IP (and the full proxy chain as X-Forwarded-For), so the backend can read it from there.
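How the backend reads this header depends on the framework. If the backend happens to be another Nginx, here is a sketch using the standard realip module to restore $remote_addr from the header (the trusted network below is a placeholder for your load balancer's address):

set_real_ip_from 192.168.1.0/24;   # only trust X-Real-IP coming from these proxy addresses
real_ip_header   X-Real-IP;        # rewrite $remote_addr with the value of X-Real-IP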

TCP services

server {
    listen       9999;
    proxy_protocol on;
    proxy_pass WangScalerServer;
}

Just add proxy_protocol on;. When a client connects, Nginx sends a PROXY protocol header to the backend server carrying the original IP address.
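The backend has to be able to parse the PROXY protocol header. As a sketch only, if the backend were itself an Nginx stream server, it could accept and log the original address like this (the ports, upstream address, and log path are placeholders):

stream {
    log_format basic '$proxy_protocol_addr -> $remote_addr';

    server {
        listen 3333 proxy_protocol;              # accept the PROXY protocol header sent by the load balancer
        access_log /var/log/nginx/tcp.log basic; # logs the original client address and the proxy's address
        proxy_pass 127.0.0.1:4444;               # hand the connection to the real local service
    }
}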

Conclusion

These are just the most basic operations; Nginx has many more powerful features that are worth digging into if you are interested. That is about all for today. Oh, one more thing:

events {
    worker_connections 1024;
    use epoll;
}

Make sure your worker_connections are scaled up appropriately for your business.
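As a rough rule, when Nginx proxies, each client connection also opens a connection to the upstream, so the ceiling is roughly worker_processes * worker_connections / 2 concurrent clients, and the open-file limit has to keep up. A sketch with raised limits (the numbers are illustrative, not recommendations):

worker_processes auto;            # one worker per CPU core
worker_rlimit_nofile 65535;       # raise the per-worker open file descriptor limit to match

events {
    worker_connections 10240;     # per worker; each proxied client uses about two connections
    use epoll;
}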

Since you're already here, give it a like before you go!

Follow WangScaler, and may you get a promotion and a raise and never have to pick up your bucket!