
Preface

Often a project goes live and needs to be verified in production, but you don't want ordinary users to reach the latest interface services and see the latest content yet. So you need something called a whitelist to control the traffic.

There are many ways to implement this. Before introducing the Nginx implementation, let's go over the common approaches.

Plan 1

There is some debate here: should the whitelist be controlled by the front end or by the back end?

Back-end control

Let’s leave it to the back end.

  1. The back end keeps a whitelist configuration file. If the architecture has a configuration center, no service restart is needed: the service picks up the whitelisted users automatically, and only whitelisted users can access the latest service.

  2. If there is no configuration center, then every time the whitelist is added to or trimmed, the service must be restarted for the change to take effect. Obviously, this is not very flexible.

  3. Another option is client-side version control: only a specified client version can access the new service. The beta testers upgrade to the latest client version, while other users stay on the old service and are not prompted to upgrade.

  4. Is there another way? There is, and it is the subject of the rest of this article: control by Nginx.
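Option 3 can also be sketched at the proxy layer. Assuming the client reports its version in a custom header (the header name `X-App-Version`, the version pattern, and the ports below are assumptions for illustration, not from the article), an Nginx `map` can pick a back end by client version:

```nginx
# Hypothetical sketch: route 2.x clients to the new service.
# X-App-Version and the backend ports are assumed, not prescribed.
map $http_x_app_version $version_backend {
    default   127.0.0.1:8080;   # old service for everyone else
    "~^2\."   127.0.0.1:9100;   # clients reporting version 2.x get the new service
}

server {
    listen 80;
    location / {
        proxy_pass http://$version_backend;
        proxy_set_header Host $host;
    }
}
```

The `map` block must sit at the `http` level; it is evaluated lazily per request, so there is no `if` involved.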

Plan 2

Nginx (Engine X) is a high-performance HTTP and reverse-proxy web server that also provides IMAP/POP3/SMTP services. Nginx was developed by Igor Sysoev for rambler.ru, at the time the second most visited site in Russia; the first public version, 0.1.0, was released on October 4, 2004.

Its source code is distributed under a BSD-like license, and it is known for its stability, rich feature set, simple configuration files, and low system resource consumption. On June 1, 2011, Nginx 1.0.4 was released.

Nginx is a lightweight web server, reverse-proxy server, and e-mail (IMAP/POP3) proxy server distributed under a BSD-like license. Its concurrency handling is among the best of comparable web servers. In mainland China, Nginx users include Baidu, JD, Sina, NetEase, Tencent, Taobao, and others.

— from Baidu Baike

  1. An IP policy is used to restrict traffic, i.e. only specified IP addresses can access the new service, but this requires a third node for control.

Suppose you have two nodes, A and B. How do you control which IPs can reach the newly released service?

First question: where do the IPs come from?

A > From the database: if we record users' usual IP addresses, we can filter out a batch of them. Of course, in many cases an IP address changes dynamically.

B > Restrict to the Intranet: after the launch, only the company network may access the new service; testers who need the old service switch their own traffic over.
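For scheme B, Nginx's built-in `allow`/`deny` directives can restrict the new service to the company network without any `if` logic. The subnet below is a placeholder, not from the article; substitute your actual Intranet range:

```nginx
# Only the (hypothetical) company subnet may reach the new service.
location ^~ /jforum {
    allow 192.168.0.0/16;   # replace with your real Intranet range
    deny  all;              # everyone else receives 403
    proxy_pass http://127.0.0.1:9100;
}
```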

Second question: what happens after verification passes?

A > The new node still needs a formal release, or the IP address restriction needs to be lifted; either way, you have to operate Nginx a second time.

B > No re-release is needed. The same setup also helps with server disaster recovery (DR): if one server fails, the standby server can be enabled quickly to keep user access normal.
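The quick-standby behaviour in B maps directly onto the `backup` flag of Nginx's `upstream` block: the standby server only receives traffic when the primary is unavailable. A minimal sketch reusing the article's ports:

```nginx
upstream myservername {
    server 127.0.0.1:9100;          # primary node
    server 127.0.0.1:9102 backup;   # standby: used only when the primary fails
}
```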

```nginx
upstream myservername {
    # ip_hash;
    server 127.0.0.1:9100;
    server 127.0.0.1:9102;
}

server {
    listen 80;
    server_name localhost;

    location / {
        # Use "if" with care; if the caller's IP address is 192.168.2.188,
        # redirect to http://$host/jforum
        if ($remote_addr = '192.168.2.188') {
            rewrite ^/(.*)$ http://$host/jforum permanent;
        }
        root /data/www/html/dist/flaget;
        index index.html index.htm;
        try_files $uri $uri/ /index.html;
    }

    # Proxy localhost/jforum to the new service
    location ^~ /jforum {
        proxy_pass http://127.0.0.1:8080;  # this is the third service node
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header Http-referer $http_referer;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

Demonstration

With the configuration above, a visit from the whitelisted IP (192.168.2.188) to http://localhost is redirected to http://localhost/jforum, i.e. the new service; visits from any other address stay on the old page at http://localhost.

This redirect is one way to handle it, but it relies on the service behind the proxy to cope with the new path; otherwise the back end does not know where the request should go. It is generally better to make the routing decision by setting a variable, either inside the relevant location block or outside the location blocks, as below.

```nginx
# Decide the upstream before the location block, based on the client IP
set $myservername 127.0.0.1:9100;
if ($remote_addr != '192.168.2.188') {
    set $myservername 127.0.0.1:8080;
}

location ^~ /jforum {
    # proxy_pass uses whichever address was set above
    proxy_pass http://$myservername;
    proxy_redirect off;
    proxy_set_header Host $host;
    proxy_set_header Http-referer $http_referer;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
```

This allows you to test the response of http://localhost/jforum from clients with different IP addresses.
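As a variation, the same per-IP routing can be done without `if` at all, using the `geo` module. This is an alternative sketch, not the article's original configuration, but it keeps the same addresses and the same whitelist logic:

```nginx
# geo maps the client address to a backend; evaluated per request, no "if" needed.
geo $myservername {
    default        127.0.0.1:8080;  # old service for everyone else
    192.168.2.188  127.0.0.1:9100;  # whitelisted tester gets the new service
}

server {
    listen 80;
    location ^~ /jforum {
        proxy_pass http://$myservername;
    }
}
```

The `geo` block lives at the `http` level, and adding or removing a whitelisted IP is a one-line change followed by a reload.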

Conclusion

Nginx's reverse proxying can whitelist access to a new service; this is a practical solution. With different settings you can also implement server disaster recovery.

  1. Nginx still has a great many features; only through learning and practice can you grasp its power.

  2. Many years ago I came into contact with whitelist configuration, but at the time it always felt far removed from my work, so I never humbly consulted the operations and maintenance colleagues.