This is a free knowledge series available in both text-and-image and video form, and you are reading the text version right now.
The NGINX series is divided into three parts: Basics, Advanced, and Enterprise Practice. You are reading the Basics.
The video version is published in my community. If you prefer video, log in to the community with WeChat (scan the QR code or use WeChat authorization) and play it there.
Before we begin
Learning goals for the Basics: understand NGINX, operate it on your own, independently complete a load-balancing configuration, and bind a domain name so that back-end services can be reached by domain name.
Learning goals for the whole NGINX series: understand NGINX, independently complete load-balancing configuration, build your own highly available enterprise production environment, and monitor NGINX.
Students who followed the video version have already configured load balancing on their own and submitted their assignments.
Over the coming half year we will release many open courses, in both text and video form. Most importantly, they are free!! You can start learning right here. The course list is as follows.
Green marks content that has already been published; red marks content still in preparation.
NGINX Basics (text version)
All right, here we go. Let’s get started.
What is NGINX?
NGINX is a high-performance HTTP server that is also a reverse proxy server. Besides HTTP, it also supports mail proxying and TCP/UDP proxying.
What can it do?
In my opinion, it is a gateway. Function 1: request forwarding. Function 2: rate limiting. Function 3: authentication. Function 4: load balancing. The reverse proxying mentioned above can be classified under request forwarding.
Forward proxy, reverse proxy?
Rather than lecture on theory here, you can read zhuanlan.zhihu.com/p/25707362 for a full discussion of this topic.
To summarize briefly: a forward proxy acts on behalf of the client, while a reverse proxy acts on behalf of the server.
If you write crawlers, the IP proxies you use are forward proxies, through which the crawler forwards requests to the target. The NGINX reverse proxy discussed here forwards requests from clients to the back end. The pictures below are borrowed from the article mentioned above.
Are many companies using NGINX?
Most companies do, from Google and Meta (Facebook) to Amazon, Alibaba, Tencent, and Huawei; by my rough guess well over 70% of the world's Internet companies use NGINX, and so does our community.
Install NGINX
The installation below is based on Ubuntu 20.04 on a cloud server. In the Basics we go with the quick package installation so everyone can get hands-on and learn the fundamentals; the Advanced part will cover compiling from source.
Open a terminal, run sudo apt install nginx -y, and wait for the command to finish. After installation NGINX starts automatically, and you can visit your server's address. For example, my server IP is 101.42.137.185, so I visit http://101.42.137.185
If Welcome to nginx is displayed, the service is working. If not, check the error messages the terminal printed during installation, or check your firewall and security-group policy. (If anything here is unfamiliar, see the Linux cloud server open course previously published by the community.)
NGINX basic working principle and module relations
NGINX has one master process and multiple worker processes. The master process manages NGINX itself: it reads and parses the configuration, maintains the worker processes, reloads the configuration, and so on. The worker processes are the ones that actually respond to requests.
The number of working processes can be adjusted in the configuration file.
NGINX is made up of modules that are controlled by the configuration in the configuration file, which means that the configuration file determines how NGINX works.
Here again I will point to someone else's article rather than rewrite everything myself. For NGINX principles and architecture, see zhuanlan.zhihu.com/p/133257100. At this early stage we only need to focus on one thing, the concept of modules; skim the rest without worrying about the details.
NGINX signal
By signal here I mean control signal. Signals are the mechanism for controlling the running state of NGINX:
nginx -s signal
Common signals are:

quit – shut down gracefully
reload – reload the configuration file
reopen – reopen the log files
stop – shut down immediately (fast shutdown)
The correct way to shut down NGINX is nginx -s quit, which lets NGINX finish handling the work it has started before exiting.
NGINX configuration description
Following on from the community's earlier open lectures, before we start on configuration let's look at how NGINX is managed as a service. Use the status command to find the NGINX service configuration file:
> systemctl status nginx
Viewing the NGINX service configuration:
[Unit]
Description=A high performance web server and a reverse proxy server
Documentation=man:nginx(8)
After=network.target

[Service]
Type=forking
PIDFile=/run/nginx.pid
ExecStartPre=/usr/sbin/nginx -t -q -g 'daemon on; master_process on;'
ExecStart=/usr/sbin/nginx -g 'daemon on; master_process on;'
ExecReload=/usr/sbin/nginx -g 'daemon on; master_process on;' -s reload
ExecStop=-/sbin/start-stop-daemon --quiet --stop --retry QUIT/5 --pidfile /run/nginx.pid
TimeoutStopSec=5
KillMode=mixed

[Install]
WantedBy=multi-user.target
The ExecStart line confirms that NGINX is installed at /usr/sbin/nginx. This service file matches what we learned in the Linux cloud server open class, so I only mention it in passing here.
Find the default master configuration file

This is where the configuration content begins.

NGINX has one master configuration file and any number of secondary configuration files. The master configuration file is named nginx.conf by default and is stored under /etc/nginx by default. The paths and names of the secondary configuration files are set in the master configuration file, and both can be changed.
If you don't know where the master configuration file is after installation, you can try the default path or locate it with the find command:
> sudo find / -name nginx.conf
/etc/nginx/nginx.conf
Next, the basic structure and role of the master configuration file. Run cat /etc/nginx/nginx.conf to print the file's contents. If you are not familiar with Linux file-viewing commands, you can learn them from the community's previously published Linux cloud server open class.
user www-data;
worker_processes auto;          # number of worker processes
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;

events {
    worker_connections 768;     # number of simultaneous connections allowed
    # multi_accept on;
}

http {
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;

    ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3; # Dropping SSLv3, ref: POODLE
    ssl_prefer_server_ciphers on;

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    include /etc/nginx/conf.d/*.conf;
    # include /etc/nginx/sites-enabled/*;
}

# example
#mail {
#    ...
#}
The configuration listed here has been adjusted: commented-out content is removed and only the effective directives are kept. The meanings of the important items are annotated in the comments.
Looking at this configuration you are probably still wondering what it all means, so let's go through the basic syntax of NGINX configuration files.
NGINX configuration file basic syntax
Configuration items in NGINX configuration files are called directives, which come in two kinds: simple directives and block directives. A simple directive consists of a name and parameters, separated by spaces and terminated with a semicolon, for example:
worker_processes auto;
worker_processes is the directive that sets the number of worker processes. Its parameter may be a number or auto; with auto, the number of processes is derived from the CPU count.
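As a rough illustration (this is my own sketch, not NGINX source code), resolving a "number or auto" parameter could look like this in Python:

```python
import os

def resolve_worker_processes(value):
    """Illustrative sketch of a 'number or auto' parameter:
    'auto' maps to the machine's CPU count, anything else is
    taken as an explicit integer. Not NGINX's actual code."""
    if value == "auto":
        return os.cpu_count() or 1  # fall back to 1 if undetectable
    return int(value)

print(resolve_worker_processes("4"))     # explicit worker count
print(resolve_worker_processes("auto"))  # derived from CPU count
```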
A block directive has the same form as a simple directive, except that instead of ending with a semicolon it ends with a pair of curly braces wrapping further directives, for example:
http {
    server {
        ...
    }
}
Context

If a block directive contains other directives inside it, that block directive is called a context. Common contexts include:
events
http
server
location
There is one implicit context: main. It does not need to be declared explicitly; the outermost level of the configuration file is the main scope, and main is the reference point for the other contexts: events and http must sit directly inside main, server must sit inside http, and location must sit inside server. These nesting rules are fixed and directives cannot be placed arbitrarily; otherwise NGINX will not run, and you will see error messages in the log.
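Putting these nesting rules together, a minimal skeleton (ports and addresses here are placeholders) looks like this:

```nginx
# main scope: the outermost level of nginx.conf
worker_processes auto;

events {                      # events must sit directly inside main
    worker_connections 768;
}

http {                        # http must sit directly inside main
    server {                  # server must sit inside http
        listen 8000;
        location / {          # location must sit inside server
            proxy_pass http://localhost:6789;
        }
    }
}
```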
Tired of all this talk? Let's get hands-on!
Configure proxies for back-end programs using NGINX
First we need a simple web service, such as the Flask application below:
from flask import Flask
from flask_restful import Resource, Api

app = Flask(__name__)
api = Api(app)

class HelloWorld(Resource):
    def get(self):
        app.logger.info("Received a request, sending response")
        return {'message': 'Hello', "address": "https://chuanjiabing.com"}

api.add_resource(HelloWorld, '/')

if __name__ == '__main__':
    app.run(debug=True, host="127.0.0.1", port=6789)
Write the content to a file on the server, such as /home/ubuntu/ke.py.
Remember to install the required Python library before starting: pip3 install flask-restful
Ubuntu 20.04 ships with a recent Python 3 by default, so the environment needs no extra setup. Start the web back-end service with python3 /home/ubuntu/ke.py
With the back end started, let’s configure NGINX
Secondary configuration files live in /etc/nginx/conf.d, so we create one there:
> sudo vim /etc/nginx/conf.d/ke.conf

server {
    listen 8000;
    server_name localhost;
    location / {
        proxy_pass http://localhost:6789;
    }
}
Check whether the configuration file is correct
> sudo nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
Reload configuration
> sudo nginx -s reload
Now visit http://ip:port in a browser; for my server that is http://101.42.137.185:8000/
You should see the output from the back end.
NGINX log files
By default NGINX keeps two kinds of logs, the access log and the error log. The log paths can be set in the main configuration file:
/var/log/nginx/access.log
/var/log/nginx/error.log
Viewing the access log:
> cat /var/log/nginx/access.log
117.183.211.177 - - [19/Nov/2021:20:18:46 +0800] "GET / HTTP/1.1" 200 107 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/94.0.4606.71 Safari/537.36"
117.183.211.177 - - [19/Nov/2021:20:18:48 +0800] "GET /favicon.ico HTTP/1.1" 404 209 "http://101.42.137.185:8000/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/94.0.4606.71 Safari/537.36"
Official Documentation – Log format nginx.org/en/docs/htt…
Default log format
log_format compression '$remote_addr - $remote_user [$time_local] '
'"$request" $status $bytes_sent '
'"$http_referer" "$http_user_agent" "$gzip_ratio"';
These parameters can be configured in the main configuration file; for details, see the official documentation.
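To get a feel for these fields, here is a small Python sketch (the regex and helper are my own, written against the default format shown above, not anything shipped with NGINX) that picks one access-log line apart:

```python
import re

# One line in the format above: remote_addr, remote_user, time_local,
# request, status, bytes_sent, referer, user agent.
LOG_PATTERN = re.compile(
    r'(?P<remote_addr>\S+) - (?P<remote_user>\S+) \[(?P<time_local>[^\]]+)\] '
    r'"(?P<request>[^"]*)" (?P<status>\d{3}) (?P<bytes_sent>\d+) '
    r'"(?P<http_referer>[^"]*)" "(?P<http_user_agent>[^"]*)"'
)

def parse_access_line(line):
    """Return the named fields of one access-log line, or None."""
    match = LOG_PATTERN.match(line)
    return match.groupdict() if match else None

sample = ('117.183.211.177 - - [19/Nov/2021:20:18:46 +0800] '
          '"GET / HTTP/1.1" 200 107 "-" "Mozilla/5.0"')
fields = parse_access_line(sample)
print(fields["status"], fields["request"])  # → 200 GET / HTTP/1.1
```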
Use NGINX to configure the proxy for the front-end program
A simple HTML document
> vim /home/ubuntu/index.html

<html>
<meta charset="utf-8" />
<body>
<title>Chuanjiabing Tech Community</title>
<div>
    <p>Chuanjiabing Tech Community</p>
    <a>https://chuanjiabing.com</a>
</div>
</body>
</html>
Front-end projects large and small typically need to be compiled into HTML documents, which are then served by an application such as NGINX.
Note: some Vue/React services may be deployed with server-side rendering, but most are compiled to static HTML. The simple example here is configured no differently from a real front-end project, so don't worry; it is fine for learning NGINX.
> sudo vim /etc/nginx/conf.d/page.conf

server {
    listen 1235;
    server_name localhost;
    charset utf-8;
    location / {
        root /home/ubuntu/;
        index index.html index.htm;
    }
}
Load balancing based on NGINX
Imagine a scenario where, for example, the back-end service on your server is mainly used to format time, there are many crawlers that need to call it, and you need to ensure that the service is stable and available.
Scenario extension: suppose you have reverse-engineered a JS algorithm, and every crawler must call that algorithm to generate a sign value before each request, then send the request with that value. If you embed the JS code in Python/Golang for local execution, then every time the algorithm changes you have to modify and redeploy every crawler; if it runs as a web service, you only need to modify and restart that one web service.
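As an illustration only (the algorithm, secret, and parameter names below are invented for this example, not taken from any real site), the sign computation such a web service would expose might be a single function:

```python
import hashlib
import time

SECRET = "example-secret"  # hypothetical constant recovered from the reversed JS

def make_sign(path, timestamp=None):
    """Hypothetical sign value: md5 over path, timestamp, and secret.
    Crawlers ask the web service for this value instead of embedding the
    algorithm locally, so an algorithm change means updating one service
    rather than redeploying every crawler."""
    timestamp = timestamp or int(time.time())
    raw = f"{path}:{timestamp}:{SECRET}"
    return hashlib.md5(raw.encode("utf-8")).hexdigest()

print(make_sign("/api/data", 1637324326))
```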
With only a single back-end service there are two obvious weaknesses:
1. Insufficient performance: too many requests make the program lag and respond slowly, hurting overall efficiency.
2. Instability: once the process exits or the server crashes, the whole service becomes unreachable.
Benefits of using load balancing:
1. Start multiple back-end services and configure load balancing so that requests are distributed among them on demand (for example, in turns), allowing more work to be handled.
2. One NGINX fronts multiple back-end services, so when one or more of them exit, the others keep working.
To implement load balancing, NGINX only needs the proxy_pass directive together with a corresponding upstream context. A simple load-balancing configuration looks like this.
⚠️ Before the experiment, start several back-end programs. You can copy the Flask code to another file (for example /home/ubuntu/main.py, changing the port number to 6799). If you want to see the load-balancing effect in the browser, include 6789/6799 in the response content so you can tell the back ends apart.
# contents of /etc/nginx/conf.d/ke.conf

upstream backend {
    server localhost:6789;
    server localhost:6799;
}

server {
    listen 8000;
    server_name localhost;
    location / {
        proxy_pass http://backend;
    }
}
Save and reload the configuration
> sudo nginx -s reload
Visit http://101.42.137.185:8000/ several times; the page content alternates between the responses of the two back-end services on 6789 and 6799, which shows the load-balancing configuration succeeded.
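By default an upstream distributes requests round-robin. Conceptually (a simplified sketch; real NGINX also accounts for weights and failed peers), the selection works like this:

```python
from itertools import cycle

# Stand-in for the upstream block above.
backends = ["localhost:6789", "localhost:6799"]
chooser = cycle(backends)

def pick_backend():
    """Return the next backend in turn, like default round-robin."""
    return next(chooser)

print([pick_backend() for _ in range(4)])
# → ['localhost:6789', 'localhost:6799', 'localhost:6789', 'localhost:6799']
```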
Domain name resolution and configuration
Open your cloud provider's console (Tencent Cloud is used as the example because a Tencent Cloud lightweight server was used when recording the tutorial); other providers' interfaces differ, so adapt accordingly.
Search for domain name resolution in the console's search box (Tencent's product is DNSPod).
Find the domain name to resolve (this assumes you have already bought a domain name and completed the required filing; if not, you can just watch what I do), then click Resolve.
Click Add Record
Enter the subdomain (for example ke) in the host record field, enter the server IP address in the record value field, leave the other options at their defaults, and save.
After finishing these settings in the cloud console, the application on our server still cannot be accessed through the domain name yet.
Go to the server and edit the NGINX secondary configuration file: change the port and bind the domain name.
> sudo vim /etc/nginx/conf.d/ke.conf
# change listen and server_name in the server context
listen 80;
server_name ke.chuanjiabing.com;
Remember to reload the configuration:
> sudo nginx -s reload
Then you can access the service through the domain name http://ke.chuanjiabing.com/
Homework: under the community course post, submit three screenshots of your NGINX load-balancing configuration for the back-end program: one screenshot of the configuration, and two browser screenshots showing the load balancing taking effect.
The syllabus of the subsequent Advanced and Enterprise Practice parts is below. The learning objectives of those courses: apply NGINX well at work, and complete enterprise production deployment and alarm monitoring.
NGINX advanced
NGINX load balancing strategy theory
Compile and install NGINX
Implement permission verification based on NGINX
NGINX based access traffic limiting
Simple anti-crawler based on NGINX
Zero-downtime updates based on NGINX
NGINX enterprise practice
NGINX HTTPS configuration practices
NGINX plug-in installation
NGINX data monitoring combat
High availability deployment practices in NGINX production environments