What is Nginx?
Nginx (pronounced "engine X") is a lightweight HTTP server (compared with Apache and Lighttpd), as well as a high-performance HTTP and reverse proxy server. Many mainstream websites in China are built on Nginx, such as Sina, Tencent, NetEase, and Douban.
Nginx is written in an event-driven style (if you are interested, you can read the source code), which gives it very good performance and also makes it a very efficient reverse proxy and load balancer. (If you are not yet familiar with reverse proxying and load balancing, feel free to skip to the end of the article first.)
The official site also points out several basic features of Nginx as an HTTP server:
- Serving static files, index files, and automatic directory indexing; open file descriptor caching
- Accelerated reverse proxying with caching; simple load balancing and fault tolerance
- Accelerated support of FastCGI with caching; simple load balancing and fault tolerance
- Modular architecture, including filters such as gzipping, byte ranges, chunked responses, and SSI
- SSL and TLS SNI support
As a front-end developer, you may never have to deal with these basic server-side matters, but if, like me, you have a "full-stack engineer" itch, it is worth digging in: configure Nginx yourself, start your own services, and you will be able to solve the Nginx problems you run into on your own.
Next I will walk you through building an Nginx environment, from installation to configuration. Let's go~
Install and start Nginx
Since I use a Mac for work, I install Nginx with Homebrew. Enter the following command in the terminal to install Nginx:
```bash
# running brew update before every brew install is strongly recommended
brew update && brew install nginx
```
After installation, start the server by running nginx in the terminal, then open http://localhost:8080 in your browser and you will see the Nginx welcome page.
Unlike on Linux, Nginx listens on port 8080 by default on the Mac. If, like me, you don't want to type a port number every time you open a page, run the following commands in your terminal:
```bash
# 1.8.0 is the version installed here; adjust the path to match your installed version
sudo chown root:wheel /usr/local/Cellar/nginx/1.8.0/bin/nginx
sudo chmod u+s /usr/local/Cellar/nginx/1.8.0/bin/nginx

# open the Nginx configuration file with vi,
# find the "listen" directive inside the "server" block and change its value to 80
vi /usr/local/etc/nginx/nginx.conf
```
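For reference, after the edit the relevant lines of the default server block should look roughly like this (a sketch; the surrounding lines in your file may differ):

```nginx
server {
    listen       80;        # changed from the Homebrew default of 8080
    server_name  localhost;
}
```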
After modifying the configuration, run the following command to check the configuration file syntax and reload the configuration:
```bash
nginx -t && nginx -s reload
```
For more Nginx commands, run nginx -h. If you want Nginx to start automatically every time you boot, run the following commands in your terminal:
```bash
ln -sfv /usr/local/opt/nginx/*.plist ~/Library/LaunchAgents
launchctl load ~/Library/LaunchAgents/homebrew.mxcl.nginx.plist
```
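Newer versions of Homebrew can also manage the same launchd job for you; a sketch of the equivalent command:

```bash
# let Homebrew register and start nginx as a login service
brew services start nginx
```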
Nginx configuration: an incomplete guide
Getting the most out of Nginx comes down to its configuration file. There are far too many configuration directives to explain one by one; if you are interested, read the official documentation. Here I will explain a few of the more important ones. Below is the Nginx configuration file on my machine, with my annotations:
> It is strongly recommended that you open your own default Nginx configuration and compare it with mine as you read. You can run cat /usr/local/etc/nginx/nginx.conf.default in the terminal to view the default configuration file.
```nginx
# The user directive sets which user (and group) runs and maintains the Nginx service;
# the default is nobody
user cainengtian staff;

# Number of worker processes, usually set to the number of CPU cores
# (check the core count with: sysctl -n hw.ncpu)
worker_processes 4;

# Location and level of the Nginx error log.
# Levels: debug / info / notice / warn / error / crit
# from left to right, the output goes from most verbose to least
error_log /usr/local/var/logs/nginx/error.log debug;

# If worker_processes is greater than 1, Nginx starts multiple processes,
# and writing the master pid to a file makes sending signals simpler.
# You only need to create the file once, like this:
# touch /usr/local/var/run/nginx.pid
pid /usr/local/var/run/nginx.pid;

events {
    # Maximum number of connections each worker process can handle concurrently.
    # As a reverse proxy the formula is: worker_processes * worker_connections / 4
    # As an HTTP server, divide by 2 instead
    worker_connections 2048;
}

http {
    # hide the Nginx version number on error pages and in response headers
    server_tokens off;

    include mime.types;
    default_type application/octet-stream;

    # access log format
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    # Nginx access log, turned off here
    access_log off;

    sendfile on;
    # send all headers in one packet instead of one after another
    tcp_nopush on;
    # do not buffer
    tcp_nodelay on;

    keepalive_timeout 65;
    gzip on;
    client_max_body_size 10m;
    client_body_buffer_size 128k;

    # the following line is discussed in the next section!
    include /usr/local/etc/nginx/sites-enabled/*;
}
```
Nginx configuration best practices
The last line of the configuration file above uses the include keyword to load every file under /usr/local/etc/nginx/sites-enabled/ into the current configuration. This lets you split the configuration into separate files: nginx.conf basically never needs to change again, and when you want to configure a new site you simply add a new file under /usr/local/etc/nginx/sites-enabled/. This is a good way to organize configuration.
For example, I added the following two files under /usr/local/etc/nginx/sites-enabled/ to configure an ordinary HTTP service and an HTTPS service:
```bash
touch /usr/local/etc/nginx/sites-enabled/default
touch /usr/local/etc/nginx/sites-enabled/default-ssl
```
Default Configuration Parsing
The structure of the Nginx configuration is as follows:
```nginx
...
events {
    ...
}
http {
    ...
    server {
        ...
        location xxx {
            ...
        }
    }
}
```
The nginx.conf file above only configures things down to the http level; the server-level configuration goes into the default file:
```nginx
server {
    # Nginx listens on port 80
    listen 80;
    # domain name of this site, e.g. localhost or aotu.jd.com
    server_name localhost;
    # root directory of the site
    root /var/www/;
    charset utf-8;

    location / {
        # default index files, tried in order
        index index.html index.htm index.php;
        # autoindex on;
        # PHP parsing is included here
        include /usr/local/etc/nginx/conf.d/php-fpm;
    }

    # redirect 404 errors to 404.html, relative to the root directory above
    error_page 404 /404.html;
    # redirect 403 errors to 403.html, relative to the root directory above
    error_page 403 /403.html;
    # redirect server errors to 50x.html
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root html;
    }
}
```
The configuration above means: when you visit http://localhost, Nginx looks under /var/www/ for index.html; if that is not found it looks for index.htm; if nothing is found it returns 404 and shows 404.html; and if you happen to have set /var/www/ so that it cannot be read, it returns 403 and shows 403.html.
Note that the root directive used in the server block needs to be distinguished from the alias directive.
```nginx
# With root, the location path is appended to the directory,
# so the directory must contain a subdirectory with the same name as the location:
# http://localhost/test1/ is served from /var/www/test1/
location /test1/ {
    root /var/www/;
}

# With alias, the location prefix is replaced by the specified directory,
# which must end with a "/":
# http://localhost/test2/ is served directly from /var/www/
location /test2/ {
    alias /var/www/;
}
```
In practice, if you run into a 404, first consider whether this root/alias difference is the cause.
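A quick way to check, assuming the two example locations above, is to request both paths and watch the error log, which records the path Nginx actually tried to open (a sketch):

```bash
# with root, this maps to /var/www/test1/ ; with alias it would map to /var/www/
curl -I http://localhost/test1/
curl -I http://localhost/test2/

# on a 404, the error log shows the file system path Nginx attempted to open
tail -f /usr/local/var/logs/nginx/error.log
```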
Configuring a reverse proxy
For front-end engineers, Node.js is probably the most accessible path to full-stack skills. Suppose you have written a Node application with the Express framework; locally it is reached at http://localhost:3000/. After deploying it to a server, we certainly don't want people to access it that way; ideally the port number is hidden.
For example, I have a Node service named o2blog_wx. When it starts, it is reached at http://localhost:3000/, but from the outside we want the address to be http://aotu.jd.com/o2blog_wx. We achieve this through the following Nginx configuration (with comments):
```nginx
server {
    listen 80;
    server_name aotu.jd.com;
    root /var/www/;

    location /o2blog_wx/ {
        # The reverse proxy is set up through the proxy_pass directive:
        # a request to http://aotu.jd.com/o2blog_wx is proxied by Nginx
        # to the Node service at http://127.0.0.1:3000.
        # The Node server knows nothing about the o2blog_wx prefix,
        # so rewrite strips it with a regular expression:
        # http://aotu.jd.com/o2blog_wx/hello becomes http://127.0.0.1:3000/hello
        proxy_pass http://127.0.0.1:3000;
        rewrite ^/o2blog_wx/(.*) /$1 break;
    }
}
```
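If the Node application also needs the original host name or the real client IP, a few proxy_set_header lines are commonly added inside the same location block; a minimal sketch using standard Nginx variables:

```nginx
location /o2blog_wx/ {
    proxy_pass http://127.0.0.1:3000;
    rewrite ^/o2blog_wx/(.*) /$1 break;

    # pass the original Host header and client address through to the Node service
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
```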
Configuring a temporary redirect
Sometimes we decide the URL we configured at the beginning isn't great and want to replace it, but we don't want the original link to stop working. For example, the external link started out as aotu.jd.com/o2blog_wx/, and we later want it to be aotu.jd.com/wxblog.
At this point, you can configure a 302 temporary redirect in Nginx as follows:
```nginx
location /o2blog_wx/ {
    # when http://aotu.jd.com/o2blog_wx/ is matched,
    # return a 302 temporary redirect to http://aotu.jd.com/wxblog
    return 302 http://aotu.jd.com/wxblog;
}
```
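If the move later turns out to be permanent, the same block can return a 301 instead, which browsers and search engines are allowed to cache; a sketch:

```nginx
location /o2blog_wx/ {
    # permanent redirect; clients may cache this response
    return 301 http://aotu.jd.com/wxblog;
}
```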
Configuring restricted access
Not all resources on a server are open to the outside world; for some of them we need to restrict access, which can also be configured through Nginx. For example, to let only certain machines view this server's PHP info, we can use the following configuration:
```nginx
# view this server's PHP info
location = /info {
    allow 10.7.101.224;
    deny all;
    rewrite (.*) /info.php;
}
```
Only the machine with IP 10.7.101.224 can access aotu.jd.com/info; other machines get a 403 Forbidden response.
Of course, the better practice is to maintain an IP whitelist, so that only the listed addresses can access the resource and everyone else is blocked.
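A minimal sketch of such a whitelist (the second address range here is a made-up example):

```nginx
location = /info {
    # whitelist: list every allowed address or CIDR range, then deny the rest
    allow 10.7.101.224;
    allow 10.7.0.0/16;
    deny  all;
    rewrite (.*) /info.php;
}
```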
Default-ssl configuration parsing
We all know that HTTP traffic is transmitted in plaintext, so information can easily be stolen anywhere along the way. Hence SSL (Secure Sockets Layer) and its successor TLS (Transport Layer Security). HTTP with a TLS/SSL layer inserted between the application layer and the TCP/IP transport layer is collectively called HTTPS.
So how do you configure an HTTPS site with Nginx? Here is the content of the default-ssl configuration file:
```nginx
server {
    # HTTPS listens on port 443 by default
    listen 443;
    server_name localhost;
    root /var/www/;

    # the following directives are all required for SSL
    ssl on;
    # the two files below are generated with OpenSSL; see
    # http://nginx.org/en/docs/http/configuring_https_servers.html for details
    ssl_certificate ssl/localhost.crt;
    ssl_certificate_key ssl/localhost.key;
    ssl_session_timeout 10m;
    ssl_protocols SSLv2 SSLv3 TLSv1;
    ssl_ciphers HIGH:!aNULL:!MD5;
    ssl_prefer_server_ciphers on;

    location = /info {
        allow 127.0.0.1;
        deny all;
        rewrite (.*) /info.php;
    }

    location /phpmyadmin/ {
        root /usr/local/share/phpmyadmin;
        index index.php index.html index.htm;
    }

    location / {
        include /usr/local/etc/nginx/conf.d/php-fpm;
    }

    error_page 403 /403.html;
    error_page 404 /404.html;
}
```
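For local testing, a self-signed certificate and key like the ones referenced above can be generated with OpenSSL; a sketch (the output paths are assumed to match the ssl_certificate lines, adjust them to your setup):

```bash
# create the ssl directory next to the Nginx configuration (path is an assumption)
mkdir -p /usr/local/etc/nginx/ssl

# generate a self-signed certificate valid for 365 days (for local testing only)
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
    -keyout /usr/local/etc/nginx/ssl/localhost.key \
    -out /usr/local/etc/nginx/ssl/localhost.crt
```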
Once configured above, you can access our Nginx home page at https://localhost.
Of course, to use HTTPS on the public internet you need to buy a certificate from a trusted third-party CA; interested readers can Google the details, which I won't go into here.
Summary
If you want to learn more about Nginx configuration, I highly recommend reading the official documentation. It is the same old saying: better to teach someone to fish than to give them a fish.
Reverse proxy
Before explaining the reverse proxy, let's look at the forward proxy first. A forward proxy is a server (proxy server Z) that sits between the client (user A) and the origin server (server B). To get content from the origin server, user A sends a request to proxy server Z and names the target (server B); proxy server Z forwards the request to server B and returns the content it obtains to the client. The client must be specially configured to use a forward proxy. See the picture below (from the internet; contact me for removal if it infringes):
As the figure shows, a forward proxy means the proxy server visits the target server [server B] on behalf of the visitor [user A]; a real-world example is "climbing over the wall" to reach blocked sites. But if proxy server Z is fully (or even partly) controlled by someone else, it effectively becomes a compromised "zombie" host.
A reverse proxy is the opposite of a forward proxy: to the client, the proxy server looks just like the origin server, and the client needs no special configuration. The client sends an ordinary request for content in the reverse proxy's namespace; the reverse proxy then decides where to forward the request (to which origin server) and returns the obtained content to the client.
The core role of a reverse proxy server is shown in the figure below:
As the figure shows, user A always believes it is accessing origin server B rather than proxy server Z, but in fact the reverse proxy server receives user A's request, fetches the needed resources from origin server B, and sends them back to user A. Because of the firewall, only proxy server Z is allowed to access origin server B. In this setup the firewall and the reverse proxy together protect origin server B, yet user A is unaware of any of it.
When there is more than one reverse proxy server, we can even build them into a cluster: when more users request resources from server B, different proxy servers Z(x) answer different users and deliver the resources each of them needs.
Of course, like a forward proxy, a reverse proxy server can also act as a cache: instead of requesting data from origin server B every time, it can cache B's resources, especially static assets such as images and files. And if the reverse proxy server happens to be on the same network as user X, then user X gets high-speed, high-quality access by hitting that nearby reverse proxy. This is the core idea behind CDN technology, as shown in the diagram below:
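To make this caching role concrete, here is a minimal sketch of what it looks like in an Nginx reverse proxy (the cache path, zone name, and upstream address are made-up examples; the directives themselves are standard Nginx):

```nginx
# inside the http block: define an on-disk cache area
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=static_cache:10m max_size=1g;

server {
    listen 80;

    # cache static resources fetched from the origin server
    location /static/ {
        proxy_cache static_cache;
        proxy_cache_valid 200 302 10m;    # cache successful responses for 10 minutes
        proxy_pass http://127.0.0.1:8080; # origin address is illustrative
    }
}
```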