
Introduction

In this article we will take a look at Nginx. The first questions are: what is Nginx, and what can it do?

Nginx is a high-performance HTTP and reverse proxy web server that also provides IMAP/POP3/SMTP services. It is characterized by low memory usage and strong concurrency. Nginx can serve static pages, and it also supports dynamic languages through the CGI protocol, such as Perl and PHP, but it does not support Java; Java programs are typically handled by Tomcat. Nginx is designed with performance as the primary goal, its implementation is highly focused on efficiency, and it can withstand heavy load.

Relevant concepts

To gain a deeper understanding of Nginx, here are some of its most important concepts:

  1. Reverse proxy
  2. Load balancing
  3. Dynamic and static separation

Reverse proxy

Before we look at reverse proxies, let's look at what a forward proxy is. If we think of the Internet outside the LAN as a huge resource library, users inside the LAN have to go through a proxy server to reach it; that kind of proxy is called a forward proxy.
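As a concrete illustration, a forward proxy is configured on the client side. A minimal shell sketch, where the proxy address is a made-up example:

# the client explicitly points at the proxy (10.0.0.1:3128 is a hypothetical address)
export http_proxy=http://10.0.0.1:3128
curl http://example.com/          # this request now leaves the LAN through the proxy

# or per request, without the environment variable:
curl -x http://10.0.0.1:3128 http://example.com/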

So what is a reverse proxy?

With a reverse proxy, the client is unaware of the proxy, because the client needs no configuration at all. The client simply sends its request to the reverse proxy server; the reverse proxy server selects a target server, fetches the data from it, and then returns the result to the client. To the outside, the reverse proxy server and the target server appear as one server: only the proxy server's address is exposed, while the real server's IP address is hidden.
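A minimal sketch of what this looks like in an Nginx configuration; the backend address is a placeholder, not taken from this article:

server {
    listen 80;                            # clients only ever talk to this address

    location / {
        proxy_pass http://10.0.0.5:8080;  # hypothetical internal server; its IP stays hidden
    }
}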

Load balancing

The client sends requests to a single server, which processes them (some may involve the database) and returns the results to the client. This architecture suits early-stage systems that are relatively simple and receive few concurrent requests, and its cost is low. But as traffic gradually grows, a single server can no longer cope with the high concurrency. What should be done?

The easiest thing to do, of course, is to upgrade the server's configuration, but that is expensive. And if the server is already at its maximum configuration and still cannot handle the volume of concurrent requests, how do we solve the problem?

Recall the reverse proxy: every client request passes through the reverse proxy server, which selects a target server. Now suppose 30 requests arrive at the same time and we have three servers. What load balancing does is distribute those 30 requests evenly across the three servers, so that each server handles roughly 10 of them.
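A sketch of how that could look in nginx.conf, with three placeholder backend servers; by default Nginx uses round robin, so the 30 requests end up as roughly 10 per server:

upstream backend {                      # "backend" is just a name chosen for this sketch
    server 192.168.0.11:8080;           # hypothetical server 1
    server 192.168.0.12:8080;           # hypothetical server 2
    server 192.168.0.13:8080;           # hypothetical server 3
}

server {
    listen 80;

    location / {
        proxy_pass http://backend;      # requests are spread across the three servers
    }
}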

Dynamic and static separation

To speed up website response, dynamic pages and static pages are usually served by different servers. This speeds up resolution and reduces the pressure on the original single server.

Installation tutorial

Now that the concepts are out of the way, it's time to talk about how to install Nginx, using a CentOS 6 environment as an example.

Nginx download address: nginx.org/

Download a version, nginx-1.19.1 for example.

Then click the corresponding link on the download page to start downloading.

After downloading it, put it aside and let’s download the dependencies that Nginx needs.

Execute the following command to download the PCRE:

wget http://downloads.sourceforge.net/project/pcre/pcre/8.37/pcre-8.37.tar.gz

After downloading, unpack it and execute the command:

tar -xvf pcre-8.37.tar.gz

After decompressing, go to the directory:

cd pcre-8.37

Then execute the command to run the configuration check:

./configure

Some students may find that the configuration check fails at this step.

This failure is caused by the absence of the GCC compiler; install it with:

yum install gcc-c++

Finally execute the installation instruction:

make && make install

At this point PCRE is installed. Next, install zlib and the other dependencies by executing:

yum -y install make zlib zlib-devel gcc-c++ libtool openssl openssl-devel

After all dependencies are installed, you can start installing nginx. Remember the nginx package you downloaded earlier? If you skipped that step, you can also fetch it directly on the server with wget:

wget https://nginx.org/download/nginx-1.19.1.tar.gz

Download it and unpack it:

tar -xvf nginx-1.19.1.tar.gz

Then go to the extracted directory and execute the command:

./configure

Finally execute the installation instruction:

make && make install

Before we can access nginx from a browser, we need to configure the firewall. Open the rules file with the following command:

vi /etc/sysconfig/iptables

Change the file to the following:

# Firewall configuration written by system-config-firewall
# Manual customization of this file is not recommended.
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 80 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 3306 -j ACCEPT
-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A FORWARD -j REJECT --reject-with icmp-host-prohibited
COMMIT

Nginx listens on port 80 by default, so we leave port 80 open (the line added above) and restart the firewall:

service iptables restart

At this point we go to the sbin directory under the nginx installation directory:

cd /usr/local/nginx/sbin/

Execute the startup command:

./nginx

Nginx should now start. If you instead encounter this error:

nginx: error while loading shared libraries: libpcre.so.1: cannot open shared object file: No such file or directory

Just execute this command:

ln -s /usr/local/lib/libpcre.so.1 /lib64/

After a successful startup, check the IP address of the Linux machine (for example with the ifconfig command):

Then open a browser and enter 192.168.124.7 (the IP of my machine) in the address bar; the Nginx welcome page should appear.
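You can also check from the command line on the server itself; a quick sanity check, using the same address as above:

curl -I http://192.168.124.7
# an "HTTP/1.1 200 OK" response with a "Server: nginx" header means nginx is up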

Nginx common commands

The prerequisite for using the nginx commands is to first enter the /usr/local/nginx/sbin directory. The common operations are:

  • Check the version
  • Start nginx
  • Stop nginx
  • Reload the configuration

Check the version

./nginx -v

Start nginx

./nginx

Stop nginx

./nginx -s stop

Reload the configuration

./nginx -s reload

Nginx configuration examples

We will use Nginx configuration to implement the following:

  1. Reverse proxy
  2. Load balancing
  3. Dynamic and static separation

Reverse proxy

Before implementation, there is a requirement: open a browser and enter www.test.com in the address bar to jump to the Tomcat home page.

After www.test.com is entered in the browser's address bar, the request must first reach the Nginx server, and Nginx then forwards it to the Tomcat server. Because a domain name is involved, we need to configure it in the hosts file on the Windows machine.

To configure the hosts file, go to C:\Windows\System32\drivers\etc, find the hosts file, and add the following mapping:

The IP address comes first, followed by the domain name, so that the two are mapped to each other.
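For example, with the server IP used earlier in this article, the added line would look like this:

192.168.124.7    www.test.com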

Next, configure the reverse proxy in nginx. The configuration file is /usr/local/nginx/conf/nginx.conf.
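A minimal sketch of the relevant server block, assuming Tomcat is running on the same machine on its default port 8080:

server {
    listen       80;
    server_name  192.168.124.7;

    location / {
        proxy_pass http://127.0.0.1:8080;   # forward everything to the local Tomcat
    }
}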

Reload nginx after changing the configuration with the ./nginx -s reload command (or stop it and run ./nginx again). Be sure to execute the command in the sbin directory of nginx.

Finally we test: type www.test.com in the browser's address bar, and the Tomcat home page appears. The access is successful.

Load balancing

The effect we want to achieve with load balancing: enter http://192.168.124.7/test.html in the browser's address bar, and the requests are distributed between ports 8080 and 8081.

To prepare, set up two Tomcat instances to simulate two servers:

One Tomcat keeps the default port 8080, and the Tomcat in the tomcat8081 directory is configured to use port 8081. Finally, a test.html file is placed in each Tomcat's webapps directory:
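The content of the page is not important; something as simple as the sketch below works, assuming the file is placed under webapps/ROOT so it is reachable at /test.html (put a different label, e.g. 8081, in the copy for the second Tomcat):

<!-- webapps/ROOT/test.html in the 8080 Tomcat -->
<html>
  <body>
    <h1>8080</h1>
  </body>
</html>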

Then start the two Tomcat servers separately, and configure load balancing in nginx:
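A minimal sketch of the load-balancing part of /usr/local/nginx/conf/nginx.conf, assuming both Tomcat instances run on this same machine (the upstream name is just a label chosen for this sketch):

# inside the http block
upstream tomcatservers {
    server 192.168.124.7:8080;
    server 192.168.124.7:8081;
}

server {
    listen       80;
    server_name  192.168.124.7;

    location / {
        proxy_pass http://tomcatservers;    # nginx alternates between the two Tomcats
    }
}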

Restart the nginx server after the configuration is complete.

This completes the load balancing configuration, but how does it behave in practice? Let's test it:

Because the nginx server listens on port 80, we can omit the port number and access the page directly by IP. If we keep refreshing the page, sometimes 8080 is displayed and sometimes 8081, which shows that load balancing has been implemented successfully: the Nginx server distributes the requests evenly between the two servers.

Dynamic and static separation

Dynamic and static separation simply means handing all static requests to Nginx and all dynamic requests to Tomcat.

To prepare, create a data folder in the Linux root directory, and inside it create html and image folders containing a test.html file and a test.png file respectively.

Next, configure nginx:
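A minimal sketch of the two location blocks, based on the /data/html and /data/image directories created above; autoindex is optional and simply lists the directory contents in the browser:

# inside the server block
location /html/ {
    root   /data;            # /html/test.html is served from /data/html/test.html
}

location /image/ {
    root   /data;            # /image/test.png is served from /data/image/test.png
    autoindex on;            # show a directory listing when the folder itself is requested
}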

After the configuration is complete, restart the nginx server.