0. Prepare

Use a Debian environment. Install Nginx (default packages), deploy a Web project, install Tomcat (default packages), and so on.

1. The nginx.conf configuration file



With the basic configuration in this file you can already implement load balancing, but the relationships between the pieces are tricky to understand. This post is not a tutorial; it is a record, kept so I can look things up later.
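The screenshot of the file did not survive, so here is a minimal sketch of the shape of configuration this post works toward. The IP, the upstream names static and dynamic, and the general structure are taken from later sections; the details are an approximation, not the original file:

```nginx
http {
    # Two cluster groups, referenced by proxy_pass below
    upstream static  { server 192.168.8.203:808  weight=1; }
    upstream dynamic { server 192.168.8.203:8080 weight=1; }

    server {
        listen 80;
        # Static suffixes go to the static group and are cached by the browser
        location ~ \.(html|htm|js|css|png|jpg|gif)$ {
            proxy_pass http://static;
            expires 30d;
        }
        # Everything else (suffix-free JSP views) goes to Tomcat
        location / {
            proxy_pass http://dynamic;
        }
    }
}
```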

2. Basic explanation

Now suppose there is a machine at 192.168.8.203 with Tomcat deployed on it. A J2EE service runs on port 8080, and you can browse its pages normally. The problem is that Tomcat is a full Web container, so serving static pages through it is wasteful: every request reads the static page from disk and returns it.

That drains Tomcat's resources and hurts the performance of dynamic page parsing. Following the Linux philosophy that one piece of software should do one thing, Tomcat should only handle JSP dynamic pages. Here we apply our earlier understanding of Nginx reverse proxying. The first step of the proxy is separating dynamic and static pages, which is easy.


Nginx ships a default configuration file at /etc/nginx/nginx.conf. Most of it can stay as it is; the key part is the server block. I set up the server block as shown above and copied the other sections unchanged.

Line 35 listens on port 80 of the local host. Lines 37-39 set the default home page; here it is index.jsp, which corresponds to the index page of my project. This can be changed as needed:

index index.jsp index.html index.htm index.php;

Refer to other articles for the details. Line 40 is the key: a regular-expression match, widely documented on the web, that covers all the static file suffixes used in my project. Line 41 is the proxy address, so here I proxy into my Web application. expires 30d caches for 30 days; this corresponds to the Cache-Control field seen on the user's front-end page.

The regex on line 44 matches pages without a suffix; the JSP pages in my project are suffix-free. This can be modified as needed. They are also proxied to 192.168.8.203:8080 here. At this point you might ask: what is the point of that? Fair enough. To achieve static/dynamic separation in the simplest way, we can change line 41 to
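The screenshot this numbering refers to is missing; reconstructed from the description, lines 35-44 likely looked something like the following. The suffix list and the two regexes are approximations, not the original lines:

```nginx
server {
    listen 80;                                    # line 35: listen on port 80
    server_name localhost;
    index index.jsp index.html index.htm;         # lines 37-39: default home page

    location ~ \.(html|htm|js|css|png|jpg|gif)$ { # line 40: static suffixes
        proxy_pass http://192.168.8.203:8080;     # line 41: proxy address
        expires 30d;                              # browser-side cache, 30 days
    }
    location ~ ^/[^.]*$ {                         # line 44: suffix-free (JSP) pages
        proxy_pass http://192.168.8.203:8080;
    }
}
```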

root /var/lib/tomcat7/webapps/JieLiERP/WEB-INF

The file is then read directly from the local disk, with no proxying. A look at the Tomcat log confirms that the static pages are no longer requested from Tomcat. But there is another problem with that.

It is inflexible, and it is unfriendly to memory caching and cluster deployment (discussed below), hence the following approach: write another server block.


This one listens on port 808, and line 41 above can then be changed to proxy_pass http://192.168.8.203:808, which achieves static/dynamic separation. If there are multiple servers, just change the corresponding IP addresses. If the connection does not work, check the firewall, permissions, and other external issues. The configuration looks like this.
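The second server block is also missing from the screenshots; based on the description (port 808, files read straight from disk), it was presumably close to this. The root path is the one quoted earlier in the post:

```nginx
server {
    listen 808;                                          # static-only server
    server_name localhost;
    location / {
        root /var/lib/tomcat7/webapps/JieLiERP/WEB-INF;  # serve files from disk
        expires 30d;
    }
}
```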

Set up like this, we would find that transferring pages directly consumes too much bandwidth. A standard Web optimization applies: gzip the page on the server, send it to the user, and let the browser decompress it, which effectively reduces bandwidth. This is where Nginx's gzip module comes in; it is integrated into Nginx by default. Simply add the following configuration to the http block.
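The gzip snippet itself is missing; a typical minimal version for the http block looks like this. The directive names are standard gzip-module directives; the exact values here are common choices, not necessarily the post's:

```nginx
gzip on;
gzip_min_length 1k;       # skip very small responses
gzip_comp_level 2;        # modest CPU cost, decent ratio
gzip_types text/plain text/css application/javascript application/xml;
gzip_vary on;             # add "Vary: Accept-Encoding" for intermediaries
```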


Load the home page to see the effect.

Never mind the difference in the number of requests; those two extra requests come from browser plugins. I am not trying to fool you.

Caching is certainly important for a site that many people visit.

For caching, Redis is the usual choice, but Nginx itself can also act as a cache. It is not as efficient as Redis, but it is better than nothing. Nginx's default cache is a disk file-system cache, not a memory-level cache like Redis. At first I thought that was all Nginx offered. Later, checking my materials, I found I was too naive and did not know Linux well enough: in Linux, everything is a file.

It turns out we can cache files "in memory" through the corresponding Linux file system. This may sound confusing, so look up the /dev/shm directory yourself: it is a file system backed by memory. Caching files there is effectively an in-memory cache, just managed through the file system, so it is not as good as a purpose-built memory cache like Redis.

Perform the basic configuration in the http block.
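The two cache blocks referenced below are missing. Reconstructed from the surrounding text, they plausibly looked like this; the /dev/shm/jielierp/proxy path and the "line 6" directives come from the post, while the zone name and sizes are assumptions:

```nginx
# In the http block: cache storage under the memory-backed /dev/shm
proxy_cache_path /dev/shm/jielierp/proxy levels=1:2 keys_zone=cache_one:200m
                 inactive=5d max_size=1g;
proxy_temp_path  /dev/shm/jielierp/proxy_temp;
proxy_ignore_headers Cache-Control Expires Set-Cookie;  # "line 6" of the first block

# In the matching location block:
# proxy_cache       cache_one;
# proxy_cache_valid 200 304 12h;
# proxy_cache_key   $host$uri$is_args$args;
```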



With these two pieces of configuration in place, caching basically works. Here are a few notes that also troubled me for a long time. First, proxy_ignore_headers on line 6 of the first block above: a Web project may specify cache-defeating headers in its HTML head, like these.


If pages like these are not being cached, add the proxy_ignore_headers configuration item. Also, /dev/shm is writable only by root by default, so chmod -R 777 /dev/shm works but is not a very safe way to do it. A user group can be assigned instead; the user is set in the first line of the main configuration

user www www;

Line 6 of the second block above adds a header field so you can see whether the cache was hit.
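The missing directive is almost certainly the standard $upstream_cache_status variable exposed via add_header; a sketch:

```nginx
# HIT / MISS / EXPIRED ... shows up as a response header in the browser dev tools
add_header X-Cache "$upstream_cache_status";
```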

If you delete the cached files with rm -rf /dev/shm/jielierp/proxy_*, the cache's index structures are still held inside the nginx process, so without a restart the deleted entries will not be served correctly.

So remember to restart nginx. Here is how it looks.

First visit

On the second visit, press Ctrl+Shift+R in the browser to force refresh

And you can see the effect here. Let’s look at /dev/shm

We are almost done. Finally, the key technique: clusters, clusters, clusters. This is where upstream comes in. Remember the configuration file at the beginning? This is that part.



Those are the cluster groups. upstream is the keyword; static and dynamic are the names of the two server groups. Take the first one as an example: server 127.0.0.1:808 is a server address, and the weight=1 after it is the polling weight. Add as many server entries as you have machines.
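The upstream screenshot is missing; from the description, the two groups were along these lines (the commented-out second server in each group is a hypothetical example of adding more machines):

```nginx
upstream static {
    server 127.0.0.1:808 weight=1;
    # server 192.168.8.204:808 weight=1;   # one line per extra machine
}
upstream dynamic {
    server 127.0.0.1:8080 weight=1;
    # server 192.168.8.204:8080 weight=1;
}
```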

I tested it: if one server in the cluster goes down, the system keeps running. For more polling rules, refer to other resources on the web; I will not go into them here. How do you use it? Change proxy_pass http://192.168.8.203:808 to proxy_pass http://static; and the load is balanced across the group.

And that’s the end of it.

The parts above can be configured to your own needs to achieve single-site load balancing. One drawback of this approach is that if the nginx machine in front goes down, the machines behind it become unreachable, so you would need multiple nginx instances load-balanced in front of them. That is a whole other topic I have not studied yet; we will talk about it later.

Then there is the session problem. For example, after I log in on server1, the next poll of the dynamic server group may assign me to server2, so I have to log in again.

A stopgap is to configure the polling rule to hash on the requesting user's IP and always assign the same server. The configuration is as follows:
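The missing snippet is presumably the standard ip_hash directive inside the upstream block (the commented-out second server address is illustrative):

```nginx
upstream dynamic {
    ip_hash;                 # hash the client IP: one user -> one backend
    server 127.0.0.1:8080;
    # server 192.168.8.204:8080;
}
```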


This maps each user to one server node, so the double-login problem goes away. Another solution is to use a cache system for unified session storage and management. I have not tried that myself yet; the reference material includes relevant articles worth a read.

Adding SSL to Nginx. Nginx includes SSL support by default; we do not need to install anything extra. First, generate the necessary certificates. The process is relatively simple.
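The certificate-generation commands are missing from the post; a minimal self-signed variant with OpenSSL looks like this. The file names and the CN are placeholders, not the post's exact commands (which also produced client.pem, client.key, and unsecure files):

```shell
# Generate a 2048-bit private key, then a self-signed certificate valid 365 days.
openssl genrsa -out server.key 2048
openssl req -new -x509 -key server.key -out server.pem -days 365 \
    -subj "/CN=192.168.8.203"
```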



Place the .pem, client.pem, client.key, and unsecure files in a directory under Nginx. The rest of the configuration is as follows:
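The SSL server block itself is missing; a minimal sketch, with placeholder certificate paths:

```nginx
server {
    listen 443 ssl;
    server_name localhost;
    ssl_certificate     /etc/nginx/ssl/server.pem;   # placeholder path
    ssl_certificate_key /etc/nginx/ssl/server.key;
    location / {
        proxy_pass http://192.168.8.203:8080;
    }
}
```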


Restart Nginx and we can access the HTTPS site. But the browser comes up with this:

There is no real problem here; the cause is that the certificate needs to be recognized by a CA, and the HTTPS certificate above is one we generated ourselves. To get the trusted kind shown below, you need to spend money on a purchased certificate; solutions for that are online.

(Although a self-generated certificate can be used, it still cannot resist DNS spoofing, so such an insecure certificate is about the same as no certificate at all. But it is said to stop carriers from hijacking traffic.)

One more addition: automatically redirect to the secure HTTPS connection when the user enters an HTTP URL. This one is practical. There are various methods; see the resources in the references. I use the following one, which I think is relatively simple and changes little code: it handles port 80 and forwards to HTTPS.
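The snippet is missing; one common minimal form of the port-80 redirect, as a sketch rather than the post's exact lines:

```nginx
server {
    listen 80;
    server_name localhost;
    # Send every plain-HTTP request to the HTTPS site
    rewrite ^(.*)$ https://$host$1 permanent;
}
```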


The reason why someone is always better than you is because they’re already better and they’re constantly trying to get better, and you’re still satisfied and secretly happy.