Foreword

A while ago a friend online asked me how to optimize a website. That is a big question. After a brief chat and a few scattered pointers, I felt it was worth organizing into an article. I also happened to be writing a crawler blog post a few days ago, so here I share the general approach, to exchange ideas and make progress together.

Version 1

The system starts out like this: a single Tomcat instance plus a MySQL service, running on a 2-core, 4 GB Linux server. All requests go to Tomcat, and all queries go to MySQL. Crude, isn't it?

Resources are limited, so how do we use them effectively to improve performance? Tomcat claims to handle hundreds of thousands of concurrent connections, but getting there takes a lot of tuning and a good enough machine.

Tomcat optimization

Tomcat supports the following three modes:

BIO: one thread per request. Disadvantage: under high concurrency, the large number of threads wastes resources. This is the default mode on Linux for Tomcat 7 and below.

NIO: using Java's non-blocking I/O, a small number of threads can handle a large number of requests. Tomcat 8 uses this mode by default on Linux; in Tomcat 7 you must modify the Connector configuration to enable it.

APR (Apache Portable Runtime): resolves I/O blocking at the operating-system level. If APR and the Tomcat Native library are installed on Linux, Tomcat enables APR support automatically at startup.

For ease of use, we choose NIO mode: simply download Tomcat 8 or above and, in general, keep the default connection pool.
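As a sketch, an explicit NIO connector in Tomcat's conf/server.xml looks roughly like this (Tomcat 8+ already defaults to NIO; the thread and queue values below are illustrative, not recommendations):

```xml
<!-- conf/server.xml: an explicit NIO connector.
     maxThreads / acceptCount values are illustrative only; tune to your load. -->
<Connector port="8080"
           protocol="org.apache.coyote.http11.Http11NioProtocol"
           connectionTimeout="20000"
           maxThreads="200"
           acceptCount="100"
           redirectPort="8443" />
```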

Version 2

As you may know, the Tomcat container is not very good at serving static content, so you need a service dedicated to static file requests. Nginx is recommended; of course, you can also use its variants Tengine or OpenResty. This gives you static/dynamic separation.
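A minimal nginx.conf sketch of static/dynamic separation might look like the following (the static root path and the backend address are assumptions for illustration):

```nginx
# Serve static files directly from disk; proxy everything else to Tomcat.
upstream tomcat_backend {
    server 127.0.0.1:8080;
}

server {
    listen 80;

    # Static resources: handled by Nginx itself
    location ~* \.(html|css|js|png|jpg|gif|ico)$ {
        root /var/www/static;
        expires 7d;
    }

    # Dynamic requests: forwarded to the Tomcat backend
    location / {
        proxy_pass http://tomcat_backend;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```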

Version 3

Back-end connection resources are scarce and can drag down overall response time under high concurrency. Here we can cache hot data: the back end reads the cache first and returns directly if the data is there; otherwise it falls back to reading the database.
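The read path just described is the classic cache-aside pattern. Here is a minimal Java sketch, using a ConcurrentHashMap to stand in for the cache and a plain HashMap to stand in for the database (in production you would call a real Redis client such as Jedis instead):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class CacheAside {
    // Stand-in for the Redis cache: a thread-safe in-process map
    private final Map<String, String> cache = new ConcurrentHashMap<>();
    // Stand-in for MySQL: the authoritative data store
    private final Map<String, String> database = new HashMap<>();

    public CacheAside(Map<String, String> seedData) {
        database.putAll(seedData);
    }

    public String get(String key) {
        // 1. Try the cache first
        String value = cache.get(key);
        if (value != null) {
            return value;
        }
        // 2. On a miss, read the database...
        value = database.get(key);
        // 3. ...and populate the cache for subsequent reads
        if (value != null) {
            cache.put(key, value);
        }
        return value;
    }

    public boolean isCached(String key) {
        return cache.containsKey(key);
    }
}
```

The first read of a key hits the database and warms the cache; every later read of that key is served from memory.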

Version 4

Resources are limited, but users may not be: there may also be malicious users, crawlers, and hot-search traffic spikes. To keep the service available to the majority of normal users, we apply rate limiting up front, implementing various traffic-limiting schemes with the token bucket or leaky bucket algorithm.
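As a sketch of the token bucket algorithm mentioned above (capacity and refill rate below are arbitrary illustration values; in production you might instead use Guava's RateLimiter or an Nginx- or Redis-based limiter):

```java
public class TokenBucket {
    private final long capacity;         // maximum tokens the bucket can hold
    private final double refillPerNano;  // tokens added per nanosecond
    private double tokens;               // current token count
    private long lastRefill;             // timestamp of last refill, in nanos

    public TokenBucket(long capacity, double refillPerSecond) {
        this.capacity = capacity;
        this.refillPerNano = refillPerSecond / 1_000_000_000.0;
        this.tokens = capacity;          // start with a full bucket
        this.lastRefill = System.nanoTime();
    }

    // Returns true if the request may proceed, false if it should be rejected.
    public synchronized boolean tryAcquire() {
        refill();
        if (tokens >= 1.0) {
            tokens -= 1.0;
            return true;
        }
        return false;
    }

    // Top the bucket up according to how much time has passed, capped at capacity.
    private void refill() {
        long now = System.nanoTime();
        tokens = Math.min(capacity, tokens + (now - lastRefill) * refillPerNano);
        lastRefill = now;
    }
}
```

Bursts up to the bucket's capacity are allowed, after which requests are admitted only at the steady refill rate; a leaky bucket, by contrast, smooths output to a constant rate with no burst allowance.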

Version 5 (the final form)

If a single Nginx is enough for a blog, you can go further: load-balance multiple Tomcat servers behind Nginx and apply rate limiting at the Nginx layer. Beyond that, individual back-end services can apply interface-level rate limiting; user sessions can be stored centrally in Redis; a Bloom filter can block lookups for keys that do not exist, preventing cache penetration; hot data is read from the Redis cache; and if necessary, both Redis and MySQL can themselves be clustered.

Summary

The optimization process above may only be the tip of the iceberg, but the general idea is there: identify problems and solve them. Architecture is an evolution.
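The Bloom filter mentioned above answers "definitely absent" or "possibly present", so requests for keys that were never written can be rejected before they ever reach Redis or MySQL. A minimal Java sketch (the bit-array size and hash count are illustrative; a production system would use a library such as Guava's BloomFilter):

```java
import java.util.BitSet;

public class SimpleBloomFilter {
    private final BitSet bits;
    private final int size;       // number of bits in the filter
    private final int hashCount;  // number of hash functions

    public SimpleBloomFilter(int size, int hashCount) {
        this.bits = new BitSet(size);
        this.size = size;
        this.hashCount = hashCount;
    }

    public void add(String key) {
        for (int i = 0; i < hashCount; i++) {
            bits.set(indexFor(key, i));
        }
    }

    // false => the key was definitely never added;
    // true  => the key *may* have been added (false positives are possible)
    public boolean mightContain(String key) {
        for (int i = 0; i < hashCount; i++) {
            if (!bits.get(indexFor(key, i))) {
                return false;
            }
        }
        return true;
    }

    // Derive the i-th hash by salting the key's hashCode with the index
    private int indexFor(String key, int i) {
        int h = key.hashCode() * 31 + i * 0x9E3779B9;
        return Math.floorMod(h, size);
    }
}
```

On the read path, a `mightContain` result of false lets you return "not found" immediately, so floods of requests for nonexistent keys never become cache misses that hammer the database.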

Follow me and reply "information" to get more Java architecture material!