1: The problem
First, consider why we need to solve high concurrency at all, and where the bottleneck appears. Experienced readers will know it is the database: under a flood of requests, operating the database directly leads to data corruption, overselling, system crashes, MySQL deadlocks, and similar problems.
2: The approach
-
1. Page staticization: the rendered page is stored in Redis, and on the next visit the page value is read back from Redis instead of being rendered again
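As a minimal sketch of this idea, the snippet below caches rendered HTML under a key and serves later visits from the cache. A `ConcurrentHashMap` stands in for Redis here, and the class and method names are illustrative; with a real Redis client you would use GET/SET plus an expiry time.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class PageCache {
    // In-memory stand-in for Redis; keys map to cached page HTML.
    private static final Map<String, String> redis = new ConcurrentHashMap<>();

    // Render the page only on a cache miss, then store it for later visits.
    public static String getPage(String pageKey) {
        return redis.computeIfAbsent(pageKey, PageCache::renderPage);
    }

    // Placeholder for the real template-rendering step.
    private static String renderPage(String pageKey) {
        return "<html><body>goods list for " + pageKey + "</body></html>";
    }

    public static void main(String[] args) {
        String first = getPage("goods_list");
        String second = getPage("goods_list"); // served from the cache this time
        System.out.println(first.equals(second)); // true
    }
}
```

In production you would also decide when to invalidate the cached page (e.g. on a Redis key expiry) so stock changes eventually become visible.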
-
2. CDN: mainly accelerates the site's static resource files, such as images, CSS, and JS (see Alibaba Cloud's CDN tutorials)
-
3. Math captcha: requiring users to compute a small arithmetic captcha spreads out the moment requests arrive, reducing the pressure on Redis, MySQL, and the servers.
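A minimal sketch of such a captcha, with illustrative names and an assumed "a + b * c" question format: the server generates the question, keeps the answer (in practice in Redis, keyed by user and goods), and only lets the seckill request through when the answers match.

```java
import java.util.Random;

public class MathCaptcha {
    private static final Random random = new Random();

    // Generate the operands of "a + b * c" with small values.
    public static int[] generate() {
        return new int[]{random.nextInt(10), random.nextInt(10), random.nextInt(10)};
    }

    // The expected answer the server stores; normal operator precedence applies.
    public static int answer(int[] q) {
        return q[0] + q[1] * q[2];
    }

    // Reject the seckill request unless the user's answer matches.
    public static boolean verify(int[] q, int userAnswer) {
        return answer(q) == userAnswer;
    }

    public static void main(String[] args) {
        int[] q = generate();
        System.out.println(q[0] + " + " + q[1] + " * " + q[2] + " = ?");
        System.out.println(verify(q, answer(q))); // true
    }
}
```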
-
4. Sold-out flag: this is a huge optimization. A flag records whether the Redis stock for a product has run out; once it is set, requests return immediately instead of reading the Redis stock at all. Example: `Boolean over = map.get(goodsId); if (Boolean.TRUE.equals(over)) { return Result.error("insufficient inventory"); }` No matter how many requests come in afterwards, only these two lines of code run.
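The flag can be sketched as below, assuming a local per-JVM `ConcurrentHashMap` as the memory mark and illustrative class, method, and return-value names: once Redis reports zero stock, the flag is set and every later request short-circuits on the fast path.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class SoldOutFlag {
    // Local (per-JVM) memory mark: goodsId -> sold out?
    private static final Map<Long, Boolean> localOverMap = new ConcurrentHashMap<>();

    public static String trySeckill(long goodsId, int stockLeft) {
        // Fast path: only these two lines run once the flag is set.
        Boolean over = localOverMap.get(goodsId);
        if (Boolean.TRUE.equals(over)) { return "insufficient inventory"; }

        if (stockLeft <= 0) {                // Redis reported no stock left
            localOverMap.put(goodsId, true); // mark locally; stop hitting Redis
            return "insufficient inventory";
        }
        return "enqueue order";              // hand the request to the MQ step
    }

    public static void main(String[] args) {
        System.out.println(trySeckill(1L, 1));  // enqueue order
        System.out.println(trySeckill(1L, 0));  // insufficient inventory (sets flag)
        System.out.println(trySeckill(1L, 99)); // still short-circuits via the flag
    }
}
```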
-
5. Generate a dynamic URL: mainly prevents malicious users from hitting a fixed, predictable URL to seckill goods ahead of time. (Security issues must not be taken lightly; if the security measures are not in place, none of the following optimizations matter.)
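A minimal sketch of dynamic URLs, with a `ConcurrentHashMap` standing in for Redis and all key names and the path scheme assumed for illustration: just before the sale the client asks for a one-off path token, the server stores it, and only seckill requests carrying the matching token are accepted.

```java
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

public class DynamicUrl {
    // In-memory stand-in for Redis; in practice this key would also expire.
    private static final Map<String, String> redis = new ConcurrentHashMap<>();

    // Issue a random path segment, e.g. used as /seckill/{token}/{goodsId}.
    public static String createPath(long userId, long goodsId) {
        String token = UUID.randomUUID().toString().replace("-", "");
        redis.put("seckill_path_" + userId + "_" + goodsId, token);
        return token;
    }

    // A guessed or stale token is rejected before any stock logic runs.
    public static boolean checkPath(long userId, long goodsId, String token) {
        String expected = redis.get("seckill_path_" + userId + "_" + goodsId);
        return expected != null && expected.equals(token);
    }

    public static void main(String[] args) {
        String token = createPath(42L, 1L);
        System.out.println(checkPath(42L, 1L, token));   // true
        System.out.println(checkPath(42L, 1L, "fixed")); // false
    }
}
```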
-
6. Redis stock decrement: before the seckill starts, the current stock count is loaded into Redis; at seckill time each request directly decrements the stock held in Redis, which avoids reading stock from MySQL entirely. (Rest assured, Redis and MySQL stay in sync here: once the MQ queue operation completes, the MySQL database decrements the quantity by 1.)
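A sketch of the decrement step, using an `AtomicLong` per goods as a stand-in for Redis's atomic `DECR` command (with a real client this would be something like `jedis.decr("stock_" + goodsId)`); names are illustrative. A negative result means the stock was already exhausted, so the over-decrement is rolled back and the request is rejected.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

public class RedisStock {
    private static final Map<Long, AtomicLong> redis = new ConcurrentHashMap<>();

    // Load the MySQL stock count into Redis before the sale begins.
    public static void preloadStock(long goodsId, long count) {
        redis.put(goodsId, new AtomicLong(count));
    }

    // Returns true when this request actually secured one unit of stock.
    public static boolean decrStock(long goodsId) {
        AtomicLong stock = redis.get(goodsId);
        if (stock == null) return false;
        long left = stock.decrementAndGet(); // atomic, like Redis DECR
        if (left < 0) {
            stock.incrementAndGet();         // roll back the over-decrement
            return false;
        }
        return true;  // the MQ consumer later applies stock-1 to MySQL
    }

    public static void main(String[] args) {
        preloadStock(1L, 2);
        System.out.println(decrStock(1L)); // true
        System.out.println(decrStock(1L)); // true
        System.out.println(decrStock(1L)); // false, stock exhausted
    }
}
```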
-
7. MQ message queue: a message middleware. The producer sends a message to the consumer, which performs the business operation; the producer does not need to wait for the result. After the user clicks seckill, the request is enqueued for processing and the client polls for the processing result (an asynchronous operation), which avoids hammering the database with a continuous stream of requests. (The polling also reads directly from Redis: after a seckill succeeds, its result is written into Redis, and the poll looks it up by key.)
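The flow can be sketched as below, with a `LinkedBlockingQueue` standing in for a real broker such as RabbitMQ or RocketMQ, a map standing in for the Redis result key, and all names assumed for illustration: the web layer only enqueues an order message and returns at once, while a consumer thread does the real work and records the outcome for the client to poll.

```java
import java.util.Map;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.LinkedBlockingQueue;

public class SeckillQueue {
    static class OrderMsg {
        final long userId, goodsId;
        OrderMsg(long userId, long goodsId) { this.userId = userId; this.goodsId = goodsId; }
    }

    private static final BlockingQueue<OrderMsg> queue = new LinkedBlockingQueue<>();
    private static final Map<String, String> resultRedis = new ConcurrentHashMap<>();

    // Producer: called by the web layer; does not wait for the result.
    public static void send(long userId, long goodsId) {
        queue.offer(new OrderMsg(userId, goodsId));
    }

    // Client-side polling: read the result key until the consumer fills it in.
    public static String pollResult(long userId, long goodsId) {
        return resultRedis.get("seckill_result_" + userId + "_" + goodsId);
    }

    public static void main(String[] args) throws Exception {
        // Consumer: takes messages, does the real DB work, records the result.
        Thread consumer = new Thread(() -> {
            try {
                OrderMsg msg = queue.take();
                // ... creating the order and MySQL stock-1 would happen here ...
                resultRedis.put("seckill_result_" + msg.userId + "_" + msg.goodsId, "success");
            } catch (InterruptedException ignored) { }
        });
        consumer.start();

        send(42L, 1L);   // returns immediately
        consumer.join(); // in reality the client keeps polling instead of joining
        System.out.println(pollResult(42L, 1L)); // success
    }
}
```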
-
8. Nginx: a good way to handle high concurrency is to add a few more Tomcat servers; when users visit, Nginx forwards each request to an idle Tomcat server.
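A minimal Nginx configuration fragment for this, with the backend addresses, ports, and upstream name all assumed for illustration: an `upstream` block lists the Tomcat instances, and `proxy_pass` spreads incoming requests across them (round-robin by default).

```nginx
upstream seckill_tomcat {
    server 192.168.1.11:8080;   # Tomcat instance 1
    server 192.168.1.12:8080;   # Tomcat instance 2
    server 192.168.1.13:8080;   # Tomcat instance 3
}

server {
    listen 80;
    location / {
        proxy_pass http://seckill_tomcat;  # Nginx picks a backend per request
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```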
-
9. Database clusters and database/table sharding
① Large sites run complex applications, and these applications all depend on the database. Under heavy traffic the database bottleneck appears quickly, and a single database soon cannot keep up, so we need a database cluster or database/table sharding.
② For database clusters, many databases offer their own solutions: Oracle, Sybase, and others have good ones, and MySQL's Master/Slave replication is a similar, commonly used option. Whatever DB you use, refer to its corresponding solution.
③ The clusters above are constrained by the DB type in architecture, cost, and scalability, so we also need to improve the system architecture from the application side; database/table sharding is the common and most effective approach.
④ In the application we separate databases by business or functional module: different modules map to different databases or tables. Then the data for a given page or feature is sharded further by some strategy, such as hashing the user table by user ID. This improves system performance at low cost and gives the system good scalability.
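The user-ID hash mentioned above can be sketched as follows, assuming an illustrative split of the user table into four physical tables (`user_0` .. `user_3`) chosen by a simple modulo hash:

```java
public class UserTableRouter {
    // Must stay fixed once data is written, or existing rows become unreachable.
    private static final int TABLE_COUNT = 4;

    // Route a user ID to one of user_0, user_1, user_2, user_3.
    public static String tableFor(long userId) {
        return "user_" + (userId % TABLE_COUNT);
    }

    public static void main(String[] args) {
        System.out.println(tableFor(7L)); // user_3
        System.out.println(tableFor(8L)); // user_0
        // The same idea extends to module-per-database splits: orders in the
        // order DB, users in the user DB, each sharded independently.
    }
}
```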
-
10. Load balancing
Load balancing is the go-to solution for large websites facing heavy load and large numbers of concurrent requests.
-
11. Reverse proxy
The server the client accesses is not the one that actually provides the service; it fetches resources from another server and returns the results to the user.
Proxy server and reverse proxy server:
A forward proxy fetches resources on our behalf and returns the results, for example a proxy server used to reach the Internet. With a reverse proxy, we access a server normally, and that server itself calls another server.
In a reverse proxy setup, a user's request is sent to a load-balancing device, which forwards it to an idle application server for processing. After processing, the response is returned to the user through the load balancer, so the distribution behind it is invisible to the user.
Reverse proxy implementation
1) A load-balancing device distributes user requests to idle servers
2) The server returns its response to the load balancer
3) The load balancer returns the server's response to the user
We use a forward proxy actively; it serves us and needs no domain name of its own. A reverse proxy is used by the server side, without our knowledge, and has its own domain name.
Thank you for reading to the end of this article