1. Causes of Hotspot Issues

Hotspot issues typically arise for the following reasons:

1.1 Data is read far more often than it is written (hot-selling products, breaking news, trending comments, celebrity live streams).

These are often unplanned events in everyday business, for example:

During the “Double Eleven” promotion, popular products are heavily discounted. When one of these products is viewed or purchased tens of thousands of times in a short window, the resulting surge of requests creates a hotspot.

Similarly, breaking news, trending comments, and celebrity live streams that are widely shared and viewed also produce hotspots.

1.2 Concentrated requests exceed the performance limit of a single server.

On the server side, data is usually partitioned into shards, so every request for a given key is served by one particular host.

When the access volume for that key exceeds what the host can handle, a hot key problem occurs.

2. Harms of Hotspot Issues

• Concentrated traffic saturates the host's physical NIC.

• The cache shard serving the hot key is overwhelmed by requests and crashes.

• Requests break through to the DB, triggering a service avalanche.

As noted above, when hot key traffic on a host exceeds the bandwidth limit of its network interface card, the traffic concentration also degrades or breaks the other services running on that host.

If the hotspot is highly concentrated and requests for the hot key exceed the capacity of the cache shard that owns it, that shard will be brought down.

Once the cache service crashes, the remaining requests fall through to the backend DB. Because the DB's capacity is far lower than the cache's, a large volume of requests easily penetrates it, leading to an avalanche that severely degrades overall service.

3. Common Solutions

Common solutions approach the problem from both the client side and the server side.

3.1 Server Caching Scheme

First, the client sends its request to the server. The server is a multi-threaded service with a local cache space managed under an LRU eviction policy.

When the server itself is congested, it does not forward further requests to the DB but returns directly. Only when the server is no longer congested does it send client requests to the DB and write the returned data back into the cache.

At that point the cache entry has been rebuilt, and subsequent reads are served from the cache.
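A minimal sketch of this scheme in Python, assuming a single-threaded server for simplicity (a real multi-threaded server would need locking around the cache); `load_from_db`, the capacity, and the congestion limit are hypothetical placeholders:

```python
from collections import OrderedDict

class ServerLocalCache:
    """In-process LRU cache sitting in front of the DB (sketch, not thread-safe)."""

    def __init__(self, capacity=1024, max_inflight_db_loads=8):
        self.capacity = capacity
        self.max_inflight_db_loads = max_inflight_db_loads  # crude congestion limit
        self.inflight = 0
        self.data = OrderedDict()

    def get(self, key, load_from_db):
        # Serve from the local cache whenever possible.
        if key in self.data:
            self.data.move_to_end(key)        # mark as most recently used
            return self.data[key]

        # Congested: return directly instead of piling more load onto the DB.
        if self.inflight >= self.max_inflight_db_loads:
            return None

        # Not congested: read from the DB and write the result back to the cache.
        self.inflight += 1
        try:
            value = load_from_db(key)
        finally:
            self.inflight -= 1

        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)     # evict the least recently used entry
        return value
```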

However, this scheme also has the following problems:

• Cache expiry can trigger a cache stampede, where multiple threads rebuild the same entry concurrently.

• Lost or evicted entries have to be rebuilt, which is costly under heavy load.

• Dirty reads, since the local cache can lag behind the source of truth.

3.2 Using a Separate Memcached/Redis Cache

This solution addresses the hot key problem by deploying a separate cache on the client host.

The client first goes through its service layer and then accesses the cache layer deployed on the same host.
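A rough sketch of this read path using redis-py, assuming a cache instance listening on the client host plus the shared remote cache; the host names, port, and the short local TTL are illustrative assumptions, not part of the original design:

```python
import redis

local_cache = redis.Redis(host="127.0.0.1", port=6379)               # co-located cache
remote_cache = redis.Redis(host="redis.shared.internal", port=6379)  # shared cache

def get_with_local_cache(key, local_ttl_seconds=5):
    # Nearby access first: no network hop off the host, so no NIC bandwidth limit.
    value = local_cache.get(key)
    if value is not None:
        return value

    # Fall back to the shared cache and populate the local copy with a short TTL
    # to bound how stale (dirty) a local read can become.
    value = remote_cache.get(key)
    if value is not None:
        local_cache.set(key, value, ex=local_ttl_seconds)
    return value
```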

This approach offers nearby access, low latency, and no NIC bandwidth bottleneck, but it also has the following problems:

• Memory is wasted, since each client host keeps its own copy of the cached data.

• Dirty reads.

3.3 Using a Local Cache

Local caching, however, raises the following problems (a brief sketch of the approach follows the list):

• Hotspots must be known in advance.

• Cache capacity is limited.

• The window of inconsistency grows over time.

• Hot keys can be missed.
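A minimal sketch of what a purely local cache looks like, which makes the problems above concrete; the predicted key list, capacity, and `load_from_backend` callable are hypothetical placeholders:

```python
# The hot keys must be predicted up front, capacity is fixed, and any hot key
# that was not predicted is simply missed.
PREDICTED_HOT_KEYS = ["item:1001", "item:1002"]   # placeholder predictions
CAPACITY = 1000

local_cache = {}

def prewarm(load_from_backend):
    for key in PREDICTED_HOT_KEYS[:CAPACITY]:
        local_cache[key] = load_from_backend(key)

def get(key, load_from_backend):
    # Predicted hot keys are served locally; unpredicted ones always hit the
    # backend, and local entries drift from the backend until re-warmed.
    if key in local_cache:
        return local_cache[key]
    return load_from_backend(key)
```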

Each of these traditional approaches has its own drawbacks. How, then, should hotspot problems be solved?

4. Alibaba Cloud Database Hotspot Solutions

4.1 Read/Write Separation for Hot Reads

The functions of each node in the architecture are as follows:

• The SLB layer performs load balancing

• The Proxy layer performs automatic read/write routing

• The Master node handles write requests

• The ReadOnly nodes handle read requests

• The Slave and Master nodes together provide high availability

In practice, the client sends requests to the SLB, which distributes them across multiple Proxy nodes. Each Proxy then identifies and classifies the request.

For example, write requests are routed to the Master node and read requests to the ReadOnly nodes.

Because the ReadOnly nodes can be scaled out further, hot reads can be handled effectively.

Read/write separation therefore scales read-hotspot capacity flexibly, can store a large number of hot keys, and is friendly to clients.
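The routing idea can be illustrated with a short redis-py sketch; the node endpoints and the write-command set are assumptions for illustration, not Alibaba Cloud's actual proxy code:

```python
import itertools
import redis

# Placeholder endpoints for the Master and ReadOnly nodes.
master = redis.Redis(host="master.internal", port=6379)
read_replicas = [
    redis.Redis(host="readonly-1.internal", port=6379),
    redis.Redis(host="readonly-2.internal", port=6379),
]
replica_cycle = itertools.cycle(read_replicas)

WRITE_COMMANDS = {"SET", "DEL", "EXPIRE", "INCR", "HSET", "LPUSH"}  # assumed subset

def route(command, *args):
    """Send writes to the master; spread reads across the read-only nodes."""
    if command.upper() in WRITE_COMMANDS:
        return master.execute_command(command, *args)
    return next(replica_cycle).execute_command(command, *args)

# e.g. route("SET", "item:1001", "v1") goes to the master,
#      route("GET", "item:1001") goes to one of the read-only nodes.
```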

4.2 Hotspot Data Solution

This solution addresses hot keys by actively discovering them and caching them.

First, the client accesses the SLB, which forwards its various requests to the Proxy layer. The Proxy then routes each request to the appropriate backend Redis node.

The core of the hot key solution is to add a cache on the server side.

Specifically, a local cache is added on the Proxy. This cache uses an LRU policy to hold hotspot data, while the backend DB node gains a hotspot-calculation module that identifies hot keys and returns them.

The Proxy architecture has the following advantages:

• The Proxy caches hotspots locally, and its read capacity scales horizontally.

• The DB node periodically computes the hotspot data set.

• The DB feeds the hotspot data back to the Proxy.

• The scheme is fully transparent to clients, with no compatibility changes required.

4.3 Hotspot Key Processing

4.3.1 Reading Hotspot Data

Hot key processing falls into two paths: writes and reads. On the write path, once the SLB receives data for key K1 and writes it through a Proxy into Redis, the write is complete.

If the backend hotspot module then determines that K1 is a hot key, the Proxy caches K1 locally, and subsequent client accesses to K1 are served without touching Redis.

Finally, because the Proxy layer can be scaled out horizontally, the read capacity for hotspot data can be increased almost without limit.
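A simplified sketch of this proxy read path; the backend endpoint, cache size, and the `mark_hot` hook (called when the hotspot module reports a key) are illustrative assumptions:

```python
import redis
from collections import OrderedDict

backend = redis.Redis(host="redis-backend.internal", port=6379)  # placeholder

# Small LRU cache of hot keys held inside the proxy (sketch).
hot_cache = OrderedDict()
HOT_CACHE_CAPACITY = 512

def mark_hot(key, value):
    """Called when the backend's hotspot module reports a key as hot."""
    hot_cache[key] = value
    hot_cache.move_to_end(key)
    if len(hot_cache) > HOT_CACHE_CAPACITY:
        hot_cache.popitem(last=False)

def proxy_get(key):
    # Hot keys are answered from the proxy's local cache, never reaching Redis.
    if key in hot_cache:
        hot_cache.move_to_end(key)
        return hot_cache[key]
    # Everything else is forwarded to the backend as usual.
    return backend.get(key)
```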

4.3.2 Discovering Hotspot Data

To discover hotspot data on the DB node, per-key request statistics are collected within each period. Once a key's request count reaches the threshold, it is identified as a hot key, and all hot keys are kept in a small LRU list.

When a request then arrives through the Proxy, Redis enters a feedback phase: if it finds that the target key is a hotspot, it marks the data accordingly so the Proxy can cache it.

When the DB node computes hotspots, the main methods and advantages are as follows (a sketch follows the list):

• Hotspot detection is based on a request-count threshold.

• Statistics are collected per fixed period.

• Counters carry a version number (period ID), so they never need to be reset explicitly.

• The calculation adds negligible performance overhead and memory footprint on the DB.
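The sketch below illustrates the detection idea under stated assumptions: the period length, threshold, and hot-list size are made-up values, and the period ID plays the role of the version number so counters never need an explicit reset:

```python
import time
from collections import OrderedDict

PERIOD_SECONDS = 10        # illustrative period length
HOT_THRESHOLD = 10_000     # illustrative request-count threshold
HOT_LIST_CAPACITY = 32     # size of the small LRU hot-key list

counters = {}              # key -> (period_id, count); the period ID is the version
hot_keys = OrderedDict()   # small LRU list of detected hot keys

def record_access(key):
    period_id = int(time.time()) // PERIOD_SECONDS
    last_period, count = counters.get(key, (period_id, 0))
    if last_period != period_id:
        count = 0                      # stale counter: version mismatch, start over
    count += 1
    counters[key] = (period_id, count)

    if count >= HOT_THRESHOLD:
        hot_keys[key] = period_id      # promote to the hot-key LRU list
        hot_keys.move_to_end(key)
        if len(hot_keys) > HOT_LIST_CAPACITY:
            hot_keys.popitem(last=False)

def is_hot(key):
    """Used in the feedback phase to decide whether to mark the response."""
    return key in hot_keys
```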

5. Comparison of the Two Solutions

It can be seen from the above comparative analysis that:

Compared with the traditional methods, Alibaba Cloud's hot key solutions are a substantial improvement. Both the read/write separation solution and the hotspot data solution allow flexible horizontal scaling in production, are transparent to clients, and tolerate a certain degree of data inconsistency.

In addition, the read/write separation mode can hold a larger volume of hot data, while the proxy-based mode has a cost advantage.
