Preface
High concurrency typically arises in business scenarios with a large number of active users concentrated in a short window of time, such as seckill (flash sale) activities and scheduled red-packet giveaways.
To keep the business running smoothly and give users a good interactive experience, we need to design a high-concurrency scheme suited to our own business scenario, based on the estimated concurrency and other factors.
Over years of e-commerce product development I have had the fortune of stepping into all kinds of concurrency pitfalls, each with its share of blood and tears. The summary here is my archived record, shared with everyone.
Server Architecture
As a business develops from its early stage to maturity, the server architecture evolves with it: from a single server, to a cluster, to distributed services.
A service that can support high concurrency needs a good server architecture: load balancing, a database cluster, a NoSQL cache cluster, and static files uploaded to a CDN. These are the powerful backing that lets the business code run smoothly.
Setting up the servers requires cooperation from the operations team, so I won't go into the specifics here.
The required server architecture is roughly as follows:
Server
    Load balancing (e.g. Nginx, Aliyun SLB)
    Resource monitoring
    Distributed deployment
Database
    Primary/secondary separation, clustering
    DBA table optimization, index optimization, etc.
    Distributed deployment
NoSQL
    Primary/secondary separation, clustering
    Redis
    MongoDB
    Memcached
CDN
    HTML
    CSS
    JS
    Images
Concurrency testing
Services that face high concurrency need load testing, so that through extensive data analysis we can assess how much concurrency the whole architecture can support.
To test high concurrency, you can use a third-party service or your own test servers: drive concurrent requests with a testing tool, then analyze the results to estimate how many concurrent requests the system can handle. This estimate serves as an early-warning reference. As the saying goes, know yourself and know your enemy, and you can fight a hundred battles without peril.
Third-party Services:
Alibaba Cloud performance testing (PTS)
Concurrency testing tools:
Apache JMeter
Visual Studio performance load testing
Microsoft Web Application Stress Tool
Practical solutions
General plan
Daily user traffic is large but dispersed, with occasional bursts where users cluster together.
Scenarios: user check-in, user center, user orders, etc.
Server architecture diagram: (figure omitted)
Description:
The operations in these scenarios mostly happen after the user enters the app, and outside of big promotion days (618, Double 11, and so on) their user volume is not high. At the same time, the tables behind these features hold large amounts of data, and most operations are queries, so we want to keep user requests from hitting the DB directly: read the cache first, and only on a miss query the DB and cache the result.
User-related caches should be stored in a distributed way. For example, hash the user id into groups so that users are spread across different cache nodes; each cache set then stays small and query efficiency is unaffected.
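A minimal sketch of this sharding, in Python with redis-py; the shard count, key format, and stored fields are illustrative assumptions, not part of the original scheme:

```python
import redis

SHARD_COUNT = 16  # assumed number of cache groups

def shard_key(user_id: int, prefix: str = "user") -> str:
    """Map a user id to one of SHARD_COUNT Redis hash keys, e.g. user:shard:7."""
    return f"{prefix}:shard:{user_id % SHARD_COUNT}"

r = redis.Redis()
# All of user 12345's cached data lives in one bounded hash:
r.hset(shard_key(12345), "12345", '{"nickname": "..."}')
print(r.hget(shard_key(12345), "12345"))
```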
Example solutions:
User check-in for points
Compute the user's shard key and look up the user's check-in info in the Redis hash
If the check-in info is found, return it
If not, query the DB for today's check-in record; if one exists, sync the check-in info into the Redis cache
If no check-in record for today exists in the DB, run the check-in logic: insert today's check-in record and award the check-in points in the DB (this whole DB operation is one transaction)
Cache the check-in info to Redis and return it
Note that there are logic pitfalls under concurrency, such as a user checking in multiple times in one day and being awarded points more than once; the sketch below uses an NX lock to guard against this.
My blog post [High Concurrency in the Eyes of Big Talkers] has a related solution.
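A hedged sketch of the check-in flow above (Python with redis-py). The DB helpers `db_get_checkin` and `db_insert_checkin_with_points` are hypothetical stand-ins for your data layer, and the NX lock is one possible guard against the duplicate check-in problem, not the method from the post:

```python
import datetime
import json
import redis

r = redis.Redis()

def check_in(user_id: int) -> dict:
    today = datetime.date.today().isoformat()
    key = f"checkin:shard:{user_id % 16}"        # user-distribution key
    field = f"{user_id}:{today}"

    cached = r.hget(key, field)                  # 1. look in the Redis hash
    if cached:
        return json.loads(cached)                # 2. found: return it

    record = db_get_checkin(user_id, today)      # 3. hypothetical DB read
    if record is None:
        # NX lock so two concurrent requests cannot both insert a record
        # and award points twice (see the note above)
        if not r.set(f"lock:checkin:{field}", 1, nx=True, ex=10):
            raise RuntimeError("check-in already in progress")
        # 4. hypothetical: insert record + points in one DB transaction
        record = db_insert_checkin_with_points(user_id, today)

    r.hset(key, field, json.dumps(record))       # 5. cache and return
    return record
```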
User orders
Here we cache only the first page of the user's orders, 40 rows per page, since users generally only look at the first page; a sketch follows the steps
When the user requests the order list: if it is the first page, read the cache; otherwise read the DB
Compute the user's shard key and look up the user's order info in the Redis hash
If the order info is found, return it
If not, query the DB for the first page of orders, cache it in Redis, and return the order info
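A sketch of the first-page-only order cache under the same assumptions; `db_query_orders` is a hypothetical data-layer call:

```python
import json
import redis

r = redis.Redis()
PAGE_SIZE = 40  # per the note above

def get_orders(user_id: int, page: int) -> list:
    if page != 1:
        # only the first page is cached; other pages go to the DB
        return db_query_orders(user_id, page, PAGE_SIZE)   # hypothetical

    key = f"orders:shard:{user_id % 16}"
    cached = r.hget(key, str(user_id))
    if cached:
        return json.loads(cached)

    orders = db_query_orders(user_id, 1, PAGE_SIZE)        # hypothetical
    r.hset(key, str(user_id), json.dumps(orders))
    return orders
```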
User center
Compute the user's shard key and look up the user's info in the Redis hash
If the user info is found, return it
If not, query the user from the DB, then cache it in Redis and return the user info (the same read-through pattern as the sketches above)
Other business
The examples above mostly cache per-user data. For common (shared) cached data, note the following:
Under concurrency, a miss on common cached data can send a large number of requests straight to the DB. You can update the cache from the admin backend, or put a lock around the DB query, as sketched below.
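One common way to implement that lock, sketched with a Redis NX key so only one request rebuilds the cache while the rest wait and retry; `load_from_db` is hypothetical, and the TTLs are illustrative:

```python
import json
import time
import redis

r = redis.Redis()

def get_common_data(cache_key: str, retries: int = 20):
    for _ in range(retries):
        val = r.get(cache_key)
        if val is not None:
            return json.loads(val)                     # normal cache hit
        if r.set(f"lock:{cache_key}", 1, nx=True, ex=5):
            try:                                       # we won the lock: rebuild
                data = load_from_db()                  # hypothetical DB query
                r.set(cache_key, json.dumps(data), ex=300)
                return data
            finally:
                r.delete(f"lock:{cache_key}")
        time.sleep(0.05)        # another request is rebuilding; retry the cache
    raise TimeoutError("cache rebuild took too long")
```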
My blog post on cache-update issues has recommended solutions.
The example above is a relatively simple high-concurrency architecture. It holds up well while the concurrency stays moderate, but as the business grows and user concurrency increases, the architecture must keep being optimized and must evolve, for example toward service orientation, where each service has its own architecture: its own load-balanced servers, distributed database, and NoSQL primary/secondary cluster, e.g. a user service and an order service.
Message queue
Activity-driven services such as seckill and flash-rush sales generate a huge burst of concurrent requests in an instant.
Scenario: claiming red envelopes at a scheduled time, etc.
Server architecture diagram: (figure omitted)
Description:
In this scenario, the scheduled claim is a high-concurrency operation: active users flood in the moment the time arrives, and the DB takes a sudden hit. If it cannot hold, it goes down and takes the whole service with it.
For a business like this that is not just queries but also involves high-concurrency inserts or updates, the general scheme above cannot cope: at the peak, the concurrency hits the DB directly.
This kind of business should be designed around a message queue: append the participating users' info to the queue, then have a multi-threaded program consume the queue and issue red envelopes to the queued users.
Example solution:
Claim red envelopes at a scheduled time
A Redis list is commonly used
When a user joins the activity, push the user's participation info onto the queue
Then a multi-threaded program pops data from the queue and runs the red-envelope issuing logic (sketched below)
In this way, users can participate normally under high concurrency, and we avoid the risk of the database server going down
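A minimal producer/consumer sketch of this queue (a Redis list plus worker threads); `issue_red_envelope` is a hypothetical payout function:

```python
import json
import threading
import redis

r = redis.Redis()
QUEUE = "red_envelope:queue"

def enqueue_participant(user_id: int):
    """Called from the web tier when a user joins the activity."""
    r.lpush(QUEUE, json.dumps({"user_id": user_id}))

def worker():
    while True:
        _, raw = r.brpop(QUEUE)                  # blocks until an entry arrives
        entry = json.loads(raw)
        issue_red_envelope(entry["user_id"])     # hypothetical payout logic

# a few consumer threads drain the queue at a rate the DB can handle
for _ in range(4):
    threading.Thread(target=worker, daemon=True).start()
```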
Additional:
Many services can be built with message queues.
For example, a scheduled SMS service can use a zset (sorted set) with the send timestamp as the sort score, so the SMS queue is ordered ascending by time. A program then periodically reads the first item of the zset and checks whether the current time has passed the send time; if so, it sends the SMS.
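A sketch of that scheduled queue: ZADD with the send timestamp as the score, and a polling loop that pops entries once they are due; `send_sms` is hypothetical:

```python
import json
import time
import redis

r = redis.Redis()
SMS_ZSET = "sms:scheduled"

def schedule_sms(phone: str, text: str, send_at: float):
    """send_at is a Unix timestamp used as the zset score."""
    r.zadd(SMS_ZSET, {json.dumps({"phone": phone, "text": text}): send_at})

def sms_loop():
    while True:
        due = r.zrange(SMS_ZSET, 0, 0, withscores=True)   # earliest entry
        if due and due[0][1] <= time.time():
            member = due[0][0]
            if r.zrem(SMS_ZSET, member):                  # only one worker wins
                msg = json.loads(member)
                send_sms(msg["phone"], msg["text"])       # hypothetical sender
        else:
            time.sleep(1)
```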
Level-1 cache
When concurrent requests to the cache server exceed what it can accept, some users fail to read data because the connection times out during establishment.
So we need a scheme that reduces hits on the cache server under high concurrency.
That scheme is the level-1 cache: use the site server's own memory to cache data. Note that it should store only part of the data, the part with the largest request volume, and the cached amount must be controlled so it cannot exhaust the site server's memory and affect normal operation. The level-1 cache expiry should be set in seconds, with the exact value chosen per business scenario. The goal is that under high concurrency, reads hit the level-1 cache without connecting to the NoSQL data server, reducing the pressure on it.
For example, the product data on the app's home screen is public rather than user-specific, and it is not updated frequently. An interface like this, with a large request volume, can be put into the level-1 cache, as sketched below.
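A sketch of such a level-1 cache: a small in-process dict with a TTL of a few seconds, sitting in front of Redis. The key names and TTL are illustrative:

```python
import json
import time
import redis

r = redis.Redis()
_local: dict = {}     # in-process level-1 cache: {key: (expires_at, value)}
L1_TTL = 3            # seconds, per the note about second-level expiry

def get_home_products() -> list:
    hit = _local.get("home:products")
    if hit and hit[0] > time.time():
        return hit[1]                          # served without touching Redis

    raw = r.get("home:products")               # fall back to the NoSQL tier
    data = json.loads(raw) if raw else []
    _local["home:products"] = (time.time() + L1_TTL, data)
    return data
```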
Server architecture diagram: (figure omitted)
With reasonable partitioning and use of the NoSQL cache database, splitting the cache cluster by business, this setup can support the business quite well. Remember, though, that the level-1 cache consumes site-server memory, so it must be used judiciously.
Static data
If the data behind high-concurrency requests does not change, it can be served without asking our own servers for it at all, reducing the pressure on server resources.
If the update frequency is low and a short delay in the data is acceptable, the data can be rendered into static JSON, XML, HTML, or other files and uploaded to the CDN. Clients pull from the CDN first, and only if nothing is found there do they fall back to the cache and database. When an administrator edits the data in the backend, a new static file is generated, uploaded, and synchronized to the CDN, so that under high concurrency data reads hit the CDN servers.
CDN node synchronization has some delay, so it is important to find a reliable CDN vendor.
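A rough sketch of the publish step: the admin backend renders the data to a JSON file and pushes it to the CDN origin. `upload_to_cdn` stands in for the vendor-specific upload and cache-refresh API:

```python
import json

def publish_static(data: dict, path: str = "home_products.json"):
    # render the editable data to a static file...
    with open(path, "w", encoding="utf-8") as f:
        json.dump(data, f, ensure_ascii=False)
    # ...then push it to the CDN; hypothetical vendor upload + refresh call
    upload_to_cdn(path)
```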
Other options
For data with a low update frequency, the app or PC browser can cache the data locally and send the cached data's version number with each request to the interface. The server compares the received version number with the latest data's version number: if they differ, it queries the latest data and returns it along with the latest version number; if they match, it returns a status code indicating the data is already up to date.
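A sketch of that version check on the server side; `get_latest_version` and `get_latest_data` are hypothetical lookups:

```python
def fetch_data(client_version: str) -> dict:
    latest = get_latest_version()          # hypothetical version lookup
    if client_version == latest:
        # client's local copy is current; no data payload needed
        return {"status": 304}
    return {
        "status": 200,
        "version": latest,
        "data": get_latest_data(),         # hypothetical query for fresh data
    }
```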