This is the first day of my participation in the August Challenge. For details, see the August Challenge event page.
I still vaguely remember last year's Double 11 at Alipay. Around midnight in the war room, everyone stared at the big monitoring screen in the duty room, watching the traffic curves. As the trading peak approached, the curve climbed slowly at first and then turned steep, exactly as designed. Everyone was excited and cheered as the curve reached its peak: 583,000 transactions per second, a new record for peak trading. Still, the growth rate was much lower than in earlier years, when the peak often doubled or grew 30 to 40 percent year over year.
On Singles' Day 2010 the payment peak was 20,000 transactions per minute; by Singles' Day 2017 it had reached 256,000 transactions per second; and last year it hit 583,000 transactions per second, more than 1,000 times the peak of the first Singles' Day in 2009.
To withstand such a high payment TPS, Ant has done a great deal of top-level architecture design and low-level implementation optimization, and the most central piece is the LDC architecture.
LDC stands for Logical Data Center, a concept defined in contrast to the traditional Internet Data Center (IDC).
Everyone is familiar with IDC: it is the physical data center, the actual machine room where servers can be installed.
The core architectural idea of LDC (Logical Data Center) is that, no matter how the physical machine rooms are deployed (for example, three IDCs spread across two cities, the common "two sites, three centers" layout), they are unified logically: the whole set is treated as one logical entity, and deployment is coordinated in a unified way.
Why does LDC exist
What problem is LDC designed to solve? Let's start from the evolution of the architecture. As in the previous article on machine-room disaster recovery, we will use a concrete application to walk through it.
First, look at the monolithic architecture shown in the figure below. A request hits the gateway, the gateway calls the application or service directly, and the service reads from or writes to the storage layer.
The biggest risk of this pattern is that both the service and the storage are single points. Access capacity and performance are bounded by those of the storage and the application, and in terms of disaster recovery, once either single point fails, all you can do is wait for it to be restored.
Later, engineers began to split the application horizontally and the service vertically.
Horizontal splitting should be familiar: add servers and deploy an instance of the application on each. Vertical splitting means splitting services by business domain. A trading system, for example, has a merchant domain, a commodity domain, a user domain, an order domain, and so on; splitting it into multiple services decouples them so each can evolve independently, at the cost of higher overall application complexity.
The distributed architecture solves the single point at the service layer: if one server goes down, the service is still available. But the storage layer is still a single point, and as the business grows and more machines are added, queries get slower and slower; profiling at some stage reveals that the storage layer has become the performance bottleneck. The figure above shows only 2 servers connecting to the database, but a real distributed system may have tens, hundreds, or even thousands. If they all connect to one DB, connection counts, lock contention, and similar issues make slow SQL easy to imagine.
As you all know, Internet companies began to do read/write separation, separating read requests from write requests.
Read/write separation relies on an implicit assumption: data is usually not read immediately after being written.
There is a time gap between a write and the subsequent reads; the data is only read after it has been synchronized to the slave libraries. Statistics from real applications show that about 90% of data is not used immediately after being written.
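The routing decision behind read/write separation can be sketched in a few lines. This is a minimal, hypothetical illustration: `primary` and `replicas` stand in for real connection objects, the statement check is deliberately naive, and replication lag is not handled.

```python
class ReadWriteRouter:
    """Route writes to the primary and reads to a replica (illustrative sketch)."""

    def __init__(self, primary, replicas):
        self.primary = primary
        self.replicas = replicas
        self._rr = 0  # round-robin index over the replicas

    def connection_for(self, sql: str):
        # Writes must go to the primary; reads can be served by any replica.
        if sql.lstrip().upper().startswith(("INSERT", "UPDATE", "DELETE")):
            return self.primary
        self._rr = (self._rr + 1) % len(self.replicas)
        return self.replicas[self._rr]
```

Because of the write/read time gap described above, a read routed to a replica may briefly see stale data; real middleware adds rules such as "read your own writes from the primary" on top of this basic split.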
However, this architecture did not solve the write problem, and as traffic grew, writes became the bottleneck. Database and table sharding was born, sharding middleware became popular, and it is now basically standard at medium and large Internet companies.
The basic idea is to split the data along a chosen dimension, most commonly userId. For example, taking the last two digits of the userId lets you split into up to 100 databases and 100 tables. You can also shard by modulo: dividing by 64, for instance, gives remainders in the range 0-63 and therefore 64 databases.
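Both sharding rules above reduce to simple modular arithmetic. The sketch below illustrates them; the `user_NN` table-naming scheme is a hypothetical example, not Ant's actual convention.

```python
def shard_by_suffix(user_id: int, shards: int = 100) -> int:
    """Shard index from the last two decimal digits of user_id (00-99),
    or any other modulus, e.g. shards=64 for remainders 0-63."""
    return user_id % shards

def table_for(user_id: int) -> str:
    """Physical table name for a user, e.g. 'user_34' (hypothetical naming)."""
    return f"user_{shard_by_suffix(user_id):02d}"
```

For uid 37487834, the last two digits are 34, so its rows live in shard 34 under the 100-way scheme.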
About sharding, many people know there are two kinds, vertical and horizontal (the vertical and horizontal above referred to splitting the system; here they refer to the storage). Vertical splitting divides along the business dimension, putting tables of the same business type into one database, usually following domain-model boundaries, which yields an order library, a user library, a product library, and so on. Horizontal splitting cuts a large table (or database) into many small ones to reduce access pressure on each. The two map naturally onto the horizontal and vertical system splits:
| | Horizontal split | Vertical split |
|---|---|---|
| System dimension | Add servers | Divide a large system into subsystems by business domain |
| Database dimension | Split a big table into many small tables by a userId sharding rule | Split large tables into sub-tables by business domain |
Why are they called horizontal and vertical? It makes intuitive sense. Imagine a user table with a pile of fields, as shown below.
A vertical split cuts the table down the middle: the blue user-information columns and the green order-information columns on the right become two separate tables, and the database is split into a user library and an order library.
A horizontal split cuts across the rows, reducing the amount of data in each table.
Don't think the problem is solved at this point. After sharding, if the application layer can keep up, the database layer can handle concurrency on the order of tens of thousands. But going up another order of magnitude gets harder.
Why is that? Because every database instance is shared by all applications, each machine added at the application layer adds connections to every database instance, by at least the minimum pool size configured on that machine.
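The multiplicative growth described above is easy to make concrete. The numbers below are purely illustrative, not Ant's actual figures.

```python
def total_connections(app_servers: int, db_instances: int, pool_min: int) -> int:
    # Every app server holds a pool of at least `pool_min` connections
    # to every database instance, so the total grows multiplicatively.
    return app_servers * db_instances * pool_min

# Illustrative numbers: 100 app servers x 100 sharded DB instances
# x a minimum pool of 10 connections each = 100,000 connections in total,
# and every added app server costs another 100 x 10 = 1,000 connections.
```

This is why simply adding application servers eventually runs into a hard limit on the connection count each database instance can sustain.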
Why does an application need to connect to all database instances?
Answer: traffic from the gateway layer may land on any application server. If user A's request is routed to a given server, that server must hold a connection to the database shard of user A's userId.
So sharding only relieves the access pressure on a single database and table. Because every server still connects to all database instances, scale-out stalls at some point: the connection count each database instance can hold becomes the bottleneck.
So what do we do about the database connection bottleneck?
You may have guessed it: carry the userId sharding up into the application layer. At the gateway layer, route the traffic of a given UID shard to a designated application unit, and let that unit digest the traffic internally, as shown in the figure below:
For example, for uid = 37487834, the last two digits are 34, which falls in the 00-49 range, so the user's traffic is routed straight to the 00-49 application group, and all data interaction completes inside that cell.
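The gateway-layer routing just described can be sketched as a suffix-range lookup. The two-cell layout and cell names below are hypothetical, mirroring the 00-49 / 50-99 example above.

```python
# Hypothetical layout: two cells owning uid-suffix ranges 00-49 and 50-99.
CELLS = [
    (range(0, 50), "cell-00-49"),
    (range(50, 100), "cell-50-99"),
]

def route_to_cell(user_id: int) -> str:
    suffix = user_id % 100  # last two decimal digits of the uid
    for suffix_range, cell in CELLS:
        if suffix in suffix_range:
            return cell
    raise ValueError(f"no cell owns suffix {suffix:02d}")
```

Because the gateway pins each uid range to one cell, only that cell's servers need connections to that range's database shards.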
With two cells, the number of connections each database instance must hold is already halved, and the split can go as far as 100 cells, shrinking the per-database connection count accordingly.
I keep stressing the word "unit" here because it is the core concept in LDC, so let's look closely at what a unit means.
A unit means that a user can complete an entire business flow inside one Zone, with no traffic needing other zones to provide services; a Zone with this ability is logically self-contained. What's the benefit? If a Zone fails, the routing layer simply shifts its traffic to other zones, which share the load, making traffic allocation easy.
The diagram below is a simplified schematic of Ant's Zone deployment architecture, sharded by region and userId; the actual Zone deployment units are somewhat more complex.
The Zone described above can complete an entire business flow within the UID dimension: interdependent services are all provided inside the Zone, and calls between them stay inside the Zone. But the sharp-eyed will spot a problem: some data cannot be partitioned by userId and must be globally unique. Configuration-center data, for example, is stored centrally; there is only one copy globally, and a configuration change takes effect globally.
In fact, Ant has three types of zones: RZone, GZone, and CZone.
RZone: the logically self-contained, minimum deployment unit of the overall business system. All services and databases that can be partitioned by the userId dimension are deployed in RZones.
GZone: GZone is the Global Zone. As the name suggests, its services and databases are deployed only once globally, so a GZone must live in one specific machine room.
CZone: CZone is interesting. It exists to fix GZone's drawback. As mentioned in the previous architecture article, "How Internet Companies Do High Availability, from the Bilibili Outage", cross-city calls are slow because of the physical distance. If services in the Hangzhou machine room need a service deployed only in the GZone, they can only call across cities and machine rooms, and since one request may fan out into many RPC calls, the latency adds up. CZone acts as a bridge for data synchronization between cities, replicating GZone data into each city ahead of use; the C stands for City.
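The division of labor among the three zone types can be sketched as a routing rule. The table names and their classification below are purely hypothetical examples; they are not Ant's actual schema.

```python
# Illustrative only: which zone type serves a given data access.
SHARDED_BY_UID = {"orders", "payments", "accounts"}  # partitionable by userId
GLOBAL_TABLES = {"config", "app_meta"}               # one global copy

def zone_for(table: str, is_write: bool) -> str:
    if table in SHARDED_BY_UID:
        return "RZone"   # served inside the user's own unit
    if table in GLOBAL_TABLES and is_write:
        return "GZone"   # writes go to the single global copy
    return "CZone"       # reads of global data hit the local city replica
```

The key design choice this sketch captures: writes to global data stay centralized in the GZone, while reads of that data are kept local to each city via the CZone bridge.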
This works because of the write/read time gap mentioned earlier: data written to the GZone is synchronized to the CZones in each city with a certain delay, and local reads are then served from the nearby CZone.