A worker's body, a worker's soul: HelloHello-Tom is back online 🤣
It has been a while since my last update. As you all know, Zhengzhou was hit by floods a while ago, followed by another outbreak of the epidemic. It made Brother Tom realize how fragile life and civilization are in the face of disaster, and how lucky we are simply to be alive. It's 2021, so I will keep on writing and live up to the spirit of us working people. Today Brother Tom shares a topic from the "everyone can understand" series: Distributed System Transformation Solutions, the data part.
Distributed systems cover far too much ground and knowledge to explain in a single article. Beyond complex upgrade plans and boring theory (CAP, TCC), the real key is how to calmly and continuously solve problems while QPS keeps soaring. So Brother Tom will walk through several common problems we are likely to run into in practice.

In the early architecture, when the application needed data it would first try the cache; on a cache miss it would read the database and then write the result back into the cache. This was probably the early development pattern of many companies. A monolithic architecture like this has plenty of problems, and Brother Tom will help you sidestep the common pitfalls from several angles and provide solutions.
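For readers who want the read path spelled out, here is a minimal sketch of that cache-aside pattern. It is an illustration only, assuming Jedis as the Redis client, fastjson for serialization and a hypothetical userInfoMapper for the database; it is not the exact code from back then.

// Cache-aside read: try Redis first, fall back to the DB, then backfill the cache.
// Assumes a Jedis instance, fastjson's JSON helper and a hypothetical userInfoMapper.
public UserInfo getUserInfo(long userId) {
    String key = "user:info:" + userId;
    String cached = jedis.get(key);
    if (cached != null) {
        return JSON.parseObject(cached, UserInfo.class);  // cache hit
    }
    UserInfo user = userInfoMapper.selectById(userId);    // cache miss: read the DB
    if (user != null) {
        jedis.setex(key, 3600, JSON.toJSONString(user));  // backfill with a 1-hour TTL
    }
    return user;
}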
The Database
As soon as our system went live, thanks to the product team's "clever" design routines, users started registering on our platform in a steady stream. Within 3 months we had 30 million registered users, and the single user_info table holding all 30 million rows was starting to shiver. So what do we do? Brother Tom will go with the common hash-sharding scheme. First, we split the user domain out into its own database (a vertical split), and then create the tables in one go:
user_info_0, user_info_1, ..., user_info_1023
That gives us 1024 user tables. Estimating 5 million rows per table, 5 million * 1024 = 5.12 billion users, which should be more than enough for most Internet companies. But it also brings many new problems, such as historical data migration and user information queries. Let's start with historical data migration.
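Before moving on, here is a minimal sketch of the routing rule assumed above (1024 tables keyed by user ID; the class name is made up for illustration):

// Maps a user ID to one of the 1024 physical tables, e.g. user_info_0 .. user_info_1023.
public final class UserTableRouter {

    private static final int TABLE_COUNT = 1024;

    public static String route(long userId) {
        // A plain modulo is enough for a numeric ID; hash the key first if it is a string.
        int index = (int) (userId % TABLE_COUNT);
        return "user_info_" + index;
    }
}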
Historical Data Migration
Our online system keeps running while the synchronization job runs offline, so it is hard to keep the two consistent during the migration window. Readers of Brother Tom's earlier articles will know that historical data migration usually adopts a double-write scheme, carried out in two stages.
In the first stage, the online code writes every change to both the old table and the new table, while the migration program keeps copying historical data in the background. Of course there is an ordering problem between the migration program and the online writes: if the migration program runs after an online thread has already updated a row, the stale copy may overwrite the fresh one (please picture this in your head, make sure you really get it ~). So we must add a version number, timestamp or similar check to confirm that the data being written still matches the version in the database, keeping the update atomic. Be especially careful with fields like the user balance.
-- an update pattern similar to optimistic locking: only applies if the version still matches, and bumps the version on success
update user_info set balance = balance + #{money}, version = version + 1 where userId = #{userId} and version = #{version}
In the second stage, once the offline synchronization program has finished, the data in the old and new tables should be basically consistent. At that point we can delete the code that operates on the old table, point all queries and updates at the new tables, release again, and the data migration is complete.
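A rough sketch of what the stage-one double write can look like in service code (the mapper names, the % 1024 routing and the Spring @Transactional usage are assumptions for illustration, not the real project code):

// Stage one: every online balance change hits the old table with an optimistic lock,
// then is mirrored into the new sharded table. Mapper names here are hypothetical.
@Transactional
public void addBalance(long userId, BigDecimal money, int version) {
    // Optimistic lock on the old table: returns 0 if someone else updated the row first.
    int updated = oldUserInfoMapper.addBalance(userId, money, version);
    if (updated == 0) {
        throw new IllegalStateException("stale version for user " + userId + ", please retry");
    }
    // Double write into the new sharded table, routed the same way as the migration program.
    String table = "user_info_" + (userId % 1024);
    newUserInfoMapper.addBalance(table, userId, money, version + 1);
}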
User Information Query
This is another common scenario. Assume the 1024 tables are in place and the user data is spread evenly across them. Take querying by the user's mobile phone number as an example: it used to be as simple as
select * from user_info where mobile like '%#{mobile}%'
With the current sharding, the query has to be written as
select * from user_info_0 where mobile like '%#{mobile}%'
select * from user_info_1 where mobile like '%#{mobile}%'
...
select * from user_info_1023 where mobile like '%#{mobile}%'
Er... we can't possibly loop over all 1024 tables for a single query, and the company's operations staff may want to search users by even more fields. With the current sharding scheme, if we really do need to filter across all tables to find the users matching a phone number, what should we do?
Introduce NoSQL, specifically Elasticsearch (PS: use the right tool for the right job; introducing another middleware will greatly increase the complexity of the system). We take every field that users need to search by and index it as a condition in ES. Brother Tom does not recommend keeping the ES mapping identical to the database table structure; we don't need that much filterable content. Treat ES as a secondary index for hot data: store only the fields required for searching, first look up the matching user in ES by the given conditions, then query the detailed row from the corresponding table by user ID. When we insert a doc into ES, we can also store a field recording the name of the database table that holds this user's data, which lets us quickly locate the user's details in the right table in the DB.
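A sketch of that two-step lookup, assuming the Elasticsearch high level REST client (classes under org.elasticsearch.*), an index named user_info whose docs carry userId, mobile and a table_name field, and a hypothetical mapper for the second query:

// Step 1: find candidate users in ES by mobile; step 2: load the full row from the
// sharded table recorded in each doc. Field and mapper names are assumptions.
public List<UserInfo> searchByMobile(RestHighLevelClient client, String mobile) throws IOException {
    SearchRequest request = new SearchRequest("user_info");
    request.source(new SearchSourceBuilder()
            .query(QueryBuilders.wildcardQuery("mobile", "*" + mobile + "*"))
            .size(20));
    SearchResponse response = client.search(request, RequestOptions.DEFAULT);

    List<UserInfo> result = new ArrayList<>();
    for (SearchHit hit : response.getHits().getHits()) {
        Map<String, Object> doc = hit.getSourceAsMap();
        long userId = Long.parseLong(doc.get("userId").toString());
        String tableName = (String) doc.get("table_name");              // e.g. user_info_7
        result.add(userInfoMapper.selectFromTable(tableName, userId));  // detail row from the DB
    }
    return result;
}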
With ES introduced as a new store, we again have to migrate the existing data into it (the double-write scheme above still applies). But keeping the data in the DB and in ES consistent is a new problem. Many of you have probably run into the same thing with DB vs cache, or DB vs NoSQL. What should we do? Let's look at the CAP theory first:
The CAP principle, also known as the CAP theorem, refers to Consistency, Availability and Partition tolerance in a distributed system. It states that at best two of these three properties can be achieved at once, never all three.
Concepts aside, in practice we keep availability and settle for eventual consistency (the AP trade-off), introducing a message middleware to give us fault tolerance: if a message is not consumed successfully, the MQ retries it; once the retry threshold is reached it can be handled manually or moved to a dead-letter queue, and re-consumed after the service recovers, so that the final result ends up consistent.
Concretely, suppose we are modifying a user's information and need to update the user cache at the same time. After the DB update succeeds we send an event to MQ (make sure sending the MQ message and updating the DB happen within one transaction). The consumer receives the user's basic information and asynchronously updates the user cache; if consumption fails, MQ retries for us. You generally don't have to worry about message loss here: most vendors' queues persist a message to disk before the send is considered successful, so reliability is guaranteed (barring machine crashes, power failures, asynchronous flushing and the like).
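As a sketch of the consumer side just described, here is what it could look like with RocketMQ (just one possible MQ; the topic name, fastjson usage and the ES/cache helpers are assumptions):

// Consumes "user-info-updated" events and refreshes ES / the user cache.
// Returning RECONSUME_LATER tells RocketMQ to retry; after the retry threshold the
// message goes to the dead-letter queue for manual handling, as described above.
public void startUserCacheConsumer() throws MQClientException {
    DefaultMQPushConsumer consumer = new DefaultMQPushConsumer("user-cache-sync-group");
    consumer.setNamesrvAddr("127.0.0.1:9876");
    consumer.subscribe("user-info-updated", "*");
    consumer.registerMessageListener((MessageListenerConcurrently) (msgs, context) -> {
        try {
            for (MessageExt msg : msgs) {
                UserInfo user = JSON.parseObject(new String(msg.getBody(), StandardCharsets.UTF_8), UserInfo.class);
                userSearchRepository.upsert(user);      // hypothetical: refresh the ES doc
                userCache.put(user.getUserId(), user);  // hypothetical: refresh the user cache
            }
            return ConsumeConcurrentlyStatus.CONSUME_SUCCESS;
        } catch (Exception e) {
            return ConsumeConcurrentlyStatus.RECONSUME_LATER;  // let MQ retry this batch
        }
    });
    consumer.start();
}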
A single ES index can handle documents in the hundreds of millions without much trouble, but we should also plan for growth over time. Many Internet companies partition this secondary index by time range, for example only letting you search the last 3 or the last 6 months of data. Index names are generated dynamically per period, an alias always points at the recent indices, older data sinks to cheaper storage, and only the recent indices are kept warm in ES. That's how it's usually done.
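A sketch of rolling that alias forward with the high level REST client, assuming monthly indices named like user_info_202108 and an alias user_info_recent that all queries go through:

// Point the "recent" alias at the newest monthly index and drop the oldest one,
// so queries always cover roughly the last three months of hot data.
public void rollRecentAlias(RestHighLevelClient client) throws IOException {
    IndicesAliasesRequest request = new IndicesAliasesRequest();
    request.addAliasAction(IndicesAliasesRequest.AliasActions.add()
            .index("user_info_202108").alias("user_info_recent"));
    request.addAliasAction(IndicesAliasesRequest.AliasActions.remove()
            .index("user_info_202105").alias("user_info_recent"));
    client.indices().updateAliases(request, RequestOptions.DEFAULT);
}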
Big Data Analysis
With the help of big data analysis we can serve users better. The first step of big data analysis is getting the data, which has to be imported into a suitable engine such as Hive. But our application code can't reasonably write to ES and also synchronize to Hive by hand, and if storage middleware such as MongoDB or ClickHouse is added later, the code only gets more and more bloated. So Brother Tom introduces canal here, a data synchronization approach built on the MySQL binlog.
Canal pretends to be a MySQL replica: it sends the dump protocol to the master, pulls down the binlog, and pushes the parsed changes to various middleware for synchronization. Hive, ES, Kafka and so on are all supported this way, and the intrusion into application code drops dramatically; we only deal with the data downstream, which is how most Internet companies do it. So the architecture might look like this (the pitfalls that can appear during synchronization have been marked by Brother Tom in the figure). Canal is a very suitable middleware for non-invasive architecture refactoring; for concrete usage you can go to the canal official website and check out the tutorial.
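To get a feel for what consuming the binlog through canal looks like, here is a minimal client sketch using the official canal Java client (the server address, destination name and table filter are assumptions for illustration):

// A minimal canal client loop; requires the com.alibaba.otter canal.client dependency.
public static void main(String[] args) throws InterruptedException {
    CanalConnector connector = CanalConnectors.newSingleConnector(
            new InetSocketAddress("127.0.0.1", 11111), "example", "", "");
    connector.connect();
    connector.subscribe("mydb\\.user_info_.*");           // only follow the sharded user tables
    while (true) {
        Message message = connector.getWithoutAck(100);   // pull a batch of binlog entries
        long batchId = message.getId();
        if (batchId == -1 || message.getEntries().isEmpty()) {
            Thread.sleep(1000);                           // nothing new, back off briefly
            continue;
        }
        // Forward each entry to the downstream sinks (ES / Hive / Kafka) here.
        message.getEntries().forEach(entry ->
                System.out.println("binlog change on table " + entry.getHeader().getTableName()));
        connector.ack(batchId);                           // confirm so canal can advance its cursor
    }
}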
That's it for this installment. Next time I'll pick up where we left off, with caching:
1. How do we deal with big keys?
2. What if traffic overwhelms Redis?
I am HelloHello-Tom, a programmer in a second-tier city