1. Overview
Logs are the key record of what happens inside a system, and they are also a common form of mass data. Without a platform, logs are scattered and inconvenient to view, log searches are complex and inefficient, and service exceptions cannot be discovered in time. The log platform therefore provides one-stop log services for all business systems of the Group, including log collection, consumption, analysis, storage, indexing, and query.
As Youzan's business has grown, tens of billions of logs are generated every day (on average about 500,000 logs per second, with peaks of up to 800,000 per second). The log platform has undergone many changes and upgrades as the business continued to evolve. This article shares Youzan's experience in building, evolving, and optimizing the current log system; we offer it as a starting point and welcome discussion.
2. The original log system
In 2016, Youzan began building a unified log platform for its business systems, responsible for collecting all system logs and business logs, converting them into streaming data, and uploading them to the log center (a Kafka cluster) through Flume or Logstash. Track, Storm, Spark, and other systems then analyze and process the logs in real time, storing them in HDFS for offline data analysis or writing them into Elasticsearch for querying. The overall architecture is shown in the figure below.
As more and more applications were added, log volume kept growing, which exposed problems and new requirements in the following areas:
1. Service logs had no uniform standard, with formats varying from system to system, so every newly introduced application increased the cost of log parsing and retrieval.
2. Logs were collected in multiple different ways, driving up operations and maintenance costs.
3. The Elasticsearch cluster had several problems:
- The default ES management strategy was used: every index got 3*2 shards (3 primary, 3 replica). Some indexes were very large, causing many Bulk Request Rejected errors and concentrating disk I/O on a few machines.
- Logs hit by Bulk Request Rejected were not reprocessed, so business logs were lost.
- Logs were retained for seven days by default; with SSDs as the storage medium, storage costs grew too high as the business expanded.
- In addition, the Elasticsearch cluster had no physical isolation, so an OOM in the ES cluster could make all of its indexes unavailable at once.
4. The log platform collects a large amount of log information, but users could not directly view the errors of a given time period, which made fault locating difficult.
3. Evolution of existing systems
From generation to retrieval, logs mainly go through the following stages: collection -> transmission -> buffering -> processing -> storage -> retrieval. The detailed architecture is as follows:
3.1 Log Access
Log access is divided into two modes: SDK access and HTTP Web service access.
- SDK access: The log system provides SDKs for different languages. The SDK automatically wraps the log content into the final message body according to a unified protocol format, and sends it to the log forwarding layer (Rsyslog-hub) over TCP.
- HTTP Web service access: Services that cannot use the SDK can send logs directly via HTTP requests to the Web service deployed by the log system; the Web Portal then forwards them uniformly to the Kafka cluster at the log buffer layer.
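As a minimal sketch of how an SDK might wrap a log line into a unified protocol envelope before sending it to the forwarding layer, consider the following; the envelope field names (appName, logModel, level, timestamp, body) are illustrative assumptions, not Youzan's actual protocol:

```python
import json
import time


def build_log_message(app_name, log_model, level, body):
    """Wrap a raw log line into a unified envelope.

    The field names below are hypothetical placeholders for whatever
    the platform's real protocol format defines.
    """
    return json.dumps({
        "appName": app_name,       # source application
        "logModel": log_model,     # log model the message belongs to
        "level": level,            # e.g. INFO / WARN / ERROR
        "timestamp": int(time.time() * 1000),  # epoch millis
        "body": body,              # the raw log content
    })


msg = build_log_message("order-service", "trade_log", "ERROR", "pay timeout")
print(msg)
```

The forwarding layer (or the Web Portal, for HTTP access) can then treat every message identically regardless of which language's SDK produced it.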
3.2 Log Collection
3.3 Log Buffering
Kafka is a high-performance, highly available, easily scalable distributed message system that decouples the stages of the data-processing pipeline. Serving as the log buffer layer, the Kafka cluster provides asynchronous decoupling and peak shaving for the platform's downstream distributed consumer services, and offers large data-accumulation capacity and high read/write throughput.
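Part of what makes Kafka work as a buffer layer is keyed partitioning: records with the same key always land on the same partition, so each log model keeps its internal ordering while overall load spreads across the cluster. A minimal sketch of that mapping, assuming logs are keyed by log model name:

```python
import zlib


def partition_for(log_model: str, num_partitions: int) -> int:
    """Map a log model to a Kafka partition.

    Uses a stable CRC32 hash so the same log model always maps to the
    same partition, preserving per-model ordering within a partition.
    (Kafka's own default partitioner uses murmur2; CRC32 stands in here
    to keep the sketch dependency-free.)
    """
    return zlib.crc32(log_model.encode("utf-8")) % num_partitions


p = partition_for("trade_log", 3)
assert 0 <= p < 3
```

Because the mapping is deterministic, consumers can rely on all records of one log model arriving in order on a single partition.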
3.4 Log Segmentation
Parsing logs quickly, conveniently, and accurately is a top priority. The log platform uses the Spark Streaming computing framework to consume the service logs written to Kafka, with YARN acting as the container for computing resource allocation and management, assigning different resources to process different log models based on their service log levels.
As the Spark job runs, each batch of tasks asynchronously writes all of its logs to the ES cluster. Before going live, a business can configure arbitrary match-and-filter alarm rules for its log models on the management platform. Each Spark executor keeps a copy of these rules in local memory; when the count of matches within a rule's time window reaches the configured threshold, an alarm is sent to the specified users through the designated channels, so that problems are discovered in time. When traffic spikes and ES returns Bulk Request Rejected for some logs, those logs are written back to Kafka to await compensation (retry).
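The in-executor alarm rule described above can be sketched as a sliding-window counter. The rule shape here (simple keyword match, window in seconds, hit threshold) is a simplification of whatever matching the platform actually supports:

```python
from collections import deque


class AlarmRule:
    """Count matching logs in a time window; fire once the threshold is hit.

    A simplified sketch of the per-executor in-memory rule; the fields
    and matching logic are illustrative assumptions.
    """

    def __init__(self, keyword, window_seconds, threshold):
        self.keyword = keyword
        self.window = window_seconds
        self.threshold = threshold
        self.hits = deque()  # timestamps of matching logs

    def observe(self, log_line, now):
        if self.keyword in log_line:
            self.hits.append(now)
        # Evict hits that have fallen outside the window.
        while self.hits and now - self.hits[0] > self.window:
            self.hits.popleft()
        # True means "send an alarm through the configured channel".
        return len(self.hits) >= self.threshold


rule = AlarmRule("Timeout", window_seconds=60, threshold=3)
fired = False
for t in (1, 2, 3):
    fired = rule.observe("RpcException: Timeout", now=t)
print(fired)  # True: three matches within the 60s window
```

Keeping the rule state in executor-local memory, as the text describes, avoids a round trip to external storage on every log line; the trade-off is that counts are per-executor rather than global.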
3.5 Log Storage
- Previously, all logs were written to an ES cluster backed by SSDs, with LogIndex corresponding directly to an index structure in ES. As the business grew, disk usage on individual ES nodes reached 70% to 80%. To solve this, the system now stores the original log data in HBase and keeps only the content to be indexed in Elasticsearch.
- Indexes are created by day, and each day's index is created in advance, with its shard count determined from the historical data volume; this avoids write failures caused by creating all indexes at once at rollover time. Currently the log system keeps only the service logs generated within the last 7 days; if a longer retention time is configured, the logs are saved to archive storage.
- For storage, both HBase and ES are distributed systems and can be scaled linearly.
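The shard-count estimation mentioned above, picking tomorrow's shard count from historical volume, could look like the following; the 30 GB-per-shard target and the shard bounds are illustrative assumptions, not the platform's actual policy:

```python
def shards_for_next_day(avg_daily_bytes,
                        target_shard_bytes=30 * 2**30,  # ~30 GiB per shard
                        min_shards=3, max_shards=24):
    """Pick a shard count for the next day's index from historical volume.

    All constants here are hypothetical tuning values; real clusters
    would derive them from node count and disk capacity.
    """
    # Ceiling division without math.ceil: -(-a // b).
    shards = max(min_shards, -(-avg_daily_bytes // target_shard_bytes))
    return min(shards, max_shards)


print(shards_for_next_day(200 * 2**30))  # ~200 GiB/day -> 7 shards
```

Pre-computing this once per day, rather than letting every index fall back to a fixed default, is what lets heavy log models get more shards while small ones stay cheap.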
4. Multi-tenant
As the log system has continued to develop, the QPS of logs across the whole network keeps growing, and users' requirements for real-time delivery, accuracy, word segmentation, and querying have become increasingly diverse. To meet these requirements, the log system supports multi-tenancy, assigning logs to different tenants according to users' needs so that tenants do not affect one another.
The architecture for a single tenant is as follows:
- SDK: can be customized on demand, or use Skynet's TrackAppender or SkynetClient
- Kafka cluster: can use the shared Kafka cluster or a dedicated one
- Spark cluster: runs on the YARN cluster with resource isolation; in general, no special isolation is required
- Storage: includes ES and HBase, which can be shared or deployed separately as required
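Tenant routing can be pictured as a small lookup table: each tenant's configuration names the Kafka, ES, and HBase clusters its logs flow through. A toy sketch, with made-up tenant and cluster names:

```python
# Hypothetical tenant configuration; names and fields are assumptions.
TENANTS = {
    "default": {"kafka": "kafka-shared", "es": "es-shared", "hbase": "hbase-shared"},
    "trade":   {"kafka": "kafka-trade",  "es": "es-trade",  "hbase": "hbase-shared"},
}


def route(log_model: str) -> dict:
    """Resolve which clusters a log model's data flows through.

    A real platform would look this up from the management console's
    metadata; here a naming prefix stands in for that mapping.
    """
    tenant = "trade" if log_model.startswith("trade_") else "default"
    return TENANTS[tenant]


print(route("trade_order"))  # the dedicated trade tenant's clusters
```

Note how the trade tenant gets dedicated Kafka and ES clusters while still sharing HBase, matching the "shared or deployed separately as required" point above.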
5. Existing problems and future plans
At present, as a functional module integrated into Skynet, the Youzan log system provides easy-to-use search, including time-range queries, field filtering, NOT/AND/OR operators, and fuzzy matching, and can highlight query fields and locate the surrounding log context, basically covering most existing log retrieval scenarios. However, the log system still has many deficiencies, including:
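The search features listed above map naturally onto an Elasticsearch bool query: AND terms into must, NOT terms into must_not, field filters into filter, plus a timestamp range and highlighting. A sketch of what such a backend might generate, with index field names (body, timestamp, appName) assumed:

```python
def build_search(query_terms, not_terms, field_filters, start_ms, end_ms):
    """Assemble an Elasticsearch bool query for the search features above.

    The field names are hypothetical; the bool/must/must_not/filter
    structure is standard Elasticsearch Query DSL.
    """
    must = [{"match": {"body": t}} for t in query_terms]         # AND terms
    must.append({"range": {"timestamp": {"gte": start_ms, "lte": end_ms}}})
    filters = [{"term": {f: v}} for f, v in field_filters.items()]
    must_not = [{"match": {"body": t}} for t in not_terms]       # NOT terms
    return {
        "query": {"bool": {"must": must, "filter": filters,
                           "must_not": must_not}},
        "highlight": {"fields": {"body": {}}},   # highlight matched fields
    }


q = build_search(["timeout"], ["heartbeat"], {"appName": "order-service"},
                 0, 1_000)
print(q["query"]["bool"]["must_not"])
```

OR terms would go into a nested bool with `should` clauses; they are omitted here to keep the sketch short.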
1. Incomplete full-link monitoring: from generation to retrieval, logs pass through multiple modules, but the log buffer layer is not yet linked into the chain, so log loss cannot be accurately monitored and alarms cannot be pushed in time.
2. Currently, one log model corresponds to one Kafka topic, and each topic is assigned three partitions by default. Because log volume differs greatly between models, some topics are overloaded while others waste resources, and dynamic resource scaling is inconvenient. Too many topics also mean too many partitions, which wastes Kafka resources and increases both latency and broker failure recovery time.
3. Elasticsearch currently uses ik_max_word for Chinese word segmentation, which splits Chinese text at the finest possible granularity.
The deficiencies above are what we will work to improve next. Beyond that, mining deeper value from logs is also a direction we will explore, so as to safeguard the normal operation of the business.
At 13:30 on April 27 (Saturday), the Youzan technology middleware team and the Elastic Chinese community will hold an offline technology exchange event in Hangzhou, focusing on Elastic's open-source products and surrounding technologies.
The event is free and limited to 200 participants.