Centralized logging

Logs are generally classified into system logs, application logs, and business logs. System logs serve O&M personnel, application logs serve R&D personnel, and business logs serve business operation personnel. This section focuses on application logs, which help us understand application information and status and analyze the causes of application errors.

As systems grow more complex and the era of big data arrives, it is common for a deployment to span dozens or even hundreds of servers, so there is an urgent need for a product that collects logs and manages them centrally. ELK provides such a centralized log management platform, covering distributed log collection, retrieval, statistics, analysis, and web-based management.

1.1 Introduction to ELK

ELK is short for Elasticsearch, Logstash, and Kibana, three open source tools that together form a powerful centralized log management platform.

Elasticsearch is an open source distributed search engine that provides search, analysis, and data storage. Its features include a distributed architecture, automatic discovery, automatic index sharding, an index replica mechanism, a RESTful interface, multiple data sources, and automatic search load balancing.

Logstash is an open source tool for collecting, parsing, and filtering logs. It supports almost any type of log, including system logs, business logs, and security logs. It can receive logs from many sources, including syslog, message queues (such as RabbitMQ), and Filebeat, and can output data in a variety of ways, including email, WebSockets, and Elasticsearch.

Kibana is a user-friendly web-based graphical interface for searching, analyzing, and visualizing data stored in Elasticsearch. It uses Elasticsearch's RESTful interface to retrieve data, allowing users not only to create custom dashboard views but also to query, aggregate, and filter data in ad hoc ways.

1.2 Structure of ELK

The following figure is an architecture diagram of the ELK centralized log management platform. For performance reasons, Beats + EK (Filebeat, Elasticsearch, and Kibana, without Logstash) is used to build the centralized log management system.



ELK architecture

2. Configuration method

2.1 Elasticsearch

After Elasticsearch is deployed, you need to change the main properties in the elasticsearch.yml configuration file: cluster.name, node.name, network.host, and discovery.zen.ping.unicast.hosts. The discovery.zen.ping.unicast.hosts property only needs to be configured when Elasticsearch is deployed in cluster mode.
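A minimal elasticsearch.yml sketch covering these four properties; the cluster name, node name, and addresses below are placeholders, not values from this deployment:

```yaml
# elasticsearch.yml -- minimal sketch (placeholder values)
cluster.name: elk-log-cluster          # must be identical on every node in the cluster
node.name: es-node-1                   # unique name for this node
network.host: 192.168.1.10             # address the node binds to and publishes

# Only required in cluster mode: seed list of other nodes for unicast discovery
discovery.zen.ping.unicast.hosts: ["192.168.1.10", "192.168.1.11"]
```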

2.2 Logstash

Configure the Input, Filter (optional), and Output sections in filebeat-pipeline.conf to collect, filter, and output data, as shown in the following figure:



Logstash configuration
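A minimal sketch of such a pipeline file, assuming a Beats input and an Elasticsearch output; the port, hosts, and index name are placeholders:

```conf
# filebeat-pipeline.conf -- minimal sketch (placeholder values)
input {
  beats {
    port => 5044                         # port that Filebeat ships events to
  }
}
filter {
  # optional: parse, enrich, or drop events here (e.g. grok, mutate)
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "applog-%{+YYYY.MM.dd}"     # one index per day; name is a placeholder
  }
}
```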

Then start the Logstash service with the filebeat-pipeline.conf file (typically bin/logstash -f filebeat-pipeline.conf), as shown below:



Enable the Logstash service

Note: Since Beats + EK is used to implement centralized log management, Logstash does not actually need to be configured here.

2.3 Kibana

Modify the kibana.yml configuration file so that Kibana connects to the correct Elasticsearch service address; only the Elasticsearch address needs to be configured. After that, run the bin/kibana & command to start the Kibana service, as shown in the second figure below. Finally, open the Kibana administration page in a browser (access address: http://139.198.13.12:4800/) to view the logs.



Configuration description of Kibana
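A minimal kibana.yml sketch, assuming the elasticsearch.url setting used by Kibana in this era of the stack; the port and addresses are placeholders:

```yaml
# kibana.yml -- minimal sketch (placeholder values)
server.port: 5601                                # port Kibana listens on
server.host: "0.0.0.0"                           # bind address
elasticsearch.url: "http://192.168.1.10:9200"    # the Elasticsearch service to query
```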



Enable the Kibana service

2.4 Filebeat

The filebeat.yml configuration file contains the filebeat, output, shipper (optional), and logging (optional) sections. The filebeat section defines the log files to monitor, and the output section sets the destination of the log data.

In filebeat.yml, the main attribute values are named as follows:

  1. The naming convention for fields.appid is {AppID}.
  2. The naming convention for fields.appName is {English name of product line}.{English name of project}; if the project name consists of two or more English words, separate them with a period (.).
  3. Note that the value of index is {product line name}; the letters must be lowercase, and the value cannot start with an underscore or contain a comma.

The following figure shows an example of filebeat.yml configuration:



Example of the filebeat.yml configuration
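A sketch of a filebeat.yml that follows the naming conventions above, in the Filebeat 1.x layout used here; the paths, hosts, and names are placeholders, not the article's actual values:

```yaml
# filebeat.yml -- minimal sketch (placeholder values)
filebeat:
  prospectors:
    - paths:
        - 'D:\Log4Net\150202\*.log'    # log files to monitor
      input_type: log
      fields:
        appid: "150202"                # {AppID}
        appName: "myproductline.myproject"   # {product line}.{project}
output:
  elasticsearch:
    hosts: ["192.168.1.10:9200"]
    index: "myproductline"             # lowercase, no leading underscore, no comma
```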

The Filebeat service is deployed on the server where the log files are stored. To enable the Filebeat service on Windows:

1. In Windows search, type PowerShell, right-click Windows PowerShell, and choose Run as administrator.

Alternatively, start cmd.exe as an administrator and run powershell to enter a PowerShell window.

Note:

Make sure you open the PowerShell window as an administrator; otherwise you will not have permission to create the Filebeat service when you run the .ps1 script in step 2 below.

2. Go to the Filebeat installation directory, for example cd 'E:\ELK\filebeat-1.3.0-windows', and run the following command: PowerShell.exe -ExecutionPolicy UnRestricted -File .\install-service-filebeat.ps1

3. You can then view, start, and stop the Filebeat service using the following commands in the PowerShell window:

  • Check the Filebeat service status: Get-Service filebeat
  • Start the Filebeat service: Start-Service filebeat
  • Stop the Filebeat service: Stop-Service filebeat

3. Usage

3.1 Log4Net Local Logs

1. Log storage path specification: {drive letter}:\Log4Net\{AppID}\, where AppID is the project's six-digit code. For example: D:\Log4Net\110107\

2. log4net.config configuration:

```xml
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <configSections>
    <section name="log4net" type="System.Configuration.IgnoreSectionHandler"/>
  </configSections>
  <appSettings>
  </appSettings>
  <log4net>
    <appender name="FileAppender" type="log4net.Appender.RollingFileAppender">
      <!-- 150202 is the AppID -->
      <file value="D:\Log4Net\150202\" />
      <rollingStyle value="Composite" />
      <datePattern value="yyyy-MM-dd&quot;.log&quot;" />
      <staticLogFileName value="false" />
      <param name="Encoding" value="utf-8" />
      <maximumFileSize value="100MB" />
      <countDirection value="0" />
      <maxSizeRollBackups value="100" />
      <appendToFile value="true" />
      <layout type="log4net.Layout.PatternLayout">
        <conversionPattern value="Time: %date Thread: [%thread] Level: %5level Logger: %logger Message: %message%newline" />
      </layout>
    </appender>
    <logger name="FileLogger" additivity="false">
      <level value="DEBUG" />
      <appender-ref ref="FileAppender" />
    </logger>
  </log4net>
</configuration>
```

Note:

  • Set maximumFileSize to 100MB, countDirection to an integer greater than -1, and maxSizeRollBackups to 100.
  • Log file content specification: each entry in a log file must record the time, thread, log level, logger class, and log message.

3.2 Log Query

Querying logs in Kibana (access address: http://139.198.13.12:4800/) mainly involves the following steps:

  1. Select the business index library.
  2. Select a date range.
  3. Enter the content to search for an exact query, or enter * for a fuzzy query.
  4. You can click the expansion icon of a log to view details about the log.
  5. On the left side of the Kibana interface, the index library selection box, the Selected Fields list, and the Available Fields list are arranged from top to bottom. You can change the log table rendered on the right side of Kibana by adding available fields to the Selected Fields list.

Please refer to the picture below:



Kibana log query page
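As an illustration of step 3, a few hypothetical queries in Kibana's search bar (Lucene query syntax); the field names follow the filebeat.yml conventions above, and the values are examples only:

```
150202                          # fuzzy full-text query across all fields
fields.appid:150202             # exact query on a specific field
fields.appid:150202 AND ERROR   # combine terms with AND / OR
```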

4. Demo download and more information

  • Log4NetDemo download address: github.com/das2017/Log…
  • ELK website: www.elastic.co/
  • Regular expression configuration instructions: www.elastic.co/guide/en/be…

The list of topics covered in this series is as follows. If you are interested, please pay attention:

  • Introduction: Small and medium-sized R&D team architecture practice three points
  • Cache Redis: Quick start and application of Redis
  • Message queue RabbitMQ: How to use the message queue RabbitMQ well?
  • Centralized log ELK
  • Task scheduling Job: Task scheduling Job in the architectural practice of small and medium-sized R&D teams
  • Metrics: What does app monitoring do?
  • Microservices framework MSA
  • Solr
  • Distributed coordinator ZooKeeper
  • Small tools:
  • Dapper.NET/EmitMapper/AutoMapper/Autofac/NuGet
  • Release tool Jenkins
  • Overall architecture design: How to do e-commerce enterprise overall architecture?
  • Single project architecture design
  • Uniform application layering: How to standardize all application layering in a company?
  • Debugging tool WinDbg
  • Single sign-on (sso)
  • Enterprise Payment Gateway

About the authors

Zhang Huiqing, an IT veteran of more than 10 years, has successively served as an architect at Ctrip, chief architect of Gooda Group, and CTO of Zhongqing E-Travel, and led the upgrade and transformation of the technical architecture of the latter two companies. He focuses on architecture and engineering efficiency, the matching and integration of technology and business, and the value and innovation of technology.

Yang Li has years of experience in Internet application system research and development. She used to work in Gooda Group and now works as the system architect of Zhongqing E-Travel. She is mainly responsible for the architecture design of the business system of the company’s R&D center, as well as the accumulation and training of new technologies. The current focus is on open source software, software architecture, microservices and big data.

Thanks to Yuta Tian guang for correcting this article.