@[TOC](Elasticsearch ELK Stack)

Elasticsearch ELK Stack: blog.csdn.net/weixin_4252…

Elasticsearch official user manual: www.elastic.co/guide/en/el…

Kibana official user manual: www.elastic.co/guide/cn/ki…

Basic introduction

  • What is distributed logging

In a distributed application, logs are scattered across different storage devices. If you manage dozens or hundreds of servers and still use the traditional approach of logging in to each machine in turn to check its logs, it quickly becomes cumbersome and inefficient. That is why we use centralized log management: distributed logging means collecting, tracing, and processing log data at scale.

  • Why use distributed logging

In a typical log analysis scenario, we grep and awk directly through log files to extract the information we need. At scale, however, this approach is inefficient: there are too many logs to archive, text searches are too slow, and multi-dimensional queries are hard to express. What is needed is centralized log management, in which the logs from all servers are collected and aggregated. The common solution is to build a centralized log collection system that collects, manages, and provides access to the logs of all nodes in a unified way.

  • ELK distributed logs

ELK is short for Elasticsearch, Logstash, and Kibana.

Elasticsearch is a Java-based, open source, distributed search engine. Its features include distribution, zero configuration, automatic discovery, index sharding, index replicas, a RESTful interface, multiple data sources, and automatic search load balancing.

Kibana, built on Node.js, is also a free and open source tool. It provides a friendly web interface for log analysis on top of Logstash and Elasticsearch, helping you aggregate, analyze, and search important log data.

Logstash, which runs on the JVM, is an open source tool for collecting, parsing, and storing logs.

How ELK works: Logstash collects and processes the logs from each server, Elasticsearch indexes and stores them, and Kibana searches and visualizes them.

ElasticSearch

Introduction

Elasticsearch is a search server based on Lucene. It provides a distributed, multi-user-capable full-text search engine with a RESTful web interface. Developed in Java and released as open source under the Apache License, Elasticsearch is a popular enterprise-level search engine. It is designed for the cloud: real-time search, stable, reliable, fast, and easy to install and use.

When we build a website or an application and want to add search, creating that search functionality from scratch is hard. We want a search solution that runs fast, with zero configuration and completely free; we want to be able to index data simply by sending JSON over HTTP; we want a search server that is always available, can start with one machine and scale to hundreds, and supports real-time search; we want simple multi-tenancy; and we want to build a solution ready for the cloud. Elasticsearch solves all of these problems, and possibly more.
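Indexing really is just JSON over HTTP. A minimal sketch, assuming a local node on port 9200 (the index name my-index and the document fields are made up for illustration):

# Index a document; the index is created automatically if it does not exist
curl -X POST "http://127.0.0.1:9200/my-index/_doc?pretty" \
  -H "Content-Type: application/json" \
  -d '{"user": "alice", "message": "hello elasticsearch"}'

# Search it back with a simple query-string search
curl "http://127.0.0.1:9200/my-index/_search?q=message:hello&pretty"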

Elasticsearch is the heart of the Elastic Stack: a distributed, RESTful search and analytics engine capable of addressing a growing number of use cases. It stores your data centrally, helping you find the expected and uncover the unexpected.

Download

Select the version you need. Note that newer versions already require JDK 11.

I use 7.13 with JDK 1.8 both locally and on the server; version 7.15 no longer supports JDK 1.8.

Official Download address

Installation

Windows

bin/elasticsearch or bin/elasticsearch.bat

Linux

  • Decompress to the corresponding directory
tar -zxvf elasticsearch-7.10.2-linux-x86_64.tar.gz -C /usr/local
  • Modify the configuration
cd /usr/local/elasticsearch-7.10.2/config
vim elasticsearch.yml

node.name: node-1
path.data: /usr/local/elasticsearch-7.10.2/data
path.logs: /usr/local/elasticsearch-7.10.2/logs
network.host: 127.0.0.1
http.host: 0.0.0.0
http.port: 9200
discovery.seed_hosts: ["127.0.0.1"]
cluster.initial_master_nodes: ["node-1"]
  • Create an es user: Elasticsearch cannot be run directly as the root user, so we need a dedicated es user
useradd es
chown -R es:es /usr/local/elasticsearch-7.10.2

Start

  • Switch the user to es
su - es
/usr/local/elasticsearch-7.10.2/bin/elasticsearch
  • Start in the background
/usr/local/elasticsearch-7.10.2/bin/elasticsearch -d

Open port 9200 in the browser (http://120.78.129.95:9200/). If the following information is displayed, the startup was successful.
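You can also check from the command line; a quick sketch, assuming you are on the server itself (adjust the host otherwise):

# Basic node information
curl http://127.0.0.1:9200/
# Cluster health (green or yellow is expected for a single node)
curl http://127.0.0.1:9200/_cluster/health?pretty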

Security Settings

Modify the configuration file

Modify the elasticsearch.yml file under the config directory, add the following content, and then restart Elasticsearch.

Turn off security settings

xpack.security.enabled: false

Turn on security settings

xpack.security.enabled: true
xpack.license.self_generated.type: basic
xpack.security.transport.ssl.enabled: true
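Once security is enabled and Elasticsearch has been restarted, anonymous requests are rejected. A quick check (the elastic password is set in the next step):

# Without credentials this should now return HTTP 401
curl -i http://127.0.0.1:9200/
# With the elastic superuser it succeeds
curl -u elastic:your_password http://127.0.0.1:9200/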

Set the user name and password

You need to set passwords for the built-in users, including elastic, kibana, logstash_system, and beats_system

  • elastic: a built-in superuser with the superuser role.
  • kibana: has the kibana_system role, which Kibana uses to connect to and communicate with Elasticsearch. The Kibana server submits requests as this user to access the cluster monitoring APIs and the .kibana index; it cannot access user indices.
  • logstash_system: has the logstash_system role; Logstash uses it to store monitoring information in Elasticsearch.

From the command line, go to the ES installation directory and run the following command.

Note that the password input is hidden here (nothing is echoed as you type); it is not a problem with your keyboard.

bin/elasticsearch-setup-passwords interactive
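After the passwords are set, every client has to authenticate. A quick check with curl (substitute the password you just chose):

# Shows which user and roles the credentials resolve to
curl -u elastic:your_password "http://127.0.0.1:9200/_security/_authenticate?pretty"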

Change the user name and password

Linux server

# Create a new user with the superuser role (file realm), e.g. TestSuper
bin/elasticsearch-users useradd TestSuper -r superuser
# Change the elastic user's password through the security API
curl -H "Content-Type: application/json" -XPOST -u elastic:now_password 'http://10.10.17.19:9200/_xpack/security/user/elastic/_password' -d '{"password": "123456"}'

Windows

Reinstall to fix

Incomplete, crashed, restarting…

Logstash

Introduction

Logstash is an open source server-side data processing pipeline that can ingest data from multiple sources at the same time, transform it, and then send it to your favorite "stash" (ours, of course, is Elasticsearch).

Download

Download from the official website: www.elastic.co/cn/download…

Installation

  • Decompress to the corresponding directory
tar -zxvf logstash-7.10.2.tar.gz -C /usr/local
  • Add a configuration file
cd /usr/local/logstash-7.10.2/bin
vim logstash-elasticsearch.conf
input {
	stdin {}
}
output {
	elasticsearch {
		hosts => '120.78.129.95:9200'
	}
	stdout {
		codec => rubydebug
	}
}

Start

./logstash -f logstash-elasticsearch.conf
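After it starts, type any line into the terminal: the stdin input turns it into an event, the rubydebug codec prints the structured event to the console, and the elasticsearch output indexes the same event into the cluster configured in hosts.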

Kibana

Introduction

Kibana is an open source data analysis and visualization platform, designed to work with Elasticsearch as part of the Elastic Stack. You can use Kibana to search, view, and interact with data stored in Elasticsearch indices, and easily analyze and present that data in a variety of charts, tables, and maps.

Download

It needs to be the same version as ES

Download from the official website: www.elastic.co/cn/download…

Installation

  • Decompress to the corresponding directory
tar -zxvf kibana-7.10.2-linux-x86_64.tar.gz -C /usr/local
mv /usr/local/kibana-7.10.2-linux-x86_64 /usr/local/kibana-7.10.2
  • Modify the configuration
cd /usr/local/kibana-7.10.2/config
vim kibana.yml
server.port: 5601 
server.host: "0.0.0.0" 
elasticsearch.hosts: ["http://120.78.129.95:9200"] 
kibana.index: ".kibana"
  • Grant ownership to the es user
chown -R es:es /usr/local/kibana-7.10.2/

Start

  • Switch the user to es
su - es
/usr/local/kibana-7.10.2/bin/kibana
  • Start in the background
/usr/local/kibana-7.10.2/bin/kibana &

Open port 5601 in the browser (http://120.78.129.95:5601/). If the following information is displayed, the startup was successful.
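You can also check Kibana's status API from the command line; a quick sketch, assuming you are on the server itself:

curl http://127.0.0.1:5601/api/status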

Switch the UI to Chinese

Add the following to config/kibana.yml:

i18n.locale: "zh-CN"

Log collection

Install Logstash on the server and configure a collection rule; for example, create logstash-apache.conf:

input {
  file {
    path => "/home/ruoyi/logs/sys-*.log"
    # Read files from the beginning and do not persist read positions
    start_position => beginning
    sincedb_path => "/dev/null"
    # Join lines that do not start with a timestamp onto the previous event (e.g. stack traces)
    codec => multiline {
      pattern => "^\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}"
      negate => true
      auto_flush_interval => 3
      what => previous
    }
  }
}

filter {
  if [path] =~ "info" {
    mutate { replace => { type => "sys-info" } }
    grok {
      match => { "message" => "%{COMBINEDAPACHELOG}" }
    }
    date {
      match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ]
    }
  } else if [path] =~ "error" {
    mutate { replace => { type => "sys-error" } }
  } else {
    mutate { replace => { type => "random_logs" } }
  }
}

output {
  elasticsearch {
    hosts => '120.78.129.95:9200'
  }
  stdout { codec => rubydebug }
}
  • Start Logstash
./logstash -f logstash-apache.conf
  • Use Kibana to visually search the log data of each service (see the quick check below to confirm that events are arriving)
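Before building Kibana visualizations, you can confirm that the logs actually reached Elasticsearch. A rough sketch, assuming the default Logstash index pattern logstash-* and that security is disabled (add -u elastic:password otherwise):

# List the indices created by Logstash
curl 'http://120.78.129.95:9200/_cat/indices/logstash-*?v'
# Look at a few of the most recent events
curl 'http://120.78.129.95:9200/logstash-*/_search?size=5&sort=@timestamp:desc&pretty'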

Reference

  • doc.ruoyi.vip/ruoyi-cloud…