Kibana deployment

Kibana is a visual interface that makes it easy to manage Elasticsearch (ES).

mkdir -p /data/es/kibana/config /data/es/kibana/plugins
cd /data/es/kibana/config
# create the configuration file
vim kibana.yml
server.name: kibana

server.host: "0.0.0.0" 

elasticsearch.hosts: ["http://10.90.x.x:9200", "http://10.90.x.x:9200", "http://10.90.x.x:9200"]

xpack.monitoring.ui.container.elasticsearch.enabled: true

i18n.locale: "zh-CN"



After enabling security authentication, you also need to add the following configuration:

elasticsearch.username: "kibana"

elasticsearch.password: "Password previously created for the Kibana account"


docker pull kibana:7.5.0

Start the service:

docker run -d --name kibana -p 5601:5601 \
  -v /data/es/kibana/config/kibana.yml:/usr/share/kibana/config/kibana.yml \
  -v /data/es/kibana/plugins:/usr/share/kibana/plugins:rw \
  kibana:7.5.0

Kibana is now up and running. The interface itself is not covered in detail here; explore it on your own. The sections below collect some commonly used items.

Command line tool

If shards are unassigned, use this command to find out why. Allocation failures are generally caused by network problems and do not recover on their own; closing and then reopening the index, or retrying the failed allocations as shown below, usually resolves the problem.

GET /_cluster/allocation/explain



# Retry allocation of index shards that failed to recover

POST /_cluster/reroute?retry_failed=true


Create an index lifecycle policy

You can fill in the form directly in the Kibana UI. Typically you create a delete policy and then associate it with an index template during template configuration, as sketched below.
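
The same policy can also be created through the API. A minimal sketch of a delete-only policy, using the k8s-logs-delete-policy name referenced by the template below; the 7-day retention is an example value:

PUT _ilm/policy/k8s-logs-delete-policy
{
  "policy": {
    "phases": {
      "delete": {
        "min_age": "7d",
        "actions": {
          "delete": {}
        }
      }
    }
  }
}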

Index template use

Create an index template named k8s-template and set the index pattern it should match.

Index settings (index.lifecycle.name associates the k8s-logs-delete-policy created above):

{
  "index": {
    "lifecycle": {
      "name": "k8s-logs-delete-policy"
    },
    "refresh_interval": "30s",
    "number_of_shards": "1",
    "number_of_replicas": "1"
  }
}


Index mappings:

{
  "dynamic_templates": [
    {
      "message_field": {
        "path_match": "message",
        "mapping": {
          "norms": false,
          "type": "text"
        },
        "match_mapping_type": "string"
      }
    },
    {
      "string_fields": {
        "mapping": {
          "norms": false,
          "type": "text",
          "fields": {
            "keyword": {
              "ignore_above": 256,
              "type": "keyword"
            }
          }
        },
        "match_mapping_type": "string",
        "match": "*"
      }
    }
  ],
  "properties": {
    "@timestamp": {
      "type": "date"
    },
    "geoip": {
      "dynamic": true,
      "properties": {
        "ip": {
          "type": "ip"
        },
        "latitude": {
          "type": "half_float"
        },
        "location": {
          "type": "geo_point"
        },
        "longitude": {
          "type": "half_float"
        }
      }
    },
    "@version": {
      "type": "keyword"
    }
  }
}

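For reference, the same template can be created through the API instead of the UI. A minimal sketch, assuming an index pattern of k8s-* (a guess based on the k8s- index names Logstash writes below); the full settings and mappings are the two JSON bodies above, abbreviated here:

PUT _template/k8s-template
{
  "index_patterns": ["k8s-*"],
  "settings": {
    "index.lifecycle.name": "k8s-logs-delete-policy",
    "index.number_of_shards": "1",
    "index.number_of_replicas": "1"
  },
  "mappings": {
    "properties": {
      "@timestamp": { "type": "date" }
    }
  }
}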

If the lifecycle policy also needs a rollover phase, you must additionally create an index alias, as sketched below.
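
A minimal sketch of the alias bootstrap for rollover, with illustrative k8s-logs names: create the first index with a write alias, then point the template's index.lifecycle.rollover_alias setting at that alias.

PUT k8s-logs-000001
{
  "aliases": {
    "k8s-logs": { "is_write_index": true }
  }
}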

Logstash deployment

Logstash receives log data, processes it, and pushes it to ES. Deploy Logstash next.

docker pull logstash:7.5.0

Create the configuration directory first:

mkdir -p /data/logstash/config

Start command:

docker run -d -p 5044:5044 -p 9600:9600 \
  -v /data/logstash/config/:/usr/share/logstash/config/ \
  --name logstash logstash:7.5.0

Four configuration files need to be placed in this directory: es.conf, log4j2.properties, logstash.yml, and pipelines.yml.

The core of Logstash is filtering the collected logs. In the example below, each raw log line is parsed into structured fields by the grok filter and then written to ES. The configuration is as follows:

es.conf

input {
  beats {
    port => 5044
    codec => plain { charset => "UTF-8" }
  }
}
filter {
  grok {
    match => [
      "message", "\s{0,5}%{TIMESTAMP_ISO8601:logTime}\s{0,10}\|.*\|%{LOGLEVEL:logLevel}\s{0,3}\|%{GREEDYDATA:logMessage}",
      "message", "\s{0,3}%{TIMESTAMP_ISO8601:logTime}\s{0,10}%{LOGLEVEL:logLevel}\s{0,10}%{GREEDYDATA:logMessage}"
    ]
    overwrite => ["message"]
  }
}
output {
  elasticsearch {
    action => "index"
    hosts => ["10.90.x.x:9200", "10.90.x.x:9200", "10.90.x.x:9200"]
    index => "k8s-%{index}-%{+YYYY.MM.dd}"
    codec => plain { format => "%{message}" charset => "UTF-8" }
    user => "elastic"
    password => "Enter the password"
  }
}

log4j2.properties

logger.elasticsearchoutput.name = logstash.outputs.elasticsearch
logger.elasticsearchoutput.level = debug

logstash.yml

http.host: "0.0.0.0"

http.port: 9600

xpack.monitoring.elasticsearch.hosts: ["http://10.90.x.x:9200", "http://10.90.x.x:9200", "http://10.90.x.x:9200"]

xpack.monitoring.elasticsearch.username: "elastic"

xpack.monitoring.elasticsearch.password: "Enter the password you set"


pipelines.yml

- pipeline.id: my-logstash

  path.config: "/usr/share/logstash/config/*.conf"

  pipeline.workers: 3


After deployment, verify by accessing the monitoring API address:

http://10.90.x.x:9600
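
For example, from a shell (using the placeholder address above; these are standard Logstash monitoring API endpoints):

curl 'http://10.90.x.x:9600/?pretty'            # basic node info
curl 'http://10.90.x.x:9600/_node/stats?pretty' # pipeline and JVM stats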

Other notes:

Monitoring configuration

# whether to enable monitoring

xpack.monitoring.enabled: true

# the initial user must be this built-in account

xpack.monitoring.elasticsearch.username: "logstash_system"

# the password set in ES for the logstash_system user

xpack.monitoring.elasticsearch.password: "Enter the password you set"

# the ES cluster addresses

xpack.monitoring.elasticsearch.hosts: ["http://10.90.x.x:9200", "http://10.90.x.x:9200", "http://10.90.x.x:9200"]


Configuration and Concepts

A field named @timestamp is generated automatically. By default it stores the time at which Logstash received the message/event, which is not necessarily the time the log line was actually produced.

To store the real log time, have Logstash parse it out of the message and write it into @timestamp; the match rule can also be customized, as sketched below.
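
A minimal sketch using the date filter, assuming the logTime field extracted by the grok pattern above is ISO8601-formatted:

filter {
  date {
    # parse logTime and write it into @timestamp
    match => ["logTime", "ISO8601"]
    target => "@timestamp"
  }
}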

Directory layout reference (useful when planning container volume mappings): https://segmentfault.com/a/1190000015242897

Every Logstash filter plugin supports four common options: add_tag, remove_tag, add_field, and remove_field. They take effect only when the filter matches successfully, as in the sketch below.
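
For example, a sketch that tags events the grok filter parsed successfully (the tag and field names here are illustrative):

filter {
  grok {
    match => ["message", "%{TIMESTAMP_ISO8601:logTime} %{GREEDYDATA:logMessage}"]
    # only applied when the pattern matches
    add_tag => ["parsed"]
    add_field => { "log_source" => "k8s" }
  }
}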

Cerebro, a front-end visualization tool

Cerebro is another tool for managing ES; depending on personal preference, it can be used as a backup to Kibana.

docker run -d -p 9000:9000 \
  -v /data/cerebro/application.conf:/opt/cerebro/conf/application.conf \
  --name es-cerebro lmenezes/cerebro:latest