ELK installation procedure

A simple introduction

  • Elasticsearch: stores log data.
  • Logstash: collects, processes, and forwards log data.
  • Kibana: provides a searchable web interface for visualization.

Preparations: Install the JDK

  • Elasticsearch 7 ships with JDK 11. If no JDK is installed, ES 7 uses the bundled JDK 11.
  • If a JDK is already installed, the installed one is used. A version lower than 11 triggers a warning but does not affect use.
  • Logstash also requires a Java environment, so installing JDK 11 is recommended.
  • The JDK installation process is omitted here; a quick check is shown below.
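
A quick sanity check, assuming the JDK is already on the PATH:

# Print the Java version; Logstash needs a Java runtime, ideally 11
java -version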

Install Elasticsearch

# Download the Elasticsearch installation package
wget https://mirrors.huaweicloud.com/elasticsearch/7.7.0/elasticsearch-7.7.0-linux-x86_64.tar.gz
# Unpack it
tar -xvf elasticsearch-7.7.0-linux-x86_64.tar.gz

Create a dedicated user for Elasticsearch (it refuses to run as root):

# Create a user named haoxy
useradd haoxy
# Set a password for the new user
passwd haoxy
# Give the new user ownership of the installation directory
chown -R haoxy:haoxy /usr/local/elk/elasticsearch-7.7.0
# Adjust the system limits Elasticsearch needs to run
echo 'vm.max_map_count=262144' >> /etc/sysctl.conf
# Apply the sysctl change immediately
sysctl -p
echo 'haoxy hard nofile 65536' >> /etc/security/limits.conf
echo 'haoxy soft nofile 65536' >> /etc/security/limits.conf

Modify the configuration file

# Go to the Elasticsearch config directory
cd elasticsearch-7.7.0/config
vi elasticsearch.yml

# Uncomment these settings and fill in the corresponding values
network.host: <your server IP>
http.port: 9200
discovery.seed_hosts: ["<your server IP>"]
# Uncomment this setting as well; the default node name is node-1
node.name: node-1
cluster.initial_master_nodes: ["node-1"]

Start Elasticsearch

# Switch to the dedicated user
[root@localhost elasticsearch-7.7.0]# su haoxy
# It is recommended not to use -d (daemon mode) for the first startup, since errors are most likely then; once it starts cleanly, add -d
[haoxy@localhost elasticsearch-7.7.0]$ bin/elasticsearch -d

If the startup succeeds, the output log looks something like this:

[2020-09-04T15:16:13.769][INFO ][o.e.d.DiscoveryModule    ] [node-1] using discovery type [zen] and seed hosts providers [settings]
[2020-09-04T15:16:15.269][INFO ][o.e.n.Node               ] [node-1] initialized
[2020-09-04T15:16:15.270][INFO ][o.e.n.Node               ] [node-1] starting ...
[2020-09-04T15:16:15.484][INFO ][o.e.t.TransportService   ] [node-1] publish_address {10.1.56.75:9300}, bound_addresses {10.1.56.75:9300}
[2020-09-04T15:16:15.911][INFO ][o.e.b.BootstrapChecks    ] [node-1] bound or publishing to a non-loopback address, enforcing bootstrap checks
[2020-09-04T15:16:15.944][INFO ][o.e.c.c.Coordinator      ] [node-1] cluster UUID [aY5lgyvbRuqb61LQ8A6hKA]
[2020-09-04T15:16:16.304][INFO ][o.e.c.s.MasterService    ] [node-1] elected-as-master ([1] nodes joined)[{node-1}{ALYnSqgTTM2yRFGOvHhA_A}{zyCxSJGsS8y2ODCmZHJQNA}{10.1.56.75}{10.1.56.75:9300}{dilmrt}{ml.machine_memory=3974909952, xpack.installed=true, transform.node=true, ml.max_open_jobs=20} elect leader, _BECOME_MASTER_TASK_, _FINISH_ELECTION_], term: 2, version: 28, delta: master node changed {previous [], current [{node-1}{ALYnSqgTTM2yRFGOvHhA_A}{zyCxSJGsS8y2ODCmZHJQNA}{10.1.56.75}{10.1.56.75:9300}{dilmrt}{ml.machine_memory=3974909952, xpack.installed=true, transform.node=true, ml.max_open_jobs=20}]}
[2020-09-04T15:16:16.432][INFO ][o.e.c.s.ClusterApplierService] [node-1] master node changed {previous [], current [{node-1}{ALYnSqgTTM2yRFGOvHhA_A}{zyCxSJGsS8y2ODCmZHJQNA}{10.1.56.75}{10.1.56.75:9300}{dilmrt}{ml.machine_memory=3974909952, xpack.installed=true, transform.node=true, ml.max_open_jobs=20}]}, term: 2, version: 28, reason: Publication{term=2, version=28}
[2020-09-04T15:16:16.539][INFO ][o.e.h.AbstractHttpServerTransport] [node-1] publish_address {10.1.56.75:9200}, bound_addresses {10.1.56.75:9200}
[2020-09-04T15:16:16.540][INFO ][o.e.n.Node               ] [node-1] started
[2020-09-04T15:16:16.860][INFO ][o.e.l.LicenseService     ] [node-1] license [091daa59-1347-45db-8b02-b3e8b570315b] mode [basic] - valid
[2020-09-04T15:16:16.862][INFO ][o.e.x.s.s.SecurityStatusChangeListener] [node-1] Active license is now [BASIC]; Security is disabled
[2020-09-04T15:16:16.877][INFO ][o.e.g.GatewayService     ] [node-1] recovered [0] indices into cluster_state

Then open http://<your server IP>:9200 in your browser.
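
You can also check from the command line. A minimal sketch, assuming the server IP used above; the values in the response will differ on your machine:

# Query the Elasticsearch root endpoint
curl http://10.1.56.75:9200
# A healthy node answers with cluster info, roughly:
# {
#   "name" : "node-1",
#   "cluster_name" : "elasticsearch",
#   "version" : { "number" : "7.7.0", ... },
#   "tagline" : "You Know, for Search"
# }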

Elasticsearch is now installed.

Install Logstash

# Download the Logstash installation package
wget https://mirrors.huaweicloud.com/logstash/7.7.0/logstash-7.7.0.tar.gz
# Unpack Logstash
tar -xvf logstash-7.7.0.tar.gz
# Go to the Logstash config folder
cd logstash-7.7.0/config/
# Create the configuration file and add the following
vim logstash.conf

##################################################
input {
  tcp {
    mode => "server"
    host => "10.1.56.75"   # IP of the host Logstash is installed on
    port => 4560           # Port Logstash listens on
    codec => json_lines
  }
}
output {
  elasticsearch {
    hosts => ["10.1.56.75:9200"]   # Elasticsearch IP address and port
    #user => "haoxy"               # Uncomment if authentication is enabled
    #password => "haoxy"
  }
}
##################################################

# Go back to the logstash-7.7.0/ directory and start Logstash in the background; after it starts successfully, press Ctrl+C (Windows) or Control+C (Mac) to get your shell back
cd ..
bin/logstash -f config/logstash.conf &
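
To sanity-check the pipeline before wiring up an application, you can hand-feed one event to the TCP input. A rough sketch, assuming nc (netcat) is installed; the index name depends on your output configuration (the elasticsearch output plugin defaults to logstash-* indices):

# Send a single newline-delimited JSON event to the Logstash TCP input
echo '{"message": "logstash smoke test"}' | nc 10.1.56.75 4560
# List the indices in Elasticsearch; a logstash-* index should appear once the event is flushed
curl 'http://10.1.56.75:9200/_cat/indices?v'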

Install Kibana

# Switch to root and download the Kibana package
wget https://mirrors.huaweicloud.com/kibana/7.7.0/kibana-7.7.0-linux-x86_64.tar.gz
# Unpack it
tar -xvf kibana-7.7.0-linux-x86_64.tar.gz
# Make the folder writable
chmod 777 kibana-7.7.0-linux-x86_64
# Give the user we created earlier ownership of the directory; Kibana cannot be started as root either
chown -R haoxy:haoxy /usr/local/elk/kibana-7.7.0-linux-x86_64
# Go to the Kibana directory
cd kibana-7.7.0-linux-x86_64
# Modify the configuration file
vim ./config/kibana.yml

# Uncomment this line and set the value to the Elasticsearch server address
elasticsearch.hosts: ["http://10.1.56.75:9200"]
# Uncomment this line; the default is localhost. Change it to 0.0.0.0 to allow access from outside the machine
server.host: "0.0.0.0"

# Go to the bin directory to start
cd bin/
# Switch users before starting; root is not allowed here either
su haoxy
# Start in the background; Kibana listens on port 5601 by default
./kibana &

When the startup log shows that Kibana's server is running on port 5601, the startup succeeded.

At this point our ELK stack is complete. Log in to the Kibana web interface at http://10.1.56.75:5601.
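
To check from the shell instead of the browser, Kibana 7.x exposes a status API (adjust the IP to your own):

# Returns Kibana's status as JSON once the server is ready
curl http://10.1.56.75:5601/api/status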

Integrate Spring Boot with the ELK system

Next, set up the Spring Boot project. Add the Logstash encoder dependency to the POM file so the application sends its logs to Logstash; Logstash forwards them to Elasticsearch, and Kibana pulls them from Elasticsearch so we can view them.

 <dependency>
      <groupId>net.logstash.logback</groupId>
      <artifactId>logstash-logback-encoder</artifactId>
      <version>5.3</version>
  </dependency>

Add the log configuration file logback-spring.xml to the resources directory and change the destination to your own Logstash IP and port.


      
<!-- This configuration sends log output to the console and, as JSON, to Logstash -->
<configuration>
    <include resource="org/springframework/boot/logging/logback/defaults.xml"/>

    <springProperty scope="context" name="springAppName"
                    source="spring.application.name"/>
    <springProperty scope="context" name="serverPort"
                    source="server.port"/>

    <!-- Log output location in the project -->
    <property name="LOG_FILE" value="${BUILD_FOLDER:-build}/${springAppName}"/>

    <!-- Console log output pattern -->
    <property name="CONSOLE_LOG_PATTERN"
              value="%clr(%d{yyyy-MM-dd HH:mm:ss.SSS}){faint} %clr(${LOG_LEVEL_PATTERN:-%5p}) %clr(${PID:- }){magenta} %clr(---){faint} %clr([%15.15t]){faint} %m%n${LOG_EXCEPTION_CONVERSION_WORD:-%wEx}"/>

    <!-- Console output -->
    <appender name="console" class="ch.qos.logback.core.ConsoleAppender">
        <filter class="ch.qos.logback.classic.filter.ThresholdFilter">
            <level>INFO</level>
        </filter>
        <!-- Log output encoder -->
        <encoder>
            <pattern>${CONSOLE_LOG_PATTERN}</pattern>
            <charset>utf8</charset>
        </encoder>
    </appender>

    <!-- JSON-formatted appender for Logstash output -->
    <appender name="logstash"
              class="net.logstash.logback.appender.LogstashTcpSocketAppender">
        <destination>10.1.56.75:4560</destination>
        <!-- Log output encoder -->
        <encoder
                class="net.logstash.logback.encoder.LoggingEventCompositeJsonEncoder">
            <providers>
                <timestamp>
                    <timeZone>UTC</timeZone>
                </timestamp>
                <pattern>
                    <pattern>
                        {
                        "severity": "%level",
                        "service": "${springAppName:-}",
                        "port": "${serverPort:-}",
                        "trace": "%X{X-B3-TraceId:-}",
                        "span": "%X{X-B3-SpanId:-}",
                        "exportable": "%X{X-Span-Export:-}",
                        "pid": "${PID:-}",
                        "thread": "%thread",
                        "class": "%logger{40}",
                        "rest": "%message"
                        }
                    </pattern>
                </pattern>
            </providers>
        </encoder>
    </appender>

    <!-- Log output level -->
    <root level="INFO">
        <appender-ref ref="console"/>
        <appender-ref ref="logstash"/>
    </root>
</configuration>
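The logback-spring.xml above reads spring.application.name and server.port through springProperty, so both must be set in the application configuration. A minimal example; the name and port here are placeholders, use your own:

# src/main/resources/application.properties
spring.application.name=elk-demo
server.port=8080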

Write a Controller

import lombok.extern.slf4j.Slf4j;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
@Slf4j
public class ElkController {

    @RequestMapping("elk")
    public String testElk(String params) {

        log.info("Interface input parameter {}", params);

        params = "Hello World";

        log.error("error message {}", params);

        return params;
    }
}

Now access the ElkController; the logs are printed to the console:
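
For example, assuming the application runs locally on port 8080, a request like this triggers both log statements:

# Call the test endpoint
curl "http://localhost:8080/elk?params=hello"
# The console (and Logstash) receive roughly:
#   INFO  ... ElkController : Interface input parameter hello
#   ERROR ... ElkController : error message Hello World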

Then go to Kibana to configure the index information; once an index pattern is created, the logs show up under Discover.

Create an index pattern matching the index Logstash writes to (with the configuration above, the Elasticsearch output plugin uses its default logstash-* indices).

Click Create.

At this point our ELK log analysis system is fully built, and the Spring Boot application's logs are being shipped into it.