Today’s sharing begins; advice and corrections are welcome!

Preface

This article documents setting up an ELK environment with Docker on an Aliyun 1-core/2 GB CentOS 7.5 cloud host, importing product data from MS SQL Server, and displaying it in Kibana.

For the Docker setup, this article builds on the open-source docker-elk project, which maintains a Docker Compose version of the Elastic Stack, and makes a few simple changes on top of it.

1. Environment preparation

1.1 Docker & Docker Compose

Install Docker and Docker Compose.

Note: on Windows/macOS, Docker Compose is installed together with Docker and does not need to be installed separately. After installing Docker on Linux, it is recommended to complete the post-installation steps (linux-postinstall) first to configure the related permissions.
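For CentOS 7, a minimal install sketch, assuming the official convenience script and a pinned Compose release (adjust the version to your needs):

curl -fsSL https://get.docker.com | sudo sh
sudo systemctl enable --now docker
sudo curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
sudo usermod -aG docker $USER   # the linux-postinstall step: run docker without sudo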

1.2 The docker-elk project

Clone the project to a directory of your choice, in this case /app/docker-elk:
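For example, with git:

git clone https://github.com/deviantony/docker-elk.git /app/docker-elk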

The initial structure of the project (ELK Version 7.13) is as follows:

├── docker-compose.yml
├── elasticsearch
│   ├── config
│   │   └── elasticsearch.yml
│   └── Dockerfile
├── extensions
│   ├── apm-server
│   ├── app-search
│   ├── curator
│   └── logspout
├── kibana
│   ├── config
│   │   └── kibana.yml
│   └── Dockerfile
├── LICENSE
├── logstash
│   ├── config
│   │   └── logstash.yml
│   ├── Dockerfile
│   └── pipeline
│       └── logstash.conf
├── README.md
└── .env

1.3 Docker Compose

Docker Compose is a convenient tool for configuring multiple containers. When containers are run one by one with docker run, many options have to be typed directly on the command line, and retyping all the parameters for every run is inconvenient; Compose records them in a single file.

Note: the .env file at the root of the project defines a variable that specifies the ELK version (latest by default). After changing it, rebuild the images with docker-compose build.

To run the Docker Compose project, switch to the project root directory and run docker-compose up [service name]. If no service name is specified, all services are started.
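For example:

cd /app/docker-elk
docker-compose up                  # start all services in the foreground
docker-compose up -d               # start all services in the background
docker-compose up elasticsearch    # start a single service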

2. Elastic Stack configuration

Perform the following configuration first:

Add vm.max_map_count = 262144 to /etc/sysctl.conf, then run sysctl -p for the change to take effect (or log out of the current terminal and back in).
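In shell form (run as root):

echo "vm.max_map_count = 262144" >> /etc/sysctl.conf
sysctl -p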

In the elasticsearch.yml configuration file, set xpack.license.self_generated.type to basic to disable the paid X-Pack features.

Because this machine has very little memory and containers kept exiting during actual runs, I chose to turn off X-Pack monitoring for all three parts of the Elastic Stack. Open each .yml and modify the corresponding configuration items as follows:

xpack.monitoring.collection.enabled: false

xpack.monitoring.enabled: false

In particular, on an Aliyun server you also need to configure security group rules to open the ports published by Docker; otherwise the containers cannot be reached from the external network. The port rules added for this article cover the ports published by the Compose file, such as 9200 (Elasticsearch) and 5601 (Kibana).

The configuration of each Elastic Stack component is recorded separately below. It should be noted that, as mentioned above, the Compose file greatly simplifies container configuration in Docker, especially in multi-container scenarios. Each container's environment variables can also be set through the environment property of its service in the Compose file. In other words, every setting in this article can be written either into the .yml file in the component's own directory, or into the environment property of the corresponding service in the Compose file.

2.1 Elasticsearch

The Docker-level configuration for Elasticsearch lives in docker-compose.yml, together with some runtime parameters for Elasticsearch itself. In this article, Elasticsearch is configured to write its logs to files persisted on disk, and to allow cross-origin (CORS) access.

2.1.1 Cross-origin access

Start with the cross-origin settings. CORS must be enabled on Elasticsearch so that it can be accessed from the elasticsearch-head plugin.

As mentioned above, this can be configured in either docker-compose.yml or elasticsearch.yml; this article puts it directly in the Compose file. Go to the environment property of the elasticsearch service and add:

http.cors.enabled: "true"

http.cors.allow-origin: "*"

The docker-elk project's Elastic Stack has authentication enabled out of the box, so you also need to add:

http.cors.allow-headers: "Content-Type,Content-Length,Authorization"

http.cors.allow-credentials: "true"
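Put together, a sketch of how these settings sit under the elasticsearch service in docker-compose.yml (the other settings shipped by docker-elk are omitted):

elasticsearch:
  environment:
    http.cors.enabled: "true"
    http.cors.allow-origin: "*"
    http.cors.allow-headers: "Content-Type,Content-Length,Authorization"
    http.cors.allow-credentials: "true"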

At this point, start the Elasticsearch container with docker-compose up elasticsearch, then visit http://yourhost:9200/ and supply the authentication information when prompted.

Note that this article uses the default username and password (elastic / changeme in docker-elk).

2.1.2 Logging

Configuring Elasticsearch's runtime logging is not strictly required. However, several errors came up while deploying Elasticsearch, so this article sets up logging and persists the logs to make troubleshooting easier.

By default, the Elastic Stack sends its runtime logs to the console, so you can see them on the screen; to store logs in files, you need to specify a log path:

path.logs: /usr/share/elasticsearch/logs

The next step is to configure the log4j2.properties file. Create a new file named log4j2.properties under elasticsearch/config; this article copies its contents out of the container. Note that there are two similar-looking files in the container, log4j2.properties and log4j2.file.properties. The difference is that the latter additionally contains the write-logs-to-file configuration, so the latter is the one to copy.
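For example (the container name is an assumption; with Compose v1 defaults it is typically <project>_<service>_1):

docker cp docker-elk_elasticsearch_1:/usr/share/elasticsearch/config/log4j2.file.properties ./elasticsearch/config/log4j2.properties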

Once the file is created, it needs to be bound into the container. Open docker-compose.yml and add the following entry to the volumes of the elasticsearch service:
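The entry is likely along these lines, following docker-elk's long volume syntax:

- type: bind
  source: ./elasticsearch/config/log4j2.properties
  target: /usr/share/elasticsearch/config/log4j2.properties
  read_only: true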

Of course, there are other ways to map configuration into a container; this article uses the approach above, a bind mount. More on bind mounts below.

At this point, start the container and go inside it to check whether the corresponding folder contains log files. If there are none, check the above configuration items one by one; sometimes the cause is a simple typo.
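A quick way to check, assuming the same container name as above:

docker exec -it docker-elk_elasticsearch_1 ls /usr/share/elasticsearch/logs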

Note: you need to restart the container each time you modify a configuration file. If the configuration still does not take effect, stop the container manually with docker stop [container ID] and start it again with, e.g., docker-compose up logstash. For the record, docker-compose down stops and removes the containers (docker-compose down -v also removes the volumes).

2.2 Logstash

The main work in this article is actually in Logstash, so its configuration takes up the largest share.

On importing data from SQL Server into ES:

Generally speaking, the common approaches to this problem are as follows:

Use database triggers: create triggers on the fields or tables you want to listen on

Use database features such as SQL Server CDC or MySQL binlog

Wherever the program modifies data in the DB, also modify the data in the search engine

Use a field such as LastModifyTime in the DB to track changes

2.2.1 Importing Data

First, download the SQL Server JDBC driver package and extract the JAR to the appropriate path.

Then prepare a .conf pipeline file; a template is shown below.
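A minimal sketch of such a file, assuming a hypothetical host, database, credentials, and driver path (real values depend on your environment):

input {
  jdbc {
    jdbc_driver_library => "/usr/share/logstash/drivers/mssql-jdbc-9.2.1.jre8.jar"
    jdbc_driver_class => "com.microsoft.sqlserver.jdbc.SQLServerDriver"
    jdbc_connection_string => "jdbc:sqlserver://yourhost:1433;databaseName=YourDb"
    jdbc_user => "youruser"
    jdbc_password => "yourpassword"
    schedule => "* * * * *"
    statement_filepath => "/usr/share/logstash/config/some.sql"
    use_column_value => true
    tracking_column => "lastmodifytime"
    tracking_column_type => "timestamp"
  }
}
output {
  elasticsearch {
    hosts => "elasticsearch:9200"
    user => "elastic"
    password => "changeme"
    index => "products"
    document_id => "%{id}"
  }
}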

The SQL statement for the JDBC input plugin above is supplied through statement_filepath, so the corresponding some.sql file also needs to be bind-mounted into the container.
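A hypothetical some.sql that tracks changes by a LastModifyTime field (:sql_last_value is substituted by the plugin with the last tracked value):

SELECT * FROM Table1 WHERE LastModifyTime > :sql_last_value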

Here is another version of the .conf file, from the official documentation:

It periodically queries table Table1 and tracks changes based on the time of the last query; the initial value for the first query is ‘1970-01-01 08:00’. For details, see the JDBC documentation mentioned above. The filter here is used to rename fields.

Run Elasticsearch and Logstash, then open the head plugin to see the corresponding index and documents.

2.2.2 Logging

As with Elasticsearch, this article configures Logstash's logging to help track down problems.

Writing logs to files is configured the same way as for Elasticsearch: specify the path of the logs folder.

Optionally, you can also specify the log level and format; the default level is INFO. Add the following three lines to logstash.yml:

log.level: info

path.logs: /usr/share/logstash/logs

log.format: plain

Logstash also needs a log4j2.properties file; if none is specified, the default one is used. Note that the default file in the image is the Docker version, which is much simpler than the normal version. Here is the normal version of the Logstash (version 7.12) log4j2.properties:

The log4j2.properties file also needs to be bind-mounted.
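Presumably another entry under the logstash service's volumes, mirroring the Elasticsearch one:

- type: bind
  source: ./logstash/config/log4j2.properties
  target: /usr/share/logstash/config/log4j2.properties
  read_only: true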

With the above two items configured, run Logstash and the corresponding files appear under logs.

Now persist the logs to the hard disk. The idea here is to use named data volumes; a bind mount would also work.

1. Open docker-compose.yml, find the top-level volumes section at the end of the file, and add the named volumes to it, as shown below:

volumes:
  elasticsearch:
  logstash_logging:
  elasticsearch_logging:

The log data volume for Elasticsearch mentioned above is added here as well.

2. Configure volumes for the logstash and elasticsearch services, as shown below:

- type: volume
  source: elasticsearch_logging
  target: /usr/share/elasticsearch/logs

Here source is the name of the volume just defined, and target is the path configured as path.logs in the .yml.
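The logstash service gets the analogous entry, matching the path.logs value set earlier:

- type: volume
  source: logstash_logging
  target: /usr/share/logstash/logs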

Run docker volume ls and docker volume inspect <volume_name> to check the on-disk paths of the Elasticsearch and Logstash volumes.

As you can see, the Elasticsearch logging volume directory contains files, but the Logstash one does not. Entering the Logstash container and finding the logs folder from before confirms that it is empty.

This is because the Logstash container runs as the logstash user, while the automatically created volume storage directory requires root permission; Elasticsearch, by contrast, runs as root by default. This can also be observed inside the containers. The solution in this article is therefore to start Logstash as root by adding a user: root attribute to the logstash service in the Compose file, as shown below:
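That is, an excerpt of the logstash service in docker-compose.yml:

logstash:
  user: root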

At this point, the basic configuration of Logstash is complete. Next comes an additional configuration: multiple pipelines for Logstash. The reason for configuring multiple pipelines is to ship Logstash's own logs to Kibana, where information such as the SQL statements it executes can be viewed.

Before starting on this additional configuration, a caveat: the Aliyun instance chosen for this article has a fairly low spec, and its containers kept exiting during operation.

2.2.3 Multiple pipelines

Logstash supports multiple pipelines. First, create a new .conf file; this article places it in the logstash pipeline folder. The file used in this article is as follows:
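A hypothetical sketch of such a pipeline, which tails Logstash's own plain-text log file and ships it to Elasticsearch (the file name and index are assumptions):

# comment line; see the note below about the swallowed first character
input {
  file {
    path => "/usr/share/logstash/logs/logstash-plain.log"
    start_position => "beginning"
  }
}
output {
  elasticsearch {
    hosts => "elasticsearch:9200"
    user => "elastic"
    password => "changeme"
    index => "logstash-logging"
  }
}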

This article ran into a problem where the first character of the first line of the new file was swallowed, so the first line is left as a comment.

Then create a pipelines.yml file under the logstash config folder.
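A minimal pipelines.yml sketch, assuming the original pipeline file is logstash.conf and the new one is named logging.conf (the names are placeholders):

- pipeline.id: main
  path.config: "/usr/share/logstash/pipeline/logstash.conf"
- pipeline.id: logging
  path.config: "/usr/share/logstash/pipeline/logging.conf"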

For the record, this article mixed up the .conf and .yml suffixes during configuration, which cost a lot of time.

Similarly, this configuration file needs to be mounted into the container. Add this entry to the logstash service's volumes in the Compose file:
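Again following docker-elk's volume style:

- type: bind
  source: ./logstash/config/pipelines.yml
  target: /usr/share/logstash/config/pipelines.yml
  read_only: true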

Start Logstash, and in the head plugin you can see the logging data synchronized to ES.

2.3 Kibana

Kibana's .yml configuration is as follows:
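A minimal sketch, following docker-elk's defaults plus the language setting (the credentials are the stack defaults):

server.name: kibana
server.host: 0.0.0.0
elasticsearch.hosts: [ "http://elasticsearch:9200" ]
elasticsearch.username: elastic
elasticsearch.password: changeme
i18n.locale: "zh-CN"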

The i18n.locale line sets the Kibana interface language to Chinese. After setup, start Kibana and access port 5601 on the host.

Today's share ends here; corrections and advice are welcome!