• ES configuration

Elasticsearch overview

Elasticsearch is an open-source distributed search engine that collects, analyzes, and stores data. Its features include a distributed architecture, zero configuration, automatic discovery, automatic index sharding, an index replication mechanism, a RESTful interface, multiple data sources, and automatic search load balancing.

Elasticsearch installation

Elasticsearch installation package: elasticsearch-6.7.2.tar.gz, downloaded from https://www.elastic.co/cn/downloads/
(1) Upload the installation package to a Linux server.
(2) Unzip the installation package:

[root@master1 ~]# tar -zxvf elasticsearch-6.7.2.tar.gz

Elasticsearch configuration

The elasticsearch.yml file is used to configure the Elasticsearch service, covering settings such as cluster membership, file paths, and network options.

When configuring the cluster nodes, make sure the firewall is disabled on each host first.

Edit the configuration file on 192.168.1.100 (this server is the master):

[root@master1 ~]# vim /usr/elasticsearch/elasticsearch-6.7.2/config/elasticsearch.yml
cluster.name: my-es-cluster                 # cluster name (value illustrative), must be identical on every node
node.name: master-1                         # node name
node.master: true                           # whether this node can act as the master node
node.data: false                            # whether this node stores data
network.host: 0.0.0.0                       # listen address (default: 0.0.0.0)
transport.tcp.port: 9300                    # internal communication port of the ES cluster
bootstrap.memory_lock: true                 # lock memory so ES does not swap
discovery.seed_hosts: ["192.168.2.100", "192.168.2.101", "192.168.2.102", "192.168.2.103", "192.168.2.104"]
cluster.initial_master_nodes: ["master-1"]  # nodes eligible to become the initial master

Error: elasticsearch.yml failed to configure node.name. The cause is actually a missing space! In a YAML file there must be a space between the colon and the property value, otherwise a property error is reported.
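For example (illustrative):

node.name:master-1    # wrong: no space after the colon, so parsing fails
node.name: master-1   # correct: a space separates the key from the value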

  1. memory locking requested for elasticsearch process but memory is not locked

If swapping occurs on the system, ES node performance will be very poor and node stability will also be affected, so swapping should be avoided at all costs. Swapping can cause Java GC cycles to slow from milliseconds to minutes and, more seriously, can cause nodes to respond slowly or even drop out of the cluster. It is therefore best to limit the memory usage of Elasticsearch and minimize the use of swap.

  1. The error above is produced after enabling bootstrap.memory_lock: true in /etc/elasticsearch/elasticsearch.yml. If you keep it enabled, modify the system configuration files as follows.
  2. Modify the file /etc/security/limits.conf, adding the following lines at the end:
    * soft nofile 65536
    * hard nofile 65536
    * soft nproc 32000
    * hard nproc 32000
    * hard memlock unlimited
    * soft memlock unlimited
  3. Modify the following contents in the /etc/systemd/system.conf file:
    • DefaultLimitNOFILE=65536
    • DefaultLimitNPROC=32000
    • DefaultLimitMEMLOCK=infinity

After that, restart the system. No error is reported when Elasticsearch is restarted.

reboot -n           # sync data and restart
./elasticsearch -d  # start Elasticsearch in the background
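To confirm that memory locking actually took effect after the reboot (assuming the cluster is reachable on port 9200), one way is:

ulimit -l                                                               # should print "unlimited"
curl -s 'http://localhost:9200/_nodes?filter_path=**.mlockall&pretty'   # each node should report "mlockall": true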

3. max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]

Because ES is built on Lucene, and Lucene's power comes from how well it uses OS memory to cache index data for fast query performance. Lucene's index segment files are stored as individual immutable files, which makes it OS-friendly to keep them in the cache for quick access. It therefore makes sense to leave half of the physical memory to Lucene, and the other half to ES itself (the JVM heap). The following principles can be followed when setting ES memory:

1. When the machine has less than 64 GB of memory, follow the general rule: 50% to the ES heap and 50% to Lucene.
2. When the machine has more than 64 GB of memory, follow these rules:
  a. If full-text retrieval is the main usage scenario, it is recommended to allocate 4 to 32 GB to the ES heap and leave the remaining memory to the operating system for use by Lucene (segments cache), which provides faster query performance.
  b. If the main usage scenario is aggregation or sorting, and the field types are mostly numerics, dates, geo_points, and not_analyzed strings, it is recommended to allocate 4 to 32 GB to the ES heap and leave the rest to the operating system for use by Lucene (doc values cache), which provides fast document-based clustering and sorting performance.
  c. If the usage scenario is aggregation or sorting on analyzed character data, more heap is needed. It is recommended to run multiple ES instances on the machine, each with an ES heap of no more than 50% of memory and no more than 32 GB (below 32 GB the JVM can use compressed object pointers to save space), leaving more than 50% to Lucene. See the sketch below.
For more ES performance tuning parameters, see: https://www.jianshu.com/p/532b540d4c46
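As a concrete illustration of the 32 GB ceiling: on a machine with 64 GB of memory following rule 1, the heap would be set in config/jvm.options to just under 32 GB (values illustrative):

-Xms31g    # initial heap size
-Xmx31g    # maximum heap size; keep Xms and Xmx equal, and below 32 GB so compressed pointers stay enabled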

Solutions:

Add vm.max_map_count=262144 to /etc/sysctl.conf, then run sysctl -p to make the change permanent. Reference values for larger memory: 4 GB -> 4194304, 8 GB -> 8388608, i.e. vm.max_map_count=4194304 or vm.max_map_count=8388608.
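For a one-off change that takes effect immediately but does not survive a reboot, sysctl can also be invoked directly:

sysctl -w vm.max_map_count=262144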

Slave node settings:

Configuration details:

[root@node1 ~]# vim /usr/elasticsearch/elasticsearch-6.7.2/config/elasticsearch.yml
cluster.name: my-es-cluster             # same cluster name as on the master
node.master: false                      # whether this node can act as the master node
node.data: true                         # whether this node stores data
network.host: 0.0.0.0                   # listen address
http.port: 9200                         # external access port of the ES cluster
transport.tcp.port: 9300                # internal communication port of the ES cluster
bootstrap.memory_lock: true             # lock memory so ES does not use the swap partition
discovery.seed_hosts: ["192.168.2.100", "192.168.2.101", "192.168.2.102", "192.168.2.103", "192.168.2.104"]
discovery.zen.minimum_master_nodes: 3   # to avoid split brain: at least (master-eligible nodes / 2) + 1

View the cluster status by entering http://10.2.4.72:9200/_cluster/health?pretty in a browser.
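A healthy cluster returns a response along these lines (abridged; the values depend on the cluster):

{
  "cluster_name" : "my-es-cluster",
  "status" : "green",
  "number_of_nodes" : 5,
  "number_of_data_nodes" : 4,
  "active_primary_shards" : 1,
  "unassigned_shards" : 0
}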

Setting the number of shards in ES

<index name>/_settings (the number of shards must be set when the index is created)

{
  "index": {
    "number_of_shards": 1
  }
}
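A sketch of doing this with curl at index-creation time (the index name "applog" is hypothetical):

curl -X PUT 'http://localhost:9200/applog' -H 'Content-Type: application/json' -d '
{
  "settings": { "index": { "number_of_shards": 1 } }
}'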

Setting the number of replicas in ES (the default is 1 replica, and it can be changed after the index is created)

<index name>/_settings

{
  "index": {
    "number_of_replicas": 1
  }
}
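The same change with curl, applied to the existing index (again, the index name is hypothetical):

curl -X PUT 'http://localhost:9200/applog/_settings' -H 'Content-Type: application/json' -d '
{
  "index": { "number_of_replicas": 1 }
}'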

  • Logstash configuration

Decompress the installation package

tar -zxvf logstash-7.8.0.tar.gz

2. Logstash configuration

1. Create a .conf file; mine is logstash.conf. Modify the Logstash configuration: the input section must use the same port as in Filebeat (5044 by default).
2. The filter section applies different preprocessing and parsing to different log contents; filetype is the custom field defined in the Filebeat configuration below.
3. The output section specifies the IP address of the target ES that receives the data and the time-based format of the index name the logs are written to (see the sketch below).
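A minimal sketch of such a logstash.conf, assuming the custom filetype field described above; the grok pattern, ES address, and index name are illustrative:

input {
  beats {
    port => 5044                                # must match the port configured in Filebeat
  }
}

filter {
  # parse different logs differently, keyed on the custom "filetype" field set in Filebeat
  if [fields][filetype] == "applog" {
    grok {
      match => { "message" => "%{TIMESTAMP_ISO8601:logtime} %{LOGLEVEL:level} %{GREEDYDATA:msg}" }
    }
  }
}

output {
  elasticsearch {
    hosts => ["http://192.168.2.100:9200"]      # IP of the target ES node (assumed)
    index => "applog-%{+YYYY.MM.dd}"            # time-based index name
  }
}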

Logstash Start command

nohup ./bin/logstash -f conf/logstash.conf &
  • Filebeat configuration

Reference documentation blog.csdn.net/a464057216/…

Decompress the installation package

tar -zxvf filebeat-7.8.0-linux-x86_64.tar.gz

Configure filebeat.yml. Multiple logs can be monitored, and Filebeat outputs to the specified Logstash; make sure the port number is the same as the one configured in Logstash (see the sketch below).
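A minimal filebeat.yml sketch; the log paths, filetype values, and Logstash address are placeholders that must match what the Logstash configuration expects:

filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - /var/log/app1/*.log            # multiple logs can be monitored
    fields:
      filetype: applog                 # custom field referenced in the Logstash filter
  - type: log
    enabled: true
    paths:
      - /var/log/app2/*.log
    fields:
      filetype: otherlog

output.logstash:
  hosts: ["192.168.2.100:5044"]        # must match the Logstash beats input port (address assumed)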

Filebeat start command

nohup ./filebeat -e -c filebeat.yml -d "publish" > nohup.out 2>&1 &

Note: after nohup executes successfully in the shell, press any key to return to the shell prompt, then type exit to leave the terminal. If you close the terminal window directly after nohup succeeds, the corresponding session is broken, the processes started by nohup are told to shut down along with it, and the program will no longer run in the background once the terminal is closed. Use exit to log out instead.

  • Kibana configuration

Decompress the installation package

tar -zxvf /home/wyk/kibana-7.7.0-linux-x86_64.tar.gz

Edit the configuration file (config/kibana.yml)

server.port: 5601
server.host: "0.0.0.0"
elasticsearch.hosts: ["http://localhost:9200"]
#elasticsearch.username: "kibana"
#elasticsearch.password: "123456"

Start Kibana

chmod +x kibana    # grant execute permission
nohup ./kibana &   # run in the background

In Kibana, you can see that the two indexes have been created and the different logs have been written.