Preface

This article covers installing an ElasticSearch cluster and Kibana.

Introduction to ElasticSearch

ElasticSearch is a Lucene-based search server that encapsulates Lucene and provides REST APIs. It is a highly scalable, open-source full-text search and analytics engine for fast storage, search, and analysis of large volumes of data. ElasticSearch features: distributed, highly available, asynchronous writes, multiple APIs, document-oriented. ElasticSearch core concepts: near real-time, cluster, node (stores data), index, shard (a slice of an index), replica (each shard can have multiple copies). ElasticSearch use cases: Wikipedia, Stack Overflow, GitHub, and more.

ElasticSearch cluster installation

First, environmental choice

This cluster installation uses ElasticSearch 6.5.4 and Kibana 6.5.4, and depends on JDK 1.8.

Download address:

ElasticSearch 6.5.4: artifacts.elastic.co/downloads/e…

Kibana 6.5.4: artifacts.elastic.co/downloads/k…

JDK 1.8: www.oracle.com/technetwork…

ElasticSearch nodes can take on several important roles: master node, data node, coordinating (query) node, and ingest node. The master and data roles are the most important ones for ElasticSearch.

ElasticSearch cluster installation table (each machine runs both nodes):

Host          | Nodes                  | TCP / HTTP ports
192.169.0.23  | masternode, datanode1  | 9301/9201, 9300/9200
192.169.0.24  | masternode, datanode1  | 9301/9201, 9300/9200
192.169.0.25  | masternode, datanode1  | 9301/9201, 9300/9200

2. Linux configuration

Before installing ElasticSearch, we need to make some changes to the Linux environment to prevent problems later on!

1. Change the maximum memory limit

Modify the sysctl.conf file

vim /etc/sysctl.conf

Add the following configuration at the end:

vm.max_map_count = 655360
vm.swappiness=1

Then save and exit, and enter the following command for it to take effect:

   sysctl -p

Use the following command to view:

tail -3 /etc/sysctl.conf
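If you are configuring many machines, a small helper can confirm that both settings actually landed in the file. This is an optional sketch; the check_sysctl function is illustrative and not part of the original steps:

```shell
#!/bin/sh
# Succeed only if the given sysctl-style file contains both of the
# settings added above (whitespace around '=' is tolerated).
check_sysctl() {
    file="$1"
    grep -Eq '^vm\.max_map_count[[:space:]]*=[[:space:]]*655360' "$file" &&
    grep -Eq '^vm\.swappiness[[:space:]]*=[[:space:]]*1$' "$file"
}

if check_sysctl /etc/sysctl.conf; then
    echo "sysctl.conf OK"
else
    echo "sysctl.conf missing ElasticSearch settings" >&2
fi
```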

2. Change the maximum number of threads

Modify the 90-nproc.conf file

  vim /etc/security/limits.d/90-nproc.conf 

Note: the exact file name 90-nproc.conf may differ between Linux servers. Check the actual file name under /etc/security/limits.d/ before editing it.

Change the following line:

   *    soft    nproc    2048

to:

   *    soft    nproc    4096

Use the following command to view:

tail -3 /etc/security/limits.d/90-nproc.conf

3. Change the maximum number of open files

Modify the limits.conf file

vim /etc/security/limits.conf

Add the following at the end:

*  hard  nofile  65536
*  soft  nofile  65536

4. Disable the firewall

Note: strictly speaking you do not have to disable the firewall; you could open the required ports instead. It is disabled here purely for convenience of access. Do this on every machine!

CentOS 6:

Check the firewall status: service iptables status
Stop the firewall: service iptables stop
Start the firewall: service iptables start
Restart the firewall: service iptables restart
Permanently disable the firewall: chkconfig iptables off
Permanently re-enable the firewall: chkconfig iptables on

CentOS 7:

Stop the firewall: systemctl stop firewalld
Keep it disabled after reboot: systemctl disable firewalld

JDK installation

1. Document preparation

Decompress the downloaded JDK package, move it to the /opt/java folder (create the folder if it does not exist), and rename the extracted directory to jdk1.8:

tar -xvf jdk-8u144-linux-x64.tar.gz
mv jdk1.8.0_144 /opt/java
cd /opt/java
mv jdk1.8.0_144 jdk1.8

2. Environment configuration

First type java -version to check whether a JDK is already installed; if an unsuitable version is present, uninstall it first:

rpm -qa | grep java 

Check the returned information.

Then type rpm -e --nodeps followed by the JDK package you want to uninstall, for example:

rpm -e --nodeps java-1.7.0-openjdk-1.7.0.99-2.6.5.1.el6.x86_64

Once you’re sure it’s gone, unzip the downloaded JDK

tar  -xvf   jdk-8u144-linux-x64.tar.gz

Move it to the /opt/java folder (create it if it does not exist) and rename the directory to jdk1.8:

mv jdk1.8.0_144 /opt/java
cd /opt/java
mv jdk1.8.0_144 jdk1.8

Then edit the profile file and add the following configuration. Input: vim /etc/profile

export JAVA_HOME=/opt/java/jdk1.8
export JRE_HOME=/opt/java/jdk1.8/jre
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar:$JRE_HOME/lib
export PATH=.:${JAVA_HOME}/bin:$PATH

After successfully adding, type:

source /etc/profile

to make the configuration take effect. Then check the version information by entering:

java  -version 
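Optionally, a small helper can confirm that the reported version is really 1.8. The parse_java_version function below is an illustrative sketch, not part of the original steps:

```shell
#!/bin/sh
# Extract the major.minor version from the first line of `java -version`,
# e.g. 'java version "1.8.0_144"' -> '1.8'.
parse_java_version() {
    printf '%s\n' "$1" | sed -n 's/.*"\([0-9]*\.[0-9]*\)\..*".*/\1/p'
}

line=$(java -version 2>&1 | head -n 1)
if [ "$(parse_java_version "$line")" = "1.8" ]; then
    echo "JDK 1.8 detected"
else
    echo "unexpected JDK version: $line" >&2
fi
```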

Install ElasticSearch

1. Document preparation

Unzip the downloaded ElasticSearch file

Input:

tar -xvf elasticsearch-6.5.4.tar.gz

Then move it to the /opt/elk folder (create it if it does not exist) and rename the folder to masternode.

Input:

mv elasticsearch-6.5.4 /opt/elk
cd /opt/elk
mv elasticsearch-6.5.4 masternode

2. Modify the configuration

Create an elastic user (ElasticSearch cannot be run as root) and give that user ownership of the folder where ElasticSearch resides. The commands are as follows:

adduser elastic
chown -R elastic:elastic  /opt/elk/masternode

Next, create folders under the /home directory to hold the ElasticSearch data and logs, one data and one logs folder per node. Create them as the elastic user just added, so that the permissions come out right: switch to elastic and create the folders.

su elastic    
mkdir  /home/elk
mkdir  /home/elk/masternode
mkdir  /home/elk/masternode/data
mkdir  /home/elk/masternode/logs
mkdir  /home/elk/datanode1
mkdir  /home/elk/datanode1/data
mkdir  /home/elk/datanode1/logs
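The same directory layout can also be created idempotently with mkdir -p. The make_elk_dirs helper below is a small convenience sketch, not from the original article:

```shell
#!/bin/sh
# Create data and log directories for both nodes under the given base;
# -p creates missing parents and is a no-op when the directories exist.
make_elk_dirs() {
    base="$1"
    for node in masternode datanode1; do
        mkdir -p "$base/$node/data" "$base/$node/logs"
    done
}

# As the elastic user:
# make_elk_dirs /home/elk
```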
Configure the master node

After masternode is created, modify its configuration. Once the changes are done, copy the whole directory to datanode1 in the same parent directory, where only a few settings need to differ. The two files to edit are elasticsearch.yml and jvm.options. Note that this is still done as the elastic user!

 cd /opt/elk/
vim masternode/config/elasticsearch.yml
vim masternode/config/jvm.options

The elasticsearch.yml configuration for masternode is as follows:

cluster.name: pancm
node.name: master
path.data: /home/elk/masternode/data
path.logs: /home/elk/masternode/logs
network.host: 0.0.0.0
network.publish_host: 192.169.0.23
transport.tcp.port: 9301
http.port: 9201
discovery.zen.ping.unicast.hosts: ["192.169.0.23:9301","192.169.0.24:9301","192.169.0.25:9301"]
node.master: true
node.data: false
node.ingest: false
index.number_of_shards: 5
index.number_of_replicas: 1
discovery.zen.minimum_master_nodes: 1
bootstrap.memory_lock: true
http.max_content_length: 1024mb

Description of the elasticsearch.yml parameters:

  • cluster.name: the cluster name. It must be identical on every node in the cluster. ES automatically discovers other ES nodes on the same network segment, so if several clusters share one segment, this property tells them apart.

  • node.name: the node name. path.data: the path where index data is stored. path.logs: the path where logs are stored.

  • network.host: the address to bind to, either IPv4 or IPv6. The default is 0.0.0.0.

  • network.publish_host: the address other nodes use to communicate with this node. If not set it is determined automatically; it must be a real IP address.

  • transport.tcp.port: the TCP port for communication between nodes. The default is 9300.

  • http.port: the HTTP port for external requests. The default is 9200.

  • discovery.zen.ping.unicast.hosts: the initial list of master-eligible nodes in the cluster; newly joining nodes discover the cluster through them.

  • node.master: whether this node is eligible to be elected master. The default is true. node.data: whether this node stores index data. The default is true.

  • node.ingest: whether this node runs ingest pipelines. The default is true.

  • index.number_of_shards: the default number of shards per index. The default is 5.

  • index.number_of_replicas: the default number of replicas per index. The default is 1.

  • discovery.zen.minimum_master_nodes: the number of master-eligible nodes a node must be able to see before acting. The default is 1; for a large cluster a larger value (2-4) should be set.

  • bootstrap.memory_lock: set to true to lock the process memory. ES becomes far less efficient once the JVM starts swapping; to make sure it never swaps, set the ES_MIN_MEM and ES_MAX_MEM environment variables to the same value and make sure the machine has enough memory to allocate to ES. On Linux, you can run ulimit -l unlimited so that the elasticsearch process is allowed to lock memory. http.max_content_length: the maximum HTTP request body size. The default is 100MB.

As mentioned above, the ElasticSearch node roles come from combinations of these properties:

  1. node.master: true together with node.data: true means the node is both master-eligible and stores data. If such a node is elected as the actual master it still has to serve data, which puts extra pressure on it. This is the default configuration of every ElasticSearch node and is fine in a test environment, but it is not recommended in practice, because it mixes the master and data roles.

  2. node.master: false with node.data: true means the node is not master-eligible, never participates in elections, and only stores data. Such a node is called a data node. A cluster needs several of them to store data and provide storage and query services.

  3. node.master: true with node.data: false means the node stores no data but is master-eligible, participates in elections, and may become the actual master. Such a node is called a master node.

  4. node.master: false with node.data: false means the node is neither master-eligible nor stores data. It acts as a client node, mainly for load balancing of heavy request traffic.

  5. node.ingest: true means the node runs ingest pipelines and is not responsible for data or cluster management. It preprocesses documents before indexing: it intercepts bulk and index requests, applies transformations, and then passes the documents back to the bulk and index APIs. Users can define a pipeline that specifies a series of processors.
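As an illustration only (the pipeline name rename_ts, the field names, and the index myindex are hypothetical and not part of this article's setup), an ingest pipeline is defined over the REST API and then referenced when indexing:

```shell
# Define a pipeline with a single rename processor (hypothetical example).
curl -X PUT "http://192.169.0.23:9201/_ingest/pipeline/rename_ts" \
     -H 'Content-Type: application/json' -d '
{
  "description": "rename field ts to timestamp before indexing",
  "processors": [
    { "rename": { "field": "ts", "target_field": "timestamp" } }
  ]
}'

# Index a document through the pipeline.
curl -X PUT "http://192.169.0.23:9201/myindex/_doc/1?pipeline=rename_ts" \
     -H 'Content-Type: application/json' -d '{ "ts": "2019-01-01" }'
```

Note that running a pipeline requires at least one node with node.ingest: true, which the cluster configured in this article does not have, so this is for illustration only.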

Example of the elasticsearch.yml file configuration

In jvm.options, set Xms and Xmx to 2G:

-Xms2g
-Xmx2g
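As a rule of thumb (not stated in the original article), the ES heap is usually set to about half of the machine's RAM, capped at 31 GB so compressed object pointers stay enabled. A sketch of that calculation, with an illustrative suggest_heap_gb helper:

```shell
#!/bin/sh
# Suggest an ES heap size in GB: half of total RAM, capped at 31 GB,
# with a floor of 1 GB. Pass total RAM in GB as the argument.
suggest_heap_gb() {
    ram_gb="$1"
    heap=$(( ram_gb / 2 ))
    [ "$heap" -gt 31 ] && heap=31
    [ "$heap" -lt 1 ] && heap=1
    echo "$heap"
}

# Example: on a 16 GB machine, write -Xms8g / -Xmx8g into jvm.options.
echo "-Xms$(suggest_heap_gb 16)g"
echo "-Xmx$(suggest_heap_gb 16)g"
```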

For more information about ElasticSearch, see the ElasticSearch documentation.

Data node configuration

Copy the masternode directory to datanode1, then make the small changes highlighted in the example above. The commands are as follows:

cd /opt/elk/
cp -r  masternode/ datanode1
vim datanode1/config/elasticsearch.yml
vim datanode1/config/jvm.options

The elasticsearch.yml configuration for datanode1 is as follows:

cluster.name: pancm
node.name: data1
path.data: /home/elk/datanode1/data
path.logs: /home/elk/datanode1/logs
network.host: 0.0.0.0
network.publish_host: 192.169.0.23
transport.tcp.port: 9300
http.port: 9200
discovery.zen.ping.unicast.hosts: ["192.169.0.23:9301","192.169.0.24:9301","192.169.0.25:9301"]
node.master: false
node.data: true
node.ingest: false
index.number_of_shards: 5
index.number_of_replicas: 1
discovery.zen.minimum_master_nodes: 1
bootstrap.memory_lock: true
http.max_content_length: 1024mb

In jvm.options, set Xms and Xmx to 8G:

-Xms8g
-Xmx8g

Note: after the configuration is complete, run the ll command to check that masternode and datanode1 are owned by the elastic user. If they are not, run chown -R elastic:elastic followed by the path to fix the ownership.

After the above configuration is complete, repeat the same steps on the other machines, or transfer the files with an FTP tool or the scp command and then apply each machine's specific changes (such as node.name and network.publish_host). scp command examples:

JDK environment transfer:

scp -r /opt/java root@slave1:/opt
scp -r /opt/java root@slave2:/opt

ElasticSearch environment transfer:

scp -r /opt/elk root@slave1:/opt
scp -r /home/elk root@slave1:/home
scp -r /opt/elk root@slave2:/opt
scp -r /home/elk root@slave2:/home

3. Start ElasticSearch

Again, you need to start as the elastic user, for every node on every machine! In the /opt/elk directory, enter:

su elastic    
cd /opt/elk
./masternode/bin/elasticsearch -d
./datanode1/bin/elasticsearch -d

After the startup succeeds, you can check with the jps command, or enter IP:9200 and IP:9201 in a browser. If the version information page is displayed, the installation succeeded!
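The cluster state can also be checked from the shell. Below is a sketch; the es_health helper is illustrative, and the curl target should be any one of your nodes:

```shell
#!/bin/sh
# Extract the "status" field (green/yellow/red) from a
# _cluster/health JSON response read on stdin.
es_health() {
    sed -n 's/.*"status":"\([a-z]*\)".*/\1/p'
}

# Query a node (5 second timeout) and report the cluster status.
status=$(curl -s -m 5 "http://192.169.0.23:9201/_cluster/health" | es_health)
case "$status" in
    green|yellow) echo "cluster is up: $status" ;;
    red)          echo "cluster is red: check shard allocation" >&2 ;;
    *)            echo "no response from cluster" >&2 ;;
esac
```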

4. Kibana installation

Kibana only needs to be installed on one machine, and it can run as the root user. The machine must be able to reach the ElasticSearch servers over the network.

1. Document preparation

Unzip the downloaded kibana-6.5.4-linux-x86_64.tar.gz package. On Linux, enter:

tar -xvf kibana-6.5.4-linux-x86_64.tar.gz

Move it to /opt/elk and rename the folder to kibana6.5.4. Enter:

mv kibana-6.5.4-linux-x86_64 /opt/elk
cd /opt/elk
mv kibana-6.5.4-linux-x86_64 kibana6.5.4

2. Modify the configuration

Go to the folder and modify the kibana.yml configuration file:

cd /opt/elk/kibana6.5.4
vim config/kibana.yml

In the configuration file, change:

server.host: "localhost"

to:

server.host: "192.169.0.23"

Then add a new line; it allows login without a password:

xpack.security.enabled: false

Save and exit!

3. Start Kibana

Start as the root user. In the kibana6.5.4 directory, enter:

nohup ./bin/kibana >/dev/null   2>&1 &

Browser input:

http://IP:5601

5. Troubleshooting common errors

1. max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]

Cause: the virtual memory limit is too small. Solution: raise the limit as described in step 1 of the Linux configuration above.

2. max number of threads [2048] for user [elastic] is too low, increase to at least [4096]

Cause: the thread limit is too low. Solution: raise the maximum number of threads as described in step 2 of the Linux configuration above.

3. max file descriptors [65535] for elasticsearch process likely too low, increase to at least [65536]

Cause: too few open files allowed. Solution: raise the maximum number of open files as described in step 3 of the Linux configuration above.

4. ERROR: bootstrap checks failed

Cause: the memory is not locked (bootstrap.memory_lock: true is set, but the process is not allowed to lock memory). Solution: on the machine reporting the error, allow the elastic user to lock memory, for example by adding elastic soft memlock unlimited and elastic hard memlock unlimited to /etc/security/limits.conf, or set bootstrap.memory_lock: false in elasticsearch.yml.

Other

Install ElasticSearch and Head for Windows:

www.cnblogs.com/xuwujing/p/…

Writing original content is not easy; if you found this useful, please give it a recommendation! Your support is the biggest motivation for my writing! Copyright: www.cnblogs.com/xuwujing, CSDN: blog.csdn.net/qazwsxpcm, personal blog: www.panchengming.com