This is the 14th day of my participation in the August Text Challenge.


Step 1: Create a regular user

Note: ES cannot be started as the root user; it must be installed and run as a regular user.

Here we use the hadoop user to install the ES service.
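The article does not show the user-creation commands themselves. A minimal sketch, assuming root access; the username matches the prompts used below, and the password is only a placeholder:

```shell
# Create the hadoop user (run as root); the password is only an example.
useradd hadoop
echo "hadoop:hadoop" | chpasswd
# Prepare the directories used later in this article and hand them over.
mkdir -p /opt/bigdata/soft /opt/bigdata/install
chown -R hadoop:hadoop /opt/bigdata
```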

Step 2: Download the tarball, upload it, and unpack it

Download the ES installation package and upload it to /opt/bigdata/soft on the node01 server, then run the following commands as the hadoop user on that server:

[hadoop@node01 ~]$ cd /opt/bigdata/soft/
[hadoop@node01 soft]$ wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.7.0.tar.gz
[hadoop@node01 soft]$ tar -zxf elasticsearch-6.7.0.tar.gz -C /opt/bigdata/install/

Step 3: Modify the configuration file

Modify elasticsearch.yml

On node01, modify the configuration file as the hadoop user:

cd /opt/bigdata/install/elasticsearch-6.7.0/config/
mkdir -p /opt/bigdata/install/elasticsearch-6.7.0/logs/
mkdir -p /opt/bigdata/install/elasticsearch-6.7.0/datas
vim elasticsearch.yml
cluster.name: myes
node.name: node01
path.data: /opt/bigdata/install/elasticsearch-6.7.0/datas
path.logs: /opt/bigdata/install/elasticsearch-6.7.0/logs
network.host: 192.168.52.100
http.port: 9200
discovery.zen.ping.unicast.hosts: ["node01", "node02", "node03"]
bootstrap.system_call_filter: false
bootstrap.memory_lock: false
http.cors.enabled: true
http.cors.allow-origin: "*"

Modify jvm.options

On node01, run the following commands as the hadoop user and size the JVM heap according to each server's memory:

cd /opt/bigdata/install/elasticsearch-6.7.0/config
vim jvm.options

-Xms2g
-Xmx2g
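The 2 GB heap is just this article's value. A common rule of thumb (not from the source) is to give ES about half of the machine's physical RAM, capped at roughly 31 GB so compressed object pointers stay enabled. A quick way to compute that on Linux:

```shell
# Suggest a heap size of half the physical RAM (in whole GB).
# Reads MemTotal (in kB) from /proc/meminfo; Linux-only.
total_kb=$(grep MemTotal /proc/meminfo | awk '{print $2}')
half_gb=$(( total_kb / 1024 / 1024 / 2 ))
echo "suggested heap: -Xms${half_gb}g -Xmx${half_gb}g"
```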

Step 4: Distribute the installation package to other servers

On node01, distribute the installation package to the other servers as the hadoop user:

cd /opt/bigdata/install/
scp -r elasticsearch-6.7.0/ node02:$PWD
scp -r elasticsearch-6.7.0/ node03:$PWD

Step 5: Modify the ES configuration file on node02 and node03

The ES configuration file also needs to be modified on node02 and node03. On node02, run the following commands as the hadoop user:

cd /opt/bigdata/install/elasticsearch-6.7.0/config/
vim elasticsearch.yml
cluster.name: myes
node.name: node02
path.data: /opt/bigdata/install/elasticsearch-6.7.0/datas
path.logs: /opt/bigdata/install/elasticsearch-6.7.0/logs
network.host: 192.168.52.110
http.port: 9200
discovery.zen.ping.unicast.hosts: ["node01", "node02", "node03"]
bootstrap.system_call_filter: false
bootstrap.memory_lock: false
http.cors.enabled: true
http.cors.allow-origin: "*"
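Between the three nodes only node.name and network.host change; the rest of the file is identical, which makes it easy to script the duplication instead of editing by hand. A hypothetical sketch (the variable values match this article's node02, and the file is written to /tmp only for illustration):

```shell
# Stamp out a node-specific elasticsearch.yml from two per-node variables.
NODE=node02
HOST=192.168.52.110
cat > /tmp/elasticsearch.yml <<EOF
cluster.name: myes
node.name: $NODE
path.data: /opt/bigdata/install/elasticsearch-6.7.0/datas
path.logs: /opt/bigdata/install/elasticsearch-6.7.0/logs
network.host: $HOST
http.port: 9200
discovery.zen.ping.unicast.hosts: ["node01", "node02", "node03"]
bootstrap.system_call_filter: false
bootstrap.memory_lock: false
http.cors.enabled: true
http.cors.allow-origin: "*"
EOF
# Confirm the node-specific line came out as expected.
grep "^node.name" /tmp/elasticsearch.yml
```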

On node03, run the following commands as the hadoop user:

cd /opt/bigdata/install/elasticsearch-6.7.0/config/
vim elasticsearch.yml
cluster.name: myes
node.name: node03
path.data: /opt/bigdata/install/elasticsearch-6.7.0/datas
path.logs: /opt/bigdata/install/elasticsearch-6.7.0/logs
network.host: 192.168.52.120
http.port: 9200
discovery.zen.ping.unicast.hosts: ["node01", "node02", "node03"]
bootstrap.system_call_filter: false
bootstrap.memory_lock: false
http.cors.enabled: true
http.cors.allow-origin: "*"

Step 6: Modify the system configuration to resolve startup problems

The ES service is installed and run as a regular user, but it needs more system resources than the default limits allow, including memory, threads, and open files. We therefore need to raise the resource limits for that user.

Solve startup problem 1: the maximum number of open files for a regular user is too low

Error message description:

max file descriptors [4096] for elasticsearch process likely too low, increase to at least [65536]

ES creates many index files and needs to open a large number of file descriptors, so we must raise the Linux limit on open files; otherwise ES will fail to start. On all three machines, run the following command (as a user with sudo privileges) to raise the open-file limit:

sudo vi /etc/security/limits.conf

* soft nofile 65536
* hard nofile 131072
* soft nproc 2048
* hard nproc 4096

Solve startup problem 2: the kernel limits on memory map areas and open files are too low

On all three machines, edit /etc/sysctl.conf to raise the maximum number of memory map areas (vm.max_map_count) and the system-wide open-file limit:

sudo vi /etc/sysctl.conf

vm.max_map_count=655360
fs.file-max=655360

Run the following command to make the changes take effect:

sudo sysctl -p
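To confirm the kernel picked up the new values, read them back directly from /proc (a quick check; on the article's machines both should print 655360):

```shell
# Read the live kernel values back after applying sysctl -p.
cat /proc/sys/vm/max_map_count
cat /proc/sys/fs/file-max
```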

Note: after making the two changes above, be sure to reconnect your session for them to take effect. Close SecureCRT or Xshell, restart the tool, and reconnect to the Linux servers.

After reconnecting, run the following commands. If you see these values, you are ready to start ES:

[hadoop@node01 ~]$ ulimit -Hn
131072
[hadoop@node01 ~]$ ulimit -Sn
65536
[hadoop@node01 ~]$ ulimit -Hu
4096
[hadoop@node01 ~]$ ulimit -Su
4096

Step 7: Start the ES service

Run the following command on all three machines to start the ES service as the hadoop user:

nohup /opt/bigdata/install/elasticsearch-6.7.0/bin/elasticsearch 2>&1 &

After a successful startup, the jps command shows the Elasticsearch process, and you can access the page at:

http://node01:9200/?pretty

The page shows some basic node information once ES has started.
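Beyond this banner page, the usual smoke test is the cluster health endpoint. The sketch below extracts the `status` field; the JSON literal stands in for the real response of `curl -s http://node01:9200/_cluster/health`, and the field values are examples:

```shell
# Sample response standing in for: curl -s http://node01:9200/_cluster/health
health='{"cluster_name":"myes","status":"green","number_of_nodes":3}'
# Pull out the status field; "green" means all shards are allocated.
status=$(echo "$health" | grep -o '"status":"[a-z]*"' | cut -d'"' -f4)
echo "cluster status: $status"
```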

Note: if the service fails to start on any machine, check the error log under /opt/bigdata/install/elasticsearch-6.7.0/logs on that machine.