This is the 11th day of my participation in the Gengwen Challenge.
Setting up an ES Cluster
Download
Download 7.11.1 here
cd /usr/local
mkdir es
cd es
mkdir logs
mkdir data
yum install perl-Digest-SHA
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.11.1-linux-x86_64.tar.gz
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.11.1-linux-x86_64.tar.gz.sha512
shasum -a 512 -c elasticsearch-7.11.1-linux-x86_64.tar.gz.sha512
tar -xzf elasticsearch-7.11.1-linux-x86_64.tar.gz
cd elasticsearch-7.11.1/
Adding an ES User
A dedicated user must be added because ES cannot be started as root.
useradd -u 80 es
passwd es
chown -R es:es /usr/local/es
su - es
Configure es for server 1
Here cluster.name is the name of the cluster and node.name is the name of the node. path.data is the data storage path and path.logs is the log storage path. network.publish_host must be configured, otherwise the node cannot establish connections with the other nodes; it is the address the other cluster members use to communicate with this node. discovery.seed_hosts lists the addresses of the cluster nodes, and cluster.initial_master_nodes names the initial master-eligible nodes. http.cors.enabled and http.cors.allow-origin are cross-origin (CORS) settings; without them the es-head plugin cannot be used normally.
cluster.name: myes
node.name: es1
path.data: /usr/local/es/data
path.logs: /usr/local/es/logs
network.host: 0.0.0.0
http.port: 9200
network.publish_host: 192.168.0.120
discovery.seed_hosts: ["192.168.0.120", "192.168.0.121", "192.168.0.122"]
cluster.initial_master_nodes: ["es1"]
http.cors.enabled: true
http.cors.allow-origin: "*"
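These settings go in config/elasticsearch.yml under the extracted directory. A minimal sketch for server 1, assuming the extraction path /usr/local/es/elasticsearch-7.11.1 from the download steps (servers 2 and 3 differ only in node.name and network.publish_host):
cd /usr/local/es/elasticsearch-7.11.1
cp config/elasticsearch.yml config/elasticsearch.yml.bak   # keep a backup of the shipped file
vi config/elasticsearch.yml                                # paste the settings listed above
grep -Ev '^\s*(#|$)' config/elasticsearch.yml              # quick check: show only the active settings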
Configure es for server 2
cluster.name: myes
node.name: es2
path.data: /usr/local/es/data
path.logs: /usr/local/es/logs
network.host: 0.0.0.0
http.port: 9200
network.publish_host: 192.168.0.121
discovery.seed_hosts: ["192.168.0.120", "192.168.0.121", "192.168.0.122"]
cluster.initial_master_nodes: ["es1"]
http.cors.enabled: true
http.cors.allow-origin: "*"
Configure es for server 3
cluster.name: myes
node.name: es3
path.data: /usr/local/es/data
path.logs: /usr/local/es/logs
network.host: 0.0.0.0
http.port: 9200
network.publish_host: 192.168.0.122
discovery.seed_hosts: ["192.168.0.120", "192.168.0.121", "192.168.0.122"]
cluster.initial_master_nodes: ["es1"]
http.cors.enabled: true
http.cors.allow-origin: "*"
Modify file permissions
Grant permissions to the ES user for all files in the folder.
chown -R es:es /usr/local/es
Configuring the Server
An error occurred before the server was configured:
max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [655360]
Solution:
Add vm.max_map_count=655360 to the /etc/sysctl.conf file and run the sysctl -p command to make the changes take effect.
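A minimal sketch of that fix, run as root:
echo 'vm.max_map_count=655360' >> /etc/sysctl.conf
sysctl -p                 # reload; should print vm.max_map_count = 655360
sysctl vm.max_map_count   # verify the new value is active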
Start
In the bin directory, run ./elasticsearch -d on each of the three servers in turn; -d starts the process in the background, as sketched below.
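A minimal sketch for one node, assuming the extraction path used earlier (the main log file is named after the cluster):
su - es                               # ES refuses to start as root
cd /usr/local/es/elasticsearch-7.11.1
./bin/elasticsearch -d                # -d runs the process in the background
tail -f /usr/local/es/logs/myes.log   # watch the log while the node starts and joins the cluster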
Check whether the cluster is successful
http://IP:9200/_cluster/health?pretty
The cluster shown in my own test was named es-dev; following this guide it would be myes.
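For example, from any machine that can reach a node (sample output abridged; status should be green once all three nodes have joined):
curl 'http://192.168.0.120:9200/_cluster/health?pretty'
# {
#   "cluster_name" : "myes",
#   "status" : "green",
#   "number_of_nodes" : 3,
#   ...
# }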
Enable x-pack and use SSL authentication
After this feature is enabled, ES must be accessed with a password; it can no longer be accessed directly. Run ./bin/elasticsearch-certutil cert to generate the elastic-certificates.p12 file, place it in the config folder on each of the three servers, and then run:
./bin/elasticsearch-keystore create
./bin/elasticsearch-keystore add xpack.security.transport.ssl.keystore.secure_password
./bin/elasticsearch-keystore add xpack.security.transport.ssl.truststore.secure_password
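For reference, a minimal sketch of generating and distributing the certificate mentioned above, done before the keystore commands (run once on server 1; the scp paths are assumptions based on the install layout used here):
cd /usr/local/es/elasticsearch-7.11.1
./bin/elasticsearch-certutil cert                 # accept the default output name elastic-certificates.p12
mv elastic-certificates.p12 config/               # the file is written to the ES home directory by default; adjust if yours lands elsewhere
scp config/elastic-certificates.p12 192.168.0.121:/usr/local/es/elasticsearch-7.11.1/config/
scp config/elastic-certificates.p12 192.168.0.122:/usr/local/es/elasticsearch-7.11.1/config/
chown es:es config/elastic-certificates.p12       # repeat the chown on each server after copying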
Add the following to the ES configuration file:
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: elastic-certificates.p12
The complete YML file then becomes:
cluster.name: myes
node.name: es1
path.data: /usr/local/es/data
path.logs: /usr/local/es/logs
network.host: 0.0.0.0
http.port: 9200
network.publish_host: 192.168.0.120
discovery.seed_hosts: ["192.168.0.120", "192.168.0.121", "192.168.0.122"]
cluster.initial_master_nodes: ["es1"]
http.cors.enabled: true
http.cors.allow-origin: "*"
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: elastic-certificates.p12
Restart and revalidate the cluster
Problem reproduction
The following commands were not executed after the elastic-certificates.p12 file was generated:
./bin/elasticsearch-keystore create
./bin/elasticsearch-keystore add xpack.security.transport.ssl.keystore.secure_password
./bin/elasticsearch-keystore add xpack.security.transport.ssl.truststore.secure_password
Startup problem
uncaught exception in thread [main]
ElasticsearchSecurityException[failed to load SSL configuration [xpack.security.transport.ssl]]; nested: ElasticsearchException[failed to initialize SSL TrustManager]; nested: IOException[keystore password was incorrect]; nested: UnrecoverableKeyException[failed to decrypt safe contents entry: javax.crypto.BadPaddingException: Given final block not properly padded. Such issues can arise if a bad key is used during decryption.];
Likely root cause: java.security.UnrecoverableKeyException: failed to decrypt safe contents entry: javax.crypto.BadPaddingException: Given final block not properly padded. Such issues can arise if a bad key is used during decryption.
at java.base/sun.security.pkcs12.PKCS12KeyStore.engineLoad(PKCS12KeyStore.java:2103)
at java.base/sun.security.util.KeyStoreDelegator.engineLoad(KeyStoreDelegator.java:220)
at java.base/java.security.KeyStore.load(KeyStore.java:1472)
at org.elasticsearch.xpack.core.ssl.TrustConfig.getStore(TrustConfig.java:98)
at org.elasticsearch.xpack.core.ssl.StoreTrustConfig.createTrustManager(StoreTrustConfig.java:66)
at org.elasticsearch.xpack.core.ssl.SSLService.createSslContext(SSLService.java:438)
at java.base/java.util.HashMap.computeIfAbsent(HashMap.java:1224)
at org.elasticsearch.xpack.core.ssl.SSLService.lambda$loadSSLConfigurations$5(SSLService.java:527)
at java.base/java.util.HashMap.forEach(HashMap.java:1425)
at java.base/java.util.Collections$UnmodifiableMap.forEach(Collections.java:1521)
at org.elasticsearch.xpack.core.ssl.SSLService.loadSSLConfigurations(SSLService.java:525)
at org.elasticsearch.xpack.core.ssl.SSLService.<init>(SSLService.java:143)
at org.elasticsearch.xpack.core.XPackPlugin.createSSLService(XPackPlugin.java:458)
at org.elasticsearch.xpack.core.XPackPlugin.createComponents(XPackPlugin.java:290)
at org.elasticsearch.node.Node.lambda$new$16(Node.java:560)
at java.base/java.util.stream.ReferencePipeline$7$1.accept(ReferencePipeline.java:271)
at java.base/java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1625)
at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:484)
at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:474)
at java.base/java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:913)
at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
at java.base/java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:578)
at org.elasticsearch.node.Node.<init>(Node.java:564)
at org.elasticsearch.node.Node.<init>(Node.java:278)
at org.elasticsearch.bootstrap.Bootstrap$5.<init>(Bootstrap.java:216)
at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:216)
at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:387)
at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:159)
at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:150)
at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:75)
<<<truncated>>>
For complete error details, refer to the log at /data/es/logs/es-dev.log
After the preceding commands are executed, ES starts normally.
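With security on, the built-in user passwords still need to be set before curl or Kibana can authenticate. A minimal sketch, run once against the running cluster from any node (interactive mode prompts for a password for each built-in user, including elastic and the kibana user referenced below):
cd /usr/local/es/elasticsearch-7.11.1
./bin/elasticsearch-setup-passwords interactive
curl -u elastic 'http://192.168.0.120:9200/_cluster/health?pretty'   # prompts for the elastic password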
Install Kibana
Kibana can call the ES APIs to operate on ES and also provides visualization; it is easy to use.
Download and install
curl -O https://artifacts.elastic.co/downloads/kibana/kibana-7.11.1-linux-x86_64.tar.gz
tar -xzf kibana-7.11.1-linux-x86_64.tar.gz
cd kibana-7.11.1-linux-x86_64/
Modifying a Configuration File
server.port: 5601
server.host: "0.0.0.0"
elasticsearch.hosts: ["http://192.168.0.120:9200", "http://192.168.0.121:9200", "http://192.168.0.122:9200"]
If x-pack security is enabled on ES, the credentials also need to be configured:
elasticsearch.username: "kibana"
elasticsearch.password: "password"
Start
chown -R es:es /usr/local/es
./bin/kibana >/dev/null 2>&1 &   # run as the es user
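A quick check that Kibana is up; it can take a minute to start, and /api/status is Kibana's status endpoint (the IP is an assumption matching the servers above):
curl -s -o /dev/null -w '%{http_code}\n' http://192.168.0.120:5601/api/status
# 200 means Kibana is serving; with x-pack security enabled you may see 302 or 401 until you log in through the browser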
Install es-head
es-head makes it easy to view cluster information and document data through a web interface. Remember to configure CORS on ES (the http.cors.* settings above); otherwise es-head cannot be used normally.
yum module install nodejs/development
yum install git
git clone git://github.com/mobz/elasticsearch-head.git
cd elasticsearch-head
npm install
chown -R es:es /usr/local/es
Configuration
Modify Gruntfile.js in the es-head directory and add a hostname attribute. You can also change the default connection address once and for all, so that the cluster address opens by default; see the sketch below.
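A minimal sketch of those two edits, assuming the standard layout of the es-head repository (exact line contents can differ between versions):
grep -n 'options' Gruntfile.js          # locate the connect.server.options block
# inside connect.server.options add:   hostname: '*',   so es-head listens on all interfaces
grep -n 'localhost:9200' _site/app.js   # locate the default connection address
# change http://localhost:9200 to e.g. http://192.168.0.120:9200 so the cluster address opens by default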
Start
npm run start
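es-head listens on port 9100 by default; a quick check (the IP assumes es-head was installed on server 1):
curl -s -o /dev/null -w '%{http_code}\n' http://192.168.0.120:9100/   # expect 200, then open the same URL in a browser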