ZooKeeper synchronizes state between server nodes in distributed systems and can also be used for service discovery; it is widely used across distributed systems.

This article introduces a method to build a Zookeeper cluster.

Like most distributed systems, Zookeeper is suitable for deployment in a cluster of servers with an odd number of nodes.
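
The odd-number recommendation follows from ZooKeeper's majority quorum: a cluster of n nodes stays available only while floor(n/2)+1 nodes are up, so a fourth node tolerates no more failures than three. A quick arithmetic sketch (not ZooKeeper code) makes this concrete:

```shell
# Majority quorum: floor(n/2)+1 nodes must be up for the cluster to serve.
# "tolerates" is how many nodes may fail before the quorum is lost.
for n in 3 4 5; do
  quorum=$(( n / 2 + 1 ))
  echo "nodes=$n quorum=$quorum tolerates=$(( n - quorum ))"
done
```

Four nodes need a quorum of three yet still tolerate only one failure, so even cluster sizes add cost without adding resilience.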

The Zookeeper deployment environment and software requirements are as follows:

  • Local server cluster
  • OpenJDK 8
  • ZooKeeper 3.6.1

This article uses the previously configured local virtual machine cluster. If you need to set up a server cluster, please refer to my previous article on the subject.

Software installation

Installing the software is straightforward. Just unpack ZooKeeper into /opt/module (see my previous article if you have any questions about this) and rename the directory to apache-zookeeper-3.6.1.

$ tar -zxvf apache-zookeeper-3.6.1-bin.tar.gz -C /opt/module/
$ mv /opt/module/apache-zookeeper-3.6.1-bin /opt/module/apache-zookeeper-3.6.1

The cluster configuration

Create a zkData directory inside the apache-zookeeper-3.6.1 directory to store the data generated while the service runs, as well as the cluster configuration:

$ mkdir -p zkData

Go to the conf directory and copy zoo_sample.cfg to zoo.cfg:

$ cd conf
$ cp zoo_sample.cfg zoo.cfg

Then edit the zoo.cfg file and set dataDir to the zkData directory you just created:

$ vi zoo.cfg
dataDir=/opt/module/apache-zookeeper-3.6.1/zkData
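
For reference, after this edit a minimal zoo.cfg for this setup looks roughly like the following; all values other than dataDir are the defaults shipped in zoo_sample.cfg:

```
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/opt/module/apache-zookeeper-3.6.1/zkData
clientPort=2181
```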

Then synchronize the configured software to the other two servers:

$ rsync -rvl /opt/module/apache-zookeeper-3.6.1/ [email protected]:/opt/module/apache-zookeeper-3.6.1
$ rsync -rvl /opt/module/apache-zookeeper-3.6.1/ [email protected]:/opt/module/apache-zookeeper-3.6.1

As a final step, do the following on each of the three machines. For example, on the 192.168.56.3 machine: go to the zkData directory and create a myid file, then fill it with a number. Any number works, as long as it does not duplicate the number on either of the other two machines.

$ cd zkData
$ touch myid
$ echo 3 > myid

Then edit zoo.cfg and add the following lines to the configuration file. Note that the number after server. must match the number in that server's myid file. Each line also gives the machine name and the two ports used for follower connections and leader election:

$ cd conf
$ vi zoo.cfg
server.3=bigdata1:2888:3888
server.4=bigdata2:2888:3888
server.5=bigdata3:2888:3888

bigdata1, bigdata2, and bigdata3 are aliases of the three machines, defined in the /etc/hosts file.
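
For completeness, the /etc/hosts entries would look something like the lines below. The alias-to-IP mapping here is an assumption based on the addresses used earlier in this article; adjust it to your own machines:

```
192.168.56.3 bigdata1
192.168.56.4 bigdata2
192.168.56.5 bigdata3
```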

Edit myid and zoo.cfg on the other two machines as well. The contents of myid must be different on each machine.

At this point, the configuration is complete.

Run the validation

After the configuration is complete, perform the following operations on the three machines:

$ bin/zkServer.sh start

If no errors are reported, the ZooKeeper service should now be running on all three machines. Before startup we cannot tell which server will become the leader and which will be followers; that is decided automatically by an election based on the servers' state at the time.
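
If passwordless SSH is configured between the nodes (as in the cluster-setup article referenced earlier), the per-machine start and status steps can be run from a single node. This is a sketch; the host aliases and install path are the ones assumed throughout this article:

```shell
# Start ZooKeeper on every node, then query each node's role.
# Assumes passwordless ssh and the install path used in this article.
ZK=/opt/module/apache-zookeeper-3.6.1
for h in bigdata1 bigdata2 bigdata3; do
  ssh "$h" "$ZK/bin/zkServer.sh start"
done
for h in bigdata1 bigdata2 bigdata3; do
  echo "--- $h ---"
  ssh "$h" "$ZK/bin/zkServer.sh status"
done
```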

After startup, you can view the machine status:

Bigdata1: you can see that this is a follower node

ZooKeeper JMX enabled by default
Using config: /opt/module/apache-zookeeper-3.6.1/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost.
Mode: follower

Bigdata2: this is also a follower node

ZooKeeper JMX enabled by default
Using config: /opt/module/apache-zookeeper-3.6.1/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost.
Mode: follower

Bigdata3: this is the leader node

ZooKeeper JMX enabled by default
Using config: /opt/module/apache-zookeeper-3.6.1/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost.
Mode: leader

After confirming that the cluster is up, you can connect to it through a client. Besides the command-line client used below, client libraries are available for a variety of programming languages.

$ bin/zkCli.sh

After connecting to the cluster, run ls / to view the root node. By default, it contains only the zookeeper system node:

$ ls /
[zookeeper]

Create a node named /ray with the content rayjun:

$ create /ray "rayjun"

Use the get command to view the content of the /ray node:

$ get /ray
rayjun
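
A few more zkCli commands are handy while experimenting. The short session below assumes the /ray node created above; set updates a node's data and delete removes a childless node:

```
$ set /ray "rayjun2"
$ get /ray
rayjun2
$ delete /ray
$ ls /
[zookeeper]
```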

Text / Rayjun

Follow my WeChat official account to chat about other things.