There are three ZooKeeper installation modes:
- Standalone mode: There is only one ZooKeeper service
- Pseudo cluster mode: Multiple ZooKeeper services in a single node
- Cluster mode: Multiple ZooKeeper servers
1 Standalone (Standalone mode) installation
Download ZooKeeper from the official site's releases page: zookeeper.apache.org/releases.ht…
Be sure to choose a stable release; this article uses version 3.4.14 on a CentOS system.
1.1 Downloading the Installation Package
Enter the following command:
wget https://archive.apache.org/dist/zookeeper/zookeeper-3.4.14/zookeeper-3.4.14.tar.gz
1.2 Decompressing the installation package
tar zxvf apache-zookeeper-3.4.14.tar.gz
Once the decompression is complete, move the decompression package to /usr:
mv apache-zookeeper-3.4.14 /usr/
Then rename apache-zookeeper-3.4.14 to zookeeper-3.4.14 (mv /usr/apache-zookeeper-3.4.14 /usr/zookeeper-3.4.14).
The directory structure of ZooKeeper is as follows:
[root@instance-e5cf5719 zookeeper-3.4.14]# ls
bin        data             ivy.xml      logs        README.md         zookeeper-3.4.14.jar.sha1  zookeeper-docs            zookeeper-recipes
build.xml  dist-maven       lib          NOTICE.txt  README_packaging  zookeeper-3.4.14.jar.asc   zookeeper-client          zookeeper-it       zookeeper-server
conf       ivysettings.xml  LICENSE.txt  pom.xml     src               zookeeper-3.4.14.jar       zookeeper-3.4.14.jar.md5  zookeeper-contrib  zookeeper-jute
- bin directory: ZooKeeper's executable scripts, including the server process, the client, and other tools. The .sh files are the Linux scripts and the .cmd files are the Windows scripts.
- conf directory: configuration files. zoo_sample.cfg is an example configuration file that you copy and rename, usually to zoo.cfg. log4j.properties is the log configuration file.
1.3 Setting up zoo.cfg
Go to the /usr/zookeeper-3.4.14/conf directory. zoo_sample.cfg is an example configuration file that you copy and modify as your own:
cp zoo_sample.cfg zoo.cfg
Take a look at the zoo.cfg file:
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/tmp/zookeeper
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
It looks complicated, but there are only a few lines after the comments are removed:
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/tmp/zookeeper
clientPort=2181
- tickTime=2000: the tick time, i.e. the heartbeat interval. The default is 2000 milliseconds, so a heartbeat fires every two seconds. tickTime is the basic time unit used to maintain heartbeats between client and server, and between servers; a heartbeat is sent every tickTime.
- The heartbeat serves two roles:
  - Monitoring the working status of each machine.
  - Controlling the communication timing between followers and the leader; by default the minimum session timeout is twice the heartbeat interval, i.e. 2 * tickTime.
- initLimit=10: during startup, followers synchronize all the latest data from the leader to establish the initial state from which they can serve clients. The leader allows followers to complete this work within initLimit ticks; the default is 10, i.e. 10 * tickTime. Normally you do not need to change this, but as the amount of data managed by the cluster grows, followers take longer to synchronize from the leader at startup and may not finish within the default window, in which case the value should be increased.
- syncLimit=5: the maximum delay allowed for heartbeat checks between the leader and followers. In a cluster, the leader heartbeats all followers to check whether they are alive. The default is 5, i.e. 5 * tickTime.
- dataDir=/tmp/zookeeper: the default directory where the ZooKeeper server stores snapshot files. Files in /tmp may be deleted automatically and lost, so you should change this to a dedicated directory.
- clientPort=2181: the port clients use to connect to the ZooKeeper server. ZooKeeper listens on this port for client access requests.
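These settings compose as simple multiples of tickTime. As a quick sanity check, the sketch below derives the effective windows in milliseconds from the default values above:

```shell
# All limits are expressed in ticks, so the effective windows are
# simple multiples of tickTime (values from the default zoo.cfg).
tickTime=2000
initLimit=10
syncLimit=5
echo "min session timeout: $((2 * tickTime)) ms"         # 4000 ms
echo "initLimit window:    $((initLimit * tickTime)) ms" # 20000 ms
echo "syncLimit window:    $((syncLimit * tickTime)) ms" # 10000 ms
```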
A friendly reminder: learn to read the official documentation to get first-hand information. Although it is in English, the vocabulary and grammar are simple and easy to follow. The official site describes these settings as follows:
- tickTime : the basic time unit in milliseconds used by ZooKeeper. It is used to do heartbeats and the minimum session timeout will be twice the tickTime.
- dataDir : the location to store the in-memory database snapshots and, unless specified otherwise, the transaction log of updates to the database.
- clientPort : the port to listen for client connections
Create data and logs directories under zookeeper-3.4.14:
[root@instance-e5cf5719 zookeeper-3.4.14]# mkdir data
[root@instance-e5cf5719 zookeeper-3.4.14]# mkdir logs
The official documentation makes the same point: in production, ZooKeeper runs for long periods, and its storage (dataDir and logs) must be managed externally in dedicated locations. The data folder stores the in-memory database snapshots; in cluster mode it also holds the myid file.
For long running production systems ZooKeeper storage must be managed externally (dataDir and logs).
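The mkdir steps and the configuration change can be scripted. The sketch below is illustrative only: /tmp/zk-sketch stands in for the real /usr/zookeeper-3.4.14 install, and a stub zoo_sample.cfg stands in for the file shipped with the release.

```shell
# Sketch: create dedicated data/logs directories and point zoo.cfg at them.
ZK_HOME=/tmp/zk-sketch                       # hypothetical install location
mkdir -p "$ZK_HOME/conf" "$ZK_HOME/data" "$ZK_HOME/logs"
# stub standing in for the shipped zoo_sample.cfg
printf 'tickTime=2000\ndataDir=/tmp/zookeeper\nclientPort=2181\n' \
  > "$ZK_HOME/conf/zoo_sample.cfg"
cp "$ZK_HOME/conf/zoo_sample.cfg" "$ZK_HOME/conf/zoo.cfg"
# point the snapshot directory at the dedicated data folder...
sed -i "s|^dataDir=.*|dataDir=$ZK_HOME/data|" "$ZK_HOME/conf/zoo.cfg"
# ...and send transaction logs to the dedicated logs folder
echo "dataLogDir=$ZK_HOME/logs" >> "$ZK_HOME/conf/zoo.cfg"
```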
The modified zoo.cfg is as follows:
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/usr/zookeeper-3.4.14/data
dataLogDir=/usr/zookeeper-3.4.14/logs
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
1.4 Starting ZooKeeper
Go to the bin directory of ZooKeeper.
[root@instance-e5cf5719 zookeeper-3.4.14]# cd bin/
[root@instance-e5cf5719 bin]# ls
README.txt zkCleanup.sh zkCli.cmd zkCli.sh zkEnv.cmd zkEnv.sh zkServer.cmd zkServer.sh zkTxnLogToolkit.cmd zkTxnLogToolkit.sh zookeeper.out
- zkCleanup.sh: cleans up historical ZooKeeper data, including transaction log files and snapshot files
- zkCli.sh: the command-line client used to connect to the ZooKeeper server
- zkEnv.sh: sets environment variables
- zkServer.sh: starts and manages the ZooKeeper server
Start the ZooKeeper:
./zkServer.sh start
If ZooKeeper started successfully, you can check its status:
./zkServer.sh status
The status information is displayed. You can also run help to see all available subcommands:
[root@instance-e5cf5719 bin]# ./zkServer.sh help
ZooKeeper JMX enabled by default
Using config: /usr/zookeeper-3.4.14/bin/../conf/zoo.cfg
Usage: ./zkServer.sh {start|start-foreground|stop|restart|status|upgrade|print-cmd}
- start: starts the ZooKeeper server in the background
- start-foreground: starts the server in the foreground
- stop: stops the server
- restart: restarts the server
- status: reports the server status
- upgrade: upgrades the server
- print-cmd: prints the ZooKeeper launch command line and its parameters
1.5 Connecting the ZooKeeper Client
To connect:
./zkCli.sh -server 127.0.0.1:2181
namely
./zkCli.sh -server <ip>:<port>
Once connected, you can run help to list the available commands:
[zk: 127.0.0.1:2181(CONNECTED) 0] help
ZooKeeper -server host:port cmd args
stat path [watch]
set path data [version]
ls path [watch]
delquota [-n|-b] path
ls2 path [watch]
setAcl path acl
setquota -n|-b val path
history
redo cmdno
printwatches on|off
delete path [version]
sync path
listquota path
rmr path
get path [watch]
create [-s] [-e] path data acl
addauth scheme auth
quit
getAcl path
close
connect host:port
| Command | Description |
| --- | --- |
| help | Displays all operation commands |
| stat | Checks node status, i.e. whether a node exists |
| set | Updates node data |
| get | Obtains node data |
| ls path [watch] | Views the children of the given znode |
| create | Creates a node; -s for sequential, -e for ephemeral (disappears on restart or timeout) |
| delete | Deletes a node |
| rmr | Deletes a node recursively |
You can test the commands by creating a new znode (run create /zk_test my_data), which attaches the string "my_data" to it:
[zk: 127.0.0.1:2181(CONNECTED) 1] create /zk_test my_data
Created /zk_test
[zk: 127.0.0.1:2181(CONNECTED) 2] ls /
[zookeeper, zk_test]
You can see that zk_test was created successfully. Use the get command to see the data in the zk_test node:
[zk: 127.0.0.1:2181(CONNECTED) 3] get /zk_test
my_data
cZxid = 0x7
ctime = Thu Dec 05 16:32:20 CST 2019
mZxid = 0x7
mtime = Thu Dec 05 16:32:20 CST 2019
pZxid = 0x7
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 7
numChildren = 0
You can use set to modify the data in zk_test:
[zk: 127.0.0.1:2181(CONNECTED) 4] set /zk_test junk
cZxid = 0x7
ctime = Thu Dec 05 16:32:20 CST 2019
mZxid = 0x8
mtime = Thu Dec 05 16:37:03 CST 2019
pZxid = 0x7
cversion = 0
dataVersion = 1
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 4
numChildren = 0
[zk: 127.0.0.1:2181(CONNECTED) 5] get /zk_test
junk
cZxid = 0x7
ctime = Thu Dec 05 16:32:20 CST 2019
mZxid = 0x8
mtime = Thu Dec 05 16:37:03 CST 2019
pZxid = 0x7
cversion = 0
dataVersion = 1
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 4
numChildren = 0
You can delete a node with delete:
[zk: 127.0.0.1:2181(CONNECTED) 6] delete /zk_test
[zk: 127.0.0.1:2181(CONNECTED) 7] ls /
[zookeeper]
2 Pseudo-cluster setup
We will set up three ZooKeeper instances to form a pseudo-cluster. Make two copies of zookeeper-3.4.14, named zookeeper-3.4.14-1 and zookeeper-3.4.14-2:
[root@instance-e5cf5719 usr]# cp -r zookeeper-3.4.14 zookeeper-3.4.14-1
[root@instance-e5cf5719 usr]# cp -r zookeeper-3.4.14 zookeeper-3.4.14-2
At this point the three ZooKeeper directories are identical. To build a pseudo-cluster, modify each instance's conf/zoo.cfg: the port numbers, the data/log paths, and the cluster configuration. The cluster configuration lines have the form:
server.<myid>=<IP>:<Port1>:<Port2>
- myid: the node ID, an integer from 1 to 255; it must be unique within the cluster.
- IP: the node's IP address, e.g. 127.0.0.1 or localhost in a local environment.
- Port1: the port the leader and followers use for heartbeats and data synchronization.
- Port2: the port used for voting during leader election.
In a pseudo-cluster, the instances share the same IP address, so each ZooKeeper instance must be assigned different port numbers.
Create a myid file in each instance's data directory; it contains only the server number (1, 2, or 3).
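The per-instance edits above can be sketched as one loop. This is illustrative only: /tmp/zk-pseudo stands in for the real install directories, and the port assignments are one reasonable choice, not taken from a real deployment.

```shell
# Sketch: generate zoo.cfg and myid for a three-node pseudo-cluster.
# All instances share 127.0.0.1, so clientPort and both cluster ports
# must differ per instance.
BASE=/tmp/zk-pseudo    # hypothetical location for the sketch
for i in 1 2 3; do
  mkdir -p "$BASE/node$i/data"
  cat > "$BASE/node$i/zoo.cfg" <<EOF
tickTime=2000
initLimit=10
syncLimit=5
dataDir=$BASE/node$i/data
clientPort=218$i
server.1=127.0.0.1:2888:3888
server.2=127.0.0.1:2889:3889
server.3=127.0.0.1:2890:3890
EOF
  echo "$i" > "$BASE/node$i/data/myid"   # must match the server.<myid> line
done
```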
Start the three ZooKeeper services (open three terminal windows, one per instance).
The results are as follows:
- zookeeper-3.4.14
[root@instance-e5cf5719 bin]# ./zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /usr/zookeeper-3.4.14/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[root@instance-e5cf5719 bin]# ./zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/zookeeper-3.4.14/bin/../conf/zoo.cfg
Mode: follower
- zookeeper-3.4.14-1
[root@instance-e5cf5719 bin]# ./zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /usr/zookeeper-3.4.14-1/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[root@instance-e5cf5719 bin]# ./zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/zookeeper-3.4.14-1/bin/../conf/zoo.cfg
Mode: leader
- zookeeper-3.4.14-2
[root@instance-e5cf5719 bin]# ./zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /usr/zookeeper-3.4.14-2/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[root@instance-e5cf5719 bin]# ./zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/zookeeper-3.4.14-2/bin/../conf/zoo.cfg
Mode: follower
zookeeper-3.4.14-1 is the leader; zookeeper-3.4.14 and zookeeper-3.4.14-2 are followers.
You can refer to the architecture diagram on the official website to help understand.
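To see every instance's role in one pass, a small status sweep can help. This is a sketch: the paths follow this article's layout, the function name is made up, and it falls back to a note when an instance is down.

```shell
# Sketch: print the Mode line (leader/follower) for each pseudo-cluster
# instance; prints "(not running)" if an instance is stopped or missing.
check_modes() {
  for d in zookeeper-3.4.14 zookeeper-3.4.14-1 zookeeper-3.4.14-2; do
    echo "== $d =="
    "/usr/$d/bin/zkServer.sh" status 2>/dev/null | grep "Mode:" \
      || echo "(not running)"
  done
}
check_modes
```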
Stop zookeeper-3.4.14-1 to observe the leader election.
[root@instance-e5cf5719 bin]# ./zkServer.sh stop
ZooKeeper JMX enabled by default
Using config: /usr/zookeeper-3.4.14-1/bin/../conf/zoo.cfg
Stopping zookeeper ... STOPPED
Check the status of zookeeper-3.4.14 and zookeeper-3.4.14-2 respectively.
[root@instance-e5cf5719 bin]# ./zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/zookeeper-3.4.14/bin/../conf/zoo.cfg
Mode: follower
[root@instance-e5cf5719 bin]# ./zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/zookeeper-3.4.14-2/bin/../conf/zoo.cfg
Mode: leader
You can see that zookeeper-3.4.14-2 has become the leader.
3 Cluster deployment
Cluster mode is similar to the pseudo-cluster, except that the ZooKeeper servers run on different machines, whereas the pseudo-cluster runs them all on one machine. When modifying conf/zoo.cfg, the port numbers do not need to differ, because the machines have different IP addresses. Apart from this, the setup is exactly the same as for the pseudo-cluster, so it is not repeated here.
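For reference, a cluster-mode configuration section might look like the following (the IP addresses are illustrative). The ports can be identical across servers because each server has its own IP:

```
server.1=192.168.1.101:2888:3888
server.2=192.168.1.102:2888:3888
server.3=192.168.1.103:2888:3888
```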
4 Summary
So far, we have set up ZooKeeper in standalone, pseudo-cluster, and cluster environments. To ensure high availability of ZooKeeper in production, you must use a cluster environment.