Installation
Download address: archive.apache.org/dist/zookee…
Zookeeper can be deployed in single-machine deployment mode, cluster deployment mode, or pseudo cluster deployment mode.
In general, stand-alone mode is used for local testing, cluster mode is used in production, and pseudo-cluster mode is used for learning how a ZooKeeper cluster works.
Stand-alone mode
Single-machine mode means that only one ZooKeeper service is started on one machine. This mode is easy to configure but has a single point of failure, so it is suitable only for local debugging.
Deployment steps:
- Decompress the ZooKeeper package.
- Go to the zookeeper-3.4.6 directory and create a data directory.
- Go to the conf directory and copy zoo_sample.cfg to zoo.cfg.
- Edit zoo.cfg and modify the dataDir property:
dataDir=../data
- Go to ZooKeeper's bin directory and double-click zkServer.cmd (on Windows) to start the ZooKeeper service.
When the log prints the following line, the service is up:
binding to port 0.0.0.0/0.0.0.0:2181
Cluster mode
To get a reliable ZooKeeper service, users should deploy ZooKeeper on a cluster. As long as a majority of the machines in the cluster are up, the ZooKeeper service as a whole remains available. It is also best to use an odd number of machines; for example, a five-machine cluster can tolerate two machine failures.
- Ports
  - 2181: client connection port (the same as in single-machine mode)
  - 2888: port on which followers connect to the leader
  - 3888: port used for leader election
- myid
  - Identifies the current server within the cluster (1 to 255)
Deployment steps:
Cluster deployment is similar to single-machine deployment, except that ZooKeeper must be deployed on multiple machines and the configuration adjusted accordingly. The following uses three servers as an example:
- On each machine, follow the deployment procedure for single-machine mode.
- Create a file named myid in each machine's dataDir (the property defined in zoo.cfg) and write a different number (1~255) into it on each of the three machines, for example:
1st machine (data/myid):
1
2nd machine (data/myid):
2
3rd machine (data/myid):
3
Note: the contents of myid correspond to the server.x entries in zoo.cfg.
- Modify the configuration file (zoo.cfg) on each machine. The complete configuration is as follows:
# The basic time unit, in milliseconds, used for heartbeats
tickTime=2000
# Maximum number of heartbeats (tickTimes) the followers may take to make the initial connection to the leader
initLimit=10
# Maximum number of heartbeats (tickTimes) allowed between a request and a reply between followers and the leader
syncLimit=5
# Location where the in-memory database snapshots are stored
dataDir=../data
# Port on which to listen for client connections
clientPort=2181
# Format: server.id=host:port1:port2
# One entry per machine in the cluster
# id is the machine's myid
# host is the machine's IP address
# The first port is the one followers use to connect to the leader
# The second port is used for leader election
server.1=host1:2888:3888
server.2=host2:2888:3888
server.3=host3:2888:3888
- Start ZooKeeper on each of the three machines.
Pseudo-cluster mode
Pseudo-cluster mode simply simulates a ZooKeeper cluster on a single machine. The ports must therefore be changed to avoid conflicts.
Deployment steps:
- Decompress the ZooKeeper package (zookeeper-3.4.6.tar.gz) and make three copies of it, named zookeeper01, zookeeper02, and zookeeper03 for convenience.
- Go to each of the three installation directories (zookeeper01, zookeeper02, and zookeeper03) and create a data directory.
- Go to each of the three conf directories and copy zoo_sample.cfg to zoo.cfg.
- Modify the three configuration files separately. **The only difference among the three configuration files is the clientPort property; everything else is identical.** The contents are as follows:
zookeeper01/conf/zoo.cfg
# The basic time unit, in milliseconds, used for heartbeats
tickTime=2000
# Maximum number of heartbeats (tickTimes) the followers may take to make the initial connection to the leader
initLimit=10
# Maximum number of heartbeats (tickTimes) allowed between a request and a reply between followers and the leader
syncLimit=5
# Location where the in-memory database snapshots are stored
dataDir=../data
# Port on which to listen for client connections
clientPort=12181
# Format: server.id=host:port1:port2
# One entry per machine in the cluster
# id is the machine's myid
# host is the machine's IP address
# The first port is the one followers use to connect to the leader
# The second port is used for leader election
server.1=127.0.0.1:12888:13888
server.2=127.0.0.1:22888:23888
server.3=127.0.0.1:32888:33888
zookeeper02/conf/zoo.cfg
# The basic time unit, in milliseconds, used for heartbeats
tickTime=2000
# Maximum number of heartbeats (tickTimes) the followers may take to make the initial connection to the leader
initLimit=10
# Maximum number of heartbeats (tickTimes) allowed between a request and a reply between followers and the leader
syncLimit=5
# Location where the in-memory database snapshots are stored
dataDir=../data
# Port on which to listen for client connections
clientPort=22181
# Format: server.id=host:port1:port2
# One entry per machine in the cluster
# id is the machine's myid
# host is the machine's IP address
# The first port is the one followers use to connect to the leader
# The second port is used for leader election
server.1=127.0.0.1:12888:13888
server.2=127.0.0.1:22888:23888
server.3=127.0.0.1:32888:33888
zookeeper03/conf/zoo.cfg
# The basic time unit, in milliseconds, used for heartbeats
tickTime=2000
# Maximum number of heartbeats (tickTimes) the followers may take to make the initial connection to the leader
initLimit=10
# Maximum number of heartbeats (tickTimes) allowed between a request and a reply between followers and the leader
syncLimit=5
# Location where the in-memory database snapshots are stored
dataDir=../data
# Port on which to listen for client connections
clientPort=32181
# Format: server.id=host:port1:port2
# One entry per machine in the cluster
# id is the machine's myid
# host is the machine's IP address
# The first port is the one followers use to connect to the leader
# The second port is used for leader election
server.1=127.0.0.1:12888:13888
server.2=127.0.0.1:22888:23888
server.3=127.0.0.1:32888:33888
- Go to each of the three ZooKeeper data directories and create a file named myid. The three files contain three different numbers (1~255) that correspond to the server.x entries in zoo.cfg, marking the different ZooKeeper nodes.
- Start the three ZooKeeper services.
ZooKeeper commands
Start the client: in the installation directory, double-click zkCli.cmd; you can then enter commands.
- Query all commands: help
- Query nodes: ls. You must enter ls /; typing ls alone has no effect, and ls \ does not work either. The / here is a path, not arbitrary input: ls must always be followed by a path.
- Create a node: create path "value", e.g. create /app1 "hello"
- Create a sequential (ordered) node: create -s
- Create an ephemeral (temporary) node: create -e. Close the client and open it again, and the ephemeral node (app3 in the demo) has disappeared (not immediately; it takes some time)
- Create sequential ephemeral nodes: create -e -s
- Query a node: get path. This returns the node's data and its status information, in the following format:
cZxid = 0x4454                         # zxid of the transaction that created the node (0x indicates a hexadecimal number)
ctime = Thu Jan 01 08:00:00 CST 1970   # creation time
mZxid = 0x4454                         # zxid of the last modification
mtime = Thu Jan 01 08:00:00 CST 1970   # last modification time
pZxid = 0x4454                         # zxid of the last change to this node's children
cversion = 5                           # change number of the child nodes
dataVersion = 0                        # change number of the node's data
aclVersion = 0                         # change number of the access control list (ACL)
ephemeralOwner = 0x0                   # session id of the owner if ephemeral; 0 if not a temporary node
dataLength = 13                        # length of the node's data
numChildren = 1                        # number of child nodes
cZxid is the zxid of the transaction that created the znode (zxid = ZooKeeper Transaction Id).
ZooKeeper assigns a globally unique id (zxid) to each update transaction; a smaller value indicates that the update was executed earlier.
- Delete a node: delete. A node that still has children cannot be deleted this way
- Delete nodes recursively: rmr
ZooKeeper Java client libraries
- **Native Java API** (not recommended)
ZooKeeper's native Java API is provided in the org.apache.zookeeper package.
zookeeper-3.x.jar (there are several versions) is the official Java API.
- **Apache Curator** (recommended)
Apache Curator is a Java client library for Apache ZooKeeper.
The goal of the Curator project is to simplify the use of the ZooKeeper client.
For example, with the native API in the earlier code demonstrations, we had to handle ConnectionLossException ourselves.
In addition, Curator provides high-quality implementations of common distributed coordination services. Originally developed by Netflix and later donated to the Apache Foundation, Curator is currently a top-level Apache project.
- **ZkClient** (not recommended)
ZkClient is an open-source ZooKeeper client on GitHub, developed by Datameer engineers Stefan Groschupf and Peter Voss. zkclient-x.x.jar is an open-source Java client that extends the native API.
The following sections describe how to create, delete, modify, and query ZooKeeper nodes using Curator.
Creating a Client
The CuratorFramework class is the entry point for manipulating ZooKeeper nodes. Key points:
- CuratorFramework uses a fluent-style interface.
- CuratorFramework instances are created with CuratorFrameworkFactory, which provides both factory methods and a builder.
- CuratorFramework instances are fully thread-safe.
Constructing a CuratorFramework requires a RetryPolicy object, which configures the retry behavior. RetryPolicy has several implementation classes, such as RetryNTimes and ExponentialBackoffRetry:
RetryPolicy retryPolicy = new RetryNTimes(10, 1000);
CuratorFramework client = CuratorFrameworkFactory.newClient("127.0.0.1:2181", retryPolicy);
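As an alternative to newClient, the factory also offers a builder. A minimal sketch is shown below; the timeout values here are illustrative assumptions, not values from the article:

```java
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.retry.ExponentialBackoffRetry;

// Hedged sketch: builder-style construction with explicit timeouts
CuratorFramework client = CuratorFrameworkFactory.builder()
        .connectString("127.0.0.1:2181")
        .sessionTimeoutMs(60000)      // assumed value, for illustration only
        .connectionTimeoutMs(15000)   // assumed value, for illustration only
        .retryPolicy(new ExponentialBackoffRetry(1000, 3))
        .build();
```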
Open and close client connections
client.start();
client.close();
Query nodes
Query child nodes
List<String> stringList = client.getChildren().forPath("/");
System.out.println(stringList);
Query a node's data
byte[] bytes = client.getData().forPath("/b");
System.out.println(new String(bytes));
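To also read the node's status information (the stat fields shown earlier, such as cZxid and dataVersion), Curator can store the Stat while fetching the data. A minimal sketch, assuming the /b node created above exists:

```java
import org.apache.zookeeper.data.Stat;

// Hedged sketch: read a node's data together with its Stat metadata
Stat stat = new Stat();
byte[] data = client.getData().storingStatIn(stat).forPath("/b");
System.out.println(new String(data));
System.out.println("cZxid=" + stat.getCzxid() + ", dataVersion=" + stat.getVersion());
```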
Add a node
Adding a common node
client.create().forPath("/a"); // Create a node
client.create().forPath("/b", "Node b".getBytes()); // Create a node and specify its data
client.create().creatingParentsIfNeeded().forPath("/c/d"); // Create parent nodes if they do not exist
Adding ephemeral and sequential nodes
client.create().withMode(CreateMode.PERSISTENT).forPath("/e"); // persistent (the default)
client.create().withMode(CreateMode.EPHEMERAL).forPath("/f"); // ephemeral
client.create().withMode(CreateMode.PERSISTENT_SEQUENTIAL).forPath("/g"); // persistent sequential
client.create().withMode(CreateMode.PERSISTENT_SEQUENTIAL).forPath("/g"); // persistent sequential
client.create().withMode(CreateMode.PERSISTENT_SEQUENTIAL).forPath("/g"); // persistent sequential
client.create().withMode(CreateMode.EPHEMERAL_SEQUENTIAL).forPath("/h"); // ephemeral sequential
client.create().withMode(CreateMode.EPHEMERAL_SEQUENTIAL).forPath("/h"); // ephemeral sequential
client.create().withMode(CreateMode.EPHEMERAL_SEQUENTIAL).forPath("/h"); // ephemeral sequential
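For sequential nodes, ZooKeeper appends a monotonically increasing suffix to the requested path, and forPath() returns the path that was actually created. A minimal sketch:

```java
// Hedged sketch: forPath() returns the real path of a sequential node,
// e.g. something like /g0000000003 (the exact suffix depends on server state)
String actualPath = client.create()
        .withMode(CreateMode.PERSISTENT_SEQUENTIAL)
        .forPath("/g");
System.out.println(actualPath);
```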
Modify a node: set
Usage: set path value
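The article gives only the shell-style syntax; in Curator the equivalent is setData(). A minimal sketch, assuming the /b node created earlier:

```java
import org.apache.zookeeper.data.Stat;

// Hedged sketch: update the data of an existing node
client.setData().forPath("/b", "new value".getBytes());

// Optionally guard against concurrent modification by passing the expected
// dataVersion; the call fails with a BadVersion error if the node changed
Stat stat = client.checkExists().forPath("/b");
client.setData().withVersion(stat.getVersion()).forPath("/b", "newer value".getBytes());
```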
Remove nodes
client.delete().deletingChildrenIfNeeded().forPath("/g");
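To verify the deletion, checkExists() returns null for a nonexistent path. A minimal sketch, assuming the same /g path used in the delete example:

```java
import org.apache.zookeeper.data.Stat;

// Hedged sketch: checkExists() returns null once the node is gone
Stat stat = client.checkExists().forPath("/g");
System.out.println(stat == null ? "deleted" : "still exists");
```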
ZooKeeper listeners
Curator introduces the concept of a Cache to listen for events on the ZooKeeper server. There are three types of Cache:
- NodeCache listens for changes to a node's data
- PathChildrenCache listens for changes to a node's children
- TreeCache listens for changes to both the node and its children
RetryPolicy retryPolicy = new RetryNTimes(10, 1000);
CuratorFramework client = CuratorFrameworkFactory.newClient("127.0.0.1:2181", retryPolicy);
client.start();

NodeCache nodeCache = new NodeCache(client, "/b");
nodeCache.getListenable().addListener(() -> {
    ChildData data = nodeCache.getCurrentData();
    if (data == null) {
        System.out.println("node has been deleted!");
    } else {
        System.out.println("data:" + new String(data.getData()));
    }
});
nodeCache.start(true);

System.in.read();
client.close();
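The article demonstrates only NodeCache. A minimal sketch of the second type, PathChildrenCache, reusing the same client and the example path /b from above:

```java
import org.apache.curator.framework.recipes.cache.PathChildrenCache;

// Hedged sketch: watch for child additions, updates, and removals under /b
PathChildrenCache childrenCache = new PathChildrenCache(client, "/b", true); // true = cache node data
childrenCache.getListenable().addListener((c, event) -> {
    switch (event.getType()) {
        case CHILD_ADDED:
            System.out.println("added: " + event.getData().getPath());
            break;
        case CHILD_UPDATED:
            System.out.println("updated: " + event.getData().getPath());
            break;
        case CHILD_REMOVED:
            System.out.println("removed: " + event.getData().getPath());
            break;
        default:
            break;
    }
});
childrenCache.start();
```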
ZooKeeper distributed lock
A simple example that simulates updating a user's score.
If multiple threads modify the user's score at the same time, there will be concurrency problems (lost updates), so a lock is needed for control.
UserDao.java
package com.kehao.lock;

public class UserDao {

    private int score = 0;

    /**
     * Simulates fetching the user's score from the database
     * @return the current score
     */
    public int getScoreFromDb() {
        // Simulate a network request taking 1 ms
        try {
            Thread.sleep(1L);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        // Query the data
        return score;
    }

    /**
     * Simulates updating the user's score in the database
     * @param score the new score
     */
    public void updateScore(int score) {
        // Simulate a network request taking 1 ms
        try {
            Thread.sleep(1L);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        this.score = score;
    }
}
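The 1 ms sleeps widen the window between reading and writing the score, which makes lost updates very likely when 100 threads run this read-increment-write cycle without a lock.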
LockDemo.java
package com.kehao.lock;

import org.apache.curator.RetryPolicy;
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.framework.recipes.locks.InterProcessLock;
import org.apache.curator.framework.recipes.locks.InterProcessMutex;
import org.apache.curator.retry.RetryNTimes;

public class LockDemo {

    public static void main(String[] args) throws InterruptedException {
        // Create a retry policy: 10 retries, 1 s apart
        RetryPolicy retryPolicy = new RetryNTimes(10, 1000);
        // Construct the client
        CuratorFramework client = CuratorFrameworkFactory.newClient("localhost:2181", retryPolicy);
        // Start the client
        client.start();
        // Construct the lock
        InterProcessLock lock = new InterProcessMutex(client, "/user/1/update");
        // Construct the UserDao
        UserDao userDao = new UserDao();
        // 100 concurrent updates
        for (int i = 0; i < 100; i++) {
            new Thread(new Runnable() {
                @Override
                public void run() {
                    try {
                        lock.acquire();
                        // Query the current score
                        int score = userDao.getScoreFromDb();
                        // Increase the score by 1
                        score++;
                        // Update the database
                        userDao.updateScore(score);
                    } catch (Exception e) {
                        e.printStackTrace();
                    } finally {
                        try {
                            lock.release();
                        } catch (Exception e) {
                            e.printStackTrace();
                        }
                    }
                }
            }).start();
        }
        // Sleep for 5 seconds to wait for the tasks to complete
        Thread.sleep(5000L);
        // Output the final result
        System.out.println("Done, result:" + userDao.getScoreFromDb());
        // Close the client
        client.close();
    }
}
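With the lock in place, the final output should be 100. If you remove the acquire/release calls, the threads interleave their read-increment-write cycles and the result is typically less than 100.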
Related code: github.com/chenkehao19…