Preface

Java has synchronized and Lock, and MySQL lets you add locks in SQL. Locking constrains the state of the data so that, under concurrency, it transitions according to the logic we expect. But in a distributed, clustered environment, how do you lock the state of the data?

  • Distributed locking scheme based on the database
  • Distributed lock based on Redis
  • Distributed lock based on ZooKeeper


Distributed locking scheme based on the database

Difficulties of a database-based distributed lock

  • Single point of failure? The database can be deployed with multiple backup replicas
  • No expiration time? Insert an expected validity period each time a lock is acquired. Option A: run a scheduled task that cleans up expired locks at fixed intervals. Option B: when acquiring a lock, check whether the current time is past the lock's validity period to decide whether the existing lock has expired
  • Not reentrant? Store a unique, idempotent identifier field when the lock is acquired. On the next acquisition, check whether the field matches; if it does, the current operation can re-enter the lock (a minimal JDBC sketch of these ideas follows this list)
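A minimal sketch of these ideas with plain JDBC is shown below. The table and column names (distributed_lock, lock_name with a unique index, request_id, expire_time) and the MySQL-flavored SQL are assumptions chosen for illustration; expired rows are assumed to be cleaned up by the scheduled task from option A, and error handling is kept minimal.

import java.sql.*;

public class DbLock {
    public boolean tryLock(Connection conn, String lockName, String requestId, int ttlSeconds) throws SQLException {
        // Reentrancy check: the lock is already held by the same requestId and has not expired
        String check = "SELECT request_id FROM distributed_lock WHERE lock_name = ? AND expire_time > NOW()";
        try (PreparedStatement ps = conn.prepareStatement(check)) {
            ps.setString(1, lockName);
            try (ResultSet rs = ps.executeQuery()) {
                if (rs.next()) {
                    return requestId.equals(rs.getString(1));
                }
            }
        }
        // The unique index on lock_name guarantees that only one client's INSERT succeeds
        String insert = "INSERT INTO distributed_lock (lock_name, request_id, expire_time) "
                + "VALUES (?, ?, DATE_ADD(NOW(), INTERVAL ? SECOND))";
        try (PreparedStatement ps = conn.prepareStatement(insert)) {
            ps.setString(1, lockName);
            ps.setString(2, requestId);
            ps.setInt(3, ttlSeconds);
            ps.executeUpdate();
            return true;
        } catch (SQLIntegrityConstraintViolationException duplicate) {
            return false; // another client already holds the lock
        }
    }

    public void unlock(Connection conn, String lockName, String requestId) throws SQLException {
        // Only the holder (same requestId) may delete the lock row
        String delete = "DELETE FROM distributed_lock WHERE lock_name = ? AND request_id = ?";
        try (PreparedStatement ps = conn.prepareStatement(delete)) {
            ps.setString(1, lockName);
            ps.setString(2, requestId);
            ps.executeUpdate();
        }
    }
}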

Distributed lock based on Redis

  • Redis is a fast, high-performance in-memory service. Compared with the database, implementing the lock in Redis performs better than locking the data directly in the database. It also relieves pressure on the database and reduces transaction blocking caused by lock contention
  • Introduce the Jedis dependency
<dependency>
    <groupId>redis.clients</groupId>
    <artifactId>jedis</artifactId>
    <version>2.9.0</version>
</dependency>

setnx + expire (wrong way)

  • setnx + expire has a deadlock problem. The setnx() method means SET IF NOT EXIST, and the expire() method adds an expiration time to the lock. Since these are two separate Redis commands, the combination is not atomic
Long result = jedis.setnx(lockKey, requestId);
if (result == 1) {
    // If the program crashes right here, the expiration time is never set and a deadlock occurs
    jedis.expire(lockKey, expireTime);
}

Lua script (correct way)

  • Lua script execution in Redis is atomic; the whole script either succeeds or fails
// setnx + expire executed atomically inside a Lua script
String script = "if redis.call('setnx',KEYS[1],ARGV[1]) == 1 then" +
            " redis.call('expire',KEYS[1],ARGV[2]) return 1 else return 0 end";
Object result = jedis.eval(script, Collections.singletonList(lockKey),
            Arrays.asList(requestId, String.valueOf(expireTime)));
if (result.equals(1L)) {
    // Lock acquired successfully
}

set {key} {value} nx ex {seconds} (correct way)

  • This uses the extended parameters of the Redis SET command, and the whole operation is atomic
String lockKey = "the key of the lock";          // fixed key
String requestId = "unique identifier of this lock operation";
int expireTime = 1000;                           // expiration time in milliseconds
String result = jedis.set(lockKey, requestId, "NX", "PX", expireTime);
if ("OK".equals(result)) {
    // Lock acquired successfully
}

Deleting the Redis distributed lock

//-------- wrong way ------------
// Check whether the client unlocking is the same one that locked
if (requestId.equals(jedis.get(lockKey))) {
    // If the lock expires right here and another client acquires it, that client's lock is deleted by mistake
    jedis.del(lockKey);
}
//-------- correct way: use a Lua script ------------
String script = "if redis.call('get', KEYS[1]) == ARGV[1] then return redis.call('del', KEYS[1]) else return 0 end";
Object result = jedis.eval(script, Collections.singletonList(lockKey), Collections.singletonList(requestId));

Distributed lock based on Redlock algorithm

  • The drawback of the Redis distributed lock above is that it only operates on a single Redis node. Even if Redis provides high availability through Sentinel, when the master role moves to a slave for some reason, the lock can be lost, because Redis master/slave replication does not guarantee consistency: the master returns the result first and only then synchronizes the data to the slave
  • For example: the lock is acquired on the Redis master node -> the locked key has not yet been synchronized to the slave node -> the master fails and a failover occurs, promoting the slave to master -> the lock is lost
  • RedLock algorithm implementation steps

1: Obtain the current time, in milliseconds.
2: Request the lock from the N master nodes (here 5) in sequence. The client sets network connection and response timeouts that are much smaller than the lock expiration time (assuming the lock auto-expires after 10 seconds, the timeout is usually between 5 and 50 ms; assume 50 ms here). If a request times out, skip that master node and try the next one as soon as possible.
3: After the lock requests finish, the client computes the time spent acquiring the lock as the current time minus the start time recorded in step 1. The lock is acquired successfully if and only if more than half of the Redis master nodes granted it (N/2+1, here 5/2+1 = 3 nodes) and the time spent is less than the lock expiration time (10 s > 30 ms + 40 ms + 50 ms + 20 ms + 50 ms).
4: If the lock is acquired, its actual validity time equals the lock expiration time minus the time spent acquiring it.
5: If acquisition fails (fewer than N/2+1 master instances granted the lock, or the time spent exceeded the lock expiration time), the client must unlock all master nodes (even those it never managed to lock).

  • Maven dependencies (Redisson)
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-redis</artifactId>
</dependency>
<dependency>
    <groupId>org.redisson</groupId>
    <artifactId>redisson</artifactId>
    <version>3.4.3</version>
</dependency>
  • Code sample
// Initialize one client per Redis master node
Config config = new Config();
config.useSingleServer().setAddress("127.0.0.1:6380").setDatabase(0);
RedissonClient redissonRed1 = Redisson.create(config);
config = new Config();
config.useSingleServer().setAddress("127.0.0.1:6381").setDatabase(0);
RedissonClient redissonRed2 = Redisson.create(config);
config = new Config();
config.useSingleServer().setAddress("127.0.0.1:6382").setDatabase(0);
RedissonClient redissonRed3 = Redisson.create(config);
// Build one RLock per node and combine them into a red lock
String lockKey = "XXX";
RLock rLock1 = redissonRed1.getLock(lockKey);
RLock rLock2 = redissonRed2.getLock(lockKey);
RLock rLock3 = redissonRed3.getLock(lockKey);
RedissonRedLock redLock = new RedissonRedLock(rLock1, rLock2, rLock3);
// Lock
redLock.lock();
// Release
redLock.unlock();
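Besides lock()/unlock(), the wait time and lease time from the steps above can be bounded explicitly with tryLock. A short usage sketch continuing the snippet above (the 500 ms wait and 10 s lease are arbitrary values chosen for illustration):

// Try for at most 500 ms to acquire the red lock; once held, it auto-expires after 10 s
boolean acquired = false;
try {
    acquired = redLock.tryLock(500, 10000, TimeUnit.MILLISECONDS);
    if (acquired) {
        // Business logic protected by the lock
    }
} catch (InterruptedException e) {
    Thread.currentThread().interrupt();
} finally {
    if (acquired) {
        redLock.unlock();
    }
}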

Distributed lock based on ZooKeeper

  • Maven dependency (Curator)
<dependency>
    <groupId>org.apache.curator</groupId>
    <artifactId>curator-recipes</artifactId>
    <version>2.4.1</version>
</dependency>
  • The Redlock algorithm generally needs several independent Redis nodes or clusters, and the more moving parts there are, the more error-prone things become. So how can you implement an efficient and highly available distributed lock? ZooKeeper
  • ZooKeeper characteristics
    • Eventual consistency: the state seen by clients is kept consistent across the ZooKeeper cluster
    • Reliability: the ZooKeeper cluster is simple, robust, and performs well
    • Atomicity: an operation either succeeds or fails; there is no intermediate state
    • Ordering: if message A is published before message B, then A is ordered before B
  • ZooKeeper ephemeral sequential nodes: the lifecycle of an ephemeral node is bound to the client session, so if the session ends the node is removed automatically (which solves automatic expiration for distributed locks). Child nodes cannot be created under an ephemeral node, and in a ZooKeeper cluster only one client can successfully create an ephemeral node on a given path
  • ZooKeeper watcher: a client can register a watcher on a node; when the node's state changes, the watch is triggered and ZooKeeper sends a notification to the client
  • ZooKeeper distributed lock mechanism

1: When ephemeral sequential nodes are created, every thread creates its node successfully, but each node gets a different sequence number. Only the node with the smallest sequence number holds the lock; the other threads only need to watch the node with the next smaller sequence number.
2: Thread X creates an ephemeral sequential node under the lock directory.
3: Thread X lists all child nodes under the lock directory and looks at its sibling nodes. If no node has a smaller sequence number than its own, the current thread has the smallest number and acquires the lock.
4: Thread Y creates its ephemeral node, lists the siblings, and checks whether it is the smallest. Finding that it is not, it sets a watch on the node just smaller than itself (this avoids the herd effect).
5: Thread X finishes its logic and deletes its own node. Thread Y is notified of the change, checks again whether it is now the smallest node, and if so acquires the lock.
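Before the Curator example below, the following is a minimal sketch of this mechanism written against the raw ZooKeeper client, so the steps above map to concrete calls. The parent path and node prefix are assumptions chosen for illustration, and error handling, session management, and reentrancy are omitted.

import java.util.Collections;
import java.util.List;
import java.util.concurrent.CountDownLatch;
import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;
import org.apache.zookeeper.data.Stat;

public class ZkLockSketch {
    // lockDir (e.g. "/distributed/lock") is assumed to already exist
    public String lock(ZooKeeper zk, String lockDir) throws Exception {
        // Steps 1-2: create an ephemeral sequential node; ZooKeeper appends an increasing sequence number
        String myNode = zk.create(lockDir + "/lock-", new byte[0],
                ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL_SEQUENTIAL);
        while (true) {
            // Step 3: list all children and sort them by sequence number
            List<String> children = zk.getChildren(lockDir, false);
            Collections.sort(children);
            String myName = myNode.substring(lockDir.length() + 1);
            int index = children.indexOf(myName);
            if (index == 0) {
                return myNode; // smallest sequence number: lock acquired
            }
            // Step 4: watch only the node immediately before ours, avoiding the herd effect
            String previous = lockDir + "/" + children.get(index - 1);
            CountDownLatch latch = new CountDownLatch(1);
            Stat stat = zk.exists(previous, event -> latch.countDown());
            if (stat != null) {
                latch.await(); // Step 5: when the previous node is deleted, re-check whether we are now the smallest
            }
        }
    }

    public void unlock(ZooKeeper zk, String myNode) throws Exception {
        zk.delete(myNode, -1); // deleting the node releases the lock and triggers the next waiter's watch
    }
}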

  • Code example (Curator)
// Initialization
CuratorFramework curatorFramework = CuratorFrameworkFactory.newClient("zookeeper1.tq.master.cn:2181", new ExponentialBackoffRetry(1000, 3));
curatorFramework.start();
// Create the lock node path
String lockPath = "/distributed/lock/";   // root path
String lockName = "xxxx";
// InterProcessMutex is a reentrant lock
InterProcessMutex interProcessMutex = new InterProcessMutex(curatorFramework, lockPath + lockName);
// Lock (returns false if the lock cannot be acquired within the timeout)
boolean acquired = interProcessMutex.acquire(2, TimeUnit.SECONDS);
// Release the lock
if (acquired && interProcessMutex.isAcquiredInThisProcess()) {
    interProcessMutex.release();
    curatorFramework.delete().inBackground().forPath(lockPath + lockName);
}

Corrections are welcome
