For a simple Redis experiment, see Redis Distributed locks

Redis distributed locks

  • Q1: Why does a Redis distributed lock prevent dirty data when multiple machines access a shared variable?

    A1: We lock a key with SET ... EX ... NX (set-if-not-exists with an expiry, in one atomic command), and every such lock request is sent to the Redis server. Because Redis executes commands on a single thread, all requests are processed one at a time from a queue, so two clients can never operate on the same lock at the same moment.
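
    Below is a minimal sketch of acquiring such a lock from Go, assuming the go-redis client (github.com/redis/go-redis/v9) and a Redis server on localhost; the key name product:100001 is only illustrative:

        package main

        import (
            "context"
            "fmt"
            "time"

            "github.com/redis/go-redis/v9"
        )

        func main() {
            ctx := context.Background()
            rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379"})

            // Equivalent to: SET product:100001 true EX 10 NX
            // NX: set only if the key does not exist; EX 10: expire after 10 seconds.
            // The whole thing is a single atomic command on the Redis side.
            ok, err := rdb.SetNX(ctx, "product:100001", "true", 10*time.Second).Result()
            if err != nil {
                panic(err)
            }
            if ok {
                fmt.Println("lock acquired")
            } else {
                fmt.Println("lock already held by another client")
            }
        }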

  • Q2: If the process that holds the lock crashes, the lock is never released and the other clients deadlock. How do we handle this?

    A2: Attach a timeout (expiry) to the lock when it is acquired. Once the timeout elapses the lock becomes invalid automatically, so other processes can take the lock and carry on with normal business.

    E.g. SET product:100001 true EX 10 NX — the key is product:100001 and the lock timeout is 10s; if the holder never releases it, the lock becomes invalid automatically once the 10s are up. Note that SET ... EX ... NX is a single atomic operation: the value and the expiry are set together or not at all.
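
    A small sketch of why the expiry prevents a deadlock, under the same go-redis assumptions as above: client A acquires the lock and then never releases it (a simulated crash), and once the 10s TTL elapses client B can acquire it anyway:

        package main

        import (
            "context"
            "fmt"
            "time"

            "github.com/redis/go-redis/v9"
        )

        func main() {
            ctx := context.Background()
            rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379"})

            a, _ := rdb.SetNX(ctx, "product:100001", "A", 10*time.Second).Result()
            fmt.Println("A acquired:", a) // true
            // A never calls DEL: a simulated crash of the lock holder.

            time.Sleep(11 * time.Second) // wait for the 10s TTL to expire

            b, _ := rdb.SetNX(ctx, "product:100001", "B", 10*time.Second).Result()
            fmt.Println("B acquired:", b) // true, because the old lock expired on its own
        }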

  • Q3: If the timeout is too short, the lock expires before service A finishes, and service B acquires the lock and starts its own work. When A finally finishes and releases the lock, the lock it deletes is now B's, so B's work is no longer protected and, in the worst case, the lock stops working altogether. How do we avoid this?

    A3: Each process generates its own UUID (or other unique token) and stores it as the lock value. When releasing, it first checks that the current value is still its own token and only then deletes the key, so a process can only release its own lock. The check and the delete must be a single atomic statement (e.g. a Lua script); otherwise the lock could expire, and be taken by another process, between the moment the check passes and the moment the delete runs (see the sketch below).

    Of course, the simplest approach is to set the timeout according to the specific business: a generous timeout also avoids this problem, at the small cost of holding the lock longer than necessary when something goes wrong.
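
    A minimal sketch of the token-based release, again assuming go-redis; the unlock is a small Lua script so that the token check and the DEL run atomically inside Redis:

        package main

        import (
            "context"
            "crypto/rand"
            "encoding/hex"
            "fmt"
            "time"

            "github.com/redis/go-redis/v9"
        )

        // Delete the key only if it still holds our own token.
        const unlockScript = `
        if redis.call("get", KEYS[1]) == ARGV[1] then
          return redis.call("del", KEYS[1])
        end
        return 0`

        // Per-process unique value; a UUID works equally well.
        func newToken() string {
            b := make([]byte, 16)
            rand.Read(b)
            return hex.EncodeToString(b)
        }

        func main() {
            ctx := context.Background()
            rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379"})
            key, token := "product:100001", newToken()

            if ok, _ := rdb.SetNX(ctx, key, token, 10*time.Second).Result(); !ok {
                fmt.Println("lock not acquired")
                return
            }

            // ... do the business work ...

            released, _ := rdb.Eval(ctx, unlockScript, []string{key}, token).Int64()
            fmt.Println("released own lock:", released == 1)
        }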

  • Q4: How large should the timeout be? What value is appropriate?

    A4: Often we don’t know how long the business logic will run, so we can’t pick the right timeout up front. A better approach is to start a listening coroutine alongside the main business flow: give the lock an initial timeout (say 30s), and have the coroutine check at a fixed interval (say every 10s) whether the main work has finished; if it has not, reset the timeout back to 30s. This keeps the lock alive for as long as the work runs. The check interval is usually set to 1/3 of the timeout.
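
    A sketch of that renewal ("watchdog") idea under the same go-redis assumptions: a background goroutine extends the TTL at 1/3 of the timeout, but only while the lock still holds our own token, and it stops as soon as the work is done:

        package main

        import (
            "context"
            "fmt"
            "time"

            "github.com/redis/go-redis/v9"
        )

        // Renew the TTL only if the lock still holds our token (atomic in Lua).
        const renewScript = `
        if redis.call("get", KEYS[1]) == ARGV[1] then
          return redis.call("expire", KEYS[1], ARGV[2])
        end
        return 0`

        func main() {
            ctx := context.Background()
            rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379"})
            key, token := "product:100001", "my-unique-token"
            ttl := 30 * time.Second

            if ok, _ := rdb.SetNX(ctx, key, token, ttl).Result(); !ok {
                fmt.Println("lock not acquired")
                return
            }

            done := make(chan struct{})
            go func() { // watchdog coroutine
                ticker := time.NewTicker(ttl / 3) // check every 10s for a 30s timeout
                defer ticker.Stop()
                for {
                    select {
                    case <-done:
                        return
                    case <-ticker.C:
                        // Main work not finished yet: push the TTL back to 30s.
                        rdb.Eval(ctx, renewScript, []string{key}, token, int(ttl.Seconds()))
                    }
                }
            }()

            doBusinessWork() // however long this takes, the lock stays alive
            close(done)      // stop renewing, then release with the token-checked unlock script above
        }

        func doBusinessWork() { time.Sleep(45 * time.Second) }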

  • Q5: With Redis in a master-slave architecture, the lock can also be lost during a master failover. How do we solve that?

    A5: During a master failover, locks that the original master had not yet replicated may be missing on the new master. Another client can then acquire the same lock on the new master, and business logic that should be mutually exclusive runs concurrently.

    The relevant distributed-systems principle is CAP: consistency, availability, and partition tolerance. Redis is essentially an AP system and favours availability; ZooKeeper is essentially a CP system and favours consistency.

    One solution is to use a ZooKeeper distributed lock instead. ZooKeeper stores data as a tree, and in a multi-node ensemble the nodes are divided into a Leader and Followers. When the Leader receives a key (a write), it replicates (synchronises) it to the Follower nodes and only reports success to the client after more than half of the nodes have it. So when the leader is switched over, the new leader is elected from the Followers with the most up-to-date data, which is guaranteed to contain every key the original leader acknowledged.

    Does that mean Redis cannot solve this problem at all? It can.

    Use the RedLock algorithm. The idea of RedLock: the lock request is sent to every independent Redis node, and the lock is considered acquired only when more than half of the nodes grant it.
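
    A simplified sketch of the RedLock idea, assuming go-redis and three illustrative, independent master addresses; the real algorithm also handles per-node timeouts more carefully, accounts for clock drift, and releases the lock on the nodes that did succeed when the overall attempt fails:

        package main

        import (
            "context"
            "fmt"
            "time"

            "github.com/redis/go-redis/v9"
        )

        func main() {
            ctx := context.Background()
            // Independent Redis masters (addresses are illustrative only).
            addrs := []string{"10.0.0.1:6379", "10.0.0.2:6379", "10.0.0.3:6379"}
            key, token, ttl := "product:100001", "some-unique-token", 10*time.Second

            start := time.Now()
            acquired := 0
            for _, addr := range addrs {
                rdb := redis.NewClient(&redis.Options{Addr: addr, DialTimeout: 50 * time.Millisecond})
                ok, err := rdb.SetNX(ctx, key, token, ttl).Result()
                if err == nil && ok {
                    acquired++
                }
            }

            // Locked on a majority of nodes, and the TTL has not already run out.
            if acquired > len(addrs)/2 && time.Since(start) < ttl {
                fmt.Println("distributed lock acquired")
            } else {
                fmt.Println("failed to acquire the lock")
                // A real implementation would now DEL the key on every node it locked.
            }
        }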