I focus on PHP, MySQL, Linux, and front-end development. Thank you for your attention! My articles are organized on GitHub and Gitee, and mainly cover PHP, Redis, MySQL, JavaScript, HTML & CSS, Linux, Java, Golang, tools, and other related theory, interview questions, and practical content.

Background

Recently, in a marketing campaign project, a colleague used Redis to manage product inventory. During load testing, we found the product was being oversold. Here is a summary of how to use Redis correctly to prevent overselling in a seckill (flash sale) scenario.

Demonstration steps

Rather than jumping straight to a secure, efficient distributed lock, we will build one step by step, uncovering the drawbacks of each kind of lock and optimizing it along the way.

The first scenario

This scenario uses Redis to store the quantity of goods. Read the inventory first; if it is greater than 0, decrement it by 1 and write the new value back to Redis. The general flow is as follows:

  1. After the first request comes in, it reads the inventory from Redis, checks it, decrements it by one, and writes the new value back.

  2. While the first request is still processing the inventory logic, a second request comes in and runs the same logic: read the Redis inventory, check the number, then decrement it. Both are handling the same product.

  3. Under the flood of requests in a seckill, this logic easily sells far more items than are actually in stock.

public function demo1(ResponseInterface $response)
{
    $application = ApplicationContext::getContainer();
    $redisClient = $application->get(Redis::class);
    /** @var int $goodsStock Current inventory */
    $goodsStock = $redisClient->get($this->goodsKey);

    if ($goodsStock > 0) {
        $redisClient->decr($this->goodsKey);
        // TODO perform additional business logic
        return $response->json(['msg' => 'Seckill successful'])->withStatus(200);
    }
    return $response->json(['msg' => 'Seckill failed, merchandise stock is low.'])->withStatus(500);
}

Problem analysis:

  1. This approach uses Redis to manage inventory and reduce the pressure on MySQL.

  2. Assume the inventory is 1. While the first request is between checking that the inventory is greater than 0 and decrementing it, a second request reads the data and also sees an inventory greater than 0. Both execute the seckill logic, but with only one item in stock, we are oversold.

  3. Now imagine we let only one request process the inventory at a time, and every other request waits until the previous one finishes. Would we still oversell? That is the locking mechanism implemented in the following scenarios.

The second scenario

This scenario uses a file lock. When the first request comes in, it acquires the file lock; when its transaction completes, it releases the lock, the next request is processed, and the cycle continues. This ensures that, among all requests, only one is processing the inventory at any time, and the lock is released once that request is done.

  1. A request locks the file; other requests block until the previous one releases the lock.

  2. All requests line up like a queue, first in, first out: the first in line is handled, then the second, in FIFO order each time.

public function demo3(ResponseInterface $response)
{
    $fp = fopen("/tmp/lock.txt", "r+");

    try {
        if (flock($fp, LOCK_EX)) {  // Acquire an exclusive lock
            $application = ApplicationContext::getContainer();
            $redisClient = $application->get(Redis::class);
            /** @var int $goodsStock Current inventory */
            $goodsStock = $redisClient->get($this->goodsKey);
            if ($goodsStock > 0) {
                $redisClient->decr($this->goodsKey);
                // TODO handle additional business logic
                $result = true; // Final result of the business logic
                flock($fp, LOCK_UN);    // Release the lock
                if ($result) {
                    return $response->json(['msg' => 'Seckill successful'])->withStatus(200);
                }
                return $response->json(['msg' => 'Seckill failed'])->withStatus(200);
            }
            flock($fp, LOCK_UN);    // Release the lock
            return $response->json(['msg' => 'Inventory is low, seckill failed.'])->withStatus(500);
        }
        return $response->json(['msg' => 'The activity is too popular, too many people are buying, please try again later.'])->withStatus(500);
    } catch (\Exception $exception) {
        return $response->json(['msg' => 'System exception'])->withStatus(500);
    } finally {
        fclose($fp); // Close the handle exactly once, on every path
    }
}

Problem analysis:

  1. Acquiring and releasing a file lock takes time. In a seckill scenario a large number of requests come in, and most users are left waiting.

  2. A file lock only applies to the current server. If the project has a distributed deployment, the lock above only serializes requests on its own server: with multiple servers there are multiple independent locks.

The third scenario

In this scenario, the inventory is still stored in Redis, but each request decrements it by 1 first. If the value returned by Redis is less than 0, the current seckill fails. This takes advantage of the fact that Redis executes commands on a single thread, so each DECR is applied atomically, one at a time.

public function demo2(ResponseInterface $response)
{
    $application = ApplicationContext::getContainer();
    $redisClient = $application->get(Redis::class);
    /** @var int $goodsStock Remaining inventory after Redis decrements it by 1 */
    $goodsStock = $redisClient->decr($this->goodsKey);
    if ($goodsStock >= 0) { // DECR returns the remaining stock; >= 0 means this request got an item
        // TODO perform additional business logic
        $result = true; // Result of business processing
        if ($result) {
            return $response->json(['msg' => 'Seckill successful'])->withStatus(200);
        }
        $redisClient->incr($this->goodsKey); // Restore the decremented inventory
        return $response->json(['msg' => 'Seckill failed'])->withStatus(500);
    }
    return $response->json(['msg' => 'Seckill failed, merchandise stock is low.'])->withStatus(500);
}

Problem analysis:

  1. Although this solution uses the Redis single-threaded model to avoid overselling, once the inventory reaches 0 every further seckill request still decrements it, so the cached value in Redis inevitably drifts below 0.

  2. In this scheme, the number of successful seckills can drift from the number of items actually sold. As in the code above, when the business result is false we INCR the inventory back; if an exception occurs and the INCR never runs, the stock count becomes inconsistent.

The fourth scenario

Analyzing the three scenarios above, the file lock approach is the most correct one, but it cannot cover a distributed deployment. In that case, we can use the Redis SETNX and EXPIRE commands to implement a distributed lock: SETNX sets the lock key only if it does not already exist, and EXPIRE adds a timeout to the lock.

public function demo4(ResponseInterface $response)
{
    $application = ApplicationContext::getContainer();
    $redisClient = $application->get(Redis::class);

    if ($redisClient->setnx($this->goodsKey, 1)) {
        // Suppose the server goes down right before the following line runs
        $redisClient->expire($this->goodsKey, 10);
        // TODO handle business logic
        $result = true; // Result of the business logic
        // Remove the lock
        $redisClient->del($this->goodsKey);
        if ($result) {
            return $response->json(['msg' => 'Seckill succeeded.'])->withStatus(200);
        }
        return $response->json(['msg' => 'Seckill failed.'])->withStatus(500);
    }
    return $response->json(['msg' => 'System exception. Please try again.'])->withStatus(500);
}

Problem analysis:

  1. From the example code above, this approach seems fine: add a lock, release the lock. But what if an exception occurs after SETNX acquires the lock and before EXPIRE sets its timeout? The lock would then live forever.

  2. In other words, this Redis distributed lock is not acquired atomically.

The fifth scenario

The fourth scenario used Redis to implement a distributed lock, but acquiring it was not atomic. Fortunately, Redis provides a single command that combines the two: SET key value NX EX seconds, written in phpredis as set($key, $value, ['nx', 'ex' => $ttl]).

public function demo5(ResponseInterface $response)
{
    $application = ApplicationContext::getContainer();
    $redisClient = $application->get(Redis::class);

    if ($redisClient->set($this->goodsKey, 1, ['nx', 'ex' => 10])) {
        try {
            // TODO handle the seckill business logic
            $result = true; // Result of the business logic
            if ($result) {
                return $response->json(['msg' => 'Seckill succeeded.'])->withStatus(200);
            }
            return $response->json(['msg' => 'Seckill failed.'])->withStatus(200);
        } finally {
            // Remove the lock (runs on both success and exception)
            $redisClient->del($this->goodsKey);
        }
    }
    return $response->json(['msg' => 'System exception. Please try again.'])->withStatus(500);
}

Problem analysis:

  1. Step by step, you may feel that this fifth scenario is a seamless distributed lock. Look closely at where the TODO business logic runs: what happens if it takes longer than the 10 seconds set on the lock?

  2. If the first request's logic takes more than 10 seconds, its lock expires and a second request acquires the lock and starts processing. When the first request finishes, it deletes the lock, but that lock now belongs to the second request. A third request can then acquire the lock and run alongside the second. The lock has effectively stopped locking anything.

  3. In this situation, a request deletes a lock that is not its own. What if we verify ownership on deletion, so each request only deletes its own lock? Let's look at the sixth scenario.

The sixth scenario

Building on the fifth scenario, a unique identifier for the request is checked at deletion time: a request only deletes the lock if the lock still carries its own identifier.

public function demo6(ResponseInterface $response)
{
    $application = ApplicationContext::getContainer();
    $redisClient = $application->get(Redis::class);
    /** @var string $client Unique identifier of the current request */
    $client = md5(uniqid((string) mt_rand(), true));
    if ($redisClient->set($this->goodsKey, $client, ['nx', 'ex' => 10])) {
        try {
            // TODO handle the seckill business logic
            $result = true; // Result of the business logic
            if ($result) {
                return $response->json(['msg' => 'Seckill successful'])->withStatus(200);
            }
            return $response->json(['msg' => 'Seckill failed'])->withStatus(500);
        } finally {
            if ($redisClient->get($this->goodsKey) == $client) {
                // There is a time gap here between GET and DEL
                $redisClient->del($this->goodsKey);
            }
        }
    }
    return $response->json(['msg' => 'Please try again later'])->withStatus(500);
}

Problem analysis:

  1. After the analysis above, this seems airtight. But notice where I added the comment "there is a time gap here". Suppose Redis reads the value and the identifier matches, and then a pause occurs before DEL runs (blocking, a network fluctuation, and so on). Is the lock still ours by the time DEL executes, or has it already expired?

  2. If the lock expired and was re-acquired in that gap, the DEL removes the next request's lock, not ours, and we are back to an invalid lock.

The problem summary

In all the examples above, the big problem is that releasing the lock in Redis is not an atomic operation. Combining this with the sixth scenario: if we can make both acquiring and releasing the lock atomic, our distributed lock is correct.

  1. Acquiring the lock can be made atomic with a native Redis command (SET with NX and EX).

  2. When releasing the lock, only delete the lock we added ourselves, as in the sixth scenario.

  3. That leaves atomicity when releasing the lock. Redis has no single native command for "check then delete", so we use a Lua script, which Redis executes atomically.
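Putting the two rules together, the whole pattern can be sketched without any library, using the phpredis extension directly. This is a minimal sketch under assumed defaults (a local Redis at 127.0.0.1:6379; the lock key name is illustrative). The point is the release: Redis runs a Lua script atomically, so the get-compare-delete cannot be interleaved with another client.

```php
<?php
// Canonical check-and-delete script from the Redis SET documentation:
// deletes the key only if it still holds the caller's token.
$releaseScript = <<<'LUA'
if redis.call("get", KEYS[1]) == ARGV[1] then
    return redis.call("del", KEYS[1])
else
    return 0
end
LUA;

$redis = new Redis();
$redis->connect('127.0.0.1', 6379);

$key   = 'seckill:lock';                        // illustrative lock key
$token = md5(uniqid((string) mt_rand(), true)); // unique token for this request

// Atomic acquire: SET with NX (only if absent) and EX (10-second expiry)
if ($redis->set($key, $token, ['nx', 'ex' => 10])) {
    try {
        // TODO business logic goes here
    } finally {
        // Atomic release: check and delete happen inside Redis as one step
        $redis->eval($releaseScript, [$key, $token], 1);
    }
}
```

Compared with the sixth scenario, the GET-then-DEL time gap disappears because both operations run inside the script.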

The specific implementation

Opening the official Redis website, you can see it lists several ready-made distributed lock clients you can use directly. The client I used here is rtckit/reactphp-redlock; follow its documentation for installation. Below is a brief description of two ways to call it.

The first way

public function demo7()
{
    /** @var \Clue\React\Redis\Factory $factory Initializes a Redis client factory */
    $factory = new \Clue\React\Redis\Factory();
    $client  = $factory->createLazyClient('127.0.0.1');

    /** @var \RTCKit\React\Redlock\Custodian $custodian Initializes a lock custodian */
    $custodian = new \RTCKit\React\Redlock\Custodian($client);
    $custodian->acquire('MyResource', 60, 'r4nd0m_token')
        ->then(function (?Lock $lock) use ($custodian) {
            if (is_null($lock)) {
                // Failed to acquire the lock
            } else {
                // Lock acquired with a lifetime of 60s
                // TODO handle business logic
                // Release the lock
                $custodian->release($lock);
            }
        });
}

The general logic here is similar to the sixth scenario: use the Redis SET command with NX to acquire the lock atomically, storing a random token so that we only ever release our own lock, never someone else's. The big difference is that release runs a Lua script to delete the lock, which makes the release atomic as well. Below is a snippet of the release code.

// The Lua release script
public const RELEASE_SCRIPT = <<<LUA
if redis.call("get", KEYS[1]) == ARGV[1] then
    return redis.call("del", KEYS[1])
else
    return 0
end
LUA;

public function release(Lock $lock): PromiseInterface
{
    /** @psalm-suppress InvalidScalarArgument */
    return $this->client->eval(self::RELEASE_SCRIPT, 1, $lock->getResource(), $lock->getToken())
        ->then(function (?string $reply): bool {
            return $reply === '1';
        });
}

The second way

The second method is not much different from the first, but adds a spin: keep retrying to acquire the lock, and if it still cannot be acquired, give up on the current request.

public function demo8()
{
    /** @var \Clue\React\Redis\Factory $factory Initializes a Redis client factory */
    $factory = new \Clue\React\Redis\Factory();
    $client  = $factory->createLazyClient('127.0.0.1');

    /** @var \RTCKit\React\Redlock\Custodian $custodian Initializes a lock custodian */
    $custodian = new \RTCKit\React\Redlock\Custodian($client);
    $custodian->spin(100, 0.5, 'HotResource', 10, 'r4nd0m_token')
        ->then(function (?Lock $lock) use ($custodian): void {
            if (is_null($lock)) {
                // Up to 100 attempts were made, 0.5 seconds apart; no lock was acquired, so the request is abandoned.
            } else {
                // Lock acquired with a lifetime of 10s
                // TODO handle business logic
                // Release the lock
                $custodian->release($lock);
            }
        });
}

Spin locks

A spin lock is a locking mechanism for protecting shared resources. Like a mutex, it solves the problem of mutually exclusive access to a resource.

Whether it is a mutex or a spin lock, there can be at most one holder at any time; only one execution unit can acquire the lock. The scheduling, however, differs slightly. With a mutex, if the resource is already occupied, the applicant goes to sleep. A spin lock does not put the caller to sleep: if the lock is held by another execution unit, the caller keeps looping to check whether the holder has released it, hence the name "spin".
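As a rough illustration, the spin idea can be sketched as a retry loop. The `spinAcquire` helper below is hypothetical, written for illustration only; in practice Custodian::spin plays this role, and the callable stands in for a single SET NX EX attempt.

```php
<?php
// Minimal sketch of a spinning acquire: retry the acquire attempt up to
// $attempts times, waiting $interval seconds between tries, instead of
// sleeping indefinitely or failing on the first attempt.
function spinAcquire(callable $tryAcquire, int $attempts, float $interval): bool
{
    for ($i = 0; $i < $attempts; $i++) {
        if ($tryAcquire()) {
            return true; // lock acquired
        }
        usleep((int) ($interval * 1000000)); // wait before retrying
    }
    return false; // give up after $attempts tries
}
```

With 100 attempts at 0.5-second intervals, this matches the spin(100, 0.5, ...) call shown earlier: the caller stays active between attempts rather than being put to sleep by the kernel.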

Conclusion

In fact, the careful reader may have noticed further problems with the schemes above.

  1. Concurrency would normally be handled by extra threads or processes; by adding a lock here, we turn parallel processing into serial processing, reducing the supposed high performance of a seckill.

  2. Does the above solution still work in Redis master-slave replication, clustering, and other deployment architectures?

  3. A lot of people say that ZooKeeper is better suited for distributed locking scenarios, but what does ZooKeeper do better than Redis?

With those questions in mind, I'll see you in the next article. If you liked it or found it interesting, you are welcome to follow my articles. Corrections for any inadequacies in the article are welcome.