Idempotency of interfaces

1. What is idempotency

Idempotency of an interface means that one request or many requests for the same operation produce the same result, with no side effects caused by repeated clicks. Take a payment scenario: the user buys goods and the deduction succeeds, but a network error occurs while the result is being returned. The money has already been deducted, yet the user clicks the button again, a second deduction is made, and a success result is returned. The user then checks the balance, finds an extra deduction, and the transaction history shows two records... That interface does not guarantee idempotency.

2. What situations need to be prevented

  • The user clicks the button multiple times
  • The user navigates back to the page and submits again
  • Microservices call each other; a request fails due to network problems and Feign triggers its retry mechanism
  • Other business situations that cause repeated requests

3. When an operation is idempotent

Taking SQL as an example, some operations are naturally idempotent:

  • SELECT * FROM table WHERE id=?
    • No matter how many times you do it, it doesn’t change the state. It’s naturally idempotent.
  • UPDATE tab1 SET col1=1 WHERE col2=2
    • The state is consistent no matter how many times it succeeds, and it’s an idempotent operation.
  • delete from user where userid=1
    • Multiple operations, same result, idempotent
  • insert into user(userid,name) values(1,'a')
    • If the userID is the unique primary key, only one piece of user data will be inserted, which is idempotent.

Non-idempotent operations:

  • UPDATE tab1 SET col1=col1+1 WHERE col2=2
    • The result of each execution changes and is not idempotent.
  • insert into user(userid,name) values(1,'a')
    • If userid is not a primary key and can repeat, each execution adds another row, so the operation is not idempotent.

4. Idempotent solutions

4.1 Token mechanism

1. The server provides an interface for issuing tokens. When analyzing the business, identify which operations have an idempotency problem; a token must be obtained before executing such a business operation, and the server saves the token in Redis.
2. When calling the business interface, the client carries the token with the request.
3. The server checks whether the token exists in Redis. If it exists, this is the first request: delete the token and continue executing the business.
4. If the token does not exist in Redis, the operation is a repeat: return a "repeated operation" marker to the client directly, which guarantees the business code is not executed twice.
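As a sketch of step 1, a minimal token-issuing endpoint might look like the following, assuming the same OrderConstant prefix and LoginUserInterceptor used in the submit example further down; the request path and the 30-minute expiry are illustrative choices:

@RestController
public class OrderTokenController {

    @Autowired
    StringRedisTemplate redisTemplate;

    // Step 1: issue an anti-duplication token before the user submits an order
    @GetMapping("/order/token")
    public String createOrderToken() {
        MemberResponseVo member = LoginUserInterceptor.loginUser.get();
        String token = UUID.randomUUID().toString().replace("-", "");
        // Save the token in Redis so the submit interface (steps 3 and 4)
        // can verify and delete it; 30 minutes is an illustrative expiry
        redisTemplate.opsForValue().set(
                OrderConstant.USER_ORDER_TOKEN_PREFIX + member.getId(),
                token, 30, TimeUnit.MINUTES);
        return token;
    }
}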

Two risks need attention, analyzed below: when to delete the token, and the atomicity of checking and deleting it.

1. Delete the token first or later?

(1) If the token is deleted first and the business call then fails, the anti-duplication design prevents the request from being retried, so the business is never executed. (2) If the token is deleted afterwards, the business may succeed but the response is interrupted or times out; the token is never deleted, other callers keep retrying, and the business gets executed more than once. (3) The better design is to delete the token first; if the business call fails, obtain a new token and request again.

2. Token acquisition, comparison, and deletion must be atomic

(1) redis.get(token), token.equals(token), and redis.del(token): if these operations are not atomic, several requests may all read the same token under high concurrency, all pass the comparison, and execute the business concurrently. (2) In Redis this can be made atomic with a Lua script.

The Lua script:

if redis.call('get', KEYS[1]) == ARGV[1] then return redis.call('del', KEYS[1]) else return 0 end

Example:

@Transactional
public SubmitOrderResponseVo submitOrder(OrderSubmitVo submitVo) {
    MemberResponseVo memberResponseVo = LoginUserInterceptor.loginUser.get();
    // Lua script: compare and delete the token atomically
    String script = "if redis.call('get', KEYS[1]) == ARGV[1] then return redis.call('del', KEYS[1]) else return 0 end";
    // Get the anti-replay token submitted by the client
    String orderToken = submitVo.getOrderToken();
    // Atomically validate and delete the token; returns 0 on failure, 1 on success
    Long result = redisTemplate.execute(new DefaultRedisScript<Long>(script, Long.class),
            Arrays.asList(OrderConstant.USER_ORDER_TOKEN_PREFIX + memberResponseVo.getId()),
            orderToken);
    if (result == 0L) {
        // Token verification failed: repeated request, do not execute the business
    } else {
        // Verification succeeded: create the order, order items and other information
    }
    // ... build and return the response
}

4.2 Various locking mechanisms

1. Database pessimistic locking

select * from xxxx where id = 1 for update;

Pessimistic locking is generally used together with a transaction, and the data may stay locked for a long time, so choose it according to the actual situation. Note also that the id field must be a primary key or unique index; otherwise the database may lock the whole table instead of a row, which is very troublesome to handle.
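A minimal sketch of using the pessimistic lock from Java, assuming Spring's JdbcTemplate and a hypothetical t_goods table; the row lock taken by FOR UPDATE is held until the surrounding transaction ends:

@Service
public class StockService {

    @Autowired
    JdbcTemplate jdbcTemplate;

    // SELECT ... FOR UPDATE locks the row for the whole transaction, so
    // concurrent requests for the same id wait instead of interleaving.
    @Transactional
    public void deductStock(long id) {
        Integer count = jdbcTemplate.queryForObject(
                "select count from t_goods where id = ? for update",
                Integer.class, id);
        if (count != null && count > 0) {
            jdbcTemplate.update(
                    "update t_goods set count = count - 1 where id = ?", id);
        }
    }
}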

2. Database optimistic locking

This approach works well in update scenarios:

update t_goods set count = count - 1, version = version + 1 where good_id = 2 and version = 1

That is, read the version number of the goods row before operating on the stock, then carry that version when performing the update. Suppose the first stock operation reads version 1 and calls the inventory service, which bumps the row to version 2. If the order service calls the inventory service again while still carrying version 1, the SQL statement above does nothing: version has already changed to 2, so the WHERE condition no longer holds. This guarantees that no matter how many times the call is retried, it is actually processed only once. Optimistic locking mainly suits scenarios with many reads and few writes.
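A sketch of the same idea in Java, assuming a JdbcTemplate field and the t_goods table from the SQL above; the caller carries the version it originally read, so a retried call with a stale version simply updates zero rows:

// version is the value the caller read before its first attempt and resends
// with every retry; once the row has moved past it, the update is a no-op.
public boolean deductStock(long goodId, int version) {
    int updated = jdbcTemplate.update(
            "update t_goods set count = count - 1, version = version + 1 "
                    + "where good_id = ? and version = ?",
            goodId, version);
    // 1 row updated: this request did the work; 0 rows: already processed
    return updated == 1;
}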

4.3 Distributed locking at the service layer

If multiple machines may process the same data at the same time, for example several instances receiving the same data from a scheduled task, we can use a distributed lock: lock the data, process it, and release the lock when done. After acquiring the lock, the holder must first check whether the data has already been processed.

@Component
public class SeckillSkuScheduled {

    @Autowired
    RedissonClient redissonClient;

    @Autowired
    SeckillService seckillService;

    private final String upload_lock = "seckill:upload:lock";

    public void uploadSeckillSkuLatest3Days() {
        // Distributed lock: the instance holding the lock executes the
        // business and updates the status. After the lock is released,
        // whoever acquires it next sees the latest status.
        RLock lock = redissonClient.getLock(upload_lock);
        lock.lock(10, TimeUnit.SECONDS);
        try {
            // Execute business code
            seckillService.xxxxxx();
        } finally {
            lock.unlock();
        }
    }
}

4.4 Various unique constraints

1. Database unique constraints

Insert data according to a unique index, such as the order number: it is impossible to insert two records for the same order, so duplication is prevented at the database level. This mechanism uses the unique constraint of the database primary key (or a unique index) to solve idempotency in insert scenarios. It requires that the primary key is not auto-incremented: the business must generate a globally unique key. In a separate-database, separate-table scenario, the routing rules must make the same request land in the same database and table; otherwise the constraint is ineffective, because primary keys in different databases and tables are unrelated to each other.
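A sketch under the assumption of a hypothetical t_order table with a unique index on order_no; Spring translates the duplicate-key error into DuplicateKeyException, which the caller can treat as "already processed":

public boolean insertOrder(String orderNo, BigDecimal amount) {
    try {
        jdbcTemplate.update(
                "insert into t_order(order_no, amount) values(?, ?)",
                orderNo, amount);
        return true;  // first request: the row was inserted
    } catch (DuplicateKeyException e) {
        return false; // repeated request: rejected by the unique index
    }
}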

2. Redis set de-duplication

When a large amount of data must each be processed exactly once, we can compute the MD5 of each piece of data and put it into a Redis set. Before processing a piece of data, check whether its MD5 is already in the set; if it is, skip it.
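A minimal sketch, assuming a StringRedisTemplate field and Spring's DigestUtils; SADD returns how many members were actually added, so 0 means the MD5 is already in the set and the data was processed before (the key name is illustrative):

public boolean shouldProcess(String payload) {
    String md5 = DigestUtils.md5DigestAsHex(
            payload.getBytes(StandardCharsets.UTF_8));
    // SADD is atomic: only the first request for this MD5 gets 1 back
    Long added = redisTemplate.opsForSet().add("dedup:data:md5", md5);
    return added != null && added == 1L;
}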

4.5 De-duplication table

Use the order number orderNo as the unique index of a de-duplication table: insert the order number into the de-duplication table first, then perform the business operation, with both in the same transaction. A repeated request fails on the unique constraint of the de-duplication table, which avoids the idempotency problem. Note that the de-duplication table and the business table must be in the same database; this ensures that if the business operation fails, the de-duplication row is rolled back in the same transaction, which guarantees data consistency. The Redis de-duplication described earlier follows the same idea.
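A sketch of the pattern, assuming a hypothetical t_order_dedup table with a unique index on order_no, living in the same database as the business table so both statements share one transaction:

@Transactional
public void processOrder(String orderNo) {
    // 1. Insert into the de-duplication table first; a repeated request hits
    //    the unique index and throws DuplicateKeyException here.
    jdbcTemplate.update(
            "insert into t_order_dedup(order_no) values(?)", orderNo);
    // 2. The business operation runs in the same transaction; if it fails,
    //    the de-duplication row above is rolled back with it.
    jdbcTemplate.update(
            "update t_order set status = 1 where order_no = ?", orderNo);
}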

4.6 Globally unique request ID

When the interface is called, generate a unique ID for the request and store it in a Redis set (de-duplication); if the ID already exists, the request has been processed before. You can have nginx attach a unique ID to every request:

proxy_set_header X-Request-Id $request_id;
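On the application side, a sketch of checking the nginx-generated ID might look like this, reading the X-Request-Id header configured above and using the same SADD trick as the MD5 example; the Redis key and where the check is placed (e.g. a servlet filter) are illustrative:

public boolean isFirstRequest(HttpServletRequest request) {
    // nginx fills this header via: proxy_set_header X-Request-Id $request_id;
    String requestId = request.getHeader("X-Request-Id");
    if (requestId == null) {
        return true; // no ID attached; let the request through
    }
    // Atomic SADD: only the first request with this ID gets 1 back
    Long added = redisTemplate.opsForSet().add("dedup:request:id", requestId);
    return added != null && added == 1L;
}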

An open-source idempotency plugin:

my.oschina.net/giegie/blog…