In our work we often run into this problem: we need to update the inventory, but between the moment we query the available stock and the moment we try to modify it, another user may already have changed the data, so the value we read is stale. Let's look at solutions below.

In MySQL InnoDB, the default transaction isolation level is REPEATABLE READ.

If you want to UPDATE a row of the same table after a SELECT, use SELECT ... FOR UPDATE.

Here’s an example:

Suppose the products table has a quantity column holding the stock of each item. Before an order can be created, we must check that the stock is sufficient (quantity > 0) and then update quantity to 1. The code is as follows:

SELECT quantity FROM products WHERE id=3;
UPDATE products SET quantity = 1 WHERE id=3;

Why isn’t it safe?

With little traffic this may never bite, but under heavy concurrent access it certainly will. Suppose we require quantity > 0 before selling: we SELECT and see quantity = 2, which looks correct, but before our UPDATE runs someone else may have driven the stock down to 0. The program does not know this and blindly writes the wrong value back. We therefore need the transaction mechanism to guarantee that what we read is still valid when we write.
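The lost-update race described above can be simulated deterministically (an illustrative Python sketch, not MySQL code; `simulate_race` and its names are hypothetical): every buyer SELECTs before anyone UPDATEs, so each decision is made on the same stale snapshot.

```python
def simulate_race(stock, buyers):
    """Worst-case interleaving: each buyer SELECTs before anyone UPDATEs."""
    snapshots = [stock] * buyers      # every buyer sees the same stale value
    accepted = 0
    final = stock
    for snap in snapshots:
        if snap > 0:                  # the check passes on the stale snapshot
            accepted += 1
            final = snap - 1          # blind UPDATE overwrites instead of decrementing
    return accepted, final

accepted, final = simulate_race(stock=1, buyers=2)
print(accepted, final)  # 2 0 -- two orders accepted although only one unit existed
```

The stored quantity ends at 0, which looks plausible, hiding the fact that the second sale should have been rejected.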

So we can test this in MySQL as follows:

SET AUTOCOMMIT=0;
BEGIN WORK;
SELECT quantity FROM products WHERE id=3 FOR UPDATE;

At this point the row with id=3 in products is locked (see note 3 below); any other transaction that requests the lock must wait for this transaction to commit before it can proceed:

SELECT * FROM products WHERE id=3 FOR UPDATE;

This guarantees that other transactions read the correct quantity.

UPDATE products SET quantity = 1 WHERE id=3;
COMMIT WORK;

The COMMIT writes to the database and unlocks products.

Note 1: BEGIN/COMMIT mark the start and end of the transaction. You can open two or more MySQL command windows and observe the lock behavior interactively.

Note 2: Inside a transaction, only another SELECT ... FOR UPDATE or LOCK IN SHARE MODE on the same rows will wait for this transaction to finish; a plain SELECT is not affected.

Note 3: InnoDB uses row-level locking by default, so only the matched rows (here, id=3) are locked; if the WHERE clause cannot use an index, the lock effectively covers far more than the intended rows.

Note 4: Do not use LOCK TABLES on InnoDB tables. If you must, read the official InnoDB documentation first to avoid deadlocks.

More advanced usage

If all we need is to check the condition and update in one step, we can use a statement like this:

UPDATE products SET quantity = 1 WHERE id=3 AND quantity > 0;

This way the check and the update happen in a single atomic statement, so no explicit transaction or lock is needed.
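The conditional-UPDATE pattern above can be sketched with Python's sqlite3 driver (used here as a hedged stand-in for a MySQL client; `buy_one` is a hypothetical helper): the affected-row count tells us whether the stock check held, with no separate SELECT.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (id INTEGER PRIMARY KEY, quantity INTEGER)")
conn.execute("INSERT INTO products VALUES (3, 1)")
conn.commit()

def buy_one(conn, product_id):
    # Atomic check-and-decrement: the WHERE clause enforces quantity > 0.
    cur = conn.execute(
        "UPDATE products SET quantity = quantity - 1 "
        "WHERE id = ? AND quantity > 0",
        (product_id,),
    )
    conn.commit()
    return cur.rowcount == 1   # 1 row changed -> purchase succeeded

print(buy_one(conn, 3))  # True  -- stock goes 1 -> 0
print(buy_one(conn, 3))  # False -- quantity > 0 failed, nothing was updated
```

In MySQL the same signal is available as the "rows affected" count returned by the UPDATE.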

Mysql handles high concurrency and prevents oversold inventory

I came across a very good article on this topic and am reposting it here to study.

Today Mr. Wang gave us a lesson on how MySQL handles high concurrency and prevents overselling inventory. He had in fact already covered this last year, but unfortunately, even though everyone understood it at the time, nobody carried that awareness into real development. Here I organize the problem as I understand it, and I hope there will be more courses like this.

First, a description of the oversell problem: e-commerce sites routinely run group-buy, flash-sale, and special-offer promotions, and these activities share one trait: traffic surges, with thousands or even tens of thousands of people rushing to buy one product. Yet promotional stock is necessarily very limited. Controlling the inventory so the item is not oversold, and avoiding needless losses, is a headache for many e-commerce programmers, and it is also the most basic requirement.

From a technical perspective, many people will immediately think of transactions. A transaction is a necessary condition for preventing oversells, but it is not a sufficient one.

For example:

Total inventory: 4 items

Requests: user A wants 1 item, user B wants 2, user C wants 3

The procedure is as follows:

beginTranse (start the transaction)
try {
    $result = $dbca->query('SELECT amount FROM s_store WHERE postID = 12345');
    if ($result->amount > 0) {
        // quantity is the amount this request deducts from stock
        $dbca->query('UPDATE s_store SET amount = amount - quantity WHERE postID = 12345');
    }
} catch (Exception $e) {
    rollBack (roll back)
}
commit (commit the transaction)

The code above is what we usually write to control inventory, and most people write it this way. It looks fine, but it hides a huge hole. Database access is ultimately access to disk files: a table in the database is really a file on disk, and one file may even contain several tables. Under high concurrency, suppose users A, B, and C all enter this transaction. The SELECT takes a shared lock, so all three users read an inventory of 4. Note also that InnoDB read results are version-controlled (MVCC): until another user commits an update (that is, until a new version is created), the current user keeps seeing the old version.

When the three users reach the UPDATE at the same time, MySQL serializes it: the updates take an exclusive lock and run one by one, each waiting for the previous one to commit and create a new version. After all three run, the inventory is certain to be negative. Based on the analysis above, though, we can change the code to avoid overselling, as follows:

beginTranse (start the transaction)
try {
    $dbca->query('UPDATE s_store SET amount = amount - quantity WHERE postID = 12345');
    $result = $dbca->query('SELECT amount FROM s_store WHERE postID = 12345');
    if ($result->amount < 0) {
        throw new Exception('insufficient stock');
    }
} catch (Exception $e) {
    rollBack (roll back)
}
commit (commit the transaction)
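The corrected flow above (deduct first, read back inside the same transaction, roll back if the stock went negative) can be sketched in Python with sqlite3 standing in for MySQL; table and column names follow the article, while `order` is a hypothetical helper.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE s_store (postID INTEGER PRIMARY KEY, amount INTEGER)")
conn.execute("INSERT INTO s_store VALUES (12345, 4)")
conn.commit()

def order(conn, post_id, quantity):
    try:
        conn.execute("BEGIN")
        conn.execute(
            "UPDATE s_store SET amount = amount - ? WHERE postID = ?",
            (quantity, post_id),
        )
        amount = conn.execute(
            "SELECT amount FROM s_store WHERE postID = ?", (post_id,)
        ).fetchone()[0]
        if amount < 0:
            raise ValueError("insufficient stock")
        conn.commit()
        return True
    except ValueError:
        conn.rollback()       # undo the deduction that went negative
        return False

print(order(conn, 12345, 3))  # True  -- stock 4 -> 1
print(order(conn, 12345, 3))  # False -- would go to -2, rolled back
```

After both calls the stored amount is still 1, because the failed order was rolled back rather than committed.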

Alternatively, a more concise approach:

beginTranse (start the transaction)
try {
    $dbca->query('UPDATE s_store SET amount = amount - quantity WHERE amount >= quantity AND postID = 12345');
} catch (Exception $e) {
    rollBack (roll back)
}
commit (commit the transaction)

= = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = =

1. In a flash sale you certainly cannot read and write the database at such a high frequency; it would cause severe performance problems. You must use a cache: load the flash-sale items into the cache, and use a lock to handle concurrent access. When a user's flash-sale order arrives, first decrement the item count (lock / unlock), then do the rest of the processing. If that processing fails, add 1 back to the count (lock / unlock); otherwise the transaction succeeded. Once the count reaches 0, the item is sold out and further requests are rejected.

2. You certainly cannot operate on the database directly, or it will fall over; direct reads and writes put too much pressure on it, so use a cache. Put the items to be sold, say 10 of them, into the cache, and keep a counter (for example in memcached) to track requests. You can admit more requests than items: for 10 items you might allow 100 requests. Once the counter reaches 100, later arrivals are simply shown "sold out", which greatly reduces the load on your server. The 100 admitted requests are then processed in order, roughly "first paid, first served", until the 10 items are gone.
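Point 2's request counter can be sketched as an atomic admission gate. This is a minimal single-process sketch with hypothetical names (`RequestGate`, `try_admit`); `threading.Lock` stands in for memcached's atomic increment, and a real deployment would use a shared counter such as memcache incr.

```python
import threading

class RequestGate:
    def __init__(self, limit):
        self.limit = limit
        self.count = 0
        self._lock = threading.Lock()

    def try_admit(self):
        with self._lock:            # atomic check-and-increment
            if self.count >= self.limit:
                return False        # show "sold out"; never touch MySQL
            self.count += 1
            return True

gate = RequestGate(limit=100)       # e.g. sell 10 items, admit 100 requests
admitted = sum(gate.try_admit() for _ in range(150))
print(admitted)  # 100 -- the other 50 requests are rejected at the gate
```

Only the admitted requests ever reach the database, which is the whole point of the counter.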

3. First of all, when multiple users concurrently modify the same record, the later user will certainly overwrite the result submitted by the earlier one.

This can be solved directly with a locking mechanism, either optimistic or pessimistic. Optimistic locking: design a version-number column into the table and increment it on every change; when you submit, compare the version you read against the current one, and you will know whether someone else committed concurrently. The downside is that the control lives in the application: if a different application modifies the same data, optimistic locking cannot help, and you should consider pessimistic locking instead.
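The version-number scheme just described can be sketched with sqlite3 standing in for the real database (table and helper names are hypothetical): the UPDATE succeeds only if the version is unchanged since the read, and a zero row count means "someone committed first, retry".

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE goods (id INTEGER PRIMARY KEY, stock INTEGER, version INTEGER)")
conn.execute("INSERT INTO goods VALUES (1, 10, 0)")
conn.commit()

def read(conn, goods_id):
    return conn.execute(
        "SELECT stock, version FROM goods WHERE id = ?", (goods_id,)
    ).fetchone()

def optimistic_update(conn, goods_id, new_stock, expected_version):
    cur = conn.execute(
        "UPDATE goods SET stock = ?, version = version + 1 "
        "WHERE id = ? AND version = ?",
        (new_stock, goods_id, expected_version),
    )
    conn.commit()
    return cur.rowcount == 1   # 0 rows -> stale version, caller must retry

# Two users read the same row (stock=10, version=0) "concurrently":
stock_a, ver_a = read(conn, 1)
stock_b, ver_b = read(conn, 1)

print(optimistic_update(conn, 1, stock_a - 1, ver_a))  # True: version 0 -> 1
print(optimistic_update(conn, 1, stock_b - 1, ver_b))  # False: version 0 is stale
```

The losing writer re-reads the row and retries, so no update based on stale data is ever committed.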

Pessimistic locking: lock the data directly at the database level, as in Oracle or MySQL with SELECT xxxxx FROM xxxx WHERE xx=xx FOR UPDATE, so that other threads cannot commit changes to those rows.

Besides locks, you can also use a status flag: design an "editing" state column into the table. Before a user modifies the data, set the flag to "being edited"; when another user then tries to edit the record, the system sees that someone else is already editing it and rejects the request. It is like the operating system refusing to let you modify or delete a file that is currently in use.

4. Locking at the database level is not recommended; lock in the application server's memory instead (keyed by primary key). When a user wants to modify the data for a certain id, store that id in memcached; if another user then tries to modify the same id, reading the value from memcached reveals the conflict, and that user is prevented from modifying the data.
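Point 4's application-side lock can be sketched as a per-id guard. This is a minimal single-process sketch with hypothetical names (`IdLock`, `acquire`, `release`); an in-process set stands in for memcached, and a multi-server deployment would need the shared store to be external.

```python
import threading

class IdLock:
    def __init__(self):
        self._editing = set()            # ids currently being modified
        self._mutex = threading.Lock()

    def acquire(self, record_id):
        with self._mutex:
            if record_id in self._editing:
                return False             # someone else holds this id
            self._editing.add(record_id)
            return True

    def release(self, record_id):
        with self._mutex:
            self._editing.discard(record_id)

locks = IdLock()
print(locks.acquire(42))  # True  -- first editor gets in
print(locks.acquire(42))  # False -- concurrent edit of the same id is rejected
locks.release(42)
print(locks.acquire(42))  # True  -- free again after release
```

The database itself is never locked; contention is resolved in memory before any query is issued.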

5. In practice, MySQL cannot face heavy concurrent reads and writes on its own. Use "external forces": caching, master/slave replication for read/write splitting, table sharding, queued writes, and other methods to reduce the concurrent load.

Pessimistic locks and optimistic locks

First of all, when multiple users concurrently modify the same record, it is certain that the later user will overwrite the results submitted by the former. This can be solved directly by using locking mechanism, optimistic locking or pessimistic locking.

Pessimistic locks, as the name implies, are pessimistic: every time you fetch the data you assume someone else will change it, so you lock it on every fetch, and anyone else who wants the data blocks until they obtain the lock. Traditional relational databases rely heavily on such locking mechanisms: row locks, table locks, read locks, write locks, all taken before the operation.

Optimistic locks, as the name implies, are optimistic: every time you fetch the data you assume nobody will modify it, so you do not lock it; only on UPDATE do you check whether anyone changed the data in the meantime, typically using a version number or a similar mechanism. Optimistic locking suits read-heavy applications and improves throughput; a database that offers something like a write_condition mechanism is in effect providing an optimistic lock.

Each kind of lock has its strengths and weaknesses; neither is simply better. Optimistic locking suits scenarios with few modifications and frequent reads: even with occasional conflicts it saves a great deal of lock overhead and raises system throughput. When conflicts are frequent (a large volume of writes), the upper layer keeps retrying, which hurts performance, and pessimistic locking is more appropriate.

Hands-on test

We will modify amount on this table. Open two MySQL command-line windows.

In the first window, A:

SET AUTOCOMMIT=0;
BEGIN WORK;
SELECT * FROM order_tbl WHERE order_id='124' FOR UPDATE;

In the second window, B:

UPDATE `order_tbl` SET amount = 1 WHERE order_id = 124;

We can see that window A's SELECT ... FOR UPDATE has locked the row, so window B's UPDATE blocks; if A holds the lock long enough, B fails with "Lock wait timeout exceeded; try restarting transaction".

Now complete and commit the transaction in the first window:

SET AUTOCOMMIT=0; BEGIN WORK; SELECT * FROM order_tbl WHERE order_id='124' FOR UPDATE;
UPDATE `order_tbl` SET amount = 10 WHERE order_id = 124;
COMMIT WORK;
