1. Concurrent database scenarios

In a high-concurrency scenario, leaving other middleware aside, a database faces the following situations:

  • Read-read: no problems, and no concurrency control is required.
  • Read-write: there are thread-safety issues that can break transaction isolation; you may encounter dirty reads, non-repeatable reads, and phantom reads.
  • Write-write: there are thread-safety issues, and lost updates may occur, such as first-type and second-type lost updates.

To address these problems, the SQL standard defines different isolation levels, each of which allows a different set of problems:

MySQL isolation levels

| Isolation level  | Dirty read | Non-repeatable read | Phantom read |
| ---------------- | ---------- | ------------------- | ------------ |
| READ UNCOMMITTED | May occur  | May occur           | May occur    |
| READ COMMITTED   | Solved     | May occur           | May occur    |
| REPEATABLE READ  | Solved     | Solved              | May occur    |
| SERIALIZABLE     | Solved     | Solved              | Solved       |
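
You can check or change the isolation level per session. A quick sketch (on MySQL 8.0 the variable is transaction_isolation; older versions use tx_isolation):

-- Check the isolation level of the current session
SELECT @@transaction_isolation;

-- Change it for the current session only
SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;

-- Or set the default for new connections (requires the appropriate privilege)
SET GLOBAL TRANSACTION ISOLATION LEVEL REPEATABLE READ;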

As the table shows, MySQL already solves the non-repeatable read problem at the REPEATABLE READ isolation level and largely solves the phantom read problem as well, but phantom reads can still occur in extreme cases.

So what’s the solution? Generally speaking, there are two options:

1️⃣ Read operations use MVCC, write operations use locks

For reads under RR, MVCC generates a ReadView when the transaction starts reading and uses it to find the historical version of each record that is visible to the transaction; these versions are built from undo logs. In an RR transaction, the ReadView is generated only when the first SELECT is executed, and subsequent SELECTs reuse it, which avoids non-repeatable reads and largely avoids phantom reads.
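
A minimal two-session sketch of this behavior, assuming a hypothetical account table (id, balance) and both sessions running under REPEATABLE READ:

-- Session A
BEGIN;
SELECT balance FROM account WHERE id = 1;   -- the first SELECT builds the ReadView

-- Session B
UPDATE account SET balance = 200 WHERE id = 1;
COMMIT;

-- Session A, still in the same transaction
SELECT balance FROM account WHERE id = 1;   -- reuses the ReadView: still returns the old balance
COMMIT;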

For writes, snapshot (consistent) reads do not lock any records in the table: the ReadView only reads historical versions, so it does not conflict with the latest version being written, and other transactions are free to modify the records in the table.

2️⃣ Both read and write operations take locks

Some business scenarios do not allow reading an old version of a record; every read must return the latest version. For example, in a bank deposit transaction you first read the account balance, then add the deposit amount to it, and finally write the result back to the database. Once the balance has been read, other transactions must not be allowed to access it until the deposit transaction completes. This means the record is locked even when it is read, so read-write operations queue up just like write-write operations.
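
A minimal sketch of such a deposit written with a locking read, assuming a hypothetical account table (id, balance):

BEGIN;
-- Current read: takes an X lock on the row, so no other transaction can lock-read or modify it
SELECT balance FROM account WHERE id = 1 FOR UPDATE;
-- Add the deposit amount and write the balance back
UPDATE account SET balance = balance + 100 WHERE id = 1;
COMMIT;   -- the lock is released here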

Dirty reads happen because the current transaction reads a record written by another, uncommitted transaction. If that other transaction locks the record while writing it, the current transaction cannot read it until the lock is released, so dirty reads cannot occur.

Non-repeatable reads happen because the current transaction reads a record, another transaction then modifies that record and commits, and the current transaction gets a different value when it reads the record again. If the current transaction locks the record when reading it, the other transaction cannot modify it, so non-repeatable reads naturally cannot occur either.

Phantom reads happen because the current transaction reads a range of records, another transaction then inserts new records into that range, and the current transaction sees the newly inserted records when it reads the range again. We call these newly inserted records phantom records.

How do you understand this range? As follows:

  1. Suppose the table contains only one row, with id = 1.
  2. When transaction A executes a query with id = 1, it finds that row; a range query such as id IN (1, 2) also returns only that one row.
  3. Transaction B then inserts a new row with id = 2 and commits.
  4. When transaction A runs the id IN (1, 2) query again, it reads two rows, which is a phantom read.

Note that a plain snapshot SELECT in transaction A would keep reusing its ReadView and still return only one row; the phantom appears when transaction A performs a current read, for example SELECT * FROM ... WHERE id IN (1, 2) FOR UPDATE, which reads the latest committed data.
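
A sketch that reproduces this extreme case under REPEATABLE READ, assuming a hypothetical table test(id, number) that initially contains only the row with id = 1:

-- Session A
BEGIN;
SELECT * FROM test WHERE id IN (1, 2);              -- snapshot read: 1 row

-- Session B
INSERT INTO test (id, number) VALUES (2, 2);
COMMIT;

-- Session A, same transaction
SELECT * FROM test WHERE id IN (1, 2);              -- still 1 row: the ReadView is reused
SELECT * FROM test WHERE id IN (1, 2) FOR UPDATE;   -- current read: 2 rows, the phantom appears
COMMIT;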

Solving the phantom read problem with locks is not easy, because the phantom records do not exist when the current transaction first performs its read; there is nothing to lock yet, so we do not know what to lock.

So how does InnoDB solve this? Let's start by looking at what locks the InnoDB storage engine provides.

2. MySQL locks and classification

The MySQL documentation describes a number of locks for the InnoDB storage engine.

At first glance this can be a bit puzzling, but we can categorize these locks in the same way we categorized locks when learning the JDK:

3. Lock granularity classification

What is lock granularity? The granularity of a lock is simply how much data a single lock covers.

For example, when you use the toilet at home, you only need to lock the bathroom door; you do not need to lock the whole house and keep your family from coming in. The bathroom is your lock granularity.

So what counts as a reasonable locking granularity?

In fact, the bathroom is not only used for the toilet; it is also used for bathing and washing hands. This is where optimizing lock granularity comes in.

If you are taking a bath, other people can still wash their hands in the bathroom at the same time, as long as the areas are isolated. If the toilet, bathtub, and sink are separated from each other (wet and dry areas separated), the bathroom can actually be used by three people at once, although of course they cannot all do the same thing. This is what refining the lock granularity means: when you take a shower you only close the shower door, and others can still come in and wash their hands. If the bathroom had not been divided into separate functional areas when it was designed, its resources could not be used to the fullest.

Similarly, MySQL has different lock granularities. There are usually three: row locks, table locks, and page locks.

3.1 row locks

Both shared and exclusive locks are for a single row, so they can also be called row locks.

Locking a record affects only that record, so row locks have the smallest locking granularity in MySQL. The InnoDB storage engine uses row locks by default.

It has the following characteristics:

  1. Lowest probability of lock conflicts, highest concurrency

    Because the granularity of a row lock is small, contention for lock resources is minimal, so the probability of lock conflicts is low and concurrency is higher.

  2. High overhead, slow to lock

    Locking is expensive. Locking many rows in a database consumes a lot of resources, and a lock request must wait for any conflicting lock to be released before it can be granted.

  3. Deadlocks can occur

    Read on to see what a deadlock is.

3.2 table locks

A table-level lock locks the entire table. It avoids deadlocks and is the coarsest-grained locking mechanism in MySQL.

The MyISAM storage engine uses table locks by default.

It has the following characteristics:

  1. Low overhead, fast locking

    Because the lock covers the whole table, acquiring a single table lock is faster than locking rows one by one.

  2. No deadlocks occur

    If the entire table is locked, no other transaction can get the lock, so there is no deadlock.

  3. Large lock granularity, high lock conflict probability, low concurrency

3.3 page locks

Page-level locking is a unique locking level in MySQL and is not common in other database management software.

The granularity of page-level locks is between row-level locks and table-level locks, so the resource cost of obtaining locks and the concurrent processing power provided by locks are also in between. In addition, page-level locks, like row-level locks, can be deadlocked.

|                      | Row locks | Table locks | Page locks |
| -------------------- | --------- | ----------- | ---------- |
| Lock granularity     | Small     | Large       | In between |
| Locking speed        | Slow      | Fast        | In between |
| Conflict probability | Low       | High        | In between |
| Concurrency          | High      | Low         | Moderate   |
| Overhead             | High      | Low         | In between |
| Deadlocks            | Yes       | No          | Yes        |

4. Compatibility classification of locks

Data read in MySQL is divided into current read and snapshot read:

  • Snapshot read

    A snapshot read reads snapshot (historical) data. A plain SELECT without a lock is a snapshot read.

    SELECT * FROM table WHERE ...
  • Current read

    A current read reads the latest data rather than historical data. Locking SELECTs, as well as INSERT, DELETE, and UPDATE statements, are current reads.

    SELECT * FROM table LOCK IN SHARE MODE;
    SELECT * FROM table FOR UPDATE;
    INSERT INTO table values ...
    DELETE FROM table WHERE ...
    UPDATE table SET ...

In most cases we operate on the database with current reads. In concurrent scenarios, we not only want read-read access to proceed unaffected, but also want write-write, read-write, and write-read operations to block each other; for this we need MySQL's shared locks and exclusive locks.

4.1 Shared locks and Exclusive Locks

Shared locks are also called read locks, or S locks for short. Multiple transactions can read the data concurrently, but while any S lock is held no transaction can modify the data.

Exclusive locks, also known as write locks, are X locks for short. If a transaction holds an exclusive lock on a row, only that transaction can read and write the row; no other transaction can acquire any lock on it until the transaction ends. Other transactions can still read the row through snapshot reads, but they cannot lock or modify it and must wait for the X lock to be released.

Let's analyze some lock acquisition scenarios, assuming two transactions A and B:

  1. Transaction A acquires the S lock on a record, and then transaction B also wants the S lock on that record. Transaction B can acquire it as well; that is, A and B hold the record's S lock at the same time.
  2. If transaction B instead wants the X lock on the record, the operation blocks until transaction A commits and its S lock is released.
  3. If transaction A acquires the X lock first, then whichever lock transaction B wants on the record, S or X, it will block until transaction A commits.

Therefore, we can say that S locks are compatible with S locks, S locks are incompatible with X locks, and X locks are incompatible with X locks.
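
A minimal two-session sketch of these compatibility rules (using the user table defined later in the deadlock section, where id is the primary key):

-- Session 1
BEGIN;
SELECT * FROM user WHERE id = 1 LOCK IN SHARE MODE;   -- takes an S lock on the row

-- Session 2
BEGIN;
SELECT * FROM user WHERE id = 1 LOCK IN SHARE MODE;   -- S + S: compatible, returns immediately
SELECT * FROM user WHERE id = 1 FOR UPDATE;           -- S + X: incompatible, blocks until session 1 commits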

4.2 Intention locks

Intention shared lock (IS lock): when a transaction is about to place an S lock on a record, it first needs to place an IS lock at the table level.

Intention exclusive lock (IX lock): when a transaction is about to place an X lock on a record, it first needs to place an IX lock at the table level.

Intention locks are table-level locks. They exist only so that, when a table-level S or X lock is requested later, InnoDB can quickly determine whether any records in the table are already locked, without traversing the table to check every record. Intention locks are compatible with each other: IS is compatible with both IS and IX, and IX is compatible with IX.

Why do we need intention locks?

InnoDB's intention locks mainly support the coexistence of locks at multiple granularities. For example, if transaction A wants to place an S lock on a whole table while transaction B has already placed an X lock on one of its rows, transaction A's request must block. If the table contains a lot of data, checking the lock flag of every row would be expensive and hurt system performance.

For example, suppose the table has 100 million records and transaction A locks some of them. Transaction B then wants to place a table-level lock on the table. Without intention locks, it would have to check all 100 million records to see whether any of them are locked. With intention locks, transaction A first places an intention lock on the table before taking the X lock on the record it updates; transaction B then only needs to check whether the table carries an intention lock and whether that intention lock conflicts with the lock it wants to add. If it conflicts, transaction B waits until transaction A releases its locks, without examining each record. Transaction B does not need to know which row is locked, only that some row is.

To put it plainly, the main purpose of intention locks is to resolve the conflict between row locks and table locks: they indicate that a transaction already holds, or is about to hold, a lock on some row in the table.

Table-level compatibility of the various locks:

|    | S            | IS           | X            | IX           |
| -- | ------------ | ------------ | ------------ | ------------ |
| S  | Compatible   | Compatible   | Incompatible | Incompatible |
| IS | Compatible   | Compatible   | Incompatible | Compatible   |
| X  | Incompatible | Incompatible | Incompatible | Incompatible |
| IX | Incompatible | Compatible   | Incompatible | Compatible   |
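
On MySQL 8.0 you can observe the table-level IX lock that accompanies a row-level X lock by querying performance_schema.data_locks; a sketch, with the typical output noted in comments:

-- Session 1: lock a single row; InnoDB first places an IX lock on the table
BEGIN;
SELECT * FROM user WHERE id = 1 FOR UPDATE;

-- Session 2: inspect the locks currently held
SELECT object_name, lock_type, lock_mode, lock_data
FROM performance_schema.data_locks;
-- Typically shows one TABLE lock in mode IX and one RECORD lock in mode X,REC_NOT_GAP for id = 1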

4.3 Locks for read operations

For MySQL read operations, there are two ways to lock.

1️⃣ SELECT * FROM table LOCK IN SHARE MODE

If the current transaction executes this statement, it places an S lock on the records it reads. Other transactions can still acquire S locks on those records (for example, by also reading them with SELECT ... LOCK IN SHARE MODE), but they cannot acquire X locks on them (for example, by reading them with SELECT ... FOR UPDATE or modifying them directly).

If another transaction tries to acquire an X lock on those records, it blocks until the current transaction commits and the S locks are released.

2️⃣ SELECT * FROM table FOR UPDATE

If the current transaction executes this statement, it places an X lock on the records it reads. Other transactions can then acquire neither S locks on those records (for example, by reading them with SELECT ... LOCK IN SHARE MODE) nor X locks on them (for example, by reading them with SELECT ... FOR UPDATE or modifying them directly).

If another transaction attempts to acquire an S or X lock on those records, it blocks until the current transaction commits and the X locks are released.

4.4 Locks for write operations

MySQL write operations include DELETE, UPDATE, and INSERT. Their locks are implicit: they are acquired automatically and released automatically when the transaction ends.

1️⃣ DELETE

Deleting a record means locating it in the B+ tree, acquiring the record's X lock, and then marking it as deleted. We can also think of locating the record to delete in the B+ tree as a locking read that acquires an X lock.

2️⃣ INSERT

InnoDB uses a method called implicit locking to protect the newly inserted record from being accessed by other transactions until the transaction is committed.

3️⃣ UPDATE

There are three cases when an UPDATE operation is performed on a record:

① If the key value of the record is not changed and the storage space occupied by the updated columns does not change, InnoDB locates the record in the B+ tree, acquires its X lock, and modifies it in place. Here too, locating the record to modify in the B+ tree can be seen as a locking read that acquires an X lock.

② If the key value of the record is not changed but the storage space occupied by at least one updated column does change, InnoDB locates the record in the B+ tree, acquires its X lock, completely deletes the old record (moving it to the garbage list), and then inserts a new record. The locate step is treated as a locking read that acquires an X lock, and the newly inserted record is protected by the implicit lock that the INSERT operation provides.

③ If the key value of the record is changed, the update is equivalent to a DELETE of the original record followed by an INSERT, and the locking follows the DELETE and INSERT rules described above.

PS: why can other transactions still read a record while a write lock is held on it?

Because InnoDB has MVCC (multi-version concurrency control), snapshot reads can be used without blocking.
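
A short sketch of this behavior, using the user table defined later in the deadlock section:

-- Session 1: the UPDATE takes an X lock on the row and holds it until commit
BEGIN;
UPDATE user SET age = '20' WHERE id = 1;

-- Session 2
SELECT * FROM user WHERE id = 1;              -- snapshot read: returns immediately with the old version (MVCC)
SELECT * FROM user WHERE id = 1 FOR UPDATE;   -- current read: blocks until session 1 commits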

5. Classification by locking algorithm

InnoDB uses row locks, also called record locks, but note that these locks are placed on index entries.

InnoDB's row locks are implemented in such a way that row-level locks are used only when data is retrieved through an index condition; otherwise, InnoDB uses table locks.

InnoDB uses row locks to lock data whether using a primary key index, a unique index, or a normal index.

Row locks are used only if the execution plan actually uses an index: even if an indexed column appears in the condition, MySQL decides whether to use the index by comparing the cost of different execution plans. If MySQL decides a full table scan is more efficient, for example on a small table, InnoDB uses a table lock instead of row locks.
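
A sketch of the difference, using the user table defined later in the deadlock section (id is the primary key, sex has no index):

-- No usable index on sex: the current read scans the whole table and ends up
-- locking every row, which behaves much like a table lock
BEGIN;
SELECT * FROM user WHERE sex = '1' FOR UPDATE;
ROLLBACK;

-- id is the primary key: only the matching index record is locked
BEGIN;
SELECT * FROM user WHERE id = 1 FOR UPDATE;
ROLLBACK;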

Also, when we retrieve data using a range condition rather than an equality condition and request a lock, InnoDB locks the index entries of existing data records that meet the condition.

But even row locks are divided into different types in InnoDB. In other words, even if a row lock is added to the same record, the effect is different if the type is different. There are several common types of row locks.

5.1 Record locks

A record lock is a lock on a single index record.

Even if a table has no indexes, InnoDB creates a hidden clustered index in the background, and record locks are placed on that hidden clustered index.

Record lock can be divided into S lock and X lock. After a transaction acquires the S-type record lock of a record, other transactions can also continue to acquire the S-type record lock of the record, but cannot continue to acquire the X-type record lock. Once a transaction acquires an X-lock for a record, other transactions cannot continue to acquire either the S-lock or the X-lock for that record.

5.2 Gap locks

Gap lock: locks the gaps before and after an index record, not the index record itself.

At the REPEATABLE READ isolation level, MySQL can solve the phantom read problem in two ways: the MVCC approach and the locking approach. The locking approach has a problem, though: the phantom records do not exist when the transaction performs its first read, so we cannot lock them directly. Instead, we can use a gap lock to lock the gap they would be inserted into.

If there is a table like this:

CREATE TABLE test (
  id INT(1) NOT NULL AUTO_INCREMENT,
  number INT(1) NOT NULL COMMENT 'number',
  PRIMARY KEY (id),
  KEY number (number) USING BTREE
) ENGINE = INNODB AUTO_INCREMENT = 1 DEFAULT CHARSET = utf8;

INSERT INTO test VALUES (1, 1);
INSERT INTO test VALUES (5, 3);
INSERT INTO test VALUES (7, 8);
INSERT INTO test VALUES (11, 12);

The table now contains the rows (1, 1), (5, 3), (7, 8), and (11, 12).

Start a transaction A:

BEGIN;

SELECT * FROM test WHERE number = 3 FOR UPDATE;

At this point, the gaps between (1, 1) and (5, 3), and between (5, 3) and (7, 8), are locked.

If we now open a transaction B and try to insert data:

BEGIN;

INSERT INTO test (id, number) VALUES (2, 2);

The INSERT blocks (and will eventually fail with a lock wait timeout if transaction A does not commit).

Why can't it insert? Because (2, 2) falls into the locked gap between (1, 1) and (5, 3). A row outside the locked ranges can of course still be inserted, for example:

INSERT INTO test (id, number) VALUES (8,8); 

5.3 Next-Key Locks

A next-key lock is the combination of a record lock on an index record and a gap lock on the gap before that record. It covers a left-open, right-closed interval: a gap lock only locks the gap between records and not the records themselves, whereas a next-key lock also locks the record at the right boundary of the gap.

By default, InnoDB runs at the REPEATABLE READ isolation level. In this case, InnoDB uses next-key locks for searches and index scans, which prevents phantom reads.
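
A sketch using the test table from the gap-lock example above; FORCE INDEX is only there to make sure the optimizer picks the number index on such a small table:

-- Session A: a range current read under REPEATABLE READ takes next-key locks
BEGIN;
SELECT * FROM test FORCE INDEX (number) WHERE number BETWEEN 3 AND 8 FOR UPDATE;

-- Session B
INSERT INTO test (id, number) VALUES (12, 15);   -- outside the locked range: succeeds
INSERT INTO test (id, number) VALUES (6, 5);     -- falls inside a locked gap: blocks until session A commits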

6. Optimistic and pessimistic locks

Optimistic locking and pessimistic locking are not specific locks but locking strategies. They are not limited to MySQL; middleware such as Redis can apply the same ideas.

6.1 optimistic locking

Optimistic locking means updating a record with an optimistic attitude, assuming that no one else will touch the data in the meantime.

A common way to implement optimistic locking

A common approach is to add a version field to the table to track the version number, incrementing it by 1 on every change.

Before each update, query the current version number of the row along with the data, perform the business logic, and then, when writing back, compare that version number with the one currently in the database. If they are the same, no other thread has modified the data in the meantime and the update can proceed.
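
A minimal sketch, assuming a hypothetical account table with an extra version column:

-- 1. Read the data together with its current version (suppose it returns version = 5)
SELECT id, balance, version FROM account WHERE id = 1;

-- 2. Write back only if the version has not changed in the meantime
UPDATE account
SET balance = balance + 100, version = version + 1
WHERE id = 1 AND version = 5;

-- 3. If the statement reports 0 affected rows, another thread updated the row first: re-read and retry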

6.2 pessimistic locks

Pessimistic locking takes a pessimistic attitude: it assumes from the start that the data being changed will be modified by someone else.

Pessimistic locks can be implemented in two ways

Shared locks (read locks) and exclusive locks (write locks), see above.

7. Deadlocks

A deadlock is a situation in which two or more processes block during execution because they compete for resources or wait on each other; without outside intervention none of them can make progress, and the system is said to be in a deadlock state.

Conditions for a deadlock

  • Mutual exclusion: a resource can only be used by one process at a time.
  • Hold and wait: a process blocked waiting for a resource keeps holding the resources it has already acquired.
  • No preemption: resources a process has acquired cannot be forcibly taken away before it is done with them.
  • Circular wait: several processes form a circular chain in which each one waits for a resource held by the next.

Take MySQL as an example. The table structure and data are as follows:

CREATE TABLE `user` (
  `id` bigint NOT NULL COMMENT 'primary key',
  `name` varchar(20) DEFAULT NULL COMMENT 'name',
  `sex` char(1) DEFAULT NULL COMMENT 'gender',
  `age` varchar(10) DEFAULT NULL COMMENT 'age',
  `url` varchar(40) DEFAULT NULL,
  PRIMARY KEY (`id`),
  KEY `suf_index_url` (`name`(3)) USING BTREE
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb3;

-- data
INSERT INTO `user` (`id`, `name`, `sex`, `age`, `url`) VALUES ('1', 'a', '1', '18', 'https://javatv.net');
INSERT INTO `user` (`id`, `name`, `sex`, `age`, `url`) VALUES ('2', 'b', '1', '18', 'https://javatv.net');

Execute in the following order:

| Step | Transaction A                                           | Transaction B                                             |
| ---- | ------------------------------------------------------- | --------------------------------------------------------- |
| 1    | BEGIN;                                                   |                                                            |
| 2    |                                                          | BEGIN;                                                     |
| 3    | SELECT * FROM user WHERE name='a' FOR UPDATE;            |                                                            |
| 4    |                                                          | SELECT * FROM user WHERE name='b' FOR UPDATE;              |
| 5    | SELECT * FROM user WHERE name='b' FOR UPDATE; (blocks)   |                                                            |
| 6    |                                                          | SELECT * FROM user WHERE name='a' FOR UPDATE; (deadlock)   |

1. Start transactions A and B.

2. Transaction A locks the row where name='a', and transaction B locks the row where name='b'.

3. Transaction A then tries to lock the row where name='b'; since B has not released that lock, A blocks.

4. Transaction B then tries to lock the row where name='a' without releasing its own lock, and a deadlock occurs.

At this point MySQL detects the deadlock and rolls back transaction B. Switching back to transaction A, we find that its previously blocked SQL statement has now completed. Deadlock information can be inspected with SHOW ENGINE INNODB STATUS\G.

How to avoid

As the example shows, the key to a deadlock is that two (or more) sessions lock data in different orders. So when performing SQL operations we should access objects in the same order, and lock as many of the rows we need as possible in one go.
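
For the example above, the fix is simply to have both transactions take their locks in the same order, so that a circular wait can never form; a sketch:

-- Both transaction A and transaction B lock name = 'a' first, then name = 'b'
BEGIN;
SELECT * FROM user WHERE name = 'a' FOR UPDATE;
SELECT * FROM user WHERE name = 'b' FOR UPDATE;
-- ... business logic ...
COMMIT;
-- At worst one session waits for the other to commit; neither can end up waiting in a cycle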