1. Slow queries caused by table locks

2. What are the risks of modifying table structures online?

3. Three types of data read problems (dirty read, non-repeatable read, phantom read), two types of data update problems (the first type of lost update, the second type of lost update)

4. Analysis of lock wait problems


1. Slow queries caused by table locks

A table lock is acquired before each operation (this is how the MyISAM storage engine works).
A write lock blocks all other operations (both reads and writes), which makes every operation on the table serial.
Read-read operations can run in parallel, but read-write operations are still serial.
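For illustration, here is how that blocking shows up in practice. This is a minimal sketch that assumes a hypothetical MyISAM table t_log; the SHOW commands are standard MySQL diagnostics.

-- Session 1: a long-running write holds the table write lock (hypothetical table t_log)
UPDATE t_log SET payload = REPEAT('x', 1000);

-- Session 2: even a plain read is blocked until session 1 finishes
SELECT COUNT(*) FROM t_log;

-- Diagnostics: this counter grows every time a statement had to wait for a table lock
SHOW STATUS LIKE 'Table_locks_waited';
-- Blocked sessions appear with the state "Waiting for table level lock"
SHOW PROCESSLIST;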

Solutions:

1) Try not to use the MyISAM storage engine. Since MySQL 8.0 the system tables no longer use MyISAM (they are stored in InnoDB); the InnoDB storage engine is recommended (see the example after this list).

2) If you must use the MyISAM storage engine, keep write operations as short as possible;
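A quick way to act on recommendation 1) is to check each table's current engine and convert it; a minimal sketch, assuming a hypothetical table t_log:

-- Check which engine each table in the current schema uses
SELECT table_name, engine FROM information_schema.tables WHERE table_schema = DATABASE();

-- Convert the MyISAM table to InnoDB (this rebuilds the table, so run it in a low-traffic window)
ALTER TABLE t_log ENGINE = InnoDB;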

2. What are the risks of modifying the table structure online

If one day the business requires increasing the length of a field (an ALTER TABLE ... MODIFY ... that widens the column), can the change be made directly online?

When DML statements (SELECT, UPDATE, DELETE, INSERT) are executing, a metadata lock (MDL) is placed on the table to guarantee that the table structure is not modified during the query, so an ALTER statement issued at that moment is blocked.

What if the order of execution is reversed, with the ALTER statement executed first and the DML statements executed later? Will the DML statements be blocked? For example, if I am modifying the table structure in the production environment, will the DML statements running against it be blocked? The answer is: it depends.
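A minimal sketch of the metadata-lock interaction described above, assuming a hypothetical table t_user; the performance_schema query requires MySQL 5.7+ with the MDL instrument enabled.

-- Session 1: an open transaction holds a metadata (MDL) read lock on the table
BEGIN;
SELECT * FROM t_user WHERE id = 1;

-- Session 2: the DDL needs an MDL write lock, so it waits behind session 1;
-- worse, new DML on t_user then queues behind the waiting ALTER
ALTER TABLE t_user ADD COLUMN nickname VARCHAR(64);

-- Session 3: diagnostics; blocked sessions show "Waiting for table metadata lock"
SHOW PROCESSLIST;
SELECT * FROM performance_schema.metadata_locks;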

Conclusion: if we need to modify a table structure online while the business is running, we can refer to the following approaches:

1. Try to do it during periods of low business volume;

2. Check the official documentation to confirm that the specific table modification supports Online DDL, i.e. that it can run concurrently with DML without blocking online services (see the sketch after this list);
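For item 2, one defensive option is to state the expected algorithm and lock mode explicitly, so the statement fails fast rather than silently blocking; a minimal sketch, assuming a hypothetical table t_user and column nickname:

-- Refuse to run unless the change can be done in place without blocking concurrent DML
ALTER TABLE t_user MODIFY nickname VARCHAR(128), ALGORITHM = INPLACE, LOCK = NONE;
-- If InnoDB cannot satisfy ALGORITHM/LOCK, MySQL returns an error instead of falling back to a blocking table copy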

The pt-online-schema-change tool from Percona is more powerful than the official Online DDL. The basic principle is: make a full copy of the table with INSERT ... SELECT ... statements, and use triggers to record the incremental changes generated while the table structure is being changed.

For example, to change table A, the main steps are as follows:

  • Create an empty table A_new with the destination table structure;
  • Create triggers on table A for insert, delete, and update;
  • Copy the data to the destination table in chunks with INSERT ... SELECT ... LIMIT N statements;
  • After the copy is complete, rename table A_new to table A (a simplified SQL sketch follows).
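The following is a highly simplified SQL sketch of those steps, not the tool's actual implementation; table A, its columns (id, name), and the specific change are hypothetical.

-- 1. Create an empty table with the destination structure
CREATE TABLE A_new LIKE A;
ALTER TABLE A_new MODIFY name VARCHAR(128);

-- 2. Triggers keep A_new in sync with changes made to A during the copy
CREATE TRIGGER A_ins AFTER INSERT ON A FOR EACH ROW
  REPLACE INTO A_new (id, name) VALUES (NEW.id, NEW.name);
CREATE TRIGGER A_upd AFTER UPDATE ON A FOR EACH ROW
  REPLACE INTO A_new (id, name) VALUES (NEW.id, NEW.name);
CREATE TRIGGER A_del AFTER DELETE ON A FOR EACH ROW
  DELETE FROM A_new WHERE id = OLD.id;

-- 3. Copy existing rows in chunks; repeat, advancing the id boundary, until all rows are copied
INSERT IGNORE INTO A_new (id, name)
  SELECT id, name FROM A WHERE id > 0 ORDER BY id LIMIT 1000;

-- 4. Atomically swap the tables once the copy is complete
RENAME TABLE A TO A_old, A_new TO A;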

3. Three types of data read problems (dirty read, non-repeatable read, phantom read), two types of data update problems (the first type of lost update, the second type of lost update)

Dirty read

Transaction A reads data that transaction B has changed but not yet committed, and acts on that data. If transaction B then rolls back, the data transaction A read was never valid; this is called a dirty read.

In the example, the stock should theoretically end up as 9, but because of the dirty read (the value of 9 read at T4 is dirty data), it ends up as 8.
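A minimal two-session sketch of a dirty read; t_goods is borrowed from the pessimistic-locking example later in this article, and the stock column is hypothetical.

-- Session A: allow reading uncommitted data
SET SESSION TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
BEGIN;

-- Session B: decrement the stock but do not commit yet
BEGIN;
UPDATE t_goods SET stock = stock - 1 WHERE id = 1;

-- Session A: reads the uncommitted value (the dirty read) and acts on it
SELECT stock FROM t_goods WHERE id = 1;

-- Session B: rolls back, so the value session A read never actually existed
ROLLBACK;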

Non-repeatable read

Transaction A reads data that transaction B has changed (or deleted) and committed. For example, transaction A reads the data for the first time, transaction B then changes the data and commits, and when transaction A reads the data again, the two reads differ.

The data transaction A reads at T2 and at T4 differs; this is a non-repeatable read.
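A minimal sketch of a non-repeatable read under READ COMMITTED, using the same hypothetical t_goods table and stock column.

-- Session A
SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;
BEGIN;
SELECT stock FROM t_goods WHERE id = 1;   -- first read (T2), e.g. returns 10

-- Session B changes the row and commits
UPDATE t_goods SET stock = 9 WHERE id = 1;
COMMIT;

-- Session A
SELECT stock FROM t_goods WHERE id = 1;   -- second read (T4) returns 9: not repeatable
COMMIT;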

Phantom Read

Transaction A reads new rows that transaction B has inserted and committed. Note the difference from a non-repeatable read: a phantom read involves newly inserted rows, whereas a non-repeatable read involves changed (or deleted) rows.

  • The prevention strategies for the two cases differ. For non-repeatable reads, row-level locking is enough to prevent the existing records from being changed or deleted.
  • For phantom reads, however, a table-level lock is required to prevent new rows from being inserted into the table.

Of course, it is also possible to set the transaction isolation level to SERIALIZABLE, but this is generally not done, because it fully serializes access to the database and has the worst concurrency performance.

The unexpected row that appears at T5 is the phantom read. If transaction A only performs reads and no writes, setting the isolation level to REPEATABLE READ and starting the transaction with START TRANSACTION WITH CONSISTENT SNAPSHOT (so that all reads are snapshot reads) also prevents phantom reads (under plain REPEATABLE READ the snapshot is otherwise created not when the transaction starts, but at the time of the first query).
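A minimal sketch of that read-only approach, again using the hypothetical t_goods table.

SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ;
-- The snapshot is taken right here, not at the first query
START TRANSACTION WITH CONSISTENT SNAPSHOT;

SELECT COUNT(*) FROM t_goods;   -- snapshot read

-- Rows inserted and committed by other transactions in the meantime are not visible,
-- so the second read returns the same count: no phantom rows for a read-only transaction
SELECT COUNT(*) FROM t_goods;
COMMIT;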

Lost update

  • The first type of lost update: when transaction A rolls back, it overwrites data that transaction B has already committed.

  • The second type of lost update: when transaction A commits, it overwrites data that transaction B has already committed.

Lost updates cannot be solved by transactions alone; we also need the database's pessimistic locking to prevent this kind of problem, as in the example below.

-- 0. Start the transaction
begin;  -- or: begin work; / start transaction; (any one of the three)

-- 1. Query the product information and lock the row
select status from t_goods where id = 1 for update;  -- acquires an exclusive lock on the row

-- 2. Generate an order based on the product information
insert into t_orders (id, goods_id) values (null, 1);

-- 3. Change the status of the product to 2
update t_goods set status = 2 where id = 1;

-- 4. Commit the transaction
commit;  -- or: commit work;