
MySQL series: Locks

Preface

Recently our database has frequently run into the following two errors. Starting from these two situations, let's build a detailed understanding of MySQL locks.

Lock wait timeout exceeded; try restarting transaction

Deadlock found when trying to get lock; try restarting transaction


Global lock

A global lock locks the entire database instance.

After Flush tables with read lock is executed, the entire database becomes read-only: all write statements are blocked, including inserts, deletes, updates, and DDL such as creating or altering table structures.

Table locks

lock tables kaka read, kaka2 write; prevents other threads from writing to kaka and from reading or writing kaka2 until unlock tables is executed.

The thread that executed the command can itself only read kaka (not write it), and can read and write kaka2.
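As a sketch, the session described above looks like this (the table names kaka and kaka2 are just the examples from the text):

```sql
-- Session 1: take a read lock on kaka and a write lock on kaka2
lock tables kaka read, kaka2 write;

select * from kaka;            -- OK: the read lock allows reading
insert into kaka values (1);   -- fails in this session: kaka is read-locked
insert into kaka2 values (1);  -- OK: this session holds the write lock

-- Other sessions meanwhile: reads on kaka succeed, but writes to kaka
-- and any access to kaka2 block until session 1 runs:
unlock tables;
```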

Row locks

Row locks are implemented by each storage engine at the engine level. In MySQL, the InnoDB storage engine supports row locks. An engine that does not support row locks can only use table locks for concurrency control, so only one update can run against a given table at any time, which hurts business concurrency. (For reasons of space, row locks are covered in detail in the next installment.)

Global locking

The following demonstrates the state of the database after executing the Flush tables with read lock command.

Terminal 1 executes the global lock command

Terminal 2 performs a delete. It does not succeed immediately; it only returns after terminal 1 releases the lock.

The delete took about 3 minutes to execute, which is exactly how long the global lock was held in terminal 1.

You have now seen that the global lock from the beginning makes the entire library read-only. This only demonstrates the delete operation; try a few other operations yourself.
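The two-terminal demonstration above can be sketched as follows (the table name kaka is illustrative):

```sql
-- Terminal 1: make the whole instance read-only
flush tables with read lock;

-- Terminal 2: any write statement now hangs
delete from kaka where id = 1;  -- blocks...

-- Terminal 1: release the global lock; terminal 2's delete then completes
unlock tables;
```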

As Jiang's article points out, the most typical scenario for the global lock is logical backup: selecting every table in the whole library and saving it as text.

Now, think of the context in which this scenario arises.

If there is only one primary library and you take a global lock, the entire library is read-only, the business is basically shut down, and the product cannot be used.

At this point you may ask: why not back up from a replica? Because during the backup, the replica cannot apply the binlogs synchronized from the primary; if the data volume is large, the primary/replica lag grows too large, and a full backup has to be taken again.

These are the downsides of the global lock, but let's look at what happens if you back up without one.

Most of you have probably worked on a payment project, so the following payment case should make the problem of backing up without a global lock very clear.

Suppose a logical backup is running, and a user purchases your service during the backup window. The business logic deducts the user's balance and then grants the product to the user.

Obviously, this logic is fine, but performing a backup in a special case can cause problems.

Suppose the backup proceeds in this order: the user balance table is backed up first, then the user completes the purchase, and then the backup copies the table of products purchased by the user.

A very clear problem appears: the backup shows the user's balance not yet deducted, yet the user has already been given the corresponding product.

From the user's point of view that's a windfall, but reversing the backup order produces the opposite result.

If the user product table is backed up first and the user balance table afterward, the backup shows the user's money spent but no product received. Users are our bread and butter; we can hardly fleece them like that.

In other words, if the backup is not locked, different tables are backed up at different moments. If one table is updated during the backup and the new state is captured, while an associated table was already backed up and cannot reflect the update, the backup will be inconsistent.

A very important concept from the MVCC article is the read view: the set of all transactions that were active (uncommitted) at the moment the snapshot was taken, provided the isolation level is REPEATABLE READ.

You probably know what comes next, and that's right: mysqldump.

The backup principle of mysqldump is to connect to the MySQL database through the protocol, query the data to be backed up, and convert the queried data into the corresponding INSERT statements. When we need to restore the data, we can restore the corresponding data as long as we execute these INSERT statements.

For example, the command to back up the test library is: mysqldump -uroot -p test > /backup/mysqldump/test.db

When mysqldump is run with --single-transaction, it starts a transaction and obtains the read view before backing up the data, so updates are supported throughout the backup process.
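Roughly speaking, --single-transaction makes mysqldump open a consistent snapshot before dumping, something like:

```sql
-- What mysqldump --single-transaction does, approximately:
set session transaction isolation level repeatable read;
start transaction with consistent snapshot;  -- pin the read view immediately
-- ... select * from each table, emitted as INSERT statements ...
commit;
```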

Why use Flush tables with read lock at all when the mysqldump tool is available?

Remember: you can keep updating during the backup only if you have a consistent view, and you get a consistent view only by starting a transaction. The catch is that not all storage engines support transactions.

For tables in engines that do not support transactions, Flush tables with read lock is the only option. This is one more reason to use InnoDB when creating tables.

It's been a while — do you still remember the requirement? The whole library must be read-only.

If you’ve built a master-slave architecture for MySQL, you know that the master library is used for writing data, the slave library is used for reading data, and the slave library does not support writing. This effect is achieved by the readonly parameter.

set global read_only = true; can also make the whole library read-only, but there are reasons it was not mentioned first.

First, in the master-slave architecture mentioned above, the readonly flag is already used to distinguish replicas from the primary, so it carries that meaning.

Second, exception handling differs. If the client that ran flush tables with read lock disconnects abnormally, MySQL automatically releases the global lock. If the library was instead set to readonly, it stays read-only after an exception, which can leave the whole library unwritable for a long time.
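For completeness, the read-only switch looks like this; note that it persists until changed back, even if the client that set it disconnects abnormally:

```sql
-- Make the instance read-only for ordinary users
set global read_only = on;

-- ...later, writes must be restored explicitly; an abnormal client
-- disconnect will NOT reset this, unlike flush tables with read lock
set global read_only = off;
```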

Therefore, once the global lock is taken on the database, inserts, deletes, updates, and DDL such as altering table structures or fields are all blocked.

Table locks

A table lock is released with unlock tables, the same command that releases the global lock, and it is also released automatically when the client disconnects.

In the old days, table locks were the common way to handle concurrency. Locking a table has less impact than locking the whole library, but now that lock granularity has reached the row level (provided you use the InnoDB storage engine), there is usually no need to use table locks for concurrency control.

Another cousin of the table lock, the metadata lock (MDL), is probably less familiar, because there is no explicit syntax to acquire or release it during development.

MDL was introduced in MySQL 5.5 to solve the problem of thread A querying a table while thread B changes that table's structure, which could make the query result inconsistent with the table structure.

When you access a table with a query or DML, an MDL read lock is taken by default; when you change the table structure, an MDL write lock is taken. Always remember: read locks are not mutually exclusive with read locks, read locks are mutually exclusive with write locks, and write locks are mutually exclusive with write locks.

So MDL does not need to be explicitly acquired — but when is it released?

The answer: the MDL is released after the transaction commits, which means it is held for the duration of the transaction.

Now consider a scenario.

First, thread A starts a transaction and executes a query, placing an MDL read lock on the table.

Thread B then performs a query and does not block because reads do not conflict with reads.

Then thread C changes the table structure. Thread A has not committed its transaction, so its MDL read lock has not been released; thread C blocks because it cannot acquire the MDL write lock.

What happens when thread D finally executes the query?

The answer: it blocks too.

Thread C has not yet acquired the MDL write lock, and thread D cannot obtain the MDL read lock either.

This is because MDL lock requests form a queue, and in the queue write-lock requests have higher priority than read-lock requests. Once an MDL write lock is waiting, all subsequent CRUD operations on the table are blocked.
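The four-thread scene can be replayed in four sessions (the table name kaka and the added column are illustrative):

```sql
-- Session A: start a transaction; the query takes an MDL read lock
begin;
select * from kaka limit 1;

-- Session B: another query; read locks share, so it returns at once
select * from kaka limit 1;

-- Session C: DDL needs the MDL write lock; it blocks behind A
alter table kaka add column nickname varchar(20);

-- Session D: a plain query, but it queues BEHIND C's write request,
-- so it blocks too
select * from kaka limit 1;

-- Session A: committing releases the MDL read lock; C runs, then D
commit;
```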

If you run a DDL against a table that has an uncommitted long transaction, the table effectively hangs: the client retries on timeout, every blocked CRUD retries as well, and the connection threads fill up quickly.

So how can you safely perform DDL operations on a table?

First, long transactions must be dealt with: while a transaction is uncommitted, its MDL lock cannot be released.

Then query the innodb_trx table in the information_schema library of the MySQL system tables to see the IDs of currently executing transactions. This table was also mentioned in the transactions article.

Next, kill these long transactions and then execute the DDL.
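A sketch of that lookup (the 60-second threshold is an arbitrary example):

```sql
-- List transactions that have been running for more than 60 seconds
select trx_id, trx_started, trx_mysql_thread_id
from information_schema.innodb_trx
where trx_started < now() - interval 60 second;

-- Kill the offending connection by its thread id, then run the DDL
-- kill <trx_mysql_thread_id>;
```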

But imagine: just as you kill one long transaction, a new one comes in while you run the DDL. Clearly this approach alone is not reliable.

Of course, the official developers would not let this problem go unsolved!

Some MySQL branches (MariaDB, for example) support setting a wait time on DDL: alter table kaka wait 30 add column name varchar(20); If the MDL write lock cannot be acquired within 30 seconds, the statement gives up, and you simply try executing the command again later.

Conclusion

Keep learning, keep writing, keep sharing — this is the belief Kaka has held since starting his career. May these articles on the wide Internet be of some small help to you. I'm Kaka — see you next time.